ARKit 3 – another milestone for augmented reality (AR)
Apple’s augmented reality platform “ARKit” is already well established. It gives app developers a solid foundation for efficiently creating apps with augmented reality content: AR objects can be placed very precisely in the real environment and experienced through an iPhone or iPad. Now Apple is going a step further. With the release of the new iOS 13 operating system in fall 2019, a new version of ARKit will also be released, offering many new features. The new ARKit 3 toolkit enables app developers to react to human bodies (people occlusion) and their movements (motion capture). In this blog post, you will learn more about the new features of ARKit 3, but first we briefly explain what exactly AR is.

AR - what exactly is it?
Augmented reality (AR) is the computer-supported extension of our perception of reality. Virtual elements (AR elements) are placed in your real environment and can then be viewed with the help of a smartphone, tablet, or AR glasses. For example, as in the picture above, you can place the car of your choice as a life-size virtual object on the street in front of your home and check whether you like it. The following links tell you more about the history of AR and about specific use cases in this area.
AR platforms
Augmented reality apps can be built on AR platforms such as Apple’s ARKit or Google’s ARCore. Developers generate virtual objects and view them on the display of a smartphone or tablet. With the release of iOS 13 in the fall, Apple’s existing ARKit will be expanded: the new ARKit 3 toolkit offers several additional options for displaying AR content, which we will now look at in detail.
Features of the ARKit 3
The new ARKit 3 sets another milestone in augmented reality technology. With ARKit 3, you can capture human movements and project them onto AR elements, place AR elements behind real people, and benefit from improved face tracking, among other features. The following YouTube video shows some of the new features ARKit 3 offers.
Motion Capture
This feature lets you transfer human movements to AR elements, so AR characters can move in a very realistic way. Using the camera, the movements of your own body are projected onto a virtual character in real time.
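A minimal sketch of what this looks like in code: running an ARKit body-tracking session and reading the tracked skeleton from the resulting body anchors. The controller class and its names are illustrative, and error handling is omitted.

```swift
import ARKit
import RealityKit

// Illustrative controller that runs a body-tracking session.
final class MotionCaptureController: NSObject, ARSessionDelegate {
    let arView = ARView(frame: .zero)

    func start() {
        // Motion capture requires a device with an A12 chip or newer.
        guard ARBodyTrackingConfiguration.isSupported else { return }
        arView.session.delegate = self
        arView.session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // The skeleton exposes the tracked joint transforms,
            // which can drive a rigged virtual character.
            let skeleton = bodyAnchor.skeleton
            _ = skeleton.jointModelTransforms
        }
    }
}
```

In a real app, the joint transforms would be mapped onto the rig of a virtual character each frame.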
Face Tracking
Previously, ARKit could only recognize one face at a time. With the new ARKit face tracking, up to three faces can be tracked simultaneously. To keep face recognition consistent, each face is assigned an ID, so a tracked person can leave the camera image and be recognized again when they re-enter it later.
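Configuring this takes only a few lines. The sketch below, a simplified fragment rather than a complete app, requests the maximum number of simultaneously tracked faces on a TrueDepth device.

```swift
import ARKit

// Sketch: tracking several faces at once (TrueDepth camera required).
let configuration = ARFaceTrackingConfiguration()

// ARKit 3 reports how many faces the device can track simultaneously
// (up to three); request that maximum.
configuration.maximumNumberOfTrackedFaces =
    ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces

// Each ARFaceAnchor delivered to the session delegate carries a stable
// `identifier`, which is how a face is matched again after leaving
// and re-entering the camera image.
// session.run(configuration)   // run on an ARSession tied to a view
```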
People Occlusion
AR experiences become more realistic and immersive with this feature: AR elements can now be placed both in front of and behind people in the real world. An AR element can even circle around a person, and while it passes behind them, the view of it is obscured by the person standing in front of it.
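Under the hood, people occlusion is switched on via the frame semantics of a world-tracking configuration. A minimal sketch, again as a configuration fragment rather than a full app:

```swift
import ARKit

// Sketch: enabling people occlusion in a world-tracking session.
let configuration = ARWorldTrackingConfiguration()

// Person segmentation with depth lets real people occlude virtual
// content; it is only supported on devices with an A12 chip or newer.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
// session.run(configuration)   // run on an ARSession tied to a view
```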
Collaborative sessions
In the previous ARKit 2, a shared world map was introduced so that users could share snapshots of their surroundings with other users. With the new ARKit 3, multiple devices can extend this world map collaboratively and continuously, allowing users to build and experience shared AR sessions in real time.
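In code, a collaborative session is enabled on the configuration, and ARKit then periodically emits collaboration data that the app must relay to the other participants. The handler class below is illustrative; the actual transport (for example via MultipeerConnectivity) is up to the app.

```swift
import ARKit

// Sketch: enabling a collaborative session.
let configuration = ARWorldTrackingConfiguration()
configuration.isCollaborationEnabled = true

final class CollaborationHandler: NSObject, ARSessionDelegate {
    // ARKit periodically outputs collaboration data that must be
    // sent to the other participants' sessions.
    func session(_ session: ARSession,
                 didOutputCollaborationData data: ARSession.CollaborationData) {
        // Send `data` to peers; on receipt, each peer integrates it
        // with session.update(with: data).
    }
}
```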
With these new features of ARKit 3, Apple has set another milestone for augmented reality technology. We are excited to see how AR will develop once ARKit 3 is available. The toolkit will ship with the release of iOS 13 and iPadOS in fall 2019. However, to try the new features of ARKit 3 you will need a device with an A12 chip or newer, and face tracking additionally requires a TrueDepth camera. These requirements are currently met by the iPhone XS and the new iPad Pros. Would you like to find out more about the features of the new iPad operating system? The blog article “Apple announces the iPadOS” will tell you more.