ARKit 3: The Updates and New Features
Apple recently unveiled ARKit 3, which is jam-packed with new updates and features that developers are excited about. Some of the features everybody is talking about are real game changers that will let developers create experiences that were not possible before. Apple prominently showcased all of them at this year's WWDC conference. Let's take a look at some of these features in greater detail.
What is ARKit?
For those of you who are not familiar with ARKit, here is a quick refresher. ARKit is a framework that Apple introduced back in 2017 which lets devices use their front and back cameras to understand the environment around them, even while on the move. Once the device has recognized the environment, the user can place a virtual object into it. Even though the objects you place are virtual, they feel real, because you can move closer to them or walk around them just like regular physical objects.
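The flow described above can be sketched in a few lines of Swift. This is a minimal, illustrative setup rather than a complete app: the ARSCNView outlet, the view controller, and the anchor name are all assumptions.

```swift
import UIKit
import ARKit

// A minimal sketch of world tracking: run a session that maps the
// surroundings, then anchor a virtual object where the user taps.
class ARViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!  // assumed to be wired up in a storyboard

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap(_:))))
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]  // find flat surfaces to place objects on
        sceneView.session.run(configuration)
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        // Hit-test against a detected plane and pin an anchor there;
        // the renderer can then attach 3D content to that anchor.
        if let result = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first {
            sceneView.session.add(anchor: ARAnchor(name: "virtualObject",
                                                   transform: result.worldTransform))
        }
    }
}
```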
However, since ARKit is very compute-intensive, you will need a powerful iOS device to enjoy it. In fact, most of the updates and features that we will talk about require a device with the A12 Bionic chip and its Apple Neural Engine (ANE).
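Because these features are gated on hardware, it is worth checking for support at runtime instead of assuming it. A small sketch using ARKit's own capability checks:

```swift
import ARKit

// Query ARKit's capability flags before enabling A12-only features,
// so the app can fall back gracefully on older devices.
func logSupportedFeatures() {
    if ARBodyTrackingConfiguration.isSupported {
        print("Motion capture is available on this device.")
    }
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        print("People occlusion (with depth) is available on this device.")
    }
}
```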
Motion Capture

Motion capture lets developers take movements and poses in real time and transfer them into augmented reality experiences. Thanks to this new feature, you can capture someone's motion with a single camera in real time and use it to animate a virtual character or object. The framework gives you a much deeper understanding of the subject's movements, down to small details such as individual bones and joint positions, which leads to a much more realistic AR experience.
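In code, motion capture is exposed through ARBodyTrackingConfiguration and ARBodyAnchor. The sketch below reads only a single joint and leaves the character-animation step as a comment; the class name is an assumption.

```swift
import ARKit

// Sketch: run a body-tracking session and read joint transforms
// from the detected skeleton as each frame updates.
class BodyTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARBodyTrackingConfiguration.isSupported else { return }  // A12 Bionic or later
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // Each joint exposes a transform relative to the body anchor.
            if let head = bodyAnchor.skeleton.modelTransform(for: .head) {
                // Drive a rigged character's head from `head` here.
                _ = head
            }
        }
    }
}
```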
People Occlusion

This new feature gives apps human stencil and depth segmentation images. In layman's terms, it allows the app to determine whether each pixel contains part of a person. Thanks to this, 3D content can be rendered so that people occlude it more realistically. The stencil image alone can also be used to produce visual effects such as outlining or tinting the humans in the frame. Since this feature is computationally demanding, it too requires a device with the A12 Bionic chip and ANE.
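Enabling people occlusion comes down to a one-line change to the frame semantics of a world-tracking configuration. A minimal sketch:

```swift
import ARKit

let session = ARSession()
let configuration = ARWorldTrackingConfiguration()

// .personSegmentationWithDepth uses the depth estimate so people can
// pass both in front of and behind virtual content; .personSegmentation
// alone always composites people in front of it.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
session.run(configuration)
```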
Face Tracking Improvements
Thanks to the release of ARKit 3, the iPhone XS and XR as well as the newest generation of iPad Pro get expanded face-tracking support. First of all, the front camera can now recognize as many as three unique faces in a given session, and you can pick how many faces you would like to track simultaneously. Perhaps the most significant development in this category, however, is the ability of the TrueDepth camera to track people's faces during a session designed for world tracking. Users can capture their facial expressions with the front camera and transfer them to a character rendered via the back-facing camera. Just like people occlusion, this feature is only available on devices with the A12 Bionic chip and ANE.
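Both improvements map to small configuration changes. A sketch, assuming the configurations are run on a session elsewhere:

```swift
import ARKit

// Track as many faces as the device supports (up to three in ARKit 3).
let faceConfiguration = ARFaceTrackingConfiguration()
faceConfiguration.maximumNumberOfTrackedFaces =
    ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces

// Or feed TrueDepth face data into a world-tracking (back camera) session,
// so a front-camera expression can drive a character in the rear view.
let worldConfiguration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsUserFaceTracking {
    worldConfiguration.userFaceTrackingEnabled = true
}
```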
Collaborative Sessions

In the previous version of ARKit, we witnessed the introduction of the AR World Map, which allowed users to capture a snapshot of the environment and share it. With the introduction of collaborative sessions, ARKit 3 takes this one step further by letting apps communicate with each other about the environment they are in and share it in real time.
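Collaboration is opted into on the configuration, and ARKit then hands the app data blobs to relay to peers over a transport of your choosing (MultipeerConnectivity is a common one). A sketch; the transmission step is a placeholder assumption:

```swift
import ARKit

// Sketch: enable a collaborative session and relay ARKit's
// collaboration data between devices over your own transport.
class CollaborationHandler: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        session.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        configuration.isCollaborationEnabled = true
        session.run(configuration)
    }

    // ARKit periodically emits data describing anchors and the shared map.
    func session(_ session: ARSession,
                 didOutputCollaborationData data: ARSession.CollaborationData) {
        // Placeholder: transmit `data` to peers (e.g. via MultipeerConnectivity).
    }

    // When a peer's data arrives, feed it back into the local session.
    func receivedFromPeer(_ data: ARSession.CollaborationData) {
        session.update(with: data)
    }
}
```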
Other Updates

Some other updates have been flying under the radar, so we will bring them to you. Object detection and image tracking are now more accurate than ever, and devices can detect as many as 100 images simultaneously. Object detection is also more robust, since it can now recognize objects in challenging environments.
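The image-tracking limits are configured directly on the tracking configuration. A sketch, assuming a reference-image group named "AR Resources" exists in the app's asset catalog:

```swift
import ARKit

let session = ARSession()

// Load reference images (the "AR Resources" group name is an assumption).
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                          bundle: nil) {
    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = referenceImages
    // ARKit 3 can detect up to 100 images; choose how many to track at once.
    configuration.maximumNumberOfTrackedImages = 4
    session.run(configuration)
}
```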
Thanks to all of these new upgrades, AR developers have the tools they need to create even more realistic and immersive AR experiences. Perhaps these developments in motion tracking and facial recognition hint at what future ARKit releases will bring. Imagine you are playing a game of catch. Motion tracking would determine that you are trying to throw the ball, and facial recognition would determine who you are playing with. If you accidentally hit someone with the ball, the game would know that you made a mistake. If someone besides the person you are playing with made a motion to catch the ball, the game would know to ignore them. While ARKit does not have such capabilities yet, this is something to look forward to in the future.