ARKit 2: Bringing richer experiences through collaboration, enhanced detection, and greater realism

Michael Williams

4 min read

Jun 5, 2018

iOS

At WWDC 2018, Apple continued to push the envelope with ARKit by releasing ARKit 2. In the fall of 2017, Apple released ARKit 1, which let users place 3D objects in the real world; companies like IKEA showcased the technology by letting customers try out furniture in their own homes. A few months later, Apple released ARKit 1.5, which added the ability to detect not only horizontal planes like floors but also vertical planes like walls, along with image recognition, which let apps detect known images and paintings just by pointing the camera at them. At WWDC 2018, Apple introduced enhancements to face tracking, as well as the following new features: saving and loading maps, environmental texturing, image tracking, and object detection.

Saving and Loading Maps

The first, and probably the most exciting, feature is saving and loading maps. Apple added a new object to the ARKit API called ARWorldMap. This object, which includes a mutable array of ARAnchors along with the raw feature points and extent of the mapped space, lets developers save and share the mapping of a 3D space. That unlocks two key capabilities: persistence and shared experiences. Persistence lets users pause an AR experience and return to it later without losing anything they had built up. This contrasts with previous versions of ARKit, which forced users to restart their AR experience when the app entered the background. And by sharing a map locally, over AirDrop or Bluetooth, users can share an AR experience with one another and participate in it at the same time. Two or more people can now play the same AR game or watch the same AR educational experience simultaneously, seeing the exact same scene. This isn't like two people watching the same movie at the same time; it's more like two people making a movie together, or playing the same game of Monopoly, each seeing the other's moves and changes in real time.

Saving and loading the world map is simple. Once you have obtained an ARWorldMap from the session's getCurrentWorldMap(completionHandler:) method and saved it, you can assign it to a new configuration's initialWorldMap property, which loads all of the data and feature points recorded previously.
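
A minimal sketch of both halves might look like this (the helper names and file URL are mine to choose; getCurrentWorldMap(completionHandler:) and initialWorldMap are the actual ARKit API):

import ARKit

// Ask the session for its current world map and archive it to disk.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("Couldn't get world map: \(String(describing: error))")
            return
        }
        do {
            let data = try NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true)
            try data.write(to: url)
        } catch {
            print("Couldn't save world map: \(error)")
        }
    }
}

// Unarchive a saved map and relocalize a new session against it.
func loadWorldMap(from url: URL, into session: ARSession) throws {
    let data = try Data(contentsOf: url)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}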

Environmental Texturing

In previous versions of ARKit, 3D objects placed in the real world couldn't gather much information about their surroundings, which left them looking unrealistic and out of place. Now, with environmental texturing, objects can reflect the world around them, giving them a greater sense of realism and place. As the user scans the scene, ARKit records the environment and maps it onto a cube map, which is then used as reflection data for the 3D object, allowing it to reflect its surroundings back. What's even cooler is that Apple uses machine learning to generate the parts of the cube map the camera never captured, such as overhead lights or other aspects of the scene. So even if a user doesn't scan the entire scene, the object will still look as if it exists in that space, because it can reflect objects that were never directly in view.

To enable environmental texturing, we simply set the configuration's environmentTexturing property to .automatic.
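
For example (the function name here is just for illustration):

import ARKit

// Run world tracking with automatic environment texturing enabled.
func enableEnvironmentTexturing(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    // ARKit generates environment probes on its own, using machine
    // learning to fill in parts of the cube map the camera never saw.
    configuration.environmentTexturing = .automatic
    session.run(configuration)
}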

Image Tracking

Next, ARKit can now not only detect images but track them. Images no longer need to be static for ARKit to anchor 3D content to them: when ARKit detects an image, that image can move freely in the real world while still retaining its ARAnchor. This works for multiple images at once, each tracked simultaneously at 60 fps.

Much like ARWorldTrackingConfiguration, which can detect images in the real world, Apple has provided an ARImageTrackingConfiguration. While both can detect images, the main difference is that instead of tracking everything in the AR world, ARImageTrackingConfiguration tracks only the images, letting you anchor 3D objects to them as they move. The images to track go in the configuration's trackingImages property, and the maximumNumberOfTrackedImages property controls how many are tracked at once. If the camera sees more images than that maximum, the extra images are still detected and given anchors, but they are not tracked.
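
Here's a sketch of the setup, assuming the reference images live in an asset catalog group named "AR Resources" (a common convention, not a requirement):

import ARKit

// Run image tracking against images bundled in the asset catalog.
func runImageTracking(on session: ARSession) {
    guard let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil) else {
        fatalError("Missing expected asset catalog resource group")
    }

    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = referenceImages
    // Track up to two images at once; additional detected images
    // receive anchors but are not continuously tracked.
    configuration.maximumNumberOfTrackedImages = 2
    session.run(configuration)
}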

Object Detection

ARKit 2 Object Scanning

The new ARKit gives you the ability to scan 3D objects in the real world, creating a map of the scanned object that can be recognized when the object later comes into view in the camera. Similar to ARWorldMap, ARKit packages this map as an ARReferenceObject that can be saved and loaded in another session.

To detect a scanned reference object, set the ARWorldTrackingConfiguration's detectionObjects property to the reference object(s) you want ARKit to look for.
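
A minimal sketch, assuming a previously scanned .arobject file is bundled with the app (the file name is hypothetical):

import ARKit

// Configure a session to watch for a previously scanned object.
func runObjectDetection(on session: ARSession) throws {
    let objectURL = Bundle.main.url(forResource: "Sculpture",
                                    withExtension: "arobject")!
    let referenceObject = try ARReferenceObject(archiveURL: objectURL)

    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects = [referenceObject]
    session.run(configuration)
}

When ARKit recognizes the object, it adds an ARObjectAnchor to the session, delivered through the usual session(_:didAdd:) delegate callback.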

Apple has created a sample project for scanning 3D objects, which can be downloaded from its developer website.

Face Tracking Enhancements

Lastly, Apple has improved its face tracking capabilities by adding two features: gaze tracking and tongue detection. Both of these features allow for more immersive and expressive Animojis and Memojis.
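
Both values are exposed on ARFaceAnchor: gaze through its lookAtPoint property, and the tongue through a new tongueOut blend shape coefficient. A quick sketch of reading them (the handler name is mine):

import ARKit

// Inspect a face anchor, e.g. from an ARSessionDelegate update callback.
func handle(_ faceAnchor: ARFaceAnchor) {
    // Where the user's eyes are looking, in the face anchor's
    // own coordinate space.
    let gaze = faceAnchor.lookAtPoint

    // Tongue detection arrives as a 0...1 blend shape coefficient.
    let tongueOut = faceAnchor.blendShapes[.tongueOut]?.floatValue ?? 0
    if tongueOut > 0.5 {
        print("Tongue out! Looking toward \(gaze)")
    }
}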

Apple has put a lot of time and effort into improving ARKit, clearly shown by this year’s updates. I encourage you to go to Apple’s developer website, download the betas and sample projects, and try each of these for yourself.

Mark Dalrymple

Reviewer, Big Nerd Ranch

MarkD is a long-time Unix and Mac developer, having worked at AOL, Google, and several start-ups over the years. He's the author of Advanced Mac OS X Programming: The Big Nerd Ranch Guide and over 100 blog posts for Big Nerd Ranch, and an occasional speaker at conferences. Believing in the power of community, he's a co-founder of CocoaHeads, an international Mac and iPhone meetup, and runs the Pittsburgh PA chapter. In his spare time, he plays orchestral and swing band music.
