Why You Should Be Freaking Out Over ARKit 2

Mehdi Rashadatjou

Even though Augmented Reality (AR) is nothing new to the tech world, with projects like Google Tango and Vuforia, the rapid rise of Apple as the leading player in this field is enviable. Their AR tech in smartphone apps and games is pretty advanced.

Apple is investing heavily in augmented reality for the future and has recently introduced ARKit 2, making the development of high-quality AR projects possible for everyone. This neat toolkit puts the likes of Pokémon GO and Google's ARCore to shame.

What is ARKit?
“The simplest, shortest explanation of ARKit is that it does a lot of the heavy lifting for app developers in terms of working with the iOS device's camera, scanning images and objects in the environment, and positioning 3D models in real space and making them fit in.”
Samuel Axon, Senior Reviews Editor at Ars Technica

“I think AR is big and profound,” Tim Cook said in August 2017. He says the technology will prove itself to be as important as the App Store and will make the smartphone even more essential to people.

Now, before jumping the gun on ARKit 2, let's see what ARKit 1.5 brought to the table.

ARKit 1.5 features

Vertical plane detection

The biggest update in ARKit 1.5 is the ability to detect vertical and irregularly shaped surfaces. That doesn't sound like much at first, but considering these features can run on a single iPhone 6s camera, it makes them pure magic.

Apple has significantly increased the capabilities of ARKit by pushing the barrier from horizontal-only to both vertical and horizontal surfaces, using just a single line of code.

import UIKit
import ARKit

class ViewController: UIViewController {
    @IBOutlet weak var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Configuration that uses the back-facing camera, tracks the device's
        // orientation and position, and detects real-world flat surfaces.
        let configuration = ARWorldTrackingConfiguration()
        // Set this option depending on the kind of object you want to place.
        configuration.planeDetection = [.horizontal, .vertical]
        // Run the configuration on the session backing the sceneView outlet.
        sceneView.session.run(configuration, options: .resetTracking)
    }
}

Note: this only sets ARKit up at a base level.

Vertical and horizontal plane detection

With this one line of code you're now able to create almost any AR app you can imagine. The handling of the hardware, the “heavy lifting”, is all brilliantly done in the background by ARKit, so you can focus solely on the development process.
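To give a feel for how those detections actually reach your code, here is a minimal sketch, assuming the ViewController from the snippet above is also assigned as the sceneView's delegate (the delegate wiring is not shown):

import ARKit
import SceneKit

// React when ARKit detects a horizontal or vertical surface.
extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        // alignment reports whether the detected surface is vertical or horizontal.
        let kind = planeAnchor.alignment == .vertical ? "vertical" : "horizontal"
        print("Detected a \(kind) plane")
    }
}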

Face recognition

You don't like the filters Snapchat offers? Don't use them. Create your own instead, and with ease. Face recognition is one of the greatest features ARKit offers to developers. It sounds awesome, right?

But just how much coding does this require?

From the code snippet above we can see how ARKit makes things easy for us. With a small amount of code you can create basic-level apps. The main class we're going to use is ARFaceTrackingConfiguration.

Here is how Apple documentation defines this configuration:

“… a configuration that tracks the movement and expressions of the user's face with the TrueDepth camera.”

With it, the phone does all the work for us. Here is how to implement ARFaceTrackingConfiguration:

import ARKit
import UIKit

class FaceViewController: UIViewController {
    @IBOutlet weak var sceneView: ARSCNView!

    private func setupSceneView() {
        // Face tracking requires a device with a TrueDepth front-facing camera.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        let configuration = ARFaceTrackingConfiguration()
        configuration.isLightEstimationEnabled = true
        sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }
}

Once running, the ARSession will begin to call your delegate functions with a special type of anchor: ARFaceAnchor. It will also call session(_:didUpdate:) many times per second with an up-to-date ARFaceAnchor object.

ARFaceAnchor has a property called blendShapes: a dictionary full of detailed information about the user's face. Each facial part is identified by an ARBlendShapeLocation key and described by a numerical value from 0 to 1.

For example, faceAnchor.blendShapes[.mouthSmileLeft] returns a value between 0 and 1 that tells you how much the user is smiling on the left side of their face (note: “left” in ARKit terms means from an external point of view, not from the user's point of view).
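As a rough illustration, here is a minimal sketch of reading that coefficient from the session delegate callback mentioned above; the class name is made up for the example, and you would need to assign an instance of it as the ARSession's delegate:

import ARKit

class FaceBlendShapeReader: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // Blend shape values are NSNumbers between 0.0 (neutral) and 1.0 (fully engaged).
            if let smileLeft = faceAnchor.blendShapes[.mouthSmileLeft]?.floatValue {
                print("Left-side smile: \(smileLeft)")
            }
        }
    }
}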

There's an impressive number of facial parts that can be tracked:

“As of iOS 11.2, ARKit defines 50 different ARBlendShapeLocation and has everything from nose sneers to mouth rolls to cheek puffing.”

Apple Animoji

Image recognition

ARKit has been able to recognize 2D images since iOS 11.3.

How can we use this exactly?

Let's say you are walking past a cinema and see a poster for a movie that really catches your eye. Using ARKit, you could easily scan the poster with your phone and instantly get the trailer and more information about that movie.

You could also get real-time information about transport (buses, trains, etc.) by scanning the appropriate images, e.g. bus stop signs. The possibilities are endless with this amazing feature.

Many AR experiences can be enhanced by using known features of the user s environment to trigger the appearance of virtual content. For example, a museum app might show a virtual curator when the user points their device at a painting, or a board game might place virtual pieces when the player points their device at a game board. In iOS 11.3 and later, you can add such features to your AR experience by enabling image recognition in ARKit: Your app provides known 2D images, and ARKit tells you when and where those images are detected during an AR session.

Apple Documentation

… and the amount of code you need in order to make this work is *drum roll*:

import ARKit
import UIKit

class ImageDetectionViewController: UIViewController {
    @IBOutlet weak var sceneView: ARSCNView!

    func runImageDetection() {
        // Load the reference images from the "AR Resources" asset catalog group.
        guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else { return }
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = referenceImages
        let options: ARSession.RunOptions = [.resetTracking, .removeExistingAnchors]
        sceneView.session.run(configuration, options: options)
    }
}
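When one of those reference images is spotted, ARKit hands you an ARImageAnchor through the delegate. A minimal sketch, assuming the ImageDetectionViewController above is also the sceneView's delegate:

import ARKit
import SceneKit

extension ImageDetectionViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        // The reference image carries the name and physical size set in the asset catalog.
        print("Detected image: \(imageAnchor.referenceImage.name ?? "unnamed")")
    }
}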

Last thoughts on ARKit 1.5

Apple has put a lot of effort into ARKit. By giving developers tools like this, they invited and challenged us to become better at what we love to do.

Apple debuted ARKit in iOS 11 as a base for their future efforts in the AR area.

Augmented Reality became more than a gimmick or future-tech. Overnight, iPhones became AR-capable devices, opening the market to millions of users. This was perhaps ARKit's greatest selling point: no additional headset or external sensors required. All you need is your iPhone.

Now for the main course of this article:

ARKit 2

Apple just introduced the newest update to ARKit, the incredible AR framework that powers Pokémon GO's AR Plus mode on iOS, at WWDC 2018. The new version brings the following updates (pay attention now, this is huge):

What s new?

The first demo of ARKit 2.0 revealed the following features:

For non-developers

  • Improved face tracking – not so important for anything that isn't an Animoji
  • Realistic rendering – pretty straightforward
  • 3D object detection – ARKit 1.0 could detect flat surfaces; 2.0 builds on top of that to detect actual volumetric objects
  • Persistent experiences – allows developers and AR players to leave objects around the world for others to explore
  • Shared experiences – allows multiple devices to share and interact inside one AR session

For developers

  • Mapping (saving and loading maps) – enables the powerful new persistence and multi-user experiences
  • Environment texturing – realistically render your augmented reality scene
  • Image tracking (2D objects) – track static or moving 2D images in real time
  • Object detection (3D objects) – detect static real-world 3D objects in real time
  • Face tracking enhancements – a better and more stable API for face recognition

Mapping

In the previous version, AR mapping was only available while the ARSession was alive, but ARKit 2.0 introduces a new API for developers, exposed as an ARWorldMap object.

An ARWorldMap object represents a mapping of a physical 3D space that can be detected by the phone and used to create an ARScene. This enables two new experiences in ARKit: persistence and multi-user experiences.

  • Persistence – save objects in an ARScene and load them again in a new session (see the sketch below).
  • Multi-user experience – the experience is no longer limited to a single device or a single user; it can be shared in real time with multiple users. You are free to use any technology to share the ARWorldMap object, such as AirDrop or Multipeer Connectivity over Bluetooth or Wi-Fi.

Apple WWDC 2018
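As a rough sketch of what persistence looks like in code (the sceneView and mapURL names here are placeholders, not part of any official sample), saving and later restoring an ARWorldMap might look like this:

import ARKit
import Foundation

// Ask the running session for its current world map and archive it to disk.
func saveWorldMap(from sceneView: ARSCNView, to mapURL: URL) {
    sceneView.session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { return }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true) {
            try? data.write(to: mapURL)
        }
    }
}

// Load a previously saved map and relocalize against it, so earlier anchors reappear.
func loadWorldMap(into sceneView: ARSCNView, from mapURL: URL) {
    guard let data = try? Data(contentsOf: mapURL),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}

For the multi-user case, the same archived data can be sent to another device (for example over Multipeer Connectivity) instead of being written to disk.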

Environment Texturing

It greatly enhances rendering, and therefore the end-user experience.

It adds:

  • Position and tracking
  • Scale
  • Lighting (giving you an ambient light estimate)
  • Shadows (adds shadows to virtual objects, improving human visual perception)
  • Reflection of textures (see the reflection of the environment on the surface of the object)

Environment texturing gathers scene texture information and maps it during the entire time the session is active. This gives virtual objects a more realistic look.

Apple WWDC 2018
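Enabling it comes down to one property on the world-tracking configuration. A minimal sketch, assuming a sceneView outlet:

import ARKit

func runWithEnvironmentTexturing(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal]
    // ARKit builds environment probe textures from camera imagery while the session runs,
    // so virtual objects can reflect their real surroundings.
    configuration.environmentTexturing = .automatic
    sceneView.session.run(configuration)
}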

Image Tracking

In iOS 11.3 Apple introduced Image Detection as part of world tracking. Image detection searches for known 2D images in the scene (Note: the images need to be static).

What's new?

  • Image tracking is an extension of image detection with one huge advantage: images no longer need to be static. ARKit estimates the position and orientation of each image for every frame, at 60 frames per second (see the sketch below).
  • Multi-image detection is also supported, which means that ARKit can track, recognize and position multiple images simultaneously.
Apple WWDC 2018
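A minimal sketch of the new tracking configuration, assuming an “AR Resources” asset catalog group and a sceneView outlet (both placeholders):

import ARKit

func runImageTracking(on sceneView: ARSCNView) {
    guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else { return }
    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = referenceImages
    // Track more than one image at a time; ARKit updates each ARImageAnchor every frame.
    configuration.maximumNumberOfTrackedImages = 2
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}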

Object Detection

Object detection can be used to detect known 3D objects in an ARScene. (Note: just like with image detection, the term “detection” means that these objects need to be static.)

For this to work, objects first need to be scanned via an app and then saved locally. (Note: objects being scanned need to be well textured, rigid, and non-reflective.) By scanning the object you get its position and orientation.

Apple WWDC 2018
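A minimal sketch of detecting a previously scanned object, assuming the scanned .arobject files live in an “AR Resources” asset catalog group and a sceneView outlet exists:

import ARKit

func runObjectDetection(on sceneView: ARSCNView) {
    guard let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "AR Resources", bundle: nil) else { return }
    let configuration = ARWorldTrackingConfiguration()
    // ARKit adds an ARObjectAnchor to the session when a known object is recognized.
    configuration.detectionObjects = referenceObjects
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}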

Face Tracking

With the release of the iPhone X and its amazing front-facing camera, Apple gave us powerful face detection and tracking in ARKit.

  • ARKit does this by calculating the position and orientation of the face for each frame, at 60 fps.
  • A directional light estimate was also added; it uses the face as a light probe to estimate the position of the light in a scene.
  • Finally, ARKit added blend shapes, which track more than 50 specific facial features.

What's new?

  • Gaze detection tracks both left and right eye movement in real time.
  • Tongue support: ARKit can now detect whether the user's tongue is out (see the sketch below).
Apple WWDC 2018
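A minimal sketch of reading the new gaze and tongue data from a face anchor you already receive in a delegate callback:

import ARKit

func inspectGazeAndTongue(in faceAnchor: ARFaceAnchor) {
    // Per-eye transforms plus a look-at point describe where the user is looking.
    let leftEye = faceAnchor.leftEyeTransform
    let rightEye = faceAnchor.rightEyeTransform
    let lookAt = faceAnchor.lookAtPoint
    // tongueOut is a blend shape coefficient between 0 (in) and 1 (fully out).
    let tongueOut = faceAnchor.blendShapes[.tongueOut]?.floatValue ?? 0
    print(leftEye, rightEye, lookAt, tongueOut)
}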

Summary

We have seen how Apple is really trying to give its users the best AR experience with the newest ARKit update. The fact that ARKit works smoothly from the iPhone 6s onward is mind-blowing.

ARKit 2 brought us:

  • Saving and loading maps
  • World tracking enhancements
  • Environment texturing
  • Image tracking
  • Object detection
  • Gaze and tongue detection

These tools are a fantastic addition to our quest to build quality, state-of-the-art AR apps.

If you liked this article, press the clap button 👏 below for more content. If that doesn't show you more content, try pressing it again.
