Overview

This demo application shows how to place an object in augmented reality and how to interact with your virtual items using gestures and hit testing.

Prerequisites

You will need Xcode 9 or later and a physical iOS device running iOS 11 or later with an A9 or newer processor; ARKit apps do not run in the Simulator.

Project Setup


Open Xcode and create a new project. Choose "Augmented Reality App" and fill in the required details.

Apple provides options for Content Technology such as SceneKit, SpriteKit, and Metal. Here we will choose SceneKit. If you want to place a 3D object model, Xcode needs to be able to read that 3D object file in a SceneKit-supported format (.scn or .dae).
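
If your model starts out as a .dae file, Xcode's Editor menu offers a conversion to .scn (select the file in the navigator, then look for the SceneKit conversion option), or you can load it directly from the asset folder. A minimal sketch, assuming a hypothetical "model.dae" inside art.scnassets:

// "model.dae" is a placeholder name, not a file that ships with the template
let daeScene = SCNScene(named: "art.scnassets/model.dae")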

Ensure that "Include Unit Tests" and "Include UI Tests" are unchecked, as we won't be needing them. We will be coding in Swift and using Apple's SceneKit for creating content.

The first thing we are going to do is open the storyboard by clicking "Main.storyboard" in the Project navigator on the left side. We will be taken to a page presenting the "ARSCNView". This is the main view into which all virtual objects are rendered; it is what makes augmented reality possible.

Now we need to add a UILabel to the scene. This infoLabel will be used to inform the user about AR session state changes and any node updates. Follow this example to add the label to the scene visually and logically. Don't forget to add constraints so that it stays pinned to the top of the device!

NOTE: The "Green Circle" indicates that I am holding "Control" as I click and drag from the label.
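
After the Control-drag, ViewController.swift should contain outlets wired up roughly like this (a sketch; the names infoLabel and sceneView are the ones used throughout the rest of this tutorial):

import UIKit
import ARKit

class ViewController: UIViewController {
    // Connected from Main.storyboard via Control-drag
    @IBOutlet var sceneView: ARSCNView!
    @IBOutlet var infoLabel: UILabel!
}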

Now open ViewController.swift. For debugging purposes we can set sceneView.debugOptions = .showFeaturePoints to see how ARKit detects surfaces. When you run the app, you should see a lot of yellow dots in the scene. These are feature points, which ARKit uses to estimate properties like the orientation and position of physical objects in the environment. The more feature points in an area, the better the chance ARKit can determine and track the environment.

override func viewDidLoad() {
        super.viewDidLoad()

        // Set the view's delegate
        sceneView.delegate = self

        // Show statistics such as fps and timing information
        sceneView.showsStatistics = true
        sceneView.debugOptions = .showFeaturePoints

        // Create a new scene
        let scene = SCNScene(named: "art.scnassets/ship.scn")!

        // Set the scene to the view
        sceneView.scene = scene
    }

Now it's time to set up a world-tracking session with horizontal plane detection. As you can see, in your viewWillAppear method a session has already been created and set to run. Add this line to the configuration:

configuration.planeDetection = .horizontal

So now your method will look like this:

override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Create a session configuration
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal

        // Run the view's session
        sceneView.session.run(configuration)
    }

Detect plane and place object

When ARKit detects a surface, it provides an ARPlaneAnchor object. An ARPlaneAnchor contains information about the position and orientation of the detected real-world surface.
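
To get a feel for what an ARPlaneAnchor carries, here is a small sketch that just prints its properties. describePlane is a hypothetical helper; the anchor would come from one of the delegate callbacks below:

func describePlane(_ planeAnchor: ARPlaneAnchor) {
    print(planeAnchor.alignment)  // .horizontal for the planes detected here
    print(planeAnchor.center)     // center of the plane, in the anchor's own coordinate space
    print(planeAnchor.extent)     // estimated width and length of the detected surface
    print(planeAnchor.transform)  // the anchor's position and orientation in world space
}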

To know when surfaces are detected, updated, or removed, use the ARSCNViewDelegate methods. Implement the following so you are notified whenever the scene view has an update.

// MARK: - ARSCNView delegate

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // Called when a node has been added for an anchor
    }

func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
        // Called when a node has been removed from the scene view
    }

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        // Called when a node has been updated with data from its anchor
    }

The ARSessionObserver protocol provides the current tracking state of the camera, so you can tell whether your app is ready to detect planes. When you get a .normal state, you are ready to detect planes. To handle this, implement these delegate methods (you may have to replace some of the functions if they are already present in the file).

// MARK: - ARSessionObserver

    func sessionWasInterrupted(_ session: ARSession) {
        infoLabel.text = "Session was interrupted"
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        infoLabel.text = "Session interruption ended"
        resetTracking()
    }

    func session(_ session: ARSession, didFailWithError error: Error) {
        infoLabel.text = "Session failed: \(error.localizedDescription)"
        resetTracking()
    }

    func resetTracking() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }

 

func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        // Inform the user when the app is ready
        switch camera.trackingState {
        case .normal:
            infoLabel.text = "Move the device to detect horizontal surfaces."
        case .notAvailable:
            infoLabel.text = "Tracking not available."
        case .limited(.excessiveMotion):
            infoLabel.text = "Tracking limited - Move the device more slowly."
        case .limited(.insufficientFeatures):
            infoLabel.text = "Tracking limited - Point the device at an area with visible surface detail."
        case .limited(.initializing):
            infoLabel.text = "Initializing AR session."
        default:
            infoLabel.text = ""
        }
    }

When a plane has been detected, add the object onto it. Here we are going to add the 3D model named "ship.scn".

class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {

  ...

  var ship: SCNNode!

  ...

  func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // Called when a node has been added for an anchor
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        DispatchQueue.main.async {
            self.infoLabel.text = "Surface Detected."
        }

        let shipScn = SCNScene(named: "ship.scn", inDirectory: "art.scnassets")
        ship = shipScn?.rootNode
        // planeAnchor.center is expressed in the anchor node's coordinate space,
        // so position the ship there and add it as a child of the anchor's node
        // (not the scene's root node, which would misplace it)
        ship.simdPosition = float3(planeAnchor.center.x, planeAnchor.center.y, planeAnchor.center.z)
        node.addChildNode(ship)
    }
}

Now build and run your app. You will see that some surfaces show more feature points than others. Surfaces that are shiny or a single solid color make it difficult for ARKit to obtain strong reference points for plane detection and to determine unique points in the environment. If you aren't seeing many feature points, move your device around the area and try different objects or surfaces. Once ARKit has detected a plane, your object will be added onto it.
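
If you want to inspect the point cloud programmatically rather than eyeballing the yellow dots, ARFrame exposes it. A quick sketch (rawFeaturePoints can be nil early in the session):

if let pointCloud = sceneView.session.currentFrame?.rawFeaturePoints {
    print("Tracking \(pointCloud.points.count) feature points")
}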

When you find a plane, you should see your object in AR. Walk around and notice how it stays pinned in its position.

Change position of object to tap location with UITapGestureRecognizer

To place the object at the tap location, first add a UITapGestureRecognizer to the scene view.

override func viewDidLoad() {
        ...
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(didTap(_:)))
        sceneView.addGestureRecognizer(tapGesture)
        ...
}

Then, in the tap gesture handler, move the node to the tap position. A node represents the position and coordinates of an object in 3D space. Here we set the node's position to the tap position.

@objc func didTap(_ gesture: UITapGestureRecognizer) {
        // Ensure the ship is on screen
        guard let _ = ship else { return }

        let tapLocation = gesture.location(in: sceneView)
        let results = sceneView.hitTest(tapLocation, types: .featurePoint)

        if let result = results.first {
            let translation = result.worldTransform.translation
            ship.position = SCNVector3Make(translation.x, translation.y, translation.z)
            // Re-adding the ship to the root node reparents it, so the
            // world-space position set above is applied correctly
            sceneView.scene.rootNode.addChildNode(ship)
        }
    }

Xcode is probably giving you an error like: Value of type 'simd_float4x4' has no member 'translation'

To get the translation from worldTransform, add this extension to the bottom of the ViewController.swift file (outside all the braces).

extension float4x4 {
    var translation: float3 {
        let translation = self.columns.3
        return float3(translation.x, translation.y, translation.z)
    }
}

Build and run the app again! Notice that you can tap to change where the ship is located. NOTE: The movement is probably a bit funky because of how the default ship.scn is built. Don't worry if the ship doesn't move exactly to your finger; the code is executing properly, the ship.scn model just reacts a little differently. Anyway...
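
The offset likely comes from how the nodes inside ship.scn are arranged. If it bothers you, a common tweak in other ARKit tutorials is to grab the ship's named child node instead of the scene's whole root node when loading the model in renderer(_:didAdd:for:). A sketch, assuming the node inside the template asset is named "ship" (verify the name in Xcode's scene graph editor):

// Pull out just the named ship node rather than the scene's root node
ship = shipScn?.rootNode.childNode(withName: "ship", recursively: true)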

Scaling object with UIPinchGestureRecognizer

To zoom the 3D object in and out, we change the object's scale while the user pinches. To recognize a pinch on the scene view, add a UIPinchGestureRecognizer.

override func viewDidLoad() {
        ...
        let pinchGesture = UIPinchGestureRecognizer(target: self, action: #selector(didPinch(_:)))
        sceneView.addGestureRecognizer(pinchGesture)
        ...
}

Here we set a maximum scale of 2 (200% of the original object's size) and a minimum scale of 0.5 (50% of the original). You can play around with these values!

@objc func didPinch(_ gesture: UIPinchGestureRecognizer) {
        // Ensure ship is on screen
        guard let _ = ship else { return }
        var originalScale = ship?.scale

        // Get state of pinch
        switch gesture.state {
        case .began:
            // Pinch began: seed the gesture with the ship's current scale
            originalScale = ship?.scale
            gesture.scale = CGFloat((ship?.scale.x)!)
        case .changed:
            // Pinch changed
            // Get new scale (if any)
            guard var newScale = originalScale else { return }
            if gesture.scale < 0.5 {
                newScale = SCNVector3(x: 0.5, y: 0.5, z: 0.5)
            } else if gesture.scale > 2 {
                newScale = SCNVector3(2, 2, 2)
            } else {
                newScale = SCNVector3(gesture.scale, gesture.scale, gesture.scale)
            }
            ship?.scale = newScale
        case .ended:
            // Pinch ended
            guard var newScale = originalScale else { return }
            if gesture.scale < 0.5 {
                newScale = SCNVector3(x: 0.5, y: 0.5, z: 0.5)
            } else if gesture.scale > 2 {
                newScale = SCNVector3(2, 2, 2)
            } else {
                newScale = SCNVector3(gesture.scale, gesture.scale, gesture.scale)
            }
            ship?.scale = newScale
            gesture.scale = CGFloat((ship?.scale.x)!)
        default:
            gesture.scale = 1.0
            originalScale = nil
        }
    }
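
As an aside, the clamping above can be written more compactly with min/max. This is an equivalent sketch, not the code from the repository:

@objc func didPinch(_ gesture: UIPinchGestureRecognizer) {
    guard let ship = ship else { return }
    if gesture.state == .began {
        // Seed the gesture with the ship's current scale
        gesture.scale = CGFloat(ship.scale.x)
    }
    // Clamp the scale to the same [0.5, 2] range
    let clamped = Float(min(max(gesture.scale, 0.5), 2.0))
    ship.scale = SCNVector3(clamped, clamped, clamped)
}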

Rotate object using UIRotationGestureRecognizer

To rotate the object, a UIRotationGestureRecognizer is useful. It recognizes a rotation using two fingers.

override func viewDidLoad() {
        ...
        let rotateGesture = UIRotationGestureRecognizer(target: self, action: #selector(didRotate(_:)))
        sceneView.addGestureRecognizer(rotateGesture)
        ...
}

 

class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    @IBOutlet var infoLabel: UILabel!
    @IBOutlet var sceneView: ARSCNView!
    var ship: SCNNode!
    ...
    var currentAngleY: Float = 0.0
    ...

    @objc func didRotate(_ gesture: UIRotationGestureRecognizer) {
        guard let _ = ship else { return }
        // Negate the rotation so the model follows the direction of the fingers
        var newAngleY = (Float)(-gesture.rotation)

        // gesture.rotation resets with each new gesture, so accumulate the
        // angle from previous rotations
        newAngleY += currentAngleY
        // Rotate around the vertical (y) axis, as the variable names suggest
        ship?.eulerAngles.y = newAngleY

        if gesture.state == .ended {
            currentAngleY = newAngleY
        }
    }
}

That's it! 

You can view the working code here: https://github.com/zweg25/FirstARKitApp/tree/master