Tutorial 1: First ARCore app
This tutorial will walk you through the process of creating a simple ARCore app that detects flat planes (like tables and walls) and allows you to place virtual objects on these planes. The placed objects can then be dragged, rotated, and resized using finger gestures. Thanks to ARCore's built-in functions, this will be a fairly simple project in terms of amount of code required, but it will introduce you to many important concepts of AR that will be helpful in more complex projects to come.
This is what the final product will look like.
For this tutorial, you will need:
- A compatible Android device (see this list of compatible devices)
- A Windows or Linux laptop
- Android Studio (version 3.2+, preferably the most up-to-date version)
- A cable to connect your phone to your laptop
Before diving into the project itself, let's first introduce the big concepts involved in this project.
Environmental understanding - In order to place virtual objects in the real world, ARCore must have some understanding of the surrounding environment. In particular, it must be able to detect surfaces on which to place the objects. ARCore accomplishes this by using computer vision to detect feature points. Features could range from the texture of a wooden tabletop to the white lines of a brick wall to predefined 2D visual markers. In general, it is easier to track objects with complex features (like a speckled wall) than objects that lack texture (like a smooth, white tabletop). For this tutorial, we will tell ARCore to find flat surfaces to place virtual objects on.
Motion tracking - ARCore uses concurrent odometry and mapping (COM, for short) to keep track of the device's position over time. Visual information from the camera is combined with data from the device's IMU sensor to estimate how much the phone moves relative to the environment. This helps ARCore keep virtual objects anchored at their set locations no matter how much you move the phone.
Anchors - COM alone is not enough to reliably keep virtual objects fixed at their set locations for extended periods of time, especially if the phone moves a lot, because IMU sensors are prone to drift error. To compensate, ARCore lets us attach each virtual object to an anchor: a fixed pose tied to a surface it has detected. As ARCore refines its understanding of the environment over time, it updates each anchor's pose, so objects attached to an anchor stay put in the real world.
Now that we've reviewed these basic concepts of AR, let's get started!
Part 0: Source code
Source code for this tutorial can be found here: github.com/Goldenchest/ARCoreTutorials. Go ahead and download/clone it somewhere convenient for you - if you ever get lost on any of the steps, feel free to reference this source code.
Part 1: Create and configure the Android Studio project
When you open Android Studio, go ahead and create a new project (if you don't see this option in a popup window, then go to File->New->New Project).
You will see a popup asking you to "Choose your project". Choose "Empty activity" and click Next.
You will now be asked to configure your project. Feel free to give your app any name you want (I named mine "ARTutorial1"), and feel free to modify the save location if you'd like. Make sure the Minimum API level is set to API 24. Now click Finish.
Now, import the Sceneform plugin into your project by following the instructions here.
You should see a "Project" tab on the left of the screen, with app and Gradle scripts. Expand Gradle scripts and open "build.gradle (Module: app)". Add the following to the bottom of the "android" block:
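Sceneform uses Java 8 language features, so the addition enables Java 8 compatibility. Following Google's standard Sceneform setup, it looks like this:

```groovy
// Sceneform uses Java 8 language features (e.g. lambdas),
// so both source and target compatibility must be set to 1.8.
compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
}
```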
In the same file, add the following to the "dependencies" block:
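This pulls in the Sceneform UX library, which provides ArFragment and the built-in drag/rotate/resize gestures. The version number below is an assumption (1.15.0 was the final Sceneform release); the version in the source code may differ:

```groovy
// Sceneform UX: ArFragment plus transformation gestures.
implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.15.0'
```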
Android Studio should now be prompting you to sync the Gradle scripts, near the top of the screen. Click Sync Now to apply the changes you made to the Gradle scripts.
Now, open app/manifests/AndroidManifest.xml. Add the following inside "manifest":
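These are the standard ARCore manifest entries: they request camera access and declare the app as "AR Required", which hides it from the Play Store on unsupported devices:

```xml
<uses-permission android:name="android.permission.CAMERA" />
<!-- "AR Required": the app will not run without ARCore. -->
<uses-feature android:name="android.hardware.camera.ar" android:required="true" />
```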
And add the following inside "application":
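This meta-data tag tells the Play Store that Google Play Services for AR (ARCore) must be installed:

```xml
<!-- ARCore is required for this app to function. -->
<meta-data android:name="com.google.ar.core" android:value="required" />
```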
The full manifest file should look like this:
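Assuming the default "Empty activity" template (your package name will differ from the hypothetical com.example.artutorial1 used below), the assembled manifest is roughly:

```xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.artutorial1">

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-feature android:name="android.hardware.camera.ar" android:required="true" />

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">

        <meta-data android:name="com.google.ar.core" android:value="required" />

        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>
```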
Now, open res/layout/activity_main.xml. Switch to the Text view tab near the bottom of the screen to directly edit the XML file. Delete the TextView element, and replace it with the following:
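The replacement is an ArFragment from the Sceneform UX library, sized to fill the whole screen:

```xml
<fragment
    android:id="@+id/sceneform_fragment"
    android:name="com.google.ar.sceneform.ux.ArFragment"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```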
The fragment (which we named "sceneform_fragment") will display the camera view and virtual AR models on your screen when you run the app. You are now done configuring the project - we can finally dive into the code!
Part 2: Detecting flat surfaces
A great thing about ARCore is that plane detection is already built-in, so that you don't have to implement it yourself! To start, open MainActivity.java and add the following member variable to the top of the class:
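We need a reference to the fragment, plus a ModelRenderable that will later hold our 3D model (the variable names here follow the ones used throughout this tutorial):

```java
private ArFragment fragment;
private ModelRenderable modelRenderable;
```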
At the bottom of the onCreate function, initialize your fragment by using the fragment manager to find our previously defined "sceneform_fragment":
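Using the standard support fragment manager, the lookup looks like this:

```java
fragment = (ArFragment) getSupportFragmentManager()
        .findFragmentById(R.id.sceneform_fragment);
```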
And that's it! Connect your phone and deploy the app (Run->Run 'app'). Select your phone from the list of deployment targets, and press OK. The app should soon launch on your phone. Point the camera at a flat, textured surface, like a table. You should see the view from your phone's back-facing camera, with an animation prompting you to move your phone around. This helps ARCore detect the table's surface. Keep moving the phone around until white dots appear on top of the table. This means that ARCore has detected the surface of the table!
Part 3: Placing virtual objects in the real world
Now that we can detect surfaces, let's place objects on them! If you look in the source code you downloaded, you should see a folder called "sampledata". This folder contains a 3D model of Andy, the Android mascot. Drag this folder from a file explorer to the "app" folder within your project. Expand sampledata->models. Right click on andy.obj and click New->Sceneform Asset. In the "Import Sceneform Asset" popup, change ".sfb Output Path" to "src\main\res\raw\andy.sfb", and click Finish. You should now see the following, which indicates successful creation of the sceneform asset:
Now let's return to MainActivity.java. Let's create a function renderAndy to render the Android model:
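Model loading in Sceneform is asynchronous, so ModelRenderable.builder() returns a future and we assign the result in a callback. A sketch of renderAndy:

```java
private void renderAndy() {
    ModelRenderable.builder()
            .setSource(this, R.raw.andy)  // the andy.sfb asset we imported
            .build()
            .thenAccept(renderable -> modelRenderable = renderable)
            .exceptionally(throwable -> {
                Toast.makeText(this, "Unable to load model", Toast.LENGTH_SHORT).show();
                return null;
            });
}
```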
Call renderAndy() at the bottom of the onCreate function. This function loads the 3D model we imported in the previous step (which we access with R.raw.andy) and assigns it to our modelRenderable variable.
Now, we want to make the 3D model show up whenever we tap on the screen. To do so, create a function setupTapListener in the MainActivity class:
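Sceneform's ArFragment exposes a tap listener for detected planes, so a minimal version of setupTapListener looks like this:

```java
private void setupTapListener(ArFragment fragment) {
    fragment.setOnTapArPlaneListener(
            (HitResult hitResult, Plane plane, MotionEvent motionEvent) -> {
        if (modelRenderable == null) {
            return;  // model hasn't finished loading yet
        }
        // Anchor the model at the point where the tap ray hit the plane.
        Anchor anchor = hitResult.createAnchor();
        AnchorNode anchorNode = new AnchorNode(anchor);
        anchorNode.setParent(fragment.getArSceneView().getScene());

        // TransformableNode gives us drag/rotate/resize gestures for free.
        TransformableNode node = new TransformableNode(fragment.getTransformationSystem());
        node.setParent(anchorNode);
        node.setRenderable(modelRenderable);
        node.select();
    });
}
```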
Add "setupTapListener(fragment);" to the bottom of the onCreate function. This function tells the app to listen for finger taps, and on each tap, perform the following steps:
Find the 3D point at which a line from the screen intersects some AR plane (hitResult).
Create an anchor at that point on which to place the object.
Render our modelRenderable 3D model at that anchor.
Now try deploying the app again. This time, you should be able to place 3D models on planes detected by ARCore by tapping on the planes. Once placed, the models can also be dragged, resized, and rotated with finger gestures! After you've placed some virtual droids, your app should look something like this:
Part 4: Taking it a step further!
So far, the most tedious part (at least for me) was importing the 3D model of the droid and generating the andy.sfb file. It's not the most time-consuming task, but imagine wanting to import a large number of unique 3D models - that would involve a lot of clicking. Luckily, Sceneform supports a convenient alternative method of importing 3D models: simply loading them over the internet! More specifically, any 3D model stored in the glTF format (either .gltf or the binary .glb variant) can be loaded at runtime without conversion.
Let's dive right into it: create a new function renderURI in your MainActivity class:
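This uses Sceneform's RenderableSource to fetch and convert the model at runtime. Note that loading over the network also requires the android.permission.INTERNET permission in your manifest. A sketch:

```java
private void renderURI(String uri, RenderableSource.SourceType sourceType, float scale) {
    // Wrap the remote glTF/glb asset so Sceneform can convert it at runtime.
    RenderableSource source = RenderableSource.builder()
            .setSource(this, Uri.parse(uri), sourceType)
            .setScale(scale)
            .setRecenterMode(RenderableSource.RecenterMode.ROOT)
            .build();
    ModelRenderable.builder()
            .setSource(this, source)
            .setRegistryId(uri)
            .build()
            .thenAccept(renderable -> modelRenderable = renderable)
            .exceptionally(throwable -> {
                Toast.makeText(this, "Unable to load model", Toast.LENGTH_SHORT).show();
                return null;
            });
}
```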
This function takes in a link to a 3D model, the model's file format, and the factor to scale the model by, and sets your modelRenderable variable to the loaded model, just like you did for the droid model from before.
Now, create a new member variable in MainActivity:
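Following the URL pattern described at the end of this tutorial, the Duck model from the Khronos sample repository gives us:

```java
private static final String DUCK_ASSET =
        "https://github.com/KhronosGroup/glTF-Sample-Models/raw/master/2.0/Duck/glTF-Binary/Duck.glb";
```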
And in the onCreate function, comment out the renderAndy(); line and replace it with:
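The scale factor below (0.1f) is just a reasonable starting value, not the definitive one; tweak it if the duck comes out too large or too small:

```java
// renderAndy();
renderURI(DUCK_ASSET, RenderableSource.SourceType.GLB, 0.1f);
```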
This loads the .glb model from the DUCK_ASSET link for you to place in your scene. Now you can place ducks instead of the droid we imported earlier!
And that's it! You have now created an app capable of 1) detecting flat surfaces and 2) placing virtual 3D objects on these surfaces.
If you'd like, you can try importing other models besides ducks. You can find more examples of .glb models here: github.com/KhronosGroup/glTF-Sample-Models/tree/master/2.0. Just enter the "glTF-Binary" directory of each model, append the filename to the end of the url, and replace "tree" with "raw" within the url. For example, the model at github.com/KhronosGroup/glTF-Sample-Models/tree/master/2.0/Avocado/glTF-Binary would yield "https://github.com/KhronosGroup/glTF-Sample-Models/raw/master/2.0/Avocado/glTF-Binary/Avocado.glb".