Mobile AR/MR

In this chapter:

  • Getting Started with Mobile AR/MR

  • Tracking: motion, location, inside-out and outside-in

  • Light estimation

  • Anchor

Introduction to Mixed Reality

  • What is Mixed Reality?

  • Mixed Reality in the Workspace, Mark Billinghurst

  • Introduction to Augmented Reality, Mark Billinghurst

  • Developing AR and VR experiences with Unity, Mark Billinghurst

  • AR Interaction, Mark Billinghurst

Web-based API

3D graphics libraries

WebGL

Tracking: motion, location, inside-out and outside-in

Tracking

AR relies on computer vision to see the world and recognise the objects in it. The first step in the computer vision process is getting visual information about the environment around the hardware to the brain inside the device. In immersive technologies, this process of scanning, recognising, segmenting, and analysing environmental information is called tracking. For AR, there are two ways tracking happens: outside-in tracking and inside-out tracking.

Outside-In Tracking

With outside-in tracking, cameras and sensors aren’t housed within the AR device itself. Instead, they’re mounted elsewhere in the space, typically on walls or on stands, so they have an unobstructed view of the AR device. They then feed information to the AR device directly or through a computer.

Inside-Out Tracking

With inside-out tracking, cameras and sensors are built right into the body of the device. Smartphones are the most obvious example of this type of tracking: they have cameras for seeing and processors for thinking in one wireless, battery-powered, portable device. On the AR headset side, Microsoft’s HoloLens is another device that uses inside-out tracking.

  • Tracking in AR

  • Environmental understanding: feature points and plane-finding

  • Motion tracking: accelerometer, gyroscope and camera

  • Location-based AR: magnetometer, GPS
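As a minimal sketch of how the motion sensors above are read on Android (the SensorManager calls are the standard Android API; the class name is illustrative):

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Illustrative sketch: subscribing to the accelerometer and gyroscope.
public class MotionTracker implements SensorEventListener {

    private final SensorManager sensorManager;

    public MotionTracker(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
    }

    public void start() {
        Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        Sensor gyroscope = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
        // SENSOR_DELAY_GAME (~50 Hz) is a common rate for AR-style motion tracking.
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME);
        sensorManager.registerListener(this, gyroscope, SensorManager.SENSOR_DELAY_GAME);
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            // Acceleration (including gravity) in m/s^2 along x, y, z.
            float ax = event.values[0], ay = event.values[1], az = event.values[2];
        } else if (event.sensor.getType() == Sensor.TYPE_GYROSCOPE) {
            // Angular velocity in rad/s around x, y, z.
            float wx = event.values[0], wy = event.values[1], wz = event.values[2];
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```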

Simultaneous Localisation and Mapping (SLAM)

This is the process by which technologies like robots and smartphones analyse, understand, and orient themselves to the physical world. SLAM requires data-collecting hardware such as cameras, depth sensors, light sensors, gyroscopes, and accelerometers.
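To make the “localisation” half concrete, here is a toy Java sketch (not a real SLAM system, and not ARCore’s algorithm) that dead-reckons a 2D pose from odometry inputs. Pure odometry like this drifts over time; SLAM corrects the drift by matching what the camera sees against the map it builds, which is why both halves run simultaneously.

```java
// Toy illustration: integrating a 2D pose from a forward-velocity
// estimate and a gyroscope yaw rate, one time step at a time.
public class DeadReckoning2D {
    private double x = 0.0;       // metres
    private double y = 0.0;       // metres
    private double heading = 0.0; // radians

    // Integrate one time step of length dt seconds.
    public void update(double forwardVelocity, double yawRate, double dt) {
        heading += yawRate * dt;
        x += forwardVelocity * Math.cos(heading) * dt;
        y += forwardVelocity * Math.sin(heading) * dt;
    }

    public double[] pose() {
        return new double[] { x, y, heading };
    }
}
```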

Concurrent Odometry and Mapping (COM)

COM tells a smartphone where it’s located in space in relation to the world around it. It does this by capturing visually distinct features in your environment, called feature points. A feature point can be the edge of a chair, a light switch on a wall, the corner of a rug, or anything else that is likely to stay visible and consistently placed in your environment. Any high-contrast visual feature can serve as a feature point.
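Since the linked docs describe ARCore, here is a hedged sketch of reading the feature points ARCore is currently tracking, assuming an already-configured and resumed Session that is updated once per rendered frame (the class name is illustrative; the ARCore calls are from its Java API):

```java
import com.google.ar.core.Frame;
import com.google.ar.core.PointCloud;
import com.google.ar.core.Session;
import com.google.ar.core.exceptions.CameraNotAvailableException;

import java.nio.FloatBuffer;

public class FeaturePointReader {

    // Called once per rendered frame with a resumed ARCore Session.
    public void readFeaturePoints(Session session) throws CameraNotAvailableException {
        Frame frame = session.update();

        // acquirePointCloud() hands out the currently tracked feature points;
        // try-with-resources releases the buffer back to ARCore.
        try (PointCloud pointCloud = frame.acquirePointCloud()) {
            FloatBuffer points = pointCloud.getPoints();
            // Points are packed as (x, y, z, confidence) in world space.
            while (points.remaining() >= 4) {
                float x = points.get();
                float y = points.get();
                float z = points.get();
                float confidence = points.get();
                // e.g. render the point, or use it for placement heuristics.
            }
        }
    }
}
```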

Light estimation

Light Estimation
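As a sketch of how a light estimate is consumed per frame in ARCore’s Java API, assuming the session is configured with ambient-intensity light estimation (the class name is illustrative):

```java
import com.google.ar.core.Frame;
import com.google.ar.core.LightEstimate;

public class LightEstimator {

    // Reads the ambient light estimate for the current frame.
    public void applyLightEstimate(Frame frame) {
        LightEstimate estimate = frame.getLightEstimate();
        if (estimate.getState() != LightEstimate.State.VALID) {
            return; // No reliable estimate this frame.
        }

        // Average brightness of the camera image (roughly 0.0 to 1.0).
        float pixelIntensity = estimate.getPixelIntensity();

        // RGB colour-correction scales, plus the pixel intensity in the
        // last component, for tinting virtual content to match the scene.
        float[] colorCorrection = new float[4];
        estimate.getColorCorrection(colorCorrection, 0);

        // Feed pixelIntensity / colorCorrection into the renderer's shader
        // uniforms so virtual objects pick up the real-world lighting.
    }
}
```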

Anchor

Anchors
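A sketch of creating an anchor with ARCore’s Java API, here at the point where a screen tap hits a detected plane (the class and method names are illustrative):

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import com.google.ar.core.Plane;
import com.google.ar.core.Trackable;

import java.util.List;

public class AnchorPlacer {

    // Creates an anchor where a screen tap (in pixels) hits a detected plane.
    public Anchor anchorFromTap(Frame frame, float tapX, float tapY) {
        List<HitResult> hits = frame.hitTest(tapX, tapY);
        for (HitResult hit : hits) {
            Trackable trackable = hit.getTrackable();
            if (trackable instanceof Plane
                    && ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
                // ARCore keeps updating the anchor's pose as its map of the
                // world improves, so content attached to it stays put.
                return hit.createAnchor();
            }
        }
        return null; // Tap did not land on a detected plane.
    }
}
```

When an anchor is no longer needed, calling detach() on it lets ARCore stop tracking it.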

Spatial Mapping

Microsoft HoloLens: Spatial Mapping
