Tracing the sky

INTERACTIVE PROJECTION MAPPING

5 min read
Project
Installation, Arts

Technologies
Augmented Reality, ARCore, Computer Vision, Visual Odometry, Visual SLAM, Networking, Unity

An underground hideout was transformed into a mesmerising, immersive and interactive space, using AR to influence a projection mapping installation.

Phones were tracked as they moved around the space, enabling visitors to interact with fluid visuals projected onto the room. Moving the phone through the air or swiping the screen repelled particles projected within view, influencing how they flowed across physical surfaces while visualising the user’s gestures as they were drawn over the room.

The installation was part of Distractions, a tech summit presented by Manchester International Festival of arts.


HOW THE EXPERIENCE WORKED

Visitors explored the space with phones that continually scanned the area (via the camera feed) to locate the device in the room, achieved by calculating distances between the phone and features within its view. Sending this data to the projections enabled phones to affect the flow of particles across the surfaces, with their positions in physical space used as virtual light sources to illuminate the room.

PHONES CONTROLLED VIRTUAL LIGHT SOURCES IN THE SPACE

Swiping the screens further influenced the artwork by adding turbulence to the fluid visuals moving over surfaces in view of the phone. The resulting experience was an elegant, unexpected way to interact with and influence the room, creating the feeling of being inside an ever-changing artwork under your control.

Debug built the AR tracking app running on the phones, syncing seamlessly with the 360° projection mapping systems developed by A&E.
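As a rough illustration of this kind of phone-to-projection sync (not the production Unity code), a minimal Python sketch might broadcast each tracked phone's pose to the projection system over UDP. The host address, port and JSON message format here are assumptions for the example.

    import json
    import socket
    import time

    # Hypothetical address of the projection-mapping server (assumed values).
    PROJECTION_HOST = "192.168.1.50"
    PROJECTION_PORT = 9000

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_pose(device_id, position, rotation):
        """Broadcast one tracked phone pose so the projections can treat it
        as a moving virtual light source."""
        message = {
            "id": device_id,
            "t": time.time(),
            "position": position,   # [x, y, z] in metres, room-aligned
            "rotation": rotation,   # [x, y, z, w] quaternion
        }
        sock.sendto(json.dumps(message).encode("utf-8"),
                    (PROJECTION_HOST, PROJECTION_PORT))

    # Example: a phone held one metre up, two metres into the room.
    send_pose("phone-07", [0.0, 1.0, 2.0], [0.0, 0.0, 0.0, 1.0])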

WHAT THE EXPERIENCE WAS LIKE


PROTOTYPING

THE DISCOVERY PROCESS

With the feasibility of the project very much unknown, a prototype app was made to validate the concept and core assumptions. This allowed us to understand the tech limitations and constraints, while proving the features and systems could be developed.

A ‘ghosted’, real-scale model of our studio (created from a photogrammetry scan) was overlaid on a live camera feed, constantly reorienting itself based on what was currently in view. This proved that, with our process, the device could understand where it was in physical space and quickly relocate itself when the camera was interrupted.


THE PROTOTYPE, A ‘GHOSTED’ 3D SCAN OVERLAID IN REAL-TIME

TECH BREAKDOWN
TRACKING WITH A CAMERA FEED

Unlocking the full potential of Augmented Reality tech requires knowing what’s going on under the hood…

Inside-out tracking

Visual Odometry is an area of computer vision: the process of determining the pose (position and rotation) of a camera-equipped device by analysing its stream of captured images.

VO is a core component of SLAM (Simultaneous Localisation and Mapping), a technical approach used for ‘inside-out tracking’ - determining a device's location in physical space using only its own cameras and sensors, with no external sensors or data.

Multiple systems interact in parallel to do the SLAM work: the error-prone IMU sensor (the phone's in-built accelerometer) running at a very high frequency, alongside complex mapper and tracker algorithms. These processes all need to run on separate threads - it takes a lot for a computer to understand space in a way that is simple and immediate for humans!
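To make the threading idea concrete, here is a minimal, hypothetical sketch (not the app's actual architecture) of a tracker and a mapper running on separate threads and handing data to each other through queues. The estimate_pose placeholder stands in for the real visual odometry step.

    import queue
    import threading

    frames = queue.Queue()     # camera frames from the capture loop
    keyframes = queue.Queue()  # occasional frames the tracker hands to the mapper

    def estimate_pose(frame):
        # Placeholder for the real visual-odometry step (feature detection,
        # matching, IMU fusion). Returns a fake pose for this sketch.
        return {"frame": frame, "position": [0.0, 0.0, 0.0]}

    def tracker():
        """High-rate loop: estimate a pose for every incoming frame."""
        while True:
            frame = frames.get()
            if frame is None:            # sentinel: shut down
                keyframes.put(None)
                break
            pose = estimate_pose(frame)
            if frame % 10 == 0:          # only occasionally pass work downstream
                keyframes.put(pose)

    def mapper():
        """Slower loop: fold keyframes into the shared map."""
        point_cloud = []
        while True:
            item = keyframes.get()
            if item is None:
                break
            point_cloud.append(item)     # placeholder for real map refinement

    t = threading.Thread(target=tracker)
    m = threading.Thread(target=mapper)
    t.start(); m.start()

    for i in range(100):                 # stand-in for a stream of camera frames
        frames.put(i)
    frames.put(None)
    t.join(); m.join()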


FEATURE POINTS

By checking every pixel in an image, interesting features can be spotted. A feature point is information about a distinctive, expressive area of an image. These areas could be edges, corners, highlights... essentially any stand-out reference point.

Features must be reliable so they can be referenced and tracked from one image to the next. The feature itself isn't enough to make it stand out under different viewing conditions (perspective, scale, lighting, blur), so neighbouring pixels are also inspected. If a notable number of surrounding pixels are brighter or darker than a given pixel, it's flagged as a usable feature.

This analysis creates a unique fingerprint of the point and its surroundings within the image. If the feature's print is found in another image it can then be matched, providing a basis for calculating the relative positions and rotations of features in a physical environment.
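As an illustration of feature detection and matching (using the OpenCV library as a stand-in for ARCore's internal tracker; the file names are placeholders), detecting corner-like features and matching their 'fingerprints' between two frames might look like this:

    import cv2

    # Two consecutive camera frames, loaded as greyscale images.
    frame_a = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    frame_b = cv2.imread("frame_002.png", cv2.IMREAD_GRAYSCALE)

    # ORB finds FAST-style corners (pixels whose surrounding ring is notably
    # brighter or darker) and computes a binary descriptor for each one.
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, desc_a = orb.detectAndCompute(frame_a, None)
    kp_b, desc_b = orb.detectAndCompute(frame_b, None)

    # Match descriptor 'fingerprints' between the two frames. Hamming distance
    # suits ORB's binary descriptors; cross-checking discards one-way matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

    # Each match pairs a point in frame A with the same physical feature in
    # frame B - the raw material for working out how the device moved.
    print(f"{len(matches)} matched feature points")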


AREA LEARNING

An understanding of a physical space is improved by continually detecting features and adding these to a live point cloud (3D map). As the cloud builds up, it increasingly resembles the surrounding environment's core characteristics.

The cloud's points regularly update their estimated pose relative to other points - a distance calculated as 1m may actually be 1.1m, and is corrected as knowledge is gained and this discrepancy is understood. The more feature points a cloud holds, the greater the accuracy between points, and in turn the more accurate the estimated device pose is relative to the real world.
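A toy sketch of that refinement idea (a deliberate simplification, not ARCore's actual algorithm): keep a running estimate for each map point and nudge it towards every new observation, so early measurement errors wash out over time.

    # Toy 'area learning': each map point keeps a running average of where it
    # has been observed, so repeated observations correct early estimates.
    point_cloud = {}  # feature id -> ([estimated x, y, z], observation count)

    def observe(feature_id, measured_position):
        """Fold a new position measurement for a feature into the map."""
        if feature_id not in point_cloud:
            point_cloud[feature_id] = (list(measured_position), 1)
            return
        estimate, count = point_cloud[feature_id]
        count += 1
        # Incremental mean: move the stored estimate a fraction of the way
        # towards the new measurement.
        for axis in range(3):
            estimate[axis] += (measured_position[axis] - estimate[axis]) / count
        point_cloud[feature_id] = (estimate, count)

    # A point first measured 1.0m away is later remeasured at 1.1m; its stored
    # estimate settles between the two as observations accumulate.
    observe("corner-42", [1.0, 0.0, 0.0])
    observe("corner-42", [1.1, 0.0, 0.0])
    print(point_cloud["corner-42"])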

LOCATING WITH FEATURE POINTS

After recognising persistent features across successive camera frames, SLAM monitors the movement of these features to understand their real-world positions. At the same time, it infers the device's own movement by estimating its motion from changes in the position and perspective of the features. It essentially builds a 3D map of an environment while also figuring out where the device is, and has been, within that map.
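As a rough sketch of inferring device motion from matched features (OpenCV again as a stand-in; the camera intrinsic matrix K is an assumed example, and points_a / points_b would be the matched 2D feature coordinates from two frames):

    import cv2
    import numpy as np

    # Assumed pinhole intrinsics for a phone camera (focal length, principal point).
    K = np.array([[1000.0,    0.0, 640.0],
                  [   0.0, 1000.0, 360.0],
                  [   0.0,    0.0,   1.0]])

    def estimate_motion(points_a, points_b, K):
        """Estimate relative camera rotation and translation direction between
        two frames from matched 2D feature points (Nx2 float arrays)."""
        # The essential matrix encodes the relative pose consistent with the
        # matches; RANSAC discards outlier matches.
        E, inliers = cv2.findEssentialMat(points_a, points_b, K,
                                          method=cv2.RANSAC, threshold=1.0)
        # Decompose E into a rotation R and a unit-length translation direction t.
        _, R, t, _ = cv2.recoverPose(E, points_a, points_b, K, mask=inliers)
        return R, t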

Incremental errors along the device’s movement path make perfect precision impossible. Drift gradually comes about, with the device pose and cloud points starting to appear out of place. To complicate things further, drift errors escalate when devices heat up, as their sensors are calibrated to work optimally within a specific temperature range.

A loop closure occurs when a point cloud region is revisited. When a closure takes place it mitigates the drift problem, as the matched points in the intersected region correct their drifted poses. Other cloud points also shift accordingly in a chain reaction, minimising the accumulated errors and correcting the shape of the cloud to more closely match the environment it represents.

The cloud is not only used for tracking but also for relocating. After losing tracking (lens covered, excessive motion between frames, low lighting), a device can near-instantly figure out where it is. It reorients itself, snapping back into position by matching feature points from the current frame with features cached in the point cloud.
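A hedged sketch of that relocalisation step (OpenCV once more; it assumes the current frame's 2D detections have already been matched to cached 3D map points, using the same descriptor matching described above):

    import cv2
    import numpy as np

    def relocalise(map_points_3d, image_points_2d, K):
        """Recover the camera pose from 2D-3D correspondences between the
        current frame and cached point-cloud features."""
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            np.asarray(map_points_3d, dtype=np.float32),    # Nx3 map points
            np.asarray(image_points_2d, dtype=np.float32),  # Nx2 detections
            K, None)                                        # intrinsics, no distortion
        if not ok:
            return None                                     # too few good matches
        R, _ = cv2.Rodrigues(rvec)                          # rotation vector -> matrix
        return R, tvec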


COLLABORATORS & CREDITS

PRODUCTION
artists & engineers

AR DEVELOPMENT
Debug

CREATE DEV
William Young

Art Direction
Jiayu Liu

Presented by
Future Everything

Sponsored by
Oppo