AR Magic
Three layers work together: a native Swift AR engine on the iPhone, a React Native orchestration layer, and a remote server managing game state. The user interacts through AR, and data flows between all three.
Before anyone plays, we walk through the venue with a phone. The camera studies the walls, tables, corners and edges, building a 3D understanding of the space by generating a point cloud and ultimately memorising thousands of visual features.
Outcome: An ARWorldMap, a digital representation of the room.
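Capturing and persisting that scan might look like the sketch below, assuming an active ARSession and a hypothetical file URL; the function name is illustrative.

```swift
import ARKit

// Sketch: capture the current world map and archive it to disk.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("Mapping not ready: \(error?.localizedDescription ?? "unknown")")
            return
        }
        do {
            // ARWorldMap conforms to NSSecureCoding, so it can be archived.
            let data = try NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true)
            try data.write(to: url, options: .atomic)
        } catch {
            print("Failed to save world map: \(error)")
        }
    }
}
```

ARKit only returns a usable map once it has seen enough of the space, which is why the venue walk-through happens before anyone plays.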
Think of the React Native JavaScript layer as the brain and the Swift layer as the eyes and ears.
Outcome: Bridge established, connecting the brain to the eyes and ears.
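The Swift side of that bridge is a native module React Native can call into. A minimal sketch, assuming the React Native framework is linked; the module and method names are illustrative, and in a real project the class is also registered with the bridge via an Objective-C macro.

```swift
import Foundation
import React

// Sketch: a native module exposing the AR engine to JavaScript.
@objc(ARGameModule)
class ARGameModule: NSObject, RCTBridgeModule {
    static func moduleName() -> String! { "ARGameModule" }

    // AR session setup touches UIKit, so require the main queue.
    static func requiresMainQueueSetup() -> Bool { true }

    // Callable from JavaScript; resolves a promise back across the bridge.
    @objc func startSession(_ resolve: @escaping RCTPromiseResolveBlock,
                            rejecter reject: @escaping RCTPromiseRejectBlock) {
        // Start the ARSession here, then report success to the brain.
        resolve(true)
    }
}
```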
Now the phone plays a game of spot the difference in reverse. It compares what the camera sees against the stored map, hunting for matching features: corners, edges, textures, patterns.
Outcome: iPhone's live view aligned with the original mapping scan.
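Relocalisation is driven by seeding the session with the saved map. A sketch, assuming the ARWorldMap archived during the venue scan is readable at `mapURL`:

```swift
import ARKit

// Sketch: load the archived map and ask ARKit to relocalise against it.
func relocalise(session: ARSession, mapURL: URL) throws {
    let data = try Data(contentsOf: mapURL)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else {
        throw NSError(domain: "ARGame", code: 1,
                      userInfo: [NSLocalizedDescriptionKey: "Corrupt world map"])
    }
    let config = ARWorldTrackingConfiguration()
    // With an initial world map set, ARKit matches live camera features
    // against the scan until the two coordinate systems line up.
    config.initialWorldMap = map
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```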
JavaScript sends 17 positions across the bridge to Swift. Each position is an XYZ coordinate in the room's coordinate space.
Outcome: Rendered capsules in the room, only visible through phone.
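On the Swift side, each coordinate becomes a SceneKit node. A sketch, where `positions` stands in for the 17 coordinates received from JavaScript:

```swift
import SceneKit

// Sketch: render one capsule per position sent over the bridge.
func placeCapsules(in scene: SCNScene, positions: [[Double]]) {
    for position in positions {
        let capsule = SCNCapsule(capRadius: 0.05, height: 0.2) // metres
        let node = SCNNode(geometry: capsule)
        node.position = SCNVector3(Float(position[0]),
                                   Float(position[1]),
                                   Float(position[2]))
        // Tag the node so a later hit test can report which capsule was tapped.
        node.name = UUID().uuidString
        scene.rootNode.addChildNode(node)
    }
}
```

Because the session has relocalised against the original scan, these coordinates land in the same physical spots they were authored in.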
The player taps a capsule. SceneKit performs a hit test, casting a ray from the tap point into 3D space, and identifies which capsule was hit by its UUID.
Outcome: Player tap reveals a scroll containing fun facts and a letter fragment.
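The tap handling above can be sketched as follows, assuming an SCNView and the UUID stored in each node's name when it was placed:

```swift
import SceneKit
import UIKit

// Sketch: resolve a tap gesture to the capsule it hit.
func handleTap(_ gesture: UITapGestureRecognizer, in sceneView: SCNView) {
    let point = gesture.location(in: sceneView)
    // hitTest casts a ray from the tap point into the scene and returns
    // the nodes it intersects, nearest first.
    if let hit = sceneView.hitTest(point, options: nil).first,
       let uuid = hit.node.name {
        print("Capsule tapped: \(uuid)")
        // Notify JavaScript over the bridge so it can reveal the scroll.
    }
}
```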