
Movement is our body’s natural language, and the way we interact with the physical world feels seamless and intuitive. The digital side of the equation still feels rough in comparison, like a language we can barely speak, one that limits how we express ourselves.
Tapping on glass, swiping up and down with one finger or two, moving a mouse around to click on a square. Each of these input modalities unlocked a better way to interact with digital tools and experiences, but something is always lost in translation.
Aura is a touchless spatial interface built with Spatial Vision and AI to speak your body's language. It’s so natural, you already know how to use it.
Aura requires no new hardware, calibration or learning curve, and it adapts to differences in body anatomy, physiology and movement style. Using frontier spatial intelligence, Aura passively senses your body’s ambient context, controlling and adapting experiences based on your gestures and movements, and on changes in your proximity, position, posture and activity.
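To make the idea of ambient context concrete, here is a minimal sketch of how an application might consume this kind of signal. Aura’s actual API is not described in this piece, so every name below (AuraSession, ContextFrame, the posture and activity labels) is an illustrative assumption, not the real interface.

```typescript
// Hypothetical sketch only: all names here are illustrative assumptions,
// not Aura's actual API.

type Posture = "sitting" | "standing" | "leaning" | "moving";

interface ContextFrame {
  gesture?: string;        // e.g. "pinch" or "swipe-up", if one was detected
  proximityMeters: number; // distance from the sensing device
  posture: Posture;
  activity: string;        // coarse label such as "reading" or "walking"
}

// An app subscribes to a stream of context frames and adapts accordingly.
class AuraSession {
  private listeners: Array<(frame: ContextFrame) => void> = [];

  onContext(listener: (frame: ContextFrame) => void): void {
    this.listeners.push(listener);
  }

  // In a real system this would be fed by the vision pipeline;
  // here we simply fan out whatever frame we are handed.
  emit(frame: ContextFrame): void {
    for (const listener of this.listeners) listener(frame);
  }
}

// Example: enlarge the UI as the user moves away, pause when they walk off.
const session = new AuraSession();
session.onContext((frame) => {
  if (frame.proximityMeters > 2.5) {
    console.log("User is far away: enlarging text and controls");
  }
  if (frame.activity === "walking") {
    console.log("User walked off: pausing playback");
  }
});

session.emit({ proximityMeters: 3.0, posture: "standing", activity: "walking" });
```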

Designed to work across spatial experiences on the devices you use every day, Aura is powered by photons rather than atoms and lives in the air around you, so your interactions become touchless and weightless. It’s invisible to the naked eye.
Aura is a first-principles rethink of how we connect with spatial technology, giving you many of the benefits of wearables without the physical limitations.
Navigate with real-time gestures, full-body poses and movements, reinforced by responsive audio and visual cues. Interaction feels effortless: no swipes or clicks required. Some of the current functionality includes:
When gestures open menus, the interface can be simplified: elements appear only when needed, so the screen stays clear and focused on what matters most.
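A small sketch of that show-only-when-summoned behavior follows. The gesture names and Menu shape are hypothetical, chosen only to illustrate the progressive-disclosure pattern described above.

```typescript
// Hypothetical sketch of "menus appear only when summoned"; the gesture
// names and Menu type are illustrative, not Aura's actual API.

type Gesture = "open-palm" | "pinch" | "point";

interface Menu {
  visible: boolean;
  items: string[];
}

const menu: Menu = { visible: false, items: ["Play", "Pause", "Next"] };

// The interface stays minimal until a gesture explicitly requests more.
function handleGesture(gesture: Gesture): void {
  switch (gesture) {
    case "open-palm":
      menu.visible = true;  // summon the menu on demand
      break;
    case "pinch":
      menu.visible = false; // dismiss it, returning to a clear view
      break;
    case "point":
      // selection logic would go here
      break;
  }
  console.log(menu.visible ? `Menu shown: ${menu.items.join(", ")}` : "Screen clear");
}

handleGesture("open-palm"); // Menu shown: Play, Pause, Next
handleGesture("pinch");     // Screen clear
```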

