
SmartCue — spatial UI for smart glasses

Client JM Designs (founding designer engagement)
Year 2025 — present
Role Founding Designer
Duration Ongoing
SPATIAL · VOICE-FIRST · AMBIENT

Problem

SmartCue is a 0 → 1 product designed for a form factor the design discipline hasn’t fully figured out yet. Smart glasses are not phones, not VR headsets, and not watches. The screen sits in peripheral vision, the input is mostly voice, the feedback is mostly ambient, and the user is moving through the world while interacting with the device.

The brief was to figure out what a daily-wearable smart-glasses interaction model actually looks like — not a port of a phone UI, not a downscaled HoloLens, not a Stories feed in your eyeline.

Approach

The design problem decomposes into three layers that have to be designed together: the spatial layer (what gets rendered, where, in the field of view), the temporal layer (when interfaces appear, how long they persist, when they fade), and the input layer (voice, gesture, touch on the temple, ambient context).
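
To make that decomposition concrete, here is a minimal TypeScript sketch of the three layers as plain types. Every name here is illustrative, not SmartCue's actual code; it just shows how each UI element is a point in all three layers at once.

```typescript
// Illustrative types only; none of these names come from SmartCue.

/** Spatial layer: what gets rendered, and where in the field of view. */
interface SpatialPlacement {
  anchor: "horizon" | "periphery-left" | "periphery-right" | "center";
  maxAngularSize: number; // degrees of visual angle the element may occupy
}

/** Temporal layer: when an element appears, how long it persists, when it fades. */
interface TemporalPolicy {
  appearsOn: "summon" | "event" | "ambient";
  halfLifeMs: number | null; // null = persistent (the ambient horizon only)
}

/** Input layer: which channels can address the element. */
type InputChannel = "voice" | "gesture" | "temple-touch" | "context";

/** A UI element is a point in all three layers at once. */
interface ElementSpec {
  spatial: SpatialPlacement;
  temporal: TemporalPolicy;
  inputs: InputChannel[];
}
```

Designing the layers together means no field of that spec gets decided in isolation; a placement choice constrains the persistence choice, and both constrain which inputs make sense.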

Spatial

The single biggest decision was the rest state. Most spatial UIs default to “nothing is visible.” We landed on a different default — a small, ambient horizon line at the bottom of the field of view that signals presence without claiming attention. The system is on. You’re not looking at it. That ambient line is the entire interface most of the time.
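
As a sketch of that rest state, assuming hypothetical names: the default render list contains exactly one surface, and anything else has to be summoned onto it.

```typescript
// Hypothetical model of the rest state; names are assumptions, not SmartCue's API.
interface Surface {
  id: string;
  anchor: "horizon" | "center" | "periphery";
  persistent: boolean; // true only for the ambient horizon line
}

// The entire interface, most of the time: one thin line, low in the field of view.
const REST_STATE: Surface[] = [
  { id: "ambient-horizon", anchor: "horizon", persistent: true },
];

// Everything else joins the stack non-persistent and fades on the
// temporal rules described in the next section.
function summon(stack: Surface[], next: Surface): Surface[] {
  return [...stack, { ...next, persistent: false }];
}
```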

Temporal

A wearable that you forget you’re wearing is the goal. Every UI element has a half-life. A notification fades. A query result fades. The only persistent surface is the ambient horizon. Persistence is earned by the user explicitly summoning a layer; it is not the default.
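
A minimal sketch of that half-life rule, with assumed names and numbers: opacity decays exponentially from the moment an element is summoned, and only explicitly pinned elements escape the decay.

```typescript
// Assumed names and constants; a sketch of the half-life rule, not SmartCue's code.
interface TimedElement {
  summonedAt: number; // ms timestamp when the element appeared
  halfLifeMs: number; // e.g. 4000 for a notification, 8000 for a query result
  pinned: boolean;    // persistence is earned by explicit summoning, never default
}

/** Opacity in [0, 1] at time `now`; halves every halfLifeMs unless pinned. */
function opacityAt(el: TimedElement, now: number): number {
  if (el.pinned) return 1;
  return Math.pow(0.5, (now - el.summonedAt) / el.halfLifeMs);
}

/** Elements that have faded below a visibility floor leave the render list. */
function prune(elements: TimedElement[], now: number): TimedElement[] {
  return elements.filter((el) => opacityAt(el, now) > 0.05);
}
```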

Voice-first, gesture-rescued

Voice is the primary input. The hard part is not voice recognition — it’s voice interface design. Early work focused heavily on the conversational grammar, the failure recovery patterns, and the gesture fallbacks for moments when voice is socially or environmentally unavailable.
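
One way to picture the gesture-rescue logic, under assumed context flags: voice stays primary until the environment or the social setting takes it away, and the fallback channel depends on what the wearer's hands are doing.

```typescript
// Assumed flags and channel names; a sketch of the rescue logic, not SmartCue's model.
type Channel = "voice" | "gesture" | "temple-touch";

interface WearerContext {
  sociallyQuiet: boolean;    // a meeting, a cinema: speaking aloud is unavailable
  environmentNoisy: boolean; // recognition is likely to fail anyway
  handsBusy: boolean;        // carrying something, cycling
}

/** Voice-first; gesture-rescued; a temple tap as the quiet last resort. */
function primaryChannel(ctx: WearerContext): Channel {
  if (!ctx.sociallyQuiet && !ctx.environmentNoisy) return "voice";
  return ctx.handsBusy ? "temple-touch" : "gesture";
}
```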

Where we are

The product is in active development. The interaction model has stabilized; the visual layer is still iterating with engineering against what the underlying display hardware can actually render at field-of-view scale.

The screen you don’t quite look at is harder to design than the screen you stare at. There are fewer places to hide.

— Working note, 2026

Outcome

0 → 1 · Founding-designer engagement. End-to-end product design from interaction model through launch.
Voice-first · Ambient by default. Rest state is presence, not interface — persistence is earned.
Spatial · Field-of-view UI. Designed for peripheral attention while the wearer moves through the world.

Reflection

The lesson I keep relearning on this project is that you can’t design a UI for a form factor whose constraints you don’t yet understand. So a lot of the design work, at least early on, is actually constraint-discovery work — building enough of the interaction model to find out what it isn’t.

That’s a different kind of designer-engineer collaboration than the one I’m used to. It’s slower, it’s looser, and it produces better artifacts in the end.