What sensors are built into Glass, and what features do they enable?

Google Glass units can perform a vast array of audio-visual processing functions in real time. Each unit contains a rich array of sensors comparable to a modern smartphone or tablet, with a twist: they sit on your face. This enables some advantages unique to smartglasses:

  • direct ocular input,
  • hands-free operation, and
  • awareness of the user’s head orientation at all times.

Here’s a list of the sensor inputs that Glass apps can tap into, and the features those sensors might enable once clever developers start writing software:

sensor: twin microphones
gets: user voice + environmental audio

sound-enabled features:

  • voice command input
  • voice memo recording
  • speech-to-text via voice recognition, aka auto-transcription
  • raw ambient sound recording
  • music / song recognition à la Shazam
  • speaker localization based on stereo differential analysis
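That last one deserves a sketch. With two mics a known distance apart, the time difference of arrival (TDOA) of a sound between them gives a bearing to the speaker. This is not a Glass API, just a back-of-envelope Python illustration of the far-field geometry; the mic spacing and timing values are made up for the example:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def bearing_from_tdoa(delta_t: float, mic_spacing: float) -> float:
    """Estimate a sound source's bearing (degrees off-center) from the
    time difference of arrival between two microphones.

    delta_t: arrival-time difference in seconds
             (positive = sound reached the right mic first)
    mic_spacing: distance between the two mics, in meters
    """
    # Far-field approximation: extra path length = c * delta_t,
    # so sin(angle) = c * delta_t / spacing
    ratio = SPEED_OF_SOUND * delta_t / mic_spacing
    ratio = max(-1.0, min(1.0, ratio))  # clamp away numerical noise
    return math.degrees(math.asin(ratio))
```

A source dead ahead arrives at both mics simultaneously (bearing 0°); a delay of half the acoustic travel time across the array puts it about 30° off-center. Real implementations cross-correlate the two audio streams to measure delta_t, but the geometry is this simple.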


sensors: GPS, gyroscope, accelerometers, compass
gets: user location, user head position, orientation, and movement

orientation-enabled features:

  • these give apps geospatial, orientation, and motion awareness
    (e.g. the user is moving 3 mph SE, looking to the right, head tilted 30 degrees above the horizon), which partially enables…
  • augmented reality overlays (really? see explanation here)
    (spatial data overlaid with correct alignment atop natural reality)
  • proximity alerts
    (your friends are near, you are 500 feet from destination)
  • and advanced navigation assistance / wayfinding
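A proximity alert like “you are 500 feet from destination” reduces to a great-circle distance check against the GPS fix. Here’s a hypothetical sketch in Python using the standard haversine formula; the threshold and coordinates are invented for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points
    (haversine formula, spherical-Earth approximation)."""
    R = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def proximity_alert(user, target, threshold_m=152.0):
    """True when the user is within threshold_m of the target
    (152 m is roughly 500 feet)."""
    return haversine_m(user[0], user[1], target[0], target[1]) <= threshold_m
```

An app would re-run the check on each GPS update and fire a notification card the first time it flips to True.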


sensor: video + still camera
gets: continuous first-person PoV visual feed

camera-enabled features:

  • first-person PoV snapshots
  • timelapse visual memory augmentation
  • video recording
  • (future) object recognition / face recognition
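Timelapse visual memory augmentation is mostly a sampling problem: from a continuous PoV feed, keep one frame every N seconds. A minimal sketch, assuming only a frame count and source frame rate (these names and numbers are mine, not any Glass API):

```python
def timelapse_indices(total_frames: int, source_fps: float, interval_s: float):
    """Indices of the frames to keep for a timelapse: one frame for
    every interval_s seconds of source footage."""
    step = max(1, round(source_fps * interval_s))  # frames between keepers
    return list(range(0, total_frames, step))
```

Keeping one frame per second of 30 fps footage compresses an hour of wearing Glass into a two-minute reel — the “what did I see today?” use case.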


sensor: touch pad
gets: fingertip tap input, scrolling and mini-gestures

touchpad-enabled features:

  • explicit input and menu selection
  • operating camera functions (e.g. focus / exposure / zoom) during recording while aiming the camera with your head
  • fruit ninja
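The touchpad gives you taps and small swipes along the temple. A toy classifier shows the idea: distinguish a tap from a forward/back swipe using displacement and duration. The thresholds and event shape here are invented for illustration, not Glass’s actual gesture engine:

```python
def classify_gesture(dx_px: float, duration_ms: float,
                     tap_max_px: float = 10, tap_max_ms: float = 200) -> str:
    """Classify a one-finger touchpad event from its horizontal
    displacement (pixels, positive = toward the front of the frame)
    and duration. Small, quick contact = tap; otherwise a swipe."""
    if abs(dx_px) <= tap_max_px and duration_ms <= tap_max_ms:
        return "tap"
    return "swipe_forward" if dx_px > 0 else "swipe_back"
```

Map tap to “select,” swipe forward/back to menu scrolling, and you have the core Glass interaction loop — or, indeed, one axis of Fruit Ninja.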


3 thoughts on “What sensors are built into Glass, and what features do they enable?”

  1. ScottMGS

    re: “augmented reality overlays
    (spatial data overlaid with correct alignment atop natural reality)”

    The view doesn’t take up the whole field of vision so it cannot do “correct alignment atop natural reality”.

    1. acroyogi Post author

      I partially agree, Scott. See Is Glass a true Augmented Reality experience?. That said, there are viable applications where a user can take a snapshot, post it to recognition servers, and receive an augmented image in response, à la Google Goggles, but purpose-specific. For instance, I’ve prototyped a peak identification app for recreational hikers, called Pathfinder for Glass.

  2. Russell de Silva

    Do you know if Glass will be able to process input from peripheral devices such as a keyboard and mouse?
    It seems to me that when seated, especially in an office environment, there may be many applications where traditional I/O could greatly widen Glass’s applicability.
    Additionally, will we be able to create executables to run on Glass, or will we be restricted to what I have heard referred to as ‘Cards’?

    Keep up the good work. Glass promises almost as much as the self-driving car; it’s going to be fun to watch its impact on people’s lives both at home and at the office.