With Google Glass leading the way, hands-free computing has swooped onto the scene in a big way this spring. In the next two years we’ll see how it matures—but we have a few ideas in the meantime.

Right now, there’s a lot of speculation around the potential uses (and abuses) of this technology. And discussions tend to center on situations where augmented reality is used in public…

But that’s not the focus of the technology.

The real intent is to provide a constant stream of clear text and photographic information that is context-specific and doesn’t have to be explicitly “pulled” by the user. Because of this, the greatest opportunities for hands-free computing are within the enterprise. Personal and public use will be secondary.

This means that Google Glass and its competitors will provide the most immediate benefits to workers who need data while primarily interacting with the physical world. Instead of customer-facing roles, key users will be operations and supply workers, people who maintain expensive equipment in safety-conscious environments, and hands-on professionals in emergency situations.

In these professions, hands-free computing is the first technology that can augment existing processes both smoothly and naturally.

Currently, Glass’ out-of-the-box functionality works only in the cloud, though Google encourages Glass developers to tinker under the Android hood of the device. Other brands’ devices are slated to ship with native Android and will enable processing directly on the device. This will allow new types of input and interaction, such as custom voice controls, gestural inputs, and machine-to-machine communication.
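To make the cloud-only model concrete: with Google’s Mirror API, server-side code pushes “timeline cards” to the device over REST rather than running anything on the headset. A minimal sketch in Python (the card text and alert scenario are hypothetical; the endpoint and `READ_ALOUD` menu action come from Google’s Mirror API documentation):

```python
import json

def build_timeline_card(text, speakable=True):
    """Build the JSON body for a Mirror API timeline card.

    The card is plain text (or HTML) plus optional menu items;
    nothing executes on the device itself.
    """
    card = {"text": text}
    if speakable:
        # Lets the wearer have the card read aloud -- useful when
        # hands and eyes are busy.
        card["menuItems"] = [{"action": "READ_ALOUD"}]
    return json.dumps(card)

# In a real deployment, this body is POSTed with an OAuth bearer token:
#   POST https://www.googleapis.com/mirror/v1/timeline
#   Authorization: Bearer <token>
body = build_timeline_card("Pump 4 pressure above threshold")
```

Note how the information is pushed to the wearer by the server—exactly the “doesn’t have to be explicitly pulled” pattern described above. A native-Android device, by contrast, could generate and act on this kind of alert locally.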

What does this mean for businesses? More than ever, tech strategy must be nimble enough to allow companies to pivot when confronted not simply with new features and capabilities, but with entirely new methods of computing, working, and living. Purchasing decisions that are 12-18 months out and major IT infrastructure projects currently in-flight should take this new product category into account.

Luckily, Glass uses a hybrid of web and Android technologies, so many existing back-end systems may already support it. But the most value will likely come from the riskiest and most mission-critical use cases. So, the major challenge for organizations will be designing new workflows without compromising safety or certifications.

And of course, waiting for the devices to arrive.

Use cases to try right away, and those you may want to wait on

Want some more food for thought? Check out these usage scenarios and our thoughts about when you might be able to take a stab at them on hands-free devices.

What you can try now

  • Inherently mobile, non-desk-based workers who have their hands full
  • Task-directed workflow
  • Video proxy presence for first-person views
  • Two-way messages containing simple text or photographs
  • Quick reference to compare real world conditions against digital images
  • Automated or contextually intelligent reminders and alerts
  • Photographic or location-based audit trails
  • Location-aware data displayed to the user
  • Lightweight database queries
  • Phone calls

Tasks that should become feasible in the near future

  • Data entry or editing
  • Client-side processing directly on the device
  • Real-time data interaction
  • Image recognition or OCR
  • Bar code or QR code reading
  • Safety-goggle-required environments
  • Strenuous environments requiring sturdier tech equipment
  • Continuous use for full shifts
  • Static, lightweight augmented reality
  • Environments in which wifi or LTE is unavailable
  • Screen sharing
  • Lengthy video capture
  • Extremely noisy environments
  • Workers who wear gloves
  • Dynamic data requiring interaction
  • Heavy image processing

Concerns with hands-free computing

  • Safety of distracted workers
  • Eroding perceptions of empathy and trust in customer-facing use
  • Personal security and privacy in augmented reality contexts (though Google has explicitly banned using personal information in augmented reality)
  • Information overload
  • Legality of driving motorized vehicles during use
  • Legality of automatic photographic audit trails in employee off-time use of employer-owned devices
  • Wearing while on others’ private property, particularly that of customers or suppliers

Problems not solved by hands-free computing

  • Real-time, immersive augmented reality
  • Indoor positioning
  • Dimensioning

Concerns with Google Glass

  • Corporate use, or even lending out Explorer devices, currently violates Google’s terms of service
  • Sensitive corporate data would be exposed to Google until it offers secure enterprise cloud services or fully client-side processing
  • Carrying and breakage; the device doesn’t fold
  • Inefficiency caused by misalignment of the display arm