Without a doubt, the biggest news of the week was Google’s preview of its “Project Glass“, an augmented reality monocular viewer that, in the words of the Google bloggers, “helps you explore and share your world, putting you back in the moment“.
The focus in this particular incarnation appears to be more of a personal assistant and phone accessory. All the actions shown in the video teaser could be performed on a smartphone, except that the phone is in your pocket and performing these actions would typically require taking it out. In that respect, the preview is true to the stated goal of getting technology out of the way, though there is a bit of a contradiction in that to get technology out of the way you’d have to wear a piece of technology on your glasses.
The user interface shown is simple and non-intrusive, and the type of information presented addresses the question ‘what kind of information would I like to have in any particular context?’. No use cases are shown with true image augmentation, such as “how would a sofa look in this particular place in the room?”. The goggle concept shown is monocular, so there is no 3D/stereo vision in this particular design, though perhaps stereo is not needed for the type of user interface actions performed, not to mention that having dual displays would likely double the cost.
The user interface at the moment appears to consist of head tracking and voice recognition. No launch date has been set, nor has a formal commitment to launch been publicly made.
This preview from a tier 1 player, especially one that has now acquired Motorola Mobility, is likely to accelerate the market. What will be the response from Microsoft, Apple and maybe Samsung? Is this a breakthrough concept like an iPad on your head, or is this a nice-to-have accessory like a Bluetooth headset?
It is clear to us at Sensics that Google – and companies like it – could benefit greatly from incorporating our SmartGoggles technology into such designs, and we are excited about the opportunity to make a contribution to the field.