The key architectural components, shown in the accompanying diagram, are described below, grouped by function.
The natural interaction subsystem is responsible for providing both raw and processed data to the application. Data includes:
- Head tracking information, including data processed through a predictive tracking algorithm. Raw data is typically read from a 6-sensor or 9-sensor tracker, and different tracker architectures are supported. Data includes angular position (yaw/pitch/roll), velocity and acceleration, as well as linear (x/y/z) position, velocity and acceleration.
- Hand, object and marker tracking information. High-speed hardware analyzes signals coming from multiple cameras to detect and locate hands as viewed from the user’s perspective. The exact instantaneous tracking area is determined by the number and placement of the cameras, as well as their particular specifications. Since the tracking volume moves with the user, the tracking area is not confined to a fixed space. The raw data includes x/y/z location and orientation as seen from the user’s viewpoint. This data is also processed through predictive tracking algorithms.
- Combined data: by combining head and hand tracking information, hand and object locations can also be provided in the world coordinate system.
- Gesture analysis. A gesture engine combines head and hand data to detect and report gestures that drive the user interface.
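The two ideas above — predictive tracking and combining head and hand data into world coordinates — can be sketched in a few lines. This is an illustrative sketch only: the constant-acceleration predictor and the Z-Y-X yaw/pitch/roll convention are assumptions, not the actual SmartGoggles algorithms.

```python
import numpy as np

def predict_state(position, velocity, acceleration, dt):
    """Constant-acceleration extrapolation: estimate where a tracked
    point (head or hand) will be `dt` seconds from now.  Predictive
    tracking of this kind helps hide sensor-to-display latency; real
    trackers typically add filtering on top of it."""
    p = np.asarray(position, dtype=float)
    v = np.asarray(velocity, dtype=float)
    a = np.asarray(acceleration, dtype=float)
    return p + v * dt + 0.5 * a * dt ** 2

def yaw_pitch_roll_to_matrix(yaw, pitch, roll):
    """Rotation matrix from yaw/pitch/roll in radians, Z-Y-X order
    (an assumed convention; the actual tracker's may differ)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

def hand_to_world(hand_in_head, head_position, head_orientation):
    """Combine head and hand tracking: transform a hand position
    expressed in the head (user-viewpoint) frame into the world
    coordinate system using the head pose."""
    r = yaw_pitch_roll_to_matrix(*head_orientation)
    return r @ np.asarray(hand_in_head, dtype=float) + np.asarray(
        head_position, dtype=float)
```

For example, a hand seen 1 m straight ahead of a head that is yawed 90° to the left ends up 1 m along the world y-axis.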
The video management subsystem is responsible for real-time video processing that is required for optimal display on the specific micro-displays and optics selected for the SmartGoggles. This includes:
- Detection of video resolutions.
- Scaling of incoming video to fit displays.
- Processing of side-by-side video signals to make them suitable for stereo viewing.
- Color matching between displays.
- Contrast, brightness and gamma controls.
- Distortion correction to allow lower-cost optics to be used while still producing high-quality results.
- Electrical driving of the actual micro-displays.
- Noise reduction.
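To make the distortion-correction point concrete: pre-warping the image in software so that the lens's own distortion cancels it out is commonly done with a radial polynomial model. The model and the coefficients below are illustrative assumptions, not the SmartGoggles' actual calibration.

```python
def predistort(x, y, k1=0.22, k2=0.24):
    """Radial pre-distortion of a normalized image coordinate (origin
    at the lens center).  Scaling each point by a polynomial in its
    squared radius applies the inverse of a lens's radial distortion,
    which is what lets lower-cost optics still yield a straight image.
    k1 and k2 are placeholder coefficients for illustration."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

Points at the center are untouched, while points near the edge are pushed progressively outward, compensating for the barrel distortion of a simple lens.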
The application processor executes the following:
- An operating system. In the reference design, this is an Android 4.0 build, but other versions of Android or other operating systems such as Windows Embedded are supported.
- libSensics – an application library that manages the various hardware components and provides application developers with full access to SmartGoggle capabilities.
- If required, application-specific software such as voice or face recognition.
- The actual application: a game, a simulator, a media player, etc.
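A typical application would sit on top of the application library, reading tracker data each frame and rendering from the tracked viewpoint. The sketch below shows that layering only; every class and method name in it is a hypothetical stand-in, not the real libSensics API.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Head orientation in degrees (hypothetical data type)."""
    yaw: float
    pitch: float
    roll: float

class MockHeadTracker:
    """Stand-in for the tracker interface such a library might expose;
    a real implementation would talk to the tracking hardware."""
    def __init__(self):
        self._pose = Pose(0.0, 0.0, 0.0)

    def read(self) -> Pose:
        return self._pose

def render_frame(pose: Pose) -> str:
    # A real application (game, simulator, media player) would render
    # its scene from this viewpoint; here we just report the pose.
    return f"render yaw={pose.yaw:.2f} pitch={pose.pitch:.2f}"

tracker = MockHeadTracker()
frame = render_frame(tracker.read())
```

The point of the layering is that the application never touches the hardware directly: the library owns the sensors and displays, and the application consumes poses and submits frames.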
SmartGoggles are designed to be portable and thus battery-operated. While several power management schemes are possible, the reference design includes a pair of rechargeable batteries that can be recharged inside or outside the SmartGoggles. A hot swap capability allows replacing one battery while the unit continues to be powered from the other.
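The hot-swap behavior can be modeled as a simple source-selection rule: the unit stays powered as long as at least one source is available, so either battery can be removed while the other carries the load. The selection policy below is an illustrative assumption, not the actual power-management firmware.

```python
def select_supply(battery_a_present, battery_b_present, external_power=False):
    """Toy model of the hot-swap rule: prefer external power when
    charging, otherwise draw from whichever battery is installed."""
    if external_power:
        return "external"
    if battery_a_present:
        return "battery_a"
    if battery_b_present:
        return "battery_b"
    return "none"  # both batteries removed and no charger: unit powers off
```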
Connectivity and expansion
SmartGoggles are designed to be expandable and to connect to the outside world. To that end, the reference design includes:
- WiFi for connecting to networks such as the Internet or to other devices
- Bluetooth for local peripherals or to be paired with phones and tablets
- Accessible SD card
- External HDMI input for those instances where video comes from outside the goggles
- High-definition cameras to support augmented reality, face recognition and more
- and more…