Sunday, May 2, 2010

augmented reality display

idea by me

virtual displays using a 3D anchor for distance, size and 3-axis angle placement.

devices:
1. sensory and display eyeglasses:
- stereoscopic high definition cameras mounted on both the left and right sides of the frame, used to capture moving images, in particular the distinct image of the anchor.
- transparent or semi-opaque LCD lenses to display the processed image (overlaid on the image of the anchor, while parts not covered by the image remain transparent).
- 3 accelerometers to allow for accurate 3-axis head movement sensing and positioning.
- wire connected to a processing unit to send the stereo image and accelerometer data to it, as well as receive audio signals from the processor.
- directional headphone stubs that project audio to the ears. stubs are located near the ears for private audio transmission.
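the accelerometer-based head sensing above can be sketched in code. a minimal example, assuming a single accelerometer reading gravity with x = right, y = up, z = forward (the post doesn't fix an axis convention, so the axes and function name here are illustrative assumptions):

```python
import math

def tilt_roll_from_gravity(ax, ay, az):
    """Estimate head tilt (pitch) and roll in degrees from one
    accelerometer's gravity vector. With three accelerometers, as in
    the design, the relative readings would refine this estimate."""
    # pitch: how far the forward axis dips toward/away from gravity
    pitch = math.degrees(math.atan2(az, math.sqrt(ax * ax + ay * ay)))
    # roll: sideways lean of the head around the forward axis
    roll = math.degrees(math.atan2(ax, ay))
    return pitch, roll
```

e.g. a level head (gravity straight down the y axis) gives zero tilt and zero roll; looking straight down swings pitch toward 90 degrees.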
2. processing unit:
- wireless transceiver that receives display data from the anchor and sends instructions to the anchor.
- wire connected to the eyeglasses that receives the stereo image and accelerometer data for processing and also transmits audio signals to the eyeglasses.
- high speed real time processor that computes the positioning and size of the image to display on the eyeglasses.
3. anchor:
- 3-coloured (red, green, blue) cubes suspended in open space on a stand connected to a base.
- wireless transceiver on the base to transmit the display image and audio to the processing unit, as well as receive instructions from the processing unit.
- wired connection to another device (e.g. computer) that generates the image for display.
4. algorithm:
- display image is transmitted with a template cube size.
- size of the image to be displayed is based on the apparent size of the cubes calculated against the template cube size. perceived distance is emulated by sizing the image.
- image real-time tilt and roll is calculated based on the 3 eyeglass accelerometers and their relative changes to each other as well as gravity.
- image real-time rotation is based on the relative sizes of the cubes captured by the stereoscopic cameras on the eyeglasses.
- processing unit validates the positioning and size of the image by modelling the 3 cubes in a virtual stage, then projects/overlays the result onto the image on the eyeglasses.
- tri-coloured cubes give the processor imaging hints such as hue, shadows, lighting and contrast, allowing a more life-like image.
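the size/distance step of the algorithm can be sketched as follows. a minimal example under a pinhole-camera assumption (apparent size inversely proportional to distance); the function name, the pixel units and the reference distance are illustrative assumptions, not part of the original spec:

```python
def image_scale_and_distance(apparent_px, template_px, template_distance_m):
    """Given the anchor cube's apparent size in the camera image and the
    transmitted template cube size (its size in pixels at a known
    reference distance), derive the scale factor for the virtual
    display and the emulated viewing distance."""
    # cube looks smaller -> viewer is farther -> shrink the image
    scale = apparent_px / template_px
    # pinhole model: distance scales inversely with apparent size
    distance_m = template_distance_m * template_px / apparent_px
    return scale, distance_m
```

e.g. if the cube appears at half its template size, the image is drawn at half scale and the anchor is taken to be twice the reference distance away.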

diagrams to follow...

downside:
1. not a true depth display, e.g. objects physically in front of the anchor will get obscured by the projected image - can be mitigated by a blue/green screen behind the anchor.
2. there will always be noticeable lag unless processing performance is exceptional.
