When I was very young, I remember being mesmerized by the holographic bird on my parents’ credit cards, how the bird seemed to flap its wings as you tilted the card at different angles. More recently, I was awed by the use of holograms in art at “The Jeweled Net: Views of Contemporary Holography,” an exhibit at the MIT Museum. I thought it was really cool how three-dimensional objects rendered on a flat surface could be viewed from different perspectives and angles, as if the object were really sitting there.
Holograms are featured numerous times in the Star Wars movies, primarily as a form of telecommunication (perhaps an early inspiration for telepresence robots, given that members of the Jedi Council could attend meetings via hologram?). While traditional, real-life holograms rely on lasers and special photosensitive materials to capture physical objects, a 2013 letter in Nature described research toward 3D displays that could one day fit in devices as small as a mobile phone.

The 1971 Nobel Prize in Physics was awarded to Dennis Gabor for “his invention and development of the holographic method,” which he conceived in 1947 while attempting to improve electron microscopes. Once lasers became available, the first optical holograms were made in 1962 and are attributed to Yuri Denisyuk (Soviet Union) and to Emmett Leith and Juris Upatnieks (both at the University of Michigan, USA). Holograms are created by splitting a laser beam in two and, using mirrors and lenses, directing one beam onto the object while the other shines directly onto a recording medium, such as a silver halide photographic emulsion. Light reflected off the object interferes with the reference beam, and it is this interference pattern that is ultimately recorded on the medium. To view the recorded hologram, a laser of the same frequency as the one used to create it is shone onto the developed film, and the resulting light pattern reaches our retinas as a virtual image of the original object.
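To make the interference step concrete, here is a minimal sketch in Python (my own illustration, not from any of the papers mentioned) of what the film records when an idealized flat reference beam meets an object beam arriving at a small angle; the wavelength and beam angle are assumptions chosen for readability.

```python
import numpy as np

# Minimal sketch: two-beam interference, the physical basis of hologram
# recording. Both beams are idealized plane waves; a real object beam
# carries a much more complicated wavefront.
wavelength = 633e-9              # assumed: red He-Ne laser line, ~633 nm
k = 2 * np.pi / wavelength       # wavenumber
theta = np.deg2rad(5)            # assumed angle between the two beams

x = np.linspace(0, 50e-6, 1000)  # 50 micrometers across the film
reference = np.ones_like(x, dtype=complex)  # reference beam, head-on
obj = np.exp(1j * k * np.sin(theta) * x)    # object beam, tilted by theta

# Film responds to intensity, so the phase difference between the beams
# shows up as a pattern of bright and dark fringes -- the hologram itself.
intensity = np.abs(reference + obj) ** 2

# Fringe spacing for two plane waves: wavelength / sin(theta)
print(f"fringe spacing: {wavelength / np.sin(theta) * 1e6:.2f} micrometers")
```

Shining a same-frequency laser back through those recorded fringes diffracts the light so that it reconstructs the original object beam, which is why a virtual image of the object appears.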
Because we as humans have two forward-facing eyes, each eye receives a slightly different image of the world, and our brains convert those two images into a three-dimensional representation; this is known as stereoscopic vision. Current 3D displays typically rely on some form of glasses to present a slightly different image to each eye. This can be done either by actively shuttering images between the left and right eyes or by using differently polarized lenses.
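For a rough feel for the geometry, here is a small sketch (my own, using an assumed average interpupillary distance) of how the difference between the two eyes' lines of sight shrinks with distance, which is why stereoscopic depth cues fade for faraway objects:

```python
import numpy as np

# Minimal sketch: the farther away a point is, the smaller the difference
# between the left and right eyes' views of it. Values are illustrative.
eye_separation = 0.063  # assumed average interpupillary distance, ~63 mm

for depth in [0.3, 1.0, 5.0, 100.0]:  # distance to the object, in meters
    # Angle between the two lines of sight to the same point (vergence);
    # the brain uses this left/right difference to infer depth.
    vergence = np.degrees(2 * np.arctan(eye_separation / (2 * depth)))
    print(f"{depth:6.1f} m -> {vergence:6.3f} degrees of left/right difference")
```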
A group at Hewlett-Packard Laboratories in Palo Alto, CA built on the idea of autostereoscopic displays to create a diffractive backlight system that can generate 3D images. Autostereoscopic displays give the perception of 3D images without requiring the viewer to wear any special headgear or glasses; an example of this type of display can be found in the Nintendo 3DS gaming system. Existing autostereoscopic displays are limited in their viewing angles, so Fattal and colleagues sought to overcome this limitation with their new backlight design. They used standard LEDs for edge lighting, and this light is guided to a series of etched directional gratings, which scatter it across the viewing area. Due to physical hardware limitations, Fattal and colleagues were only able to build a prototype that allows 14 viewing directions, although in theory the approach could support 64 viewing directions, allowing for smooth 3D renditions of objects. (The number of viewing zones reflects the number of positions around the screen from which each eye receives the correctly offset image, producing the perception of three-dimensional objects; outside the viewing zones, objects appear two-dimensional.)
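The directional part of the backlight comes down to the standard grating equation, d·sin(θ) = m·λ: the pitch of each etched grating sets the direction into which it throws light. Here is a minimal sketch of that relationship; the pitches and wavelength are numbers I made up for illustration, not values from Fattal and colleagues' paper.

```python
import numpy as np

# Minimal sketch: the grating equation d * sin(theta) = m * wavelength.
# Each pitch steers light into a different direction, which is how a set
# of etched gratings can serve a set of distinct viewing zones.
wavelength = 550e-9  # assumed: green light, ~550 nm

for pitch_nm in [500, 600, 800, 1200, 2400]:  # hypothetical grating pitches
    s = wavelength / (pitch_nm * 1e-9)        # sin(theta) for first order (m = 1)
    if s <= 1:
        angle = np.degrees(np.arcsin(s))
        print(f"pitch {pitch_nm:5d} nm -> first order steered to {angle:5.1f} degrees")
    else:
        print(f"pitch {pitch_nm:5d} nm -> no propagating first order")
```

With 14 (or eventually 64) distinct grating directions, each of a viewer's eyes falls into a different zone and receives its own image, recreating the stereoscopic effect without glasses.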
What is also promising about this new backlight is that it is small and compact. Although many hardware and computational challenges must be overcome before a device like this could hit the market, perhaps one day, instead of video conferencing, we could be virtually transported to our meeting site for an “in-person” conversation.