Use artificial intelligence to generate real-time 3D holograms

The experimental demonstration of 2D and 3D holographic projection. The left photo is focused on the mouse toy (in yellow box) closer to the camera, and the right photo is focused on the perpetual desk calendar (in blue box). Credit: Liang Shi, Wojciech Matusik, et al

Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing. One reason: VR can make users feel sick. Nausea and eye strain can result because VR creates an illusion of 3D viewing, even though the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.

Holograms deliver an exceptional representation of the 3D world around us. Plus, they are beautiful. (Go ahead – look at the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.

Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly – and the deep-learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.

“People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” says Liang Shi, the study’s lead author and a Ph.D. student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “It’s often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades.”

Shi believes the new approach, which the team calls ‘tensor holography’, will finally bring that elusive 10-year goal within reach. The advance could fuel a spillover of holography into fields such as VR and 3D printing.

Shi worked on the study, published in Nature, with his advisor and co-author Wojciech Matusik. Other co-authors include Beichen Li of EECS and the Computer Science and Artificial Intelligence Laboratory at MIT, as well as former MIT researchers Changil Kim (now at Facebook) and Petr Kellnhofer (now at Stanford University).

The search for better 3D

A typical lens-based photograph encodes the brightness of each light wave – a photo can faithfully reproduce a scene’s colors, but it ultimately yields a flat image.

In contrast, a hologram encodes both the brightness and the phase of each light wave. That combination delivers a truer depiction of a scene’s parallax and depth. So, while a photograph of Monet’s “Water Lilies” can highlight the paintings’ color palette, a hologram can bring the work to life, rendering the unique 3D texture of each brushstroke. But despite their realism, holograms are a challenge to make and share.
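As a rough sketch of that distinction, the brightness-plus-phase pairing can be represented as one complex number per pixel; the array size and values below are arbitrary illustrations, not the format used in the study:

```python
import numpy as np

# A photo keeps only brightness; a hologram keeps brightness AND phase.
# One complex number per pixel captures both (sizes/values are illustrative).
rng = np.random.default_rng(0)
amplitude = rng.random((64, 64))               # brightness: what a camera records
phase = rng.uniform(0, 2 * np.pi, (64, 64))    # phase: what a photo throws away

field = amplitude * np.exp(1j * phase)         # the hologram's complex wavefield

# Both pieces of information remain recoverable from the single complex field.
print(np.allclose(np.abs(field), amplitude))                          # True
print(np.allclose(np.exp(1j * np.angle(field)), np.exp(1j * phase)))  # True
```

Recovering the amplitude alone mimics what a photo stores; the phase term is the extra ingredient that lets a hologram reconstruct parallax and depth.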

The first holograms, developed in the mid-1900s, were recorded optically. That required splitting a laser beam, with half the beam used to illuminate the subject and the other half serving as a reference for the phase of the light waves. This reference gives rise to a hologram’s unique sense of depth. The resulting images were static, so they could not capture motion. And they were hard-copy only, making them difficult to reproduce and share.

Computer-generated holography sidesteps these challenges by simulating the optical setup. But the process can be a computational slog. “Because each point in the scene has a different depth, you can’t apply the same operations to all of them,” says Shi. “That increases the complexity significantly.” Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image. Moreover, existing algorithms do not model occlusion with photorealistic precision. So Shi’s team took a different approach: letting the computer teach physics to itself.
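A naive version of that per-point simulation makes the cost easy to see: every scene point contributes a spherical wave to every hologram pixel, so work scales as points × pixels. The sketch below uses a simple point-source model with assumed values for wavelength, resolution, pixel pitch, and scene points; it is not the paper’s algorithm:

```python
import numpy as np

# Naive physics-based hologram of a tiny 3D point cloud (point-source model).
# Each point adds a spherical wave to EVERY pixel, so cost = points x pixels --
# the computational slog that the deep-learning shortcut avoids.
wavelength = 532e-9                       # green laser, meters (assumed)
k = 2 * np.pi / wavelength                # wavenumber

n = 128                                   # hologram resolution, n x n (assumed)
pitch = 8e-6                              # pixel pitch in meters (assumed)
xs = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# Hypothetical scene: points at different depths, as (x, y, z, amplitude).
points = [(0.0, 0.0, 0.10, 1.0), (1e-4, -1e-4, 0.12, 0.5)]

field = np.zeros((n, n), dtype=complex)
for px, py, pz, amp in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)  # distance to each pixel
    field += amp * np.exp(1j * k * r) / r                 # spherical wave term

print(field.shape)  # (128, 128)
```

With millions of scene points and megapixel holograms, this loop is what once demanded a supercomputer; it also makes clear why no single operation can be shared across points at different depths.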

They used deep learning to accelerate computer-generated holography, enabling real-time hologram generation. The team designed a convolutional neural network – a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information. Training a neural network typically requires a large, high-quality dataset, which did not previously exist for 3D holograms.

The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture – including color and depth information for each pixel – with its corresponding hologram. To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with pixel depth distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion. That approach yielded photorealistic training data. Next, the algorithm got to work.
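One such training pair can be pictured as an RGB-D input tensor matched to a hologram target. The resolutions and the two-channel amplitude/phase layout below are assumptions chosen for illustration, not the dataset’s actual format:

```python
import numpy as np

# Sketch of one training pair in the custom dataset (shapes are assumptions):
#   input  -> an RGB-D image: color plus per-pixel depth, 4 channels
#   target -> the corresponding hologram, here stored as amplitude + phase planes
rng = np.random.default_rng(1)

rgb   = rng.random((192, 192, 3))              # color channels
depth = rng.random((192, 192, 1))              # per-pixel depth, background..foreground
rgbd  = np.concatenate([rgb, depth], axis=-1)  # network input, shape (192, 192, 4)

hologram = rng.random((192, 192, 2))           # amplitude + phase target planes

pair = {"input": rgbd, "target": hologram}     # one of ~4,000 such pairs
print(pair["input"].shape, pair["target"].shape)
```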

By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively improving its ability to create holograms. The fully optimized network operated far faster than physics-based calculations. That efficiency surprised the team itself.
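The pair-by-pair parameter adjustment is ordinary gradient-descent training. As a toy stand-in for the tensor network, the sketch below fits a single weight to synthetic input/target pairs; every name and value is illustrative:

```python
import numpy as np

# Toy illustration of the training loop: for each (input, target) pair the
# model nudges its own parameter to shrink the error. A one-weight "network"
# stands in for the tensor network; nothing here is the paper's actual model.
rng = np.random.default_rng(2)
w = 0.0                                   # the lone trainable parameter
lr = 0.1                                  # learning rate (assumed)

true_w = 3.0
pairs = [(x, true_w * x) for x in rng.random(200)]  # stand-in image/hologram pairs

for x, target in pairs:
    pred = w * x
    grad = 2 * (pred - target) * x        # gradient of the squared error
    w -= lr * grad                        # adjust the parameter from this pair

print(round(w, 2))                        # converges toward 3.0
```

The real network repeats this idea across millions of tensor parameters, which is why its output improves with every image pair it sees.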

“We are amazed at how well it performs,” says Matusik. In just milliseconds, tensor holography can craft holograms from images with depth information – which is provided by typical computer-generated images and can be computed from a multi-camera setup or LiDAR sensor (both are standard on some new smartphones). This advance paves the way for real-time 3D holography. What’s more, the compact tensor network requires less than 1 MB of memory. “It’s negligible, considering the tens and hundreds of gigabytes available on the latest cell phone,” he says.

“A Significant Leap”

Real-time 3D holography would enhance a slew of systems, from VR to 3D printing. The team says the new system could help immerse VR viewers in more realistic scenery, while eliminating the eye strain and other side effects of long-term VR use. The technology could be easily deployed on displays that modulate the phase of light waves. Currently, most affordable consumer-grade displays modulate only brightness, though the cost of phase-modulating displays would fall if widely adopted.

Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say. This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern. Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.

“This is a significant leap that could completely change people’s attitudes towards holography,” says Matusik. “We feel that neural networks were born for this task.”


More information:
Real-time photorealistic three-dimensional holography with deep neural networks, Nature (2021). DOI: 10.1038/s41586-020-03152-0

Provided by Massachusetts Institute of Technology

Citation: Use artificial intelligence to generate real-time 3D holograms (2021, March 10), retrieved March 11, 2021 from https://phys.org/news/2021-03-artificial-intelligence-3d-holograms-real-time.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
