Meta reveals new research: avatars, AR and brain-computer interface

At Meta Connect 2022, Meta showed new research results in the field of virtual and augmented reality. An overview with video examples.

Meta's research is designed to last ten years or more and push the boundaries of what is possible today in technologies such as virtual reality, augmented reality, and artificial intelligence. At Meta Connect 2022, the company gave an overview of research in many areas, from Meta's AR headset to neural interfaces and 3D scanning to photorealistic codec avatars.

Augmented Reality

Meta aims to launch a sleek, visually appealing yet powerful AR headset in the coming years. Because the technical challenges around miniaturization, power, battery capacity, and waste heat are considerable, Meta is pursuing a dual strategy in its development.

"Glasses need to be relatively small to look and feel good. So, we're approaching building augmented-reality glasses from two different angles. The first is building on all the technology we need for full-AR glasses, and then working to fit it into the best glasses form factor we can. The second approach is starting with the ideal form factor and working to fit more and more technology into it over time," Mark Zuckerberg said in the keynote.

The former effort goes by the code name Project Nazare, while the latter is a joint project between Meta and EssilorLuxottica, the world's largest eyewear manufacturer. This partnership has already resulted in one product: the Ray-Ban Stories, which offers several smart features but does not have a display built in.

At Meta Connect 2022, Meta and EssilorLuxottica gave an update on their smart glasses project and the partnership:

  • Via a software update, the Ray-Ban Stories will soon gain the ability to call contacts hands-free or send text messages.
  • Also new is a feature called Spotify Tap. "You’ll just tap and hold the side of your glasses to play Spotify, and if you want to hear something different, tap and hold again and Spotify will recommend something new," Meta writes.
  • EssilorLuxottica wearables chief Rocco Basilico announced during the keynote that his company and Meta are working on a new headset that will open a "portal into the Metaverse." Will the next generation of Ray-Ban Stories come with a display? Zuckerberg and Basilico left this open.

What about Project Nazare?

At Meta Connect 2021, Meta simulated what a view through Project Nazare might look like. This year, Zuckerberg delivered another teaser of the AR headset without showing it.

In the clip, Meta's CEO walks down a hallway wearing the device and controls it with an EMG wristband. What appears on screen is apparently the view through Project Nazare.

Zuckerberg sends Meta's head of research Michael Abrash a message and records a video, both using micro gestures. This is made possible by the EMG wristband, which picks up motor nerve signals at the wrist and converts them into computer commands with the help of AI. Alongside voice control and hand tracking, Meta sees this type of interface as the most important AR input method of the future.

Zuckerberg did not say when Project Nazare might appear. According to one report, Meta plans to unveil it in 2024 and commercialize it in 2026.

Neural interface

Another part of Meta's research update concerns the aforementioned EMG wristband. For the AR interface of the future, Meta is betting on a combination of this technology and personalized AI assistance that recognizes the context of a situation and action and proactively supports the wearer in everyday life. This is meant to enable an intuitive, nearly frictionless interface between humans and computers.

"By combining machine learning and neuroscience, this future interface will work for different people while accounting for their differences in physiologies, sizes, and more through a process known as “co-adaptive learning," Meta writes.

A video illustrates this. In it, two Meta employees can be seen playing a simple arcade game using the EMG wristband and movements of their fingers. Note that they use slightly different gestures: the artificial intelligence learns from each person's signals and movements and builds an individual model.

"Each time one of them performs the gesture, the algorithm adapts to interpret that person’s signals, so each person’s natural gesture is quickly recognized with high reliability. In other words, the system gets better at understanding them over time," Meta writes.

The better the algorithm is trained, the less the hands and fingers have to move. The system recognizes the action the person has already decided on by decoding the signals at the wrist and converting them into computer commands.
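
Meta has not published the details of this gesture model, but the co-adaptive idea can be illustrated with a small sketch: a per-user classifier whose gesture prototypes shift toward each wearer's latest examples, so slightly different personal variants of the same gesture become reliably recognized. The feature sizes, gesture names, and nearest-prototype approach below are illustrative assumptions, not Meta's actual EMG pipeline.

```python
# Conceptual sketch of "co-adaptive learning": a per-user gesture model that
# updates each time the wearer performs a gesture, so their personal variant
# of that gesture is recognized more and more reliably.
# Feature sizes and gesture names are invented for illustration.

import numpy as np

class CoAdaptiveGestureClassifier:
    def __init__(self, n_features: int, gestures: list[str], lr: float = 0.2):
        self.gestures = gestures
        # One prototype vector per gesture, adapted online for each wearer.
        self.prototypes = {g: np.zeros(n_features) for g in gestures}
        self.counts = {g: 0 for g in gestures}
        self.lr = lr

    def predict(self, emg_features: np.ndarray) -> str:
        # Nearest-prototype classification of a windowed EMG feature vector.
        distances = {g: np.linalg.norm(emg_features - p)
                     for g, p in self.prototypes.items()}
        return min(distances, key=distances.get)

    def adapt(self, emg_features: np.ndarray, intended_gesture: str) -> None:
        # Pull the prototype toward this wearer's latest example of the gesture.
        g = intended_gesture
        if self.counts[g] == 0:
            self.prototypes[g] = emg_features.copy()
        else:
            self.prototypes[g] += self.lr * (emg_features - self.prototypes[g])
        self.counts[g] += 1

# Each wearer gets their own model; here, user A's variants of "tap" and "pinch".
rng = np.random.default_rng(0)
model_user_a = CoAdaptiveGestureClassifier(n_features=8, gestures=["tap", "pinch"])
for _ in range(20):
    model_user_a.adapt(rng.normal(1.0, 0.1, 8), "tap")
    model_user_a.adapt(rng.normal(-1.0, 0.1, 8), "pinch")
print(model_user_a.predict(rng.normal(1.0, 0.1, 8)))  # -> "tap"
```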

AR navigation for the visually impaired

Meta is working with Carnegie Mellon University (CMU) on a research project to help the visually impaired navigate complex indoor environments.

The university researchers used Meta's Project Aria sensing glasses to scan the Pittsburgh airport in 3D. They used this 3D map of the environment to train AI localization models. As a result, the smartphone app NavCog, developed by CMU, can guide users more safely through the airport by relaying audio instructions. The following video explains the technology.
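
CMU has not published the internals of this system, but the basic pattern (localize the user against a prebuilt map of the building, then turn the next waypoint into a spoken instruction) can be sketched roughly as follows. The route, pose estimate, and instruction wording are invented for illustration and are not the actual NavCog implementation.

```python
# Rough sketch of the navigation idea: given the user's estimated pose on a
# prebuilt indoor map, turn the next waypoint into a spoken-style instruction.
# The map, pose, and phrasing are illustrative assumptions, not NavCog itself.

import math

# A route through an indoor map (e.g., derived from a 3D scan), as 2D waypoints in meters.
route = [(0.0, 0.0), (12.0, 0.0), (12.0, 25.0), (30.0, 25.0)]

def next_instruction(position, heading_rad, waypoint):
    """Convert the user's estimated pose and the next waypoint into guidance."""
    dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
    distance = math.hypot(dx, dy)
    # Relative bearing between where the user is facing and where they should go.
    bearing = math.degrees(math.atan2(dy, dx) - heading_rad)
    bearing = (bearing + 180) % 360 - 180
    if abs(bearing) < 20:
        turn = "continue straight"
    elif bearing > 0:
        turn = "turn left"
    else:
        turn = "turn right"
    return f"{turn}, then walk about {round(distance)} meters"

# A visual localization model would estimate the pose from camera images;
# here we simply assume an estimated position and heading.
estimated_pose = ((11.5, 2.0), math.radians(90))
print(next_instruction(*estimated_pose, route[2]))
# -> "continue straight, then walk about 23 meters"
```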

Simple 3D scanning

Mixed reality headsets like the Meta Quest Pro display the physical surroundings inside the headset, but they cannot yet scan objects and save them as 3D models. If that were possible, real objects could be brought into virtual environments.

"It’s hard to build 3D objects from scratch, and using physical objects as templates could be easier and faster. But there’s no seamless way to do that today, so we’re researching two different technologies to help solve that problem," Meta writes.

The first uses a machine learning technique called Neural Radiance Fields, or NeRFs for short, to reconstruct a highly detailed 3D object from a handful of photos.
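
Conceptually, a NeRF learns a function that maps a 3D point to a color and a density, and an image is rendered by compositing those values along each camera ray. The sketch below shows only that volume rendering step with a hand-written stand-in scene; in a real NeRF, the function would be a neural network fitted to the input photos.

```python
# Minimal sketch of the core NeRF idea: a field maps 3D points to color and
# density, and pixels are formed by compositing samples along camera rays.
# The hard part, fitting that field to photos with a neural network, is omitted.

import numpy as np

def radiance_field(points):
    """Toy stand-in scene: a reddish sphere of radius 1 at the origin."""
    inside = np.linalg.norm(points, axis=-1) < 1.0
    density = np.where(inside, 8.0, 0.0)                      # sigma(x)
    color = np.where(inside[..., None], [0.9, 0.2, 0.2], [0.0, 0.0, 0.0])
    return color, density

def render_ray(origin, direction, n_samples=128, near=0.0, far=4.0):
    """Numerically integrate color along one ray (volume rendering)."""
    t = np.linspace(near, far, n_samples)
    delta = t[1] - t[0]
    points = origin + t[:, None] * direction
    color, density = radiance_field(points)
    alpha = 1.0 - np.exp(-density * delta)                    # opacity per segment
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = transmittance * alpha
    return (weights[:, None] * color).sum(axis=0)             # composited RGB

# One ray shot from z = -3 straight through the sphere:
print(render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
```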

The second technology is called Inverse Rendering. Objects digitized with this method react dynamically to the lighting and physics in VR environments.
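
The practical payoff of such a decomposition can be shown with a toy example: once a capture has been separated into material (albedo) and geometry (surface normals), the object can be re-shaded under whatever light a VR scene provides. The Lambertian shading below is a deliberately simple stand-in; the decomposition step itself is the hard research problem and is not shown.

```python
# Illustrative relighting: re-shade surface points recovered by inverse
# rendering (albedo + normals) under a new light in the VR scene.

import numpy as np

def relight(albedo, normals, light_dir, light_color):
    """Lambertian shading of recovered surface points under a new light."""
    light_dir = light_dir / np.linalg.norm(light_dir)
    # Diffuse term: how directly each surface point faces the light.
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)
    return albedo * n_dot_l[:, None] * light_color

# Two recovered surface points, lit by a warm light from above.
albedo = np.array([[0.8, 0.6, 0.4], [0.2, 0.2, 0.9]])
normals = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(relight(albedo, normals,
              light_dir=np.array([0.0, 1.0, 0.0]),
              light_color=np.array([1.0, 0.9, 0.8])))
```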

A disadvantage of both technologies is that they do not yet work in real time. However, Meta sees them as important steps on the way to simple 3D scanning of physical objects.

Codec Avatars

Photorealistic digital encounters - for Mark Zuckerberg, this is the killer app of virtual and augmented reality.

To this end, Meta has been working for many years on so-called codec avatars: digital alter egos that barely differ in appearance from the human original.

At Meta Connect 2021, Meta showed second-generation codec avatars and demonstrated full-body avatars. This year, there was another update on the technology.

Codec Avatars 2.0 can now switch between virtual outfits and are even more expressive. To demonstrate the improved expressiveness, Mark Zuckerberg had a codec avatar made of himself. The following video shows what the technology can now do.

One of the biggest obstacles to the commercialization and adoption of codec avatars is their complex creation: users currently have to be scanned in a special 3D capture studio.

To simplify the generation of a personal codec avatar, Meta is working on Instant Codec Avatars. All it takes is a two-minute scan of the face with a smartphone. The following video illustrates the recording process.

The downside of this process is that the finished avatar doesn't look quite as realistic as Zuckerberg's, and it still takes hours for the avatar to be created and ready to use. However, Meta is working to speed up the process.

Meta Connect 2022: Watch the research update on YouTube

Meta emphasizes that these projects are research and that the technologies will not necessarily find their way into products. "Still, it’s a glimpse at where the technology is headed over the next five to 10 years," Meta writes. Below is the video excerpt that introduces the innovations featured in this article.