Meta shows impressive AI-enhanced AR glasses technology

Meta's latest AI demonstration shows how its SceneScript model can help AR glasses recognize the objects around you without depth sensors.

FACTS

Meta SceneScript

Meta AI posted a video demonstrating how its Project Aria AR glasses capture a point cloud that SceneScript, a model based on Meta's Llama AI, interprets as identifiable real-world objects such as walls, windows, doors, and furniture.

The output is a plain-English text file in a structured markup format that lists the objects and their dimensions. That data is enough to outline the objects or render them in 3D as bounding boxes.
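To make that concrete, here's a minimal sketch of what such structured output could look like and how an app might parse it into objects. The command names and parameters below are illustrative assumptions, not Meta's actual SceneScript format.

```python
import re

# Hypothetical SceneScript-style output: one structured command per line,
# carrying an object's class and dimensions. Command names and parameters
# are illustrative assumptions, not Meta's actual format.
SCENE_TEXT = """\
make_wall, id=0, position=(0.0, 0.0, 0.0), width=4.2, height=2.6
make_door, id=1, wall_id=0, position=(1.1, 0.0, 0.0), width=0.9, height=2.0
make_bbox, id=2, class=sofa, position=(2.0, 0.0, 1.5), scale=(1.8, 0.8, 0.9)
"""

def parse_scene(text):
    """Parse structured scene commands into one dict per object."""
    objects = []
    for line in text.strip().splitlines():
        command, _, args = line.partition(",")
        entry = {"command": command.strip()}
        # key=value pairs; a value is either a (...) tuple or a plain token.
        for key, value in re.findall(r"(\w+)=(\([^)]*\)|[^,\s]+)", args):
            entry[key] = value
        objects.append(entry)
    return objects

for obj in parse_scene(SCENE_TEXT):
    print(obj)  # e.g. {'command': 'make_bbox', 'id': '2', 'class': 'sofa', ...}
```

An engine could feed each entry's position and scale values straight into a renderer to draw wireframe outlines over the passthrough view.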

A Meta Quest 3 ran the SceneScript software, and researchers used the VR headset's passthrough view to show the overlaid point cloud data and labeled scene elements.

Most impressively, SceneScript can also generate more complex geometry, so tables, office chairs, and sofas are reconstructed in greater detail.

CONTEXT

A preview of what’s coming

While Meta’s Project Aria glasses capture data to reconstruct the scene, the Qualcomm Snapdragon 835 processor in these prototype devices isn’t sufficient for processing.

Project Aria doesn’t contain a depth sensor, so the point clouds are based solely on visual processing. That’s important for AR glasses since every sensor adds weight, which is major concern for something that rests on the bridge of your nose.

SceneScript isn’t available for consumers to use on the Quest 3 or Ray-Ban Meta Smart Glasses, but Meta’s research provides an intriguing look at what will be possible with upcoming AR glasses.

Identifying parts of a room will allow much more immersive augmented reality experiences where virtual content can interact with your surroundings. An app like the Quest 3’s First Encounters might run on AR glasses in the future.

Here's Meta's X post.

Sources: X