Google transforms Google Maps into the backbone of our AR future

At its developer conference I/O 2022, Google is introducing Immersive View and the Geospatial API, two new Google Maps features that could become important building blocks of an augmented reality future.

Google’s Immersive View is an approximation of a visual digital twin of the world, an evolution of the 3D view of larger cities or well-known landmarks familiar from Google Earth VR.

Neural rendering turns 2D photos into a 3D perspective

The 3D perspective of the Immersive View is computer-generated. Google uses neural rendering to combine 2D satellite and Street View images into 3D scenes. In addition, Google integrates visual real-time information into the 3D view, such as traffic and visitor volume or weather.

Thanks to AI-supported rendering, Google can also generate 3D spaces from 2D photos of restaurants, for example. This allows users to dive into individual streets via Immersive View and navigate further into interior spaces – all in 3D.

According to Google, the new Immersive View runs on any smartphone via "Immersive Stream" from Google Cloud. The remarkable technology still has one small catch: it is available immediately, but initially only for selected areas in San Francisco, New York, Los Angeles, London, and Tokyo. More cities are to follow.

Is Google Maps becoming the next big digital infrastructure?

Immersive View shows that the Maps platform has long outgrown its status as a navigation app. Google now crawls the real world as vigorously as it crawls the Internet.

As a result, Maps is increasingly becoming a digital representation of the real world, also known as the “AR Cloud” in the technical jargon of the augmented reality industry. This 3D coordinate system of the real world is a fundamental building block for a shared AR future.


Whoever controls the AR cloud could, for example, determine, and charge for, when and where audiovisual digital information appears in reality. Meta is also researching such a digital twin of the world with Live Maps, as is Niantic, but Google is likely far ahead thanks to its Maps data.

With the "ARCore Geospatial API", Google is demonstrating at I/O the next step in Maps' evolution toward the AR cloud. The programming interface lets developers place digital content at real-world locations in 87 countries, without having to visit or physically scan the site. Developers can thus anchor digital games or exhibitions at fixed real-world locations, visible to anyone with a compatible smartphone.
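In practice, placing content this way means enabling Geospatial mode on an ARCore session and creating an anchor from latitude, longitude, and altitude. A minimal Kotlin sketch of that flow follows; it assumes an existing ARCore `Session` on an Android device with Google Play Services for AR, and the coordinates and rotation values are hypothetical placeholders:

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Config
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Sketch: anchor digital content at a real-world coordinate
// using the ARCore Geospatial API.
fun placeGeospatialAnchor(session: Session): Anchor? {
    // Enable Geospatial mode on the session.
    session.configure(session.config.apply {
        geospatialMode = Config.GeospatialMode.ENABLED
    })

    // Earth becomes available once VPS localization is tracking.
    val earth = session.earth
    if (earth?.trackingState != TrackingState.TRACKING) return null

    // Hypothetical placement: latitude, longitude, altitude (WGS84),
    // plus an identity rotation quaternion (qx, qy, qz, qw).
    return earth.createAnchor(
        37.7749, -122.4194, 10.0,
        0f, 0f, 0f, 1f
    )
}
```

The returned anchor is then attached to a renderable in whatever rendering engine the app uses; ARCore keeps it pinned to the real-world coordinate as the device moves.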

According to Google, the new API draws on nearly 15 years of Google Maps development. The AI-supported Visual Positioning System (VPS), already used for the AR navigation arrows in Maps, helps place digital content in the real world as precisely as possible. VPS, in turn, combines Street View imagery with AI image analysis.

The Geospatial API is an evolution of Cloud Anchors, which debuted in 2019. Google's developer blog offers application examples and code details for the new interface.

At I/O, Google also unveiled new tech glasses for real-time visual translation and made a clear commitment to AR glasses as a new computing platform.

Sources: Google Blog on Immersive View