Meta Quest gets better spatial audio
Meta’s new audio tools promise more natural and localizable sound in VR and AR, from distant screams to whispers next to your ear.
On February 7, Meta added immersive audio capabilities to its proprietary Presence Platform. The new “XR Audio SDK” is designed to make it easier for developers to incorporate spatial, localizable sound. It is currently only available for the Unity engine, which is widely used in VR. Support for Unreal Engine, Wwise and FMOD is planned.
The Presence Platform is a collection of development tools and programming interfaces that enable, for example, hand and voice interaction and augmented reality capabilities on Quest 2 and Quest Pro. First introduced in the fall of 2021, the platform most recently gained the Movement SDK for face, eye, and body tracking in the fall of 2022.
3D sound for Quest 2 and other VR headsets
Applications for the new immersive sound features include virtual reality, augmented reality, and mixed reality. For the latter, Meta’s Quest 2 and Quest Pro headsets overlay computer graphics on the video feed from their front-facing cameras. Beyond Meta’s own hardware, the new audio SDK supports “almost any standalone mobile VR device” as well as PC VR platforms (e.g. SteamVR) and third-party devices.
New features include better modeling of the filtering effects of the head, outer ears, and torso, which strongly shape how we hear sound in the real world: a head-related transfer function (HRTF) is designed to reproduce this natural audio perception accurately. Without it, sounds in your immediate environment seem unnatural.
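The core idea behind HRTF rendering can be illustrated outside the SDK: each ear receives the source signal filtered by a head-related impulse response (HRIR), and convolving a mono signal with a left and a right HRIR yields a binaural stereo signal. The following is a minimal NumPy sketch with made-up toy HRIRs, not Meta’s implementation; real systems use measured HRIR sets selected per source direction.

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with per-ear HRIRs to get a binaural stereo signal."""
    left = np.convolve(mono, hrir_left)    # signal as heard at the left ear
    right = np.convolve(mono, hrir_right)  # signal as heard at the right ear
    return np.stack([left, right], axis=-1)

# Toy HRIRs for a source on the listener's left: the right ear hears the
# sound slightly later (head shadow delay) and attenuated.
hrir_l = np.array([1.0, 0.5, 0.25, 0.0])
hrir_r = np.array([0.0, 0.6, 0.3, 0.15])

signal = np.random.default_rng(0).standard_normal(480)  # 10 ms at 48 kHz
stereo = binauralize(signal, hrir_l, hrir_r)
print(stereo.shape)  # (483, 2)
```

The interaural time and level differences baked into the two impulse responses are exactly the cues the brain uses to localize the source.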
Room acoustics simulation models the sound reflections and reverberation that depend on a room’s size, shape, and surface materials. So if you start an AR game in an empty office, the sounds of virtual enemies may bounce off the walls just like your own voice does. This consistency of sound is intended to enhance the sense of presence.
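A classic building block for this kind of reverberation is the feedback comb filter, where the delay length corresponds to the time a reflection takes to return from a wall and the feedback gain to how much energy the surface absorbs. This is a toy illustration of the principle, not the algorithm Meta uses:

```python
import numpy as np

def comb_reverb(x, delay, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay].
    Larger delay ~ bigger room; higher feedback ~ harder, less absorbent walls."""
    y = np.zeros(len(x) + delay * 8)  # leave room for the decaying echo tail
    y[:len(x)] = x
    for n in range(delay, len(y)):
        y[n] += feedback * y[n - delay]
    return y

# Feed in a single impulse (a clap) and watch the echoes decay.
impulse = np.zeros(100)
impulse[0] = 1.0
wet = comb_reverb(impulse, delay=10, feedback=0.5)
# Echoes appear every 10 samples at amplitudes 1.0, 0.5, 0.25, ...
```

Production reverbs combine many such filters (plus all-pass stages) and, as in Meta’s SDK, derive their parameters from the actual room geometry and materials.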
The Spatial Audio Rendering and Room Acoustics features build on the previous Oculus Spatializer and will continue to be developed. The system is much better suited for use in VR than the built-in audio systems in popular game engines, which are primarily designed for consoles and PCs, Meta said in its developer blog.
Acoustic immersion for audio novices
The new Audio SDK offers both flexibility and ease of use. According to Meta, even developers with no audio experience will be able to mix audio, which is essential for immersion.
As an example, Meta demonstrated its use in Horizon Workrooms, the virtual office and meeting application for Quest 2 and Quest Pro, at its Connect event in October 2022. At virtual conference tables, hearing voices come from the right directions is critical to a believable scene.
The previous Oculus Spatializer will continue to be supported in Unreal Engine, FMOD, and Wwise, and for those who prefer a native API. Meta does not yet recommend migrating these projects.
For new projects in Unity Engine, Meta recommends using the new “XR Audio SDK” to better maintain applications in the long run or to try out experimental features.
Instructions and examples can be found in the official documentation for the “XR Audio SDK” and in the Meta Developer Center.