Meta's Metaverse: Experiment exposes serious moderation problems

Journalists ran an experiment to find out how Meta moderates its own Metaverse platform. The answer: hardly at all.

When Mark Zuckerberg unveiled his Metaverse vision, he promised it would be spared the problems that plague Facebook and other social networks.

The website BuzzFeed News sent Meta a catalog of 19 questions to learn more about how the company moderates Horizon Worlds' virtual spaces and keeps users safe. Horizon Worlds is Meta's core Metaverse platform, which launched in the U.S. and Canada in late 2021.

Since Meta declined to address the individual questions, BuzzFeed News ran an experiment of its own and created a space filled with content that is banned on Facebook and Instagram.

The result, dubbed "The Qniverse," was plastered with QAnon slogans and Covid-19 disinformation of the kind that gets users banned on Facebook and Instagram. The world itself remained private, accessible only to a handful of invited BuzzFeed News editors, modeled on secret Facebook groups.

Metaverse moderation: uncharted territory, even for Meta

For the next 36 hours, Meta did not discover the Qniverse, or at least the company did not respond. When one of the editors reported the world to the moderation team, nothing happened for two days.

Only after a third report did a response arrive, stating that the space did not violate the content guidelines. BuzzFeed News speculates that Meta may have left the world up because it took it for a parody. Only after the site contacted Meta's communications department directly was the world deleted.

A screenshot of the "Qniverse" world in Horizon Worlds, showing QAnon and pro-Trump slogans. | Image: BuzzFeed News

For one thing, the experiment shows that Meta has catching up to do when it comes to monitoring and identifying questionable content. For another, it shows that the moderation rules, and how they are enforced, are not as transparent as the company claims. Presumably, Meta has not yet defined all of its rules and is still figuring out how to effectively moderate 3D spaces.

Moderating behavior instead of content

Since Horizon Worlds is only a few months old and likely has comparatively few users, solving this problem is less pressing than it is for Facebook and Instagram. That will change once millions of people start flocking to Meta's Metaverse platform. Mark Zuckerberg recently announced that he would bring the platform to smartphones, which could trigger a large influx of users.

In recent weeks, Horizon Worlds has received a lot of negative press, with reports of toxic behavior and sexual harassment in Horizon, abuses that are rarely, if ever, punished.

A fundamental challenge of Metaverse moderation is that not just written content but also voice chat and live behavior has to be moderated, which is significantly more difficult.

Clearly, Meta does not want to find itself in the complicated role of a behavioral police force monitoring its user base around the clock, a step that Meta's VR and AR chief Andrew Bosworth ruled out in advance. Still, there is no way around balancing safety on the one hand and privacy on the other. For Meta, there is still a lot of work to be done.

Source: BuzzFeed News