The metaverse is already populated with the internet's bad behavior

An avatar holds a bottle of "vodka" up to another avatar in Horizon Worlds

In Meta’s Horizon Worlds, a player’s avatar is force-fed a bottle of “vodka.” The report’s writers noted that the fact it took place virtually didn’t make it feel any less invasive.
Screenshot: SumOfUs video

It took less than an hour after putting on a VR headset for the first time for a researcher studying Meta’s metaverse to be “virtually raped.” After days of witnessing and experiencing rampant conspiracy theories, sexual harassment, racism, and homophobia, the writers of a recent report said the tech giant is unprepared to achieve its dream of creating a shared online space for millions of users.

The nonprofit corporate watchdog group SumOfUs released a report last week describing Meta’s transition from a company focused on social media, as Facebook, to one trying to define what it means to be online in a pseudo-physical environment. The group also documented how, as Meta’s VR platform has grown to over 300,000 users, Meta’s flagship metaverse product, Horizon Worlds, has already become home to some of the internet’s worst kinds of racist and misogynistic behavior. SumOfUs even included a link to a video (note: sexual assault trigger warning) of users bringing one of its researchers into a room and moving their avatars’ torsos back and forth in a humping motion, all while another user tried to pass a bottle of vodka.

SumOfUs has not been shy about its stance against Meta or its campaigns against big corporations on a host of other issues, but this report presents plenty of evidence of how little moderation is happening within the Horizon gaming space. The report’s researchers were apparently stalked through different worlds in the Meta-owned product. There were examples of fake drugs being placed on tables and of users constantly hurling racist and homophobic slurs at each other.

“Meta is moving forward with the Metaverse without a clear plan for how it will curb harmful content and behavior, misinformation, and hate speech,” the report says. Not only does the company know that content moderation is a problem, it doesn’t have a precise plan for how to fix it. The report’s writers cited an internal memo from March, shared by the Financial Times and written by Meta’s vice president of VR, Andrew Bosworth. The Meta VP said that moderating users “at any scale is practically impossible,” according to the FT.

The explicit promise of the metaverse is to occupy a digital realm and interact with people as if they were all really there. Although online bullying is nothing new, abuse of that nature takes on a more visceral quality once you put on the goggles meant to make you feel like you exist in the space.

The watchdog group also pointed to several other examples of users with female avatars reporting sexual assault, including one where a male user said he had recorded a female player’s voice to “jerk off” to. Meta introduced a “personal boundary” feature in February that prevents other avatars from getting too close to a player’s body. Other virtual chat rooms like VRChat have included similar features, but the SumOfUs researchers’ avatars were constantly being asked to turn their personal boundary settings off. When another user tries to touch or interact with you, the VR controllers vibrate, “creating a very disorienting and even disturbing physical experience during a virtual assault,” according to the report.

A Meta spokesperson told Insider that personal boundary settings are on by default and that the company doesn’t recommend turning them off around strangers, adding, “We want everyone who uses our products to have a good experience and easily find the tools that can help in situations like these, so we can investigate and take action.”

Although Horizon Worlds allows for parental controls and the ability to mute other users, the platform still represents a major concern for any young user, especially given that the researchers saw other avatars encouraging people to turn off safety features. The app is technically 18+, but current users said the platform is already full of underage people. Unlike social media platforms that may use systems to monitor written content or even video, today’s virtual reality chat rooms rely on individual users to report misbehavior.

Existing self-styled “metaverses” like the kid-focused Roblox have shown how difficult it is to curb obnoxious player behavior. There have been previous examples of avatars sexually assaulting players as young as 7. Horizon Worlds supposedly includes in-game moderators to enforce its guidelines, but the report says that, in interviews, players said there aren’t enough of them. And it’s not just a Meta platform problem: VRChat has hosted disturbing content on a platform that small children who know how to fake a date of birth can easily access.

The report’s authors aren’t the only ones criticizing Meta and its CEO, Mark Zuckerberg, specifically for the type of product they are trying to build. Amazon’s head of devices recently pointed out that no one can really define what the “metaverse” is. Snap CEO Evan Spiegel called the technology more “hypothetical” than anything else. Former Nintendo of America executive Reggie Fils-Aimé said Meta “is not an innovative company,” adding that the self-proclaimed pioneer of the metaverse has not shown it knows how to lead innovation.

But more than tech executives slapping at the competition, the comments highlight how much ambiguity still surrounds the idea of a shared digital space. Nick Clegg, Meta’s head of global affairs, recently wrote that asking Meta to record players’ speech for content moderation would be like asking a bar manager to listen in on conversations and mute anything they don’t like. Instead, the company wants to focus on AI-powered systems that help respond to user reports.

“We are in the early stages of this journey,” Clegg wrote.

But the decidedly Meta-critical writers at SumOfUs essentially said that if anyone is going to steer the metaverse ship, it had better not be Meta.

“Meta has repeatedly shown that it is unable to monitor and respond to harmful content on Facebook, Instagram, and WhatsApp, so it’s no surprise that it’s already failing in the metaverse as well,” SumOfUs writers said.
