The Big Three makers of video gaming hardware on Monday announced a united front to make online gaming safe for all their players.
“All players deserve to have fantastic social gaming experiences in settings where respect and safety are mutual,” Dave McCarthy, Microsoft’s corporate vice president for Xbox operations, wrote at the Xbox news website.
“At Xbox,” he continued, “we are aligned with both Nintendo, on behalf of the community of Nintendo Switch players, and PlayStation in our belief that protecting players online requires a multidisciplinary approach — one that combines the benefits of advanced technology, a supportive community, and skilled human oversight.”
“We can accomplish more when we work toward the same goal, and so we will each continue investing in, evolving, and amplifying our approaches to user safety,” he added.
In their pursuit of safer gaming, the companies outlined three guiding principles.
A commitment to empowering players and parents to understand and control gaming experiences.
To further that commitment, the companies said they will provide players with controls to customize their gaming experience and parents with tools and information to create appropriate gaming experiences for their children.
They also vowed to invest in technology to help thwart improper conduct and content before a player is subject to harm.
A commitment to partner with the industry, regulators, law enforcement, and the companies’ communities to advance user safety.
The firms promise to work with the industry, law enforcement, regulators and experts to develop and advance online safety initiatives, as well as partner with rating agencies to ensure games are appropriately rated.
They also pledged to partner with their communities to promote safe gaming behavior and encourage the use of reporting tools to call out bad actors.
A commitment to hold themselves accountable for making their platforms as safe as possible.
They declared that they will make it easy for players to report violations of their platforms’ code of conduct, as well as remove content and take enforcement actions for violations, including restricting players from using their services for misconduct.
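The report-then-enforce flow described above can be sketched in a few lines of code. This is purely an illustrative toy, not any platform's actual system; the class name, the idea of counting only moderator-upheld reports, and the three-strike threshold are all hypothetical assumptions.

```python
from collections import defaultdict

# Hypothetical threshold: upheld reports before a player loses online access.
SUSPEND_THRESHOLD = 3

class ModerationQueue:
    """Toy sketch of a report-and-enforce pipeline (illustrative only)."""

    def __init__(self):
        self.confirmed_reports = defaultdict(int)  # player -> upheld reports
        self.restricted = set()                    # players barred from online play

    def file_report(self, reported_player: str, upheld: bool) -> None:
        # Count a report only if human moderators uphold it, then
        # restrict the player once the threshold is reached.
        if upheld:
            self.confirmed_reports[reported_player] += 1
            if self.confirmed_reports[reported_player] >= SUSPEND_THRESHOLD:
                self.restricted.add(reported_player)

    def can_play_online(self, player: str) -> bool:
        return player not in self.restricted
```

In this sketch, enforcement is automatic once enough reports are confirmed, but a real system would layer in appeals, graduated penalties, and human review at each step.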
The announcement by the hardware makers surprised Michael Goodman, director for digital media at Strategy Analytics, a research, advisory and analytics firm in Newton, Mass.
“It seems like it’s come out of the blue,” he told TechNewsWorld. “There don’t seem to be any privacy issues or breaches that would require this sort of response.”
“The only thing I can think of is these companies are not only game companies, but they’re also tech companies, and tech companies have been under a lot of pressure from Congress,” he said. “This could be a way for them to get out ahead of the curve and disassociate themselves from Facebook, which is the poster child for Congressional ire.”
Michael Inouye, a principal analyst with ABI Research, explained that Microsoft, Sony and Nintendo are trying to build communities around their platforms and the social aspect of that is a critical piece of it.
“Just as we’re seeing a great deal of scrutiny being directed at social networks like Facebook,” he told TechNewsWorld, “the gaming companies are no less subject to similar standards and evaluation and in this case, they are taking a proactive approach to work towards a safe environment for all gamers.”
Online gaming has become so incredibly popular — especially during the pandemic — that kids are more exposed than ever to online threats, bullying and other nefarious behavior from strangers, explained Mark N. Vena, a senior analyst with Moor Insights & Strategy.
“Children — even teenagers — tend to be quite vulnerable to this type of activity, and the major game leaders realize that this can be an exposure for them, as well as limit the appeal of online gaming in the future,” he told TechNewsWorld.
Although the companies want consumers to be more confident their children won’t be abused online, there doesn’t seem to be a lot in their statement that the companies haven’t been doing individually, observed Ross Rubin, the principal analyst with Reticle Research, a consumer technology advisory firm in New York City.
“This idea of safety is something being picked up by other companies in the industry,” he told TechNewsWorld.
“For example,” he continued, “there’s a revival of a brand from the ’80s, Intellivision Entertainment, which is focused on a family-friendly environment. Every game is rated ‘E’ for everyone. There’s no extreme violence. No foul language. And there’s a focus on the family playing together in the living room, as they did in the ’80s.”
While the companies’ statement may be lacking in detail, their stance is a praiseworthy one, maintained Lewis Ward, research director for gaming at IDC.
“It’s certainly good to hear that some of the leading platform providers in gaming have a united front in terms of dealing with trolls and other bad actors that go over the line — even if the fine print of what that means isn’t spelled out in their joint statement,” he told TechNewsWorld.
No Good Guy, Bad Guy
Since Microsoft, Sony and Nintendo have tried to insulate their users from each other in the past, their cooperation is unusual, although limited, said Karen Kovacs North, director of the Annenberg Program on Online Communities at the University of Southern California.
“It’s not like they’re creating a bridge to build products together,” she told TechNewsWorld, “but it’s a smart move to offer the same approach to safety and protection.”
“Nobody’s the good guy or the bad guy,” she said. “Nobody’s blindsided by somebody else’s approach.”
As online multiplayer and live-streaming gaming gets more popular, more sophisticated tools will be needed to meet the safety commitments made by the leading hardware makers.
“I think there will need to be a much larger investment in artificial intelligence and machine learning tools designed to very quickly identify and isolate potential bad actors in gaming communities,” Ward observed.
“A manual-human approach to policing these platforms should be one aspect of what gets more investment moving forward as well,” he added, “but these communities are just too large and fast moving to make that cost-effective, so AI and ML tools will have to be a big piece of what gets stepped up funding.”
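To make the AI/ML moderation idea concrete, here is a deliberately minimal sketch of automated chat flagging. Real systems use trained classifiers rather than a keyword list; the blocklist, scoring rule, and threshold below are hypothetical stand-ins for illustration only.

```python
# Hypothetical blocklist standing in for a trained toxicity model.
TOXIC_TERMS = {"idiot", "loser", "trash"}

def toxicity_score(message: str) -> float:
    """Fraction of words in the message that match the blocklist."""
    words = [w.strip(".,!?") for w in message.lower().split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in TOXIC_TERMS)
    return hits / len(words)

def should_flag(message: str, threshold: float = 0.2) -> bool:
    # Route messages above the threshold to human moderators,
    # combining automation with the human oversight the article describes.
    return toxicity_score(message) >= threshold
```

Even in this toy form, the split mirrors the analysts' point: cheap automated screening handles the volume, while the flagged remainder goes to human reviewers.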