Meta’s new supercomputer raises concerns about censorship

Chris Lieberman, FISM News

Social media giant Meta announced on Monday that it is working on a supercomputer it claims will be the world’s fastest once fully built out in mid-2022, a move that some believe could increase censorship on the company’s platform.

Meta, the company formerly known as Facebook, said in a press release, “Today we’re introducing the AI Research SuperCluster (RSC), which we believe is among the fastest AI supercomputers running today and will be the fastest in the world once fully built out in mid-2022. AI can currently perform tasks like translating text between languages and helping identify potentially harmful content, but developing the next generation of AI will require powerful supercomputers capable of quintillions of operations per second.”

Meta CEO Mark Zuckerberg also congratulated the development team in a Facebook post of his own.

The unveiling of the new supercomputer is in line with the company’s shift in focus to the metaverse, a shared virtual-reality space that it expects to be the successor to the mobile internet. Meta explained, “Ultimately, the work done with RSC will pave the way toward building technologies for the next major computing platform — the metaverse, where AI-driven applications and products will play an important role.”

The company boasted of the RSC’s applications, such as its ability to translate between multiple languages in real time. However, one application that has many concerned is the removal of what the company describes as “harmful content.”

“With RSC, we can more quickly train models that use multimodal signals to determine whether an action, sound or image is harmful or benign,” said Meta. “This research will not only help keep people safe on our services today, but also in the future, as we build for the metaverse.”

Meta announced in December that it would use artificial intelligence (AI) to remove harmful content from its platform. The RSC will further enable the company to moderate and remove posts it deems to be harmful misinformation or hate speech.

Concerns over big tech censorship have come primarily from conservatives, who have argued that companies like Facebook, Twitter, and YouTube are using the pretense of removing “harmful content” to silence conservative voices. The companies have flagged and removed posts that question the efficacy of masks or vaccines or suggest that voter fraud occurred in the 2020 election. These companies have even banned users for posting such viewpoints, most notably former President Donald Trump after the Jan. 6 Capitol riot.

Rachel Bovard expressed her concern over big tech censorship in a New York Post op-ed in July, writing:

There is a dystopian element to telling social media platforms to control ‘misinformation’ when the very definition of that keeps changing. In the early months of the pandemic, Facebook began banning anti-lockdown protest content. Not because it violated any laws, but because such gatherings might run afoul of local guidance and public health recommendations. YouTube began censoring any content that disagreed with the error-prone World Health Organization, removing videos from emergency room doctors and podcasts from Stanford University neuroradiologists alike.
