World Economic Forum wants to use AI to automatically censor internet speech

The World Economic Forum (WEF) has proposed a new way to censor online content that relies on a small group of experts training artificial intelligence to identify “misinformation” and abusive content.

The WEF published an article on Wednesday outlining a plan to overcome frequent instances of “child abuse, extremism, misinformation, hate speech and fraud” online, which the organization says cannot be managed by human “trust and safety teams” alone, according to the article’s author, Inbal Goldberger, vice president of trust and safety at ActiveFence. Instead, the WEF proposes an AI-based content moderation method in which subject matter experts supply the AI with training sets so that it learns to recognize, and then flag or restrict, content that human moderators would deem harmful.

“Supplementing this smarter automated detection with human expertise to review edge cases and identify false positives and negatives, and then feeding those results back into training sets will allow us to build AI with human intelligence built in,” Goldberger said.

In other words, trust and safety teams can assist the AI with anomalous cases, allowing it to detect nuances in content that a purely automated system might otherwise miss or misinterpret, according to Goldberger.
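The WEF article describes this loop only at a conceptual level. As a rough illustration of how such a human-in-the-loop pipeline could be wired together, the Python sketch below acts automatically on high-confidence cases, routes uncertain “edge cases” to human reviewers, and feeds their verdicts back into the training set. Every name, threshold, and the scoring heuristic here is hypothetical and not drawn from the WEF proposal.

```python
# Hypothetical sketch of the human-in-the-loop moderation loop described above:
# an automated classifier flags content, human experts review the uncertain
# ("edge") cases, and their verdicts become new training data.
from dataclasses import dataclass, field


@dataclass
class ModerationModel:
    """Stand-in for a trained classifier that scores content from 0 (benign) to 1 (harmful)."""
    training_set: list = field(default_factory=list)

    def score(self, text: str) -> float:
        # Placeholder heuristic; a real system would run a trained model here.
        flagged_terms = {"fraud", "extremism", "hate"}
        hits = sum(term in text.lower() for term in flagged_terms)
        return min(1.0, hits / 2)

    def retrain(self, labeled_examples: list) -> None:
        # In a real system this would update model weights; here we simply
        # accumulate the human-labeled examples as new training data.
        self.training_set.extend(labeled_examples)


def moderate(model: ModerationModel, posts: list, expert_review) -> list:
    """Auto-act on high-confidence cases, send edge cases to human experts,
    and feed the experts' verdicts back into the training set."""
    removed, edge_cases = [], []
    for post in posts:
        confidence = model.score(post)
        if confidence >= 0.9:        # high confidence: automated removal
            removed.append(post)
        elif confidence >= 0.4:      # uncertain "edge case": human review
            edge_cases.append(post)
    verdicts = [(post, expert_review(post)) for post in edge_cases]
    removed += [post for post, harmful in verdicts if harmful]
    model.retrain(verdicts)          # human decisions become training data
    return removed
```

The thresholds and the scorer are placeholders; the relevant part is the feedback path from human reviewers back into the training data, which is the mechanism Goldberger emphasizes.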

“A human moderator who is an expert on European white supremacy will not necessarily be able to recognize harmful content in India or misinformation stories in Kenya,” she explained. As the AI trains on more data sets, it begins to identify the types of content that moderation teams would flag as offensive, reaching “near-perfect detection” at scale.

Goldberger said the system would protect against “increasingly advanced players who abuse the platforms in unique ways.”

Trust and safety teams at online media platforms, such as Facebook and Twitter, bring a “nuanced understanding of misinformation campaigns” that they apply to content moderation, Goldberger said.

This includes working with government organizations to filter content that advances a particular narrative about COVID-19, for example. The Centers for Disease Control and Prevention has advised Big Tech companies on what types of content qualify as misinformation on their sites.

Social media companies have also targeted conservative content, including posts that portray abortion or transgender activism negatively or that contradict the mainstream understanding of climate change, either by labeling it “misinformation” or blocking it completely.

The WEF document did not specify how AI training team members would be chosen, how they would be held accountable, or whether individual countries would have any control over the AI.

The elite corporate leaders who attend WEF gatherings have a track record of proposals that extend corporate control over people’s lives. At the last annual WEF summit in March, the head of Chinese multinational tech company Alibaba Group boasted of a system for monitoring individual carbon footprints derived from diet, travel and similar behaviors.

“The future is being built by us, by a powerful community like you here in this room,” WEF Founder and Chairman Klaus Schwab told an audience of more than 2,500 global business and political elites.

The WEF did not immediately respond to the Daily Caller News Foundation’s request for comment.

