
SPY vs SPY: Networks built using government funds are using citizen informants to flag “misinformation” in private messages

By now, most Americans are aware that the government works with Big Tech to censor the information people discuss and share on social media, ensuring that those who support the government’s preferred narratives get plenty of airtime while conflicting views are buried. However, you may be surprised to learn that government-funded entities have been hard at work building networks that flag conversations in private messages that go against their interests.

Executive director of the Foundation for Freedom Online and former State Department official Mike Benz warned on X: “The US gov’t is paying millions of dollars to censorship mercenary firms to build up a snitch network of citizen informants to report private text messages on WhatsApp for ‘misinformation’ – then create a vast database of banned memes & ideas.”

Using government funds, companies are being enlisted to monitor platforms of interest with the aim of getting content flagged and ultimately censored. It’s a common practice on platforms known for encrypted private messaging, such as Telegram and WhatsApp, that are immune to the types of monitoring used on less protected platforms like YouTube and Facebook.

The reports provided by these citizen informants and the information they flag are fed into databases and analyzed by AI.

Meedan’s Check tool offers a tip line for snitching on those who privately share unpopular views

One of the companies reportedly engaging in this behavior is the U.S.-based nonprofit Meedan, which the Foundation for Freedom Online says is “at the forefront of creating this snitch network.” The group was the recipient of a $5.7 million grant from the National Science Foundation. It also received a grant of more than $144,000 from the government-funded Open Technology Fund for its work on a “claims and memes database” used to monitor “fact-checked claims and debunked visual misinformation from internet repressive countries.”

Meedan is the name behind the tool Check, which enables users of private messaging platforms to report so-called “misinformation” disseminated via private chats using a tip line. The tool’s AI bot automatically drafts a “fact check” that users can share in closed discussion groups, and claims reported to the tip line are stored permanently and linked to similar claims in order to create a database of banned ideas.

Its blog justifies its existence by claiming: “Misinformation is destabilizing elections, slowing pandemic recovery, entrenching climate change denial, and creating civil unrest and violence. Today’s internet has four billion publishers distributing memes and reels through open networks and closed messaging platforms. Much of this content is enriching communities, but the underlying infrastructure and business model that power it make it fertile soil to sow division, discord, and ignorance.”

It adds that it seeks to “discover and address” misinformation “in closed messaging spaces.” Check describes itself as a global leader in fact-checking and operates in more than 35 countries.

Another organization involved in this type of censorship is the Algorithmic Transparency Institute, which was chartered by Congress and now operates a sub-branch, the Civic Listening Corps, tasked with training volunteers to report misinformation to the censorship industry and to Big Tech platforms’ content moderation departments.

Its website describes the corps as “a volunteer network of individuals trained to monitor for, critically evaluate, and report misinformation on diverse topics central to our civic life: voting, elections, public health, civil rights, and other important issues.”

The institute has also created Junkipedia, a database of the information that has been reported to it.

The director of the Algorithmic Transparency Institute, Cameron Hickey, admitted in a Zoom interview that they have tip lines, phone numbers and email addresses that can be used to forward questionable content from private chat groups. The group then monitors trends and develops messaging to counter unfavored narratives.

Hickey made their intentions clear, stating: “We need to inoculate against problematic narratives that are spreading on social media.”
