By Matt Agorist
Source: The Free Thought Project
Beginning this week, the censorship-heavy Facebook rolled out a new warning alerting users that they may have seen “extremist content” or may be friends with extremists. Every member of the Free Thought Project received one of these warnings, despite years of promoting only peace and freedom for all.
“Are you concerned that someone you know is becoming an extremist?” read one of the messages that some users received. Another read, “you may have been exposed to harmful extremist content recently.” Both included links to “get support” where users can report content they deem extremist.
This move comes after Joe Biden announced last month that his administration is creating a means for family and friends to snitch on each other.
In a teleconference three weeks ago, a senior administration official described to reporters a plan reminiscent of Minority Report: targeting “pre-crime.”
“We will work to improve public awareness of federal resources to address concerning or threatening behavior before violence occurs,” the official said.
The official went on to explain how this would work, which involves family members and friends snitching on each other.
We will work to improve public awareness of federal resources to address concerning or threatening behavior before violence occurs. And on that, I would just note that one of the things we’re talking about is the need to do something in this space, like the “See something” — “If you see something, say something” concept that has been promulgated previously by DHS. This involves creating contexts in which those who are family members or friends or co-workers know that there are pathways and avenues to raise concerns and seek help for those who they have perceived to be radicalizing and potentially radicalizing towards violence.
The official also announced that the government would be partnering with big tech to achieve “increased information sharing” between tech platforms to help combat this potential for radicalization.
Now, it appears, that program has arrived.
On Thursday, Facebook said the extremist warning was a test for a global approach to prevent radicalization on the site.
“This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk,” said a Facebook spokesperson in an emailed statement to Reuters. “We are partnering with NGOs and academic experts in this space and hope to have more to share in the future.”
But exactly what Facebook considers “extremism” or “extremist content” remains unclear. What is perfectly clear, however, is that they will undoubtedly crack down on political speech, vaccine safety speech, and all other legal free speech that may challenge the status quo — as this has been their MO from the start.
As Facebook pushes censorship of free speech to an all-time high, last week the Texas Supreme Court allowed a lawsuit against the company to proceed for allowing child predators to groom and recruit children for sex trafficking.
The lawsuit, brought by a group of children who were recruited on Facebook by their abusers, was allowed to move forward last week. The group sued Facebook for negligence and product liability, saying that Facebook failed to warn about or attempt to prevent sex trafficking on its platforms. The suits also alleged that Facebook benefited from the sexual exploitation of trafficking victims, according to a report in the Houston Chronicle.
“The three victims accused Facebook of running ‘an unrestricted platform to stalk, exploit, recruit, groom, and extort children into the sex trade.’ One was 15 when an older man contacted her on Facebook, offered her a modeling job, photographed her, posted the pictures on the now-defunct Backpage website, and prostituted her to other men, leading her to be ‘raped, beaten, and forced into further sex trafficking.’ The other two girls were 14, and reported almost identical experiences, with one openly pimped out for ‘dates’ on Instagram, a Facebook subsidiary,” Graham Dockery explained.
Facebook’s lawyers argued the company was shielded from liability under Section 230 of the federal Communications Decency Act, which provides that internet platforms are not to be treated as the publisher or speaker of what their users say or write online.
That shield should indeed apply, but if Facebook can claim Section 230 protection over child trafficking, why does it target and eliminate political speech so viciously? If Facebook does not act as a neutral party and removes peaceful anti-establishment content, it has no legal basis to claim entitlement under Section 230.
The court disagreed with Facebook’s lawyers, ruling, “We do not understand Section 230 to ‘create a lawless no-man’s-land on the Internet’ in which states are powerless to impose liability on websites that knowingly or intentionally participate in the evil of online human trafficking.”
“Holding internet platforms accountable for the words or actions of their users is one thing, and the federal precedent uniformly dictates that Section 230 does not allow it,” the opinion said. “Holding internet platforms accountable for their own misdeeds is quite another thing. This is particularly the case for human trafficking.”
For years, TFTP has reported on this phenomenon of Facebook attacking political speech while child exploitation goes unchecked. In 2018, Facebook and Twitter, without warning or justification, deleted the pages of the Free Thought Project and Police the Police, which together had over 5 million followers.
During this purge, they also removed hundreds of other pages, including massive police accountability groups, antiwar activists, alternative media, and libertarian news outlets. Facebook claimed to remove these pages in the name of fighting disinformation online and creating a safer user experience. But this was a farce. Illustrating what a sham that claim was, just weeks after Facebook promised to keep its community safe, a child was openly sold on its platform.
This was no isolated incident either. In 2020, The Guardian reported on a study suggesting that Facebook does not fully enforce its own standards banning content that exploits or endangers children.
The study examined at least 366 such cases between January 2013 and December 2019, according to a report from the not-for-profit investigative group Tech Transparency Project (TTP), which analyzed Department of Justice news releases.
Of those 366 cases of child sex abuse on Facebook, the social media giant reported just 9% to authorities. The other 91% were uncovered by investigations that authorities initiated themselves, not by Facebook.
As the great purge of anti-establishment views continues, remember that this company, which claims to have your best interests in mind, is, according to the aforementioned lawsuit, benefiting from the exploitation and trafficking of children.