The Dystopian Future of Facebook

By Mark Kernan

Source: CounterPunch

This year Facebook filed two very interesting patents in the US. One was a patent for emotion recognition technology, which recognises human emotions through facial expressions and can therefore assess what mood we are in at any given time: happy or anxious, for example. This can be done either by a webcam or through a phone camera. The technology is relatively straightforward. AI-driven algorithms analyse and then decipher facial expressions, matching the duration and intensity of each expression with a corresponding emotion. Take contempt, for example. Measured on a scale from 0 to 100, an expression of contempt might be read from a smirking smile, a furrowed brow and a wrinkled nose. An emotion can then be extrapolated from the data and linked to your dominant personality traits: openness, introversion or neuroticism, say.
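The matching step described above can be sketched in a few lines. This is purely illustrative: the expression names, intensity values and "templates" here are invented for the example, and real systems use trained neural networks rather than hand-written lookups.

```python
# Hypothetical sketch of expression-to-emotion matching: facial action
# intensities (0-100) are compared against simple emotion "templates"
# and the closest template wins. All values below are invented.

EMOTION_TEMPLATES = {
    # emotion: expected intensity of (smirk, brow_furrow, nose_wrinkle)
    "contempt":  (70, 60, 50),
    "happiness": (90, 5, 5),
    "neutral":   (5, 5, 5),
}

def classify(smirk, brow_furrow, nose_wrinkle):
    """Return the emotion whose template is closest (Euclidean distance)."""
    observed = (smirk, brow_furrow, nose_wrinkle)
    def distance(template):
        return sum((o - t) ** 2 for o, t in zip(observed, template)) ** 0.5
    return min(EMOTION_TEMPLATES, key=lambda e: distance(EMOTION_TEMPLATES[e]))

# A smirking smile with furrowed brow and wrinkled nose:
print(classify(65, 55, 45))  # contempt
```

The real engineering work is in estimating those intensity values from video frames; once you have them, the mapping to an emotion label is just pattern matching of this kind.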

The accuracy of the match may not be perfect, and it is always good to be sceptical about what is being claimed, but AI (artificial intelligence) learns exponentially and the technology keeps getting better; it is already much, much quicker than human intelligence.

Recently at Columbia University a competition was set up between human lawyers and their AI counterparts. Both read a series of non-disclosure agreements with loopholes in them. The AI found 95% of the loopholes, compared to 88% found by the humans. The human lawyers took 90 minutes to read the agreements; the AI took 22 seconds. More incredibly still, last year Google’s AlphaZero beat Stockfish 8 at chess. Stockfish 8 is an open-source chess engine with access to centuries of human chess experience. Yet AlphaZero taught itself using machine learning principles, free of human instruction, winning 28 of their 100 games and drawing 72. It took AlphaZero four hours to independently teach itself chess. Four hours from blank slate to genius.

A common misconception about algorithms is that they can be easily controlled. In fact they can learn, change and run themselves, a process known as deep “neural” learning. In other words, they run on self-improving feedback loops. Much of this is positive, of course: solutions to collective problems like climate change that no human has yet thought of become more possible in the future, and the social payoffs could be huge too. But what of the use of AI for more nefarious ends? What if, as Yuval Noah Harari says, AI becomes just another tool used by elites to consolidate their power even further in the 21st century? History teaches us that it isn’t luddite to ask this question, nor is it merely indulging in catastrophic thinking about the future. Rapidly evolving technology ending up in the hands of just a few mega companies, unregulated and uncontrolled, should seriously concern us all.

Algorithms, as Jamie Bartlett, the author of The People Vs Tech, puts it, are “the keys to the magic kingdom” of understanding deep-seated human psychology: they filter, predict, correlate, target and learn. They also manipulate. We would be naive in the extreme to think they don’t already, and even more naive to think the manipulation is done only by commercial entities. After all, it’s not as if there aren’t plenty of online tribes, some manufactured and some not, to be manipulated into and out of political viewpoints, or fleeced of their money.

In 2017 Facebook said it could detect teenagers’ moods and emotions, such as feeling nervous and insecure, from their posts, a claim it later denied, adding that it does not “offer tools to target people based on their emotional state”. The internal report was written by two Australian executives, Andy Sinn and David Fernandez. According to The Guardian, the report was written for a large bank and said that “the company has a database of its young users – 1.9 million high schoolers, 1.5 million tertiary students and 3 million young workers”.

Going one better still, Affectiva, a Boston company, claims to be able to detect and decode complex emotional and cognitive data from your face, voice and physiological state using emotion recognition technology (ECT), amassing 12 billion “emotion data points” across gender, age and ethnicity. Its founder has declared that Affectiva’s ECT can read your heart rate from a webcam without you wearing any sensors, simply by analysing the subtle colour changes in your face caused by blood flow. Next time you’re listening to Newstalk’s breakfast show, think of that.
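The webcam heart-rate claim corresponds to a technique known in the research literature as remote photoplethysmography (rPPG): the periodic change in skin colour as blood pulses through the face is recovered from video, and its dominant frequency is the heart rate. The sketch below substitutes a synthetic 1.2 Hz signal for the real face-region brightness trace, so the numbers are illustrative rather than a working detector.

```python
import numpy as np

# Illustrative rPPG sketch: a synthetic 1.2 Hz "blood flow" signal stands
# in for the average brightness of a face region across video frames.
fps = 30.0                      # camera frame rate
t = np.arange(0, 10, 1 / fps)   # 10 seconds of frames
heart_hz = 1.2                  # 1.2 Hz = 72 beats per minute
signal = 0.02 * np.sin(2 * np.pi * heart_hz * t)                 # pulse
signal += 0.005 * np.random.default_rng(0).normal(size=t.size)   # noise

# Find the dominant frequency within a plausible heart-rate band.
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
power = np.abs(np.fft.rfft(signal - signal.mean()))
band = (freqs > 0.7) & (freqs < 4.0)        # 42-240 bpm
bpm = 60 * freqs[band][np.argmax(power[band])]
print(round(bpm))  # 72
```

A real implementation has to first locate the face, average the pixel colour over a skin region per frame, and filter out motion and lighting changes; the frequency-analysis step at the end, however, is essentially this.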

Affectiva’s ultimate goal, of course, when you get past all the feel-good optimistic guff about “social connectivity”, “awesome innovation” and, worst of all, “empowering”, is, to use their own words, to “enable media creators to optimize their content”. Profiting from decoding our emotional states, in other words.

Maybe Facebook (and Google) would use this technology wisely for our benefit; then again, maybe not. It isn’t such a stretch to imagine how it could be used unethically too: to microtarget customised ads and messages at us depending on our state of mind at a given time, say, or to allow a Cambridge Analytica to harvest the personal data of 87 million Facebook users to subvert democracy, as happened with Brexit and Trump. Facebook claims it wasn’t aware of this, though. Well, maybe, maybe not; in spite of its protests in recent years it is still not especially transparent or accountable, given its enormous cultural and social power in our lives. Curiouser and curiouser, you might think, and you’d be right.

The second Facebook patent is even more interesting, if that’s the right word, or dystopian if you prefer. Patented this June and published under the code US20180167677 (with the abstract title of Broadcast Content View Analysis Based on Ambient Audio Recording, application no. 15/376,515), it illustrates a process by which secret messages (“ambient audio fingerprints” in the jargon) embedded in TV ads would trigger your smart technology (phone or TV) to record you while the ad was playing, presumably to gauge your reaction to the product being advertised at you through, perhaps, voice biometrics (i.e. the identification and recognition of the pitch and tone of your voice).

As the patent explains in near impenetrable but just about understandable jargon, this is done by first detecting one or more broadcasting signals (the advertisement) of a content item. Second, ambient audio of the content item is recorded, and an audio feature is extracted “from the recorded ambient audio to generate an ambient fingerprint”. Finally, wait for it, “the ambient audio fingerprint, time information of the recorded ambient audio, and an identifier of an individual associated with a client device (you and your phone or smart TV) recording the ambient audio” is sent “to an online system for determining whether there was an impression of the content by the individual.” It goes on to say that “the impression of the identified content item by the identified individual” is logged in a “data store of the online system”.
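Stripped of the jargon, the pipeline the patent describes is: record a short window of ambient audio, reduce it to a compact fingerprint, and ship that fingerprint plus a timestamp and user identifier to a server that matches it against known ad audio. A minimal sketch, with every name and the toy fingerprint scheme invented for illustration (real audio fingerprinting, e.g. spectral peak hashing, is far more robust):

```python
import hashlib
import time

def fingerprint(samples, chunk=256):
    """Hash the rise/fall pattern of energy across successive audio chunks."""
    energies = [sum(s * s for s in samples[i:i + chunk])
                for i in range(0, len(samples), chunk)]
    # One bit per chunk pair: did the energy rise or fall?
    bits = "".join("1" if b > a else "0"
                   for a, b in zip(energies, energies[1:]))
    return hashlib.sha256(bits.encode()).hexdigest()[:16]

def build_report(user_id, samples):
    """What the client device would send to the 'online system'."""
    return {"user": user_id,
            "fingerprint": fingerprint(samples),
            "recorded_at": int(time.time())}

def toy_audio():
    """Deterministic stand-in for an ad's audio samples."""
    return [((i * 7) % 13) - 6 for i in range(4096)]

# Server side: log an "impression" if the fingerprint matches a known ad.
known_ads = {fingerprint(toy_audio()): "ad-001"}
report = build_report("user-42", toy_audio())
matched = known_ads.get(report["fingerprint"])
print(matched)  # ad-001
```

The point of fingerprinting rather than uploading raw audio is bandwidth and deniability: only a short hash leaves the device, yet it still uniquely identifies which ad was playing in the room, and the attached user identifier and timestamp do the rest.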

It goes on to state that “content providers have a vested interest in knowing who have listened and/or viewed their content”, that the features described in the patent are not exhaustive, and that “many additional features and advantages will be apparent to one of ordinary skill in the art…”.

It is already obvious we don’t know how much Facebook and other big tech platforms monitor us, neither do we know how much data they hold on us individually and collectively and, critically, who has access to that data and how they could use it.

If you can sell consumer goods by such manipulation, why not whole ideologies? Chipping away at our human agency one dystopian tech innovation at a time paves the way for the morphing of late-stage capitalism into authoritarian capitalism, one efficiency gain at a time.

If put into place, such “innovations” are designed to monitor our emotional states for monetary gain. In essence, it is a type of online mood tracking in which we are the digital lab rats. Facebook is already valued at half a trillion US dollars, giving it huge economic and cultural power.

According to Private Eye magazine, Facebook’s legal team say the patent was filed “to prevent aggression from other companies”, and that “patents tend to focus on future-looking technology that is often speculative in nature and could be commercialised by other companies”. As Private Eye pointed out, though, it’s not as if Facebook has been completely transparent about such secretive issues in the past or present. The fact that Facebook generates billions by manipulating our emotions does not surprise us; their business model is based on it. But how they intend to do it in the future should surprise, and alert, us. We are, after all, the product. Over 90% of their revenues come from selling adverts. They have the market incentive.

How will all this play out in the future? It isn’t difficult to build a picture of a commercialised and rapacious big tech dystopia, the very opposite of the freedoms and civil liberties envisaged by the original pioneers of the internet, and the opposite of how they currently perceive themselves.

Verint, a leading multinational analytics & biometric corporation, with an office in Ireland, has been known to install and sell, “intrusive mass surveillance systems worldwide including to authoritarian governments”, according to Privacy International. Governments that routinely commit human rights abuses on their own citizens.

China, a world leader in surveillance capitalism, recently declared that by 2020 a national video surveillance network, Xueliang (Sharp Eyes in English), will be fully operational. Kafka and Orwell must be smirking knowingly somewhere. The name harks back to the post-war slogan in communist China that “the people have sharp eyes”, when neighbours were encouraged to spy on other neighbours and report counter-revolutionary or defeatist gossip about the 1949 revolution.

Democracies too have built overarching systems of surveillance. Edward Snowden told us in 2013 that the NSA was given secret direct access to the servers of big tech companies (Facebook, YouTube, Google and others) to collect private communications. As Glenn Greenwald said, the NSA’s unofficial “motto of omniscience” is: Know it all, Collect it all, Process it all.

Jaron Lanier, pioneer of virtual reality technology, tech renegade and, to some, apostate, recently called the likes of Facebook and Google “behaviour manipulation empires”. Their pervasive surveillance and subtle manipulation through “weaponised advertising”, he argues, debases democracy by polarising debate at a scale unthinkable even five or ten years ago. And it’s not only advertising that can be weaponised. Facebook, Google, Twitter and Instagram all have “manipulation engines” (algorithms we know little about) running in the background, Lanier says, designed specifically by thousands of psychological and “emotional engineers” (“choice architects” or “product philosophers”, to use the inane corporate gobbledygook). Their job is to keep you addicted to what’s now known as the “attention economy”, and attention equals profit. A better description still might be the attention/anxiety economy.

Twitter, for instance, has a three-second delay between the page loading and notifications loading; Facebook does something similar, and the notifications are always red, for urgency. These are known in psychology as intermittent variable rewards: reinforcement that keeps behaviour going through the hope of maybe being rewarded, with a like or a follower. This builds anticipation, releases feel-good neurotransmitters, and taps into your need to belong and to be heard; we are intensely social creatures. The downside is the opposite, of course: we can be thrown onto an emotional rollercoaster if the expected dopamine hit doesn’t come.
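The "intermittent variable rewards" mentioned above are what behavioural psychology calls a variable-ratio schedule, the same one slot machines use: a reward arrives only sometimes, at unpredictable intervals. A toy simulation, with the reward probability and seed invented for illustration:

```python
import random

# Simulate a day of phone checks under a variable-ratio reward schedule:
# each check only sometimes "pays off" (a new like or follower), which is
# the schedule behavioural psychology finds most habit-forming.

def simulate_checks(n_checks, reward_probability=0.3, seed=1):
    """Return, for each of n checks, whether it happened to be rewarded."""
    rng = random.Random(seed)
    return [rng.random() < reward_probability for _ in range(n_checks)]

checks = simulate_checks(150)   # the article's 150 daily phone checks
rewarded = sum(checks)
print(f"{rewarded} of {len(checks)} checks paid off")
```

The unpredictability is the point: because the user can never tell which check will be the rewarded one, every check feels potentially worthwhile, which is exactly the anticipation loop the paragraph describes.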

The goal is addiction: a consumption frenzy of socially approved validation. Big Tech’s social media universe is, as one reformed “choice architect” put it, “an attention seeking gravitational wormhole” that sucks you into their profit-seeking universe. If you don’t think so, check how many times you look at your phone every day. The average person checks 150 times a day, and most of that is social media. We’re all in an attention arms race now.

There is a great German word: Zukunftsangst. Roughly translated, it means future-anxiety. Maybe it should be renamed Zuckerbergangst instead.

No Need To Wait – Dystopia Is Almost Upon Us

Source: TruePublica

Microsoft’s CEO has warned the technology industry against creating a dystopian future, the likes of which have been predicted by authors including George Orwell and Aldous Huxley. Satya Nadella kicked off the company’s 2017 Build conference with a keynote that was as unexpected as it was powerful. He told the developers in attendance that they have a huge responsibility, and that the choices they make could have enormous implications.

They won’t listen, of course. The collection of big data, along with its management, sale and distribution and the systems architecture to control it, is now worth roughly double global military defence expenditure. In fact, this year the big data industry overtook the world’s most valuable traded commodity: oil.

The truth is that the tech giants have already captured us all. We are already living in the beginnings of a truly dystopian world.

Leaving aside the endemic surveillance society our government has chosen on our behalf, with no debate political or otherwise, we already have proof of the now and of where it is leading. With fingerprint scanning, facial recognition and various virtual wallets to pay for deliveries, some would say your identity is as good as stolen. If it isn’t, it soon will be: the hacking industry, already worth a mind-blowing $1 trillion annually, is expected to reach $2.1 trillion in just 14 months’ time.

The reality of not being able to take public transport, hire a car, or buy a book or a coffee without full personal identification is almost upon us. Britain even intended to be completely cashless by 2025, a plan postponed only by the impact of Brexit.

Alexa, the Amazon home assistant, listens to everything said in the house and is known to record conversations. Recently, police in Arkansas in the USA demanded that Amazon turn over information collected from a murder suspect’s Echo, the speaker that controls Alexa, because they already knew what information could be extracted from it.

32M is the first company in the US to provide a human chip implant, allowing employees “to make purchases in their break-room micro market, open doors, login to computers, use the copy machine.” 32M also confirmed what the chip could really do, telling employees to “use it as your passport, public transit and all purchasing opportunities.”

Various apps now locate people you may know, and your own location can be shared with others without your knowledge. We’ve known for years that governments and private corporations have access to this data, whether you like it or not.

Other countries are deploying even scarier technologies. Hypebeast Magazine reports that Aadhaar is a 12-digit identity number issued to all Indian residents based on their biometric and demographic data. “This data must be linked to their bank account or else they’ll face the risk of losing access to their account. Folks have until the end of the year to do this, with phone numbers soon to be connected through the 12 digits by February. Failure to do so will deactivate the service.” The technology has the ability to refuse access to state-supplied services such as healthcare.

Our article “Insurance Industry Leads The Way in Social Credit Systems” also highlights what the fusion of technology and data is likely to end up doing to us. An astonishing 96 per cent of insurers think that ecosystems or applications made by autonomous organisations are having a major impact on the insurance industry. Social credit mechanisms are being developed, and some already implemented, that will shape our future behaviour and affect us all, both individually and negatively.

The Chinese government plans to launch its Social Credit System in 2020. Already being piloted on 12 million of its citizens, it aims to judge the trustworthiness – or otherwise – of China’s 1.3 billion residents. Something as innocuous as a person’s shopping habits becomes a measure of character. But the system not only investigates behaviour, it shapes it, “nudging” citizens away from purchases and behaviours the government does not like. Friends are considered as well: an individual’s credit score falls if their friends are deemed untrustworthy. It is hard to imagine how far this will go in the end.

However, to get us all there, we need to be distracted from what is going on in the background. Some are already concerned.

 

Distraction – detaching us from truth and reality

The Guardian recently wrote an interesting piece highlighting some of the concerns of those with expert insider knowledge of the tech industry. For instance, Justin Rosenstein, the former Google and Facebook engineer who helped build the ‘like’ button, is concerned. He believes there is a case for state regulation of smartphone technology because of its “psychologically manipulative advertising”, saying the moral impetus is comparable to taking action against fossil fuel or tobacco companies.

“If we only care about profit maximisation,” he says, “we will go rapidly into dystopia.” Rosenstein also observes that after Brexit and the election of Trump, digital forces have completely upended the political system and, left unchecked, could render democracy as we know it obsolete.

Carole Cadwalladr’s recent exposé in the Observer/Guardian proved beyond doubt that democracy has already departed. Here we learn about a shadowy global operation involving big data and billionaires that influenced the result of the EU referendum. Britain’s future place in the world has been altered by technology.

Nir Eyal, 39, the author of Hooked: How to Build Habit-Forming Products, writes: “The technologies we use have turned into compulsions, if not full-fledged addictions.” Eyal continues: “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”.

Eyal feels the threat and protects his own family by cutting off the internet completely at a set time every day. “The idea is to remember that we are not powerless,” he said. “We are in control.”

The truth is we are no longer in control, and have not been since the Snowden revelations of 2013 taught us that our government was lying to us.

Tristan Harris, a 33-year-old former Google employee turned vocal critic of the tech industry agrees about the lack of control. “All of us are jacked into this system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.” Harris insists that billions of people have little choice over whether they use these now ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.

Harris is a tech whistleblower. He is lifting the lid on the vast powers accumulated by technology companies and the ways they are abusing the influence they have at their fingertips – literally.

“A handful of people, working at a handful of technology companies, through their choices will steer what a billion people are thinking today.”

The techniques these companies use, such as social reciprocity and autoplay, are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year revealed that the company can identify when teenagers feel “worthless” or “insecure”. Harris adds that this is “a perfect model of what buttons you can push in a particular person”.

Chris Marcellino, 33, a former Apple engineer now in the final stages of retraining to be a neurosurgeon, notes that these types of technologies can affect the same neurological pathways as gambling and drug use. “These are the same circuits that make people seek out food, comfort, heat, sex,” he says.

Roger McNamee, a venture capitalist who benefited from hugely profitable investments in Google and Facebook, has grown disenchanted with both of the tech giants. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.”

James Williams, an ex-Google strategist who built the metrics system for the company’s global search advertising business, says Google now has the “largest, most standardised and most centralised form of attentional control in human history”. “Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.

Williams also asks: if the attention economy erodes our ability to remember, to reason and to make decisions for ourselves, faculties that are essential to self-governance, what hope is there for democracy itself?

“The dynamics of the attention economy are structurally set up to undermine the human will,” he says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point at which democracy no longer functions?

“Will we be able to recognise it, if and when it happens?” Williams says. “And if we can’t, then how do we know it hasn’t happened already?”

 

The dystopian arrival

Within ten years, some speculate, many of us will be wearing smart eye lenses. Coupled with social media, we’ll be able to identify strangers and work out that a particular individual, in a bar say, has low friend compatibility and that the data shows you will likely not have a fruitful conversation. This idea barely scratches the surface of the information overload en route right now.

It is not at all foolish to think that in that same bar a patron is shouting at the bartender, who refuses to serve him another drink because the glass he was holding measured his blood-alcohol level through the sweat on his fingers. He’ll have to wait at least 45 minutes before he’ll be permitted to order another scotch. You might even think that is a good idea. It isn’t.

Google’s Quantum Artificial Intelligence Lab already works with organisations associated with NASA. Google’s boss sits on a Pentagon advisory board, with links plugged directly into the surveillance architecture of the NSA in the USA and GCHQ in Britain. This world, where artificial intelligence makes its mark, will, as Williams suggests above, deliberately undermine the ability to think for yourself.

In the eye-lens scenario, you might even have the ability to command your eyewear to shut down. But when you do, you are suddenly confronted with an un-Googled world. It appears drab and colourless by comparison. The people before you are bland, washed out and unattractive. The art, plants, wall paint, lighting and decorations have all been shaped by your own preferences, and without the distortion field your wearable eyewear provided, the world appears as a grey, lifeless template.

You find it difficult to last without the assistance of your self-imposed augmented life, and with nervous laughter you switch it back on. The world viewed through the prism of your computer eyewear has become your default setting. You know you have free will, but you don’t feel like you need it. As Marcellino says, the same neurological pathways as gambling and drug use drive how you choose to see the world.

This type of technology will be available, and these scenarios will become real, sooner than you think.

Our governments, allied with the tech giants, are coercing us into a place of withering obedience through 360-degree state surveillance. New technology, somehow seen as the road to liberty, contentment and prosperity, is really our future being shaped by a system that will destroy our civil liberties, crush our human rights and eventually ensnare and trap us all. This much is already being attempted in China and Japan with social credit mechanisms and pre-crime technology, a truly frightening prospect. Without debate or our knowledge, these technologies are already in use here in Western democracies.

 

What Country Is This? Forced Blood Draws, Cavity Searches and Colonoscopies

By John W. Whitehead

Source: The Rutherford Institute

“The Fourth Amendment was designed to stand between us and arbitrary governmental authority. For all practical purposes, that shield has been shattered, leaving our liberty and personal integrity subject to the whim of every cop on the beat, trooper on the highway and jail official.”—Herman Schwartz, The Nation

Our freedoms—especially the Fourth Amendment—are being choked out by a prevailing view among government bureaucrats that they have the right to search, seize, strip, scan, shoot, spy on, probe, pat down, taser, and arrest any individual at any time and for the slightest provocation.

Forced cavity searches, forced colonoscopies, forced blood draws, forced breath-alcohol tests, forced DNA extractions, forced eye scans, forced inclusion in biometric databases: these are just a few ways in which Americans are being forced to accept that we have no control over our bodies, our lives and our property, especially when it comes to interactions with the government.

Worse, on a daily basis, Americans are being made to relinquish the most intimate details of who we are—our biological makeup, our genetic blueprints, and our biometrics (facial characteristics and structure, fingerprints, iris scans, etc.)—in order to clear the nearly insurmountable hurdle that increasingly defines life in the United States: we are now guilty until proven innocent.

Such is life in America today that individuals are being threatened with arrest and carted off to jail for the least hint of noncompliance, homes are being raided by police under the slightest pretext, property is being seized on the slightest hint of suspicious activity, and roadside police stops have devolved into government-sanctioned exercises in humiliation and degradation with a complete disregard for privacy and human dignity.

Consider, for example, what happened to Utah nurse Alex Wubbels after a police detective demanded to take blood from a badly injured, unconscious patient without a warrant.

Wubbels refused, citing hospital policy that requires police to either have a warrant or permission from the patient in order to draw blood. The detective had neither. Irate, the detective threatened to have Wubbels arrested if she didn’t comply. Backed up by her supervisors, Wubbels respectfully stood her ground only to be roughly grabbed, shoved out of the hospital, handcuffed and forced into an unmarked car while hospital police looked on and failed to intervene (take a look at the police body camera footage, which has gone viral, and see for yourself).

Michael Chorosky didn’t have an advocate like Wubbels to stand guard over his Fourth Amendment rights. Chorosky was surrounded by police, strapped to a gurney and then had his blood forcibly drawn after refusing to submit to a breathalyzer test. “What country is this? What country is this?” cried Chorosky during the forced blood draw.

What country is this indeed?

Unfortunately, forced blood draws are just the tip of the iceberg when it comes to the indignities and abuses being heaped on Americans in the so-called name of “national security.”

Forced cavity searches, forced colonoscopies and forced roadside strip searches are also becoming par for the course in an age in which police are taught to have no respect for the citizenry’s bodily integrity whether or not a person has done anything wrong.

For example, in 2015, 21-year-old Charnesia Corley was pulled over by Texas police, allegedly for “rolling” through a stop sign. Claiming they smelled marijuana, police handcuffed Corley, placed her in the back of the police cruiser, and then searched her car for almost an hour. No drugs were found in the car.

As the Houston Chronicle reported:

Returning to his car where Corley was held, the deputy again said he smelled marijuana and called in a female deputy to conduct a cavity search. When the female deputy arrived, she told Corley to pull her pants down, but Corley protested because she was cuffed and had no underwear on. The deputy ordered Corley to bend over, pulled down her pants and began to search her. Then…Corley stood up and protested, so the deputy threw her to the ground and restrained her while another female was called in to assist. When backup arrived, each deputy held one of Corley’s legs apart to conduct the probe.

The cavity search lasted 11 minutes. This practice is referred to as “rape by cop.”

Although Corley was charged with resisting arrest and with possession of 0.2 grams of marijuana, those charges were subsequently dropped.

David Eckert was forced to undergo an anal cavity search, three enemas, and a colonoscopy after allegedly failing to yield to a stop sign at a Wal-Mart parking lot. Cops justified the searches on the grounds that they suspected Eckert was carrying drugs because his “posture [was] erect” and “he kept his legs together.” No drugs were found.

During a routine traffic stop, Leila Tarantino was subjected to two roadside strip searches in plain view of passing traffic, while her two children—ages 1 and 4—waited inside her car. During the second strip search, presumably in an effort to ferret out drugs, a female officer “forcibly removed” a tampon from Tarantino. No contraband or anything illegal was found.

Thirty-eight-year-old Angel Dobbs and her 24-year-old niece, Ashley, were pulled over by a Texas state trooper on July 13, 2012, allegedly for flicking cigarette butts out of the car window. Insisting that he smelled marijuana, the trooper proceeded to interrogate them and search the car. Despite the fact that both women denied smoking or possessing any marijuana, the police officer then called in a female trooper, who carried out a roadside cavity search, sticking her fingers into the older woman’s anus and vagina, then performing the same procedure on the younger woman, wearing the same pair of gloves. No marijuana was found.

Sixty-nine-year-old Gerald Dickson was handcuffed and taken into custody (although not arrested or charged with any crime) after giving a ride to a neighbor’s son, whom police suspected of being a drug dealer. Despite Dickson’s insistence that the bulge under his shirt was the result of a botched hernia surgery, police ordered Dickson to “strip off his clothes, bend over and expose all of his private parts. No drugs or contraband were found.”

Meanwhile, four Milwaukee police officers were charged with carrying out rectal searches of suspects on the street and in police district stations over the course of several years. One of the officers was accused of conducting searches of men’s anal and scrotal areas, often inserting his fingers into their rectums and leaving some of his victims with bleeding rectums.

It’s gotten so bad that you don’t even have to be suspected of possessing drugs to be subjected to a strip search.

A North Carolina public school allegedly strip-searched a 10-year-old boy in search of a $20 bill lost by another student, despite the fact that the boy, J.C., twice told school officials he did not have the missing money. The assistant principal reportedly ordered the fifth grader to disrobe down to his underwear and subjected him to an aggressive strip-search that included rimming the edge of his underwear. The missing money was later found in the school cafeteria.

Suspecting that Georgia Tech alum Mary Clayton might have been attempting to smuggle a Chick-Fil-A sandwich into the football stadium, a Georgia Tech police officer allegedly subjected the season ticket-holder to a strip search that included a close examination of her underwear and bra. No contraband chicken was found.

What these incidents show is that while forced searches may span a broad spectrum of methods and scenarios, the common denominator remains the same: a complete disregard for the rights of the citizenry.

In fact, in the wake of the U.S. Supreme Court’s ruling in Florence v. Burlington, any person who is arrested and processed at a jailhouse, regardless of the severity of the offense (it can be nothing more than a minor traffic violation), can be subjected to a strip search by police or jail officials without reasonable suspicion that the arrestee is carrying a weapon or contraband.

Examples of minor infractions which have resulted in strip searches include: individuals arrested for driving with a noisy muffler, driving with an inoperable headlight, failing to use a turn signal, riding a bicycle without an audible bell, making an improper left turn, and engaging in an antiwar demonstration (the individual searched was a nun, a Sister of Divine Providence for 50 years).

Police have also carried out strip searches for passing a bad check, dog leash violations, filing a false police report, failing to produce a driver’s license after making an illegal left turn, having outstanding parking tickets, and public intoxication. A failure to pay child support can also result in a strip search.

As technology advances, these searches are becoming more invasive on a cellular level, as well.

For instance, close to 600 motorists leaving Penn State University one Friday night were stopped by police and, without their knowledge or consent, subjected to a breathalyzer test using flashlights that can detect the presence of alcohol on a person’s breath. These passive alcohol sensors are being hailed as a new weapon in the fight against DUIs. (Those who refuse to submit to a breathalyzer test are being subjected to forced blood draws. Thirty states presently allow police to perform forced blood draws on drivers as part of a nationwide “No Refusal” initiative funded by the federal government. Not even court rulings declaring such practices unconstitutional in the absence of a warrant have slowed down the process. Now police simply keep a magistrate on call to rubber-stamp the procedure over the phone.)

The National Highway Traffic Safety Administration, the same government agency that funds the “No Refusal” DUI checkpoints and forcible blood draws, is also funding nationwide roadblocks aimed at getting drivers to “voluntarily” provide police with DNA derived from saliva and blood samples, reportedly to study inebriation patterns. In at least 28 states, there’s nothing voluntary about having one’s DNA collected by police in instances where you’ve been arrested, whether or not you’re actually convicted of a crime. All of this DNA data is being fed to the federal government.

Airline passengers, already subjected to virtual strip searches, are now being scrutinized even more closely, with the Customs and Border Protection agency tasking airport officials with monitoring the bowel movements of passengers suspected of ingesting drugs. They even have a special hi-tech toilet designed to filter through a person’s fecal waste.

Iris scans, an essential part of the U.S. military’s boots-on-the-ground approach to keeping track of civilians in Iraq and Afghanistan, are becoming a de facto method of building the government’s already mammoth biometrics database. Funded by the Dept. of Justice, along with other federal agencies, the iris scan technology is being incorporated into police precincts, jails, immigration checkpoints, airports and even schools. School officials—from elementary to college—have begun using iris scans in place of traditional ID cards. In some parts of the country, parents wanting to pick their kids up from school have to first submit to an iris scan.

As for those endless pictures everyone so cheerfully uploads to Facebook (which has the largest facial recognition database in the world) or anywhere else on the internet, they’re all being accessed by the police, filtered with facial recognition software, uploaded into the government’s mammoth biometrics database and cross-checked against its criminal files. With good reason, civil libertarians fear these databases could “someday be used for monitoring political rallies, sporting events or even busy downtown areas.”

While the Fourth Amendment was created to prevent government officials from searching an individual’s person or property without a warrant and probable cause—evidence that some kind of criminal activity was afoot—the founders could scarcely have imagined a world in which we needed protection against widespread government breaches of our privacy, including on a cellular level.

Yet that’s exactly what we are lacking and what we so desperately need.

Unfortunately, the indignities being heaped upon us by the architects and agents of the American police state—whether or not we’ve done anything wrong—are just a foretaste of what is to come.

As I make clear in my book Battlefield America: The War on the American People, the government doesn’t need to tie you to a gurney and forcibly take your blood or strip you naked by the side of the road in order to render you helpless. It has other methods—less subtle perhaps but equally humiliating, devastating and mind-altering—of stripping you of your independence, robbing you of your dignity, and undermining your rights.

With every court ruling that allows the government to operate above the rule of law, every piece of legislation that limits our freedoms, and every act of government wrongdoing that goes unpunished, we’re slowly being conditioned to accept a society in which we have little real control over our bodies or our lives.

Grooming Students for A Lifetime of Surveillance

The same technologists who protest against the NSA’s metadata collection programs are the ones profiting the most from the widespread surveillance of students.

By Jessy Irwin

Source: Model View Culture

Since 2011, billions of dollars of venture capital investment have poured into public education through private, for-profit technologies that promise to revolutionize education. Designed for the “21st century” classroom, these tools promise to remedy the many, many societal ills facing public education with artificial intelligence, machine learning, data mining, and other technological advancements.

They are also being used to track and record every move students make in the classroom, grooming students for a lifetime of surveillance and turning education into one of the most data-intensive industries on the face of the earth. The NSA has nothing on the monitoring tools that education technologists have developed to “personalize” and “adapt” learning for students in public school districts across the United States.

(Mega)data Collection + Analysis

“Adaptive,” “personalized” learning platforms are one of the most heavily funded verticals in education technology. By breaking down learning into a series of tasks, and further distilling those tasks down to a series of clicks that can be measured and analyzed, companies like Knewton (which has raised $105 million in venture capital) and the recently shuttered inBloom (which raised over $100 million from the Gates Foundation) gather immense amounts of information about students into lengthy profiles containing personal information, socioeconomic status and other data that is mined for patterns and insights to improve performance. For students, these clickstreams and data trails begin when they are 5 years old, barely able to read, much less type, the usernames and passwords required to access their online learning portals.

Data collection and number crunching aren’t the only technologies being explored to revolutionize education. Technology billionaire and philanthropist Bill Gates funded a $1.1 million project to fit middle-school students with biometric sensors to monitor their response and engagement levels during lessons, and advocated a $5 billion program to install video cameras in every classroom to record teachers for evaluation.

The Family Educational Rights and Privacy Act, a law put in place in 1974 to protect student academic records, does nothing to protect student data once it is in the hands of education technology companies. Instead, FERPA threatens to take federal funding away from schools that are found to have breached student privacy, while failing to mandate even bare-minimum security standards for the storage and transmission of student data. In fact, a recent revision of FERPA increased the power that companies have to collect and mine student data. Though lawmakers and privacy advocates are regularly outraged at the immense volume of student data freely floating through the web, the repeated failure to create legislation that protects student data from being used for profit is astounding.

One thing is clear: those who have the power to protect student privacy will not do so as long as they can continue to subsidize the cost of public education with student data.

Internet Censorship in Schools

In most educational institutions, the vast majority of IT operations are focused on monitoring, filtering and blocking web traffic instead of building secure networks that safeguard student records and sensitive behavioral data. Nowhere is this more apparent than in the widespread adoption of web filtering software tools in K-12 schools. Usage of these technologies is required for compliance with programs like E-Rate, which grant federal money to schools to fund internet access for their students.

To be eligible for funding from the E-Rate program, schools are required to comply with federal regulations that ban access to websites displaying pornography, graphic material, or anything else that could be judged immoral, improper or lewd. More often than not, these subjective criteria are determined by the opinions and belief systems of school administrators under political pressure to deny students access to content on controversial topics like evolution, birth control and sex education. These decisions disproportionately affect young girls and LGBTQ students by denying them access to sites that provide important information about their rights, their developing bodies, their sexuality and their access to contraceptives. In the case of Securly, the first filtering tool designed for schools, the controls set by IT and administration for web access can extend far beyond the walls of the school and determine what content students can access while using school-issued machines from their home internet connections.

Despite the many positive contributions of the internet in the distribution and dissemination of knowledge across the planet, students are regularly denied access to valuable information that could positively impact their learning… all to safeguard a small percentage of federal budget money granted to their schools. The implications of this are particularly severe for low-income students who do not have access to the Internet at home; without the ability to freely access the web on their own terms, their digital literacy skills lag behind those of their affluent peers. Though teachers request better and broader internet access for students in their classrooms, administrator-imposed blocks and filters on school internet leave most students woefully unprepared to navigate the realities of the web. When students do find a way around the tools used to limit their access to the outside world (this happened with a group of students who were given iPads in the Los Angeles Unified School District last year), they’re labelled as “hackers” or miscreants, and disciplined for using Tor, a tool popular among students for anonymous web browsing and circumventing blacklists that ban websites from school networks.

Social Media Surveillance

Schools are adopting many other surveillance technologies with unprecedented reach into the private communications and lives of students and their families. In Lower Merion, PA, a suburb outside of Philadelphia, educators engaged remote administration tools on students’ laptops to regularly spy on their activities while at home. In a case that made its way into federal courts, one student was punished by administrators who mistook candy pictured through his laptop’s camera for drugs. While the full extent of the spying was never exposed, parents and students have expressed concern about educators having the ability to watch young girls undress in the privacy of their homes, unaware that they were being watched through their school-issued computers.

In 2013, the Glendale Unified School District in Glendale, CA took a page straight from the NSA surveillance handbook by seeking out a $40,000 contract with Geo Listening, a social media monitoring company that charges schools to eavesdrop on student social media chatter. While the company claims to access only posts that are public in the school districts it works with, and says it works closely with school districts to tailor their monitoring programs to prevent cyberbullying, suicide and active shooter incidents, it is very easy (too easy, in fact) to use such technologies to identify and target students who have been labeled deviant or delinquent within their communities, or who are otherwise outspoken and critical of their teachers and schools.

Schools are also demanding access to students’ social media communications in ways that severely harm their constitutionally protected rights to free speech. In Minnewaska, MN, a female student who complained about a hall monitor’s behavior in a Facebook post was questioned and given in-school suspension. Later, when a parent reported the student for “sexting” over Facebook with a classmate, she was removed from class again as a group of educators and a police officer armed with a taser demanded that she hand over her password. They then read private communications that took place outside of school through her Facebook account. After the student had been pulled from class multiple times, suspended from school, and barred from attending a school field trip (the same punishment was not doled out to the male student involved in the messaging), the ACLU stepped in to defend her right to privacy and free speech in communications outside of school property. Though the ruling in the case upheld students’ protection under the 1st and 4th Amendments, school districts around the country continue to demand access to students’ social media accounts and threaten to mark students’ academic records in ways that make it difficult to get into a desired university or to seek other avenues for continued education.

Physical Surveillance

In addition to the online monitoring taking place in schools, there are many surveillance mechanisms in place to enforce physical security in public schools. Since the shootings that took place at Virginia Tech in 2007, and again after those that took place in Sandy Hook, CT in 2012, technology companies have launched myriad tools designed to minimize the potential loss of life in the next active shooting incident at a school.

By preying on the absolute worst fears of administrators and parents across the country, technology companies are earning millions of dollars selling security “solutions” that do not meaningfully address the threats they claim to dispel. School districts that purchase these systems further perpetuate the farce of security theater and infringe on students’ rights to privacy and individual freedom.

A Lifetime of Surveillance

When we develop and use educational technologies that monitor a student’s every moment in school and online, we groom that student for a lifetime of surveillance from the NSA, from data brokers, from advertisers and marketers, and even from CCTV cameras. By watching every move that students make while learning, we model to students that we do not trust them, and that ultimately their every move will be under scrutiny from others. When students recognize that they are being watched, they begin to act differently, and from that very moment they begin to cede one small bit of freedom at a time.

Though the education technology revolution continually promises a silver bullet that will be a great democratizing force for all of society’s ills, it categorically disregards the patriarchal power structures and biases that both legitimize and perpetuate discrimination against minorities and marginalized groups. Despite it being well within the scope of educational technology tools to track, identify and expose biases towards groups of students, technologists avoid implementing small changes that monitor educator performance and correct for unconscious biases that negatively affect student learning. Because the surveillance taking place in schools is typically based on qualitative criteria like morality, appropriateness and good behavior, these technologies extend current practices and prejudices that perpetuate injustices against marginalized groups.

There are few to no safeguards built into the online and offline monitoring systems to protect students from the abuse of these tools. Young female students who are active on social media can be unfairly targeted, slut-shamed and disciplined for suggestive language that takes place outside of school, while their male counterparts are not held equally accountable for participating in sexually charged online conversations. Youth of color, a group that is disproportionately stereotyped as angry, aggressive, and unpredictable by educators, can easily be monitored, disciplined, and entered into the juvenile justice system for any outburst that could vaguely be misinterpreted as a threat to a homogeneous caucasian school culture. Any student grappling with issues of abuse, depression, disability, gender identity or sexuality could easily be discovered by online surveillance tools, stigmatized and outed to their teachers, parents and wider community.

Education technologists also continue to widen the digital divide between the affluent and the economically oppressed. Despite an industry-wide insistence that technology is not being developed to replace educators in the classroom, many poor school districts faced with massive budget cuts are implementing experimental blended learning programs reliant on “adaptive” and “personalized” software as a way to mitigate the effect of large class sizes on student learning. This means that students who attend costly private schools or live within rich school districts that can afford to employ more educators and maintain smaller class sizes receive much more personalized instruction from their teachers. Instead of receiving much-needed interaction and personalized learning directly from educators, poor students living in disadvantaged communities receive instruction from educational software that collects their data (which is likely to be sold), and have less individual instruction time with teachers than their affluent counterparts.

By developing technologies that collect, track, record, and analyze every move a student makes both online and off, technologists, investors, and educators are ensuring that today’s students will have less privacy than any generation that came before them, threatening to make privacy and anonymity unattainable for future generations. Though the surveillance mechanisms at play in education technologies affect the privacy of millions of students who pass through the education system each year, this system is a profound, persistent threat to the privacy and individual liberty of LGBTQ students, low-income students, and students of color, who have already been so severely failed by the status quo.

Ironically, the same technologists and investors who protest against the NSA’s metadata collection programs are the ones profiting the most from the widespread surveillance of students across the country, by building educational tools with the same function.