Technocensorship: The Government’s War on So-Called Dangerous Ideas

By John & Nisha Whitehead

Source: The Rutherford Institute

“There is more than one way to burn a book. And the world is full of people running about with lit matches.”—Ray Bradbury

What we are witnessing is the modern-day equivalent of book burning, which involves doing away with dangerous ideas—legitimate or not—and the people who espouse them. Seventy years after Ray Bradbury’s novel Fahrenheit 451 depicted a fictional world in which books are burned in order to suppress dissenting ideas, while televised entertainment is used to anesthetize the populace and render them easily pacified, distracted and controlled, we find ourselves navigating an eerily similar reality.

Welcome to the age of technocensorship.

On paper—under the First Amendment, at least—we are technically free to speak.

In reality, however, we are now only as free to speak as a government official—or corporate entities such as Facebook, Google or YouTube—may allow.

Case in point: internal documents released by the House Judiciary Select Subcommittee on Weaponization of the Federal Government confirmed what we have long suspected: that the government has been working in tandem with social media companies to censor speech.

By “censor,” we’re referring to concerted efforts by the government to muzzle, silence and altogether eradicate any speech that runs afoul of the government’s own approved narrative.

This is political correctness taken to its most chilling and oppressive extreme.

The revelations that Facebook worked in concert with the Biden administration to censor content related to COVID-19, including humorous jokes, credible information and so-called disinformation, followed on the heels of a ruling by a federal court in Louisiana that prohibits executive branch officials from communicating with social media companies about controversial content in their online forums.

Likening the government’s heavy-handed attempts to pressure social media companies to suppress content critical of COVID vaccines or the election to “an almost dystopian scenario,” Judge Terry Doughty warned that “the United States Government seems to have assumed a role similar to an Orwellian ‘Ministry of Truth.’”

This is the very definition of technofascism.

Clothed in tyrannical self-righteousness, technofascism is powered by technological behemoths (both corporate and governmental) working in tandem to achieve a common goal.

The government is not protecting us from “dangerous” disinformation campaigns. It is laying the groundwork to insulate us from “dangerous” ideas that might cause us to think for ourselves and, in so doing, challenge the power elite’s stranglehold over our lives.

Thus far, the tech giants have been able to sidestep the First Amendment by virtue of their non-governmental status, but it’s a dubious distinction at best when they are marching in lockstep with the government’s dictates.

As Philip Hamburger and Jenin Younes write for The Wall Street Journal: “The First Amendment prohibits the government from ‘abridging the freedom of speech.’ Supreme Court doctrine makes clear that government can’t constitutionally evade the amendment by working through private companies.”

Nothing good can come from allowing the government to sidestep the Constitution.

The steady, pervasive censorship creep that is being inflicted on us by corporate tech giants with the blessing of the powers-that-be threatens to bring about a restructuring of reality straight out of Orwell’s 1984, where the Ministry of Truth polices speech and ensures that facts conform to whatever version of reality the government propagandists embrace.

Orwell intended 1984 as a warning. Instead, it is being used as a dystopian instruction manual for socially engineering a populace that is compliant, conformist and obedient to Big Brother.

This is the slippery slope that leads to the end of free speech as we once knew it.

In a world increasingly automated and filtered through the lens of artificial intelligence, we are finding ourselves at the mercy of inflexible algorithms that dictate the boundaries of our liberties.

Once artificial intelligence becomes a fully integrated part of the government bureaucracy, there will be little recourse: we will all be subject to the intransigent judgments of techno-rulers.

This is how it starts.

First, the censors went after so-called extremists spouting so-called “hate speech.”

Then they went after so-called extremists spouting so-called “disinformation” about stolen elections, the Holocaust, and Hunter Biden.

By the time so-called extremists found themselves in the crosshairs for spouting so-called “misinformation” about the COVID-19 pandemic and vaccines, the censors had developed a system and strategy for silencing the nonconformists.

Eventually, depending on how the government and its corporate allies define what constitutes “extremism,” “we the people” might all be considered guilty of some thought crime or other.

Whatever we tolerate now—whatever we turn a blind eye to—whatever we rationalize when it is inflicted on others, whether in the name of securing racial justice or defending democracy or combatting fascism, will eventually come back to imprison us, one and all.

Watch and learn.

We should all be alarmed when any individual or group—prominent or not—is censored, silenced and made to disappear from Facebook, Twitter, YouTube and Instagram for voicing ideas that are deemed politically incorrect, hateful, dangerous or conspiratorial.

Given what we know about the government’s tendency to define its own reality and attach its own labels to behavior and speech that challenges its authority, this should be cause for alarm across the entire political spectrum.

Here’s the point: you don’t have to like or agree with anyone who has been muzzled or made to disappear online because of their views, but to ignore the long-term ramifications of such censorship is dangerously naïve, because whatever powers you allow the government and its corporate operatives to claim now will eventually be used against you by tyrants of your own making.

As Glenn Greenwald writes for The Intercept:

The glaring fallacy that always lies at the heart of pro-censorship sentiments is the gullible, delusional belief that censorship powers will be deployed only to suppress views one dislikes, but never one’s own views… Facebook is not some benevolent, kind, compassionate parent or a subversive, radical actor who is going to police our discourse in order to protect the weak and marginalized or serve as a noble check on mischief by the powerful. They are almost always going to do exactly the opposite: protect the powerful from those who seek to undermine elite institutions and reject their orthodoxies. Tech giants, like all corporations, are required by law to have one overriding objective: maximizing shareholder value. They are always going to use their power to appease those they perceive wield the greatest political and economic power.

Be warned: it’s a slippery slope from censoring so-called illegitimate ideas to silencing truth.

Eventually, as George Orwell predicted, telling the truth will become a revolutionary act.

If the government can control speech, it can control thought and, in turn, it can control the minds of the citizenry.

It’s happening already.

With every passing day, we’re being moved further down the road towards a totalitarian society characterized by government censorship, violence, corruption, hypocrisy and intolerance, all packaged for our supposed benefit in the Orwellian doublespeak of national security, tolerance and so-called “government speech.”

Little by little, Americans are being conditioned to accept routine incursions on their freedoms.

This is how oppression becomes systemic, what is referred to as creeping normality, or a death by a thousand cuts.

It’s a concept invoked by Pulitzer Prize-winning scientist Jared Diamond to describe how major changes, if implemented slowly in small stages over time, can be accepted as normal without the shock and resistance that might greet a sudden upheaval.

Diamond’s concerns related to Easter Island’s now-vanished civilization and the societal decline and environmental degradation that contributed to it, but it’s a powerful analogy for the steady erosion of our freedoms and decline of our country right under our noses.

As Diamond explains, “In just a few centuries, the people of Easter Island wiped out their forest, drove their plants and animals to extinction, and saw their complex society spiral into chaos and cannibalism… Why didn’t they look around, realize what they were doing, and stop before it was too late? What were they thinking when they cut down the last palm tree?”

His answer: “I suspect that the disaster happened not with a bang but with a whimper.”

Much like America’s own colonists, Easter Island’s early colonists discovered a new world—“a pristine paradise”—teeming with life. Yet almost 2000 years after its first settlers arrived, Easter Island was reduced to a barren graveyard by a populace so focused on their immediate needs that they failed to preserve paradise for future generations.

The same could be said of America today: it, too, is being reduced to a barren graveyard by a populace so focused on their immediate needs that they are failing to preserve freedom for future generations.

In Easter Island’s case, as Diamond speculates:

The forest…vanished slowly, over decades. Perhaps war interrupted the moving teams; perhaps by the time the carvers had finished their work, the last rope snapped. In the meantime, any islander who tried to warn about the dangers of progressive deforestation would have been overridden by vested interests of carvers, bureaucrats, and chiefs, whose jobs depended on continued deforestation… The changes in forest cover from year to year would have been hard to detect… Only older people, recollecting their childhoods decades earlier, could have recognized a difference. Gradually trees became fewer, smaller, and less important. By the time the last fruit-bearing adult palm tree was cut, palms had long since ceased to be of economic significance. That left only smaller and smaller palm saplings to clear each year, along with other bushes and treelets. No one would have noticed the felling of the last small palm.

Sound painfully familiar yet?

We’ve already torn down the rich forest of liberties established by our founders. It has vanished slowly, over the decades. The erosion of our freedoms has happened so incrementally, no one seems to have noticed. Only the older generations, remembering what true freedom was like, recognize the difference. Gradually, the freedoms enjoyed by the citizenry have become fewer, smaller and less important. By the time the last freedom falls, no one will know the difference.

This is how tyranny rises and freedom falls: with a thousand cuts, each one justified, ignored or shrugged off as too inconsequential by itself to bother about, but they add up.

Each cut, each attempt to undermine our freedoms, each loss of some critical right—to think freely, to assemble, to speak without fear of being shamed or censored, to raise our children as we see fit, to worship or not worship as our conscience dictates, to eat what we want and love who we want, to live as we want—they add up to an immeasurable failure on the part of each and every one of us to stop the descent down that slippery slope.

As I make clear in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, we are on that downward slope now.

The Digitization Of Humanity Shows Why The Globalist Agenda Is Evil

By Brandon Smith

Source: Alt-Market.us

In recent weeks I’ve been seeing an interesting narrative fallacy being sold to the general public when it comes to the designs of globalists. The mainstream media and others are now openly suggesting that it’s actually okay to be opposed to certain aspects of groups like the World Economic Forum. They give you permission to be concerned, just don’t dare call it conspiracy.

This propaganda is a deviation from the abject denials we in the Liberty Movement have been accustomed to hearing for the past decade or more. We have all been confronted with the usual cognitive dissonance – the claims that globalist groups “just sit around talking about boring economic issues” and that nothing they do has any bearing on global politics or your everyday life. In some cases we were even told that these groups of elites “don’t exist”.

Now, the media is admitting that yes, perhaps the globalists do have more than just a little influence over governments, social policies and economic outcomes. But, what the mainstream doesn’t like is the assertion that globalists have nefarious or authoritarian intentions. That’s just crazy tinfoil hat talk, right?

The reason for the narrative shift is obvious. Far too many people witnessed the true globalist agenda in action during the pandemic lockdowns and now they see the conspiracy for what it is. The globalists, in turn, seem to have been shocked to discover many millions of people in opposition to the mandates and the refusals to comply were clearly far greater than they expected. They are still trying to push their brand of covid fear, but the cat is out of the bag now.

They failed to get what they wanted in the west, which was a perpetual Chinese-style medical tyranny with vaccine passports as the norm. So, the globalist strategy has changed and they are seeking to adapt. They admit to a certain level of influence, but they pretend as if they are benevolent or indifferent.

The response to this lie is relatively straightforward. I could point out how Klaus Schwab of the WEF savored the thrill of the initial pandemic outbreak and declared that covid was the perfect “opportunity” to initiate what the WEF calls the “Great Reset.”

I could also point out that Klaus Schwab’s vision of the Reset, what he calls the “4th Industrial Revolution”, is a veritable nightmare world in which Artificial Intelligence runs everything, society is condensed into digital enclaves called “smart cities” and people are oppressed by carbon taxation. I could point out that the WEF actively supports the concept of the “Shared Economy” in which you will “own nothing, have no privacy” and you will supposedly be happy about it, but only because you won’t have any other choice.

What I really want to talk about, however, is the process by which the elites hope to achieve their dystopian epoch, as well as the globalist mindset which lends itself to the horrors of technocracy. The common naive assumption among skeptics of conspiracy is that the globalists are regular human beings with the same drives and limited desires as the rest of us. They might have some power, but world events are still random and certainly not controlled.

This is a fallacy. The globalists are not like us. They are not human. Or, I should say, they despise humanity and seek to do away with it. And, because of this, they have entirely different aspirations from the majority of us, including aspirations of dominance.

What we are dealing with here are not normal people with conscience, ethics or empathy. Their behavior is much more akin to higher functioning psychopaths and sociopaths rather than the everyday person on the street. We saw this on full display during the covid lockdowns and the vicious attempts to enforce vaccine passports; their actions betray their long game.

Take a look at comments by New Zealand’s prime minister and WEF attendee, Jacinda Ardern, from a year ago. She admits to the deliberate tactic of creating a two-tier class system within her own country based on vaccination status. There is no remorse or guilt in her demeanor, she is proud of taking such authoritarian actions despite numerous studies that prove the mandates are ineffective.

Beyond the covid response, though, I suggest people who deny globalist conspiracy take a deeper dive into the philosophical roots of organizations like the WEF. Their entire ideology can be summed up in a couple words – Futurism and godhood.

Futurism is an ideological movement which believes that all “new” innovations, social or technological, should supplant the previous existing systems for the sake of progress. They believe that all old ways of thinking, including notions of principles, heritage, religious belief systems, codes of conduct, etc. are crutches holding humanity back from greatness.

But what is the greatness the futurists seek? As mentioned above, they want godhood. An era in which the natural world and human will are enslaved at the hands of a select few. Case in point – the following presentation from 2018 by WEF “guru” Yuval Harari on the future of humanity as the globalists see it:

Harari’s conclusions are rooted in elitist biases and ignore numerous psychological and social realities, but we can set those aside for a moment and examine his basic premise that humanity as we know it will no longer exist in the next century because of “digital evolution” and “human hacking.”

The foundation of the WEF vision is built on the idea that data is the new Holy Grail, the new conquest. This is something I have written about extensively in the past (check out my article ‘Artificial Intelligence: A Secular Look At The Digital Antichrist’) but it is good to see it expressed with such arrogance by someone like Harari because it is undeniable evidence – The globalists think they are going to build a completely centralized economy and society based on human data rather than production. In other words, YOU become the product. The average citizen, your thoughts and your behaviors, become the stock in trade.

Globalists also believe that data is most valuable because it can be exploited to control people’s behaviors, to hack the body and mind in order to create human puppets, or create super-beings. They dream of becoming little gods with omnipotent knowledge. Yuval even proudly proclaims that intelligent design will no longer be the realm of God in heaven, but of the new digitized man.

While Harari pays lip service to “democracy” vs “digital dictatorship”, he goes on to assert that centralization may become the de facto system of governance. He says this not because he fears dictatorship, but because that has always been the WEF’s intent. The globalist argues that governments cannot be trusted to hold a monopoly on the digital wellspring and that someone needs to step in to regulate data; but “who would do this?”, he asks.

He already knows the answer. The UN, a globalist edifice, has consistently said it should be the governing body that takes control of AI and data regulation through UNESCO. That is to say, Harari is playing coy, he knows that the people who will step in to control the data are people just like him.

At no point in Harari’s speech does he suggest that any of these developments should be obstructed or stopped. At no point does he offer the idea that the digitization of humanity is wrong and that there are other, better ways of living. He actually mocks the concept of “going back” to old ways; only the future and the Tabula Rasa (blank slate) hold promise for the globalists, everything else is an impediment to their designs.

But here’s the thing, what the globalists are trying to accomplish is a fantasy. People are not algorithms, despite how much Harari would like them to be. People have habits, yes, but they are also unpredictable and are prone to sudden awakenings and epiphanies in the moment of crisis.

Psychopaths tend to be robotic people, acting impulsively but also very predictably. They lack imagination, intuition and foresight, and so it’s not surprising that organizations of psychopaths like the WEF would place such an obsessive value on AI, algorithms and a cold technocratic evolution. They don’t view their data Shangri-La as humanity’s future; they see it as THEIR future – The future of the non-humans, or the anti-humans as it were.

Who will produce all the goods, services and necessities required in this brave new world? Well, all of us peons, of course. Sure, the globalists will offer grand promises of a robot driven production economy in which people no longer need to engage in menial labor, but this will be another lie. They’ll still need people to plant the crops, maintain infrastructure, take care of manufacturing, do their fighting for them, etc., they’ll just need less of us.

At bottom, an economy built on data is an economy dependent on illusion.

Data is vaporous and oftentimes meaningless because it is subject to the biases of the interpreter. Algorithms can also be programmed to the biases of the engineers. There is nothing inherently objective about data – it is all dependent on the intentions of the people analyzing it.
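The point about interpreter bias can be made concrete with a deliberately simplified, hypothetical sketch. The dataset, the labels, and both thresholds below are invented for illustration: two analysts run the same classification routine over identical data and reach opposite conclusions purely because of a cutoff each one chose.

```python
# Illustrative only: the scores and thresholds are made up.
# The "algorithm" is identical for both analysts; only a chosen
# parameter differs, yet the conclusions are opposite.

engagement_scores = [12, 35, 41, 58, 63, 77, 80, 95]

def classify(scores, threshold):
    """Label each score 'harmful' or 'benign' relative to a cutoff."""
    return ["harmful" if s >= threshold else "benign" for s in scores]

# Analyst A assumes most content is benign and sets a high bar.
verdict_a = classify(engagement_scores, threshold=80)

# Analyst B assumes most content is harmful and sets a low bar.
verdict_b = classify(engagement_scores, threshold=40)

print(verdict_a.count("harmful"))  # 2 of 8 items flagged
print(verdict_b.count("harmful"))  # 6 of 8 items flagged
```

Nothing in the data changed between the two runs; the "objective" output is entirely a function of an assumption the analyst baked in.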

For example, to use Harari’s anecdote of an algorithm that “knows you are gay” before you do; any twisted group of people could simply write code for an algorithm that tells the majority of easily manipulated kids that they are gay, even when they are not. And, if you are gullible enough to believe the algorithm is infallible, then you could be led to believe that numerous falsehoods are true and be convinced to behave against your nature. You have allowed a biased digital phantom to dictate your identity, and have made yourself “hackable.”

In the meantime, the elitists entertain delusions of surpassing their mortal limitations by “hacking” the human body, as well as reading the minds of the masses and predicting the future based on data trends. This is an obsession which ignores the unpredictable wages of the human soul, that very element of conscience and of imagination which psychopaths lack. It’s something that cannot be hacked.

The legitimacy of the data based system and the hacking of humanity that the WEF aspires to is less important than what the masses can be convinced of. If the average person can be persuaded to implant their cell phone in their skull in the near future, then yes, humanity might become hackable in a rudimentary way.

The algorithms then supplant conscience, empathy and principles. And, without these things, all morality becomes relative by default. Evil becomes good, and good becomes evil.

By the same token, if humanity can be persuaded to set down their cell phones and live a less tech focused life, then the digital empire of the globalists comes crashing down quite easily. There is no system the elites can impose that would make their digital consciousness a reality without the consent of the public at large.

Without a vast global framework in which people willingly embrace the algorithms rather than their own experience and intuitions, the globalist religion of total centralization dies. The first step is to accept that the conspiracy does indeed exist. The second step is to accept that the conspiracy is malicious and destructive. The third step is to refuse to comply, by whatever means necessary.

Automatons – Life Inside the Unreal Machine

By Kingsley L. Dennis

Source: Waking Times

/ɔːˈtɒmət(ə)n/

noun

a moving mechanical device made in imitation of a human being.

a machine which performs a range of functions according to a predetermined set of coded instructions.

used in similes and comparisons to refer to a person who seems to act in a mechanical or unemotional way.

“Don’t you wish you were free, Lenina?”

“I don’t know what you mean. I am free. Free to have the most wonderful time. Everybody’s happy nowadays.”

He laughed. “Yes, ‘Everybody’s happy nowadays.’ We have been giving the children that at five. But wouldn’t you like to be free to be happy in some other way, Lenina? In your own way, for example; not in everybody else’s way.”

“I don’t know what you mean,” she repeated.

Aldous Huxley, Brave New World

Are we turning into a mass of unaware sleepwalkers? Our eyes are seemingly open and yet we are living as if asleep, and the dream becomes our waking lives. It seems that more and more people, in the highly technologized nations at least, are in danger of succumbing to the epidemic of uniformity. People follow cycles of fashions and wear stupid clothes when they think it is the ‘in thing’; and hyper-budget films take marketing to a whole new level, forcing parents to rush out to buy the merchandise because their kids are screaming for it. And if one child in the class doesn’t have the latest toy like all their classmates, then they are ostracized for this lack. Which means that poor mummy and daddy have to make sure they get their hands on these gadgets. Put the two items together – zombies and uniformity – and what do you get? Welcome to the phenomenon of Black Fridays, which have become the latest manifestation of national Zombie Days.

Unless you’ve been living in a cave somewhere (or living a normal, peaceful existence), you will know what this event is – but let me remind you anyway of what a Black Friday is. It is a day when members of the public are infected with the ‘must buy’ and ‘act like an idiot’ virus that turns them into screaming, raging hordes banging on the doors of hyper-market retailers hours before they open. Many of these hordes sleep outside all night to get early entry. Then when the doors are finally opened they go rushing in, fighting and screaming, as if re-enacting a scene from Game of Thrones. Those that do survive the fisticuffs come away with trolleys full of boxes too big to carry. This display of cultural psychosis, generally termed idiocracy, is also a condition nurtured by societies based on high consumption with even higher inequalities of wealth distribution. In other words, a culture conditioned to commodity accumulation will buy with fervour when things are cheap. This is because although conditioned to buy, they lack the financial means to satiate this desire. Many people suffer from a condition which psychologists call ‘miswanting,’ which means that we desire things we don’t like and like things we don’t desire. What this is really saying is that we tend to ‘want badly’ rather than having genuine need. What we are witnessing in these years is an epidemic of idiocracy, and it’s propagating faster than post-war pregnancies. And yet we are programmed by our democratic societies to not think differently. In this respect, many people also suffer from a condition known as ‘confirmation bias.’

Confirmation bias is our conditioned tendency to pick and choose that information which confirms our pre-existing beliefs or ideas. Two people may be able to look at the same evidence and yet they will interpret it according to how it fits into and validates their own thinking. That’s why so many debates go nowhere as people generally don’t wish to be deviated away from those ideas they have invested so much time and effort in upholding. It’s too much of a shock to realize that what we thought was true, or valid, is not the case. To lose the safety and security of our ideas would be too much for many people. It is now well understood in psychology that we like to confirm our existing beliefs; after all, it makes us feel right!

Many of our online social media platforms are adhering to this principle by picking and choosing those items of news, events, etc that their algorithms have deemed we are most likely to want to see. As convenient as it may seem, it is unlikely to be in our best interests in the long term. The increasing automation of the world around us is set to establish a new ecology in our hyperreality. We will be forced to acknowledge that algorithms and intelligent software will soon, if it isn’t already, be running nearly everything in our daily lives. Historian Yuval Harari believes that ‘the twenty-first century will be dominated by algorithms. “Algorithm” is arguably the single most important concept in our world. If we want to understand our life and our future, we should make every effort to understand what an algorithm is.’1 Algorithms already follow our shopping habits, recommend products for us, pattern recognize our online behavior, help us drive our cars, fly our planes, trade our economies, coordinate our public transport, organize our energy distribution, and a lot, lot more that we are just not really aware of. One of the signs of living in a hyperreality is that we are surrounded by an invisible coded environment, written in languages we don’t understand, making our lives more abstracted from reality.
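The feedback loop described above, where platforms serve up whatever their algorithms deem we most want to see, can be sketched in a few lines. This is a hypothetical toy recommender (the item names, tags, and scoring rule are all invented for the sketch, not drawn from any real platform), showing how ranking solely by similarity to past clicks keeps surfacing more of the same:

```python
# Toy sketch of a confirmation-reinforcing recommender.
# Catalog, tags, and scoring rule are invented for illustration.

from collections import Counter

catalog = {
    "article_a": {"politics", "outrage"},
    "article_b": {"politics", "analysis"},
    "article_c": {"science", "analysis"},
    "article_d": {"science", "skepticism"},
}

def recommend(history, n=2):
    """Rank unread items by tag overlap with previously clicked items."""
    seen_tags = Counter(tag for item in history for tag in catalog[item])
    def score(item):
        return sum(seen_tags[t] for t in catalog[item])
    unread = [i for i in catalog if i not in history]
    return sorted(unread, key=score, reverse=True)[:n]

# A reader who has only clicked a politics item is shown politics first;
# science items never outrank it, so the bubble reinforces itself.
print(recommend(["article_a"]))
```

Each click tilts the tag counts further toward what was already read, so with no exploration term the system converges on a single theme: the ‘convenience’ is precisely the filter bubble.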

Modern societies are adapting to universal computing infrastructures that will usher in new arrangements and relations. Of course, these are only the early years, although there is already a lot of uncertainty and unpredictability. As it is said, industrialization didn’t turn us into machines and automation isn’t going to turn us into automatons. Which is more or less correct; after all, being human is not that simple. Yet there will be new dependencies and relations forming as algorithms continue to create and establish what can be called ‘pervasive assistance.’ Again, it is a question of being alert so that we don’t feel compelled just to give ourselves over to our algorithms. The last thing we want is for a bunch of psychologists trying to earn yet more money from a new disease of ‘algorithmic dependency syndrome’ or something similar.

It needs stating that by automating the world we also run the risk of being distanced from our own responsibilities. And this also implies, importantly, the responsibility we have to ourselves – to transcend our own limitations and to develop our human societies for the better. We should not forget that we are here to mature as a species, and we should not allow the world of automation to distract us from this. Already literature and film have portrayed such possibilities. One example is David Brin’s science-fiction novel Kiln People (2002; a similar premise drives the film Surrogates, 2009), which clearly showed how automation may provide a smokescreen for people to disappear behind their surrogate substitutes.

Algorithms are the new signals that code an unseen territory all around us. In a world of rapidly increasing automation and digital identities we’ll have to keep our wits about us in order to retain what little of our identities we have left. We want to make sure that we don’t get lost in our emoji messages, our smilies of flirtation; or, even worse, lose our lives in the ‘death cult’ of the selfie. Identities by their very nature are constructs; in fact, we can go so far as to call them fake. They are constructed from layers of ongoing conditioning which a person identifies with. This identity functions as a filter to interpret incoming perceptions. The limited degree of perceptions available to us almost guarantees that identities fall into a knowable range of archetypes. We would be wise to remember that who we are is not always the same as what we project. And yet some people on social media are unable to distinguish their public image from their personal identity, which starts to sound a bit scary. Philosopher Jean Baudrillard, never opposed to saying what he thought, stated it in another way:

We are in a social trance: vacant, withdrawn, lacking meaning in our own eyes. Abstracted, irresponsible, enervated. They have left us the optic nerve, but all the others have been disabled…All that is left is the mental screen of indifference, which matches the technical in-difference of the images.2

Baudrillard would probably be the first to agree that breathing is often a disguise to make us think that someone is alive. After all, don’t we breathe automatically without thinking about it?

We must not make the human spirit obsolete just because our technological elites are dreaming of a trans-human future. Speaking of such futures, inventor and futurist Ray Kurzweil predicts that in the 2030s human brains will be able to connect to the cloud and to use it just like we use cloud computing today. That is, we will be able to transfer emails and photos directly from the cloud to our brain as well as backing up our thoughts and memories. How will this futuristic scenario be possible? Well, Kurzweil says that nanobots – tiny robots constructed from DNA strands – will be swimming around in our brains. And the result? According to Kurzweil we’re going to be funnier, sexier, and better at expressing our loving sentiments. Well, that’s okay then – nanobot my brain up! Not only will being connected to the computing cloud make us sexier and funnier humans, it will even take us closer to our gods, says Kurzweil – ‘So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world – it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.’ It’s hard to argue with such a bargain – a few nanobots in our brain to become godlike? I can imagine a lot of people will be signing up for this. There may even be a hefty monthly charge for those wanting more than 15GB of back-up headspace. Personally, I prefer the headspace that’s ad infinitum and priceless. I hope I’m not in the minority.

Looking at the choices on offer so far it seems that there is the zombie option, which comes with add-on idiocracy (basic model), and the trans-human nanobot sexy-god upgrade (pricy). But then let’s not forget that in an automated world it may be the sentient robots that come out on top. Now, that would be an almost perfect demonstration of a simulation reality.

Life in Imitation

There are those who believe that self-awareness is going to be the end game of artificial intelligence – the explosive ‘wow factor’ that really throws everything into high gear. The new trend now is deep machine-learning to the point where machines will program not only themselves but also other machines. Cognitive computer scientists are attempting to recapture the essence of human consciousness in the hope of back-engineering this complexity into machine code. It’s a noble endeavor, if only for their persistence. The concern here is that if machines do finally achieve sentience then the next thing that we’ll need to roll out will be machine psychologists. Consciousness, after all, comes at a price. There is no free lunch when it comes to possessing a wide-awake brain. With conscious awareness come responsibilities, such as values, ethics, morality, compassion, forgiveness, empathy, goodness, and good old-fashioned love. And I personally like the love part (gives me a squishy feeling every time).

It may not actually be the sentient robots we need to worry about; it’s the mindless ones we need to be cautious of (of course, we could say the same thing about ourselves). One of the methods used in training such robots is, in the words of their trainers, to provide them with enough ‘intrinsic motivation.’ Not only will this help the robots to learn their environments, it is also hoped to foster in them the attention needed to acquire sufficient situational awareness. If I were to write a science-fiction scenario on this I would make it so that the sentient robots end up being more human than we are, and humans turn into their automated counterparts. Funny, maybe – but more so in the funny-bone-hurting sort of way rather than the laugh-out-loud variety. Or perhaps it’s already been done. It appears that we are attempting to imbue our devices with qualities we are also striving to possess for ourselves. Humans are naturally vulnerable; it is part of our organic make-up. Whatever we create may inherit those vulnerabilities. However, this is not a discussion on the pros and cons of smart machines and artificial intelligence (there are many more qualified discussions on that huge topic).

While we are creating, testing, worrying, or arguing over machines and their like we are taking our attention away from the center – ourselves. The trick of surviving in the ‘unreal machine’ of life is to become more human, the very antithesis of the robotic. Technology can assist us in interacting and participating to a better degree with our environments. The question, as always, is the uses to which such tools are put – and by whom. Such tools can help us realize our dreams, or they can entrap us in theirs. Algorithms, smart machines, intelligent infrastructure, and automated processes: these are all going to come about and be a part of our transforming world. And in many respects, they will make life more comfortable for us. Yet within this comfort zone we still need to strive and seek for our betterment. We should not allow an automated environment to deprive us of our responsibility, and need, to find meaning and significance in our world. Our technologies should force us to acknowledge our human qualities and to uplift them, and not to turn us into an imitation of them.

Another metaphor for the simulated ‘robotic’ creature is the golem. The golem legend speaks of a creature fashioned from clay, a Cabbalistic motif which has appeared frequently in literary and cinematic form (such as Frankenstein). The Cabbalistic automaton that is the golem, which means ‘unformed,’ has often been used to show the struggle between mechanical limitation and human feelings. This struggle depicts the tension that combines cogs and consciousness; the entrapment in matter and the spirit of redemption and liberation. This is a myth that speaks of the hubris in humanity fashioning its own creatures and ‘magically’ bestowing life upon them. It is the act of creating a ‘sacred machine’ from the parts and pieces of a material world and then imbuing it with human traits. And through this human likeness such creatures are required to fulfil human chores and work as slaves. Sound familiar? The Cabbalistic humanoid – the sentient robot – is forever doomed, almost like the divine nature of Man trapped within the confines and limitations of a material reality. They represent the conflict of being torn between a fixed fate and freedom.

Our material reality may be the ultimate unreal machine. We are the cogs, the clay golem, the imperfect creature fashioned by another. Our fears of automation may only be a reflection of our own automation. We struggle to express some form of release whilst unaware that the binds that mechanize us are forever tightening.

We have now shifted through the zombie-idiocracy model (basic), the trans-human nanobot sexy-god model (pricy), to arrive at the realization that it is we – and not our sentient robots – who are likely to be the automatons (tragic). And this is the biblical fall from grace; the disconnection from our god(s). We have come loose from Central Source and we have lost our way.

We are now living in the hyperreal realm where zombies, cyborgs, and golem robots all reside – but it is not the place for the genuine human. Things are going to have to change. Not only do we have to retain our humanity, we also must remain sane. With our continuing modern technologies, our augmented reality and bioengineering, the difference between fiction and reality will blur even further. And this blurring is likely to become more prominent as people increasingly try to reshape reality to fit around their own imaginative fictions. Staying sane, grounded, and balanced is going to be a very, very good option for the days to come.

We are going to be sharing our planetary space with the new smart machines. I am reminded of the Dr. Seuss book Horton Hears a Who! that has the refrain, ‘a person’s a person no matter how small.’ Size doesn’t count – but being human does. And staying human in these years will be the hard task allotted to us.

Jacques Ellul: A Prophet for Our Tech-Saturated Times

Read his works to understand how we’ve been caught in technology’s nightmarish hold.

By Andrew Nikiforuk

Source: The Tyee

By now you have probably read about the so-called “tech backlash.”

Facebook and other social media have undermined what’s left of the illusion of democracy, while smartphones damage young brains and erode the nature of discourse in the family.

Meanwhile computers and other gadgets have diminished our attention spans along with our ever-failing connection to reality.

The Foundation for Responsible Robotics recently created a small stir by asking if “sexual intimacy with robots could lead to greater social isolation.”

What could possibly go wrong?

The average teenager now works about two hours of every day — for free — providing Facebook and other social media companies with all the data they need to engineer young people’s behaviour for bigger Internet profits.

Without shame, technical wonks now talk of building artificial scientists to resolve climate change, poverty and, yes, even fake news.

The media backlash against Silicon Valley and its peevish moguls, however, typically ends with nothing more radical than an earnest call for regulation or a break-up of Internet monopolies such as Facebook and Google.

The problem, however, is much graver, and it is telling that most of the backlash stories invariably omit any mention of technology’s greatest critic, Jacques Ellul.

The ascent of technology

Ellul, the Karl Marx of the 20th century, predicted the chaotic tyranny many of us now pretend is the good and determined life in technological society.

He wrote of technique, by which he meant not just technology, machines and digital gadgets but rather “the totality of methods rationally arrived at and having absolute efficiency” in the economic, social and political affairs of civilization.

For Ellul, technique, an ensemble of machine-based means, included administrative systems, medical tools, propaganda (just another communication technique) and genetic engineering.

The list is endless because technique, or what most of us would just call technology, has become the artificial blood of modern civilization.

“Technique has taken substance,” wrote Ellul, and “it has become a reality in itself. It is no longer merely a means and an intermediary. It is an object in itself, an independent reality with which we must reckon.”

Just as Marx deftly outlined how capitalism threw up new social classes, political institutions and economic powers in the 19th century, Ellul charted the ascent of technology and its impact on politics, society and economics in the 20th.

My copy of Ellul’s The Technological Society has yellowed with age, but it remains one of the most important books I own. Why?

Because it explains the nightmarish hold technology has on every aspect of life, and also remains a guide to the perplexing determinism that technology imposes on life.

Until the 18th century, technical progress occurred slowly and with restraint. But with the Industrial Revolution it morphed into something overwhelming due in part to population, cheap energy sources and capitalism itself.

Since then it has engulfed Western civilization and become the globe’s greatest colonizing force.

“Technique encompasses the totality of present-day society,” wrote Ellul. “Man is caught like a fly in a bottle. His attempts at culture, freedom, and creative endeavour have become mere entries in technique’s filing cabinet.”

Ellul, a brilliant historian, wrote like a physician caught in the middle of a plague or a physicist exposed to radioactivity. He parsed the dynamics of technology with a cold lucidity.

Yet you’ve probably never heard of the French legal scholar and sociologist despite all the recent media about the corrosive influence of Silicon Valley.

His relative obscurity has many roots. He didn’t hail from Paris, but rural Bordeaux. He didn’t come from French blue blood; he was a “métèque.”

He didn’t travel much, criticized politics of every stripe and was a radical Christian.

But in 1954, just a year before American scientists started working on artificial intelligence, Ellul wrote his monumental book, The Technological Society.

The dense and discursive work lays out in 500 pages how technique became for civilization what British colonialism was for parts of 19th-century Africa: a force of total domination.

In the book Ellul explains in bold and uncompromising terms how the logic of technological innovation conquered every aspect of human culture.

Ellul didn’t regard technology as inherently evil; he just recognized that it was a self-augmenting force that engineered the world on its terms.

Machines, whether mechanical or digital, aren’t interested in truth, beauty or justice. Their goal is to make the world a more efficient place for more machines.

Their proliferation combined with our growing dependence on their services inevitably led to an erosion of human freedom and unintended consequences in every sphere of life.

Ellul was one of the first to note that you couldn’t distinguish between bad and good effects of technology. There were just effects and all technologies were disruptive.

In other words, it doesn’t matter if a drone is delivering a bomb or book or merely spying on the neighbourhood, because technique operates outside of human morality: “Technique tolerates no judgment from without and accepts no limitations.”

Facebook’s mantra “move fast and break things” epitomizes the technological mindset.

But some former Facebook executives such as Chamath Palihapitiya belatedly realized they have engineered a force beyond their control. (“The short-term dopamine-driven feedback loops that we have created are destroying how society works,” Palihapitiya has said.)

That, argued Ellul, is what technology does. It disrupts and then disrupts again with unforeseen consequences, requiring more techniques to solve the problems created by latest innovations.

As Ellul noted back in 1954, “History shows that every technical application from its beginnings presents certain unforeseeable secondary effects which are more disastrous than the lack of the technique would have been.”

Ellul also defined the key characteristics of technology.

For starters, the world of technique imposes a rational and mechanical order on all things. It embraces artificiality and seeks to replace all natural systems with engineered ones.

In a technological society a dam performs better than a running river, a car takes the place of the pedestrians — and may even kill them — and a fish farm offers more “efficiencies” than a natural wild salmon migration.

There is more. Technique automatically reduces actions to the “one best way.” Technical progress is also self-augmenting: it is irreversible and builds with a geometric progression.

(Just count the number of gadgets telling you what to do or where to go or even what music to play.)

Technology is indivisible and universal because everywhere it goes it shows the same deterministic face with the same consequences. And it is autonomous.

By autonomous, Ellul meant that technology had become a determining force that “elicits and conditions social, political and economic change.”

The role of propaganda

The French critic was the first to note that technologies build upon each other and therefore centralize power and control.

New techniques for teaching, selling things or organizing political parties also required propaganda.

Here again Ellul saw the future.

He argued that propaganda had to become as natural as breathing air in a technological society, because it was essential that people adapt to the disruptions of a technological society.

“The passions it provokes — which exist in everybody — are amplified. The suppression of the critical faculty — man’s growing incapacity to distinguish truth from falsehood, the individual from the collectivity, action from talk, reality from statistics, and so on — is one of the most evident results of the technical power of propaganda.”

Faking the news may have been a common practice on Soviet radio during Ellul’s day, but it is now a global phenomenon leading us towards what Ellul called “a sham universe.”

We now know that algorithms control every aspect of digital life and have subjected almost every aspect of human behaviour to greater control by techniques, whether employed by the state or the marketplace.

But in 1954 Ellul saw the beast emerging in infant form.

Technology, he wrote, can’t put up with human values and “must necessarily don mathematical vestments. Everything in human life that does not lend itself to mathematical treatment must be excluded… Who is too blind to see that a profound mutation is being advocated here.”

He, too, warned about the promise of leisure provided by the mechanization and automatization of work.

“Instead of being a vacuum representing a break with society,” our leisure time will be “literally stuffed with technical mechanisms of compensation and integration.”

Good citizens today now leave their screens at work only to be guided by robots in their cars that tell them the most efficient route to drive home.

At home another battery of screens awaits to deliver entertainments and distractions, including apps that might deliver a pizza to the door.

Stalin and Mao would be impressed — or perhaps disappointed — that so much social control could be exercised with such sophistication and so little bloodletting.

Ellul wasn’t just worried about the impact of a single gadget such as the television or the phone but “the phenomenon of technical convergence.”

He feared the impact of systems or complexes of techniques on human society and warned the result could only be “an operational totalitarianism.”

“Convergence,” he wrote, “is a completely spontaneous phenomenon, representing a normal stage in the evolution of technique.”

Social media, a web of behavioural and psychological systems, is just the latest example of convergence.

Here psychological techniques, surveillance techniques and propaganda have all merged to give the Russians and many other groups a golden opportunity to intervene in the political lives of 126 million North Americans.

Social media has achieved something novel, according to former Facebook engineer Sam Lessin.

For the first time ever a political candidate or party can “effectively talk to each individual voter privately in their own home and tell them exactly what they want to hear… in a way that can’t be tracked or audited.”

In China the authorities have gone one step further. Using the Internet the government can now track the movements of every citizen and rank their political trustworthiness based on their history of purchases and associations. It is, of course, a fantastic “counterterrorism” tool.

The Silicon Valley moguls and the digerati promised something less totalitarian. They swore that social media would help citizens fight bad governments and would connect all of us.

Facebook, vowed the pathologically adolescent Mark Zuckerberg, would help the Internet become “a force for peace in the world.”

But technology obeys its own rules and prefers “the psychology of tyranny.”

The digerati also promised that digital technologies would usher in a new era of decentralization and undo what mechanical technologies have already done: centralize everything into big companies, big boxes and big government.

Technology assuredly fragments human communities, but in the world of technique centralization remains the norm.

“The idea of effecting decentralization while maintaining technical progress is purely utopian,” wrote Ellul.

Towards ‘hypernormalization’

It is worth noting that the word “normal” didn’t come into currency until the 1940s along with technological society.

In many respects global society resembles the Soviet Union just prior to its collapse when “hypernormalization” ruled the day.

A recent documentary defined what hypernormalization did for Russia: it “became a society where everyone knew that what their leaders said was not real, because they could see with their own eyes that the economy was falling apart. But everybody had to play along and pretend that it was real because no one could imagine any alternative.”

In many respects technology has hypernormalized a technological society in which citizens exercise less and less control over their lives every day and can’t imagine anything different.

Throughout his life Ellul maintained that he was “neither by nature, nor doctrinally, a pessimist, nor have I pessimistic prejudices. I am concerned only with knowing whether things are so or not.”

He called a spade a spade, and did not sugarcoat his observations.

If you are growing more anxious about our hypernormalized existence and are wondering why you own a phone that tracks your every movement, then read The Technological Society.

Ellul believed that the first act of freedom a citizen can exercise is to recognize the necessity of understanding technique and its colonizing powers.

Resistance, which is never futile, can only begin by becoming aware and bearing witness to the totalitarian nature of technological society.

Ellul believed that Christians had a special duty to condemn the worship of technology, which has become society’s new religion.

To Ellul, resistance meant teaching people how to be conscious amphibians: keeping one foot in traditional human societies while purposefully choosing which technologies to bring into their communities.

Only citizens who remain connected to traditional human societies can see, hear and understand the disquiet of the smartphone blitzkrieg or the Internet circus.

Children raised by screens and vaccinated only by technology will not have the capacity to resist, let alone understand, this world any more than someone born in space could appreciate what it means to walk in a forest.

Ellul warned that if each of us abdicates our human responsibilities and leads a trivial existence in a technological society, then we will betray freedom.

And what is freedom but the ability to overcome and transcend the dictates of necessity?

In 1954, Ellul appealed to all sleepers to awake.

Read him. He remains the most revolutionary, prophetic and dangerous voice of this or any century.

Philip K. Dick and the Fake Humans

(Editor’s note: on this 36th anniversary of the passing of Philip K. Dick, it seems an appropriate time to note the relevance of his work to our current dystopia as Henry Farrell does in the following essay. Unfortunately the author is less astute regarding the ways in which the dystopias of Orwell and Huxley are equally relevant to our current milieu.)

By Henry Farrell

Source: Boston Review

This is not the dystopia we were promised. We are not learning to love Big Brother, who lives, if he lives at all, on a cluster of server farms, cooled by environmentally friendly technologies. Nor have we been lulled by Soma and subliminal brain programming into a hazy acquiescence to pervasive social hierarchies.

Dystopias tend toward fantasies of absolute control, in which the system sees all, knows all, and controls all. And our world is indeed one of ubiquitous surveillance. Phones and household devices produce trails of data, like particles in a cloud chamber, indicating our wants and behaviors to companies such as Facebook, Amazon, and Google. Yet the information thus produced is imperfect and classified by machine-learning algorithms that themselves make mistakes. The efforts of these businesses to manipulate our wants lead to further complexity. It is becoming ever harder for companies to distinguish the behavior which they want to analyze from their own and others’ manipulations.

This does not look like totalitarianism unless you squint very hard indeed. As the sociologist Kieran Healy has suggested, sweeping political critiques of new technology often bear a strong family resemblance to the arguments of Silicon Valley boosters. Both assume that the technology works as advertised, which is not necessarily true at all.

Standard utopias and standard dystopias are each perfect after their own particular fashion. We live somewhere queasier—a world in which technology is developing in ways that make it increasingly hard to distinguish human beings from artificial things. The world that the Internet and social media have created is less a system than an ecology, a proliferation of unexpected niches, and entities created and adapted to exploit them in deceptive ways. Vast commercial architectures are being colonized by quasi-autonomous parasites. Scammers have built algorithms to write fake books from scratch to sell on Amazon, compiling and modifying text from other books and online sources such as Wikipedia, to fool buyers or to take advantage of loopholes in Amazon’s compensation structure. Much of the world’s financial system is made out of bots—automated systems designed to continually probe markets for fleeting arbitrage opportunities. Less sophisticated programs plague online commerce systems such as eBay and Amazon, occasionally with extraordinary consequences, as when two warring bots bid the price of a biology book up to $23,698,655.93 (plus $3.99 shipping).

In other words, we live in Philip K. Dick’s future, not George Orwell’s or Aldous Huxley’s. Dick was no better a prophet of technology than any science fiction writer, and was arguably worse than most. His imagined worlds jam together odd bits of fifties’ and sixties’ California with rocket ships, drugs, and social speculation. Dick usually wrote in a hurry and for money, and sometimes under the influence of drugs or a recent and urgent personal religious revelation.

Still, what he captured with genius was the ontological unease of a world in which the human and the abhuman, the real and the fake, blur together. As Dick described his work (in the opening essay to his 1985 collection, I Hope I Shall Arrive Soon):

The two basic topics which fascinate me are “What is reality?” and “What constitutes the authentic human being?” Over the twenty-seven years in which I have published novels and stories I have investigated these two interrelated topics over and over again.

These obsessions had some of their roots in Dick’s complex and ever-evolving personal mythology (in which it was perfectly plausible that the “real” world was a fake, and that we were all living in Palestine sometime in the first century AD). Yet they were also based on a keen interest in the processes through which reality is socially constructed. Dick believed that we all live in a world where “spurious realities are manufactured by the media, by governments, by big corporations, by religious groups, political groups—and the electronic hardware exists by which to deliver these pseudo-worlds right into the heads of the reader.” He argued:

the bombardment of pseudo-realities begins to produce inauthentic humans very quickly, spurious humans—as fake as the data pressing at them from all sides. My two topics are really one topic; they unite at this point. Fake realities will create fake humans. Or, fake humans will generate fake realities and then sell them to other humans, turning them, eventually, into forgeries of themselves. So we wind up with fake humans inventing fake realities and then peddling them to other fake humans.

In Dick’s books, the real and the unreal infect each other, so that it becomes increasingly impossible to tell the difference between them. The worlds of the dead and the living merge in Ubik (1969), the experiences of a disturbed child infect the world around him in Martian Time-Slip (1964), and consensual drug-based hallucinations become the vector for an invasive alien intelligence in The Three Stigmata of Palmer Eldritch (1965). Humans are impersonated by malign androids in Do Androids Dream of Electric Sheep? (1968) and “Second Variety” (1953); by aliens in “The Hanging Stranger” (1953) and “The Father-Thing” (1954); and by mutants in “The Golden Man” (1954).

This concern with unreal worlds and unreal people led to a consequent worry about an increasing difficulty of distinguishing between them. Factories pump out fake Americana in The Man in the High Castle (1962), mirroring the problem of living in a world that is not, in fact, the real one. Entrepreneurs build increasingly human-like androids in Do Androids Dream of Electric Sheep?, reasoning that if they do not, then their competitors will. Figuring out what is real and what is not is not easy. Scientific tools such as the famous Voight-Kampff test in Do Androids Dream of Electric Sheep? (and Blade Runner, Ridley Scott’s 1982 movie based loosely on it) do not work very well, leaving us with little more than hope in some mystical force—the I Ching, God in a spray can, a Martian water-witch—to guide us back toward the real.

We live in Dick’s world—but with little hope of divine intervention or invasion. The world where we communicate and interact at a distance is increasingly filled with algorithms that appear human, but are not—fake people generated by fake realities. When Ashley Madison, a dating site for people who want to cheat on their spouses, was hacked, it turned out that tens of thousands of the women on the site were fake “fembots” programmed to send millions of chatty messages to male customers, so as to delude them into thinking that they were surrounded by vast numbers of potential sexual partners.

These problems are only likely to get worse as the physical world and the world of information become increasingly interpenetrated in an Internet of (badly functioning) Things. Many of the aspects of Joe Chip’s future world in Ubik look horrendously dated to modern eyes: the archaic role of women, the assumption that nearly everyone smokes. Yet the door to Joe’s apartment—which argues with him and refuses to open because he has not paid it the obligatory tip—sounds ominously plausible. Someone, somewhere, is pitching this as a viable business plan to Y Combinator or the venture capitalists in Menlo Park.

This invasion of the real by the unreal has had consequences for politics. The hallucinatory realities in Dick’s worlds—the empathetic religion of Do Androids Dream of Electric Sheep?, the drug-produced worlds of The Three Stigmata of Palmer Eldritch, the quasi–Tibetan Buddhist death realm of Ubik—are usually experienced by many people, like the television shows of Dick’s America. But as network television has given way to the Internet, it has become easy for people to create their own idiosyncratic mix of sources. The imposed media consensus that Dick detested has shattered into a myriad of different realities, each with its own partially shared assumptions and facts. Sometimes this creates tragedy or near-tragedy. The deluded gunman who stormed into Washington, D.C.’s Comet Ping Pong pizzeria had been convinced by online conspiracy sites that it was the coordinating center for Hillary Clinton’s child–sex trafficking ring [likewise, the masses may have been convinced by mainstream media that a real child-sex trafficking ring never existed].

Such fractured worlds are more vulnerable to invasion by the non-human. Many Twitter accounts are bots, often with the names and stolen photographs of implausibly beautiful young women, looking to pitch this or that product (one recent academic study found that between 9 and 15 percent of all Twitter accounts are likely fake). Twitterbots vary in sophistication from automated accounts that do no more than retweet what other bots have said, to sophisticated algorithms deploying so-called “Sybil attacks,” creating fake identities in peer-to-peer networks to invade specific organizations or degrade particular kinds of conversation.

Twitter has failed to become a true mass medium, but remains extraordinarily important to politics, since it is where many politicians, journalists, and other elites turn to get their news. One research project suggests that around 20 percent of the measurable political discussion around the last presidential election came from bots. Humans appear to be no better at detecting bots than we are, in Dick’s novel, at detecting replicant androids: people are about as likely to retweet a bot’s message as the message of another human being. Most notoriously, the current U.S. president recently retweeted a flattering message that appears to have come from a bot densely connected to a network of other bots, which some believe to be controlled by the Russian government and used for propaganda purposes.

In his novels Dick was interested in seeing how people react when their reality starts to break down. A world in which the real commingles with the fake, so that no one can tell where the one ends and the other begins, is ripe for paranoia. The most toxic consequence of social media manipulation, whether by the Russian government or others, may have nothing to do with its success as propaganda. Instead, it is that it sows an existential distrust. People simply do not know what or who to believe anymore. Rumors that are spread by Twitterbots merge into other rumors about the ubiquity of Twitterbots, and whether this or that trend is being driven by malign algorithms rather than real human beings.

Such widespread falsehood is especially explosive when combined with our fragmented politics. Liberals’ favorite term for the right-wing propaganda machine, “fake news,” has been turned back on them by conservatives, who treat conventional news as propaganda, and hence ignore it. On the obverse, it may be easier for many people on the liberal left to blame Russian propaganda for the last presidential election than to accept that many voters had a very different understanding of America than they do.

Dick had other obsessions—most notably the politics of Richard Nixon and the Cold War. It is not hard to imagine him writing a novel combining an immature and predatory tycoon (half Arnie Kott, half Jory Miller) who becomes the president of the United States, secret Russian political manipulation, an invasion of empathy-free robotic intelligences masquerading as human beings, and a breakdown in our shared understanding of what is real and what is fake.

These different elements probably would not cohere particularly well, but as in Dick’s best novels, the whole might still work, somehow. Indeed, it is in the incongruities of Dick’s novels that salvation is to be found (even at his battiest, he retains a sense of humor). Obviously, it is less easy to see the joke when one is living through it. Dystopias may sometimes be grimly funny—but rarely from the inside.

Something is wrong on the internet

By James Bridle

Source: Medium

As someone who grew up on the internet, I credit it as one of the most important influences on who I am today. I had a computer with internet access in my bedroom from the age of 13. It gave me access to a lot of things which were totally inappropriate for a young teenager, but it was OK. The culture, politics, and interpersonal relationships which I consider to be central to my identity were shaped by the internet, in ways that I have always considered to be beneficial to me personally. I have always been a critical proponent of the internet and everything it has brought, and broadly considered it to be emancipatory and beneficial. I state this at the outset because thinking through the implications of the problem I am going to describe troubles my own assumptions and prejudices in significant ways.

One of the thus-far hypothetical questions I ask myself frequently is how I would feel about my own children having the same kind of access to the internet today. And I find the question increasingly difficult to answer. I understand that this is a natural evolution of attitudes which happens with age, and at some point this question might be a lot less hypothetical. I don’t want to be a hypocrite about it. I would want my kids to have the same opportunities to explore and grow and express themselves as I did. I would like them to have that choice. And this belief broadens into attitudes about the role of the internet in public life as a whole.

I’ve also been aware for some time of the increasingly symbiotic relationship between younger children and YouTube. I see kids engrossed in screens all the time, in pushchairs and in restaurants, and there’s always a bit of a Luddite twinge there, but I am not a parent, and I’m not making parental judgments for or on anyone else. I’ve seen family members and friends’ children plugged into Peppa Pig and nursery rhyme videos, and it makes them happy and gives everyone a break, so OK.

But I don’t even have kids and right now I just want to burn the whole thing down.

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. Much of what I am going to describe next has been covered elsewhere, although none of the mainstream coverage I’ve seen has really grasped the implications of what seems to be occurring.

To begin: Kids’ YouTube is definitely and markedly weird. I’ve been aware of its weirdness for some time. Last year, there were a number of articles posted about the Surprise Egg craze. Surprise Eggs videos depict, often at excruciating length, the process of unwrapping Kinder and other egg toys. That’s it, but kids are captivated by them. There are thousands and thousands of these videos and thousands and thousands, if not millions, of children watching them.

From the article linked above:

The maker of my particular favorite videos is “Blu Toys Surprise Brinquedos & Juegos,” and since 2010 he seems to have accrued 3.7 million subscribers and just under 6 billion views for a kid-friendly channel entirely devoted to opening surprise eggs and unboxing toys. The video titles are a continuous pattern of obscure branded lines and tie-ins: “Surprise Play Doh Eggs Peppa Pig Stamper Cars Pocoyo Minecraft Smurfs Kinder Play Doh Sparkle Brilho,” “Cars Screamin’ Banshee Eats Lightning McQueen Disney Pixar,” “Disney Baby Pop Up Pals Easter Eggs SURPRISE.”

As I write this he has done a total of 4,426 videos and counting. With so many views — for comparison, Justin Bieber’s official channel has more than 10 billion views, while full-time YouTube celebrity PewDiePie has nearly 12 billion — it’s likely this man makes a living as a pair of gently murmuring hands that unwrap Kinder eggs. (Surprise-egg videos are all accompanied by pre-roll, and sometimes mid-video, ads.)

That should give you some idea of just how odd the world of kids’ online video is, and that list of video titles hints at the extraordinary range and complexity of this situation. We’ll get into the latter in a minute; for the moment know that it’s already very strange, if apparently pretty harmless, out there.

Another huge trope, especially among the youngest children, is nursery rhyme videos.

Little Baby Bum, which made the above video, is the 7th most popular channel on YouTube. With just 515 videos, they have accrued 11.5 million subscribers and 13 billion views. Again, there are questions as to the accuracy of these numbers, which I’ll get into shortly, but the key point is that this is a huge, huge network and industry.

On-demand video is catnip to both parents and to children, and thus to content creators and advertisers. Small children are mesmerised by these videos, whether it’s familiar characters and songs, or simply bright colours and soothing sounds. The length of many of these videos — one common video tactic is to assemble many nursery rhyme or cartoon episodes into hour-plus compilations — and the way that length is marketed as part of the video’s appeal, points to the amount of time some kids are spending with them.

YouTube broadcasters have thus developed a huge number of tactics to draw parents’ and children’s attention to their videos, and the advertising revenues that accompany them. The first of these tactics is simply to copy and pirate other content. A simple search for “Peppa Pig” on YouTube in my case yielded “About 10,400,000 results”, and the front page is almost entirely from the verified “Peppa Pig Official Channel”, while one result is from an unverified channel called Play Go Toys, which you really wouldn’t notice unless you were looking out for it:

Play Go Toys’ channel consists of (I guess?) pirated Peppa Pig and other cartoons, videos of toy unboxings (another kid magnet), and videos of, one supposes, the channel owner’s own children. I am not alleging anything bad about Play Go Toys; I am simply illustrating how the structure of YouTube facilitates the delamination of content and author, and how this impacts on our awareness and trust of its source.

As another blogger notes, one of the traditional roles of branded content is that it is a trusted source. Whether it’s Peppa Pig on children’s TV or a Disney movie, whatever one’s feelings about the industrial model of entertainment production, they are carefully produced and monitored so that kids are essentially safe watching them, and can be trusted as such. This no longer applies when brand and content are disassociated by the platform, and so known and trusted content provides a seamless gateway to unverified and potentially harmful content.

(Yes, this is the exact same process as the delamination of trusted news media on Facebook feeds and in Google results that is currently wreaking such havoc on our cognitive and political systems and I am not going to explicitly explore that relationship further here, but it is obviously deeply significant.)

A second way of increasing hits on videos is through keyword/hashtag association, which is a whole dark art unto itself. When some trend, such as Surprise Egg videos, reaches critical mass, content producers pile onto it, creating thousands and thousands more of these videos in every possible iteration. This is the origin of all the weird names in the list above: branded content and nursery rhyme titles and “surprise egg” all stuffed into the same word salad to capture search results, sidebar placement, and “up next” autoplay rankings.
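The mechanics of that word salad are simple enough to sketch. The snippet below is a hypothetical illustration, not any real channel’s tooling: pools of high-traffic keywords are sampled and concatenated so that a single title matches as many search queries, sidebar slots, and autoplay chains as possible.

```python
import random

# Hypothetical keyword pools, loosely modelled on the titles quoted above.
# None of this is any real channel's code; it just shows how little it
# takes to generate an endless supply of search-optimised word salad.
BRANDS = ["Peppa Pig", "Play Doh", "Minecraft", "Disney Pixar"]
TROPES = ["Surprise Eggs", "Finger Family", "Learn Colors", "Nursery Rhymes"]
HOOKS = ["SURPRISE", "Kids", "Collection", "2017"]

def salad_title(rng: random.Random, n_brands: int = 2, n_tropes: int = 2) -> str:
    """Stuff sampled keywords from each pool into one searchable title."""
    words = (rng.sample(BRANDS, n_brands)
             + rng.sample(TROPES, n_tropes)
             + [rng.choice(HOOKS)])
    return " ".join(words)

rng = random.Random(0)
for _ in range(3):
    print(salad_title(rng))
```

Run in a loop, a generator like this emits a unique, never-repeating title for every upload, which is exactly the texture of the lists quoted earlier.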

A striking example of the weirdness is the Finger Family videos (harmless example embedded above). I have no idea where they came from or the origin of the children’s rhyme at the core of the trope, but there are at least 17 million versions of this currently on YouTube, and again they cover every possible genre, with billions and billions of aggregated views.

Once again, the view numbers of these videos must be taken under serious advisement. A huge number of these videos are essentially created by bots and viewed by bots, and even commented on by bots. That is a whole strange world in and of itself. But it shouldn’t obscure that there are also many actual children, plugged into iPhones and tablets, watching these over and over again — in part accounting for the inflated view numbers — learning to type basic search terms into the browser, or simply mashing the sidebar to bring up another video.

What I find somewhat disturbing about the proliferation of even (relatively) normal kids’ videos is the impossibility of determining the degree of automation which is at work here; how to parse out the gap between human and machine. The example above, from a channel called Bounce Patrol Kids, with almost two million subscribers, shows this effect in action. It posts professionally produced videos, with dedicated human actors, at the rate of about one per week. Once again, I am not alleging anything untoward about Bounce Patrol, which clearly follows in the footsteps of pre-digital kid sensations like their fellow Australians The Wiggles.

And yet, there is something weird about a group of people endlessly acting out the implications of a combination of algorithmically generated keywords: “Halloween Finger Family & more Halloween Songs for Children | Kids Halloween Songs Collection”, “Australian Animals Finger Family Song | Finger Family Nursery Rhymes”, “Farm Animals Finger Family and more Animals Songs | Finger Family Collection – Learn Animals Sounds”, “Safari Animals Finger Family Song | Elephant, Lion, Giraffe, Zebra & Hippo! Wild Animals for kids”, “Superheroes Finger Family and more Finger Family Songs! Superhero Finger Family Collection”, “Batman Finger Family Song — Superheroes and Villains! Batman, Joker, Riddler, Catwoman” and on and on and on. This is content production in the age of algorithmic discovery — even if you’re a human, you have to end up impersonating the machine.

Other channels do away with the human actors to create infinite reconfigurable versions of the same videos over and over again. What is occurring here is clearly automated: stock animations, audio tracks, and lists of keywords assembled in their thousands to produce an endless stream of videos. The above channel, Videogyan 3D Rhymes — Nursery Rhymes & Baby Songs, posts several videos a week, in increasingly byzantine combinations of keywords. They have almost five million subscribers — more than double Bounce Patrol — although once again it’s impossible to know who or what is actually racking up these millions and millions of views.
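The assembly process just described is, at bottom, a combinatorial product: a small pool of reusable assets multiplied together yields thousands of distinct render jobs with no human input per item. This is an illustrative sketch with invented asset names, not the actual pipeline of any channel.

```python
from itertools import product

# Invented asset pools, standing in for the stock animations, audio
# tracks, and keyword lists the essay describes.
animations = ["finger_family", "wrong_heads", "learn_colors"]
characters = ["pig", "superhero", "baby"]
audio_tracks = ["rhyme_a", "rhyme_b", "rhyme_c", "rhyme_d"]
locales = ["en", "ta", "ms"]

# Every combination becomes a separate "video" in the render queue.
render_queue = [
    {"animation": a, "character": c, "audio": t, "locale": l}
    for a, c, t, l in product(animations, characters, audio_tracks, locales)
]

# 3 * 3 * 4 * 3 = 108 unique videos from just 13 assets; real channels
# draw on far larger pools, which is how output scales into the thousands.
print(len(render_queue))
```

Because output grows multiplicatively while the asset pool grows only additively, a channel posting “several videos a week” never runs out of new combinations.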

I’m trying not to turn this essay into an endless list of examples, but it’s important to grasp how vast this system is, and how indeterminate its actions, process, and audience. It’s also international: there are variations of Finger Family and Learn Colours videos for Tamil epics and Malaysian cartoons which are unlikely to pop up in any Anglophone search results. This very indeterminacy and reach is key to its existence, and its implications. Its dimensionality makes it difficult to grasp, or even to really think about.

We’ve encountered pretty clear examples of the disturbing outcomes of full automation before — some of which have been thankfully leavened with a dark kind of humour, others not so much. Much has been made of the algorithmic interbreeding of stock photo libraries and on-demand production of everything from tshirts to coffee mugs to infant onesies and cell phone covers. The above example, available until recently on Amazon, is one such case, and the story of how it came to occur is fascinating and weird but essentially comprehensible. Nobody set out to create phone cases with drugs and medical equipment on them, it was just a deeply weird mathematical/probabilistic outcome. The fact that it took a while to notice might ring some alarm bells, however.

Likewise, the case of the “Keep Calm and Rape A Lot” tshirts (along with the “Keep Calm and Knife Her” and “Keep Calm and Hit Her” ones) is depressing and distressing but comprehensible. Nobody set out to create these shirts: they just paired an unchecked list of verbs and pronouns with an online image generator. It’s quite possible that none of these shirts ever physically existed, were ever purchased or worn, and thus that no harm was done. Once again though, the people creating this content failed to notice, and neither did the distributor. They literally had no idea what they were doing.
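The t-shirt case is easy to reconstruct in miniature. The word lists below are illustrative, not the actual data involved; the point is that nothing between the word list and the storefront ever inspects the output.

```python
# A minimal reconstruction of the "Keep Calm" t-shirt failure mode: a verb
# list assembled without review is slotted into a template, and every
# combination is listed automatically. Word lists here are illustrative.
verbs = ["carry", "dance", "smile", "hit"]   # one toxic entry, unreviewed
objects = ["on", "a lot", "her"]

slogans = [f"Keep Calm and {v.title()} {o.title()}"
           for v in verbs for o in objects]

# 4 verbs x 3 objects = 12 products, none of them seen by a human before
# going on sale; "Keep Calm and Hit Her" sits alongside the harmless ones.
print(len(slogans))
```

There is no bug here in the usual sense: the code does exactly what it was written to do, which is the essay’s point about scale displacing oversight.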

What I will argue, on the basis of these cases and of those I’m going to describe further, is that the scale and logic of the system is complicit in these outputs, and requires us to think through their implications.

(Also again: I’m not going to dig into the wider social implications of such processes outside the scope of what I am writing about here, but it’s clear that one can draw a clear line from examples such as these to pressing contemporary issues such as racial and gender bias in big data and machine intelligence-driven systems, which require urgent attention but in the same manner do not have anything resembling easy or even preferable solutions.)

Let’s look at just one video among the piles of kid videos, and try to parse out where it comes from. It’s important to stress that I didn’t set out to find this particular video: it appeared organically and highly ranked in a search for ‘finger family’ in an incognito browser window (i.e. it should not have been influenced by previous searches). This automation takes us to very, very strange places, and at this point the rabbithole is so deep that it’s impossible to know how such a thing came into being.

Once again, a content warning: this video is not inappropriate in any way, but it is decidedly off, and contains elements which might trouble anyone. It’s very mild on the scale of such things, but. I describe it below if you don’t want to watch it and head down that road. This warning will recur.

The above video is entitled Wrong Heads Disney Wrong Ears Wrong Legs Kids Learn Colors Finger Family 2017 Nursery Rhymes. The title alone confirms its automated provenance. I have no idea where the “Wrong Heads” trope originates, but I can imagine, as with the Finger Family Song, that somewhere there is a totally original and harmless version that made enough kids laugh that it started to climb the algorithmic rankings until it made it onto the word salad lists, combining with Learn Colors, Finger Family, and Nursery Rhymes, and all of these tropes — not merely as words but as images, processes, and actions — to be mixed into what we see here.

The video consists of a regular version of the Finger Family song played over an animation of character heads and bodies from Disney’s Aladdin swapping and intersecting. Again, this is weird but frankly no more than the Surprise Egg videos or anything else kids watch. I get how innocent it is. The offness creeps in with the appearance of a non-Aladdin character — Agnes, the little girl from Despicable Me. Agnes is the arbiter of the scene: when the heads don’t match up, she cries; when they do, she cheers.

The video’s creator, BABYFUN TV (screenshot above), has produced many similar videos. As many of the Wrong Heads videos as I could bear to watch all work in exactly the same way. The character Hope from Inside Out weeps through a Smurfs and Trolls head swap. It goes on and on. I get the game, but the constant overlaying and intermixing of different tropes starts to get inside you. BABYFUN TV only has 170 subscribers and very low view rates, but then there are thousands and thousands of channels like this. Numbers in the long tail aren’t significant in the abstract, but in their accumulation.

The question becomes: how did this come to be? The “Bad Baby” trope also present on BABYFUN TV features the same crying. While I find it disturbing, I can understand how it might provide some of the rhythm or cadence or relation to their own experience that actual babies are attracted to in this content, although it has been warped and stretched through algorithmic repetition and recombination in ways that I don’t think anyone actually wants to happen.

Toy Freaks is a hugely popular channel (68th on the platform) which features a father and his two daughters playing out — or in some cases perhaps originating — many of the tropes we’ve identified so far, including “Bad Baby”, above. As well as nursery rhymes and learning colours, Toy Freaks specialises in gross-out situations, as well as activities which many, many viewers feel border on abuse and exploitation, if not cross the line entirely, including videos of the children vomiting and in pain. Toy Freaks is a YouTube verified channel, whatever that means. (I think we know by now it means nothing useful.)

As with Bounce Patrol Kids, however you feel about the content of these videos, it feels impossible to know where the automation starts and ends, who is coming up with the ideas and who is roleplaying them. In turn, the amplification of tropes in popular, human-led channels such as Toy Freaks leads to them being endlessly repeated across the network in increasingly outlandish and distorted recombinations.

There’s a second level of what I’m characterising as human-led videos which are much more disturbing than the mostly distasteful activities of Toy Freaks and their kin. Here is a relatively mild, but still upsetting example:

A step beyond the simply pirated Peppa Pig videos mentioned previously are the knock-offs. These too seem to teem with violence. In the official Peppa Pig videos, Peppa does indeed go to the dentist, and the episode in which she does so seems to be popular — although, confusingly, what appears to be the real episode is only available on an unofficial channel. In the official timeline, Peppa is appropriately reassured by a kindly dentist. In the version above, she is basically tortured, before turning into a series of Iron Man robots and performing the Learn Colours dance. A search for “peppa pig dentist” returns the above video on the front page, and it only gets worse from here.

Disturbing Peppa Pig videos, which tend towards extreme violence and fear, with Peppa eating her father or drinking bleach, are, it turns out, very widespread. They make up an entire YouTube subculture. Many are obviously parodies, or even satires of themselves, in the pretty common style of the internet’s outrageous, deliberately offensive kind. All the 4chan tropes are there, the trolls are out, we know this.

In the example above, the agency is less clear: the video starts with a trollish Peppa parody, but later syncs into the kind of automated repetition of tropes we’ve seen already. I don’t know which camp it belongs to. Maybe it’s just trolls. I kind of hope it is. But I don’t think so. Trolls don’t cover the intersection of human actors and more automated examples further down the line. They’re at play here, but they’re not the whole story.

I suppose it’s naive not to see the deliberate versions of this coming, but many are so close to the original, and so unsignposted — like the dentist example — that many, many kids are watching them. I understand that most of them are not trying to mess kids up, not really, even though they are.

I’m trying to understand why, as plainly and simply troubling as it is, this is not a simple matter of “won’t somebody think of the children” hand-wringing. Obviously this content is inappropriate, obviously there are bad actors out there, obviously some of these videos should be removed. Obviously too this raises questions of fair use, appropriation, free speech and so on. But reports which simply understand the problem through this lens fail to fully grasp the mechanisms being deployed, and thus are incapable of thinking through its implications in totality, and responding accordingly.

The New York Times, headlining their article on a subset of this issue “On YouTube Kids, Startling Videos Slip Past Filters”, highlights the use of knock-off characters and nursery rhymes in disturbing content, and frames it as a problem of moderation and legislation. YouTube Kids, an official app which claims to be kid-safe but is quite obviously not, is the problem identified, because it wrongly engenders trust in users. An article in the British tabloid The Sun, “Kids left traumatised after sick YouTube clips showing Peppa Pig characters with knives and guns appear on app for children” takes the same line, with an added dose of right-wing technophobia and self-righteousness. But both stories take at face value YouTube’s assertions that these results are incredibly rare and quickly removed: assertions utterly refuted by the proliferation of the stories themselves, and the growing number of social media posts, largely by concerned parents, from which they arise.

But as with Toy Freaks, what is concerning to me about the Peppa videos is how the obvious parodies and even the shadier knock-offs interact with the legions of algorithmic content producers until it is completely impossible to know what is going on. (“The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which.”)

Here’s what is basically a version of Toy Freaks produced in Asia (screenshot above). Here’s one from Russia. I don’t really want to use the term “human-led” any more about these videos, although they contain all the same tropes and actual people acting them out. I no longer have any idea what’s going on here and I really don’t want to and I’m starting to think that that is kind of the point. That’s part of why I’m starting to think about the deliberateness of this all. There is a lot of effort going into making these. More than spam revenue can generate — can it? Who’s writing these scripts, editing these videos? Once again, I want to stress: this is still really mild, even funny stuff compared to a lot of what is out there.

Here are a few things which are disturbing me:

The first is the level of horror and violence on display. Some of the time it’s troll-y gross-out stuff; most of the time it seems deeper, and more unconscious than that. The internet has a way of amplifying and enabling many of our latent desires; in fact, it’s what it seems to do best. I spend a lot of time arguing for this tendency, with regards to human sexual freedom, individual identity, and other issues. Here, it sometimes overwhelmingly feels, that tendency is itself a violent and destructive one.

The second is the levels of exploitation, not of children because they are children but of children because they are powerless. Automated reward systems like YouTube algorithms necessitate exploitation in the same way that capitalism necessitates exploitation, and if you’re someone who bristles at the second half of that equation then maybe this should be what convinces you of its truth. Exploitation is encoded into the systems we are building, making it harder to see, harder to think and explain, harder to counter and defend against. Not in a future of AI overlords and robots in the factories, but right here, now, on your screen, in your living room and in your pocket.

Many of these latest examples confound any attempt to argue that nobody is actually watching these videos, that these are all bots. There are humans in the loop here, even if only on the production side, and I’m pretty worried about them too.

I’ve written enough, too much, but I feel like I actually need to justify all this raving about violence and abuse and automated systems with an example that sums it up. Maybe after everything I’ve said you won’t think it’s so bad. I don’t know what to think any more.

This video, BURIED ALIVE Outdoor Playground Finger Family Song Nursery Rhymes Animation Education Learning Video, contains all of the elements we’ve covered above, and takes them to another level. Familiar characters, nursery tropes, keyword salad, full automation, violence, and the very stuff of kids’ worst dreams. And of course there are vast, vast numbers of these videos. Channel after channel after channel of similar content, churned out at the rate of hundreds of new videos every week. Industrialised nightmare production.

For the final time: There is more violent and more sexual content like this available. I’m not going to link to it. I don’t believe in traumatising other people, but it’s necessary to keep stressing it, and not dismiss the psychological effect on children of things which aren’t overtly disturbing to adults, just incredibly dark and weird.

A friend who works in digital video described to me what it would take to make something like this: a small studio of people (half a dozen, maybe more) making high volumes of low quality content to reap ad revenue by tripping certain requirements of the system (length in particular seems to be a factor). According to my friend, online kids’ content is one of the few alternative ways of making money from 3D animation because the aesthetic standards are lower and independent production can profit through scale. It uses existing and easily available content (such as character models and motion-capture libraries) and it can be repeated and revised endlessly and mostly meaninglessly because the algorithms don’t discriminate — and neither do the kids.

These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue) are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously-generated, emergent outcomes of that are all over the place.

To expose children to this content is abuse. We’re not talking about the debatable but undoubtedly real effects of film or videogame violence on teenagers, or the effects of pornography or extreme images on young minds, which were alluded to in my opening description of my own teenage internet use. Those are important debates, but they’re not what is being discussed here. What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.

This, I think, is my point: The system is complicit in the abuse.

And right now, right here, YouTube and Google are complicit in that system. The architecture they have built to extract the maximum revenue from online video is being hacked by persons unknown to abuse children, perhaps not even deliberately, but at a massive scale. I believe they have an absolute responsibility to deal with this, just as they have a responsibility to deal with the radicalisation of (mostly) young (mostly) men via extremist videos — of any political persuasion. They have so far shown absolutely no inclination to do this, which is in itself despicable. However, a huge part of my troubled response to this issue is that I have no idea how they can respond without shutting down the service itself, and most systems which resemble it. We have built a world which operates at scale, where human oversight is simply impossible, and no manner of inhuman oversight will counter most of the examples I’ve used in this essay. The asides I’ve kept in parentheses throughout, if expanded upon, would allow one with minimal effort to rewrite everything I’ve said to be not about child abuse, but about white nationalism, about violent religious ideologies, about fake news, about climate denialism, about 9/11 conspiracies.

This is a deeply dark time, in which the structures we have built to sustain ourselves are being used against us — all of us — in systematic and automated ways. It is hard to keep faith with the network when it produces horrors such as these. While it is tempting to dismiss the wilder examples as trolling, of which a significant number certainly are, that fails to account for the sheer volume of content weighted in a particularly grotesque direction. It presents many and complexly entangled dangers, including that, just as with the increasing focus on alleged Russian interference in social media, such events will be used as justification for increased control over the internet, increasing censorship, and so on. This is not what many of us want.

I’m going to stop here, saying only this:

What concerns me is not just the violence being done to children here, although that concerns me deeply. What concerns me is that this is just one aspect of a kind of infrastructural violence being done to all of us, all of the time, and we’re still struggling to find a way to even talk about it, to describe its mechanisms and its actions and its effects. As I said at the beginning of this essay: this is being done by people and by things and by a combination of things and people. Responsibility for its outcomes is impossible to assign but the damage is very, very real indeed.

 

Algorithmic Control and the Revolution of Desire


By Alfie Brown

Source: ROAR Magazine

Last year, Stanford University published a study confirming what many of us may long have suspected: that your computer can predict what you want with more accuracy than your spouse or your friends. Your digital footprint betrays the truth not only about what you “like” but about what you really like — or so the argument goes. But what if our digital footprints, besides revealing our desires, are also responsible for the very construction of these desires? If that were the case, we would need to display a far deeper level of suspicion towards the complex patterns of corporate and state control found in contemporary cyberspace.

There is little doubt that innovations in mobile technologies are part of emerging methodologies of social control. In particular, games and applications that make use of the Google Maps back-end system — including Uber, Grindr, Pokémon Go and hundreds of others — which should be seen as one of the most important technological developments of the last decade or so, are particularly complicit in these new regulatory practices. Putting the well-publicized data collection issue aside, such applications have two powerful ideological functions. First, they construct the new “geographical contours” of the city, regulating the paths we take and mapping the city in the service of both corporate interest and the prevention of uprisings. Second, and more unconsciously, they enact what Jean-François Lyotard once called the “desirevolution” — an evolution and revolution of desire, in which what we want is itself determined by the digital paths we tread.

The Psycho-Geographical Contours of the City 

In 1981, the French theorist Guy Debord famously wrote of the “psycho-geographical contours” of the city that govern the routes we take, even when we may feel we are wandering freely around the physical space. At that time, it was Debord’s topic — architecture — that was the dominant force in re-organizing our routes through the city. Today, however, that role is increasingly taken up by the mobile phone. It is Uber that dictates the path of your taxi, Maps that dictates the route of your walks and drives, and Pokémon Go that (for a summer at least) determined where the next crowd would gather.

Other similar map-based application programming interfaces, or APIs, dictate our jogging routes (MapMyRun), our recreational hikes (LiveTrekker) and our tourist activities (TripAdvisor Guides). Pokémon Go attracted some publicity because it accidentally and humorously gathered crowds in weird places, but this should only alert us to its potential ability to gather crowds in the right places (to serve corporate interest) or to prevent the gathering of crowds in the wrong ones (to prevent organized uprisings, for instance). Such applications should be seen as a testing phase in the project of Google and its affiliated corporations as they work out how best to regulate the movements of large populations via their phones. Pokémon Go players were the early cyborgs, complete with hiccups and malfunctions — a beta version of Google’s future human. These future humans will go where instructed.

On a smaller scale, this point can be seen in concrete terms with a case study of London. A recent Transport for London talk discussed the possibility of “gamifying” commuting. In order to facilitate this possibility, Transport for London has made the API and data streams used to monitor all London Transport vehicles open source and open access, in the hope that developers will build London-focused apps around the public transport system, thus maximizing profit. One idea is that if a particular tube station is at risk of becoming clogged up due to other delays, TfL could give “in-game rewards” to people willing to use alternative routes and thus smooth out the jam.

While traffic jam prevention may not seem like evidence that we have arrived in the dystopia of total corporate and state control, it does reveal the dangerous potential in such technologies. It shows that the UK is not as far removed as it might seem from the “social credit” game system recently implemented in Beijing to rate each citizen’s trustworthiness and give them rewards for their dedication to the Chinese state. While the UK media reacted with shock to these innovations in Chinese app development, a closer look at the electronic structures of mapping and controlling our own movements shows that a similar framework is already in its development phase in London too. In the “smart city” of the future, it won’t just be traffic jams that are smoothed out. Any inefficient misuse or any occupation of public space deemed dangerous by the authorities can be specifically targeted.

The Corporate Surveillance State

When it comes to these developments in technology, state and corporate forces work more closely with each other than ever before — and much more closely than they are willing to admit. Srećko Horvat has pointed out the short distance between the creators of Pokémon Go and Hillary Clinton, despite her odd and unsolicited recent public claim that she didn’t know who made the game. Likewise, Julian Assange’s strangely under-discussed 2014 book When Google Met WikiLeaks showed the shocking proximity of Google chief Eric Schmidt and the Washington state apparatus. In terms of surveillance and the use of big data, it has become impossible to sustain the distinction between state control and the production of wealth, since the two have become so irrevocably intertwined. As such, old arguments that “it’s all just about money” need to be treated with greater suspicion, since major firms today are so closely tied to the state. Various aspects of state organization should likewise be considered equally suspect because of their corporate underpinnings.

Of course, when it comes to the mapping applications that promise to help us access the best quality objects of our desire with the greatest efficiency and the least cost, these tempting forces of joint corporate and state control are entered into willingly by participants. As such, they require something else in order to function in the all-consuming way that they do. Far from simply channeling and transforming our movements, they also need to channel and even transform our desires.

We are now firmly within the world of the electronic object, where the mediation of everything from lovers and friends to meals and activities via our mobile phones and computers makes it virtually impossible to separate physical from electronic objectivity. Whilst the electronic Pokémon or the “in-game rewards” offered by many applications may not yet have the physicality of a lover who can be accessed via Tinder, or a burger that can be located via JustEat, the burger and the lover certainly have the electronic objectivity of the Pokémon. We can therefore see a transformation in the objects of desire taking place by and through our devices, so that we are confronted not only with a change in how we get what we want, but with a change in what we want in the first place.

Italo Calvino once wrote of the “amorous relationship” that “erases the lines between our bodies and sopa de frijoles, huachinango a la vera cruzana, and enchiladas.” While in such a moment food and lover become one in a kind of orgy of physical consumption, in the same novel Calvino warned of a time “when the olfactory alphabet, which made them so many words in a precious lexicon, is forgotten,” and in which “perfumes will be left speechless, inarticulate, illegible.”

It is this world that we find ourselves desiring in, where an orgy of electronic objects with no olfactory physicality blurs the distinction between lovers, meals and “in-game” rewards. The purpose of this shift, of course, is to increase the power of technological corporations by giving them a new sort of control over the way we relate to our objects of desire. If the boundaries between the way we search, desire and acquire our burgers, lovers and Pikachus are dissolving, it is not so much the old point that everything has become a commodity, but a new point that this kind of substitutional electronic objectivity endows corporate and state technologists with unprecedented power to distribute and redistribute the objects of desire around the “smart city.”

Data Centralization in China and the West

There is, moreover, a significant centralization of power underpinning these developments. Like the social credit idea, the Chinese phenomenon of WeChat — developed in 2011 by Tencent, one of the largest internet and mobile media companies in the world — has received concerned media coverage in the West. WeChat is the first truly successful “SuperApp,” the basic premise of which is that all applications like WhatsApp, Facebook, Instagram, OpenRice, Tinder, TripAdvisor and many more, are rolled into one cohesive application. All for our convenience, of course.

As a result, however, there is now a new level of cohesion between the data-collection and movement monitoring going on in the mobile phone as a whole, where all data is now directly collected in a single place. More than half of the 1.1 billion WeChat users access the app over 10 times per day, and many users simply leave it on continuously, using it to map, shop, date and play. This means that the app sets a new precedent for continually monitoring the movements of a whole nation of citizens. WeChat’s incredibly strange “heat map” feature actually lets users — and authorities — see where crowds are forming. The claim is that this has nothing to do with crowd control: the objective is simply to help us access the least crowded shopping malls, doing nothing more than helping us get what we want.

WeChat is already the most popular social media application in China, but it will soon have huge significance worldwide, with an international version now available and many replica “SuperApps” in production. What the Western media finds to be so concerning about WeChat is once again something that already exists here in the West, at least in beta form, without us knowing it. WeChat actually offers us a glimpse into an Orwellian future in which companies and governments can track every movement we make. While in China the blocking of Google means that WeChat uses Baidu Maps as its API, the international version of WeChat simply taps into Google Maps, showing just how deeply integrated these corporate technologies already are.

What emerges from Western media coverage of these developments is the continued insistence on an apparent division between the public and the private sphere in the United States and Europe. When it comes to digital surveillance and the monitoring of movement, the situation is almost certainly better in the West than it is in China at this moment. Yet from an analysis of recent developments in China we learn not only that we need to be attentive to similar dangers here in the West, but also that there are powerful ideological mechanisms at play to obscure these developments by presenting China and the US as fundamentally opposed to one another. Whilst in China the links between the new SuperApps and the state are commonly accepted, in the US the illusion of privacy remains paramount. Although data is often shared between different corporations and between the public and the private sectors, this fact is generally obscured. The continued expressions of shock at the more openly centralized state control visible in China serve only to further consolidate the impression that these things are not happening in the US and Europe.

Furthermore, WeChat reveals more than the dangers of mass data collection and new levels of technological surveillance. It also embodies the power of the phone over the objects of desire. Since one single app can successfully market us food, lovers, holidays, events, blogs and even charities, the connections between such “objects” become more important than the differences. While the structural similarities between Grindr, Pokémon Go and OpenRice become apparent via analysis of both their interfaces and their back-end systems, WeChat makes the connections plain to see. The various forms and objects of each individual’s desire no longer represent discrete and separable elements of a subject’s life. Instead we enter a fully cohesive libidinal economy in which we are increasingly regulated and mapped via the organization of what and how we desire.

The Desirevolution

So what do we do when faced with this revolution — a technological revolution that is not overthrowing any existing power structures but rather transforming the world in the service of private corporations and the state? Often, the response of those concerned by such developments is to express hostility or distrust towards technology itself. Yet to break this corporate organization of desire, we need not nostalgically yearn for a desire that is free of politics and technology, for no such desire is possible. On the contrary, what we need is to recognize that desire is necessarily and always controlled by both politics and technology.

This awareness would be the first step towards ensuring that the centralized corporate and state organization of desire malfunctions — and, ultimately, it would be the first step towards its potential reprogramming. The corporate desirevolution depends on our blindness to the politics of its technologies, asking us to experience our desires as spontaneous yearning and our mobile phone and its powerful apps as just tools for our convenience, helping us get what we want in the easiest way possible. We need to recognize that this is far from the case. The principal concern of those who own the apps — perhaps even more powerful than data collection — is to transform desire itself. At the very least, we can make visible the complicity of such technologies in producing the perfect conformist modern citizen.

The new mind control


The internet has spawned subtle forms of influence that can flip elections and manipulate everything we say, think and do

By Robert Epstein

Source: Aeon Magazine

Over the past century, more than a few great writers have expressed concern about humanity’s future. In The Iron Heel (1908), the American writer Jack London pictured a world in which a handful of wealthy corporate titans – the ‘oligarchs’ – kept the masses at bay with a brutal combination of rewards and punishments. Much of humanity lived in virtual slavery, while the fortunate ones were bought off with decent wages that allowed them to live comfortably – but without any real control over their lives.

In We (1924), the brilliant Russian writer Yevgeny Zamyatin, anticipating the excesses of the emerging Soviet Union, envisioned a world in which people were kept in check through pervasive monitoring. The walls of their homes were made of clear glass, so everything they did could be observed. They were allowed to lower their shades an hour a day to have sex, but both the rendezvous time and the lover had to be registered first with the state.

In Brave New World (1932), the British author Aldous Huxley pictured a near-perfect society in which unhappiness and aggression had been engineered out of humanity through a combination of genetic engineering and psychological conditioning. And in the much darker novel 1984 (1949), Huxley’s compatriot George Orwell described a society in which thought itself was controlled; in Orwell’s world, children were taught to use a simplified form of English called Newspeak in order to assure that they could never express ideas that were dangerous to society.

These are all fictional tales, to be sure, and in each the leaders who held the power used conspicuous forms of control that at least a few people actively resisted and occasionally overcame. But in the non-fiction bestseller The Hidden Persuaders (1957) – recently released in a 50th-anniversary edition – the American journalist Vance Packard described a ‘strange and rather exotic’ type of influence that was rapidly emerging in the United States and that was, in a way, more threatening than the fictional types of control pictured in the novels. According to Packard, US corporate executives and politicians were beginning to use subtle and, in many cases, completely undetectable methods to change people’s thinking, emotions and behaviour based on insights from psychiatry and the social sciences.

Most of us have heard of at least one of these methods: subliminal stimulation, or what Packard called ‘subthreshold effects’ – the presentation of short messages that tell us what to do but that are flashed so briefly we aren’t aware we have seen them. In 1958, propelled by public concern about a theatre in New Jersey that had supposedly hidden messages in a movie to increase ice cream sales, the National Association of Broadcasters – the association that set standards for US television – amended its code to prohibit the use of subliminal messages in broadcasting. In 1974, the Federal Communications Commission opined that the use of such messages was ‘contrary to the public interest’. Legislation to prohibit subliminal messaging was also introduced in the US Congress but never enacted. Both the UK and Australia have strict laws prohibiting it.

Subliminal stimulation is probably still in wide use in the US – it’s hard to detect, after all, and no one is keeping track of it – but it’s probably not worth worrying about. Research suggests that it has only a small impact, and that it mainly influences people who are already motivated to follow its dictates; subliminal directives to drink affect people only if they’re already thirsty.

Packard had uncovered a much bigger problem, however – namely that powerful corporations were constantly looking for, and in many cases already applying, a wide variety of techniques for controlling people without their knowledge. He described a kind of cabal in which marketers worked closely with social scientists to determine, among other things, how to get people to buy things they didn’t need and how to condition young children to be good consumers – inclinations that were explicitly nurtured and trained in Huxley’s Brave New World. Guided by social science, marketers were quickly learning how to play upon people’s insecurities, frailties, unconscious fears, aggressive feelings and sexual desires to alter their thinking, emotions and behaviour without any awareness that they were being manipulated.

By the early 1950s, Packard said, politicians had got the message and were beginning to merchandise themselves using the same subtle forces being used to sell soap. Packard prefaced his chapter on politics with an unsettling quote from the British economist Kenneth Boulding: ‘A world of unseen dictatorship is conceivable, still using the forms of democratic government.’ Could this really happen, and, if so, how would it work?

The forces that Packard described have become more pervasive over the decades. The soothing music we all hear overhead in supermarkets causes us to walk more slowly and buy more food, whether we need it or not. Most of the vacuous thoughts and intense feelings our teenagers experience from morning till night are carefully orchestrated by highly skilled marketing professionals working in our fashion and entertainment industries. Politicians work with a wide range of consultants who test every aspect of what the politicians do in order to sway voters: clothing, intonations, facial expressions, makeup, hairstyles and speeches are all optimised, just like the packaging of a breakfast cereal.

Fortunately, all of these sources of influence operate competitively. Some of the persuaders want us to buy or believe one thing, others to buy or believe something else. It is the competitive nature of our society that keeps us, on balance, relatively free.

But what would happen if new sources of control began to emerge that had little or no competition? And what if new means of control were developed that were far more powerful – and far more invisible – than any that have existed in the past? And what if new types of control allowed a handful of people to exert enormous influence not just over the citizens of the US but over most of the people on Earth?

It might surprise you to hear this, but these things have already happened.

To understand how the new forms of mind control work, we need to start by looking at the search engine – one in particular: the biggest and best of them all, namely Google. The Google search engine is so good and so popular that the company’s name is now a commonly used verb in languages around the world. To ‘Google’ something is to look it up on the Google search engine, and that, in fact, is how most computer users worldwide get most of their information about just about everything these days. They Google it. Google has become the main gateway to virtually all knowledge, mainly because the search engine is so good at giving us exactly the information we are looking for, almost instantly and almost always in the first position of the list it shows us after we launch our search – the list of ‘search results’.

That ordered list is so good, in fact, that about 50 per cent of our clicks go to the top two items, and more than 90 per cent of our clicks go to the 10 items listed on the first page of results; few people look at other results pages, even though they often number in the thousands, which means they probably contain lots of good information. Google decides which of the billions of web pages it is going to include in our search results, and it also decides how to rank them. How it decides these things is a deep, dark secret – one of the best-kept secrets in the world, like the formula for Coca-Cola.

Because people are far more likely to read and click on higher-ranked items, companies now spend billions of dollars every year trying to trick Google’s search algorithm – the computer program that does the selecting and ranking – into boosting them another notch or two. Moving up a notch can mean the difference between success and failure for a business, and moving into the top slots can be the key to fat profits.

Late in 2012, I began to wonder whether highly ranked search results could be impacting more than consumer choices. Perhaps, I speculated, a top search result could have a small impact on people’s opinions about things. Early in 2013, with my associate Ronald E Robertson of the American Institute for Behavioral Research and Technology in Vista, California, I put this idea to a test by conducting an experiment in which 102 people from the San Diego area were randomly assigned to one of three groups. In one group, people saw search results that favoured one political candidate – that is, results that linked to web pages that made this candidate look better than his or her opponent. In a second group, people saw search rankings that favoured the opposing candidate, and in the third group – the control group – people saw a mix of rankings that favoured neither candidate. The same search results and web pages were used in each group; the only thing that differed for the three groups was the ordering of the search results.

To make our experiment realistic, we used real search results that linked to real web pages. We also used a real election – the 2010 election for the prime minister of Australia. We used a foreign election to make sure that our participants were ‘undecided’. Their lack of familiarity with the candidates assured this. Through advertisements, we also recruited an ethnically diverse group of registered voters over a wide age range in order to match key demographic characteristics of the US voting population.

All participants were first given brief descriptions of the candidates and then asked to rate them in various ways, as well as to indicate which candidate they would vote for; as you might expect, participants initially favoured neither candidate on any of the five measures we used, and the vote was evenly split in all three groups. Then the participants were given up to 15 minutes in which to conduct an online search using ‘Kadoodle’, our mock search engine, which gave them access to five pages of search results that linked to web pages. People could move freely between search results and web pages, just as we do when using Google. When participants completed their search, we asked them to rate the candidates again, and we also asked them again who they would vote for.

We predicted that the opinions and voting preferences of 2 or 3 per cent of the people in the two bias groups – the groups in which people were seeing rankings favouring one candidate – would shift toward that candidate. What we actually found was astonishing. The proportion of people favouring the search engine’s top-ranked candidate increased by 48.4 per cent, and all five of our measures shifted toward that candidate. What’s more, 75 per cent of the people in the bias groups seemed to have been completely unaware that they were viewing biased search rankings. In the control group, opinions did not shift significantly.

This seemed to be a major discovery. The shift we had produced, which we called the Search Engine Manipulation Effect (or SEME, pronounced ‘seem’), appeared to be one of the largest behavioural effects ever discovered. We did not immediately uncork the Champagne bottle, however. For one thing, we had tested only a small number of people, and they were all from the San Diego area.

Over the next year or so, we replicated our findings three more times, and the third time was with a sample of more than 2,000 people from all 50 US states. In that experiment, the shift in voting preferences was 37.1 per cent and even higher in some demographic groups – as high as 80 per cent, in fact.

We also learned in this series of experiments that by reducing the bias just slightly on the first page of search results – specifically, by including one search item that favoured the other candidate in the third or fourth position of the results – we could mask our manipulation so that few or even no people were aware that they were seeing biased rankings. We could still produce dramatic shifts in voting preferences, but we could do so invisibly.

Still no Champagne, though. Our results were strong and consistent, but our experiments all involved a foreign election – that 2010 election in Australia. Could voting preferences be shifted with real voters in the middle of a real campaign? We were skeptical. In real elections, people are bombarded with multiple sources of information, and they also know a lot about the candidates. It seemed unlikely that a single experience on a search engine would have much impact on their voting preferences.

To find out, in early 2014, we went to India just before voting began in the largest democratic election in the world – the Lok Sabha election for prime minister. The three main candidates were Rahul Gandhi, Arvind Kejriwal, and Narendra Modi. Making use of online subject pools and both online and print advertisements, we recruited 2,150 people from 27 of India’s 35 states and territories to participate in our experiment. To take part, they had to be registered voters who had not yet voted and who were still undecided about how they would vote.

Participants were randomly assigned to three search-engine groups, favouring, respectively, Gandhi, Kejriwal or Modi. As one might expect, familiarity levels with the candidates were high – between 7.7 and 8.5 on a scale of 10. We predicted that our manipulation would produce a very small effect, if any, but that’s not what we found. On average, we were able to shift the proportion of people favouring any given candidate by more than 20 per cent overall and more than 60 per cent in some demographic groups. Even more disturbing, 99.5 per cent of our participants showed no awareness that they were viewing biased search rankings – in other words, that they were being manipulated.

SEME’s near-invisibility is curious indeed. It means that when people – including you and me – are looking at biased search rankings, they look just fine. So if right now you Google ‘US presidential candidates’, the search results you see will probably look fairly random, even if they happen to favour one candidate. Even I have trouble detecting bias in search rankings that I know to be biased (because they were prepared by my staff). Yet our randomised, controlled experiments tell us over and over again that when higher-ranked items connect with web pages that favour one candidate, this has a dramatic impact on the opinions of undecided voters, in large part for the simple reason that people tend to click only on higher-ranked items. This is truly scary: like subliminal stimuli, SEME is a force you can’t see; but unlike subliminal stimuli, it has an enormous impact – like Casper the ghost pushing you down a flight of stairs.

We published a detailed report about our first five experiments on SEME in the prestigious Proceedings of the National Academy of Sciences (PNAS) in August 2015. We had indeed found something important, especially given Google’s dominance over search. Google has a near-monopoly on internet searches in the US, with 83 per cent of Americans specifying Google as the search engine they use most often, according to the Pew Research Center. So if Google favours one candidate in an election, its impact on undecided voters could easily decide the election’s outcome.

Keep in mind that we had had only one shot at our participants. What would be the impact of favouring one candidate in searches people are conducting over a period of weeks or months before an election? It would almost certainly be much larger than what we were seeing in our experiments.

Other types of influence during an election campaign are balanced by competing sources of influence – a wide variety of newspapers, radio shows and television networks, for example – but Google, for all intents and purposes, has no competition, and people trust its search results implicitly, assuming that the company’s mysterious search algorithm is entirely objective and unbiased. This high level of trust, combined with the lack of competition, puts Google in a unique position to impact elections. Even more disturbing, the search-ranking business is entirely unregulated, so Google could favour any candidate it likes without violating any laws. Some courts have even ruled that Google’s right to rank-order search results as it pleases is protected as a form of free speech.

Does the company ever favour particular candidates? In the 2012 US presidential election, Google and its top executives donated more than $800,000 to President Barack Obama and just $37,000 to his opponent, Mitt Romney. And in 2015, a team of researchers from the University of Maryland and elsewhere showed that Google’s search results routinely favoured Democratic candidates. Are Google’s search rankings really biased? An internal report issued by the US Federal Trade Commission in 2012 concluded that Google’s search rankings routinely put Google’s financial interests ahead of those of their competitors, and anti-trust actions currently under way against Google in both the European Union and India are based on similar findings.

In most countries, 90 per cent of online search is conducted on Google, which gives the company even more power to flip elections than it has in the US and, with internet penetration increasing rapidly worldwide, this power is growing. In our PNAS article, Robertson and I calculated that Google now has the power to flip upwards of 25 per cent of the national elections in the world with no one knowing this is occurring. In fact, we estimate that, with or without deliberate planning on the part of company executives, Google’s search rankings have been impacting elections for years, with growing impact each year. And because search rankings are ephemeral, they leave no paper trail, which gives the company complete deniability.

Power on this scale and with this level of invisibility is unprecedented in human history. But it turns out that our discovery about SEME was just the tip of a very large iceberg.

Recent reports suggest that the Democratic presidential candidate Hillary Clinton is making heavy use of social media to try to generate support – Twitter, Instagram, Pinterest, Snapchat and Facebook, for starters. At this writing, she has 5.4 million followers on Twitter, and her staff is tweeting several times an hour during waking hours. The Republican frontrunner, Donald Trump, has 5.9 million Twitter followers and is tweeting just as frequently.

Is social media as big a threat to democracy as search rankings appear to be? Not necessarily. When new technologies are used competitively, they present no threat. Even though the platforms are new, they are generally being used the same way billboards and television commercials have been used for decades: you put a billboard on one side of the street; I put one on the other. I might have the money to erect more billboards than you, but the process is still competitive.

What happens, though, if such technologies are misused by the companies that own them? A study by Robert M Bond, now a political science professor at Ohio State University, and others published in Nature in 2012 described an ethically questionable experiment in which, on election day in 2010, Facebook sent ‘go out and vote’ reminders to more than 60 million of its users. The reminders caused about 340,000 people to vote who otherwise would not have. Writing in the New Republic in 2014, Jonathan Zittrain, professor of international law at Harvard University, pointed out that, given the massive amount of information it has collected about its users, Facebook could easily send such messages only to people who support one particular party or candidate, and that doing so could easily flip a close election – with no one knowing that this has occurred. And because advertisements, like search rankings, are ephemeral, manipulating an election in this way would leave no paper trail.

Are there laws prohibiting Facebook from sending out ads selectively to certain users? Absolutely not; in fact, targeted advertising is how Facebook makes its money. Is Facebook currently manipulating elections in this way? No one knows, but in my view it would be foolish and possibly even improper for Facebook not to do so. Some candidates are better for a company than others, and Facebook’s executives have a fiduciary responsibility to the company’s stockholders to promote the company’s interests.

The Bond study was largely ignored, but another Facebook experiment, published in 2014 in PNAS, prompted protests around the world. In this study, for a period of a week, 689,000 Facebook users were sent news feeds that contained either an excess of positive terms, an excess of negative terms, or neither. Those in the first group subsequently used slightly more positive terms in their communications, while those in the second group used slightly more negative terms in their communications. This was said to show that people’s ‘emotional states’ could be deliberately manipulated on a massive scale by a social media company, an idea that many people found disturbing. People were also upset that a large-scale experiment on emotion had been conducted without the explicit consent of any of the participants.

Facebook’s consumer profiles are undoubtedly massive, but they pale in comparison with those maintained by Google, which is collecting information about people 24/7, using more than 60 different observation platforms – the search engine, of course, but also Google Wallet, Google Maps, Google Adwords, Google Analytics, Chrome, Google Docs, Android, YouTube, and on and on. Gmail users are generally oblivious to the fact that Google stores and analyses every email they write, even the drafts they never send – as well as all the incoming email they receive from both Gmail and non-Gmail users.

According to Google’s privacy policy – to which one assents whenever one uses a Google product, even without knowing one is using a Google product – Google can share the information it collects about you with almost anyone, including government agencies. But never with you. Google’s privacy is sacrosanct; yours is nonexistent.

Could Google and ‘those we work with’ (language from the privacy policy) use the information they are amassing about you for nefarious purposes – to manipulate or coerce, for example? Could inaccurate information in people’s profiles (which people have no way to correct) limit their opportunities or ruin their reputations?

Certainly, if Google set about to fix an election, it could first dip into its massive database of personal information to identify just those voters who are undecided. Then it could, day after day, send customised rankings favouring one candidate to just those people. One advantage of this approach is that it would make Google’s manipulation extremely difficult for investigators to detect.
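The targeting scheme described above can be sketched in a few lines of code. Everything here is invented for illustration – the profile fields, the page data and the selection logic are assumptions, not anything known about Google's actual systems – but it shows why this kind of manipulation would be so hard to detect: only the targeted users ever see the altered list.

```python
# Hypothetical sketch of the scheme described above: filter a user database
# for undecided voters, then serve only those users a ranking biased toward
# one candidate. All names and data structures are invented for illustration.

def is_undecided(profile):
    """Stand-in for profiling logic inferred from a user's data trail."""
    return profile.get("leaning") == "undecided"

def biased_ranking(results, favoured):
    """Move pages favourable to one candidate to the top of the list."""
    # sorted() is stable: pages matching the favoured candidate sort first
    # (False < True), while the relative order of the rest is preserved.
    return sorted(results, key=lambda page: page["candidate"] != favoured)

users = [
    {"id": 1, "leaning": "undecided"},
    {"id": 2, "leaning": "decided"},
]
results = [
    {"url": "a.example", "candidate": "X"},
    {"url": "b.example", "candidate": "Y"},
    {"url": "c.example", "candidate": "Y"},
]

for user in users:
    if is_undecided(user):
        served = biased_ranking(results, favoured="Y")
    else:
        served = results
    # Decided voters see the unmodified ranking, so comparing accounts
    # side by side would reveal nothing unless both were undecided.
```

Because the bias is applied per-user and the served list vanishes the moment the page is closed, an outside investigator comparing results across ordinary accounts would see nothing amiss.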

Extreme forms of monitoring, whether by the KGB in the Soviet Union, the Stasi in East Germany, or Big Brother in 1984, are essential elements of all tyrannies, and technology is making both monitoring and the consolidation of surveillance data easier than ever. By 2020, China will have put in place the most ambitious government monitoring system ever created – a single database called the Social Credit System, in which multiple ratings and records for all of its 1.3 billion citizens are recorded for easy access by officials and bureaucrats. At a glance, they will know whether someone has plagiarised schoolwork, was tardy in paying bills, urinated in public, or blogged inappropriately online.

As Edward Snowden’s revelations made clear, we are rapidly moving toward a world in which both governments and corporations – sometimes working together – are collecting massive amounts of data about every one of us every day, with few or no laws in place that restrict how those data can be used. When you combine the data collection with the desire to control or manipulate, the possibilities are endless, but perhaps the most frightening possibility is the one expressed in Boulding’s assertion that an ‘unseen dictatorship’ was possible ‘using the forms of democratic government’.

Since Robertson and I submitted our initial report on SEME to PNAS early in 2015, we have completed a sophisticated series of experiments that have greatly enhanced our understanding of this phenomenon, and other experiments will be completed in the coming months. We have a much better sense now of why SEME is so powerful and how, to some extent, it can be suppressed.

We have also learned something very disturbing – that search engines are influencing far more than what people buy and whom they vote for. We now have evidence suggesting that on virtually all issues where people are initially undecided, search rankings are impacting almost every decision that people make. They are having an impact on the opinions, beliefs, attitudes and behaviours of internet users worldwide – entirely without people’s knowledge that this is occurring. This is happening with or without deliberate intervention by company officials; even so-called ‘organic’ search processes regularly generate search results that favour one point of view, and that in turn has the potential to tip the opinions of millions of people who are undecided on an issue. In one of our recent experiments, biased search results shifted people’s opinions about the value of fracking by 33.9 per cent.

Perhaps even more disturbing is that the handful of people who do show awareness that they are viewing biased search rankings shift even further in the predicted direction; simply knowing that a list is biased doesn’t necessarily protect you from SEME’s power.

Remember what the search algorithm is doing: in response to your query, it is selecting a handful of webpages from among the billions that are available, and it is ordering those webpages using secret criteria. Seconds later, the decision you make or the opinion you form – about the best toothpaste to use, whether fracking is safe, where you should go on your next vacation, who would make the best president, or whether global warming is real – is determined by that short list you are shown, even though you have no idea how the list was generated.
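The point can be made concrete with a toy model. The scoring weights and page data below are pure invention – real ranking algorithms are proprietary and vastly more complex – but the structure is the same: a hidden scoring function decides which handful of pages, out of everything available, a user ever sees.

```python
# Illustrative toy "search engine": hidden scoring weights determine the
# short list a user is shown. The weights and pages are invented; the point
# is that whoever controls the weights controls what is visible.

HIDDEN_WEIGHTS = {"relevance": 1.0, "freshness": 0.3, "mystery": 2.0}

def score(page):
    """Combine page features using criteria the user never sees."""
    return sum(HIDDEN_WEIGHTS[k] * page[k] for k in HIDDEN_WEIGHTS)

def top_results(pages, n=3):
    """Return only the n highest-scoring pages; the rest are invisible."""
    return sorted(pages, key=score, reverse=True)[:n]

pages = [
    {"url": "neutral.example",  "relevance": 0.9, "freshness": 0.5, "mystery": 0.1},
    {"url": "favoured.example", "relevance": 0.5, "freshness": 0.5, "mystery": 0.9},
    {"url": "obscure.example",  "relevance": 0.2, "freshness": 0.1, "mystery": 0.0},
]

# The user sees only this shortlist and has no way to inspect
# HIDDEN_WEIGHTS or to know what was filtered out.
shortlist = top_results(pages, n=2)
```

Nudging a single hidden weight reorders the shortlist without changing the underlying pages at all, which is why biased rankings leave no trace the user could ever examine.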

Meanwhile, behind the scenes, a consolidation of search engines has been quietly taking place, so that more people are using the dominant search engine even when they think they are not. Because Google is the best search engine, and because crawling the rapidly expanding internet has become prohibitively expensive, more and more search engines are drawing their information from the leader rather than generating it themselves. The most recent deal, revealed in a Securities and Exchange Commission filing in October 2015, was between Google and Yahoo! Inc.

Looking ahead to the November 2016 US presidential election, I see clear signs that Google is backing Hillary Clinton. In April 2015, Clinton hired Stephanie Hannon away from Google to be her chief technology officer and, a few months ago, Eric Schmidt, chairman of the holding company that controls Google, set up a semi-secret company – The Groundwork – for the specific purpose of putting Clinton in office. The formation of The Groundwork prompted Julian Assange, founder of Wikileaks, to dub Google Clinton’s ‘secret weapon’ in her quest for the US presidency.

We now estimate that Hannon’s old friends have the power to drive between 2.6 and 10.4 million votes to Clinton on election day with no one knowing that this is occurring and without leaving a paper trail. They can also help her win the nomination, of course, by influencing undecided voters during the primaries. Swing voters have always been the key to winning elections, and there has never been a more powerful, efficient or inexpensive way to sway them than SEME.

We are living in a world in which a handful of high-tech companies, sometimes working hand-in-hand with governments, are not only monitoring much of our activity, but are also invisibly controlling more and more of what we think, feel, do and say. The technology that now surrounds us is not just a harmless toy; it has also made possible undetectable and untraceable manipulations of entire populations – manipulations that have no precedent in human history and that are currently well beyond the scope of existing regulations and laws. The new hidden persuaders are bigger, bolder and badder than anything Vance Packard ever envisioned. If we choose to ignore this, we do so at our peril.