Philip K. Dick and the Fake Humans

(Editor’s note: on this 36th anniversary of the passing of Philip K. Dick, it seems an appropriate time to note the relevance of his work to our current dystopia as Henry Farrell does in the following essay. Unfortunately the author is less astute regarding the ways in which the dystopias of Orwell and Huxley are equally relevant to our current milieu.)

By Henry Farrell

Source: Boston Review

This is not the dystopia we were promised. We are not learning to love Big Brother, who lives, if he lives at all, on a cluster of server farms, cooled by environmentally friendly technologies. Nor have we been lulled by Soma and subliminal brain programming into a hazy acquiescence to pervasive social hierarchies.

Dystopias tend toward fantasies of absolute control, in which the system sees all, knows all, and controls all. And our world is indeed one of ubiquitous surveillance. Phones and household devices produce trails of data, like particles in a cloud chamber, indicating our wants and behaviors to companies such as Facebook, Amazon, and Google. Yet the information thus produced is imperfect and classified by machine-learning algorithms that themselves make mistakes. The efforts of these businesses to manipulate our wants lead to further complexity. It is becoming ever harder for companies to distinguish the behavior they want to analyze from their own and others’ manipulations.

This does not look like totalitarianism unless you squint very hard indeed. As the sociologist Kieran Healy has suggested, sweeping political critiques of new technology often bear a strong family resemblance to the arguments of Silicon Valley boosters. Both assume that the technology works as advertised, which is not necessarily true at all.

Standard utopias and standard dystopias are each perfect after their own particular fashion. We live somewhere queasier—a world in which technology is developing in ways that make it increasingly hard to distinguish human beings from artificial things. The world that the Internet and social media have created is less a system than an ecology, a proliferation of unexpected niches, and entities created and adapted to exploit them in deceptive ways. Vast commercial architectures are being colonized by quasi-autonomous parasites. Scammers have built algorithms to write fake books from scratch to sell on Amazon, compiling and modifying text from other books and online sources such as Wikipedia, to fool buyers or to take advantage of loopholes in Amazon’s compensation structure. Much of the world’s financial system is made out of bots—automated systems designed to continually probe markets for fleeting arbitrage opportunities. Less sophisticated programs plague online commerce systems such as eBay and Amazon, occasionally with extraordinary consequences, as when two warring bots bid the price of a biology book up to $23,698,655.93 (plus $3.99 shipping).
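
The arms race requires no intelligence at all; a few lines of code suffice. Here is a minimal sketch of the repricing loop behind such incidents (the multipliers are illustrative guesses, not the actual ones): one bot always slightly undercuts the other, the other always prices at a fixed premium, and because the product of the two multipliers exceeds one, the price ratchets upward exponentially.

```python
# Toy simulation of two repricing bots locked in a feedback loop.
# Multipliers are hypothetical, chosen only to mimic the dynamic:
# bot A slightly undercuts B, bot B prices at a premium over A.

def simulate_price_war(price_a=17.99, price_b=18.99,
                       undercut=0.9983, premium=1.270589, days=60):
    for _ in range(days):
        price_a = round(price_b * undercut, 2)  # A undercuts B's price
        price_b = round(price_a * premium, 2)   # B reprices at a premium to A
    return price_a, price_b

# Each round multiplies the price by roughly 1.27; within two months of
# daily repricing, a $19 book costs tens of millions of dollars.
print(simulate_price_war())
```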

In other words, we live in Philip K. Dick’s future, not George Orwell’s or Aldous Huxley’s. Dick was no better a prophet of technology than any other science fiction writer, and was arguably worse than most. His imagined worlds jam together odd bits of fifties’ and sixties’ California with rocket ships, drugs, and social speculation. Dick usually wrote in a hurry and for money, and sometimes under the influence of drugs or a recent and urgent personal religious revelation.

Still, what he captured with genius was the ontological unease of a world in which the human and the abhuman, the real and the fake, blur together. As Dick described his work (in the opening essay to his 1985 collection, I Hope I Shall Arrive Soon):

The two basic topics which fascinate me are “What is reality?” and “What constitutes the authentic human being?” Over the twenty-seven years in which I have published novels and stories I have investigated these two interrelated topics over and over again.

These obsessions had some of their roots in Dick’s complex and ever-evolving personal mythology (in which it was perfectly plausible that the “real” world was a fake, and that we were all living in Palestine sometime in the first century AD). Yet they were also based on a keen interest in the processes through which reality is socially constructed. Dick believed that we all live in a world where “spurious realities are manufactured by the media, by governments, by big corporations, by religious groups, political groups—and the electronic hardware exists by which to deliver these pseudo-worlds right into the heads of the reader.” He argued:

the bombardment of pseudo-realities begins to produce inauthentic humans very quickly, spurious humans—as fake as the data pressing at them from all sides. My two topics are really one topic; they unite at this point. Fake realities will create fake humans. Or, fake humans will generate fake realities and then sell them to other humans, turning them, eventually, into forgeries of themselves. So we wind up with fake humans inventing fake realities and then peddling them to other fake humans.

In Dick’s books, the real and the unreal infect each other, so that it becomes increasingly impossible to tell the difference between them. The worlds of the dead and the living merge in Ubik (1969), the experiences of a disturbed child infect the world around him in Martian Time-Slip (1964), and consensual drug-based hallucinations become the vector for an invasive alien intelligence in The Three Stigmata of Palmer Eldritch (1965). Humans are impersonated by malign androids in Do Androids Dream of Electric Sheep? (1968) and “Second Variety” (1953); by aliens in “The Hanging Stranger” (1953) and “The Father-Thing” (1954); and by mutants in “The Golden Man” (1954).

This concern with unreal worlds and unreal people brought with it a consequent worry: that it was becoming ever harder to distinguish between them. Factories pump out fake Americana in The Man in the High Castle (1962), mirroring the problem of living in a world that is not, in fact, the real one. Entrepreneurs build increasingly human-like androids in Do Androids Dream of Electric Sheep?, reasoning that if they do not, then their competitors will. Figuring out what is real and what is not is not easy. Scientific tools such as the famous Voight-Kampff test in Do Androids Dream of Electric Sheep? (and Blade Runner, Ridley Scott’s 1982 movie based loosely on it) do not work very well, leaving us with little more than hope in some mystical force—the I Ching, God in a spray can, a Martian water-witch—to guide us back toward the real.

We live in Dick’s world—but with little hope of divine intervention or invasion. The world where we communicate and interact at a distance is increasingly filled with algorithms that appear human, but are not—fake people generated by fake realities. When Ashley Madison, a dating site for people who want to cheat on their spouses, was hacked, it turned out that tens of thousands of the women on the site were fake “fembots” programmed to send millions of chatty messages to male customers, so as to delude them into thinking that they were surrounded by vast numbers of potential sexual partners.

These problems are only likely to get worse as the physical world and the world of information become increasingly interpenetrated in an Internet of (badly functioning) Things. Many of the aspects of Joe Chip’s future world in Ubik look horrendously dated to modern eyes: the archaic role of women, the assumption that nearly everyone smokes. Yet the door to Joe’s apartment—which argues with him and refuses to open because he has not paid it the obligatory tip—sounds ominously plausible. Someone, somewhere, is pitching this as a viable business plan to Y Combinator or the venture capitalists in Menlo Park.

This invasion of the real by the unreal has had consequences for politics. The hallucinatory realities in Dick’s worlds—the empathetic religion of Do Androids Dream of Electric Sheep?, the drug-produced worlds of The Three Stigmata of Palmer Eldritch, the quasi–Tibetan Buddhist death realm of Ubik—are usually experienced by many people, like the television shows of Dick’s America. But as network television has given way to the Internet, it has become easy for people to create their own idiosyncratic mix of sources. The imposed media consensus that Dick detested has shattered into a myriad of different realities, each with its own partially shared assumptions and facts. Sometimes this creates tragedy or near-tragedy. The deluded gunman who stormed into Washington, D.C.’s Comet Ping Pong pizzeria had been convinced by online conspiracy sites that it was the coordinating center for Hillary Clinton’s child–sex trafficking ring [likewise, the masses may have been convinced by mainstream media that a real child-sex trafficking ring never existed].

Such fractured worlds are more vulnerable to invasion by the non-human. Many Twitter accounts are bots, often with the names and stolen photographs of implausibly beautiful young women, looking to pitch this or that product (one recent academic study found that between 9 and 15 percent of all Twitter accounts are likely fake). Twitterbots range from crude accounts that do no more than retweet what other bots have said to sophisticated algorithms deploying so-called “Sybil attacks,” creating fake identities in peer-to-peer networks to invade specific organizations or degrade particular kinds of conversation.
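
For a sense of how hard the detection problem is, consider a deliberately simplified scoring heuristic. The features and weights below are invented for illustration; the academic detectors behind the 9 to 15 percent estimate use trained classifiers over hundreds of features, and even they disagree at the margins.

```python
# A crude, hand-weighted bot-scoring heuristic, for illustration only.
# Real detection systems learn their features and weights from labeled data.

def bot_score(account):
    score = 0.0
    if account["retweet_ratio"] > 0.9:     # almost never posts original text
        score += 0.4
    if account["tweets_per_day"] > 100:    # an inhuman posting rate
        score += 0.3
    if account["followers"] < 10 and account["following"] > 1000:
        score += 0.3                       # classic follow-spam pattern
    return round(score, 2)                 # 0.0 likely human, 1.0 likely bot

suspect = {"retweet_ratio": 0.97, "tweets_per_day": 240,
           "followers": 3, "following": 4200}
print(bot_score(suspect))                  # -> 1.0
```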

Twitter has failed to become a true mass medium, but remains extraordinarily important to politics, since it is where many politicians, journalists, and other elites turn to get their news. One research project suggests that around 20 percent of the measurable political discussion around the last presidential election came from bots. Humans appear to be no better at detecting bots than the characters in Dick’s novel are at detecting androids: people are about as likely to retweet a bot’s message as the message of another human being. Most notoriously, the current U.S. president recently retweeted a flattering message that appears to have come from a bot densely connected to a network of other bots, which some believe to be controlled by the Russian government and used for propaganda purposes.

In his novels Dick was interested in seeing how people react when their reality starts to break down. A world in which the real commingles with the fake, so that no one can tell where the one ends and the other begins, is ripe for paranoia. The most toxic consequence of social media manipulation, whether by the Russian government or others, may have nothing to do with its success as propaganda. Instead, it is that it sows an existential distrust. People simply do not know what or who to believe anymore. Rumors that are spread by Twitterbots merge into other rumors about the ubiquity of Twitterbots, and whether this or that trend is being driven by malign algorithms rather than real human beings.

Such widespread falsehood is especially explosive when combined with our fragmented politics. Liberals’ favorite term for the right-wing propaganda machine, “fake news,” has been turned back on them by conservatives, who treat conventional news as propaganda, and hence ignore it. Conversely, it may be easier for many people on the liberal left to blame Russian propaganda for the last presidential election than to accept that many voters have a very different understanding of America than they do.

Dick had other obsessions—most notably the politics of Richard Nixon and the Cold War. It is not hard to imagine him writing a novel combining an immature and predatory tycoon (half Arnie Kott, half Jory Miller) who becomes the president of the United States, secret Russian political manipulation, an invasion of empathy-free robotic intelligences masquerading as human beings, and a breakdown in our shared understanding of what is real and what is fake.

These different elements probably would not cohere particularly well, but as in Dick’s best novels, the whole might still work, somehow. Indeed, it is in the incongruities of Dick’s novels that salvation is to be found (even at his battiest, he retains a sense of humor). Obviously, it is less easy to see the joke when one is living through it. Dystopias may sometimes be grimly funny—but rarely from the inside.

Governments and corporations escalate Internet censorship and attacks on free speech

By Andre Damon

Source: WSWS.org

The year 2018 has opened with an international campaign to censor the Internet. Throughout the world, technology giants are responding to the political demands of governments by cracking down on freedom of speech, which is inscribed in the US Bill of Rights, the European Convention on Human Rights, and countless international agreements.

Bloomberg, the financial news service, published a blog post titled “Welcome to 2018, the Year of Censored Social Media,” which began with the observation, “This year, don’t count on the social networks to provide its core service: an uncensored platform for every imaginable view. The censorship has already begun, and it’ll only get heavier.”

Developments over the past week include:

  • On January 1, the German government began implementation of its “Network Enforcement Law,” which threatens social media companies with fines of up to €50 million if they do not immediately remove content deemed objectionable. Both German trade groups and the United Nations have warned that the law will incentivize technology companies to ban protected speech.
  • On January 3, French President Emmanuel Macron vowed to introduce a ban during election cycles on what he called “fake news” in a further crackdown on free speech on top of the draconian measures implemented under the state of emergency. The moves by France and Germany have led to renewed calls for a censorship law applying to the entire European Union.
  • On December 28, the New York Times reported that Facebook had deleted the account of Ramzan Kadyrov, the head of the Chechen Republic, nominally because he had been added to a US sanctions list. As the American Civil Liberties Union pointed out, this creates a precedent for giving the US government essentially free rein to block freedom of expression all over the world simply by putting individuals on an economic sanctions list.
  • This week, Iranian authorities blocked social media networks, including Instagram, which were being used to organize demonstrations against inequality and unemployment.
  • Facebook has continued its crackdown on Palestinian Facebook accounts, removing over 100 accounts at the request of Israeli officials.

These moves come in the wake of the decision by the Trump administration to abolish net neutrality, giving technology companies license to censor and block access to websites and services.

In August, the World Socialist Web Site first reported that Google was censoring left-wing, anti-war, and progressive websites. When it implemented changes to its search algorithms, Google claimed they were politically neutral, aimed only at elevating “more authoritative content” and demoting “blatantly misleading, low quality, offensive or downright false information.”

Now, no one can claim that the major technology giants are not carrying out a widespread and systematic campaign of online censorship, in close and active coordination with powerful states and intelligence agencies.

In the five months since the WSWS released its findings, Google’s censorship of left-wing, anti-war, and progressive web sites has only intensified.

Even though the World Socialist Web Site’s readership from direct entries and other websites has increased, Google’s effort to isolate the WSWS through the systematic removal of its articles from search results has continued to depress its search traffic. Search traffic to the WSWS, which fell more than any other left-wing site, has continued to trend down, with a total reduction of 75 percent, compared to a 67 percent decline in August.

Alternet.org’s search traffic is now down 71 percent, compared to 63 percent in August. Consortium News’s search traffic is down 72 percent, compared to 47 percent in August. Other sites, including Global Research and Truthdig, continue to see significantly depressed levels of search traffic.

In its statement to commemorate the beginning of the new year, the World Socialist Web Site noted, “The year 2018—the bicentenary of Marx’s birth—will be characterized, above all, by an immense intensification of… class conflict around the world.” This prediction has been confirmed in the form of mass demonstrations in Iran, the wildcat strike by auto workers in Romania and growing labour militancy throughout Europe and the Middle East.

The ruling elites all over the world are meeting this resurgence of class struggle with an attempt to stifle and suppress freedom of expression on the Internet, under the false pretence of fighting “fake news” and “foreign propaganda.”

The effort to muzzle social opposition by the working class must be resisted.

On January 16, 2018, the World Socialist Web Site will host a live video discussion on Internet censorship, featuring journalist and Truthdig contributor Chris Hedges and WSWS International Editorial Board Chairperson David North.

The discussion will explore the political context of the efforts to censor the Internet and abolish net neutrality, examine the pretexts used to justify the suppression of free speech (i.e., “fake news”), and discuss political strategies to defend democratic rights. Hedges and North will also field questions from online listeners.

We urge all of our readers to register to participate in this immensely important discussion, and to help publicize it to friends and co-workers.


The webinar will be streamed live by the WSWS on YouTube and Facebook on Tuesday, January 16, at 7:00 pm (EST). For more information, time zone conversions and to register, click here.



Zucktown, USA

Facebook, Amazon, and Google are reviving the ill-fated “company towns” of the Gilded Age

By Julianne Tveten

Source: The Baffler

EARLIER THIS YEAR IN SILICON VALLEY, a phalanx of six-figure-earning Facebook engineers confronted Mark Zuckerberg about subsidizing their extortionate rents. Meanwhile, the contract laborers who serve them bacon kimchi dogs and duck confit found themselves cordoned off from the affordable housing market—where salaries approaching $74,000 qualify—and began converting their garages into homes. Still, if these events point to a dire situation, they’re but the latest stirrings of the hulking leviathan that is the region’s housing crisis—an issue that has peppered the headlines of news outlets great and small for nearly a decade.

Thanks in part to this accretion of bad press, Zuckerberg and his fellow cyborgian billionaires have sprung into action as property developers. In July, Facebook announced plans to create “Willow Campus,” an aggressively rectilinear, Rem Koolhaas-designed rebrand of a Menlo Park office complex it purchased in 2015. The expansion of its headquarters will boast fifteen hundred units of housing, 15 percent of which it claims will be “offered at below-market rates.” If that isn’t sufficiently microcosmic, the company has pledged 125,000 square feet to commercial space, promising a grocery store, a pharmacy, and the cryptically worded “additional community-facing retail.”

Equally if not more responsible for crafting California’s bloodsucking geometric crapscape is Google, whose newfangled parent company Alphabet has vowed to provide temporary housing, in the form of modular dwellings, for three hundred of its employees in its home city of Mountain View. For years, Google has been seeking to wrest control of the city from its government; last year, it gained over 370,000 square feet of office space along with the right to develop 1.4 million square feet in the North Bayshore neighborhood after vying with LinkedIn to furnish the territory with a new police station, road improvements, and college scholarships. (The modular homes will be constructed on a former NASA air base, which the company signed an agreement to lease for sixty years.)


We’re witnessing, in these schemes, a revival of the company town. An oft-recurring feature of the Western capitalist imaginary, the company town’s American variety dates back to the nineteenth century; railroad industrialist George Pullman’s eponymous city in Illinois provides one of the more illustrative examples. Pullman characterized his town, completed in 1884, as a lucrative, pro-business utopia filled with satisfied participants, employee and investor alike. Its veneer was indeed shiny: the amenities it promised—yards, indoor plumbing, gas, trash removal—were rare for industrial workers of the time, and its ultra-formal gardens and shopping center, which equipped them with a barbershop, dentist’s offices, a bank, and a slew of overpriced retail, offered a vanguard capitalist’s dabbling in luxury.

There was a catch: paternalistic and omnipresent capitalism. Immaculately manicured trees were merely curtains obscuring a panopticon, one that kept workers behaviorally economized. (White workers, that is—the town expressly excluded black people.) “[Pullman] wanted to create a company town where everybody would be . . . content with their place in the capitalist system,” Jane Eva Baxter explained to Paleofuture. Workers were forced to rent—with no option to buy—the uniform row houses that corralled them, and from which they worried over persistent inspection and imminent eviction. Their employers likewise controlled which books filled their libraries and which performances took place in their theaters, and a ban precluded them from congregating at saloons or holding town meetings unless sanctioned by the Pullman Company, lest they entertain the notion of unionizing.

The forced exchange not just of labor, but of personal autonomy, for the tenuous ability to buy bread or light one’s stove is, in a word, inhumane, and in three, cause for revolt. Pullman workers had organized several strikes throughout the 1880s, but none were so monumental as the one in 1894. In response to the prior year’s economic depression, Pullman opted to slash workers’ wages; rents, however, remained steadfastly fixed, enriching the company’s reported worth of $62 million while leaving workers with as little as two cents (after paying for housing costs). In partnership with the American Railway Union, four thousand Pullman workers, galvanized and desperate, withheld their labor, and legions of workers throughout the nation would soon join them. Yet the strike collapsed when the Cleveland administration, in a violent display of authoritarianism, deployed federal troops and imprisoned labor leaders. Not long after, by Illinois Supreme Court order, the town was forced to sell everything not used expressly for “industry.”

Still, Pullman’s fiasco didn’t discourage other magnates. In 1900, chocolatier Milton Hershey began construction on a factory complex near a collection of dairy farms in rural Pennsylvania, where he declared there’d be “no poverty, no nuisances, no evil”—a Delphic precursor to Google’s now infamous and defunct slogan, “Don’t be evil.” To attract workers, Hershey reclaimed many of Pullman’s gilded comforts: indoor plumbing, pristine lawns, central heating, garbage pickup, and eventually, the theaters and sports venues any company town worth its salt would host.

What was designed as a wholesome advertisement for the company quickly morphed into a miserly surveillance state. Hershey, who served as the town’s mayor, constable, and fire chief, patrolled neighborhoods to survey the maintenance of houses and hired private detectives to monitor employees’ after-hours alcohol consumption. While the town managed to stage a sort of idyllic capitalist performance for onlookers, by the 1930s its employees resented their binding environs and the Depression-era layoffs they endured from a company earning ten times its annual payroll in after-tax profits. A crippled attempt to unionize with the Congress of Industrial Organizations (CIO) bred a 1937 sit-down strike; days later, farmers and company cheerleaders armed with rocks and pitchforks bloodied and ejected the dissidents, destabilizing for good another corporate-civic lark. Hershey’s vast estate, however, remains unscathed to this day.


If Facebook and Google have begun to revive the company town, Amazon has already given it a futuristic luster. California’s inchoate company towns pale in comparison to their northern counterpart, which occupies 19 percent of Seattle’s office space and a farcical 8.1 million square feet. (Its CEO and founder, Jeff Bezos, has vowed to acquire four million more over the next five years, a muscular move meant to complement his midlife-crisis physique.) Touting its sponsorship of local engineering and sustainability programs, Amazon crows about such “investments” as its dog park, playing fields, art installations, and Buckyball-reminiscent domical gardens. Of course, with Bezos’s colonizing aspirations comes yet another bellicose rental market—the very conditions Facebook and Google claim to be combatting. When considered alongside its recent purchase of Whole Foods, Amazon’s dream of tethering its employees to their jobs—by way of homogenized cubes for rent and lightly discounted quinoa chips—is fast becoming a reality.

Like George Pullman and Milton Hershey, the tech industry’s elites take all prisoners in their respective campaigns to expand, absorb, and dominate. The tech company town, that most contemporary of neofeudalist wangles, is the next step in West Coast corporate behemoths’ quest to lure employees into a twenty-four-hour working existence—the totalizing successor to bottomless Indian food spreads, on-site bike-repair shops, and Frank Gehrized habitats. Its premise deviates not at all from that of its antecedents: a genial, painstakingly aestheticized service to workers, where beneficent corporate hands take the reins of the public good for the well-being of the community. This time around, though, that community will be bridled with unionbusting and data-harvesting apparatuses sure to make even the most paranoid techno-tyrant salivate.

Certainly, the megalomaniacs who aim to populate municipal fixtures with registered-trademark logos will expect cities to genuflect at every turn. Bezos has exemplified this in Seattle, whose recent measure to “tax the rich” drove him to seek another location in which to build Amazon’s second headquarters. While residents of its hometown grapple with a commandeering leech that “suck[s] up our resources and refus[es] to participate in daily upkeep,” Amazon will soon attempt to prime another city to be sapped. Meanwhile, the smooth-faced metallic vampires of California have just begun to cosplay as frontiersmen, raring to follow Bezos’s lead. Drunk on glib TED Talk propagandizing, and accustomed to dismissing the civic inconveniences of corporate regulations and poor neighborhoods, our technosettlers feel little need to heed the lessons of the past when their chief interest is to monopolize the future. Taxing the techie billionaires is a start, but only when cities refuse to be their hosts will they cease to be their parasites.


Julianne Tveten writes about the technology industry’s relationship with socioeconomics and culture. Her work has appeared in Current Affairs, Hazlitt, In These Times, The Outline, and elsewhere.

Are Facebook and Google the New Colonial Powers?

By Charles Hugh Smith

Source: Of Two Minds

To qualify as colonial powers, Facebook and Google must effectively limit the choices and power of users, and punish or coerce those who question or resist their power.

I was struck by a phrase from a recent essay on advertising and social media, “You Are the Product”: as Taplin points out, that remark “unwittingly revealed a previously unspoken truth: Facebook and Google are the new colonial powers.”

As you’ve no doubt noticed, the dominance of Facebook and Google in online advertising is now “in the news” for a variety of reasons: the possibility that agents of other governments influenced U.S. elections with media buys on Facebook; anti-trust concerns; the potential for these advert-tech giants to effectively silence legitimate online voices under the guise of limiting “fake news”, and of course, the ongoing issues of click fraud and the underperformance of digital ads.

The phrase that captures this broad narrative is: When an online service is free, you’re not the customer. You’re the product.

In other words, if you’re not paying for the service or content, then your information (harvested by Google, Facebook, et al.), your time online (i.e. your attention, a.k.a. eyeballs) and the content you create and post for free (videos of your cute cat, etc.) are the products being sold to advertisers at a premium.

The characterization of the two dominant digital-advert giants as new colonial powers is interesting on a number of fronts. To get a handle on a few of the issues, I recommend reading these two essays:

A Serf on Google’s Farm

Lost Context: How Did We End Up Here?

And watching this video on the archiving of digital information on individuals–including metadata, that is, data about your behaviors, transactions, posts, etc., that has been scrubbed of your identity markers (name, account numbers, etc.):

Haunted by Data – Maciej Ceglowski (via GFB)

The key dynamics of colonialism for the residents are 1) a lack of choice and 2) a lack of power: the colonial power imposes a regime, either formally or informally, that limits the choices enjoyed by residents and limits their power to bypass or replace the colonial regime.

In the classic Plantation Economy of overt colonialism–a topic I’ve discussed numerous times here–residents are stripped of any options other than working on the plantation and buying their goods at the plantation store. This coercion need not be direct; the colonial regime can strip residents of choice and power by making it impossible to live without cash, for example, and then providing one source of paid work: the plantation.

Once cash is necessary to live, then credit is introduced–but only if you buy at the company store.

I’ve also written extensively about the Neo-Colonial Model in which corporations and banks bring the colonial model of exploitation to the home country, stripmining the domestic populace via dependence on credit.

Welcome to Neocolonialism, Exploited Peasants! (October 21, 2016)

Greece and the Endgame of the Neocolonial Model of Exploitation (February 19, 2015)

The E.U., Neofeudalism and the Neocolonial-Financialization Model (May 24, 2012)

This model is also used in the developing world, where it has replaced the old overt form of Colonialism with the new and improved credit-based version.

To qualify as colonial powers, Facebook and Google must effectively limit the choices and power of users, and punish or coerce those who question or resist their power. As the dominant corporations in search, social media and digital advertising, Facebook and Google limit the options of users simply by being essential due to their dominance.

As for punishing users–the potential to do so is what’s worrying observers. The cover for silencing or banning critics is opaque: non-compliance with guidelines. So who’s to say that users who criticized or questioned the policies of Facebook and Google aren’t silenced along with click-fraudsters, “fake news” purveyors, etc.? Who gets silenced is completely up to the companies, and there is no recourse against the corporation’s opaque judgment.

The Orwellian possibilities are real enough.

[The original essay includes two charts here: one showing the size of the digital-advert market, the other the dominance of Google and Facebook within it.]

Godzilla Amazon: The Amount of Power in Jeff Bezos’ Hands Should Frighten All Americans

Amazon very effectively uses technology and data to sidestep traditional restrictions on monopoly power.

By Matt Stoller

Source: AlterNet

To understand the depth and breadth of Jeff Bezos’ ambitions for the company he built, type www.relentless.com into your browser. The domain Bezos registered in 1994 will redirect to Amazon, the company aptly, and ambitiously, nicknamed The Everything Store. He tells his shareholders that the company will act like an aggressive startup — that at Amazon, it is always Day One.

Like Google and Facebook, Amazon uses technology and data to sidestep traditional restrictions on monopoly power. Our lives are increasingly organized by the platforms these companies run, platforms which now mediate the way we communicate and engage in commerce with each other. We are living in a world organized by tech monopolists, a change in power relationships that no one voted for but has been imposed upon us nonetheless.

Now, Bezos is attempting to add more power to his empire with the surprise announcement that the company will pay $13.7 billion for Whole Foods Market. Amazon will now have a store footprint in neighborhoods across America.

Our communities and the way we engage in commerce will change. Imagine walking into a Whole Foods store and seeing different prices depending on whether you are a member of Amazon Prime — or seeing different prices depending on any other way that you interact with Amazon.

This isn’t implausible. It is what the company does when it opens up stores. For instance, Amazon is creating a chain of physical book stores to take the place of the book stores the company destroyed. In these stores, there are no price tags at all: You scan the items with your phone and have a price delivered to you, personalized by Amazon. Why wouldn’t Amazon extend this to Whole Foods? “Our goal with Amazon Prime, make no mistake,” says Amazon CEO Jeff Bezos, “is to make sure that if you are not a Prime member, you are being irresponsible.”

This statement and the amount of power in Bezos’ hands should frighten all Americans. Bezos meant that Amazon will soon be so good for consumers that it would just be folly not to be a member. But what he unwittingly implied is that as a citizen, you will have no choice but to interact with his institution to buy and sell key goods that everyone needs — on his terms.

Jeff Bezos, in other words, has a vision. To be everywhere, to be the platform for everything for every consumer. So when Bezos calls you irresponsible for not tithing to Amazon, America has a big political problem.

Amazon’s takeover of Whole Foods means that it can target and eliminate regional competitors one by one as it did with its online competitors. When Diapers.com emerged as a competitor to Amazon, Amazon simply sold diapers below cost until Diapers.com capitulated and sold itself to Bezos. There’s no reason to assume Amazon wouldn’t bring the same predatory pricing strategy to bear in every city in America. Why wouldn’t it? Even though predatory pricing is illegal, the government hasn’t enforced those laws for decades. Whole Foods tends to source from local farms as part of a commitment to localism; these farms will now be negotiating with a much bigger entity that is committed to a ruthless model of efficiency.

There are so many ways that Amazon can use its power that it’s simply impossible to figure out what it will do. Amazon probably doesn’t even know yet; it will discover and test them, relentlessly. Maybe you will be first in line, or last in line, for the most popular toy during the Christmas period, or maybe the restaurant you own will get access to the freshest of a limited batch of strawberries depending on whether you are giving better deals to Prime members.

Or here’s a more creative possibility. Amazon is excluding Amazon Prime video from Apple TV so that Prime members will buy its streaming device instead of Apple’s. As the smartphone market commodifies and transforms, Bezos could simply use his combined physical and online footprint to keep you from even seeing prices at his stores unless you are using Amazon-approved electronic devices. If Amazon were just one of many stores that would be one thing. But Amazon is quickly becoming the dominant way to buy and sell.

And this, make no mistake, is what is happening. Upon the announcement of the acquisition, Target’s stock price dropped by 10 percent and Walmart’s by 5 percent. Amazon’s rose by more than the price it is paying for Whole Foods. Wall Street sees the writing on the wall. There is only one force that can stop Amazon from organizing and regulating basically all American retail commerce — our democratic institutions and our political system. We the people.

Bezos knows Amazon is a political enterprise at this point. The day before he announced his company’s attempt to buy this supermarket chain, he released a request on Twitter to have people offer ideas for where he can direct charity money. That is the kind of public relations undertaken by political leaders. And Amazon put out an ad for a Ph.D. economist-cum-lobbyist “to educate regulators and policy makers about the fundamentally procompetitive focus of Amazon’s businesses.” And he has put political fixers, like Ivanka Trump’s lawyer and ex-Clinton administration official Jamie Gorelick, on his board of directors. He also bought The Washington Post.

The public should speak out in opposition to this merger. More than that, the government should take this opportunity to reject the entire pro-finance pro-concentration philosophy that has taken hold in this country since the Reagan era. It is no accident that Whole Foods founder John Mackey was forced to surrender his life’s work because financiers looking for a quick buck bought up a large block of shares in his company and pressured him to sell the company to Amazon. The day before the announcement of the sale, he called these hedge funds “Ringwraiths,” after the evil characters in “Lord of the Rings.” Bezos might be the most powerful empire-builder in the land, but he had help.

This merger should frighten all of us. But it should also embolden anyone who believes that America should not be in thrall to monopolists like Bezos. For them, today, as Jeff Bezos might put it, is Day One.

Matt Stoller is a fellow at the Open Markets program, where he is researching the history of the relationship between concentrated financial power and the Democratic Party in the 20th century.

Algorithmic Control and the Revolution of Desire


By Alfie Brown

Source: ROAR Magazine

Last year, Stanford University published a study confirming what many of us may long have suspected: that your computer can predict what you want with more accuracy than your spouse or your friends. Your digital footprint betrays the truth not only about what you “like” but about what you really like — or so the argument goes. But what if our digital footprints, besides revealing our desires, are also responsible for the very construction of these desires? If that were the case, we would need to display a far deeper level of suspicion towards the complex patterns of corporate and state control found in contemporary cyberspace.
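
The modeling behind such studies is not exotic: at heart it is a regression from a binary matrix of “likes” to a questionnaire-derived trait score. The sketch below reproduces only the mechanics, on random synthetic data, so it demonstrates the method rather than the finding.

```python
# Sketch of the like-matrix-to-trait-score approach on synthetic data.
# The real studies fit models of this general kind to tens of thousands
# of volunteers who had also completed personality questionnaires.

import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n_users, n_pages = 1000, 200
likes = rng.integers(0, 2, size=(n_users, n_pages)).astype(float)
true_w = rng.normal(size=n_pages)            # hidden per-page trait loadings
trait = likes @ true_w + rng.normal(scale=0.5, size=n_users)

w, *_ = lstsq(likes, trait, rcond=None)      # fit loadings from the data
pred = likes @ w
print("correlation:", np.corrcoef(pred, trait)[0, 1])  # near 1.0 here
```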

There is little doubt that innovations in mobile technologies are part of emerging methodologies of social control. In particular, games and applications that make use of the Google Maps back-end system — including Uber, Grindr, Pokémon Go and hundreds of others — are particularly complicit in these new regulatory practices; that back-end system should be seen as one of the most important technological developments of the last decade or so. Putting the well-publicized data collection issue aside, such applications have two powerful ideological functions. First, they construct the new “geographical contours” of the city, regulating the paths we take and mapping the city in the service of both corporate interest and the prevention of uprisings. Second, and more unconsciously, they enact what Jean-Francois Lyotard once called the “desirevolution” — an evolution and revolution of desire, in which what we want is itself now determined by the digital paths we tread.

The Psycho-Geographical Contours of the City 

In 1981, the French theorist Guy Debord famously wrote of the “psycho-geographical contours” of the city that govern the routes we take, even when we may feel we are wandering freely around the physical space. At that time, it was Debord’s topic — architecture — that was the dominant force in re-organizing our routes through the city. Today, however, that role is increasingly taken up by the mobile phone. It is Uber that dictates the path of your taxi, Maps that dictates the route of your walks and drives, and Pokémon Go that (for a summer at least) determined where the next crowd would gather.

Other similar map-based application programming interfaces, or APIs, dictate our jogging routes (MapMyRun), our recreational hikes (LiveTrekker) and our tourist activities (TripAdvisor Guides). Pokémon Go attracted some publicity because it accidentally and humorously gathered crowds in weird places, but this should only alert us to its potential ability to gather crowds in the right places (to serve corporate interest) or to prevent the gathering of crowds in the wrong ones (to prevent organized uprisings, for instance). Such applications should be seen as a testing phase in the project of Google and its affiliated corporations as they work out how best to regulate the movements of large populations via their phones. Pokémon Go players were the early cyborgs, complete with hiccups and malfunctions — a beta version of Google’s future human. These future humans will go where instructed.

On a smaller scale, this point can be seen in concrete terms with a case study of London. A recent Transport for London talk discussed the possibility of “gamifying” commuting. In order to facilitate this possibility, Transport for London have made the internet API and data streams used to monitor all London Transport vehicles open source and open access, in the hope that developers will build London-focused apps based around the public transport system, thus maximizing profit. One idea is that if a particular tube station is at risk of becoming clogged up due to other delays, TfL could give “in-game rewards” for people willing to use alternative routes and thus smooth out the jam.
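
Transport for London’s open data makes the mechanics easy to picture. The sketch below is hypothetical: the reward logic is invented, and the request is modeled on TfL’s published line-status feed, not on any actual “gamified commuting” scheme.

```python
# Hypothetical sketch of "in-game rewards" for rerouting, built on TfL's
# open line-status data. The endpoint shape is an assumption based on the
# public TfL API; the reward rule is invented for illustration.

import requests

def reroute_reward(line_id: str) -> int:
    resp = requests.get(f"https://api.tfl.gov.uk/Line/{line_id}/Status")
    resp.raise_for_status()
    statuses = resp.json()[0]["lineStatuses"]
    severity = min(s["statusSeverity"] for s in statuses)  # 10 = good service
    # Award points only when the line is degraded, nudging commuters
    # toward alternative routes and smoothing out the jam.
    return 0 if severity >= 10 else (10 - severity) * 5

print(reroute_reward("victoria"))
```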

While traffic jam prevention may not seem like evidence that we have arrived in the dystopia of total corporate and state control, it does actually reveal the dangerous potentiality in such technologies. It shows that the UK is not as far away from the “social credit” game system recently implemented in Beijing to rate each citizen’s trustworthiness and give them rewards for their dedication to the Chinese state. While the UK media reacted with shock to these innovations in Chinese app development, a closer look at the electronic structures of mapping and controlling our own movements shows that a similar framework is already in its development phase in London too. In the “smart city” of the future, it won’t just be traffic jams that are smoothed out. Any inefficient misuse or any occupation of public space deemed dangerous by the authorities can be specifically targeted.

The Corporate Surveillance State

When it comes to these developments in technology, state and corporate forces work more closely with each other than ever before — and much more closely than they are willing to admit. Srećko Horvat has pointed out the short distance between the creators of Pokémon Go and Hillary Clinton, despite her odd and unsolicited recent public claim that she didn’t know who made the game. Likewise, Julian Assange’s strangely under-discussed 2014 book When Google Met WikiLeaks showed the shocking proximity of Google chief Eric Schmidt and the Washington state apparatus. In terms of surveillance and the use of big data, it has become impossible to sustain the distinction between state control and the production of wealth, since the two have become so irrevocably intertwined. As such, old arguments that “it’s all just about money” need to be treated with greater suspicion, since major firms today are so closely tied to the state. Various aspects of state organization should likewise be considered equally suspect because of their corporate underpinnings.

Of course, when it comes to the mapping applications that promise to help us access the best quality objects of our desire with the greatest efficiency and the least cost, these tempting forces of joint corporate and state control are entered into willingly by participants. As such, they require something else in order to function in the all-consuming way that they do. Far from simply channeling and transforming our movements, they also need to channel and even transform our desires.

We are now firmly within the world of the electronic object, where the mediation of everything from lovers and friends to meals and activities via our mobile phones and computers makes it virtually impossible to separate physical from electronic objectivity. Whilst the electronic Pokémon or the “in-game rewards” offered by many applications may not yet have the physicality of a lover who can be accessed via Tinder, or a burger that can be located via JustEat, the burger and the lover certainly have the electronic objectivity of the Pokémon. We can therefore see a transformation in the objects of desire taking place by and through our devices, so that we are confronted not only with a change in how we get what we want, but with a change in what we want in the first place.

Italo Calvino once wrote of the “amorous relationship” that “erases the lines between our bodies and sopa de frijoles, huachinango a la vera cruzana, and enchiladas.” While in such a moment food and lover become one in a kind of orgy of physical consumption, in the same novel Calvino warned of a time “when the olfactory alphabet, which made them so many words in a precious lexicon, is forgotten,” and in which “perfumes will be left speechless, inarticulate, illegible.”

It is this world that we find ourselves desiring in, where an orgy of electronic objects with no olfactory physicality blurs the distinction between lovers, meals and “in-game” rewards. The purpose of this shift, of course, is to increase the power of technological corporations by giving them a new sort of control over the way we relate to our objects of desire. If the boundaries between the way we search, desire and acquire our burgers, lovers and Pikachus are dissolving, it is not so much the old point that everything has become a commodity, but a new point that this kind of substitutional electronic objectivity endows corporate and state technologists with unprecedented power to distribute and redistribute the objects of desire around the “smart city.”

Data Centralization in China and the West

There is, moreover, a significant centralization of power underpinning these developments. Like the social credit idea, the Chinese phenomenon of WeChat — developed in 2011 by Tencent, one of the largest internet and mobile media companies in the world — has received concerned media coverage in the West. WeChat is the first truly successful “SuperApp,” the basic premise of which is that all applications like WhatsApp, Facebook, Instagram, OpenRice, Tinder, TripAdvisor and many more, are rolled into one cohesive application. All for our convenience, of course.

As a result, however, there is now a new level of cohesion between the data-collection and movement monitoring going on in the mobile phone as a whole, where all data is now directly collected in a single place. More than half of the 1.1 billion WeChat users access the app over 10 times per day, and many users simply leave it on continuously, using it to map, shop, date and play. This means that the app sets a new precedent for continually monitoring the movements of a whole nation of citizens. WeChat’s incredibly strange “heat map” feature actually lets users — and authorities — see where crowds are forming. The claim is that this has nothing to do with crowd control: the objective is simply to help us access the least crowded shopping malls, doing nothing more than helping us get what we want.
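
Technically, such a feature is unremarkable: a heat map is little more than binned counting of location pings. A minimal sketch, with arbitrary cell size and sample coordinates:

```python
# Minimal heat-map aggregation: bucket location pings into a coarse grid
# and count. Cell size and coordinates are arbitrary illustrative choices.

from collections import Counter

def heat_map(pings, cell=0.001):            # roughly 100-meter cells
    grid = Counter()
    for lat, lon in pings:
        grid[(round(lat / cell), round(lon / cell))] += 1
    return grid

pings = [(22.2819, 114.1582)] * 500 + [(22.3193, 114.1694)] * 40
for cell_id, count in heat_map(pings).most_common(2):
    print(cell_id, count)                   # the 500-ping cell is the "crowd"
```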

WeChat is already the most popular social media application in China, but it will soon have huge significance worldwide, with an international version now available and many replica “SuperApps” in production. What the Western media finds to be so concerning about WeChat is once again something that already exists here in the West, at least in beta form, without us knowing it. WeChat actually offers us a glimpse into an Orwellian future in which companies and governments can track every movement we make. While in China the blocking of Google means that WeChat uses Baidu Maps as its API, the international version of WeChat simply taps into Google Maps, showing just how deeply integrated these corporate technologies already are.

What emerges from Western media coverage of these developments is the continued insistence on an apparent division between the public and the private sphere in the United States and Europe. When it comes to digital surveillance and the monitoring of movement, the situation is almost certainly better in the West than it is in China at this moment. Yet from an analysis of recent developments in China we learn not only that we need to be attentive to similar dangers here in the West, but also that there are powerful ideological mechanisms at play to obscure these developments by presenting China and the US as fundamentally opposed to one another. Whilst in China the links between the new SuperApps and the state are commonly accepted, in the US the illusion of privacy remains paramount. Although data is often shared between different corporations and between the public and the private sectors, this fact is generally obscured. The continued expressions of shock at the more openly centralized state control visible in China serve only to further consolidate the impression that these things are not happening in the US and Europe.

Furthermore, WeChat reveals more than the dangers of mass data collection and new levels of technological surveillance. It also embodies the power of the phone over the objects of desire. Since one single app can successfully market us food, lovers, holidays, events, blogs and even charities, the connections between such “objects” become more important than the differences. While the structural similarities between Grindr, Pokémon Go and OpenRice become apparent via analysis of both their surfaces and back systems, WeChat makes the connections plain to see. The various forms and objects of each individual’s desire no longer represent discrete and separable elements of a subject’s life. Instead we enter a fully cohesive libidinal economy in which we are increasingly regulated and mapped via the organization of what and how we desire.

The Desirevolution

So what do we do when faced with this revolution — a technological revolution that is not overthrowing any existing power structures but rather transforming the world in the service of private corporations and the state? Often, the response of those concerned by such developments is to express hostility or distrust towards technology itself. Yet to break this corporate organization of desire, we need not nostalgically yearn for a desire that is free of politics and technology, for no such desire is possible. On the contrary, what we need is to recognize that desire is necessarily and always controlled by both politics and technology.

This awareness would be the first step towards ensuring that the centralized corporate and state organization of desire malfunctions — and, ultimately, it would be the first step towards its potential reprogramming. The corporate desirevolution depends on our blindness to the politics of its technologies, asking us to experience our desires as spontaneous yearning and our mobile phone and its powerful apps as just tools for our convenience, helping us get what we want in the easiest way possible. We need to recognize that this is far from the case. The principal concern of those who own the apps — perhaps even more powerful than data collection — is to transform desire itself. At the very least, we can make visible the complicity of such technologies in producing the perfect conformist modern citizen.

The new mind control


The internet has spawned subtle forms of influence that can flip elections and manipulate everything we say, think and do

By Robert Epstein

Source: Aeon Magazine

Over the past century, more than a few great writers have expressed concern about humanity’s future. In The Iron Heel (1908), the American writer Jack London pictured a world in which a handful of wealthy corporate titans – the ‘oligarchs’ – kept the masses at bay with a brutal combination of rewards and punishments. Much of humanity lived in virtual slavery, while the fortunate ones were bought off with decent wages that allowed them to live comfortably – but without any real control over their lives.

In We (1924), the brilliant Russian writer Yevgeny Zamyatin, anticipating the excesses of the emerging Soviet Union, envisioned a world in which people were kept in check through pervasive monitoring. The walls of their homes were made of clear glass, so everything they did could be observed. They were allowed to lower their shades an hour a day to have sex, but both the rendezvous time and the lover had to be registered first with the state.

In Brave New World (1932), the British author Aldous Huxley pictured a near-perfect society in which unhappiness and aggression had been engineered out of humanity through a combination of genetic engineering and psychological conditioning. And in the much darker novel 1984 (1949), Huxley’s compatriot George Orwell described a society in which thought itself was controlled; in Orwell’s world, children were taught to use a simplified form of English called Newspeak in order to assure that they could never express ideas that were dangerous to society.

These are all fictional tales, to be sure, and in each the leaders who held the power used conspicuous forms of control that at least a few people actively resisted and occasionally overcame. But in the non-fiction bestseller The Hidden Persuaders (1957) – recently released in a 50th-anniversary edition – the American journalist Vance Packard described a ‘strange and rather exotic’ type of influence that was rapidly emerging in the United States and that was, in a way, more threatening than the fictional types of control pictured in the novels. According to Packard, US corporate executives and politicians were beginning to use subtle and, in many cases, completely undetectable methods to change people’s thinking, emotions and behaviour based on insights from psychiatry and the social sciences.

Most of us have heard of at least one of these methods: subliminal stimulation, or what Packard called ‘subthreshold effects’ – the presentation of short messages that tell us what to do but that are flashed so briefly we aren’t aware we have seen them. In 1958, propelled by public concern about a theatre in New Jersey that had supposedly hidden messages in a movie to increase ice cream sales, the National Association of Broadcasters – the association that set standards for US television – amended its code to prohibit the use of subliminal messages in broadcasting. In 1974, the Federal Communications Commission opined that the use of such messages was ‘contrary to the public interest’. Legislation to prohibit subliminal messaging was also introduced in the US Congress but never enacted. Both the UK and Australia have strict laws prohibiting it.

Subliminal stimulation is probably still in wide use in the US – it’s hard to detect, after all, and no one is keeping track of it – but it’s probably not worth worrying about. Research suggests that it has only a small impact, and that it mainly influences people who are already motivated to follow its dictates; subliminal directives to drink affect people only if they’re already thirsty.

Packard had uncovered a much bigger problem, however – namely that powerful corporations were constantly looking for, and in many cases already applying, a wide variety of techniques for controlling people without their knowledge. He described a kind of cabal in which marketers worked closely with social scientists to determine, among other things, how to get people to buy things they didn’t need and how to condition young children to be good consumers – inclinations that were explicitly nurtured and trained in Huxley’s Brave New World. Guided by social science, marketers were quickly learning how to play upon people’s insecurities, frailties, unconscious fears, aggressive feelings and sexual desires to alter their thinking, emotions and behaviour without any awareness that they were being manipulated.

By the early 1950s, Packard said, politicians had got the message and were beginning to merchandise themselves using the same subtle forces being used to sell soap. Packard prefaced his chapter on politics with an unsettling quote from the British economist Kenneth Boulding: ‘A world of unseen dictatorship is conceivable, still using the forms of democratic government.’ Could this really happen, and, if so, how would it work?

The forces that Packard described have become more pervasive over the decades. The soothing music we all hear overhead in supermarkets causes us to walk more slowly and buy more food, whether we need it or not. Most of the vacuous thoughts and intense feelings our teenagers experience from morning till night are carefully orchestrated by highly skilled marketing professionals working in our fashion and entertainment industries. Politicians work with a wide range of consultants who test every aspect of what the politicians do in order to sway voters: clothing, intonations, facial expressions, makeup, hairstyles and speeches are all optimised, just like the packaging of a breakfast cereal.

Fortunately, all of these sources of influence operate competitively. Some of the persuaders want us to buy or believe one thing, others to buy or believe something else. It is the competitive nature of our society that keeps us, on balance, relatively free.

But what would happen if new sources of control began to emerge that had little or no competition? And what if new means of control were developed that were far more powerful – and far more invisible – than any that have existed in the past? And what if new types of control allowed a handful of people to exert enormous influence not just over the citizens of the US but over most of the people on Earth?

It might surprise you to hear this, but these things have already happened.

To understand how the new forms of mind control work, we need to start by looking at the search engine – one in particular: the biggest and best of them all, namely Google. The Google search engine is so good and so popular that the company’s name is now a commonly used verb in languages around the world. To ‘Google’ something is to look it up on the Google search engine, and that, in fact, is how most computer users worldwide get most of their information about nearly everything these days. They Google it. Google has become the main gateway to virtually all knowledge, mainly because the search engine is so good at giving us exactly the information we are looking for, almost instantly and almost always in the first position of the list it shows us after we launch our search – the list of ‘search results’.

That ordered list is so good, in fact, that about 50 per cent of our clicks go to the top two items, and more than 90 per cent of our clicks go to the 10 items listed on the first page of results; few people look at other results pages, even though they often number in the thousands, which means they probably contain lots of good information. Google decides which of the billions of web pages it is going to include in our search results, and it also decides how to rank them. How it decides these things is a deep, dark secret – one of the best-kept secrets in the world, like the formula for Coca-Cola.
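To see concretely what that concentration of clicks implies, consider a small back-of-envelope sketch. The click-through rates below are invented – chosen only to roughly match the figures just cited – but they show how a ranking that merely orders pages ends up steering nearly all attention to whatever sits at the top:

```python
# Invented click-through rates for ranks 1-10 on the first results page,
# chosen to roughly match the figures cited above.
ctr_by_rank = [0.30, 0.20, 0.10, 0.08, 0.07, 0.05, 0.04, 0.03, 0.02, 0.01]

top_two_share = sum(ctr_by_rank[:2])   # ~50 per cent of all clicks
first_page_share = sum(ctr_by_rank)    # ~90 per cent of all clicks
print(f"Top two results: {top_two_share:.0%} of clicks")
print(f"First page:      {first_page_share:.0%} of clicks")

# If pages favouring candidate A happen to occupy the top ranks, A's share
# of attention dwarfs A's share of the list itself.
favours_a = [True, True, True, False, True, False, False, False, False, False]
a_attention = sum(ctr for ctr, fav in zip(ctr_by_rank, favours_a) if fav)
print(f"Attention reaching pro-A pages: {a_attention:.0%}")  # 67% from 4 of 10 slots
```

The exact numbers don’t matter; any plausibly steep decay of clicks by rank produces the same lopsided result.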

Because people are far more likely to read and click on higher-ranked items, companies now spend billions of dollars every year trying to trick Google’s search algorithm – the computer program that does the selecting and ranking – into boosting them another notch or two. Moving up a notch can mean the difference between success and failure for a business, and moving into the top slots can be the key to fat profits.

Late in 2012, I began to wonder whether highly ranked search results could be impacting more than consumer choices. Perhaps, I speculated, a top search result could have a small impact on people’s opinions about things. Early in 2013, with my associate Ronald E Robertson of the American Institute for Behavioral Research and Technology in Vista, California, I put this idea to a test by conducting an experiment in which 102 people from the San Diego area were randomly assigned to one of three groups. In one group, people saw search results that favoured one political candidate – that is, results that linked to web pages that made this candidate look better than his or her opponent. In a second group, people saw search rankings that favoured the opposing candidate, and in the third group – the control group – people saw a mix of rankings that favoured neither candidate. The same search results and web pages were used in each group; the only thing that differed for the three groups was the ordering of the search results.

To make our experiment realistic, we used real search results that linked to real web pages. We also used a real election – the 2010 election for the prime minister of Australia. We chose a foreign election to ensure that our participants were ‘undecided’; their unfamiliarity with the candidates guaranteed as much. Through advertisements, we also recruited an ethnically diverse group of registered voters over a wide age range in order to match key demographic characteristics of the US voting population.

All participants were first given brief descriptions of the candidates and then asked to rate them in various ways, as well as to indicate which candidate they would vote for; as you might expect, participants initially favoured neither candidate on any of the five measures we used, and the vote was evenly split in all three groups. Then the participants were given up to 15 minutes in which to conduct an online search using ‘Kadoodle’, our mock search engine, which gave them access to five pages of search results that linked to web pages. People could move freely between search results and web pages, just as we do when using Google. When participants completed their search, we asked them to rate the candidates again, and we also asked them again who they would vote for.

We predicted that the opinions and voting preferences of 2 or 3 per cent of the people in the two bias groups – the groups in which people were seeing rankings favouring one candidate – would shift toward that candidate. What we actually found was astonishing. The proportion of people favouring the search engine’s top-ranked candidate increased by 48.4 per cent, and all five of our measures shifted toward that candidate. What’s more, 75 per cent of the people in the bias groups seemed to have been completely unaware that they were viewing biased search rankings. In the control group, opinions did not shift significantly.
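For readers who want the arithmetic, here is a minimal sketch of what a figure like that means, assuming – as the wording suggests – that the shift is the relative increase in the number of participants favouring the top-ranked candidate. The counts are invented for illustration; they are not the study’s data:

```python
def relative_shift(pre_favour: int, post_favour: int) -> float:
    """Percentage increase in the number favouring the target candidate."""
    return (post_favour - pre_favour) / pre_favour * 100

# Invented counts: a bias group of 34 people, evenly split before searching.
pre = 17    # favoured the target candidate before using the search engine
post = 25   # favoured the target candidate after searching

print(f"Shift toward the top-ranked candidate: {relative_shift(pre, post):.1f}%")
# -> Shift toward the top-ranked candidate: 47.1%
```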

This seemed to be a major discovery. The shift we had produced, which we called the Search Engine Manipulation Effect (or SEME, pronounced ‘seem’), appeared to be one of the largest behavioural effects ever discovered. We did not immediately uncork the Champagne bottle, however. For one thing, we had tested only a small number of people, and they were all from the San Diego area.

Over the next year or so, we replicated our findings three more times, and the third time was with a sample of more than 2,000 people from all 50 US states. In that experiment, the shift in voting preferences was 37.1 per cent and even higher in some demographic groups – as high as 80 per cent, in fact.

We also learned in this series of experiments that by reducing the bias just slightly on the first page of search results – specifically, by including one search item that favoured the other candidate in the third or fourth position of the results – we could mask our manipulation so that few or even no people were aware that they were seeing biased rankings. We could still produce dramatic shifts in voting preferences, but we could do so invisibly.
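In outline – and purely as an illustration, not our experimental code – that masking procedure amounts to something like this:

```python
import random

def masked_biased_page(pro_items, con_items, page_size=10):
    """Build a first page that favours one candidate, hiding the bias by
    slotting a single opposing item into position three or four."""
    page = list(pro_items[:page_size - 1])   # nine items favouring our candidate
    mask_slot = random.choice([2, 3])        # zero-based index: position 3 or 4
    page.insert(mask_slot, con_items[0])     # one token item for the opponent
    return page

# Hypothetical placeholder results, purely for illustration.
pro = [f"pro_page_{i}" for i in range(1, 10)]
con = ["con_page_1"]
print(masked_biased_page(pro, con))
```

One dissenting item near the top is enough to defeat a casual scan for bias, while leaving nine of the ten most-clicked positions pointing one way.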

Still no Champagne, though. Our results were strong and consistent, but our experiments all involved a foreign election – that 2010 election in Australia. Could voting preferences be shifted with real voters in the middle of a real campaign? We were sceptical. In real elections, people are bombarded with multiple sources of information, and they also know a lot about the candidates. It seemed unlikely that a single experience on a search engine would have much impact on their voting preferences.

To find out, in early 2014, we went to India just before voting began in the largest democratic election in the world – the Lok Sabha election, which would determine the country’s next prime minister. The three main candidates were Rahul Gandhi, Arvind Kejriwal, and Narendra Modi. Making use of online subject pools and both online and print advertisements, we recruited 2,150 people from 27 of India’s 35 states and territories to participate in our experiment. To take part, they had to be registered voters who had not yet voted and who were still undecided about how they would vote.

Participants were randomly assigned to three search-engine groups, favouring, respectively, Gandhi, Kejriwal or Modi. As one might expect, familiarity with the candidates was high – between 7.7 and 8.5 on a scale of 10. We predicted that our manipulation would produce a very small effect, if any, but that’s not what we found. On average, we were able to shift the proportion of people favouring any given candidate by more than 20 per cent overall and more than 60 per cent in some demographic groups. Even more disturbing, 99.5 per cent of our participants showed no awareness that they were viewing biased search rankings – in other words, that they were being manipulated.

SEME’s near-invisibility is curious indeed. It means that when people – including you and me – are looking at biased search rankings, the rankings look just fine. So if right now you Google ‘US presidential candidates’, the search results you see will probably look fairly random, even if they happen to favour one candidate. Even I have trouble detecting bias in search rankings that I know to be biased (because they were prepared by my staff). Yet our randomised, controlled experiments tell us over and over again that when higher-ranked items connect with web pages that favour one candidate, this has a dramatic impact on the opinions of undecided voters, in large part for the simple reason that people tend to click only on higher-ranked items. This is truly scary: like subliminal stimuli, SEME is a force you can’t see; but unlike subliminal stimuli, it has an enormous impact – like Casper the ghost pushing you down a flight of stairs.

We published a detailed report about our first five experiments on SEME in the prestigious Proceedings of the National Academy of Sciences (PNAS) in August 2015. We had indeed found something important, especially given Google’s dominance over search. Google has a near-monopoly on internet searches in the US, with 83 per cent of Americans specifying Google as the search engine they use most often, according to the Pew Research Center. So if Google favours one candidate in an election, its impact on undecided voters could easily decide the election’s outcome.

Keep in mind that we had had only one shot at our participants. What would be the impact of favouring one candidate in searches people are conducting over a period of weeks or months before an election? It would almost certainly be much larger than what we were seeing in our experiments.

Other types of influence during an election campaign are balanced by competing sources of influence – a wide variety of newspapers, radio shows and television networks, for example – but Google, for all intents and purposes, has no competition, and people trust its search results implicitly, assuming that the company’s mysterious search algorithm is entirely objective and unbiased. This high level of trust, combined with the lack of competition, puts Google in a unique position to impact elections. Even more disturbing, the search-ranking business is entirely unregulated, so Google could favour any candidate it likes without violating any laws. Some courts have even ruled that Google’s right to rank-order search results as it pleases is protected as a form of free speech.

Does the company ever favour particular candidates? In the 2012 US presidential election, Google and its top executives donated more than $800,000 to President Barack Obama and just $37,000 to his opponent, Mitt Romney. And in 2015, a team of researchers from the University of Maryland and elsewhere showed that Google’s search results routinely favoured Democratic candidates. Are Google’s search rankings really biased? An internal report issued by the US Federal Trade Commission in 2012 concluded that Google’s search rankings routinely put Google’s financial interests ahead of those of its competitors, and anti-trust actions currently under way against Google in both the European Union and India are based on similar findings.

In most countries, 90 per cent of online search is conducted on Google, which gives the company even more power to flip elections than it has in the US and, with internet penetration increasing rapidly worldwide, this power is growing. In our PNAS article, Robertson and I calculated that Google now has the power to flip upwards of 25 per cent of the national elections in the world with no one knowing this is occurring. In fact, we estimate that, with or without deliberate planning on the part of company executives, Google’s search rankings have been impacting elections for years, with growing impact each year. And because search rankings are ephemeral, they leave no paper trail, which gives the company complete deniability.

Power on this scale and with this level of invisibility is unprecedented in human history. But it turns out that our discovery about SEME was just the tip of a very large iceberg.

Recent reports suggest that the Democratic presidential candidate Hillary Clinton is making heavy use of social media to try to generate support – Twitter, Instagram, Pinterest, Snapchat and Facebook, for starters. At this writing, she has 5.4 million followers on Twitter, and her staff is tweeting several times an hour during waking hours. The Republican frontrunner, Donald Trump, has 5.9 million Twitter followers and is tweeting just as frequently.

Is social media as big a threat to democracy as search rankings appear to be? Not necessarily. When new technologies are used competitively, they present no threat. Even though the platforms are new, they are generally being used the same way as billboards and television commercials have been used for decades: you put a billboard on one side of the street; I put one on the other. I might have the money to erect more billboards than you, but the process is still competitive.

What happens, though, if such technologies are misused by the companies that own them? A study by Robert M Bond, now a political science professor at Ohio State University, and others published in Nature in 2012 described an ethically questionable experiment in which, on election day in 2010, Facebook sent ‘go out and vote’ reminders to more than 60 million of its users. The reminders caused about 340,000 people to vote who otherwise would not have. Writing in the New Republic in 2014, Jonathan Zittrain, professor of international law at Harvard University, pointed out that, given the massive amount of information it has collected about its users, Facebook could easily send such messages only to people who support one particular party or candidate, and that doing so could easily flip a close election – with no one knowing that this has occurred. And because advertisements, like search rankings, are ephemeral, manipulating an election in this way would leave no paper trail.
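The arithmetic behind Zittrain’s warning is easy to sketch. The per-message effect below is the one implied by the Bond study; the margin and the number of targeted reminders are invented for illustration:

```python
# Turnout lift per reminder, implied by the Bond study: ~0.57 per cent.
turnout_lift = 340_000 / 60_000_000

margin = 50_000            # hypothetical deficit for candidate A in a close race
reminders = 20_000_000     # hypothetical reminders sent only to likely A supporters

extra_votes = reminders * turnout_lift
print(f"Extra votes for A: {extra_votes:,.0f}")   # ~113,333
print("Race flipped." if extra_votes > margin else "Not enough.")
```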

Are there laws prohibiting Facebook from sending out ads selectively to certain users? Absolutely not; in fact, targeted advertising is how Facebook makes its money. Is Facebook currently manipulating elections in this way? No one knows, but in my view it would be foolish and possibly even improper for Facebook not to do so. Some candidates are better for a company than others, and Facebook’s executives have a fiduciary responsibility to the company’s stockholders to promote the company’s interests.

The Bond study was largely ignored, but another Facebook experiment, published in 2014 in PNAS, prompted protests around the world. In this study, for a period of a week, 689,000 Facebook users were sent news feeds that contained either an excess of positive terms, an excess of negative terms, or neither. Those in the first group subsequently used slightly more positive terms in their communications, while those in the second group used slightly more negative terms in their communications. This was said to show that people’s ‘emotional states’ could be deliberately manipulated on a massive scale by a social media company, an idea that many people found disturbing. People were also upset that a large-scale experiment on emotion had been conducted without the explicit consent of any of the participants.

Facebook’s consumer profiles are undoubtedly massive, but they pale in comparison with those maintained by Google, which is collecting information about people 24/7, using more than 60 different observation platforms – the search engine, of course, but also Google Wallet, Google Maps, Google AdWords, Google Analytics, Chrome, Google Docs, Android, YouTube, and on and on. Gmail users are generally oblivious to the fact that Google stores and analyses every email they write, even the drafts they never send – as well as all the incoming email they receive from both Gmail and non-Gmail users.

According to Google’s privacy policy – to which one assents whenever one uses a Google product, even when one has not been informed that one is using a Google product – Google can share the information it collects about you with almost anyone, including government agencies. But never with you. Google’s privacy is sacrosanct; yours is nonexistent.

Could Google and ‘those we work with’ (language from the privacy policy) use the information they are amassing about you for nefarious purposes – to manipulate or coerce, for example? Could inaccurate information in people’s profiles (which people have no way to correct) limit their opportunities or ruin their reputations?

Certainly, if Google set out to fix an election, it could first dip into its massive database of personal information to identify just those voters who are undecided. Then it could, day after day, send customised rankings favouring one candidate to just those people. One advantage of this approach is that it would make Google’s manipulation extremely difficult for investigators to detect.
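Purely as an illustration – every field, threshold and function here is invented – the targeting logic itself would be trivial to write:

```python
# Hypothetical sketch of selective targeting; nothing here is real code
# from any company.

def is_undecided(profile: dict) -> bool:
    """Treat users whose inferred political lean sits near zero as undecided."""
    return abs(profile.get("candidate_lean", 0.0)) < 0.1

def results_for(profile: dict, neutral_page: list, biased_page: list) -> list:
    # Decided voters get ordinary rankings; only the undecided see the
    # biased page - which is exactly what would make the scheme hard to detect.
    return biased_page if is_undecided(profile) else neutral_page

voters = [{"id": 1, "candidate_lean": 0.03}, {"id": 2, "candidate_lean": 0.7}]
for v in voters:
    print(v["id"], results_for(v, ["neutral_results"], ["biased_results"]))
```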

Extreme forms of monitoring, whether by the KGB in the Soviet Union, the Stasi in East Germany, or Big Brother in 1984, are essential elements of all tyrannies, and technology is making both monitoring and the consolidation of surveillance data easier than ever. By 2020, China will have put in place the most ambitious government monitoring system ever created – a single database called the Social Credit System, in which multiple ratings and records for all of its 1.3 billion citizens are recorded for easy access by officials and bureaucrats. At a glance, they will know whether someone has plagiarised schoolwork, was tardy in paying bills, urinated in public, or blogged inappropriately online.

As Edward Snowden’s revelations made clear, we are rapidly moving toward a world in which both governments and corporations – sometimes working together – are collecting massive amounts of data about every one of us every day, with few or no laws in place that restrict how those data can be used. When you combine the data collection with the desire to control or manipulate, the possibilities are endless, but perhaps the most frightening possibility is the one expressed in Boulding’s assertion that an ‘unseen dictatorship’ was possible ‘using the forms of democratic government’.

Since Robertson and I submitted our initial report on SEME to PNAS early in 2015, we have completed a sophisticated series of experiments that have greatly enhanced our understanding of this phenomenon, and other experiments will be completed in the coming months. We have a much better sense now of why SEME is so powerful and how, to some extent, it can be suppressed.

We have also learned something very disturbing – that search engines are influencing far more than what people buy and whom they vote for. We now have evidence suggesting that on virtually all issues where people are initially undecided, search rankings are impacting almost every decision that people make. They are having an impact on the opinions, beliefs, attitudes and behaviours of internet users worldwide – entirely without people’s knowledge that this is occurring. This is happening with or without deliberate intervention by company officials; even so-called ‘organic’ search processes regularly generate search results that favour one point of view, and that in turn has the potential to tip the opinions of millions of people who are undecided on an issue. In one of our recent experiments, biased search results shifted people’s opinions about the value of fracking by 33.9 per cent.

Perhaps even more disturbing is that the handful of people who do show awareness that they are viewing biased search rankings shift even further in the predicted direction; simply knowing that a list is biased doesn’t necessarily protect you from SEME’s power.

Remember what the search algorithm is doing: in response to your query, it is selecting a handful of webpages from among the billions that are available, and it is ordering those webpages using secret criteria. Seconds later, the decision you make or the opinion you form – about the best toothpaste to use, whether fracking is safe, where you should go on your next vacation, who would make the best president, or whether global warming is real – is determined by that short list you are shown, even though you have no idea how the list was generated.

Meanwhile, behind the scenes, a consolidation of search engines has been quietly taking place, so that more people are using the dominant search engine even when they think they are not. Because Google is the best search engine, and because crawling the rapidly expanding internet has become prohibitively expensive, more and more search engines are drawing their information from the leader rather than generating it themselves. The most recent deal, revealed in a Securities and Exchange Commission filing in October 2015, was between Google and Yahoo! Inc.

Looking ahead to the November 2016 US presidential election, I see clear signs that Google is backing Hillary Clinton. In April 2015, Clinton hired Stephanie Hannon away from Google to be her chief technology officer and, a few months ago, Eric Schmidt, chairman of the holding company that controls Google, set up a semi-secret company – The Groundwork – for the specific purpose of putting Clinton in office. The formation of The Groundwork prompted Julian Assange, founder of WikiLeaks, to dub Google Clinton’s ‘secret weapon’ in her quest for the US presidency.

We now estimate that Hannon’s old friends have the power to drive between 2.6 and 10.4 million votes to Clinton on election day with no one knowing that this is occurring and without leaving a paper trail. They can also help her win the nomination, of course, by influencing undecided voters during the primaries. Swing voters have always been the key to winning elections, and there has never been a more powerful, efficient or inexpensive way to sway them than SEME.

We are living in a world in which a handful of high-tech companies, sometimes working hand-in-hand with governments, are not only monitoring much of our activity, but are also invisibly controlling more and more of what we think, feel, do and say. The technology that now surrounds us is not just a harmless toy; it has also made possible undetectable and untraceable manipulations of entire populations – manipulations that have no precedent in human history and that are currently well beyond the scope of existing regulations and laws. The new hidden persuaders are bigger, bolder and badder than anything Vance Packard ever envisioned. If we choose to ignore this, we do so at our peril.