Technology and a Tyranny Worse than Prison 

By Bert Olivier

Source: Brownstone Institute

In an outstanding piece of political-theoretical writing, titled ‘The Threat of Big Other’ (with its play on George Orwell’s ‘Big Brother’), Shoshana Zuboff succinctly addresses the main issues of her book, The Age of Surveillance Capitalism – The Fight for a Human Future at the New Frontier of Power (New York: Public Affairs, Hachette, 2019), explicitly linking it to Orwell’s 1984.

Significantly, at the time she reminded readers that Orwell’s goal with 1984 was to alert British and American societies that democracy is not immune to totalitarianism, and that “Totalitarianism, if not fought against, could triumph anywhere” (Orwell, quoted by Zuboff, p. 16). In other words, people are utterly wrong in their belief that totalitarian control of their actions through mass surveillance (as depicted in 1984, captured in the slogan, “Big Brother is watching you”) could only issue from the state, and she does not hesitate to name the source of this threat today (p. 16):

For 19 years, private companies practicing an unprecedented economic logic that I call surveillance capitalism have hijacked the Internet and its digital technologies. Invented at Google in 2000, this new economics covertly claims private human experience as free raw material for translation into behavioural data. Some data are used to improve services, but the rest are turned into computational products that predict your behaviour. These predictions are traded in a new futures market, where surveillance capitalists sell certainty to businesses determined to know what we will do next. 

By now we know that such mass surveillance does not merely have the purpose – if it ever did – of tracking and predicting consumer behaviour with the aim of maximising profits; far from it. It is generally known among those who prefer to remain informed about global developments, and who do not rely solely on the legacy media for this, that in China such mass surveillance has reached the point where citizens are tracked through a myriad of cameras in public places, as well as through their smartphones, to the point where their behaviour is virtually completely monitored and controlled. 

Small wonder that Klaus Schwab of the World Economic Forum (WEF) does not let an opportunity pass to praise China as the model to be emulated by other countries in this respect. It should therefore come as no surprise that investigative reporter Whitney Webb, also alluding to Orwell’s prescience, draws attention to the striking similarities between the mass surveillance developed in the United States (US) in 2020 and Orwell’s depiction of a dystopian society in 1984, first published in 1949. 

In an article titled “Techno-tyranny: How the US national security state is using coronavirus to fulfil an Orwellian vision,” she wrote:

Last year, a government commission called for the US to adopt an AI-driven mass surveillance system far beyond that used in any other country in order to ensure American hegemony in artificial intelligence. Now, many of the ‘obstacles’ they had cited as preventing its implementation are rapidly being removed under the guise of combating the coronavirus crisis.

Webb proceeds to discuss an American government body that focused on researching ways in which artificial intelligence (AI) could promote national security and defence needs, and which provided details concerning the “structural changes” which American society and the economy would have to undergo in order to maintain a technological advantage over China. According to Webb, the relevant governmental body recommended that the US follow China’s example in order to surpass it, specifically regarding certain aspects of AI-driven technology as it pertains to mass surveillance. 

As she also points out, this stance on the desired development of surveillance technology conflicts with incongruous public statements by prominent American politicians and government officials, who claim that Chinese AI-driven surveillance systems constitute a significant threat to Americans’ way of life. This did not, however, prevent the implementation of several stages of such a surveillance operation in the US in 2020. As one knows in retrospect, such implementation was undertaken and justified as part of the American response to Covid-19. 

None of this is new, of course – by now it is well-known that Covid was the excuse to establish and implement Draconian measures of control, and that AI has been an integral part of it. The point I want to make, however, is that one should not be fooled into thinking that strategies of control will end there, nor that the Covid pseudo-vaccines were the last, or worst, of what the would-be rulers of the world can inflict upon us to exercise the total control they wish to achieve – a level of control that would be the envy of the fictional Big Brother society of Orwell’s 1984.

For example, several critically thinking people have alerted one to the alarming fact that the widely touted Central Bank Digital Currencies (CBDCs) are Trojan horses, with which the neo-fascists driving the current attempt at a ‘great reset’ of society and the world economy aim to gain complete control over people’s lives. 

At first blush the proposed switch from a fractional reserve monetary system to a digital currency system may seem reasonable, particularly in so far as it promises the (dehumanising) ‘convenience’ of a cashless society. As Naomi Wolf has pointed out, however, far more than this is at stake. In the course of a discussion of the threat of ‘vaccine passports’ to democracy, she writes (The Bodies of Others, All Seasons Press, 2022, p. 194):

There is now also a global push toward government-managed digital currencies. With a digital currency, if you’re not a ‘good citizen,’ if you pay to see a movie you shouldn’t see, if you go to a play you shouldn’t go to, which the vaccine passport will know because you have to scan it everywhere you go, then your revenue stream can be shut off or your taxes can be boosted or your bank account won’t function. There is no coming back from this.

I was asked by a reporter, ‘What if Americans don’t adopt this?’

And I said, ‘You’re already talking from a world that’s gone if this succeeds in being rolled out.’ Because if we don’t reject the vaccine passports, there won’t be any choice. There will be no such thing as refusing to adopt it. There won’t be capitalism. There won’t be free assembly. There won’t be privacy. There won’t be choice in anything that you want to do in your life.

And there will be no escape.

 In short, this was something from which there was no returning. If indeed there was a ‘hill to die on,’ this was it. 

This kind of digital currency is already in use in China, and it is being rapidly developed in countries like Britain and Australia, to mention only some.

Wolf is not the only one to warn against the decisive implications that accepting digital currencies would have for democracy. 

Financial gurus such as Catherine Austin Fitts and Melissa Cuimmei have both signalled that it is imperative not to yield to the lies, exhortations, threats and whatever other rhetorical strategies the neo-fascists might employ to force one into this digital financial prison. In an interview in which she deftly summarises the current situation of being “at war” with the globalists, Cuimmei has warned that the drive towards digital passports explains the attempt to get young children ‘vaccinated’ en masse: unless this can be done on a large scale, children cannot be drawn into the digital control system, and the latter will therefore not work. She has also stressed that refusal to comply is the only way to stop this digital prison from becoming a reality. We have to learn to say “No!”

Why a digital prison, and one far more effective than Orwell’s dystopian society of Oceania? The excerpt from Wolf’s book, above, already indicates that the digital ‘currencies’ that would be shown in your central bank digital account would not be money, which you could spend as you saw fit; in effect, they would have the status of programmable vouchers that would dictate what you can and cannot do with them. 
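The ‘programmable voucher’ idea can be made concrete with a toy sketch. This is purely illustrative – no real CBDC design is implied, and the class, categories and rules are all hypothetical – but it captures the essential point: with programmable money, the issuer’s embedded policy, not the holder, decides which transactions are valid.

```python
# Conceptual sketch only: how "programmable" digital money could enforce
# issuer-defined spending rules, unlike cash. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class ProgrammableVoucher:
    balance: float
    allowed_categories: set  # spending categories the issuer permits

    def spend(self, amount: float, category: str) -> bool:
        """A transaction succeeds only if the issuer's policy allows it."""
        if category not in self.allowed_categories:
            return False  # refused by policy, not by lack of funds
        if amount > self.balance:
            return False
        self.balance -= amount
        return True


voucher = ProgrammableVoucher(balance=100.0, allowed_categories={"groceries"})
print(voucher.spend(20.0, "groceries"))  # True: permitted category
print(voucher.spend(10.0, "cinema"))     # False: category blocked by policy
```

Cash has no such gatekeeper; here the second transaction is refused by policy even though the funds are there.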

They constitute a prison worse than debt, paralysing as the latter may be; if you don’t play the game of spending them on what is permissible, you could literally be forced to live without food or shelter – that is, eventually to die. Simultaneously, the digital passports of which these currencies would form a part represent a surveillance system that would record everything you do and everywhere you go. This means that a social credit system of the kind that functions in China, and has been explored in the dystopian television series Black Mirror, would be built into it – one that could make or break you.  

In her The Solari Report, Austin Fitts, for her part, elaborates on what one can do to “stop CBDCs.” This includes using cash as far as possible, limiting one’s dependence on digital transaction options in favour of analogue ones, and using good local banks instead of the banking behemoths, thereby decentralising financial power – a process further strengthened by supporting small local businesses instead of large corporations. 

One should be under no illusion that this will prove to be easy, however. As history has taught us, when dictatorial regimes attempt to gain power over people’s lives, resistance on the part of the latter is usually met with force, or with other ways of neutralising it.

As Lena Petrova reports, this was recently demonstrated in Nigeria, one of the first countries in the world (Ukraine being another) to introduce CBDCs. There the initial response from the population was very tepid, as most people prefer using cash (partly because many cannot afford smartphones). 

Not to be outdone, the Nigerian government resorted to dubious shenanigans, such as printing less money and asking people to hand in their ‘old’ banknotes for ‘new’ ones, which have not materialised. The result? People are starving because they lack cash to buy food, and they do not have, or do not want, CBDCs, partly because they lack smartphones and partly because they resist these digital currencies. 

It is difficult to tell whether Nigerians’ doubts about CBDCs are rooted in an awareness that, once embraced, the digital passport of which these currencies will comprise a part would allow the government complete surveillance and control of the populace. Time will tell whether Nigerians will accept this Orwellian nightmare lying down.

Which brings me to the significant philosophical point underpinning any argument about resisting the drive for dictatorial power through mass surveillance. As every enlightened person should know, there are different kinds of power. One such variety is encapsulated in Immanuel Kant’s famous motto for enlightenment, formulated in his 18th-century essay, “What is Enlightenment?” The motto reads: “Sapere aude!” and translates as “Have the courage to think for yourself,” or “Dare to think!” 

This motto may be said to correspond with what contributors to the activities of Brownstone Institute engage in. Hence, the emphasis on critical intellectual engagement is indispensable. But is it sufficient? I would argue that, while speech act theory has demonstrated, accurately – emphasising the pragmatic aspect of language – that speaking (and one could add writing) is already ‘doing something,’ there is another sense of ‘doing.’ 

This is its meaning of acting in the sense one encounters in discourse theory, which demonstrates the interwovenness of speaking (or writing) and acting through the imbrication of language with power relations. What this implies is that language use is intertwined with actions that find their correlate(s) in speaking and writing. This is compatible with Hannah Arendt’s conviction that, of labour, work and action (the components of the vita activa), action – verbal engagement with others, broadly for political purposes – is the highest embodiment of human activity.

Philosophers Michael Hardt and Antonio Negri have shed important light on the connection between Kant’s “Sapere aude!” and action. In Commonwealth (Cambridge, Mass.: Harvard University Press, 2009), the third volume of their magisterial trilogy (the other two being Empire and Multitude), they argue that although Kant’s “major voice” shows him to be an Enlightenment philosopher of the transcendental method – one who uncovered the conditions of possibility of certain knowledge of the law-governed phenomenal world, but by implication also of a practical life of dutiful social and political responsibility – there is also a seldom-noticed “minor voice” in Kant’s work. 

This points, according to them, towards an alternative to the modern power complex that Kant’s “major voice” affirms, and it is encountered precisely in his motto, articulated in the short essay on enlightenment referred to above. They claim further that the German thinker developed his motto in an ambiguous manner. On the one hand, “Dare to think” does not undermine his encouragement that citizens carry out their various tasks obediently and pay their taxes to the sovereign; needless to stress, such an approach amounts to strengthening the social and political status quo. On the other hand, they argue, Kant himself creates the aperture for reading this enlightenment exhortation (p. 17): 

[…] against the grain: ‘dare to know’ really means at the same time also ‘know how to dare’. This simple inversion indicates the audacity and courage required, along with the risks involved, in thinking, speaking, and acting autonomously. This is the minor Kant, the bold, daring Kant, which is often hidden, subterranean, buried in his texts, but from time to time breaks out with a ferocious, volcanic, disruptive power. Here reason is no longer the foundation of duty that supports established social authority but rather a disobedient, rebellious force that breaks through the fixity of the present and discovers the new. Why, after all, should we dare to think and speak for ourselves if these capacities are only to be silenced immediately by a muzzle of obedience? 

One cannot fault Hardt and Negri here; notice, above, that they include ‘acting’ among those things for which one requires the courage to ‘dare.’ As I have previously pointed out in a discussion of critical theory and Hardt and Negri’s interpretation of Kant on the issue of acting, towards the conclusion of his essay Kant uncovers the radical implications of his argument: if the ruler does not submit himself (or herself) to the very same rational rules that govern the citizens’ actions, there is no longer any obligation on the part of the latter to obey such a monarch. 

In other words, rebellion is justified when authorities themselves do not act reasonably (which includes observing the tenets of ethical rationality) but, by implication, act unjustifiably, if not aggressively, towards citizens. 

There is a lesson in this as far as the ineluctable need for action is concerned when rational argument with would-be oppressors gets one nowhere. This is especially the case when it becomes obvious that these oppressors are not remotely interested in a reasonable exchange of ideas, but summarily resort to the current unreasonable incarnation of technical rationality, namely AI-controlled mass surveillance, with the purpose of subjugating entire populations. 

Such action might take the form of refusing ‘vaccinations’ and rejecting CBDCs, but it is becoming increasingly apparent that one will have to combine critical thinking with action in the face of merciless strategies of subjugation on the part of the unscrupulous globalists.

The Raging Twenties: A New Map of Dystopia

Pepe Escobar’s new book Raging Twenties: Great Power Politics Meets Techno-Feudalism tells the story of a new phase of the U.S. empire.

By Pepe Escobar

Source: Consortium News

The Raging Twenties started with a murder: a missile strike on Gen. Soleimani at Baghdad airport on Jan. 3, 2020. Almost simultaneously, that geopolitical lethality was amplified when a virus cannibalized virtually the whole planet.

It’s as if Time has been standing still – or imploded – ever since. We cannot even begin to imagine the consequences of the anthropological rupture caused by SARS-CoV-2.

Throughout the process, language has been metastasizing, yielding a whole new basket of concepts while solidifying others. Circuit breaker. Biosecurity. Negative feedback loops. State of exception. Necropolitics. New Brutalism. Hybrid Neofascism. New Viral Paradigm.

This new terminology collates to the lineaments of a new regime, actually a hybrid mode of production: turbo-capitalism re-engineered as Rentier Capitalism 2.0, where Silicon Valley behemoths take the place of estates, and also The State. That is the “techno-feudal” option, as defined by economist Cedric Durand.

Squeezed and intoxicated by information performing the role of a dominatrix, we have been presented with a new map of Dystopia, packaged as a “new normal”, featuring cognitive dissonance, a bio-security paradigm, the inevitability of virtual work, social distancing as a political program, info-surveillance, and triumphant Trans-humanism.

A sanitary shock was superimposed over the ongoing economic shock – where financialization always takes precedence over the real economy.

But then the glimpse of a rosy future was offered towards more “inclusive” capitalism, in the form of a Great Reset, designed by a tiny plutocratic oligarchy duly self-appointed as Saviors.

All of these themes evolve along the 25 small chapters of this book, interacting with the larger geopolitical chessboard.

SARS-CoV-2 accelerated what was already a swing of the power center of the world towards Asia.

Since WWII, a great deal of the planet lived as cogs of a tributary system, with the Hegemon constantly transferring wealth and influence to itself – via what analyst Ray McGovern describes as SS (security state) enforcing the will of the MICIMATT (Military-Industrial-Congressional-Intelligence-Media-Academia-Think-Tank) complex.

This world-system is irretrievably fading out – especially due to the interpolations of the Russia-China strategic partnership. And that’s the other overarching theme of this book.

As a proposal to escape our excess hyper-reality show, this book does not offer recipes, but trails: configurations where there’s no masterplan, but multiple entryways and multiple possibilities.

These trails are networked to the narrative of a possible, emerging new configuration, in the anchoring essay titled “Eurasia, The Hegemon and the Three Sovereigns.”

In a running dialogue, you will have Michel Foucault talking to Lao Tzu, Marcus Aurelius talking to Vladimir Putin, philosophy talking to geoeconomics – all the while attempting to defuse the toxic interaction of the New Great Depression and variations of Cold War 2.0.

With the exception of the anchoring essay, this is a series of columns, arranged chronologically, originally published here on Consortium News/Washington D.C., Asia Times/Hong Kong and Strategic Culture/Moscow, widely republished and translated across the Global South.

They come from a global nomad. Since the mid-1990s I have lived and worked between (mostly) East and West. With the exception of the first two months of 2020, I spent the bulk of the Raging Twenties in Asia, in Buddhist land.

So you will feel that the scent of these words is inescapably Buddhist, but in many aspects even more Taoist and Confucianist. In Asia we learn that the Tao transcends everything as it provides serenity. There’s much we can learn from humanism stripped of metaphysics.

2021 may be even fiercer than 2020. Yet nothing condemns us to be lost in a wilderness of mirrors while, as Ezra Pound wrote,

a tawdry cheapness

shall reign throughout our days.

The hidden “secret” of this book may be actually a yearning – that we’re able to muster our inner strength and choose a Taoist trail to ride the whale.

For those who don’t use Amazon, here is a mini-guide on how to order Raging Twenties: Great Power Politics Meets Techno-Feudalism.

Post-Pandemic Landscapes: Behavior Modification as the New Consensus Reality

By Kingsley L. Dennis

Source: Waking Times

The ‘Covid Event’ gave the unreal world its great coup over the place of the real. This perception intervention gave the final stimulus necessary to tip the twenty-first century into an awaiting technologically manipulated reality. A new landscape is emerging where, for the first time, the human mind is finding itself out-of-place within its own territory. What are now being termed the emerging ‘post-pandemic’ landscapes are likely to be hazardous territory for our mental, emotional, and physical states. The human condition is under modification.

New forms of power are on the rise, embedded within structures of health security, that are re-imagining our social lives, living and workspaces, and our physical and digital movements. Until now, the spider’s web of social control mainly operated below the waterline in a space where an almost intangible world existed beyond governance or accountability.  Now the Kraken awakes and is unashamedly coming to the surface. The beast of behavior modification is spreading its tentacles through our most established social and cultural institutions without shame – all in the name of health security (the new nom de plume of social management).  These institutions include the media, city life, the office, and – perhaps most of all – the online-digital world. The modification of these spaces is set to further desensitize, anesthetize, and dehumanize us. It is as if the collective human mind is being groomed and prepared for a new consensus reality of ‘normalized dissonance.’

The post-pandemic landscape is merging physical pandemics with its own viral digital epidemics that are infecting the human psyche. The Italian philosopher Franco Berardi has noted that our ‘electronic mediascape’ is putting ‘the sensitive organism in a state of permanent electrocution.’[1] The social body is being deliberately targeted by strategies that cause anxiety, fragmentation, exhaustion, confusion, polarization, and fear. We can see this through national and local lockdowns; social distancing; anti-social interaction; social ostracization; loss of economic independence, and more. In early July, Prof Sir Venki Ramakrishnan, president of the Royal Society (the UK’s national academy of science) stated publicly that face masks should be worn in all public spaces (as they already are in many places in Europe and worldwide). Not wearing a face covering, he added, ‘should be regarded as “anti-social” in the same way as drink driving or failing to wear a seatbelt.’[2] This is nothing short of encouraging a regime of public shaming. The human condition is being subjected to a new rhythm of the modern power-machine that is breaking down our social alliances.

The established conditions that created a sense of social reality are being dissolved and replaced with processes aimed at managing the masses through forms of separation and quantification – that is, the techniques necessary to begin the formation of a technologized humanity. These processes seek to reduce human life, and its environment, to something measurable and predictable – a life ordained by algorithms. These imposed changes are creating a disequilibrium in the human psyche – a fragmentation of the human self. Furthermore, they are seeking to break down our trusted social relations.

There is something insidious creeping up into the global collective that is attempting to create a world of sleepwalkers, plied with fear-pills, updated with vaccines, programmed with nonsense, and dismissive of alternative thinking. As a conscious, biological organism we are being prepared to mimic the automation of the machine. Humanity is mentally sleeping and slipping into the void where a new form of the ‘social collective’ awaits us.

Techniques are being devised and employed to produce normalized and standardized behavior in order to create a socially managed populace. The collective human mind is being adapted and adopted into an infrastructure of control that operates largely through modes of digital connectivity. I refer to this rising mechanism of social engineering as the modern power-machine (MPM) that exerts control over human expression and autonomy of behavior. To enact this, a consortium of institutions have been selected to structure contemporary societies toward specific functions that give the promise of security and human well-being whilst developing increased social dependency. This is the post-pandemic landscape now rapidly arising, into which all future generations shall be born.

Childhood’s End

Luciano Floridi, a professor of philosophy and the ethics of information, believes that human civilization is shifting into a phase of ‘hyperhistory.’ A hyperhistorical society that is dependent upon integrative technologies, says Floridi, could also become human-independent – that is, not needing us. Life on this planet is being developed into an infrastructure that favors machinic intelligence and artificial organisms, thus de-territorializing the human experience. Our urban environments may soon be more conducive to artificial life than biological ones. No one is yet ready for the mutation at hand. We are being programmed to take on a new position in the world that will erode the possibility of human transcendence; a world where the ‘flesh robot’ will eventually become the reality consensus.

We are witnessing an unprecedented migration of humanity from its physical space to the digital-sphere – an environment of surveillance and technocratic social management. The incoming generations will recognize no fundamental difference between the digital-sphere and the physical world, as this merging will form the reality they are born into. To the new generations, the digital-physical-sphere will be their only reality, for they will have been born without the offline-online distinction. In the words of Luciano Floridi, they are born ‘onlife’ – and that is now their reality. The world that many of us recognized as being human will never be the same again. With the ‘onlife’ mode, a new era of history begins. Childhood comes to an end when the child stops being a child and becomes a user. It is then that they inhabit whole new realities – realities they may believe to be ‘user-generated’ when in fact the reverse is more the case.

Connectivity and access will be part of the regime of the new power-machine. And the rights of access are going to be a matter of consensus health security (as addressed in New Dawn 180/181).  To be a part of the power-machine will mean opting-in to its sanctioned, and on-surveillance, connections. Soon, opting out will be made an almost impossible alternative. Connecting into the power-machine will become the new cartography of the ‘human reality.’ Living ‘manually’ will become one of the last few remaining sites of resistance as human life becomes regulated-by-automation.

The City as Machine Cradle

Modern living, especially within dense urban metropolises, as well as within poverty-stricken neighborhoods, severely affects the human psychological condition, as well as affecting the nervous system. Journalist Naomi Klein has noted how a form of ‘Pandemic Shock Doctrine’ is emerging where city metropolises are forming suspicious partnerships with large tech conglomerates to re-design city living. Klein has stated that the quarantine lockdowns were not so much to save lives ‘but as a living laboratory for a permanent — and highly profitable — no-touch future.’[3] One tech CEO that Klein interviewed commented that: ‘There has been a distinct warming up to human-less, contactless technology…Humans are biohazards, machines are not.’[4] Several local city governments are in negotiations with large private tech companies to create a ‘seamless integration’ between city government, education, health, and policing operations. Further, the individual home will become a smart-enclosed hub for the urban dweller. All this, and more, as a ‘frontline pandemic response.’

Online learning, the home office, telehealth, and online commerce are all now part of an emerging investment landscape to convert existing physical-digital infrastructures to cloud-based ones that will be incorporated into the arriving, fully completed 5G network. All in the name of providing citizens with a securitized ‘virus-free’ landscape. Eric Schmidt, ex-CEO of Google/Alphabet and now chair of the Defense Innovation Board that advises the Department of Defense on military A.I., announced publicly with a straight face:

‘The benefit of these corporations, which we love to malign, in terms of the ability to communicate, the ability to deal with health, the ability to get information, is profound. Think about what your life would be like in America without Amazon.’[5]

Schmidt has now been hired to head up the task force commissioned to reimagine New York’s post-Covid reality. And he won’t be alone. High-tech is now jumping to get into partnerships with local governments in order to bring a safer, more ‘securitized’ landscape into civil society – all for ‘our’ benefit.

The business office landscape is also under re-organization to further regulate and isolate the social interactions of working colleagues. It can be said that a new form of business behavior modification is in the works. In a recent business analysis published in Bloomberg, Jeff Green and Michelle F. Davis suggested that:

The pre-Covid workplace, with its shared desks and common areas designed for “creative collisions,” is getting a makeover for the social distancing era. So far, what employers have come up with is a mash-up of airport security style entrance protocols and surveillance combined with precautions already seen at grocery stores, like sneeze guards and partitions.[6]

The authors of the report also foresee that the newly returned office worker will likely be encased in a makeshift cubicle made of plexiglass sheets. A new mode of anti-interaction is clearly in the works.

Hundreds of major companies, at the least, are planning what they call ‘employee re-orientation programs’ and have already hired ‘thermal scanners’ to monitor employees for fevers, according to the article’s sources. The authors also noted that there has been a spike in job postings for ‘tracers,’ who would track down the contacts of anyone who tests positive for the Covid-19 virus. In short, companies are now looking for a range of solutions to keep people away from one another throughout the working day. IBM, for example, is looking into using existing sensors or finding new technology to detect when people are too close together or ‘trending’ in that direction. Another report from the UK[7] noted how companies were looking into developing their own specialist employee smartphone apps that would operate elevators hands-free. The language employers are using includes creating ‘safe bubbles’ around employees and monitoring so that these ‘safe bubbles’ do not overlap. How would they manage such monitoring?

Various companies, the UK report goes on to say, are looking to teach artificial intelligence (AI) to monitor the video cameras that are monitoring the employees. Dr Mahesh Saptharishi, Motorola Solutions’ chief technology officer (based in Boston) explained that AI algorithms can offer feedback about ‘pinch points’ where people are too close together. Instead of employers (read ‘humans’) having to spend time (read ‘waste time’) watching the actual video, they can ‘ask’ the AI how well social distancing is being observed overall, and where problem points are.[8] So that’s the issue solved then. We’ll just rely on AI algorithms to tell us how to ‘social distance’ in our non-interacting bubbles and we can modify our behavior accordingly. Job done!
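At its core, the kind of AI monitoring described above reduces to a simple geometric check over detected positions. Here is a minimal sketch, assuming a hypothetical upstream vision system has already estimated an (x, y) floor position for each detected person; the IDs, coordinates, and the 2-metre threshold are all illustrative:

```python
# Toy "pinch point" detector: flags pairs of people closer than a threshold.
# Assumes an upstream camera/vision system supplies (x, y) floor positions.
from itertools import combinations
from math import dist  # Euclidean distance (Python 3.8+)


def pinch_points(positions, min_distance=2.0):
    """Return pairs of person IDs closer together than min_distance metres.

    positions: dict mapping a person ID to an (x, y) floor coordinate.
    """
    return [
        (a, b)
        for a, b in combinations(positions, 2)
        if dist(positions[a], positions[b]) < min_distance
    ]


tracked = {"p1": (0.0, 0.0), "p2": (1.0, 0.0), "p3": (10.0, 10.0)}
print(pinch_points(tracked))  # [('p1', 'p2')]: only this pair is within 2 m
```

The surveillance burden lies not in this trivial distance check but in the continuous tracking needed to feed it: the system only works if every person’s position is being estimated all the time.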

What this also signifies is that in order to be able to modify our behavior, machine intelligence will need to gather ever greater datasets about us. That is, ‘smart cities’ and ‘secure offices’ equals increased surveillance which equals expanded datasets. The ‘Black Iron Prison’ that Philip K. Dick saw coming is now hitting us squarely in the form of surveillance capitalism.

Surveillance Capitalism

Professor Shoshana Zuboff, the author of the widely acclaimed The Age of Surveillance Capitalism, has said that digital connection is now a means to others’ commercial ends. With the rapid rise of data collection for commercial gain, Zuboff observes: ‘The result is that both the world and our lives are pervasively rendered as information.’[9] People are reduced to being less than products because they are rendered into a mere ‘input’ for the creation of the real product, which is the data. Predictions about people’s futures are sold to the highest bidder so that these futures can be profited from or altered to favor better commercial gains. Zuboff considers surveillance capitalism to be, at its core, parasitic and self-referential – a parasite that feeds on every aspect of every human’s experience.

Human experience is treated as free raw material, and it is this material that becomes the product of value. From it, organizations decide to intervene in our lives to shape and modify human behavior in order to favor the outcomes most desirable for commercial gain. Behavioral modification is now in the hands of private capital – and undertaken with minimal external oversight. At its most basic, humans have been reduced to ‘batteries’ that produce datasets for algorithms and machine learning to process. What is most worrying is that, by and large, the general population is ignorant of what is going on quite literally beneath their fingertips. As Zuboff notes, people unknowingly end up funding their own forms of domination.

Through its operations of technocratic ‘normalization’ and the deliberate breaking up of social alliances, the power-machine age is manufacturing a new standardization of the human body and mind. With the encroachment of socially managed interventions, people are made vulnerable to the increased destabilizing of the human self. The human sense of ‘self’ and identity has become a fragile thing; it is analyzed, scrutinized, and criticized through social media; it is modified through surveillance capitalism; and it is increasingly being rendered by AI facial recognition systems such as Clearview. As these post-pandemic landscapes become increasingly rolled out in more social environments, we are likely to see, as a consequence, an ever-greater fragmentation of the human self.

The Fragmented Self

It is no exaggeration to say that humanity is entering a period of existential crisis that has perhaps not been witnessed since the Middle Ages. Only this time, we don’t have our religious institutions to offer us salvation. The responsibility of finding salvation – through becoming fully human in the face of dehumanizing forces – rests upon our own shoulders. At present, we are being bombarded with such contradictory information that many people are unable to find coherence or make a whole picture out of the shards. That is, the human mind is finding it increasingly difficult to see the patterns and to connect the dots. Many people will also now be experiencing forms of cognitive dissonance. One definition of this state is: ‘Cognitive dissonance refers to a situation involving conflicting attitudes, beliefs or behaviours. This produces a feeling of mental discomfort leading to an alteration in one of the attitudes, beliefs or behaviours to reduce the discomfort and restore balance.’[10]

The result is that the mind desperately wishes to reduce this discomfort and restore balance by seeking – or being provided with – a coherent picture, or closure. The danger here is that this ‘closure’ or ‘coherent picture’ may be provided by an external source, institution, or body (a structure of orthodox ‘authority’), and many people will seize on it as a way of gaining closure, and thus comfort – when, in truth, we need to find this coherence and closure within ourselves, through our own resources. With the increasing breakdown of social relations and of an interactive human environment, people’s consciousness is being pushed further into compartmentalization, where events are seen as random rather than interrelated and meaningful. This lack of meaningfulness will be compensated for by the rise of virtual attractions as the digital sphere increasingly becomes the ‘safe and secure’ home that people turn to. Critical thought, perceptive observation, and intuitive knowing will be under the onslaught of nullifying behavior modification.

As we are now seeing in the public space, self-identity (race, sexuality, etc.) is becoming a target of division, further creating doubt, anxiety, and social polarization. Psychologically, people are being pushed to acquiesce, submit, and accept the measures that are being implemented as the ‘new normal’ post-pandemic landscapes. And the more we submit, the more we become vulnerable to further submission and disempowerment. Bureaucratic regimes and administrative structures will creep further into our living, work, and leisure lives until a form of what the French philosopher Michel Foucault called disciplinary power comes to dominate the human condition. New forms of social discipline and collective obedience are fostering an artificial and engineered state of perception. We are right in the middle of a time of intense ‘enforced socialization,’ or what Edward Snowden recently referred to as an ‘architecture of oppression.’ For some, the only response to this overwhelming ‘architecture of oppression’ will be to find their comfort zones – such as sitting in their chairs at home with their ‘surrogates’ roaming the digital-physical landscape on their behalf.[11] Or, as the 2008 computer-animated sci-fi film WALL-E depicted, growing lazy and obese, indulging in infantile entertainments while robots cater to their every need. We can only hope this shall never be the case.

Humanity has entered unprecedented times. Such times demand an unprecedented response. It appears that we are now being asked to ‘step up’ to accept our responsibility for our human becoming, and so to become fully human. By doing nothing, we are allowing our behavior to be modified and our self-identities to be splintered. In these post-pandemic landscapes, the choices we make will, like never before, determine our future as a human species. I suggest it is now time to declare our unity as an empowered, fully human species – by refusing the power-machine’s push to turn us into distanced and disempowered individuals.

No Escape from Our Techno-Feudal World

By Pepe Escobar

Source: Global Research

The political economy of the Digital Age remains virtually terra incognita. In Techno-Feudalism, published three months ago in France (no English translation yet), Cedric Durand, an economist at the Sorbonne, provides a crucial, global public service as he sifts through the new Matrix that controls all our lives.

Durand places the Digital Age in the larger context of the historical evolution of capitalism to show how the Washington consensus ended up metastasized into the Silicon Valley consensus. In a delightful twist, he brands the new grove as the “Californian ideology”.

We’re far away from Jefferson Airplane and the Beach Boys; it’s more like Schumpeter’s “creative destruction” on steroids, complete with IMF-style “structural reforms” emphasizing “flexibilization” of work and  outright marketization/financialization of everyday life.

The Digital Age was crucially associated with right-wing ideology from the very start. The incubation was provided by the Progress and Freedom Foundation (PFF), active from 1993 to 2010 and conveniently funded by, among others, Microsoft, AT&T, Disney, Sony, Oracle, Google and Yahoo.

In 1994, PFF held a ground-breaking conference in Atlanta that eventually led to a seminal Magna Carta: literally, Cyberspace and the American Dream: a Magna Carta for the Knowledge Era, published in 1996, during the first Clinton term.

Not by accident the magazine Wired was founded, just like PFF, in 1993, instantly becoming the house organ of the “Californian ideology”.

Among the authors of the Magna Carta we find futurist Alvin “Future Shock” Toffler and Reagan’s former scientific counselor George Keyworth. Before anyone else, they were already conceptualizing how “cyberspace is a bioelectronic environment which is literally universal”. Their Magna Carta was the privileged road map to explore the new frontier.

Those Randian heroes

Also not by accident the intellectual guru of the new frontier was Ayn Rand and her quite primitive dichotomy between “pioneers” and the mob. Rand declared that egotism is good, altruism is evil, and empathy is irrational.

When it comes to the new property rights of the new Eldorado, all power should be exercised by the Silicon Valley “pioneers”, a Narcissus bunch in love with their mirror image as superior Randian heroes. In the name of innovation they should be allowed to destroy any established rules, in a Schumpeterian “creative destruction” rampage.

That has led to our current environment, where Google, Facebook, Uber and co. can overstep any legal framework, imposing their innovations like a fait accompli.

Durand goes to the heart of the matter when it comes to the true nature of “digital domination”: US leadership was never achieved because of spontaneous market forces.

On the contrary. The history of Silicon Valley is absolutely dependent on state intervention – especially via the military-industrial complex and the aerospace complex. The Ames Research Center, one of NASA’s top labs, is in Mountain View. Stanford was always awarded juicy military research contracts. During WWII, Hewlett-Packard, for instance, flourished thanks to its electronics being used to manufacture radar. Throughout the 1960s, the US military bought the bulk of the still-infant semiconductor production.

The Rise of Data Capital, a 2016 MIT Technology Review report produced “in partnership” with Oracle, showed how digital networks open access to a new, virgin underground brimming with resources: “Those that arrive first and take control obtain the resources they’re seeking” – in the form of data.

So everything from video-surveillance images and electronic banking to DNA samples and supermarket tickets implies some form of territorial appropriation. Here we see in all its glory the extractivist logic inbuilt in the development of Big Data.

Durand gives us the example of Android to illustrate the extractivist logic in action. Google made Android free for all smartphones so it would acquire a strategic market position, beating the Apple ecosystem and thus becoming the default internet entry point for virtually the whole planet. That’s how a de facto, immensely valuable, online real estate empire is built.

The key point is that whatever the original business – Google, Amazon, Uber – strategies of conquering cyberspace all point to the same target: take control of “spaces of observation and capture” of data.

About the Chinese credit system…

Durand offers a finely balanced analysis of the Chinese credit system – a public/private hybrid system launched in 2013 during the 3rd plenum of the 18th Congress of the CCP, under the motto “to value sincerity and punish insincerity”.

For the State Council, the supreme government authority in China, what really mattered was to encourage behavior deemed responsible in the financial, economic and socio-political spheres, and sanction what is not. It’s all about trust. Beijing defines it as “a method of perfecting the socialist market economy system that improves social governance”.

The Chinese term – shehui xinyong – is totally lost in translation in the West. Far more complex than “social credit”, it is closer to “trustworthiness”, in the sense of integrity. Instead of the pedestrian Western accusation that it is an Orwellian system, the priorities include the fight against fraud and corruption at the national, regional and local levels, violations of environmental rules, and disregard for food-safety norms.

The cybernetic management of social life has been seriously discussed in China since the 1980s – in fact, since the 1940s, as we see in Mao’s Little Red Book. It could be seen as inspired by the Maoist principle of the “mass line”, as in “start with the masses to come back to the masses: to amass the ideas of the masses (which are dispersed, non-systematic), concentrate them (into general, systematic ideas), then come back to the masses to diffuse and explain them, make sure the masses assimilate them and translate them into action, and verify in the action of the masses the pertinence of these ideas”.

Durand’s analysis goes one step beyond Shoshana Zuboff’s The Age of Surveillance Capitalism when he finally reaches the core of his thesis, showing how digital platforms become “fiefdoms”: they live off, and profit from, their vast “digital territory” peopled with data, even as they lock in power over their services, which are deemed indispensable.

And just as in feudalism, fiefdoms dominate territory by attaching serfs. Masters made their living profiting from the social power derived from the exploitation of their domain, and that implied unlimited power over the serfs.

It all spells out total concentration. Silicon Valley stalwart Peter Thiel has always stressed the target of the digital entrepreneur is exactly to bypass competition. As quoted in Crashed: How a Decade of Financial Crises Changed the World, Thiel declared, “Capitalism and competition are antagonistic. Competition is for losers.”

So now we are facing not a mere clash between Silicon Valley capitalism and finance capital, but actually a new mode of production: a turbo-capitalist survival as rentier capitalism, in which the Silicon giants take the place of the estates – and also of the State. That is the “techno-feudal” option, as defined by Durand.

Blake meets Burroughs

Durand’s book is extremely relevant in showing how rarefied the theoretical and political critique of the Digital Age still is. There is no precise cartography of all those dodgy circuits of revenue extraction; no analysis of how they profit from the financial casino – especially the mega investment funds that facilitate hyper-concentration – or of how they profit from the hardcore exploitation of workers in the gig economy.

The total concentration of the digital glebe is leading to a scenario, as Durand recalls, already imagined by John Stuart Mill, in which all the land in a country belongs to a single master. Our generalized dependency on the digital masters seems to be “the cannibal future of liberalism in the age of algorithms”.

Is there a possible way out? The temptation is to go radical – a Blake/Burroughs crossover. We have to expand our scope of comprehension – and stop confusing the map (as shown in the Magna Carta) with the territory (our perception).

William Blake, in his proto-psychedelic visions, was all about liberation and subordination – depicting an authoritarian deity imposing conformity via a sort of source code of mass influence. Looks like a proto-analysis of the Digital Age.

William Burroughs conceptualized Control – an array of manipulations including mass media (he would be horrified by social media). To break down Control, we must be able to hack into and disrupt its core programs. Burroughs showed how all forms of Control must be rejected – and defeated: “Authority figures are seen for what they are: dead empty masks manipulated by computers”.

Here’s our future: hackers or slaves.

 


Slouching towards dystopia: the rise of surveillance capitalism and the death of privacy

Our lives and behaviour have been turned into profit for the Big Tech giants – and we meekly click “Accept”. How did we sleepwalk into a world without privacy?

By John Naughton

Source: New Statesman

Suppose you walk into a shop and the guard at the entrance records your name. Cameras on the ceiling track your every step in the store, logging which items you look at and which ones you ignore. After a while you notice that an employee is following you around, recording on a clipboard how much time you spend in each aisle. And when you have chosen an item and bring it to the cashier, she won’t complete the transaction until you reveal your identity, even if you’re paying cash.

Another scenario: a stranger is standing at the garden gate outside your house. You don’t know him or why he’s there. He could be a plain-clothes police officer, but there’s no way of knowing. He’s there 24/7 and behaves like a real busybody. He stops everybody who visits you and checks their identity. This includes taking their mobile phone and copying all its data on to a device he carries. He does the same for family members as they come and go. When the postman arrives, this stranger insists on opening your mail, or at any rate on noting down the names and addresses of your correspondents. He logs when you get up, how long it takes you to get dressed, when you have meals, when you leave for work and arrive at the office, when you get home and when you go to bed, as well as what you read. He is able to record all of your phone calls, texts, emails and the phone numbers of those with whom you exchange WhatsApp messages. And when you ask him what he thinks he’s doing, he just stares at you. If pressed, he says that if you have nothing to hide then you have nothing to fear. If really pressed, he may say that everything he does is for the protection of everyone.

A third scenario: you’re walking down the street when you’re accosted by a cheery, friendly guy. He runs a free photo-framing service – you just let him copy the images on your smartphone and he will tidy them up, frame them beautifully and put them into a gallery so that your friends and family can always see and admire them. And all for nothing! All you have to do is to agree to a simple contract. It’s 40 pages but it’s just typical legal boilerplate – the stuff that turns lawyers on. You can have a copy if you want. You make a quick scan of the contract. It says that of course you own your photographs but that, in exchange for the wonderful free framing service, you grant the chap “a non-exclusive, transferable, sub-licensable, royalty-free and worldwide licence to host, use, distribute, modify, copy, publicly perform or display, translate and create derivative works” of your photos. Oh, and also he can change, suspend, or discontinue the framing service at any time without notice, and may amend any of the agreement’s terms at his sole discretion by posting the revised terms on his website. Your continued use of the framing service after the effective date of the revised agreement constitutes your acceptance of its terms. And because you’re in a hurry and you need some pictures framed by this afternoon for your daughter’s birthday party, you sign on the dotted line.

All of these scenarios are conceivable in what we call real life. It doesn’t take a nanosecond’s reflection to conclude that if you found yourself in one of them you would deem it preposterous and intolerable. And yet they are all simple, if laboured, articulations of everyday occurrences in cyberspace. They describe accommodations that in real life would be totally unacceptable, but which in our digital lives we tolerate meekly and often without reflection.

The question is: how did we get here?

***

It’s a long story, but with hindsight the outlines are becoming clear. Technology comes into it, of course – but plays a smaller part than you might think. It’s more a story about human nature, about how capitalism has mutated to exploit digital technology, about the liberal democratic state and the social contract, and about governments that have been asleep at the wheel for several decades.

To start with the tech: digital is different from earlier general-purpose technologies in a number of significant ways. It has zero marginal costs, which means that once you have made the investment to create something it costs almost nothing to replicate it a billion times. It is subject to very powerful network effects – which mean that if your product becomes sufficiently popular then it becomes, effectively, impregnable. The original design axioms of the internet – no central ownership or control, and indifference to what it was used for so long as users conformed to its technical protocols – created an environment for what became known as “permissionless innovation”. And because every networked device had to be identified and logged, it was also a giant surveillance machine.

Since we humans are social animals, and the internet is a communications network, it is not surprising we adopted it so quickly once services such as email and web browsers had made it accessible to non-techies. But because providing those services involved expense – on servers, bandwidth, tech support, etc – people had to pay for them. (It may seem incredible now, but once upon a time having an email account cost money.) Then newspaper and magazine publishers began putting content on to web servers that could be freely accessed, and in 1996 Hotmail was launched (symbolically, on 4 July, Independence Day) – meaning that anyone could have email for free.

Hotmail quickly became ubiquitous. It became clear that if a business wanted to gain those powerful network effects, it had to Get Big Fast; and the best way to do that was to offer services that were free to use. The only thing that remained was finding a business model that could finance services growing at exponential rates and provide a decent return for investors.

That problem was still unsolved when Google launched its search engine in 1998. Usage of it grew exponentially because it was manifestly better than its competitors. One reason for its superiority was that it monitored very closely what users searched for and used this information to improve the algorithm. So the more that people used the engine, the better it got. But when the dot-com bubble burst in 2000, Google was still burning rather than making money and its two biggest venture capital investors, John Doerr of Kleiner Perkins and Michael Moritz of Sequoia Capital, started to lean on its founders, Larry Page and Sergey Brin, to find a business model.

Under that pressure they came up with one in 2001. They realised that the data created by their users’ searches could be used as raw material for algorithms that made informed guesses about what users might be interested in – predictions that could be useful to advertisers. In this way what was thought of as mere “data exhaust” became valuable “behavioural surplus” – information given by users that could be sold. Between that epiphany and Google’s initial public offering in 2004, the company’s revenues increased by over 3,000 per cent.

Thus was born a new business model that the American scholar Shoshana Zuboff later christened “surveillance capitalism”, which she defined as: “a new economic order that claims human experience as the raw material for hidden commercial practices of extraction, prediction and sales”. Having originated at Google, it was then conveyed to Facebook in 2008 when a senior Google executive, Sheryl Sandberg, joined the social media giant. So Sandberg became, as Zuboff puts it, the “Typhoid Mary” who helped disseminate surveillance capitalism.

***

The dynamic interactions between human nature and this toxic business model lie at the heart of what has happened with social media. The key commodity is data derived from close surveillance of everything that users do when they use these companies’ services. Therefore, the overwhelming priority for the algorithms that curate users’ social media feeds is to maximise “user engagement” – the time spent on them – and it turns out that misinformation, trolling, lies, hate-speech, extremism and other triggers of outrage seem to achieve that goal better than more innocuous stuff. Another engagement maximiser is clickbait – headlines that intrigue but leave out a key piece of information. (“She lied all her life. Guess what happened the one time she told the truth!”) In that sense, social media and many smartphone apps are essentially fuelled by dopamine – the chemical that ferries information between neurons in our brains, and is released when we do things that give us pleasure and satisfaction.
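The engagement-maximising logic described above can be reduced to a caricature in a few lines: the feed is simply sorted by a predicted time-on-post score, with no term anywhere for truthfulness or civility. The post names and scores below are invented for illustration; real ranking systems are vastly more elaborate, but the shape of the objective function is the point.

```python
# Hypothetical sketch of engagement-first feed ranking: posts are ordered
# purely by predicted engagement, regardless of accuracy or tone.
def rank_feed(posts):
    """Sort posts by predicted engagement score, highest first."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

feed = rank_feed([
    {"id": "calm-explainer",  "predicted_engagement": 0.21},
    {"id": "outrage-bait",    "predicted_engagement": 0.87},
    {"id": "clickbait-tease", "predicted_engagement": 0.64},
])
print([p["id"] for p in feed])  # ['outrage-bait', 'clickbait-tease', 'calm-explainer']
```

Note what the sort key rewards: whatever keeps eyes on screen floats to the top, which is exactly why outrage and clickbait win under this objective.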

The bottom line is this: while social media users are essential for surveillance capitalism, they are not its paying customers: that role is reserved for advertisers. So the relationship of platform to user is essentially manipulative: he or she has to be encouraged to produce as much behavioural surplus as possible.

A key indicator of this asymmetry is the End User Licence Agreement (EULA) that users are required to accept before they can access the service. Most of these “contracts” consist of three coats of prime legal verbiage that no normal human being can understand, and so nobody reads them. To illustrate the point, in June 2014 the security firm F-Secure set up a free WiFi hotspot in the centre of London’s financial district. Buried in the EULA for this “free” service was a “Herod clause”: in exchange for the WiFi, “the recipient agreed to assign their first born child to us for the duration of eternity”. Six people accepted the terms. In another experiment, a software firm put an offer of an award of $1,000 at the very end of its terms of service, just to see how many would read that far. Four months and 3,000 downloads later, just one person had claimed the offered sum.

Despite this, our legal systems accept the fact that most internet users click  “Accept” as confirmation of informed consent, which it clearly is not. It’s really passive acceptance of impotence. Such asymmetric contracts would be laughed out of court in real life but are still apparently sacrosanct in cyberspace.

According to the security guru Bruce Schneier of Harvard, “Surveillance is the business model of the internet.” But it’s also a central concern of modern states. When Edward Snowden broke cover in the summer of 2013 with his revelations of the extensiveness and scale of the surveillance capabilities and activities of the US and some other Western countries, the first question that came to mind was: is this a scandal or a crisis? Scandals happen all the time in democracies; they generate a great deal of heat and controversy, but after a while the media caravan moves on and nothing happens. Crises, on the other hand, do lead to substantive reform.

Snowden revealed that the US and its allies had been engaged in mass surveillance under inadequate democratic oversight. His disclosures provoked apparent soul-searching and anger in many Western democracies, but the degree of public concern varied from country to country. It was high in Germany, perhaps because so many Germans have recent memories of Stasi surveillance. In contrast, public opinion in Britain seemed relatively relaxed: opinion surveys at the time suggested that about two-thirds of the British public had confidence in the security services and were thus unruffled by Snowden. Nevertheless, there were three major inquiries into the revelations in the UK, and, ultimately, a new act of parliament – the Investigatory Powers Act 2016. This overhauled and in some ways strengthened judicial oversight of surveillance activities by the security services; but it also gave those services significant new powers – for example in “equipment interference” (legal cover to hack into targeted devices such as smartphones, domestic networks and “smart” devices such as thermostats). So, in the end, the impact of the Snowden revelations was that manifestly inadequate oversight provisions were replaced by slightly less inadequate ones. It was a scandal, not a crisis. Western states are still in the surveillance business; and their populations still seem comfortable with this.

There’s currently some concern about facial recognition, a genuinely intrusive surveillance technology. Machine-learning technology has become reasonably good at recognising faces in public places, and many state agencies and private companies are already deploying it. It means that people are being identified and tracked without their knowledge or consent. Protests against facial recognition are well-intentioned, but, as Harvard’s Bruce Schneier points out, banning it is the wrong way to oppose modern surveillance.

This is because facial recognition is just one identification tool among many enabled by digital technology. “People can be identified at a distance by their heartbeat or by their gait, using a laser-based system,” says Schneier. “Cameras are so good that they can read fingerprints and iris patterns from metres away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the licence plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.”

The important point is that surveillance and our passive acceptance of it lies at the heart of the dystopia we are busily constructing. It doesn’t matter which technology is used to identify people: what matters is that we can be identified, and then correlated and tracked across everything we do. Mass surveillance is increasingly the norm. In countries such as China, a surveillance infrastructure is being built by the government for social control. In Western countries, led by the US, it’s being built by corporations in order to influence our buying behaviour, and is then used incidentally by governments.
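Schneier’s point that the particular identifier doesn’t matter can be illustrated with a toy example: once any single record links two identifiers (say, a phone’s MAC address and a car’s licence plate – all values here invented), previously separate sighting logs collapse into one profile. A minimal sketch, with dictionaries standing in for the real databases:

```python
# Toy surveillance logs keyed on different identifiers (all values invented).
wifi_log  = {"aa:bb:cc:01": ["cafe", "station"]}    # MAC address -> places seen
plate_log = {"XY-123": ["car park", "motorway"]}    # licence plate -> places seen
link      = {"aa:bb:cc:01": "XY-123"}               # one record ties the two IDs

# Merge each device's sightings with those of its linked licence plate:
profiles = {
    mac: sorted(places + plate_log.get(link.get(mac), []))
    for mac, places in wifi_log.items()
}
print(profiles["aa:bb:cc:01"])  # all four sightings fused under one identity
```

The merge itself is trivial; the power lies entirely in possessing the linking record, which is why “which technology identifies you” matters so much less than the fact that something does.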

What’s happened in the West, largely unnoticed by the citizenry, is a sea-change in the social contract between individuals and the state. Whereas once the deal was that we accepted some limitations on our freedom in exchange for security, now the state requires us to surrender most of our privacy in order to protect us. The (implicit and explicit) argument is that if we have nothing to hide there is nothing to fear. And people seem to accept that ludicrous trope. We have been slouching towards dystopia.

***

The most eerie thing about the last two decades is the quiescence with which people have accepted – and adapted to – revolutionary changes in their information environment and lives. We have seen half-educated tech titans proclaim mottos such as “Move fast and break things” – as Mark Zuckerberg did in the early years of Facebook – and then refuse to acknowledge responsibility when one of the things they may have helped to break is democracy.  (This is the same democracy, incidentally, that enforces the laws that protect their intellectual property, helped fund the technology that has enabled their fortunes and gives them immunity for the destructive nonsense that is disseminated by their platforms.) And we allow them to get away with it.

What can explain such indolent passivity? One obvious reason is that we really (and understandably) value some of the services that the tech industry has provided. There have been various attempts to attach a monetary value to them, but any conversation with a family that’s spread over different countries or continents is enough to convince one that being able to Skype or FaceTime a faraway loved one is a real boon. Or just think of the way that Google has become a memory prosthesis for humanity – or how educational non-profit organisations such as the Khan Academy can disseminate learning for free online.

We would really miss these services if they were one day to disappear, and this may be one reason why many politicians tip-toe round tech companies’ monopoly power. That the services are free at the point of use has undermined anti-trust thinking for decades: how do you prosecute a monopoly that is not price-gouging its users? (The answer, in the case of social media, is that users are not customers; the monopoly may well be extorting its actual customers – advertisers – but nobody seems to have inquired too deeply into that until recently.)

Another possible explanation is what one might call imaginative failure – most people simply cannot imagine the nature of the surveillance society that we are constructing, or the implications it might have for them and their grandchildren. There are only two cures for this failure: one is an existential crisis that brings home to people the catastrophic damage that technology could wreak. Imagine, for example, a more deadly strain of the coronavirus that rapidly causes a pandemic – but governments struggle to control it because official edicts are drowned out by malicious disinformation on social media. Would that make people think again about the legal immunity that social media companies enjoy from prosecution for content that they host on their servers?

The other antidote to imaginative failure is artistic creativity. It’s no accident that two of the most influential books of the last century were novels – Orwell’s Nineteen Eighty-Four (1949) and Aldous Huxley’s Brave New World (1932). The first imagined a world in which humans were controlled by fear engendered by comprehensive surveillance; the second portrayed one in which citizens were undone by addiction to pleasure – the dopamine strategy, if you like. The irony of digital technology is that it has given us both of these nightmares at once.

Whatever the explanation, everywhere at the moment one notices a feeling of impotence – a kind of learned helplessness. This is seen most vividly in the way people shrug their shoulders and click “Accept” on grotesquely skewed and manipulative EULAs. They face a binary choice: accept the terms or go away. Hence what has become known as the “privacy paradox” – whenever researchers and opinion pollsters ask internet users if they value their privacy, they invariably respond with a resounding “yes”. And yet they continue to use the services that undermine that beloved privacy.

It hasn’t helped that internet users have watched their governments do nothing about tech power for two decades. Surveillance capitalism was enabled because its practitioners operated in a lawless environment. It appropriated people’s data as a free resource and asserted its right to do so, much as previous variations of capitalism appropriated natural resources without legal restrictions. And now the industry claims as one of its prime proprietary assets the huge troves of that appropriated data that it possesses.

It is also relevant that tech companies have been free to acquire start-ups that threatened to become competitors without much, if any, scrutiny from competition authorities. In any rational universe, Google would not be permitted to own YouTube, and Facebook would have to divest itself of WhatsApp and Instagram. It’s even possible – as the French journalist Frédéric Filloux has recently argued – that Facebook believes its corporate interests are best served by the re-election of Donald Trump, which is why it’s not going to fact-check any political ads. As far as I can see, this state of affairs has not aroused even a squawk in the US.

When Benjamin Franklin emerged on the final day of deliberation from the Constitutional Convention of 1787, a woman asked him, “Well Doctor, what have we got, a republic or a monarchy?” To which Franklin replied, “A republic… if you can keep it.” The equivalent reply for our tech-dominated society would be: we have a democracy, if we can keep it.

 

The Tech Giants Are a Conduit for Fascism

By Michael Krieger

Source: Liberty Blitzkrieg

A second former Amazon employee would spark more controversy. Deap Ubhi, a former AWS employee who worked for Lynch, was tasked with gathering marketing information to make the case for a single cloud inside the DOD. Around the same time that he started working on JEDI, Ubhi began talking with AWS about rejoining the company. As his work on JEDI deepened, so did his job negotiations. Six days after he received a formal offer from Amazon, Ubhi recused himself from JEDI, fabricating a story that Amazon had expressed an interest in buying a startup company he owned. A contracting officer who investigated found enough evidence that Ubhi’s conduct violated conflict of interest rules to refer the matter to the inspector general, but concluded that his conduct did not corrupt the process. (Ubhi, who now works in AWS’ commercial division, declined comment through a company spokesperson.)

Ubhi worsened the impression by making ill-advised public statements while still employed by the DOD. In a tweet, he described himself as “once an Amazonian, always an Amazonian.”

– From the must-read ProPublica exposé: How Amazon and Silicon Valley Seduced the Pentagon

That U.S. tech giants are willing participants in facilitating mass government surveillance has been widely known for a while, particularly since whistleblower Edward Snowden risked his life and liberty to tell us about it six years ago. We also know what happens to executives who don’t play ball.

Perhaps the most high-profile example relates to Joseph Nacchio, CEO of the telecom company Qwest in the aftermath of 9/11. Courageously, he was the only executive who pushed back against government attempts to violate the civil liberties of his customers. A few years later, he was thrown in jail for insider trading and stayed locked up for four years. He claimed his incarceration was retaliation for not bending the knee to government, which seems likely.

His defense team claimed the charges were U.S. government retaliation for his refusal to give customer data to the National Security Agency in February 2001. This defense was not admissible in court because the U.S. Department of Justice filed an in limine motion, a device often used in national security cases, to exclude information that might reveal state secrets. Information from the Classified Information Procedures Act hearings in Nacchio’s case was likewise ruled inadmissible.

Fast forward to today, and the tech giants have willingly and enthusiastically transformed themselves into compliant organs of the national security state. Big tech executives have by and large embraced this extremely lucrative and powerful role rather than push back against it. There’s simply too much money at stake, and nobody wants to go to the big house like Joe Nacchio. There is no resistance.

Just yesterday, we learned that Twitter’s executive for the Middle East is an actual British Army ‘psyops’ soldier. Unfortunately, this is not a joke.

As reported by Middle East Eye:

The senior Twitter executive with editorial responsibility for the Middle East is also a part-time officer in the British Army’s psychological warfare unit, Middle East Eye has established.

Gordon MacMillan, who joined the social media company’s UK office six years ago, has for several years also served with the 77th Brigade, a unit formed in 2015 in order to develop “non-lethal” ways of waging war.

The 77th Brigade uses social media platforms such as Twitter, Instagram and Facebook, as well as podcasts, data analysis and audience research to wage what the head of the UK military, General Nick Carter, describes as “information warfare”.

Here’s how Twitter responded to the revelation…

Twitter would say only that “we actively encourage all our employees to pursue external interests.”

They don’t even care.

While that’s troubling enough, I want to focus your attention on a brilliant and extremely important piece published a couple of months ago at ProPublica, which many of you may have missed. It details the troubling and incestuous relationship between Amazon and Google executives and the Department of Defense. A relationship which virtually guarantees these CEOs immunity as long as they play ball. It’s impossible to read this piece and come away thinking these are “just private companies.” They demonstrably are not.

In the case of Amazon, a Pentagon whistleblower named Roma Laster grew uncomfortable with the cozy relationship Jeff Bezos had with DOD leaders.

We learn:

On Aug. 8, 2017, Roma Laster, a Pentagon employee responsible for policing conflicts of interest, emailed an urgent warning to the chief of staff of then-Secretary of Defense James Mattis. Several department employees had arranged for Jeff Bezos, the CEO of Amazon, to be sworn into an influential Pentagon advisory board despite the fact that, in the year since he’d been nominated, Bezos had never completed a required background check to obtain a security clearance.

Mattis was about to fly to the West Coast, where he would personally swear Bezos in at Amazon’s headquarters before moving on to meetings with executives from Google and Apple. Soon phone calls and emails began bouncing around the Pentagon. Security clearances are no trivial matter to defense officials; they exist to ensure that people with access to sensitive information aren’t, say, vulnerable to blackmail and don’t have conflicts of interest. Laster also contended that it was a “noteworthy exception” for Mattis to perform the ceremony. Secretaries of defense, she wrote, don’t hold swearing-in events…

The swearing-in was canceled only hours before it was scheduled to occur.

Bezos would’ve certainly been sworn into that board had Laster not had the courage to speak up. She later received her reward.

Laster did her best to enforce the rules. She would challenge the Pentagon’s cozy relationship not only with Bezos, but with Google’s Eric Schmidt, the chairman of the defense board that Bezos sought to join. The ultimate resolution? Laster was shunted aside. She was removed from the innovation board in November 2017 (but remains at the Defense Department). “Roma was removed because she insisted on them following the rules,” said a former DOD official knowledgeable about her situation.

Real whistleblowers are never celebrated by mass media and are always punished. That’s how you distinguish a real whistleblower from a fraud.

As mentioned above, Laster also called out and angered Eric Schmidt who, as chairman of Alphabet (Google, YouTube, etc.), was trying to sell services to the Pentagon while at the same time serving as chairman of the Department of Defense’s Innovation Board. That’s about as incestuous and corrupt as it gets.

Schmidt, the chairman of the innovation board, embraced the mission. In the spring and summer of 2016, he embarked, with fellow board members, on a series of visits to Pentagon operations around the world. Schmidt visited a submarine base in San Diego, an aircraft carrier off the coast of the United Arab Emirates and Creech Air Force Base, located deep in the Nevada desert near Area 51.

Inside the drone operations center at Creech, according to three people familiar with the trip, Schmidt observed video as a truck in a contested zone somewhere was surveilled by a Predator drone and annihilated. It was a mesmerizing display of the U.S. military’s lethal reach…

A little more than a year after Schmidt’s visit, Google won a $17 million subcontract in a project called Maven to help the military use image recognition software to identify drone targets — exactly the kind of function that Schmidt witnessed at Creech…

Schmidt’s influence, already strong under Carter, only grew when Mattis arrived as defense secretary. Schmidt’s travel privileges at the DOD, which required painstaking approval from the agency’s chief of staff for each stop of every trip, were suddenly unfettered after Schmidt requested carte blanche, according to three sources knowledgeable about the matter. Mattis granted him and the board permission to travel anywhere they wanted and to talk to anyone at the DOD on all but the most secret programs.

Such access is unheard-of for executives or directors of companies that sell to the government, say three current and former DOD officials, both to prevent opportunities for bribery or improper influence and to ensure that one company does not get advantages over others. “Mattis changed the rules of engagement and the muscularity of the innovation board went from zero to 60,” said a person who has served on Pentagon advisory boards. “There’s a lot of opportunity for mischief”…

Over the next months, Schmidt and two other board members with Google ties would continue flying all over the country, visiting Pentagon installations and meeting with DOD officials, sessions that no other company could attend. It’s hard to reconstruct what occurred in many of those meetings, since they were private. On one occasion, Schmidt quizzed a briefer about which cloud service provider was being used for a data project, according to a memo that Laster prepared after the briefing. When the briefer told him that Amazon handled the business, Schmidt asked if they’d considered other cloud providers. Laster’s memo flagged Schmidt’s inquiry as a “point of concern,” given that he was the chairman of a major cloud provider.

The DOD became unusually deferential to Schmidt. He preferred to travel on his personal jet, and he would ferry fellow board members with him. But that created a problem for his handlers: DOD employees are not permitted to ride on private planes. Still, the staff at the board didn’t want to inconvenience Schmidt by making him wait for his department support team to arrive on commercial flights. So, according to a source knowledgeable about the board’s spending, on at least one occasion the department requisitioned military aircraft at a cost of $25,000 an hour to transport its employees to meet Schmidt on his tour. (The DOD’s spokesperson said employees did this because “there were no commercial flights available.”)

Similar to the situation with Bezos, Roma Laster started asking questions, which angered master of the tech and military-industrial-complex universe Eric Schmidt.

Schmidt responded by threatening to go over her head to Mattis, according to her grievance. She was told to stand down and never again speak to Schmidt. According to the grievance, her boss told her, “Mr. Schmidt was a billionaire and would never accept pushback, warnings or limits.”

There’s so much more in this excellent article, but the key takeaway is the troubling extent of the existing merger between tech giants and the national security state. Disturbingly, this appears to have become even worse in the aftermath of the Snowden revelations, and the reasons why are clear. First, there are billions upon billions of dollars to be made. Second, nobody from the private sector ever gets punished for violating the civil liberties of the American public on behalf of the government and intelligence agencies. On the contrary, the only people who ever lose their freedoms and livelihoods are those who blow the whistle on government criminality (Thomas Drake, John Kiriakou, Chelsea Manning, Edward Snowden and Julian Assange, just to name a few).

Which brings up a very uncomfortable, yet fundamental question. How dangerous are tech giants that have near monopoly-level power in core areas such as communications and online retail and also enjoy state sponsorship and the total immunity that comes with it? Add to the equation the enormous amount of money up for grabs provided you play ball with the national security state, and you have a very precarious situation. This isn’t a hypothetical future dystopian scenario. It’s where we stand today.

Facebook and Google are two companies with known ties to the national security state that together have enormous control over who, for all practical purposes, gets to speak in the modern online public square. Then consider that the tech giants represent a perfect vehicle for the national security state to censor or disappear from the conversation those deemed problematic to imperial narratives.

The U.S. government cannot explicitly restrict most kinds of speech, but tech giants can do whatever they please and don’t even need to provide a reasonable justification. This means any relationship between companies with this sort of online speech-policing power and the national security state is extremely dangerous. It’s a conduit for fascism.

Then there’s Amazon. A company that has a $600 million contract with the CIA, has used questionable practices in attempts to secure a $10 billion JEDI cloud deal with the Pentagon, is aggressively marketing its facial recognition software to police departments across the country, and is coaching cops on how to obtain surveillance footage from its Ring doorbell camera without a warrant. But it gets even worse.

In light of recent public concerns around facial recognition, Bezos and his company are actively writing legislation for Congress on the issue.

We learn:

Amazon CEO Jeff Bezos says his company is developing a set of laws to regulate facial recognition technology that it plans to share with federal lawmakers.

In February, the company, which has faced escalating scrutiny over its controversial facial recognition tech, called Amazon Rekognition, published guidelines it said it hoped lawmakers would consider enacting. Now Amazon is taking another step, Bezos told reporters in a surprise appearance following Amazon’s annual Alexa gadget event in Seattle on Wednesday.

“Our public policy team is actually working on facial recognition regulations; it makes a lot of sense to regulate that,” Bezos said in response to a reporter’s question.

The idea is that Amazon will write its own draft of what it thinks federal legislation should look like, and it will then pitch lawmakers to adopt as much of it as possible…

In a statement, ACLU Northern CA Attorney Jacob Snow said:

“It’s a welcome sign that Amazon is finally acknowledging the dangers of face surveillance. But we’ve seen this playbook before. Once companies realize that people are demanding strong privacy protections, they sweep in, pushing weak rules that won’t protect consumer privacy and rights. Cities across the country are voting to ban face surveillance, while Amazon is pushing its surveillance tech deeper into communities.”

Meanwhile, Amazon is now using mafia tactics to pressure retailers, who feel forced to use the platform given its dominance in online retail, into paying for advertising. It’s not just small brands under the gun; even large companies with high name recognition, like Samsonite, are being squeezed via increasingly unethical practices.

Via Vox:

As Recode’s Jason Del Rey explored in his Land of the Giants podcast about the rise of Amazon, companies that sell on Amazon are increasingly having to pay to show up in search results — even when people are searching for their specific brands.

Case in point: the luggage brand Samsonite, which has to pay for sponsored ads in order to be the top result when you search “Samsonite” on Amazon.

As Samsonite’s Chief E-commerce Officer Charlie Cole told Del Rey, “Amazon is making money off your products, making money off your data by creating brands, and Amazon is making money off the privilege of being on their platform by selling you advertising to protect your brand.”

“It’s been a tough relationship,” he added.

Think about how completely insane that is, yet it’s also exactly what you’d expect to happen when one company comes to completely dominate a space as fundamental to the modern economy as online shopping.

Naturally, there’s more. It’s been well documented how Amazon uses its knowledge of product sales on its platform to then rip off existing brands by copying them and making its own version.

The more connected these tech giants are to the national security state, the more dangerous and unassailable they become. A destructive process which is already very much underway.

Centralized and unaccountable government power is always an existential threat to human liberty, but centralized and unaccountable government power exercised via tech behemoths which aren’t restrained by the Constitution is even worse. This is the world being built around us, and we’d be wise to address it soon.

How to Avert a Digital Dystopia

By Jumana Abu-Ghazaleh

Source: OneZero

“What I find [ominous] is how seldom, today, we see the phrase ‘the 22nd century.’ Almost never. Compare this with the frequency with which the 21st century was evoked in popular culture during, say, the 1920s.”

—William Gibson, famed science-fiction author, in an interview on dystopian fiction.

The 2010s are almost over. And it doesn’t quite feel right.

When the end of 2009 came into view, the end of the 2000s felt like a relatively innocuous milestone. The current moment feels so much more, what’s the word?

Ah, yes: dystopian.

Looking back, “dystopia” might have been the watchword of the 2010s. Black Mirror debuted close to the beginning of the decade, and early in its run, it was sometimes critiqued for how over-the-top it all felt. Now, at the end of the decade, it’s regularly critiqued as made obsolete by reality.

And it’s not just prestige TV like Black Mirror reflecting the decade’s mood of incipient collapse. Of the 2010s’ top 10 highest-grossing films, by my count at least half involve an apocalypse either narrowly averted or, in fact, taking place (I’m looking at you, Avengers movies).

People have reasons to wallow. I get it. The existential threat of climate change alone — and seeing efforts to mitigate it slow down precisely as it becomes more pressing — could fuel whole libraries of dystopian fiction.

Meanwhile, our current tech landscape — the monopolies, the wild spread of disinformation, the sense that your most private data could go public whenever, with no recourse, all the things that risk making Black Mirror feel quaint — truly feels dystopian.

Since no one in a position to actually do something about our dystopian reality seems to be admitting it — no business leaders, politicians or legacy media — it makes sense that you might get catharsis of acknowledgment from pop culture instead. And yet, the most popular end-of-the-world fiction isn’t about actual imminent threats from climate or tech. It’s about Thanos coming to snap half of life out of existence. Or Voldemort threatening to destroy us Muggles.

Maybe that kind of pop culture, which acknowledges dystopia but not the actual threats we currently face, gives us a feeling of control: Sure, Equifax could leak my social security number and face zero consequences, but there are no Hunger Games. Wow — it really could be so much worse! Maybe we enjoy watching distant, imaginary dystopias because they distract us from oncoming, real dystopias.

But let’s look at those actual potential dystopias for a moment and think about what we need to do to avert them.

I’d suggest the big four U.S. tech giants — Amazon, Facebook, Apple, Google — each have a distinct possible dystopia associated with them. If we don’t turn around our current reality, we will likely get all four — after all, for all the antagonistic rhetoric among the giants, they are rather co-dependent. Let’s look at what we might have, ahem, look forward to — unless we demand the tech giants deliver on the utopia they purportedly set out to achieve when their respective founders raised their rounds of millions. I would argue not only that we can, but that we must hold them accountable.

“Mad Max,” or, slowly then all at once: starring Apple

“‘How did you go bankrupt?’ Bill asked. ‘Two ways,’ Mike said. ‘Gradually and then suddenly.’”

—Ernest Hemingway, The Sun Also Rises.

When you think of Mad Max, you probably think of an irradiated, post-apocalyptic desert hellscape. You’re also not thinking of Mad Max.

In the original 1979 film, the apocalypse hasn’t quite yet happened. There’s been a substantial social breakdown, but things are getting worse in slow motion. There are still functioning towns. Our protagonist, Max, is a working-class cop; and while there’s reason to believe a big crash is coming, or has even begun, society is still hanging on. (It’s only in the sequels that we’re well into the post-apocalyptic landscape people are thinking of when they say “Mad Max.”)

A relatively subtle dystopia, where things gradually decline in the background, is also a good day-to-day description of a society overrun by algorithms, even without the attention-grabbing mega-scandals of a Cambridge Analytica or massive data breach. A kind of dystopia “light” — and Apple is its poster child.

After all, Apple has a genuinely better track record than some of the other tech giants on a few key privacy issues. But it’s also genuinely aware of the value of promulgating that vision of itself — and that can lead Apple users into danger.

In January, Apple purchased a multistory billboard outside the Consumer Electronics Show in Las Vegas, with this message: “What happens on your iPhone, stays on your iPhone.” Sounds great — but it’s deeply misleading, and as journalist Mark Wilson noted, Apple’s mismatch between rhetoric and behavior fuels the nightmare that is our current data security crisis:

“[iPhone] contents are encrypted by default […] But that doesn’t stop the 2 million or so apps in the App Store from spying on iPhone users and selling details of their private lives. “Tens of millions of people have data taken from them — and they don’t have the slightest clue,” says [the] founder of [the] cybersecurity firm Guardian […] The Wall Street Journal studied 70 iOS apps […] and found several that were delivering deeply private information, including heart rate and fertility data, to Facebook.” [Emphasis mine.]

A tech giant that is claiming it’s the path to salvation, while effectively creating a trap for those who believe it, sounds ironically familiar given Apple’s famous evocation of Big Brother.

After all, when people talk about habit-forming technology in terms so terrifying they’ve convinced Silicon Valley executives to limit their children’s access to their own products, let’s be real: They’re talking about iPhones.

When academic child psychology researcher Jean Twenge talks about a possible teenage mental health epidemic fueled by social media, we know what’s at the heart of it: She’s talking about iPhones.

All those aforementioned horror stories, and a huge slice of those algorithms you’ve heard so much about, are likely first reaching you on smartphones that, with world market share above 50%, are largely, you guessed it, iPhones. (And none of these stories even mention the workers at overseas facilities like Foxconn who create our iPhones and who really are living in a kind of explicit dystopia.)

What happens on your iPhone almost certainly doesn’t stay on your iPhone. But who created that surveillance capitalism running it all in the first place?

Enter Google.

Black Mirror’s “Nosedive,” or, welcome to surveillance capitalism: starring Google

“We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”

—Google’s then-CEO Eric Schmidt, in a 2011 interview.

You’ve probably heard it before: “If you’re not paying, you’re the product.” This is usually said in reference to ostensibly “free” services like Facebook or Gmail. It’s a creepy thought. And, according to Shoshana Zuboff, professor emerita at Harvard and economic analyst of what she’s termed “surveillance capitalism,” the selling of your personal information undermines autonomy. It’s worse than you being the product: “You are not the product. You are the abandoned carcass.”

Google, according to Zuboff, is the original inventor of surveillance capitalism. In its early “Don’t Be Evil” days, the idea of accessing people’s private Google searches and selling them was considered unthinkable. Then Google realized it could use search data for targeting purposes — and it never stopped creating opportunities to surveil its users:

“Google’s new methods were prized for their ability to find data that users had opted to keep private and to infer extensive personal information that users did not provide. These operations were designed to bypass user awareness. […] In other words, from the very start Google’s breakthrough depended upon a one-way mirror: surveillance.”

Twenty years later, surveillance capitalism has become so ubiquitous that it’s hard to live in Western society without being surveilled constantly by private actors.

As far as I know, no mass popular culture has really yet captured this reality, but one small metaphor that kind of hits on its effects is a Black Mirror episode called “Nosedive.”

In “Nosedive,” everyday people’s lived experience is very clearly the picked-apart carcass for an entire economic and social order; a kind of surveillance-driven social credit score affects every aspect of your daily life, from customer service to government resources to friendships, all based on your app usage and, most creepily, how other people rate you in the app.

If surveillance capitalism has been the engine powering our economy in the background for nearly two decades, it’s now having a coming-out party. Increasingly, Google isn’t just surveilling us in private. With its “smart city” initiatives, the company will literally be making city-management decisions in place of citizens: Sidewalk Labs, a Google sister company, plans to develop “the most innovative district in the entire world” in the Quayside neighborhood of Toronto, and Google itself plans to siphon every bit of data about how Quayside residents live and breathe and move via ubiquitous monitoring sensors, data that will likely inform (for a fee, naturally) how other cities develop.

Much like Apple, Google takes pains to present itself as a conscientious corporate citizen. It might be paternalistic, or antidemocratic, but it has learned that being seen as responsive to its workers and the broader public matters to its brand, largely thanks to the courageous and persistent efforts of those workers and of consumer advocates in civil society.

Not so much with Amazon.

“Elysium,” or, dystopia for some, Prime Day for others: starring Amazon

“[The New York Times] claims that our intentional approach is to create a soulless, dystopian workplace where no fun is had and no laughter heard. Again, I don’t recognize this Amazon and I very much hope you don’t either.” —Jeff Bezos, August 17, 2015 letter to staff after the New York Times investigation into working conditions at the company.

In 2015, Jeff Bezos felt the need to set the record straight: The New York Times was wrong about Amazon. Working there did not feel like a dystopia.

The years since have only validated the New York Times story, which focused on life for coders and executives at Amazon. Notably, when the Times and other investigative journalists have probed life for the far more numerous warehouse workers employed by Amazon, Bezos has largely stayed silent.

In fact, the further down the corporate ladder you get at Amazon, the more likely it seems that Jeff Bezos will stay quiet on any controversy. Just this month, in a report published almost exactly four years after Bezos’ “Amazon is not a dystopia” declaration, the New York Times uncovered almost a dozen previously unreported deaths allegedly caused by Amazon’s decentralized delivery network. Rather than defend itself out loud, Amazon has kept quiet while repeating the same argument in the courts: Those delivery people aren’t Amazon workers at all, and thus Amazon is not liable.

Amazon, like every major tech giant, has a key role in the dystopia of surveillance capitalism — the monopoly-like market share of Amazon Web Services, and Amazon’s involvement in increasingly ubiquitous facial recognition software, represent their own deeply dystopian trends. But the most visible dystopia Amazon creates, for all to see, is dystopia in the workplace.

In many ways, Amazon is the single company that best explains the appeal of an Andrew Yang figure to a certain slice of economically alienated young voters. When speaking near Amazon’s HQ in Seattle, Yang explicitly talked about the surveillance of Amazon workers, and questioned how reliable those jobs really are in any case:

“All the Amazon employees [here] are like, ‘Oh shit, is Jeff watching me right now?’… [Amazon will] open up a fulfillment warehouse that employs, let’s call it 20,000 people. How many retail workers worked at the malls that went out of business because of Amazon? [The] greatest thing would be if Jeff Bezos just stood up one day and said, ‘Hey, the truth is we are one of the primary organizations automating away millions of American jobs.’ […] I have friends who work at Amazon and they say point-blank that ‘we are told we are going to be trying to get rid of our own jobs.’”

You can flat-out disagree with Yang’s proposed solutions, but a lot of his appeal stems from the fact that he’s diagnosing a problem that broad swaths of people don’t feel is being talked about. Yang validates his supporters’ concerns that they are, in fact, living in a dystopia of the corporate overlord variety.

In the movie Elysium, most work is done in warehouses, under constant surveillance, with workers creating the very automation systems that surveil and punish them. The movie takes place in a company-town-like setting, with a rigid class system and no such thing as social mobility. Meanwhile, the ruling class in Elysium lives in space, having left everyone else behind to work on Earth, a planet now fully ravaged by climate change.

That might sound particularly far-fetched, but given Bezos’ explicit intention to colonize space because “we are in the process of destroying this planet,” it suddenly doesn’t feel so off the mark. And in an era where governors and mayors openly genuflect to Amazon, preemptively giving up vast swaths of democratic power for the mere possibility that Amazon might host an office building there, it’s hard not to feel like we’re already in an Elysium-flavored dystopia.

Amazon has its dystopia picked out, flavor and all. But what happens when the biggest social network in the world can’t decide which dystopia it wants to be when it grows up?

Pick a dystopia — any dystopia!: starring Facebook

“Understanding who you serve is always a very important problem, and it only gets harder the more people that you serve.”

—Mark Zuckerberg, 2014 interview with the New York Times.

Ready Player One is one of the more popular recent dystopian novels.

The bleak future it depicts is relatively straightforward: In the face of economic and ecological collapse, the vast majority of human interaction and commercial activity happens over a shared virtual reality space called the OASIS.

In the OASIS, the downtrodden masses compete in enormous multiplayer video games, hoping to win enough prizes and gain sufficient corporate sponsorship to scrape out a decent existence. Imagine a version of The Matrix where people choose to constantly log into unreality because actual reality has gotten so unbearably terrible, electing to let the real world waste away. Horrific.

Ready Player One is also the book that Oculus founder and former Facebook employee Palmer Luckey used to give new hires working on virtual reality, to get them “excited” about the “potential” of their work.

Sound beyond parody? In so many ways, Facebook is unique among the tech giants: It’s not hiding the specter of dystopia. It’s amplifying dystopia.

It’s hard to pick a popular dystopia Facebook isn’t invested in.

Surveillance capitalism? Google invented it, but Facebook has taken it to a whole new level with its social and emotional contagion experiments and relentless tracking of even nonusers.

1984? Sure, Facebook says, quietly patenting technology that lets your phone record you without warning.

Brave New World? Lest we forget, Facebook literally experimented with making depression contagious in 2014.

28 Days Later, or any of the various other mass-violence-as-disease horror movies like The Happening? Facebook has been used to spread mass genocidal panics far more terrifying than any apocalyptic Hollywood film.

What about the seemingly way-out-there dystopias — something like THX-1138, or a particularly gnarly Black Mirror episode where a brain can have its thoughts directly read, or even electronically implanted? It won’t comfort you to know that Facebook just acquired CTRL-Labs, which is developing a wearable brain-computer interface, raising questions about literal thought rewriting, brain hacking, and psychological “discontinuity.”

Roger McNamee, an early advisor to Zuckerberg and arguably Facebook’s most important early investor, has become blunt about it: Facebook has become a dystopia. It’s up to the rest of us to catch up.

We spent the 2010s on dystopia—let’s spend the 2020s on utopia instead

“Plan for the worst, hope for the best, and maybe wind up somewhere in the middle.” —Bright Eyes, “Loose Leaves”

People generally seem to think dystopias are possible, but utopias are not. No one ridicules you for conceiving of a dystopia.

I think part of that is because it gives us an easy out. Dystopias paralyze us. They overwhelm. They make us feel small and powerless. Envisioning dystopia is like getting married anticipating the divorce: all we can do is make sure it’s amicable.

Is there room for a utopian counterweight? There’s not only room; there’s an urgent need, if we want to look toward the 22nd century with hope rather than despondency. We cannot avert or undo dystopias without believing in their counterparts.

But we need to make the utopian alternative feel real, accessible, and achievable. We need to be rooting not for the lesser of two evils, but for something actually good.

Dystopias — real, about-to-unfold dystopias — have been averted before. The threat of nuclear apocalypse during the Cold War. The shrinking hole in the ozone layer (which is both distinct from, and has lessons to teach us about, the climate crisis). We didn’t land in utopia, but it was only by hitching our wagons to a utopian vision that we averted the worst.

In 2017, cultural historian Jill Lepore penned a kind of goodbye letter to dystopian fiction, calling for a renewal of utopian imagination. “Dystopia,” she lamented, “used to be a fiction of resistance; it’s become a fiction of submission.” Dystopian narratives once served as stark warnings of what might be in store for us if we do nothing, spurring us on to devise a brighter future. Today, dystopian fiction is so prevalent and comes in so many unsavory flavors that our civic imaginations are understandably confined to identifying the one we deem most likely to happen, and to coming to terms with it.

But we don’t have to.

A new decade is on the way. Let’s spend the 2020s exercising our utopian imaginations — the muscles we use to envision dystopia are now all too well developed, and a body that only exercises one set of muscles quickly grows off-balance.

Dystopias disempower. We are tiny, inconsequential — how could we do anything about them? Utopias, on the other hand, are rhetorical devices calling upon us to build. They invite our participation. Because a utopia where we don’t matter is a contradiction in terms.

Let’s envision a world where those creating algorithms are thinking not only about their reach, but also about their impact. A world in which we are not the carcass left behind by surveillance capitalism. A world in which calling for ethical norms and standards is in itself a utopian act.

Let’s spend the next decade fighting for what we actually want: A world in which the powerful few are held to a higher standard; an industry in which ethics aren’t an afterthought, and the phrase “unintended consequences” doesn’t absolve actors from the fallout of their very deliberate acts.

Let’s actualize the utopia which, ironically enough, the tech giants themselves so enthusiastically promised us when they set out to change the world.

Let’s spend this next decade asking for what we actually want.

Zuck’s New Scam

Don’t buy into Facebook’s pivot to privacy

By Lizzie O’Shea

Source: The Baffler

THE RHETORIC AT FACEBOOK—the largest social media platform in the world—is changing. In 2010, Mark Zuckerberg claimed that privacy was no longer a “social norm.” Today, the same man is asserting that “the future is private.” This was the buzz phrase at the company’s developer conference last month, F8, as the platform reoriented away from the newsfeed toward private chats, groups, and stories. Has Mark Zuckerberg been occupied by a parasitic fungus and become a zombie?

There is something stunning about this cultural shift in a corporation that holds such significant global power. It shows that when people speak up and agitate around privacy and data mining it can have a material effect. The idea that people don’t care about privacy, that they are willing to give it all away for the convenience of free services, has been debunked. Facebook specifically has been unable to ignore the waves of criticism it has experienced of late. The capitalist behemoths of the digital age often seem untouchable, and it is easy to forget that they operate in a social context. Like any powerful actor in society, Facebook is subject to the influence of organized people who will not shut up.

Nonetheless, it’s also important to be wary: savvy marketing is not the same as progress. The company is still fundamentally motivated by growth and profit. The presentations at F8 were focused on getting people onto Facebook-owned apps (including Instagram and WhatsApp) and building a sticky web around this engagement so they need never leave. You will shortly be able to buy things, find a date, even apply for a job—all directly through Facebook. Throughout these processes, Facebook will be able to grow its library of behavioral surplus, and in doing so, continue to expand its core business as the most effective and sophisticated supplier of advertising space in human history.

This context can help us better understand Zuckerberg’s privacy turn. Lest you feel uncomfortable about living more of your digital life through Facebook, they want to remind you that they are sufficiently forward-thinking and benevolent to respect your privacy. As your adherence to the platform inches toward total, you can be sure the Facebook team is creating “a place where you are safe and supported.”

Well, sorry, Mark, your idea of privacy isn’t the same as mine. Privacy is too often framed in these discussions in highly superficial terms, as a matter of tinkering with the ways users engage with a platform — a question of consumer choice. Mark wants you to be able to easily exclude people from seeing your messages and stories — unless that person happens to be him. This vision of privacy doesn’t hold any power because it does not challenge the defining power framework for users of social media.

Privacy has necessarily become an expansive concept in the digital age, given the myriad ways in which technology occupies more of our personal spaces. To that end: the right to privacy includes the right to exist outside of the market. It is the right to enjoy spaces without feeling as though your presence is being used by marketers to predict your future.

In her recent book, The Age of Surveillance Capitalism, Shoshana Zuboff writes about how these platforms collect data and feed it into sophisticated algorithms, to be fabricated into prediction products that anticipate what you will do now, next, and later. Our participation becomes fodder for the “behavioral futures market,” and in turn, this has an influence on our psychology and sense of self. Private groups on Facebook might be slightly more comforting cyberspaces, but the essential topography remains the same. The fancy developer conferences and user products continue to be funded in a specific way. The logic of surveillance capitalism cannot be circumvented via a new company slogan. If anything, Zuckerberg’s privacy talk is an endorsement of this model. As Bernard Harcourt points out, “the watching works best not when it is internalized, but when it is absentmindedly forgotten.”

It need not be so: we can expropriate this moment. A better understanding of privacy will not be limited to design concepts generated by highly profitable social media platforms. It needs to encompass how privacy is an essential component of our agency as human beings. Agency, to be explored and expressed fully, requires that we have space outside the influence of capitalism—to have freedom from market forces seeking to manipulate our unconscious. Privacy demands that human emotions like shame, joy, guilt, and desire be explored without someone seeking to profit from the process without us noticing.

The unconscious exists as “neither individual nor collective,” writes the philosopher Mladen Dolar, but rather “precisely between the two, in the very establishment of the ties between an individual (becoming a subject) and a group to which s/he would belong.” In other words, there is a dialectical process at play between the social forces that shape us and our own personality. The robber barons of the data era seek to monetize this space; the right to privacy is the theoretical foundation for resistance. We need to elevate privacy to its full rhetorical potential, recognizing that it is, paradoxically, both individual and collective, and that it is defined not by consumer choice but by agency. Privacy is therefore something that Facebook cannot offer, unless the company is prepared to change its entire business model.