Technology giants hold censorship meeting with US intelligence agencies

By Will Morrow

Source: WSWS.org

The New York Times and Washington Post this week published reports of a private meeting last month between eight major technology and social media corporations and the US intelligence agencies, to discuss their censorship operations in the lead-up to the November 2018 mid-term elections.

The meeting was convened at Facebook’s Menlo Park, California, headquarters on May 23, and was attended by representatives from Amazon, Apple, Google, Microsoft, Snap, Twitter and Oath, the Verizon subsidiary that owns Yahoo, along with agents from the FBI and the Department of Homeland Security.

The Post described the meeting, organized at the request of Facebook, as a “new overture by the technology industry to develop closer ties to law enforcement.” Both articles were based on anonymous statements by individuals who attended. One attendee told the Post that the conversation was a “back-and-forth, with both sides talking about how they were thinking about the problem and how we were looking for opportunities to work together.”

The meeting is yet another testament to the increasing integration of the technology giants with the US military/intelligence apparatus. These companies, which provide a growing share of the technical infrastructure for the repressive apparatus of the state, increasingly see the censorship of left-wing, anti-war, and progressive viewpoints as an integral part of their business strategy.

Amazon, Microsoft and Google are competing to secure control over a $10 billion project to host the Pentagon’s Cloud infrastructure, a position that will literally mean hosting the communications between military units engaged in battle. Employees at the three companies have also written letters in recent months denouncing their provision of artificial intelligence technology to improve drone targeting (Google), facial recognition technology used by police agencies against civilians (Amazon), and services supporting the operations of Immigration and Customs Enforcement (Microsoft).

The Times and the Post are the main media voices for the campaign by the Democratic Party and intelligence agencies for Internet censorship, under the guise of opposing the spread of “misinformation” by the Russian government. This McCarthyite campaign is based on the totally unsubstantiated allegation that Russian “fake news” led to popular disillusionment with Hillary Clinton in the 2016 vote, and the subsequent election of Donald Trump. The newspapers’ synchronized reports therefore present last month’s meeting as aimed at preventing Russian interference in the mid-term elections.

But the real target of the censorship campaign is popular access to left-wing news sources not controlled by the corporate media, and the proliferation of oppositional social media content, such as videos of police killings, mass roundups of immigrants, military interventions, protests and exposures of corporate malfeasance and government criminality.

Since the beginning of the year, Facebook has rolled out a series of changes to its News Feed, including demoting political content in favor of so-called “personal moments,” and prioritizing content from so-called “trustworthy sources”—in reality pro-establishment propaganda outlets—including the Times and the Wall Street Journal. The social media giant has also changed its algorithms to reduce the spread of “viral videos,” which CEO Mark Zuckerberg declared are “not good for people’s well-being and society.”

Last Thursday, Facebook published a post by its Head of News Integrity, Tessa Lyons, announcing a further expansion of these measures, including the introduction of “fact-checking” for videos and photos. The post also stated that Facebook is introducing “machine learning to help identify and demote foreign Pages that are likely to spread financially-motivated hoaxes to people in other countries.” These measures will work alongside Facebook’s army of “fact checkers”—i.e., censors—many of them former security and intelligence agents, who are among the 20,000 people employed in its “security” and “moderation” departments.

The “demotion” of what Facebook calls “false news” was codified in “community guidelines” published by the company in April. The guidelines state that because the suppression of “false news” is a “sensitive issue,” the company does not openly remove news stories, which would be easily detected by publishers and their followers, but does the same thing secretly: “significantly reduc[ing] its distribution by showing it lower on the News Feed.” (See: “Facebook codifies its censorship regime”)

Lyons repeated this line of argument in an interview with PBS’ Miles O’Brien on May 16. Admitting that “censoring and fully removing information unless it violates our community standards is not the expectation from our community,” Lyons explained that instead “we work to reduce the damage it can do” by restricting its proliferation. The Washington Post reported yesterday that while speaking at the International Fact Checkers Network conference last week, Lyons “told attendees that … [Facebook] will soon use machine learning to predict pages that are more likely to share misinformation.”
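Facebook has not published the internals of these systems, but the mechanism described above, a learned misinformation-likelihood score that demotes content in ranking rather than deleting it, can be illustrated with a minimal sketch. Everything below is hypothetical: the features, labels, threshold and demotion factor are invented for illustration, and this is not Facebook’s actual pipeline, only a toy example of how “predict, then demote” differs from outright removal.

```python
from sklearn.linear_model import LogisticRegression
import numpy as np

# Hypothetical training data: one row of page-level features per Page
# (e.g. share velocity, ratio of disputed links, page age), with a label
# saying whether fact checkers previously flagged the Page.
X_train = np.array([
    [0.9, 0.8, 0.1],   # fast-spreading, many disputed links, new page
    [0.2, 0.1, 0.9],   # slow, few disputed links, old page
    [0.8, 0.7, 0.2],
    [0.1, 0.0, 0.8],
])
y_train = np.array([1, 0, 1, 0])  # 1 = previously flagged

model = LogisticRegression()
model.fit(X_train, y_train)

def feed_score(base_relevance: float, page_features) -> float:
    """Demote rather than remove: scale the item's ranking score down
    when the predicted misinformation likelihood is high."""
    p_misinfo = model.predict_proba([page_features])[0][1]
    demotion = 0.8 if p_misinfo > 0.5 else 0.0  # hypothetical demotion factor
    return base_relevance * (1.0 - demotion)

# The post is never deleted; it simply ranks lower in the News Feed.
print(feed_score(1.0, [0.85, 0.75, 0.15]))
```

The point of the sketch is the design choice the articles describe: the item remains on the platform, so publishers see no removal notice, while its distribution quietly collapses.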

With the official ending of net neutrality this month, the financial oligarchy that controls both the search and social media monopolies and the internet service providers has further tightened its grip on freedom of expression online, with ISPs given the prerogative to block and throttle internet content at will.

The expansion of internet censorship takes place amidst mounting pressure on WikiLeaks journalist Julian Assange, who has been effectively imprisoned in the Ecuadorian embassy in London since 2012, where he was forced to take refuge to avoid being extradited to the US and charged for publishing evidence of US government crimes. The persecution of Assange for the “crime” of publishing the truth is aimed at intimidating whistleblowers and honest journalists all around the world.

Google, which attended last month’s meeting with the FBI and Department of Homeland Security, has altered its search engine algorithms to censor left-wing and anti-war websites, including the World Socialist Web Site, whose Google search traffic fell by three quarters following changes to the search algorithm in April 2017. There are indications that Google has recently intensified its censorship of the World Socialist Web Site, with search impressions falling by as much as one third over the past month.

In August 2017, the World Socialist Web Site published an open letter to Google demanding that it end its censorship of the internet, declaring, “Censorship on this scale is political blacklisting. The obvious intent of Google’s censorship algorithm is to block news that your company does not want reported and to suppress opinions with which you do not agree.”

We urge all readers of the World Socialist Web Site seeking to defend the freedom of expression online to contact us and join the struggle against internet censorship.

Amazon, Microsoft and Google compete for Pentagon Cloud warfighter project

By Will Morrow

Source: WSWS.org

Amazon, Microsoft and Google are competing to secure a multi-billion-dollar Department of Defense contract to build and oversee the US military’s Cloud computing infrastructure, which will be used to control every aspect of the Pentagon’s global operations.

The Joint Enterprise Defense Infrastructure (JEDI) project will consolidate the large number of separate data centers currently run by the Pentagon into a centralized Cloud network administered by one of the technology giants. The contract is reported to be worth up to $10 billion over the next decade, potentially making it the Department of Defense’s single largest acquisition ever. The winning bidder is expected to be announced in September.

The company that secures the contract will be completely integrated into all of the US military’s fighting operations. According to Nextgov, Brigadier General David Krumm, the deputy director for requirements for the Joint Chiefs of Staff, described JEDI as a “global fabric” that will connect the headquarters with active combat forces, from an F-35 fighter jet pilot to a Pacific submarine captain to an Army platoon leader. “This is going to make a difference like few things have to get information to our warfighters,” Krumm said.

The Department of Defense hosted an industry conference on the project on March 7 in Arlington, Virginia, attended by technology companies, including representatives from Amazon and Microsoft. Krumm told the audience that JEDI would “change the way that this nation, its soldiers, its sailors, its Marines and its airmen fight and win our nation’s wars.”

The Cloud network will be required to hold data at all security classification levels, meaning officials with top secret security clearances will be working at the facilities.

On May 16, Bloomberg Government published images of advertisements that Amazon and Microsoft had placed on electronic billboards in the Pentagon’s Metro station, promoting how their technology could support the military in battle.

Microsoft’s ad featured an image of a special operations soldier and the caption, “The cloud gets actionable insight while the action is still unfolding.” An Amazon Web Services ad included the statement, “Time to launch: months minutes”—that is, cut from months to minutes—to underscore that the cloud infrastructure will help coordinate missile launches.

The JEDI program was first announced in September 2017, a month after Trump’s Defense Secretary James Mattis carried out a tour of Silicon Valley boardrooms. Mattis met with Google co-founder Sergey Brin and CEO Sundar Pichai, as well as executives at Facebook and Amazon, to discuss further integrating their technologies into the US armed forces.

The Defense One website reported on April 12 that “Brin in particular was eager to showcase how much Google was learning every day about AI and cloud implementation,” citing an anonymous senior Defense Department official. Mattis “returned to Washington, D.C., convinced that the US military had to move much of its data to a commercial cloud provider—not just to manage files, email, and paperwork but to push mission-critical information to front-line operators,” the article noted.

Significantly, the article notes that while Amazon and Microsoft have publicly expressed their desire to secure the contract, Google has “kept its own interest … out of the press. Company leaders have even hidden the pursuit from its own workers, according to Google employees Defense One reached.”

Google’s integration into the military’s operations has triggered widespread opposition among its employees. A letter published in April, addressed to Google CEO Pichai and signed by more than 3,000 Google workers, demanded that the company cease its collaboration with the Pentagon.

The letter was a response to Google’s admission in March that it is providing the military with artificial intelligence software, under what is called Project Maven, that can be used to detect objects in video surveillance footage. This technology can be directly used to develop automatic targeting for US drone murder operations in the Middle East and North Africa.

The Defense One article stated that “Maven is more than either Google or the Defense Department has admitted publicly, according to the senior defense official who called it a ‘pathfinder’ project, a starting point for future collaboration between the Pentagon and Google.”

Media reports indicate that the company most likely to secure the JEDI contract is Amazon. The company is considered to have an edge because it is already operating a Cloud network for the US intelligence agencies, under a $600 million contract reached in 2013.

Since September 2016, Amazon has been providing facial recognition technology called Rekognition to police forces and private intelligence contractors. Rekognition is able to process video footage from police body cameras, surveillance cameras and CCTV to “identify persons of interest against a collection of millions of faces in real-time, enabling timely and accurate crime prevention” (see: “Amazon providing facial recognition technology to police agencies for mass surveillance”).
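Amazon’s Rekognition face-search API is publicly documented, and the sketch below shows, under stated assumptions, how a single frame from a body camera or CCTV feed might be matched against a previously indexed collection of faces. The collection name, file name, region and threshold are hypothetical, and this is not drawn from any actual police deployment; it is only a minimal illustration of the capability the quoted marketing material describes.

```python
import boto3

# Minimal sketch: match one video frame against a pre-built face collection
# using the publicly documented Rekognition API. Requires AWS credentials;
# the collection name, file name and threshold below are hypothetical.
rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("frame_from_body_camera.jpg", "rb") as image_file:
    frame_bytes = image_file.read()

response = rekognition.search_faces_by_image(
    CollectionId="persons_of_interest",  # a previously indexed face collection
    Image={"Bytes": frame_bytes},
    FaceMatchThreshold=90,               # minimum similarity, in percent
    MaxFaces=5,
)

for match in response["FaceMatches"]:
    face = match["Face"]
    print(f"Matched face {face['FaceId']} "
          f"(similarity {match['Similarity']:.1f}%)")
```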

The distinction between the technology corporations and the state has become almost entirely blurred as they become ever-more integrated into the military-intelligence apparatus. This takes place as Washington is working to outpace its major geostrategic rivals, above all China and Russia, in the arena of advanced warfare technology and artificial intelligence, in preparation for a catastrophic war that would inevitably involve the use of nuclear weapons.

As they integrate themselves into the American military build-up, the technology giants are collaborating in mass political censorship of left-wing and anti-war websites, above all the World Socialist Web Site, in order to suppress mass opposition to war. Since April 2017, Google has altered its search result algorithms in order to censor the WSWS and other left-wing and anti-war websites.

 

The author also recommends:

Google, drone murder and the military-intelligence-censorship complex
[19 May 2018]

Identity Theft and the Body’s Disappearance

By Robert Bohm

Source: The Hampton Institute

“What sphinx of cement and aluminum bashed open their skulls and ate up their brains and imagination?”

– Allen Ginsberg, from his poem “Howl”

Identity theft, at least the most familiar type, is possible because today the individual exists not merely as flesh and blood, but as flesh and blood spliced with bank account numbers, user names, passwords, credit card chips, etc. These added parts aren’t secondary to the individual’s overall identity, they’re central to it. Sometimes they’re all there is of it, as in many banking and purchasing transactions. In such instances, the data we’ve supplied to the relevant institutions doesn’t merely represent us, it is us. Our bodies alone can’t complete transactions without the account numbers, user names, passwords, credit card numbers, and ID cards which have become our identity’s essence. Without them, in many ways, we don’t exist.

In a worst case scenario, if someone gets hold of this private data, they can become us by possessing the data that is us. Following this, who or what we are is no longer a question. We don’t exist, except in the form of a stolen dataset now under someone else’s control.

In such a case, an unknown proxy has eliminated us and become who we once were.

Although problematic, the above form of identity theft is relatively minor. A worse form is one we all know about, yet chronically underestimate because we think of ourselves as too canny to be conned. Nonetheless, this other form of identity theft frames and limits everything we do. In the process, it fleeces us of the fullness of our identities and subjects our lives to a type of remote control. This remote control consists of the combined influence on us, from childhood onward, of society’s major institutions and dominant activities, which seed us with a variety of parameters for how to acceptably navigate society and its particular challenges.

This process is usually called “socialization.” However, it’s better seen as a sorting procedure in which society sifts us through a citizenship sieve in order to eliminate supposed defects, thereby guaranteeing that, despite each of us possessing unique characteristics, we share an underlying uniformity. Ultimately, this process is a kind of identity eugenics which strives to purify the population by eliminating or weakening troublesome qualities – e.g., an overly questioning attitude, chronic boundary-testing, a confrontational stance toward authority, a fierce protectiveness toward whatever space the body inhabits, etc. Such traits are frowned upon because they’re seen by the status quo as a likely threat to society’s stability.

Such indoctrination is much subtler yet, in many ways, more pervasive than outright propaganda. Its theater of operations is everywhere, taking place on many fronts. Public and private education, advertising, mass culture, government institutions, the prevailing ideas of how to correct socioeconomic wrongs (this is a “good” form of protest, this a “bad” one), the methods by which various slangs are robbed of their transgressive nature through absorption into the mainstream, the social production of substitute behaviors for nonconformity and rebellion – each of these phenomena and others play a role in generating the so-called “acceptable citizen,” a trimmed-down version (i.e., one with reduced potential) of her or his original personality.

Make no mistake about it, this trimming of the personality is a form of identity theft. It is, in fact, the ultimate form. Take as an example the African slave in the U.S.: abducted from her or his homeland, forbidden from learning to read or write, denied legal standing in the courts, given no say over whether offspring would be sold to another owner or remain with them. The slave was robbed of her/his most essential identity: her or his status as a human being.

In his book The Souls of Black Folk, W.E.B. Du Bois described this theft in terms of how slavery reduces the slave to a person with “no true self-consciousness” – that is, with no stable knowledge of self, no clear sense of who she or he is in terms of culture, preceding generations, rituals for bringing to fruition one’s potential to create her or his own fate. As Du Bois correctly argued, this left the slave, and afterwards the freed Black, with a “longing to attain self-conscious manhood,” to know who she or he was, to see oneself through one’s own eyes and not through the eyes of one’s denigrators – e.g., white supremacists, confederate diehards, “good” people who nonetheless regarded Blacks as “lesser,” etc. Du Bois understood that from such people’s perspectives, Blacks possessed only one identity: the identity of being owned, of possessing no value other than what their owner could extract from them. Without an owner to extract this value, the slave was either identity-less or possessed an identity so slimmed and emaciated as to be a nothing.

The point here isn’t that today socialization enslaves the population in the same way as U.S. slavery once enslaved Blacks, but rather that identity theft is, psychologically and culturally speaking, a key aspect of disempowering people and has been for centuries. Today, because of mass culture and new technologies, the methods of accomplishing it are far more sophisticated than during other eras.

How disempowerment/identity theft occurs in contemporary society is inseparable from capitalism’s current state of development. We long ago passed the moment (after the introduction of assembly line production in the early 20th century) when modern advertising started its trek toward becoming one of the most powerful socialization forces in the U.S. As such, it not only convinces consumers to purchase individual products but, even more importantly, sells us on the idea that buying in general and all the time, no matter what we purchase, is proof of our value as persons.

To accomplish this end, modern advertising was molded by its creators into a type of PSYOP designed for destabilizing individuals’ adherence to old saws like “a penny saved is a penny earned” and “without frugality none can be rich, and with it very few would be poor.” Once this happened, the United States’ days of puritan buying restraint were over. However, modern advertising was never solely about undermining personal fiscal restraint. It was also about manipulating feelings of personal failure – e.g., dissatisfaction with lifestyle and income, a sense of being trapped, fear of being physically unappealing, etc. – and turning them not into motives for self-scrutiny or social critiques, but into a spur for commodity obsession. This wasn’t simply about owning the product or products, but an obsessive hope that buying one or more commodities would trigger relief from momentary or long-term anxiety and frustration related to one’s life-woes: job, marriage, lack of money, illness, etc.

Helen Woodward, a leading advertising copywriter of the early decades of the 20th century, described how this was done in her book Through Many Windows, published in 1926. One example she used focused on women as consumers:

The restless desire for a change in fashions is a healthy outlet. It is normal to want something different, something new, even if many women spend too much time and too much money that way. Change is the most beneficent medicine in the world to most people. And to those who cannot change their whole lives or occupations, even a new line in a dress is often a relief. The woman who is tired of her husband or her home or a job feels some lifting of the weight of life from seeing a straight line change into a bouffant, or a gray pass into a beige. Most people do not have the courage or understanding to make deeper changes.

Woodward’s statement reveals not only the advertising industry’s PSYOP characteristic of manipulating people’s frustrations in order to lure them into making purchases, but also the industry’s view of the people to whom it speaks through its ads. As indicated by Woodward’s words, this view is one of condescension, of viewing most consumers as unable to bring about real socioeconomic change because they lack the abilities – “the courage or understanding” – necessary to do so. Consequently, their main purpose in life, it is implied, is to exist as a consumer mass constantly gorging on capitalism’s products in order to keep the system running smoothly. In doing this, Woodward writes, buyers find in the act of making purchases “a healthy outlet” for troubled emotions spawned in other parts of their lives.

Such advertising philosophies in the early 20th century opened a door for the industry, one that would never again be closed. Through that door (or window), one could glimpse the future: a world with an ever greater supply of commodities to sell and an advertising industry ready to make sure people bought them. To guarantee this, advertisers set about creating additional techniques for reshaping public consciousness into one persuaded that owning as many of those commodities as possible was an existential exercise of defining who an individual was.

In his book The Consumer Society, philosopher Jean Baudrillard deals with precisely this process. He writes that such a society is driven by:

the contradiction between a virtually unlimited productivity and the need to dispose of the product. It becomes vital for the system at this stage to control not only the mechanism of production, but also consumer demand.

“To control … consumer demand.” This is the key phrase here. Capitalist forces not only wanted to own and control the means of production in factories, they also wanted to control consumers in such a way that they had no choice but to buy, then buy more. In other words, capitalism was in quest of a strategy engineered to make us synch our minds to a capitalism operating in overdrive (“virtually unlimited” production).

The way this occurs, Baudrillard argues, is by capitalism transforming (through advertising) the process of buying an individual product from merely being a response to a “this looks good” or “that would be useful around the house” attitude to something more in line with what psychologists call “ego integration.” It refers to that part of human development in which an individual’s various personality characteristics (viewpoints, goals, physical desires, etc.) are organized into a balanced whole. At that point, what advertising basically did for capitalism was develop a reconfigured ego integration process in which the personality is reorganized to view its stability as dependent on its life as a consumer.

Advertisers pulled this off because the commodity, in an age of commodity profusion, isn’t simply a commodity but is also an indicator or sign referring to a particular set of values or behavior, i.e. a particular type of person. It is this which is purchased: the meaning, or constellation of meanings, which the commodity indicates.

In this way, the commodity, once bought, becomes a signal to others that “I, the owner, am this type of person.” Buy an Old Hickory J143 baseball bat and those in the know grasp that you’re headed for the pros. Sling on some Pandora bling and all the guys’ eyes are on you as you hip-swing into the Groove Lounge. Even the NY Times is hip to what’s up. If you want to be a true Antifa activist, the newspaper informed its readers on Nov. 29, 2017, this is the attire you must wear:

Black work or military boots, pants, balaclavas or ski masks, gloves and jackets, North Face brand or otherwise. Gas masks, goggles and shields may be added as accessories, but the basics have stayed the same since the look’s inception.

After you dress up, it’s not even necessary to attend a protest and fight fascists to be full-blown Antifa. You’re a walking billboard (or signification) proclaiming your values everywhere. Dress the part and you are the part.

Let’s return to Baudrillard, though. In The System of Objects, another of his books, he writes about how the issue of signification, and the method by which individuals purchase particular commodities in order to refine their identity for public consumption, becomes the universal mass experience:

To become an object of consumption, an object must first become a sign. That is to say: it must become external, in a sense, to a relationship that it now merely signifies … Only in this context can it be ‘personalized’, can it become part of a series, and so on; only thus can it be consumed, never in its materiality, but in its difference.

This “difference” is what the product signifies. That is, the product isn’t just a product anymore. It isn’t only its function. It has transitioned into an indicator of a unique personality trait, or of being a member of a certain lifestyle grouping or social class, or of subscribing to a particular political persuasion, Republican, anarchist, whatever. In this way, choosing the commodities to purchase is essential to one’s self-construction, one’s effort to make sure the world knows exactly who one is.

The individual produced by this citizen-forming process is a reduced one, the weight of her/his full personality pared down by cutting off the unnecessary weight of potentials and inclinations perceived as “not a good fit” for a citizen at this stage of capitalism. Such a citizen, however, isn’t an automaton. She or he makes choices, indulges her or his unique appetites, even periodically rebels against bureaucratic inefficiency or a social inequity perceived to be particularly stupid or unfair. Yet after a few days or few months of this activity, this momentary rebel fades back into the woodwork, satisfied by their sincere but token challenge to the mainstream. The woodwork into which they fade is, of course, their home or another favorite location (a lover’s apartment, a bar, a ski resort cabin, a pool hall, etc.).

From this point on, or at least for the foreseeable future, such a person isn’t inclined to look at the world with a sharp political eye, except possibly within the confines of their private life. In this way, they turn whatever criticism of the mainstream they may have into a petty gripe, with no intention of joining with others to fight for any specific change regarding the political, socioeconomic or cultural phenomenon against which the complaint has been lodged. Instead, all the complainer wants is congratulations from her or his listeners about how passionate, on-target, and right the complaint was.

This is the sieve process, identity eugenics, in action. Far more subtle and elastic than previous methods of social control, it narrows what we believe to be our options and successfully maneuvers us into a world where advertising shapes us more than schools do. In this mode, it teaches us that life’s choices aren’t so much about justice or morality, but more about what choosing between commodities is like: which is more useful to me in my private life, which one better defines me as a person, which one makes me look cooler, chicer, brainier, hunkier, more activist to those I know.

It is in this context that a young, new, “acceptable” citizen enters society as a walking irony. Raised to be a cog in a machine in a time of capitalistic excess, the individual arrives on the scene as a player of no consequence in a game in which she or he has been deluded that they’re the game’s star. But far from being a star, this person, weakened beyond repair by the surrender of too much potential, is so without ability that she or he has no impact whatsoever on the game. Consequently, this individual is, for all practical purposes, an absence. The ultimate invisible person, a nothing in the midst of players who don’t take note of this absence at all. And why should they? The full-of-potential individual who eventually morphed into this absence is long gone, remembered by no one, except as a fading image of what once was.

This process of reducing a potentially creative person into a virtual non-presence is a form of ideological anorexia. Once afflicted, an individual refuses nourishment until they’re nothing but skin and bones. However, the “weight” they’ve lost doesn’t consist of actual pounds. Instead, it involves a loss of the psychological heftiness and mental bulk necessary to be a full human being.

One can’t lose more weight than that.

Human life as we once knew it is gone, replaced by the ritual of endless purchasing. This is existence in what used to be called “the belly of the beast.” Our role in life has become to nourish capitalism by being at its disposal, by giving of ourselves. Such giving frequently entails self-mutilation: the debt, credit card and otherwise, that bludgeons to death the dreams of many individuals and families.

This quasi-religious self-sacrifice replicates, in another form, the Dark Ages practice of fanatical monks and other flagellants who lashed themselves with whips made from copper wires, ripping their flesh and bleeding until they descended into a state of religious hysteria. The more we give of ourselves in this way, the thinner and more weightless we become. Meanwhile, the god whom Allen Ginsberg called Moloch grows more obese day after day, its belly filled with:

Robot apartments! invisible suburbs! skeleton treasuries! blind capitals! demonic industries! spectral nations! invincible madhouses! granite cocks! monstrous bombs!…

Dreams! adorations! illuminations! religions! the whole boatload of sensitive bullshit!

What capitalism wants from us, of course, isn’t merely self-sacrifice, it’s surrender. Hunger for life is viewed negatively by the status quo because it nourishes the self, making it stronger and more alert and, therefore, better prepared to assert itself. The fact that such an empowered self is more there (possesses more of a presence) than its undersized counterpart makes the healthier self unacceptable to the powers that be. This is because there-ness is no longer an option in our national life. Only non-there-ness is. If you’re not a political anorexic, you’re on the wrong side.

Wherever we look, we see it. Invisibility, or at least as much of it as possible, is the individual’s goal. It’s the new real. Fashion reveals this as well as anything. It does so by disseminating an ideal of beauty that fetishizes the body’s anorexic wilting away. Not the body’s presence but its fade to disappearance is the source of its allure. The ultimate fashion model hovers fragilely on the brink of absence in order not to distract from the only thing which counts in capitalism: the commodity to be sold – e.g., the boutique bomber jacket, the shirt, the pantsuit, the earrings, the shawl, the stilettos, the iPhone, the Ferrari, and, possibly most of all, the political passivity intrinsic to spending our lives acquiring things in order to prove to others and ourselves that we’ve discovered in these things something more useful than Socrates’ goal of knowing thyself or Emma Goldman’s warning, “The most unpardonable sin in society is independence of thought.”

What is true on the fashion runway is also true in politics. Just as the best model is one thin enough to fade into non-presence, so our democracy, supposedly ruled “by and for the people,” has thinned down so much that “the people” can’t even be seen (except as stage props), let alone get their hands on democracy except in token ways. No matter how often we the people are praised rhetorically by politicians, we aren’t allowed as a group to get in the way of the capitalist system’s freedom to do whatever it wants in order to sustain commodity worship and guarantee capital’s right to permanent rule. If the military-industrial complex needs another war in order to pump out more profits, then so be it. We have no say in the matter. The identity theft built into society’s structure makes sure of this. It’s stripped us of our “weight” – our creativity, our willingness to take political risks, our capacity to choose action over posturing. After this forced weight loss, what’s left of us is a mess. Too philosophically and psychologically anemic to successfully challenge our leaders’ decisions, we, for all practical purposes, disappear.

As a reward for our passivity, we’re permitted a certain range of freedom – as long as “a certain range” is defined as “varieties of buying” and doesn’t include behavior that might result in the population’s attainment of greater political power.

So it continues: the only good citizen is the absent citizen. Which is to say, a citizen who has dieted him or herself into a state of political anorexia – i.e., that level of mental weightlessness necessary for guaranteeing a person’s permanent self-exclusion from the machinery of power.

***

Our flesh no longer exists in the way it once did. A new evolutionary stage has arrived.

In this new stage, the flesh isn’t merely what it seems to be: flesh, pure and simple. Instead, it’s a hybrid. It’s what exists after the mind oversees its passage through the sieve of mass culture.

After this passage, what the flesh is now are the poses it adopts from studying movies, rappers, punk rockers, fashionistas of all kinds, reality TV stars, football hunks, whomever. It’s also what it wears, skinny jeans or loose-fitting chinos, short skirt or spandex, Hawaiian shirt or muscle tank top, pierced bellybutton, dope hiking boots, burgundy eyeliner. Here we come, marching, strolling, demon-eyed, innocent as Johnny Appleseed. Everybody’s snapping pics with their phones, selfies and shots of others (friends, strangers, the maimed, the hilarious, the so-called idiotic). The flesh’s pictures are everywhere. In movie ads, cosmetic ads, suppository ads, Viagra ads. This is the wave of the already-here but still-coming future. The actual flesh’s replacement by televised, printed, digitalized and Photoshopped images of it produces the ultimate self-bifurcation.

Increasingly cut off from any unmediated life of its own, the flesh now exists mostly as a natural resource for those (including ourselves) who need it for a project; to photograph it, dress it up, pose it in a certain way, put it on a diet, commodify/objectify it in any style ranging from traditional commodification to the latest avant-garde objectification.

All these stylings/makeovers, although advertised as a form of liberation for the flesh (a “freeing” of your flesh so you can be what you want to be), are in fact not that. Instead, they are part of the process of distancing ourselves from the flesh by always doing something to it rather than simply being it.

When we are it, we feel what the flesh feels, the pain, the joy, the satisfaction, the terror, the disgust, the hints of hope, a sense of irreparable loss, whatever.

When we objectify it, it is a mannequin, emotionless, a thing that uses up a certain amount of space. As such we can do what we want with it: decorate it, pull it apart, vent our frustrations on it, starve it, practice surgical cuts on it, put it to whatever use we like. It isn’t a person. It is separate from our personhood and we own it.

In fact we own all the world’s flesh.

We live, after all, in the American Empire, and the Empire owns everything. As the Empire’s citizens, we own everything it owns. Except for one thing: ourselves.

***

The flesh is both here and not here. Increasingly, it is more an object that we do things to – e.g., bulk it up, change its hair color, mass-kill it from a hotel window on the 32nd floor, view in a porno flick – than a presence in its own right (i.e., self-contained, a force to be reckoned with). In this sense, it is a growing absence, each day losing more of its self-determination and becoming more a thing lost than something that exists fully, on its own, in the here and now. Given this, the proper attitude to have toward the flesh is one of nostalgia.

Of course, the flesh hasn’t really disappeared. What has disappeared is what it once was, a meat-and-bones reality, a site of pleasure and injury. Now, however, it’s not so valuable in itself as it is in its role as a starting-off point for endless makeovers.

These makeover options are arrayed before the consumer everywhere: online, in big box stores, in niche markets and so on. Today, it is in these places, not at birth, that the flesh starts its trek toward maturation. It does this by offering itself up as a sacrifice to be used, as they see fit, by the fashion industry, the gym industry, the addiction-cure industry, the diet industry, the pharmaceutical industry, the education industry, etc. Each body in the nation reaches its fullest potential only when it becomes a testing site to be used by these industries as they explore more and better ways to establish themselves as indispensable to capitalism’s endless reproduction.

In the end, the flesh, the target of all this competition for its attention, has less of a life on its own than it does as the object of advertisers’ opinions about what can be done to improve it or to reconstruct it. Only to the extent that the flesh can transcend or reconstitute itself can it be said to be truly alive.

This last fact – about aliveness – represents the culmination of a process. This process pertains to the visualization and digitalization of everything and the consequent disappearance of everything behind a wall of signification.

A televised or computerized image, discussion, commentary, conjecture, etc., becomes the thing it meditates on, depicts or interprets. This happens by virtue of the fact that the thing itself (the real flesh behind the televised or computerized image, discussion, commentary, conjecture, etc.) has disappeared into the discussion or into the image of it presented on the computer or TV screen.

In the same way, an anorexic model (her/his flesh and blood presence) disappears into the fashions she or he displays for the public.

In each instance the thing (the flesh) now no longer exists except in other people’s meditations on it; it has become those other people’s meditations. The ultimate anorexic, it (the thing) has lost so much weight it’s no longer physically there except as an idea in someone else’s mind or in a series of binary codings inside computers.

This is the final victory of absence over there-ness, of the anorexic ideal over the idea of being fully human (i.e., “bulging with existence,” “fat with life”). The self has been successfully starved to the point of such a radical thinness that it can no longer stand up to a blade of grass, let alone make itself felt by the powers that be.

No Need To Wait – Dystopia Is Almost Upon Us

Source: TruePublica

Microsoft’s CEO has warned the technology industry against creating a dystopian future, the likes of which have been predicted by authors including George Orwell and Aldous Huxley. Satya Nadella kicked off the company’s 2017 Build conference with a keynote that was as unexpected as it was powerful. He told the developers in attendance that they have a huge responsibility, and that the choices they make could have enormous implications.

They won’t listen, of course. The collection of big data, along with its management, sale and distribution and the systems architecture to control it, is now worth exactly double global military defence expenditure. In fact, this year the big data industry overtook the world’s most valuable traded commodity – oil.

The truth is that the tech giants have already captured us all. We are already living in the beginnings of a truly dystopian world.

Leaving aside the endemic surveillance society our government has chosen on our behalf with no debate, political or otherwise, we already have proof of where things stand and where they are leading. With fingerprint scanning, facial recognition and various virtual wallets to pay for deliveries, some would say your identity is as good as stolen. If it isn’t, it soon will be. That’s because the hacking industry, already worth a mind-blowing $1 trillion annually, is expected to reach $2.1 trillion in just 14 months’ time.

The reality of not being able to take public transportation, hire a car, or buy a book or a coffee without providing full personal identification is almost upon us. Britain even had an intention to be completely cashless by 2025 – postponed only by the impact of Brexit.

Alexa, the Amazon home assistant, listens to everything said in the house. It is known to record conversations. Recently, police in Arkansas, USA, demanded that Amazon turn over information collected from a murder suspect’s Echo — the speaker that controls Alexa — because they already knew what information could be extracted from it.

32M is the first company in the US to offer its employees an implanted chip, allowing them “to make purchases in their break-room micro market, open doors, login to computers, use the copy machine.” 32M also confirmed what the chip could really do – telling employees to “use it as your passport, public transit and all purchasing opportunities.”

Various apps now locate people you may know, and your own location can be shared with others without your knowledge. We’ve known for years that governments and private corporations have access to this data, whether you like it or not.

Other countries are providing even scarier technologies. Hypebeast Magazine reports that Aadhaar is a 12-digit identity number issued to all Indian residents based on their biometric and demographic data. “This data must be linked to their bank account or else they’ll face the risk of losing access to their account. Folks have until the end of the year to do this, with phone numbers soon to be connected through the 12 digits by February. Failure to do so will deactivate the service.” The technology has the ability to refuse access to state-supplied services such as healthcare.

Our article “Insurance Industry Leads The Way in Social Credit Systems” also highlights what the fusion of technology and data is likely to end up doing for us. An astonishing 96 per cent of insurers think that ecosystems or applications made by autonomous organisations are having a major impact on the insurance industry. Social credit mechanisms are being developed, some already implemented, that will shape our future behaviour and affect us all, both individually and negatively.

The Chinese government plans to launch its Social Credit System in 2020. Already being piloted on 12 million of its citizens, the system aims to judge the trustworthiness – or otherwise – of its 1.3 billion residents. Something as innocuous as a person’s shopping habits becomes a measure of character. But the system not only investigates behaviour – it shapes it. It “nudges” citizens away from purchases and behaviours the government does not like. Friends are considered as well: an individual’s credit score falls if their friends are deemed untrustworthy. It’s not possible to imagine how far this will go in the end.
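The scoring rules of the Chinese system are not public. The toy model below is only a sketch of the mechanism described above, in which disfavoured purchases and low-scoring friends drag an individual’s score down; every category, weight and number is invented for illustration.

```python
# A toy illustration (all weights and categories invented) of a score that is
# "nudged" by disfavoured purchases and dragged down by low-scoring friends.
DISFAVOURED_PURCHASES = {"video_games": -5, "alcohol": -3}
FAVOURED_PURCHASES = {"baby_supplies": 4, "work_tools": 2}

def social_credit_score(base, purchases, friend_scores):
    score = base
    for item in purchases:
        score += DISFAVOURED_PURCHASES.get(item, 0)
        score += FAVOURED_PURCHASES.get(item, 0)
    # Friends are considered as well: a low average friend score pulls yours down.
    if friend_scores:
        avg_friends = sum(friend_scores) / len(friend_scores)
        score += 0.1 * (avg_friends - base)
    return score

# Hypothetical citizen: base score 600, two disfavoured purchases,
# three friends whose scores average below 600.
print(social_credit_score(600, ["video_games", "alcohol"], [550, 480, 610]))
```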

However, to get us all there, we need to be distracted from what is going on in the background. Some are already concerned.

 

Distraction – detaching us from truth and reality

The Guardian wrote an interesting piece recently which highlighted some of the concerns of those with expert insider knowledge of the tech industry. For instance, Justin Rosenstein, the former Google and Facebook engineer who helped build the ‘like’ button, is concerned. He believes there is a case for state regulation of smartphone technology because it is “psychologically manipulative advertising”, saying the moral impetus is comparable to taking action against fossil fuel or tobacco companies.

“If we only care about profit maximisation,” he says, “we will go rapidly into dystopia.” Rosenstein also makes the observation that after Brexit and the election of Trump, digital forces have completely upended the political system and, left unchecked, could render democracy as we know it obsolete.

Carole Cadwalladr’s recent exposé in the Observer/Guardian proved beyond doubt that democracy has already departed. Here we learn about a shadowy global operation involving big data and billionaires who influenced the result of the EU referendum. Britain’s future place in the world has been altered by technology.

Nir Eyal, 39, the author of Hooked: How to Build Habit-Forming Products, writes: “The technologies we use have turned into compulsions, if not full-fledged addictions.” Eyal continues: “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”.

Eyal feels the threat and protects his own family by cutting off the internet completely at a set time every day. “The idea is to remember that we are not powerless,” he said. “We are in control.”

The truth is we are no longer in control and have not been since we learned that our government was lying to us with the Snowden revelations back in 2013.

Tristan Harris, a 33-year-old former Google employee turned vocal critic of the tech industry, agrees about the lack of control. “All of us are jacked into this system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.” Harris insists that billions of people have little choice over whether they use these now ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.

Harris is a tech whistleblower. He is lifting the lid on the vast powers accumulated by technology companies and the ways they are abusing the influence they have at their fingertips – literally.

“A handful of people, working at a handful of technology companies, through their choices will steer what a billion people are thinking today.”

The techniques these companies use, such as social reciprocity and autoplay, are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year revealed that the company can identify when teenagers feel “worthless” or “insecure”. Harris adds that this is “a perfect model of what buttons you can push in a particular person”.

Chris Marcellino, 33, a former Apple engineer, is now in the final stages of retraining to be a neurosurgeon. He notes that these types of technologies can affect the same neurological pathways as gambling and drug use. “These are the same circuits that make people seek out food, comfort, heat, sex,” he says.

Roger McNamee, a venture capitalist who benefited from hugely profitable investments in Google and Facebook, has grown disenchanted with both of the tech giants. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.”

James Williams, an ex-Google strategist who built the metrics system for the company’s global search advertising business, says Google now has the “largest, most standardised and most centralised form of attentional control in human history”. “Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.

Williams also takes the view that if the attention economy erodes our ability to remember, to reason, to make decisions for ourselves – faculties that are essential to self-governance – what hope is there for democracy itself?

“The dynamics of the attention economy are structurally set up to undermine the human will,” he says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on. If Apple, Facebook, Google, Twitter, Instagram and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?”

“Will we be able to recognise it, if and when it happens?” Williams says. “And if we can’t, then how do we know it hasn’t happened already?”

 

The dystopian arrival

Within ten years, some are speculating that many of us will be wearing eye lenses. Coupled with social media, we’ll be able to identify strangers and work out that a particular individual in, say, a bar has low friend compatibility, and that the data shows you will likely not have a fruitful conversation. This idea barely scratches the surface of the information overload en route right now.

It is not at all foolish to think that in that same bar a patron is shouting at the bartender, who refuses to serve him another drink because the glass he was holding measured his blood-alcohol level through the sweat in his fingers. He’ll have to wait at least 45 minutes before he’ll be permitted to order another scotch. You might even think that is a good idea – it isn’t.

Google’s Quantum Artificial Intelligence Lab already works with other organisations associated with NASA. Google’s boss sits on a Pentagon advisory board, with links plugged directly into the surveillance architecture of the NSA in the USA and GCHQ in Britain. This world, where artificial intelligence makes its mark, as Williams mentioned earlier, will deliberately undermine the ability to think for yourself.

In the scenario of the eye lenses, you might even have the ability to command your eyewear to shut down. But when you do, suddenly you are confronted with an un-Googled world. It appears drab and colourless in comparison. The people before you are bland, washed out and unattractive. The art, plants, wall paint, lighting and decorations had all been shaped by your own preferences, and without the distortion field your wearable eyewear provided, the world appears as a grey, lifeless template.

You find it difficult to last without the assistance of your self-imposed augmented life, and accompanied by nervous laughter you switch it back on. The world you view through the prism of your computer eyewear has become your default setting. You know you have free will, but don’t feel like you need it. As Marcellino says, the same neurological pathways involved in gambling and drug use drive how you choose to see the world.

This type of technology will be available, and these types of scenarios will become real, sooner than you think.

Our governments, allied with the tech giants, are coercing us into a place of withering obedience with the use of 360-degree state surveillance. New technology, which is somehow seen as the road to liberty, contentment and prosperity, is really our future being shaped by a system that will destroy our civil liberties, crush our human rights and eventually ensnare and trap us all. This much is already being attempted in China and Japan with social credit mechanisms and pre-crime technology, which is a truly frightening prospect. Without debate or our knowledge, here in western democracies, these technologies are already in use.

 

THE MONOPOLIZATION OF AMERICA: The biggest economic problem you’re hearing almost nothing about

By Robert Reich

Source: Nation of Change

Not long ago I visited some farmers in Missouri whose profits are disappearing. Why? Monsanto alone owns the key genetic traits to more than 90 percent of the soybeans planted by farmers in the United States, and 80 percent of the corn. Which means Monsanto can charge farmers much higher prices.

Farmers are getting squeezed from the other side, too, because the food processors they sell their produce to are also consolidating into mega companies that have so much market power they can cut the prices they pay to farmers.

This doesn’t mean lower food prices to you. It means more profits to the monopolists.

Monopolies all around

America used to have antitrust laws that stopped corporations from monopolizing markets, and often broke up the biggest culprits. No longer. It’s a hidden upward redistribution of money and power from the majority of Americans to corporate executives and wealthy shareholders.

You may think you have lots of choices, but take a closer look:

1. The four largest food companies control 82 percent of beef packing, 85 percent of soybean processing, 63 percent of pork packing, and 53 percent of chicken processing.

2. There are many brands of toothpaste, but 70 percent of all of it comes from just two companies.

3. You may think you have your choice of sunglasses, but they’re almost all from one company: Luxottica – which also owns nearly all the eyeglass retail outlets.

4. Practically every plastic hanger in America is now made by one company, Mainetti.

5. What brand of cat food should you buy? Looks like lots of brands but behind them are basically just two companies.

6. What about your pharmaceuticals? Yes, you can get low-cost generic versions. But drug companies are in effect paying the makers of generic drugs to delay cheaper versions. Such “pay for delay” agreements are illegal in other advanced economies, but antitrust enforcement hasn’t laid a finger on them in America. They cost you and me an estimated $3.5 billion a year.

7. You think your health insurance will cover the costs? Health insurers are consolidating, too. Which is one reason your health insurance premiums, copayments, and deductibles are soaring.

8. You think you have a lot of options for booking discount airline tickets and hotels online? Think again. You have only two. Expedia merged with Orbitz, so that’s one company. And then there’s Priceline.

9. How about your cable and Internet service? Basically just four companies (and two of them just announced they’re going to merge).

Why the monopolization of America is a huge problem

The problem with all this consolidation into a handful of giant firms is they don’t have to compete. Which means they can – and do – jack up your prices.

Such consolidation keeps down wages. Workers with less choice of whom to work for have a harder time getting a raise. When local labor markets are dominated by one major big box retailer, or one grocery chain, for example, those firms essentially set wage rates for the area.

These massive corporations also have a lot of political clout. That’s one reason they’re consolidating: Power.

Antitrust laws were supposed to stop what’s been going on. But today, they’re almost a dead letter. This hurts you.

We’ve forgotten history

The first antitrust law came in 1890 when Senator John Sherman responded to public anger about the economic and political power of the huge railroad, steel, telegraph, and oil cartels – then called “trusts” – that were essentially running America.

A handful of corporate chieftains known as “robber barons” presided over all this – collecting great riches at the expense of workers who toiled long hours often in dangerous conditions for little pay. Corporations gouged consumers and corrupted politics.

Then in 1901, progressive reformer Teddy Roosevelt became president. By this time, the American public was demanding action.

In his first message to Congress in December 1901, only two months after assuming the presidency, Roosevelt warned, “There is a widespread conviction in the minds of the American people that the great corporations known as the trusts are in certain of their features and tendencies hurtful to the general welfare.”

Roosevelt used the Sherman Antitrust Act to go after the Northern Securities Company, a giant railroad trust run by J. P. Morgan, the nation’s most powerful businessman. The U.S. Supreme Court backed Roosevelt and ordered the company dismantled.

In 1911, John D. Rockefeller’s Standard Oil Trust was broken up, too. But in its decision, the Supreme Court effectively altered the Sherman Act, holding that monopolistic restraints of trade were objectionable only if they were “unreasonable” – and what counted as an unreasonable restraint of trade was left for the courts to decide.

In the presidential election of 1912, Roosevelt, running for president again but this time as a third-party candidate, said he would allow some concentration in industries where large scale produced economic efficiencies. He’d then have experts regulate these large corporations for the public benefit.

Woodrow Wilson, who ended up winning the election, and his adviser Louis Brandeis, took a different view. They didn’t think regulation would work, and thought all monopolies should be broken up.

For the next 65 years, a blend of both views prevailed: strong antitrust enforcement along with regulations that held big corporations in check.

Most big mergers were prohibited. Even sheer size was considered a problem. In 1945, in United States v. Alcoa, a federal appeals court – acting in place of the Supreme Court, which lacked a quorum – ruled that even though Alcoa hadn’t set out to monopolize, it had grown so large that it had become a monopoly, and that alone violated the Sherman Act.

What happened to antitrust?

All this changed in the 1980s, after Robert Bork – who, incidentally, I studied antitrust law with at Yale Law School, and then worked for when he became Solicitor General under President Ford – wrote an influential book called The Antitrust Paradox, which argued that the sole purpose of the Sherman Act is consumer welfare.

Bork argued that mergers and large size almost always create efficiencies that bring down prices, and therefore should be legal. Bork’s ideas were consistent with the conservative Chicago School of Economics, and found a ready audience in the Reagan White House.

Bork was wrong. But since then, even under Democratic administrations, antitrust has all but disappeared.

The monopolization of high tech

We’re seeing declining competition even in cutting-edge, high-tech industries.

In the new economy, information and ideas are the most valuable forms of property. This is where the money is.

We have never before seen concentration on this scale.

Google and Facebook are now the first stops for many Americans seeking news. Meanwhile, Amazon is now the first stop for more than half of American consumers seeking to buy anything. Talk about power.

Contrary to the conventional view of an American economy bubbling with innovative small companies, the reality is quite different. The rate at which new businesses have formed in the United States has slowed markedly since the late 1970s.

Big Tech’s sweeping patents, standard platforms, fleets of lawyers to litigate against potential rivals, and armies of lobbyists have created formidable barriers to new entrants. Google’s search engine is so dominant, “Google” has become a verb.

The European Union filed formal antitrust charges against Google, accusing it of forcing search engine users into its own shopping platforms. And last June, it fined Google a record $2.7 billion.

But not in America.

It’s time to revive antitrust

Economic and political power cannot be separated because dominant corporations gain political influence over how markets are organized, maintained, and enforced – which enlarges their economic power further.

One of the original goals of the antitrust laws was to prevent this.

Big Tech – along with the drug, insurance, agriculture, and financial giants – is coming to dominate both our economy and our politics.

There’s only one answer: It is time to revive antitrust.

Disarming the Weapons of Mass Distraction

By Madeleine Bunting

Source: Rise Up Times

“Are you paying attention?” The phrase still resonates with a particular sharpness in my mind. It takes me straight back to my boarding school, aged thirteen, when my eyes would drift out the window to the woods beyond the classroom. The voice was that of the math teacher, the very dedicated but dull Miss Ploughman, whose furrowed grimace I can still picture.

We’re taught early that attention is a currency—we “pay” attention—and much of the discipline of the classroom is aimed at marshaling the attention of children, with very mixed results. We all have a history here, of how we did or did not learn to pay attention and all the praise or blame that came with that. It used to be that such patterns of childhood experience faded into irrelevance. As we reached adulthood, how we paid attention, and to what, was a personal matter and akin to breathing—as if it were automatic.

Today, though, as we grapple with a pervasive new digital culture, attention has become an issue of pressing social concern. Technology provides us with new tools to grab people’s attention. These innovations are dismantling traditional boundaries of private and public, home and office, work and leisure. Emails and tweets can reach us almost anywhere, anytime. There are no cracks left in which the mind can idle, rest, and recuperate. A taxi ad offers free wifi so that you can remain “productive” on a cab journey.

Even those spare moments of time in our day—waiting for a bus, standing in a queue at the supermarket—can now be “harvested,” says the writer Tim Wu in his book The Attention Merchants. In this quest to pursue “those slivers of our unharvested awareness,” digital technology has provided consumer capitalism with its most powerful tools yet. And our attention fuels it. As Matthew Crawford notes in The World Beyond Your Head, “when some people treat the minds of other people as a resource, this is not ‘creating wealth,’ it is transferring it.”

There’s a whiff of panic around the subject: the story that our attention spans are now shorter than a goldfish’s attracted millions of readers on the web; it’s still frequently cited, despite its questionable veracity. Rates of diagnosis of attention deficit hyperactivity disorder in children have soared, creating an $11 billion global market for pharmaceutical companies. Every glance of our eyes is now tracked for commercial gain as ever more ingenious ways are devised to capture our attention, if only momentarily. Our eyeballs are now described as capitalism’s most valuable real estate. Both our attention and its deficits are turned into lucrative markets.

There is also a domestic economy of attention; within every family, some get it and some give it. We’re all born needing the attention of others—our parents’, especially—and from the outset, our social skills are honed to attract the attention we need for our care. Attention is woven into all forms of human encounter from the most brief and transitory to the most intimate. It also becomes deeply political: who pays attention to whom?

Social psychologists have researched how the powerful tend to tune out the less powerful. One study with college students showed that even in five minutes of friendly chat, wealthier students showed fewer signs of engagement when in conversation with their less wealthy counterparts: less eye contact, fewer nods, and more checking the time, doodling, and fidgeting. Discrimination of race and gender, too, plays out through attention. Anyone who’s spent any time in an organization will be aware of how attention is at the heart of office politics. A suggestion is ignored in a meeting, but is then seized upon as a brilliant solution when repeated by another person.

What is political is also ethical. Matthew Crawford argues that the essential characteristic of urban living is precisely this: a basic recognition of others.

And then there’s an even more fundamental dimension to the politics of attention. At a primary level, all interactions in public space require a very minimal form of attention, an awareness of the presence and movement of others. Without it, we would bump into each other, frequently.

I had a vivid demonstration of this point on a recent commute: I live in East London and regularly use the narrow canal paths for cycling. It was the canal rush hour—lots of walkers with dogs, families with children, joggers as well as cyclists heading home. We were all sharing the towpath with the usual mixture of give and take, slowing to allow passing, swerving around and between each other. Only this time, a woman was walking down the center of the path with her eyes glued to her phone, impervious to all around her. This went well beyond a moment of distraction. Everyone had to duck and weave to avoid her. She’d abandoned the unspoken contract that avoiding collision is a mutual obligation.

This scene is now a daily occurrence for many of us, in shopping centers, station concourses, or on busy streets. Attention is the essential lubricant of urban life, and without it, we’re denying our co-existence in that moment and place. The novelist and philosopher Iris Murdoch writes that the most basic requirement for being good is that a person “must know certain things about his surroundings, most obviously the existence of other people and their claims.”

Attention is what draws us out of ourselves to experience and engage in the world. The word is often accompanied by a verb—attention needs to be grabbed, captured, mobilized, attracted, or galvanized. Reflected in such language is an acknowledgement of how attention is the essential precursor to action. The founding father of psychology William James provided what is still one of the best working definitions:

It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others.

Attention is a limited resource and has to be allocated: to pay attention to one thing requires us to withdraw it from others. There are two well-known dimensions to attention, explains Willem Kuyken, a professor of psychology at Oxford. The first is “alerting”— an automatic form of attention, hardwired into our brains, that warns us of threats to our survival. Think of when you’re driving a car in a busy city: you’re aware of the movement of other cars, pedestrians, cyclists, and road signs, while advertising tries to grab any spare morsel of your attention. Notice how quickly you can swerve or brake when you spot a car suddenly emerging from a side street. There’s no time for a complicated cognitive process of decision making. This attention is beyond voluntary control.

The second form of attention is known as “executive”—the process by which our brain selects what to foreground and focus on, so that there can be other information in the background—such as music when you’re cooking—but one can still accomplish a complex task. Crucially, our capacity for executive attention is limited. Contrary to what some people claim, none of us can multitask complex activities effectively. The next time you write an email while talking on the phone, notice how many typing mistakes you make or how much you remember from the call. Executive attention can be trained, and needs to be for any complex activity. This was the point James made when he wrote: “there is no such thing as voluntary attention sustained for more than a few seconds at a time… what is called sustained voluntary attention is a repetition of successive efforts which bring back the topic to the mind.”

Attention is a complex interaction between memory and perception, in which we continually select what to notice, thus finding the material which correlates in some way with past experience. In this way, patterns develop in the mind. We are always making meaning from the overwhelming raw data. As James put it, “my experience is what I agree to attend to. Only those items which I notice shape my mind—without selective interest, experience is an utter chaos.”

And we are constantly engaged in organizing that chaos, as we interpret our experience. This is clear in the famous Gorilla Experiment in which viewers were told to watch a video of two teams of students passing a ball between them. They had to count the number of passes made by the team in white shirts and ignore those of the team in black shirts. The experiment is deceptively complex because it involves three forms of attention: first, scanning the whole group; second, ignoring the black T-shirt team to keep focus on the white T-shirt team (a form of inhibiting attention); and third, remembering to count. In the middle of the experiment, someone in a gorilla suit ambles through the group. Afterward, when asked, half the viewers said they hadn’t spotted the gorilla and couldn’t believe it had been there. We can be blind not only to the obvious, but to our blindness.

There is another point in this experiment which is less often emphasized. Ignoring something—such as the black T-shirt team in this experiment—requires a form of attention. It costs us attention to ignore something. Many of us live and work in environments that require us to ignore a huge amount of information—that flashing advert, a bouncing icon or pop-up.

In another famous psychology experiment, Walter Mischel’s Marshmallow Test, four-year-olds were given a choice: eat one marshmallow immediately, or wait fifteen minutes and get two. Each child was filmed alone in a room in front of a plate holding a single marshmallow. They squirmed and fidgeted, poked the marshmallow and stared at the ceiling. A third of the children couldn’t resist the marshmallow and gobbled it up, a third nibbled cautiously, but the last third figured out how to distract themselves. They looked under the table, sang… did anything but look at the sweet. It’s a demonstration of the capacity to reallocate attention. In a follow-up study some years later, those who’d been able to wait for the second marshmallow had better life outcomes, such as academic achievement and health. One New Zealand study of 1,000 children found that this form of self-regulation was a more reliable predictor of future success and wellbeing than even a good IQ or comfortable economic status.

What, then, are the implications of how digital technologies are transforming our patterns of attention? In the current political anxiety about social mobility and inequality, more weight needs to be put on this most crucial and basic skill: sustaining attention.

*

I learned to concentrate as a child. Being a bookworm helped. I’d be completely absorbed in my reading as the noise of my busy family swirled around me. It was good training for working in newsrooms; when I started as a journalist, they were very noisy places with the clatter of keyboards, telephones ringing and fascinating conversations on every side. What has proved much harder to block out is email and text messages.

The digital tech companies know a lot about this widespread habit; many of them have built a business model around it. They’ve drawn on the work of the psychologist B.F. Skinner, who identified back in the Thirties how, in animal behavior, an action can be encouraged with a positive consequence and discouraged by a negative one. In one experiment, he gave a pigeon a food pellet whenever it pecked at a button, and the result, as predicted, was that the pigeon kept pecking. Subsequent research established that the most effective way to keep the pigeon pecking was “variable-ratio reinforcement”: reward the pecking only sometimes, at unpredictable intervals, and you have the pigeon well and truly hooked.
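
To make the distinction concrete, here is a minimal, purely illustrative sketch in Python of a fixed-ratio versus a variable-ratio schedule. It is not Skinner’s apparatus or any company’s actual code; the reward probability, the fixed ratio, and the number of presses are arbitrary assumptions, chosen only to show what “variable-ratio” means: the payoff arrives after an unpredictable number of responses rather than on a fixed schedule.

```python
import random


def fixed_ratio_rewards(presses: int, ratio: int = 5) -> int:
    """Reward exactly every `ratio`-th press (a fixed-ratio schedule)."""
    return presses // ratio


def variable_ratio_rewards(presses: int, mean_ratio: int = 5) -> int:
    """Reward each press with probability 1/mean_ratio (a variable-ratio schedule).

    On average one press in `mean_ratio` pays off, but the gap between
    rewards is unpredictable -- the pattern found most effective at
    keeping the behaviour going.
    """
    return sum(1 for _ in range(presses) if random.random() < 1 / mean_ratio)


if __name__ == "__main__":
    random.seed(0)  # reproducible illustration only
    print("fixed ratio rewards:   ", fixed_ratio_rewards(100))
    print("variable ratio rewards:", variable_ratio_rewards(100))
```

Over many presses the two schedules hand out roughly the same number of rewards; what differs is the unpredictability of the variable-ratio version, which is exactly the property the pigeon, and the email-checker in the next paragraph, gets hooked on.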

We’re just like the pigeon pecking at the button when we check our email or phone. It’s a humiliating thought. Variable reinforcement ensures that the customer will keep coming back. It’s the principle behind one of the most lucrative US industries: slot machines, which generate more profit than baseball, films, and theme parks combined. Gambling was once tightly restricted for its addictive potential, but most of us now have the attentional equivalent of a slot machine in our pocket, beside our plate at mealtimes, and by our pillow at night. Even during a meal out, a play at the theater, a film, or a tennis match. Almost nothing is now experienced uninterrupted.

Anxiety about the exponential rise of our gadget addiction and how it is fragmenting our attention is sometimes dismissed as a Luddite reaction to a technological revolution. But that misses the point. The problem is not the technology per se, but the commercial imperatives that drive the new technologies and, unrestrained, colonize our attention by fundamentally changing our experience of time and space, saturating both in information.

In much public space, wherever your eye lands—from the back of the toilet door, to the handrail on the escalator, or the hotel key card—an ad is trying to grab your attention, and does so by triggering the oldest instincts of the human mind: fear, sex, and food. Public places become dominated by people trying to sell you something. In his tirade against this commercialization, Crawford cites advertisements on the backs of school report cards and on debit machines where you swipe your card. Before you enter your PIN, that gap of a few seconds is now used to show adverts. He describes silence and ad-free experience as “luxury goods” that only the wealthy can afford. Crawford has invented the concept of the “attentional commons,” free public spaces that allow us to choose where to place our attention. He draws the analogy with environmental goods that belong to all of us, such as clean air or clean water.

Some legal theorists are beginning to conceive of our own attention as a human right. One former Google employee warned that “there are a thousand people on the other side of the screen whose job it is to break down the self-regulation you have.” They use the insights into human behavior derived from social psychology—the need for approval, the need to reciprocate others’ gestures, the fear of missing out. Your attention ceases to be your own, pulled and pushed by algorithms. Attention is referred to as the real currency of the future.

*

In 2013, I embarked on a risky experiment in attention: I left my job. In the previous two years, it had crept up on me. I could no longer read beyond a few paragraphs. My eyes would glaze over and, even more disastrously for someone who had spent their career writing, I seemed unable to string together my thoughts, let alone write anything longer than a few sentences. When I try to explain the impact, I can only offer a metaphor: it felt like my imagination and use of language were vacuum packed, like a slab of meat coated in plastic. I had lost the ability to turn ideas around, see them from different perspectives. I could no longer draw connections between disparate ideas.

At the time, I was working in media strategy. It was a culture of back-to-back meetings from 8:30 AM to 6 PM, and there were plenty of advantages to be gained from continuing late into the evening if you had the stamina. Commitment was measured by emails with a pertinent weblink. Meetings were sometimes as brief as thirty minutes and frequently ran through lunch. Meanwhile, everyone was sneaking time to battle with the constant emails, eyes flickering to their phone screens in every conversation. The result was a kind of crazy fog, a mishmash of inconclusive discussions.

At first, it was exhilarating, like being on those crazy rides in a theme park. By the end, the effect was disastrous. I was almost continuously ill, battling migraines and unidentifiable viruses. When I finally made the drastic decision to leave, my income collapsed to a fraction of its previous level and my family’s lifestyle had to change accordingly. I had no idea what I was going to do; I had lost all faith in my ability to write. I told friends I would have to return the advance I’d received to write a book. I had to try to get back to the skills of reflection and focus that had once been ingrained in me.

The first step was to teach myself to read again. I sometimes went to a café, leaving my phone and computer behind. I had to slow down the racing incoherence of my mind so that it could settle on the text and its gradual development of an argument or narrative thread. The turning point in my recovery was a five-week research trip to the Scottish Outer Hebrides. On the journey north of Glasgow, my mobile phone lost its Internet connection. I had cut myself loose with only the occasional text or call to family back home. Somewhere on the long Atlantic beaches of these wild and dramatic islands, I rediscovered my ability to write.

I attribute that in part to a stunning exhibition I came across in the small harbor town of Lochboisdale, on the island of South Uist. Vija Celmins is an acclaimed Latvian-American artist whose work is famous for its astonishing patience. She can take a year or more to make a woodcut that portrays in minute detail the surface of the sea. A postcard of her work now sits above my desk, a reminder of the power of slow thinking.

Just as we’ve had a slow eating movement, we need a slow thinking campaign. Its manifesto could be the poet Rainer Maria Rilke’s beautiful Letters to a Young Poet:

To let every impression and the germ of every feeling come to completion inside, in the dark, in the unsayable, the unconscious, in what is unattainable to one’s own intellect, and to wait with deep humility and patience for the hour when a new clarity is delivered.

Many great thinkers attest that they have their best insights in moments of relaxation, the proverbial brainwave in the bath. We actually need what we most fear: boredom.

When I left my job (and I was lucky that I could), friends and colleagues were bewildered. Why give up a good job? But I felt that here was an experiment worth trying. Crawford frames it well as “intellectual biodiversity.” At a time of crisis, we need people thinking in different ways. If we all jump to the tune of Facebook or Instagram and allow ourselves to be primed by Twitter, the danger is that we lose the “trained powers of concentration” that allow us, in Crawford’s words, “to recognize that independence of thought and feeling is a fragile thing, and requires certain conditions.”

I also took to heart the insights of the historian Timothy Snyder, who concluded from his studies of twentieth-century European totalitarianism that the way to fend off tyranny is to read books, make an effort to separate yourself from the Internet, and “be kind to our language… Think up your own way of speaking.” Dropping out and going offline enabled me to get back to reading, voraciously, and to writing; beyond that, it’s too early to announce the results of my experiment with attention. As Rilke said, “These things cannot be measured by time, a year has no meaning, and ten years are nothing.”

*

A recent column in The New Yorker cheekily suggests that all the fuss about the impact of digital technologies on our attention is nothing more than writers’ worrying about their own working habits. Is all this anxiety about our fragmenting minds a moral panic akin to those that swept Victorian Britain about sexual behavior? Patterns of attention are changing, but perhaps it doesn’t much matter?

My teenage children read much less than I did. One son used to play chess online with a friend, text on his phone, and do his homework all at the same time. I was horrified, but he got a place at Oxford. At his interview, he met a third-year history undergraduate who told him he hadn’t yet read any books in his time at university. But my kids are considerably more knowledgeable about a vast range of subjects than I was at their age. There’s a small voice suggesting that the forms of attention I was brought up with could be a thing of the past; the sustained concentration required to read a whole book will become an obscure niche hobby.

And yet, I’m haunted by a reflection: the magnificent illuminations of the eighth-century Book of Kells have intricate patterning that no one has ever been able to copy, such is the fineness of the tight spirals. Lines are a millimeter apart. They indicate a steadiness of hand and mind—a capability most of us have long since lost. Could we be trading depth of focus for breadth of reference? Some might argue that’s not a bad trade. But we would lose depth: the artist Paul Klee wrote that he would spend a day in silent contemplation of something before he painted it. Paul Cézanne was similarly known for his trance-like attention to his subject. Madame Cézanne recalled how her husband would gaze at the landscape and tell her, “The landscape thinks itself in me, and I am its consciousness.” The philosopher Maurice Merleau-Ponty describes a contemplative attention in which one steps outside of oneself and immerses oneself in the object of attention.

It’s not just artists who require such depth of attention. Nearly two decades ago, a doctor teaching medical students at Yale was frustrated at their inability to distinguish between types of skin lesions. Their gaze seemed restless and careless. He took his students to an art gallery and told them to look at a picture for fifteen minutes. That exercise in sustained looking grew into a program now used in dozens of US medical schools.

Some argue that losing the capacity for deep attention presages catastrophe. It is the building block of “intimacy, wisdom, and cultural progress,” argues Maggie Jackson in her book Distracted, in which she warns that “as our attentional skills are squandered, we are plunging into a culture of mistrust, skimming, and a dehumanizing merging between man and machine.” Significantly, her research began with a curiosity about why so many Americans were deeply dissatisfied with life. She argues that losing the capacity for deep attention makes it harder to make sense of experience and to find meaning—from which comes wonder and fulfillment. She fears a new “dark age” in which we forget what makes us truly happy.

Strikingly, the epicenter of this wave of anxiety over our attention is the US. All the authors I’ve cited are American. It’s been argued that this debate represents an existential crisis for America because it exposes the flawed nature of its greatest ideal, individual freedom. The commonly accepted notion is that to be free is to make choices, and no one can challenge that expression of autonomy. But if our choices are actually engineered by thousands of very clever, well-paid digital developers, are we free? The former Google employee Tristan Harris confessed in an article in 2016 that technology “gives people the illusion of free choice while architecting the menu so that [tech giants] win, no matter what you choose.”

Despite my children’s multitasking, I maintain that vital human capacities—depth of insight, emotional connection, and creativity—are at risk. I’m intrigued as to what the resistance might look like. There are stirrings of protest with the recent establishment of initiatives such as the Time Well Spent movement, founded by tech industry insiders who have become alarmed at the efforts invested in keeping people hooked. But collective action is elusive; the emphasis is repeatedly on the individual to develop the necessary self-regulation, but if that is precisely what is being eroded, we could be caught in a self-reinforcing loop.

One of the most interesting responses to our distraction epidemic is mindfulness. Its popularity is evidence that people are trying to find a way to protect and nourish their minds. Jon Kabat-Zinn, who pioneered the development of secular mindfulness, draws an analogy with jogging: just as keeping your body fit is now well understood, people will come to realize the importance of looking after their minds.

I’ve meditated regularly for twenty years, but curious as to how this is becoming mainstream, I went to an event in the heart of high-tech Shoreditch in London. In a hipster workspace with funky architecture, excellent coffee, and an impressive range of beards, a soft-spoken retired Oxford professor of psychology, Mark Williams, was talking about how multitasking has a switching cost in focus and concentration. Our unique human ability to remember the past and to think ahead brings a cost; we lose the present. To counter this, he advocated a daily practice of mindfulness: bringing attention back to the body—the physical sensations of the breath, the hands, the feet. Williams explained how fear and anxiety inhibit creativity. In time, the practice of mindfulness enables you to acknowledge fear calmly and even to investigate it with curiosity. You learn to place your attention in the moment, noticing details such as the sunlight or the taste of the coffee.

On a recent retreat, I was beside a river early one morning and a rower passed. I watched the boat slip by and enjoyed the beauty in a radically new way. The moment was sufficient; there was nothing I wanted to add or take away—no thought of how I wanted to do this every day, or how I wanted to learn to row, or how I wished I was in the boat. Nothing but the pleasure of witnessing it. The busy-ness of the mind had stilled. Mindfulness can be a remarkable bid to reclaim our attention and to claim real freedom, the freedom from our habitual reactivity that makes us easy prey for manipulation.

But I worry that the integrity of mindfulness is fragile, vulnerable both to commercialization by employers who see it as a form of mental performance enhancement and to consumer commodification, rather than contributing to the formation of ethical character. Mindfulness as a meditation practice originates in Buddhism, and without that tradition’s ethics, there is a high risk of it being hijacked and misrepresented.

Back in the Sixties, the countercultural psychologist Timothy Leary rebelled against the conformity of the new mass media age and called for, in Crawford’s words, an “attentional revolution.” Leary urged people to take control of the media they consumed as a crucial act of self-determination; pay attention to where you place your attention, he declared. The social critic Herbert Marcuse believed Leary was fighting the struggle for the ultimate form of freedom, which Marcuse defined as the ability “to live without anxiety.” These were radical prophets whose words have an uncanny resonance today. Distraction has become a commercial and political strategy, and it amounts to a form of emotional violence that cripples people, leaving them unable to gather their thoughts and overwhelmed by a sense of inadequacy. It’s a powerful form of oppression dressed up in the language of individual choice.

The stakes could hardly be higher, as William James knew a century ago: “The faculty of voluntarily bringing back a wandering attention, over and over again, is the very root of judgment, character, and will.” And what are we humans without these three?

A 2% Financial Wealth Tax Would Provide A $12,000 Annual Stipend To Every American Household

Careful analysis reveals a number of excellent arguments for the implementation of a Universal Basic Income.

By Paul Buchheit

Source: Nation of Change

It’s not hard to envision the benefits in work opportunities, stress reduction, child care, entrepreneurial activity, and artistic pursuits for American households with an extra $1,000 per month. It’s also very easy to justify a financial wealth tax, given that the dramatic stock market surge in recent years is largely due to an unprecedented degree of technological and financial productivity that derives from the work efforts and taxes of ALL Americans. A 2% annual tax on financial wealth is a small price to pay for the great fortunes bestowed on the most fortunate Americans.

The REASONS? Careful analysis reveals a number of excellent arguments for the implementation of a Universal Basic Income (UBI).

(1) Our Jobs are Disappearing

A 2013 Oxford study determined that nearly HALF of American jobs are at risk of being replaced by computers, AI, and robots. Society simply can’t keep up with technology. As for the skeptics who cite the Industrial Revolution and its job-enhancing aftermath (which actually took 60 years to develop), the McKinsey Global Institute says that society is being transformed at a pace “ten times faster and at 300 times the scale” of the radical changes of two hundred years ago.

(2) Half of America is Stressed Out or Sick

Half of Americans are in or near poverty, unable to meet emergency expenses, living from paycheck to paycheck, and getting physically and emotionally ill because of it. Numerous UBI experiments have led to increased well-being for their participants. A guaranteed income reduces the debilitating effects of inequality. As one recipient put it, “It takes me out of depression…I feel more sociable.”

(3) Children Need Our Help

This could be the best reason for monthly household stipends. Many parents, especially mothers, are unable to work outside the home because of the all-important need to care for their children. In the absence of a UBI, more and more children are facing hunger, health problems, and educational disadvantages.

(4) We Need More Entrepreneurs

A sudden influx of $12,000 per year for 126 million households will greatly stimulate the economy, potentially allowing millions of Americans to TAKE RISKS that could lead to new forms of innovation and productivity.

Perhaps most significantly, a guaranteed income could relieve some of the pressure on our newest generation of young adults, who are deep in debt, underemployed, increasingly unable to live on their own, and ill-positioned to take the entrepreneurial chances that are needed to spur innovative business growth. No other group of Americans could make more productive use of an immediate boost in income.

(5) We Need the Arts & Sciences

A recent Gallup poll found that nearly 70% of workers don’t feel ‘engaged’ (enthusiastic and committed) in their jobs. The work chosen by UBI recipients could unleash artistic talents and creative impulses that have been suppressed by personal financial concerns, leading, very possibly, to a repeat of the 1930s, when the Works Progress Administration hired thousands of artists and actors and musicians to help sustain the cultural needs of the nation.

Arguments against

The usual uninformed and condescending opposing argument is that UBI recipients will waste the money, spending it on alcohol and drugs and other ‘temptation’ goods. Not true. Studies from the World Bank and the Brooks World Poverty Institute found that money going to poor families is used primarily for essential needs, and that the recipients experience greater physical and mental well-being as a result of their increased incomes. Other arguments against the workability of the UBI are countered by the many successful experiments conducted in the present and recent past: Finland, Canada, the Netherlands, Kenya, India, Great Britain, Uganda, Namibia, and, in the U.S., Alaska and California.

How to pay for it

Largely because of the stock market, U.S. financial wealth has surged to $77 trillion, with the richest 10% owning over three-quarters of it. Just a 2 percent tax on total financial wealth would generate enough revenue to provide a $12,000 annual stipend to every American household (including those of the richest families).
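
As a sanity check on that claim, here is a minimal back-of-the-envelope calculation in Python using the article’s own figures – the roughly $77 trillion wealth total and the 126 million household count cited in this piece; both numbers are taken as given rather than independently verified.

```python
# Back-of-the-envelope check of the article's own figures (assumptions,
# not independently verified): ~$77 trillion in U.S. financial wealth,
# ~126 million households, a 2% annual tax.
financial_wealth = 77e12   # total U.S. financial wealth, in dollars
tax_rate = 0.02            # proposed 2% annual wealth tax
households = 126e6         # approximate number of U.S. households

revenue = financial_wealth * tax_rate   # ~$1.54 trillion per year
stipend = revenue / households          # ~$12,200 per household per year

print(f"Annual revenue:        ${revenue / 1e12:.2f} trillion")
print(f"Stipend per household: ${stipend:,.0f} per year")
```

A 2 percent levy on $77 trillion yields about $1.54 trillion a year; spread across 126 million households, that comes to a little over $12,000 each, consistent with the article’s round figure.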

It’s easy to justify a wealth tax. Over half of all basic research is paid for by our tax dollars. All the technology in our phones and computers started with government research and funding. Pharmaceutical companies wouldn’t exist without decades of support from the National Institutes of Health. Yet the tech and pharmaceutical companies claim patents on the products paid for and developed by the American people.

The collection of a wealth tax would not be simple, since only about half of U.S. financial wealth is held directly in equities and liquid assets (Table 5-2). But it’s doable. As Thomas Piketty notes, “A progressive tax on net wealth is better than a progressive tax on consumption because first, net wealth is better defined for very wealthy individuals…”

And certainly a financial industry that knows how to package worthless loans into A-rated mortgage-backed securities should be able to figure out how to tax the investment companies that manage the rest of our ever-increasing national wealth.


The Singular Pursuit of Comrade Bezos

By Malcolm Harris

Source: Medium

It was explicitly and deliberately a ratchet, designed to effect a one-way passage from scarcity to plenty by way of stepping up output each year, every year, year after year. Nothing else mattered: not profit, not the rate of industrial accidents, not the effect of the factories on the land or the air. The planned economy measured its success in terms of the amount of physical things it produced.

— Francis Spufford, Red Plenty

But isn’t a business’s goal to turn a profit? Not at Amazon, at least in the traditional sense. Jeff Bezos knows that operating cash flow gives the company the money it needs to invest in all the things that keep it ahead of its competitors, and recover from flops like the Fire Phone. Up and to the right.

— Recode, “Amazon’s Epic 20-Year Run as a Public Company, Explained in Five Charts”


From a financial point of view, Amazon doesn’t behave much like a successful 21st-century company. Amazon has not bought back its own stock since 2012. Amazon has never offered its shareholders a dividend. Unlike its peers Google, Apple, and Facebook, Amazon does not hoard cash. It has only recently started to record small, predictable profits. Instead, whenever it has resources, Amazon invests in capacity, which results in growth at a ridiculous clip. When the company found itself with $13.8 billion lying around, it bought a grocery chain for $13.7 billion. As the Recode story referenced above summarizes in one of the graphs: “It took Amazon 18 years as a public company to catch Walmart in market cap, but only two more years to double it.” More than a profit-seeking corporation, Amazon is behaving like a planned economy.

If there is one story Americans who grew up after the fall of the Berlin Wall know about planned economies, I’d wager it’s the one about Boris Yeltsin in a Texas supermarket.

In 1989, recently elected to the Supreme Soviet, Yeltsin came to America, in part to see Johnson Space Center in Houston. On an unscheduled jaunt, the Soviet delegation visited a local supermarket. Photos from the Houston Chronicle capture the day: Yeltsin, overcome by a display of Jell-O Pudding Pops; Yeltsin inspecting the onions; Yeltsin staring down a full display of shiny produce like a line of enemy soldiers. Planning could never master the countless variables that capitalism calculated using the tireless machine of self-interest. According to the story, the overflowing shelves filled Yeltsin with despair for the Soviet system, turned him into an economic reformer, and spelled the end for state socialism as a global force. We’re taught this lesson in public schools, along with Animal Farm: Planned economies do not work.

It’s almost 30 years later, but if Comrade Yeltsin had visited today’s most-advanced American grocery stores, he might not have felt so bad. Journalist Hayley Peterson summarized her findings in the title of her investigative piece, “‘Seeing Someone Cry at Work Is Becoming Normal’: Employees Say Whole Foods Is Using ‘Scorecards’ to Punish Them.” The scorecard in question measures compliance with OTS, or “on-the-shelf” inventory management, at Whole Foods (now an Amazon subsidiary). OTS is exhaustive, replacing a previously decentralized system with inch-by-inch centralized standards. Those standards include delivering food from trucks straight to the shelves, skipping the expense of stockrooms. This has resulted in shelves so sparse that the produce displays couldn’t bring down North Korea. Has Bezos stumbled into the problems with planning?

Although OTS was in play before Amazon purchased Whole Foods last August, stories of enforcement harsh enough to bring employees to tears fit with the Bezos ethos and reputation. Amazon is famous for pursuing growth and large-scale efficiencies, even when workers find the experiments torturous and when they don’t make a lot of sense to customers, either. If you receive a tiny item in a giant Amazon box, don’t worry. Your order is just one small piece in an efficiency jigsaw that’s too big and fast for any individual human to comprehend. If we view Amazon as a planned economy rather than just another market player, it all starts to make more sense: We’ll thank Jeff later, when the plan works. And indeed, with our dollars, we have.

In fact, to think of Amazon as a “market player” is a mischaracterization. The world’s biggest store doesn’t use suggested retail pricing; it sets its own. Book authors (to use a personal example) receive a distinctly lower royalty for Amazon sales because the site has the power to demand lower prices from publishers, who in turn pass on the tighter margins to writers. But for consumers, it works! Not only are books significantly cheaper on Amazon, the site also features a giant stock that can be shipped to you within two days, for free with Amazon Prime citizensh…er, membership. All 10 or so bookstores I frequented as a high school and college student have closed, yet our access to books has improved — at least as far as we seem to be able to measure. It’s hard to expect consumers to feel bad enough about that to change our behavior.


Although they attempt to grow in a single direction, planned economies always destroy as well as build. In the 1930s, the Soviet Union compelled the collectivization of kulaks, or prosperous peasants. Small farms were incorporated into a larger collective agricultural system. Depending on who you ask, dekulakization was literal genocide, comparable to the Holocaust, and/or it catapulted what had been a continent-sized expanse of peasants into a modern superpower. Amazon’s decimation of small businesses (bookstores in particular) is a similar sort of collectivization, purging small proprietors or driving them onto Amazon platforms. The process is decentralized and executed by the market rather than the state, but don’t get confused: Whether or not Bezos is banging on his desk, demanding the extermination of independent booksellers — though he probably is — these are top-down decisions to eliminate particular ways of life.

Now, with the purchase of Whole Foods, Bezos and Co. seem likely to apply the same pattern to food. Responding to reports that Amazon will begin offering free two-hour Whole Foods delivery for Prime customers, BuzzFeed’s Tom Gara tweeted, “Stuff like this suggests Amazon is going to remove every cent of profit from the grocery industry.” Free two-hour grocery delivery is ludicrously convenient, perhaps the most convenient thing Amazon has come up with yet. And why should we consumers pay for huge dividends to Kroger shareholders? Fuck ’em; if Bezos has the discipline to stick to the growth plan instead of stuffing shareholder pockets every quarter, then let him eat their lunch. Despite a business model based on eliminating competition, Amazon has avoided attention from antitrust authorities because prices are down. If consumers are better off, who cares if it’s a monopoly? American antitrust law doesn’t exist to protect kulaks, whether they’re selling books or groceries.

Amazon has succeeded in large part because of the company’s uncommon drive to invest in growth. And today, not only are other companies slow to spend, so are governments. Austerity politics and decades of privatization put Amazon in a place to take over state functions. If localities can’t or won’t invest in jobs, then Bezos can get them to forgo tax dollars (and dignity) to host HQ2. There’s no reason governments couldn’t offer on-demand cloud computing services as a public utility, but instead the feds pay Amazon Web Services to host their sites. And if the government outsources health care for its population to insurers who insist on making profits, well, stay tuned. There’s no near-term natural end to Amazon’s growth, and by next year the company’s annual revenue should surpass the GDP of Vietnam. I don’t see any reason why Amazon won’t start building its own cities in the near future.

America never had to find out whether capitalism could compete with the Soviets plus 21st-century technology. Regardless, the idea that market competition can better set prices than algorithms and planning is now passé. Our economists used to scoff at the Soviets’ market-distorting subsidies; now Uber subsidizes every ride. Compared to the capitalists who are making their money by stripping the copper wiring from the American economy, the Bezos plan is efficient. So, with the exception of small business owners and managers, why wouldn’t we want to turn an increasing amount of our life-world over to Amazon? I have little doubt the company could, from a consumer perspective, improve upon the current public-private mess that is Obamacare, for example. Between the patchwork quilt of public- and private-sector scammers that run America today and “up and to the right,” life in the Amazon with Lex Luthor doesn’t look so bad. At least he has a plan, unlike some people.

From the perspective of the average consumer, it’s hard to beat Amazon. The single-minded focus on efficiency and growth has worked, and delivery convenience is perhaps the one area of American life that has kept up with our past expectations for the future. However, we do not make the passage from cradle to grave as mere average consumers. Take a look at package delivery, for example: Amazon’s latest disruptive announcement is “Shipping with Amazon,” a challenge to the USPS, from which Amazon has been wrangling preferential rates. As a government agency bound to serve everyone, the Postal Service has had to accept all sorts of inefficiencies, like free delivery for rural customers or subsidized media distribution to realize freedom of the press. Amazon, on the other hand, is a private company that doesn’t really have to do anything it doesn’t want to do. In aggregate, as average consumers, we should be cheering. Maybe we are. But as members of a national community, I hope we stop to ask if efficiency is all we want from our delivery infrastructure. Lowering costs as far as possible sounds good until you remember that one of those costs is labor. One of those costs is us.

Earlier this month, Amazon was awarded two patents for a wristband system that would track the movement of warehouse employees’ hands in real time. It’s easy to see how this is a gain in efficiency: If the company can optimize employee movements, everything can be done faster and cheaper. It’s also easy to see how, for those workers, this is a significant step down the path into a dystopian hellworld. Amazon is a notoriously brutal, draining place to work, even at the executive levels. The fear used to be that if Amazon could elbow out all its competitors with low prices, it would then jack them up, Martin Shkreli style. That’s not what happened. Instead, Amazon and other monopsonists have used their power to drive wages and the labor share of production down. If you follow the Bezos strategy all the way, it doesn’t end in fully automated luxury communism or even Wall-E. It ends in The Matrix, with workers swaddled in a pod of perfect convenience and perfect exploitation. Central planning in its capitalist form turns people into another cost to be reduced as low as possible.

Just because a plan is efficient doesn’t mean it’s good. Postal Service employees are unionized; they have higher wages, paths for advancement, job stability, negotiated grievance procedures, health benefits, vacation time, etc. Amazon delivery drivers are not and do not. That difference counts as efficiency when we measure by price, and that is, to my mind, a very good argument for not handing the world over to the king of efficiency. The question that remains is whether we have already been too far reduced, whether after being treated as consumers and costs, we might still have it in us to be more, because that’s what it will take to wrench society away from Bezos and from the people who have made him look like a reasonable alternative.