Cyberpunk is Dead

By John Semley

Source: The Baffler

“It was an embarrasser; what did I want? I hadn’t thought that far ahead. Me, caught without a program!”
—Bruce Bethke, “Cyberpunk” (1983)

Held annually in a downtown L.A. convention center so massive and glassy that it served as a futurist backdrop for the 1993 sci-fi action film Demolition Man and as an intergalactic “Federal Transport Hub” in Paul Verhoeven’s 1997 space-fascism satire Starship Troopers, the Electronic Entertainment Expo, a.k.a. “E3,” is the trade show of the future. Sort of.

With “electronic entertainment” now surpassing both music and movies (and, indeed, the total earnings of music and movies combined), the future of entertainment, or at least of entertainment revenue, is the future of video games. Yet it’s a future that’s backward-looking, its gaze locked in the rearview even as the medium hurtles forward.

Highlights of E3’s 2019 installment included more details about a long-gestating remake of the popular PlayStation 1-era role-playing game Final Fantasy VII, a fifth entry in the demon-shooting franchise Doom, a mobile remake of jokey kids side-scroller Commander Keen, and playable adaptations of monster-budget movie franchises like Star Wars and The Avengers. But no title at E3 2019 garnered as much attention as Cyberpunk 2077, the unveiling of which was met with a level of slavish mania one might reserve for a stadium rock concert, or the ceremonial reveal of an efficacious new antibiotic.

An extended trailer premiere worked to whet appetites. Skyscrapers stretched upward, slashed horizontally with long windows of light and decked out with corporate branding for companies called “DATA INC.” and “softsys.” There were rotating wreaths of bright neon billboards advertising near-futuristic gizmos and gewgaws, and, at the street level, sketchy no-tell motels and cars of the flying, non-flying, and self-piloting variety. In a grimy, high-security bunker, a man with a buzzcut, his face embedded with microchips, traded blows with another, slightly larger man with a buzzcut, whose fists were robotically augmented like the cyborg Special Forces brawler Jax from Mortal Kombat. The trailer smashed to its title, and to wild applause from congregated gamers and industry types.

Then, to a chug-a-lug riff provided by Swedish straight-edge punkers Refused (recording under the nom de guerre SAMURAI) that sounded like the sonic equivalent of a can of Monster energy drink, an enormous freight-style door lifted, revealing, through a haze of pumped-out fog, a vaguely familiar silhouette: a tall, lean-muscular stalk, scraggly hair cut just above the shoulders. Over the PA system, in smoothly undulating, bass-heavy movie trailer tones, a canned voice announced: “Please welcome . . . Keanu Reeves.” Applause. Pitchy screams. Hysterics in the front row prostrating themselves in Wayne’s World “we’re not worthy!” fashion. “I gotta talk to ya about something!” Reeves roared through the din. Dutifully reading from a teleprompter, he plugged Cyberpunk 2077’s customizable characters and its “vast open world with a branching storyline,” set in “a metropolis of the future where body modification has become an obsession.”

More than just stumping for Cyberpunk 2077, Reeves lent his voice and likeness to the game as a non-playable character (NPC) named “Johnny Silverhand,” who is described in the accompanying press materials as a “legendary rockerboy.” A relative newbie to the world of blockbuster Xbox One games, Reeves told the audience at E3 that Cyberpunk piqued his interest because he’s “always drawn to fascinating stories.” The comment is a bit rich—OK, yes, this is a trade show pitch, but still—considering that such near-futuristic, bodily augmented, neon-bathed dystopias are hardly new ground for Reeves. His appearance in Cyberpunk 2077 serves more to lend the game some genre cred, given Reeves’s starring roles in canonical sci-fi films such as Johnny Mnemonic (1995) and the considerably more fantastic Matrix trilogy (1999-2003)—now a quadrilogy, with a fourth installment recently announced. Like many of E3 2019’s other top-shelf titles, Cyberpunk 2077 looked forward by reflecting back, conjuring its tech-noir scenario from the nostalgic ephemera of cyberpunk futures past.

This was hardly lost among all the uproar and excitement. Author William Gibson, a doyen of sci-fi’s so-called “cyberpunk” subgenre, offered his own withering appraisal of Cyberpunk 2077, tweeting that the game was little more than a cloned Grand Theft Auto, “skinned-over with generic 80s retro-future” upholstery. “[B]ut hey,” Gibson added, a bit glibly, “that’s just me.” One would imagine that, at least in the burrows of cyberpunk fandom, Gibson’s criticism carries considerable weight.

After all, the author’s 1984 novel Neuromancer is a core text in cyberpunk literature. Gibson also wrote the screenplay for Johnny Mnemonic, adapted from one of his own short stories, which likewise developed the aesthetic and thematic template for the cyberpunk genre: future dystopias in which corporations rule, computer implants (often called “wetware”) permit access to expansive virtual spaces that unfold before the user like a walk-in World Wide Web, scrappy gangs of social misfits unite to hack the bad guys’ mainframes, and samurai swords proliferate, along with Yakuza heavies, neon signs advertising noodle bars in Kanji, and other fetish objects imported from Japanese pop culture. Gibson dissing Cyberpunk 2077 is a bit like Elvis Presley clawing out of his grave to disparage the likeness of an aspiring Elvis impersonator.

Gibson’s snark speaks to a deeper malaise that has beset cyberpunk. Once a lively genre offering a clear, if goofy, vision of the future, its structures of control, and the oppositional forces undermining those authoritarian edifices, cyberpunk has since been clouded by a kind of self-mythologizing nostalgia. This problem was diagnosed as early as 1991 by novelist Lewis Shiner, himself an early cyberpunk-lit affiliate.

“What cyberpunk had going for it,” Shiner wrote in a New York Times op-ed titled “Confessions of an Ex-Cyberpunk,” “was the idea that technology did not have to be intimidating. Readers in their teens and 20’s responded powerfully to it. They were tired of hearing how their home computers were tempting them into crime, how a few hackers would undermine Western civilization. They wanted fiction that could speak to the sense of joy and power that computers gave them.”

That sense of joy had been replaced, in Shiner’s estimation, by “power fantasies” (think only of The Matrix, in which Reeves’s moonlighting hacker becomes a reality-bending god), which offer “the same dead-end thrills we get from video games and blockbuster movies” (enter, in due time, the video games and blockbuster movies). Where early cyberpunk offerings rooted through the scrap heap of genre, history, and futurist prognostication to cobble together a genre that felt vital and original, its modern iterations have recourse only to the canon of cyberpunk itself, smashing together tropes, clichés, and old-hat ideas that, echoing Gibson’s complaint, feel pathetically unoriginal.

As Refused (in their pre-computer game rock band iteration) put it on the intro to their 1998 record The Shape of Punk to Come: “They told me that the classics never go out of style, but . . . they do, they do.”

Blade Ran

The word was minted by author Bruce Bethke, who gave the title “Cyberpunk” to a short story about teenage hackers, written in 1980 and published in 1983. But cyberpunk’s origins can be fruitfully traced back to 1968, when Philip K. Dick published Do Androids Dream of Electric Sheep?, a novel that updated the speculative fiction of Isaac Asimov’s Robot series for the psychedelic era. It’s ostensibly a tale about a bounty hunter named Rick Deckard chasing rogue androids in a post-apocalyptic San Francisco circa 1992. But like Dick’s better stories, it used its ready-made pulp sci-fi premise to flick at bigger questions about the nature of sentience and empathy, playing to a readership whose conceptions of consciousness were expanding.

Ridley Scott brought Dick’s story to the big screen with a loose 1982 film adaptation, Blade Runner, which cast Harrison Ford as Deckard and pushed its drizzly setting ahead to 2019. With its higher-order questions about what it means to think, to feel, and to be free—and about who, or what, is entitled to such conditions—Blade Runner effectively set a cyberpunk template: the billboards, the neon, the high-collared jackets, the implants, the distinctly Japanese-influenced mise-en-scène extrapolated from Japan’s 1980s-era economic dominance. William Gibson famously saw Blade Runner in theaters while writing Neuromancer and suffered something of a crisis of conscience. “I was afraid to watch Blade Runner,” Gibson told The Paris Review in 2011. “I was right to be afraid, because even the first few minutes were better.” Yet Gibson deepened the framework established by Blade Runner with a crucial invention that would come to define cyberpunk as much as drizzle and dumpsters and sky-high billboards. He added another dimension—literally.

Henry Case, Gibson establishes early on, “lived for the bodiless exultation of cyberspace.” As delineated in Neuromancer, cyberspace is an immersive, virtual dimension. It’s a fully realized realm of data—“bright lattices of logic unfolding across that colorless void”—which hackers can “jack into” using strapped-on electrodes. That the matrix is “bodiless” is a key concept, both of Neuromancer and of cyberpunk generally. It casts the Gibsonian idea of cyberspace against another of the genre’s hallmarks: the high-tech body mods flogged by Keanu Reeves during the Cyberpunk 2077 E3 demo.

Early in Neuromancer, Gibson describes these sorts of robotic, cyborg-like implants and augmentations. A bartender called Ratz has a “prosthetic arm jerking monotonously” that is “cased in grubby pink plastic.” The same bartender has implanted teeth: “a webwork of East European steel and brown decay.” Gibson’s intense, earthy descriptions of these body modifications cue the reader into the fundamental appeal of Neuromancer’s matrix, in which the body itself becomes utterly immaterial. Authors from Neal Stephenson (Snow Crash) to Ernest Cline (Ready Player One, which is like a dorkier Snow Crash, if such a thing is conceivable) further developed this idea of what theorist Fredric Jameson called “a whole parallel universe of the nonmaterial.”

As envisioned in Stephenson’s Snow Crash, circa 1992, this parallel universe takes shape less as some complex architecture of unfathomable data, and more as an immersive, massively multiplayer online role-playing game (MMORPG). Stephenson’s “Metaverse”—a “moving illustration drawn by [a] computer according to specifications coming down the fiber-optic cable”—is not a supplement to our real, three-dimensional world of physical bodies, but a substitute for it. Visitors navigate the Metaverse using virtual avatars, which are infinitely customizable. As Snow Crash’s hero-protagonist, Hiro Protagonist (the book, it should be noted, is something of a satire), describes it: “Your avatar can look any way you want it to . . . If you’re ugly, you can make your avatar beautiful. If you’ve just gotten out of bed, your avatar can still be wearing beautiful clothes and professionally applied makeup. You can look like a gorilla or a dragon or a giant talking penis in the Metaverse.”

Beyond Meatspatial Reasoning

The Metaverse seems to predict the wide-open, utopian optimism of the internet: that “sense of joy and power” Lewis Shiner was talking about. It echoes early 1990s blather about the promise of a World Wide Web free from corporate or government interests, where users could communicate with others across the globe, forge new identities in chat rooms, and sample from a smorgasbord of lo-res pornographic images. Key to this promise was, to some extent, forming new identities and relationships by leaving one’s physical form behind (or jacked into a computer terminal in a storage locker somewhere).

Liberated from such bulky earthly trappings, we’d be free to pursue grander, more consequential adventures inside what Gibson, in Neuromancer, calls “the nonspace of the mind.” Elsewhere in cyberpunk-lit, bodies are seen as impediments to the purer experience of virtuality. After a character in Cory Doctorow’s Down and Out in the Magic Kingdom unplugs from a bracingly real simulation immersing him in the life of Abraham Lincoln, he curses the limitations of “the stupid, blind eyes; the thick, deaf ears.” Or, as Case puts it in Neuromancer, the body is little more than “meat.”

In Stephenson’s Metaverse, virtual bodies don’t even obey the tedious laws of physics that govern our non-virtual world. In order to manage the high amount of pedestrian traffic within the Metaverse and prevent users from bumping around endlessly, the complicated computer programming permits avatars simply to pass through one another. “When things get this jammed together,” Hiro explains, “the computer simplifies things by drawing all of the avatars ghostly and translucent so you can see where you’re going.” Bodies—or their virtual representations—waft through one another, as if existing in the realm of pure spirit. There is an almost Romantic bent here (Neuromancer = “new romancer”). If the imagination, to the Romantics, opened up a gateway to deep spiritual truth, here technology serves much the same purpose. Philip K. Dick may have copped something of the 1960s psychedelic era’s ethos of expanding the mind to explore the radiant depths of the individual soul, spirit, or whatever, but cyberpunk pushed that ethos outside, creating a shared mental non-space accessible by anyone with the means—a kind of Virtual Commons, or what Gibson calls a “consensual hallucination.”

Yet outside this hallucination, bodies still persist. And in cyberpunk, the physical configurations of these bodies tend to express their own utopian dimension. Bruce Bethke claimed that “cyberpunk” resulted from a deliberate effort to “invent a new term that grokked the juxtaposition of punk attitudes and high technology.” Subsequent cyberpunk did something a bit different, not juxtaposing but dovetailing those “punk attitudes” with high-tech. (“Low-life, high-tech” is a kind of a cyberpunk mantra.) Neuromancer’s central heist narrative gathers a cast of characters—hacker Henry Case, a cybernetically augmented “Razorgirl” named Molly Millions, a drug-addled thief, a Rastafari pilot—that can be described as “ragtag.” The major cyberpunk blockbusters configure their anti-authoritarian blocs along similar lines.

In Paul Verhoeven’s cyberpunk-y action satire Total Recall, a mighty construction worker-cum-intergalactic-spy (Arnold Schwarzenegger) joins a Martian resistance led by sex workers, physically deformed “mutants,” little people, and others whose physical identities mirror their economic alienation and opposition to a menacing corporate-colonial overlord named Cohaagen.

In Johnny Mnemonic, Keanu Reeves’s businesslike “mnemonic courier” (someone who ferries information using computer implants embedded in the brain) is joined by a vixenish bodyguard (Dina Meyer’s Jane, herself a version of Neuromancer’s Molly Millions), a burly doctor (Henry Rollins), and a group of street urchin-like “Lo-Teks” engaged in an ongoing insurgency against the mega-corporation Pharmakom. Both Mnemonic and Recall rely on cheap twists, in which a figure integral to the central intrigue turns out to be something ostensibly less- or other-than-human. Total Recall has Kuato, a half-formed clairvoyant mutant who appears as a tumorous growth wriggling in the abdomen of his brother. Even more ludicrously, Mnemonic’s climax reveals that the Lo-Teks’ leader is not the resourceful J-Bone (Ice-T), but rather Jones, a computer-augmented dolphin. In cyberpunk, the body’s status as “dead meat” to be transcended through computer hardware and neurological implantation offers a corollary sense of freedom.

The idea of the cybernetic body as a metaphor for the politicized human body was theorized in 1985, cyberpunk’s early days, by philosopher and biologist Donna Haraway. Dense and wildly eclectic, by turns exciting and exasperating, Haraway’s “Cyborg Manifesto” is situated as an ironic myth, designed to smash existing oppositions between science and nature, mind and body. Haraway was particularly interested in developing an imagistic alternative to the idea of the “Goddess,” so common to the feminism of the time. Where the Goddess was backward-looking in orientation, attempting to connect women to some prelapsarian, pre-patriarchal state of nature, the cyborg was a myth of the future, or at least of the present. “Cyborg imagery,” she writes, “can suggest a way out of the maze of dualisms in which we have explained our bodies and our tools to ourselves.” Haraway visualizes the cyborg, part machine and part flesh, as a being that threatens existing borders and assumes responsibility for building new ones.

Though they are not quite identical concepts, Haraway’s figure of the cyborg and the thematics of cyberpunk share much in common. A character like Gibson’s Molly Millions, for example, could be described as a cyborg, even if she is still essentially gendered as female (the gender binary was one of the many “dualisms” Haraway believed the cyborg could collapse). Cyborgs and cyberpunk are connected in their resistance to an old order, be it political and economic (as in Neuromancer, Johnny Mnemonic, etc.) or metaphysical (as in Haraway). The cyborg and the cyberpunk both dream of new futures, new social relationships, new bodies, and whole new categories of conceptions and ways of being.

The historical problem is that, for the most part, these new categories and these new relationships failed to materialize, as cyberpunk’s futures were usurped and commodified by the powers they had hoped to oppose.

Not Turning Japanese

In an introduction to the Penguin Galaxy hardcover reissue of Neuromancer, sci-fi-fantasy writer Neil Gaiman ponders precisely how the 1980s cyberpunk visions came to shape the future. “I wonder,” he writes, “to what extent William Gibson described a future, and how much he enabled it—how much the people who read and loved Neuromancer made the future crystallize around his vision.”

It’s a paradox that dogs most great sci-fi writers, whose powers for Kuato-style clairvoyance have always struck me as exaggerated. After all, it’s not as if, say, Gene Roddenberry literally saw into the future, observed voice-automated assistants of the Siri and Alexa variety, and then invented his starship’s speaking computers. It’s more that other people saw the Star Trek technology and went about inventing it. The same is true of Gibson’s matrix or Stephenson’s Metaverse, or the androids of Asimov and Dick. And the realization of many technologies envisioned by cyberpunk—including the whole concept of the internet, which now operates not as an escapist complement to reality, but an essential part of its fabric, like water or heat—has occurred not because of scrappy misfits and high-tech lowlifes tinkering in dingy basements, but because of gargantuan corporate entities. Or rather, the cyberpunks have become the corporate overlords, making the transition from the Lo-Teks to Pharmakom, from Kuato to Cohaagen. In the process, the genre and all its aspirations have been reduced to so much dead meat. This is what Shiner was reacting to when, in 1991, he renounced his cyberpunk affiliations, or when Bruce Bethke, who coined the term, began referring to “cyberpunk” as “the c-word.”

The commodification of the cool is a classic trick of capitalism, which has the frustrating ability to mutate faster than the forces that oppose it. Yet even this move toward commodification and corporatization is anticipated in much cyberpunk. “Power,” for Neuromancer’s Henry Case, “meant corporate power.” Gibson goes on: “Case had always taken it for granted that the real bosses, the kingpins in a given industry, would be both more and less than people.” For Case (and, it follows, Gibson, at least at the time of his writing), this power had “attained a kind of immortality” by evolving into an organism. Taking out one-or-another malicious CEO hardly matters when lines of substitutes are waiting in the wings to assume the role.

It’s here that cyberpunk critiques another kind of body. Not the ruddy human form that can be augmented and perfected by prosthetics and implants, but the economic body. Regarding the economy as a holistic organism—or a constituent part of one—is an idea that dates back at least as far as Adam Smith’s “invisible hand.” The rhetoric of contemporary economics is similarly biological. An edifying 2011 essay in Al Jazeera by Paul Rosenberg looked at the power of such symbolic conceptions of the economy. “The organic metaphor,” Rosenberg writes, “tells people to accept the economy as it is, to be passive, not to disturb it, to take a laissez faire attitude—leave it alone.”

This idea calls back to another of cyberpunk’s key aesthetic influences: the “body economic” of Japan in the 1980s. From the 2019 setting of 1982’s Blade Runner, to the conspicuous appearance of yakuza goons in Gibson’s stories, to Stephenson’s oddly anachronistic use of “Nipponese” in Snow Crash, cyberpunk’s speculative futures proceed from the economic ascendancy of 1980s Japan, and the attendant anxiety that Japan would eventually eclipse America as an economic powerhouse. This idea, that Japan somehow is (or was) the future, has persisted all the way up to Cyberpunk 2077’s aesthetic template, and its foregrounding of villains like the shadowy Arasaka Corporation. It suggests that, even as it unfolds nearly sixty years in our future, the blockbuster video game is still obsessed with a vision of the future past.

Indeed, it’s telling that as the robust Japanese economy receded in the 1990s, its burly body giving up the proverbial ghost, Japanese cinema became obsessed with avenging spirits channeled into the present by various technologies (a haunted video cassette in Hideo Nakata’s Ringu, the internet itself in Kiyoshi Kurosawa’s Kairo, etc.). But in the 1980s, Japan’s economic and technological dominance seemed like a foregone conclusion. In a 2001 Time article, Gibson called Japan cyberpunk’s “de facto spiritual home.” He goes on:

I remember my first glimpse of Shibuya, when one of the young Tokyo journalists who had taken me there, his face drenched with the light of a thousand media-suns—all that towering, animated crawl of commercial information—said, “You see? You see? It is Blade Runner town.” And it was. It so evidently was.

Gibson’s analysis features one glaring mistake. His insistence that “modern Japan simply was cyberpunk” is tethered to its actual history as an economic and technological powerhouse circa the 1980s, and not from its own science-fictional preoccupations. “It was not that there was a cyberpunk movement in Japan or a native literature akin to cyberpunk,” he writes. Except there so evidently was.

The Rusting World

Even beyond the limp, Orwellian connotations, 1984 was an auspicious year for science fiction. There was Neuromancer, yes. But 1984 also saw the first collected volume of Akira, a manga written and illustrated by Katsuhiro Otomo. Originally set, like Blade Runner, in 2019, Akira imagines a cyberpunk-y Neo-Tokyo, in which motorcycle-riding gangs do battle with oppressive government forces. Its 1988 anime adaptation was even more popular, in both Japan and the West. (The film’s trademark cherry red motorcycle has been repeatedly referenced in the grander cyberpunk canon, appearing in Steven Spielberg’s film adaptation of Ready Player One and, if pre-release hype is to be believed, in Cyberpunk 2077 itself.) In 2018, the British Film Institute hailed Akira, accurately, as “a vital cornerstone of the cyberpunk genre.”

Japan has plenty of other, non-Akira cyberpunk touchstones. As a cinematic subgenre, Japanese cyberpunk feels less connected to the “cyber” and more to the spirit of “punk,” whether in the showcasing of actual Japanese punk rock bands (as in 1982’s Burst City) or the films’ own commitment to a rough-hewn, low-budget, underground aesthetic. Chief among the latter category of films is Shinya Tsukamoto’s Tetsuo: The Iron Man, which was shot on 16mm over a grueling year-and-a-half, mostly in and around Tetsuo actress and cinematographer Kei Fujiwara’s apartment, which also housed most of the film’s cast and crew.

Tsukamoto’s vision of human-machine hybridization is demonstrably more nightmarish than that of the Western cyberpunk classics. The film follows two characters, credited as the Salaryman (Tomorowo Taguchi) and the Guy (a.k.a. “The Metal Fetishist,” played by writer/director/producer/editor Tsukamoto himself), bound by horrifying mutations, which see their flesh and internal organs sprouting mechanical hardware.

In its own way, Tetsuo works as a cyberpunk-horror allegory for the Japanese economy. As the Salaryman and the Fetishist learn to accept the condition of their mechanization, they merge together, absorbing all the inorganic matter around them, growing enormously like a real-world computer virus or some terrifying industrial Katamari. Their mission resonates like a perverse inversion of Japan’s post-industrial promise. As Tsukamoto’s Fetishist puts it: “We can rust the whole world and scatter it into the dust of the universe.”

Like Haraway’s development of the cyborg as a metaphoric alternative to the New Age “goddess,” Tetsuo’s titular Iron Man can offer a similar corrective. If cyberpunk has become hopelessly obsessed with its own nostalgia, recycling all its 1980s bric-a-brac endlessly, then we need a new model. Far from the visions of Gibson, in which technology provides an outlet for a scrappy utopian impulse that jeopardizes larger corporate-political dystopias, Tetsuo is more pessimistic. It sees the body—both the individual physical body and the grander corpus of political economy—as being machine-like. Yet, as Rosenberg notes in his Al Jazeera analysis of economic rhetoric, it may be more useful to conceive of the economy not as a “body” or an organism but as a machine. The body metaphor is conservative, “with implications that tend toward passivity and acceptance of whatever ills there may be.” Machines, by contrast, can be fixed, greased, re-oriented. They are, unlike bodies, a thing separate from us, and so subject to our designs.

Cybernetic implants and cyborg technology are not some antidote to corporate hegemony. The human does not meld with technology to transcend the limitations of humanity. Rather, technology and machinery pose direct threats to precisely that condition. We cannot, in Tsukamoto’s film, hack our way to a better future, or technologically augment our way out of collective despair. Technology—and the mindless rush to reproduce it—are, to Tsukamoto, the very conditions of that despair. Even at thirty years old, Tetsuo offers a chilling vision not of the future, or of 1980s Japan, but of right now: a present where the liberating possibilities of technology have been turned inside-out; where hackers become CEOs whose platforms bespoil democracy; where automation offers not the promise of increased wealth and leisure time, but joblessness, desperation, and the wholesale redundancy of the human species; where the shared hallucination of the virtual feels less than consensual.

There’s nothing utopian about the model of cyberpunk developed in Tetsuo: The Iron Man. It is purely dystopian. But this defeatism offers clarity. And in denying the collaborative, collectivist, positive vision of a technological future in favor of a vision of identity-destroying, soul-obliterating horror, Tsukamoto’s stone-cold classic of Japanese cyberpunk invites us to imagine our own anti-authoritarian, anti-corporate arrangements. The enduring canon of American-style cyberpunk may have grown rusty. It has been caught, as Bethke put it in his genre-naming story, “without a program.” But the genre’s gnarlier, Japanese iterations have plenty to offer, embodying sci-fi’s dream of imagining a far-off future as a deep, salient critique of the present. It is only when we accept this cruel machinery of the present that we can freely contemplate how best to tinker with its future.

Left to peddle such a despairing vision in a packed-out L.A. convention center, even cyberpunk’s postmortem poster boy Keanu Reeves would be left with little to say but a resigned, bewildered, “Whoa . . .”

Automatons – Life Inside the Unreal Machine

By Kingsley L. Dennis

Source: Waking Times

/ɔːˈtɒmət(ə)n/

noun

a moving mechanical device made in imitation of a human being.

a machine which performs a range of functions according to a predetermined set of coded instructions.

used in similes and comparisons to refer to a person who seems to act in a mechanical or unemotional way.

“Don’t you wish you were free, Lenina?”

“I don’t know what you mean. I am free. Free to have the most wonderful time. Everybody’s happy nowadays.”

He laughed. “Yes, ‘Everybody’s happy nowadays.’ We have been giving the children that at five. But wouldn’t you like to be free to be happy in some other way, Lenina? In your own way, for example; not in everybody else’s way.”

“I don’t know what you mean,” she repeated.

Aldous Huxley, Brave New World

Are we turning into a mass of unaware sleepwalkers? Our eyes are seemingly open, and yet we are living as if asleep, the dream becoming our waking lives. It seems that more and more people, in the highly technologized nations at least, are in danger of succumbing to an epidemic of uniformity. People follow cycles of fashion and wear stupid clothes when they think it is the ‘in thing’; hyper-budget films take marketing to a whole new level, forcing parents to rush out to buy the merchandise because their kids are screaming for it. And if one child in the class doesn’t have the latest toy like all their classmates, then they are ostracized for this lack. Which means that poor mummy and daddy have to make sure they get their hands on these gadgets. Put the two items together – zombie-like sleepwalking and uniformity – and what do you get? Welcome to the phenomenon of Black Fridays, which have become the latest manifestation of national Zombie Days.

Unless you’ve been living in a cave somewhere (or living a normal, peaceful existence), you will know what this event is – but let me remind you anyway of what a Black Friday is. It is a day when members of the public are infected with the ‘must buy’ and ‘act like an idiot’ virus that turns them into screaming, raging hordes banging on the doors of hyper-market retailers hours before they open. Many of these hordes sleep outside all night to get early entry. Then, when the doors are finally opened, they go rushing in, fighting and screaming as if re-enacting a scene from Game of Thrones. Those who survive the fisticuffs come away with trolleys full of boxes too big to carry. This display of cultural psychosis, generally known as idiocracy, is also a condition nurtured by societies based on high consumption with even higher inequalities of wealth distribution. In other words, a culture conditioned to commodity accumulation will buy with fervour when things are cheap; although conditioned to buy, people lack the financial means to satiate this desire. Many people suffer from a condition psychologists call ‘miswanting,’ which means that we desire things we don’t like and like things we don’t desire. What this is really saying is that we tend to ‘want badly’ rather than having genuine need. What we are witnessing in these years is an epidemic of idiocracy, and it’s propagating faster than post-war pregnancies. And yet we are programmed by our democratic societies not to think differently. In this respect, many people also suffer from a condition known as ‘confirmation bias.’

Confirmation bias is our conditioned tendency to pick and choose the information that confirms our pre-existing beliefs or ideas. Two people may look at the same evidence, yet each will interpret it according to how it fits into and validates their own thinking. That’s why so many debates go nowhere: people generally don’t wish to be moved away from ideas they have invested so much time and effort in upholding. It’s too much of a shock to realize that what we thought was true, or valid, is not the case. To lose the safety and security of our ideas would be too much for many people. It is now well understood in psychology that we like to confirm our existing beliefs; after all, it makes us feel right!

Many of our online social media platforms exploit this tendency by picking and choosing the items of news, events, and so on that their algorithms have deemed we are most likely to want to see. As convenient as it may seem, it is unlikely to be in our best interests in the long term. The increasing automation of the world around us is set to establish a new ecology in our hyperreality. We will be forced to acknowledge that algorithms and intelligent software will soon, if they aren’t already, be running nearly everything in our daily lives. Historian Yuval Noah Harari believes that ‘the twenty-first century will be dominated by algorithms. “Algorithm” is arguably the single most important concept in our world. If we want to understand our life and our future, we should make every effort to understand what an algorithm is.’1 Algorithms already follow our shopping habits, recommend products for us, recognize patterns in our online behavior, help us drive our cars, fly our planes, trade our economies, coordinate our public transport, organize our energy distribution, and a lot, lot more that we are just not really aware of. One of the signs of living in a hyperreality is that we are surrounded by an invisible coded environment, written in languages we don’t understand, making our lives more abstracted from reality.

Modern societies are adapting to universal computing infrastructures that will usher in new arrangements and relations. Of course, these are only the early years, and there is already a lot of uncertainty and unpredictability. As it is said, industrialization didn’t turn us into machines, and automation isn’t going to turn us into automatons. Which is more or less correct; after all, being human is not that simple. Yet new dependencies and relations will form as algorithms continue to create and establish what can be called ‘pervasive assistance.’ Again, it is a question of being alert so that we don’t feel compelled simply to give ourselves over to our algorithms. The last thing we want is a bunch of psychologists earning yet more money from a new disease of ‘algorithmic dependency syndrome’ or something similar.

It needs stating that by automating the world we also run the risk of being distanced from our own responsibilities. And this also implies, importantly, the responsibility we have to ourselves: to transcend our own limitations and to develop our human societies for the better. We should not forget that we are here to mature as a species, and we should not allow the world of automation to distract us from this. Literature and film have already portrayed such possibilities. One example is David Brin’s science-fiction novel Kiln People (2002), which clearly showed how automation may provide a smokescreen for people to disappear behind their surrogate substitutes; the film Surrogates (2009) explores a similar premise.

Algorithms are the new signals that code an unseen territory all around us. In a world of rapidly increasing automation and digital identities we’ll have to keep our wits about us in order to retain what little of our identities we have left. We want to make sure that we don’t get lost in our emoji messages, our smilies of flirtation; or, even worse, lose our lives in the ‘death cult’ of the selfie. Identities by their very nature are constructs; in fact, we can go so far as to call them fake. They are constructed from layers of ongoing conditioning with which a person identifies. This identity functions as a filter to interpret incoming perceptions. The limited range of perceptions available to us almost guarantees that identities fall into a knowable range of archetypes. We would be wise to remember that who we are is not always the same as what we project. And yet some people on social media are unable to distinguish their public image from their personal identity, which starts to sound a bit scary. The philosopher Jean Baudrillard, never opposed to saying what he thought, stated it another way:

We are in a social trance: vacant, withdrawn, lacking meaning in our own eyes. Abstracted, irresponsible, enervated. They have left us the optic nerve, but all the others have been disabled…All that is left is the mental screen of indifference, which matches the technical in-difference of the images.2

Baudrillard would probably be the first to agree that breathing is often a disguise to make us think that someone is alive. After all, don’t we breathe automatically without thinking about it?

We must not make the human spirit obsolete just because our technological elites are dreaming of a trans-human future. Speaking of such futures, inventor and futurist Ray Kurzweil predicts that in the 2030s human brains will be able to connect to the cloud and use it just as we use cloud computing today. That is, we will be able to transfer emails and photos directly from the cloud to our brains, as well as back up our thoughts and memories. How will this futuristic scenario be possible? Well, Kurzweil says that nanobots – tiny robots constructed from DNA strands – will be swimming around in our brains. And the result? According to Kurzweil, we’re going to be funnier, sexier, and better at expressing our loving sentiments. Well, that’s okay then – nanobot my brain up! Not only will being connected to the computing cloud make us sexier and funnier humans, it will even take us closer to our gods, says Kurzweil: ‘So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world – it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.’ It’s hard to argue with such a bargain – a few nanobots in our brain to become godlike? I can imagine a lot of people will be signing up for this. There may even be a hefty monthly charge for those wanting more than 15GB of back-up headspace. Personally, I prefer the headspace that’s ad infinitum and priceless. I hope I’m not in the minority.

Looking at the choices on offer so far it seems that there is the zombie option, which comes with add-on idiocracy (basic model), and the trans-human nanobot sexy-god upgrade (pricy). But then let’s not forget that in an automated world it may be the sentient robots that come out on top. Now, that would be an almost perfect demonstration of a simulation reality.

Life in Imitation

There are those who believe that self-awareness is going to be the end game of artificial intelligence – the explosive ‘wow factor’ that really throws everything into high gear. The new trend is deep machine learning, to the point where machines will program not only themselves but also other machines. Cognitive computer scientists are attempting to recapture the essence of human consciousness in the hope of back-engineering this complexity into machine code. It’s a noble endeavor, if only for its persistence. The concern here is that if machines do finally achieve sentience, then the next thing we’ll need to roll out will be machine psychologists. Consciousness, after all, comes at a price. There is no free lunch when it comes to possessing a wide-awake brain. With conscious awareness come responsibilities, such as values, ethics, morality, compassion, forgiveness, empathy, goodness, and good old-fashioned love. And I personally like the love part (gives me a squishy feeling every time).

It may not actually be the sentient robots we need to worry about; it’s the mindless ones we need to be cautious of (of course, we could say the same thing about ourselves). One of the methods used in training such robots is, in the words of their trainers, to provide them with enough ‘intrinsic motivation.’ Not only will this help the robots to learn their environments, it is also hoped that it will foster in them the attention needed to acquire sufficient situational awareness. If I were to write a science-fiction scenario on this, I would make it so that the sentient robots end up being more human than we are, and humans turn into their automated counterparts. Funny, maybe – but more in the funny-bone-hurting sort of way than the laugh-out-loud variety. Or perhaps it’s already been done. It appears that we are attempting to imbue our devices with qualities we are also striving to possess for ourselves. Humans are naturally vulnerable; it is part of our organic make-up. Whatever we create may inherit those vulnerabilities. However, this is not a discussion of the pros and cons of smart machines and artificial intelligence (there are many more qualified discussions of that huge topic).

While we are creating, testing, worrying, or arguing over machines and their like we are taking our attention away from the center – ourselves. The trick of surviving in the ‘unreal machine’ of life is by becoming more human, the very antithesis of the robotic. Technology can assist us in interacting and participating to a better degree with our environments. The question, as always, is the uses to which such tools are put – and by whom. Such tools can help us realize our dreams, or they can entrap us in theirs. Algorithms, smart machines, intelligent infrastructure, and automated processes: these are all going to come about and be a part of our transforming world. And in many respects, they will make life more comfortable for us. Yet within this comfort zone we still need to strive and seek for our betterment. We should not allow an automated environment to deprive us of our responsibility, and need, to find meaning and significance in our world. Our technologies should force us to acknowledge our human qualities and to uplift them, and not to turn us into an imitation of them.

Another metaphor for the simulated ‘robotic’ creature is the golem. The golem legend speaks of a creature fashioned from clay, a Cabbalistic motif that has appeared frequently in literary and cinematic form and echoes through works such as Frankenstein. The Cabbalistic automaton that is the golem, whose name means ‘unformed,’ has often been used to show the struggle between mechanical limitation and human feeling. This struggle depicts the tension between cogs and consciousness; between entrapment in matter and the spirit of redemption and liberation. It is a myth that speaks of the hubris of humanity fashioning its own creatures and ‘magically’ bestowing life upon them. It is the act of creating a ‘sacred machine’ from the parts and pieces of a material world and then imbuing it with human traits. And through this human likeness they are required to fulfil human chores and work as slaves. Sound familiar? The Cabbalistic humanoid – the sentient robot – is forever doomed, almost like the divine nature of Man trapped within the confines and limitations of a material reality. They represent the conflict of being torn between a fixed fate and freedom.

Our material reality may be the ultimate unreal machine. We are the cogs, the clay golem, the imperfect creature fashioned by another. Our fears of automation may only be a reflection of our own automation. We struggle to express some form of release whilst unaware that the binds that mechanize us are forever tightening.

We have now shifted through the zombie-idiocracy model (basic) and the trans-human nanobot sexy-god model (pricy), to arrive at the realization that it is we – and not our sentient robots – who are likely to be the automatons (tragic). And this is the biblical fall from grace; the disconnection from our god(s). We have come loose from Central Source and we have lost our way.

We are now living in the hyperreal realm where zombies, cyborgs, and golem robots all reside – but it is not the place for the genuine human. Things are going to have to change. Not only do we have to retain our humanity, we also must remain sane. With our continuing modern technologies, our augmented reality and bioengineering, the difference between fiction and reality will blur even further. And this blurring is likely to become more prominent as people increasingly try to reshape reality to fit around their own imaginative fictions. Staying sane, grounded, and balanced is going to be a very, very good option for the days to come.

We are going to be sharing our planetary space with the new smart machines. I am reminded of the Dr. Seuss book Horton Hears a Who! that has the refrain, ‘a person’s a person no matter how small.’ Size doesn’t count – but being human does. And staying human in these years will be the hard task allotted to us.

Phantom Performances – The Rise of the Spectacle

By Kingsley L. Dennis

Source: Waking Times

/ˈspɛktək(ə)l/

noun

a visually striking performance or display

an event or scene regarded in terms of its visual impact

“Now the death of God combined with the perfection of the image has brought us to a whole new state of expectation. We are the image.” ~John Ralston Saul, Voltaire’s Bastards

“Magical thinking is the currency not only of celebrity culture, but also of totalitarian culture.” ~Chris Hedges, Empire of Illusion

Welcome to the spectacle. Or perhaps I should say the kind of spectacle that has become the face of the entertainment that pervades our westernized cultures. The spectacle succeeds not so much by fooling us into believing its lies as real, but because it is we who ask to be fooled. We seek to suspend our sense of reality, to pursue a space of escape. The spectacle pulls us in because we lend our willingness to its agenda. If we are honest, in this post-truth age, we will admit to living in an age of spectacle. And it is from this that many of us receive our interpretation of reality. Since the middle of the twentieth century the ‘western spectacle’ has taken the form of media advertisement and propaganda. We may think that we’ve only recently arrived at the age of the spectacle, where Disneylandification is becoming the norm, Super Bowls are interspersed with scantily clad singers, and TV programs appear in the slots between advertisers. Yet the whole spectacle show has been a form of function creep ever since telecommunications first emerged as a social phenomenon. The image has been with humanity since our very dawn: from cave paintings to hieroglyphics to cuneiform clay tablets. The major difference is that today the spectacle of the image has not only gone global, it has also gotten inside our heads.

Western cultures especially (and the US specifically) have now made the image, the spectacle, and hence the illusion so grand, so vivid, and so persuasively realistic that they are becoming our basis of reality. We swing from one illusion to the alternative, which is still yet another grand spectacle; just as we swing from the political left to the right, believing each side is distinctly different. Yet each is a part of the same bubble that customizes our lives – they form a part of our news, our heroes, our tragedies, and our dreams. We now serve a mosaic of ideals carefully crafted as a patchwork of phantom performances. Nothing is ever real anymore except the painful extremes that pervade our daily existence: the violence, the suffering, the deprivation, the inequality, the disease. Only these fragments that create great pain become the real, and from these many of us seek refuge in a plenitude of diversions, distractions, and triviality.

Western civilization has chosen to play itself out upon a grand stage where the performance – of invented storylines and scripts – runs the show. We move through social realities that are an entanglement of signs, virtual connections, and social media status. It’s all about who is going to be the next ‘influencer.’ We are encouraged to project back into the world our entertainment-mediatized fantasies. People begin to act out their imaginary landscapes, often in violent and distorted ways, as young students massacre their classmates before going to eat at McDonald’s. This is the hyperreal that distorts a stable reality, making it harder to gain a grounded perspective on things. People are increasingly being guided by the false totems of media-militarized entertainment.

The media spectacle gives us our modern guiding images. This is similar to how, in the Middle Ages, images depicted in stained-glass windows and paintings of religious torment or salvation acted to control and influence the social behavior of our ancestors. For many of us the white-bearded god above is dead, so we have media depictions of heroes, adventurers, MacGyvers, celebrity cosmetic makeovers, beauty pageants, talk shows, and reality television to be our social guides. An illusory sensate reality has been erected that runs on pseudo-lives and phantom performances. Such phantom performances mask our personal failures and conveniently hide them behind a curtain of the unreal. People prefer to watch the rich and famous on television rather than face the domestic unhappiness of their own lives. Why have ice when you can have bubblegum-flavored ice-cream?

Luckily for those of us who live in the west, we inhabit a world of easy correction where we can make ourselves better if we buy certain products, ingest certain foods, and hang out in the right yoga gyms. For every situation there is seemingly a commercial solution. We have not been abandoned, after all. In the realm of hyperreality, our fantasies are no longer an impediment to success. On the contrary, our fantasies are the portals through which we enter. All we need is for the world of the media to give us our dream. Everybody has talent, as the reality shows tell us – ‘Britain’s Got Talent,’ ‘America’s Got Talent’ – in fact, we’ve all got talent! We are all of us hidden, unique performers, and the world ‘out there’ is begging for our arrival. This is not to be confused with the manipulation by greedy commercial enterprises that are ready to discard you as soon as your ‘talent’ no longer sells.

Yet the truth of the matter is that the spectacle of celebrity culture seeks commodities, not real individuals or souls. It doesn’t want us to seek any form of transcendence, illumination, or real growth. It is a world that seeks only those who feed the phantom and encourage others to do the same. It is the ‘real’ that gets pushed into a black hole – to become a figment of the imagination, whilst imaginary dreams take its place. Celebrity culture thrives on the very lack of inner reflection. There is no ‘going within’ unless it is a form of medication going down our throats. If we are brutally honest, the celebrity spectacle is an ugly specter that can be as cruel as it is superficial.

The Spectacle of Celebrity Culture

No one achieves celebrity status on their own. It is a stage performance that requires a horde of cultural enablers: media, marketers, promoters, agents, handlers, and a host of hungry and gullible people. It is a veritable stage of actors, each person in it to gain something for themselves. They seek attention, satisfaction, fame, wealth, or a combination of these. Celebrity culture has come to dominate how many of us define our sense of belonging. It has come to define how we relate to the world around us, and in this respect has disfigured our notions of social belonging and community. Celebrity culture funds and feeds the movies inside our heads as we invent our roles and behavior. It is a culture in which very few participants are even real for a day.

We idolize celebrities and often project them as idealized forms of ourselves. And yet through this substitution we move further away from any real self-actualization. The transcendent – the Real – does not do substitutes. By throwing our fantasies onto others we are diminishing our own power. In the words of one serious journalist,

We are chained to the flickering shadows of celebrity culture, the spectacle of the arena and the airwaves, the lies of advertising, the endless personal dramas, many of them completely fictional, that have become the staple of news, celebrity gossip, New Age mysticism, and pop psychology …in contemporary culture the fabricated, the inauthentic, and the theatrical have displaced the natural, the genuine, and the spontaneous, until reality itself has been converted into stagecraft. 1

We are subtly pushed through the well-structured stagecraft whilst all the time thinking that it is real. Our contemporary ‘death of the gods’ has been replaced by a divine adoration of celebrities and celebrity culture. Celebrity items, like holy relics, are paraded, idolized, and sold for vast sums. People rush for autographs, only to sell them later on eBay to make an unhealthy profit. Celebrity personal possessions are sold off at prestigious auction houses for astronomical prices, so aging people can wear the clothes of their idols. The glitzy suits that Elvis wore on stage in Las Vegas; or the dress that Marilyn Monroe wore to show her knickers to the world above a subway vent. Everything is up for grabs – the profane is made sacred, and then sacrificed as celebrity talismans. It all engenders a performance of hysteria, leading sometimes to stalking, to online ‘trolling,’ or to celebrity private photos being hacked and shared online. It’s happened to Emma Watson, Jennifer Lawrence, Kate Upton, Jessica Alba, Kate Hudson, Scarlett Johansson…and the list goes on, and on, and on.

The world of celebrity culture thrusts us into a moral void. People are valued by their appearance and their skin-deep beauty rather than their humanity. Such a culture focuses upon onanistic desire and self-gratification. The cult of self ‘has within it the classic traits of psychopaths: superficial charm, grandiosity, and self-importance; a need for constant stimulation, a penchant for lying, deception, and manipulation, and the inability to feel remorse or guilt.’2 The cult of self also promotes the right to get whatever we wish, and celebrity media plays into this, often at the cost of the celebrity, who suffers from social media harassment and online trolling. Celebrity public life is not a sacred space; instead, it has become a theatre of performance that is open to all spectators. And those spectators who surround themselves with celebrity culture tend to live in the present, fed by an endless stream of packaged information. They live by credit promises, ignorant of the future prospect of unmanageable debt. They are hostage to a culture that keeps them enthralled, like a television commercial replete with pleasing jingles. They navigate their purchases through well-known brands, eyeing the famous logos as guides. It is an image-saturated reality, bright and tantalizing, offering comfort and satisfaction on all levels – until the credit runs out. Then the person becomes an outlaw to the very system that fattened them up like foie gras ducks.

These are the trivial diversions that for many are necessary, and which exist in cultures that prize shallow entertainment above substance. We may wonder whether consumerist celebrity culture is compensation for the loss of our true freedom regarding the human spirit and our well-being. And celebrities too are often trapped within their own fairy-tale prisons. They are skillfully controlled by their handlers and pushed in front of the media – all this to satisfy the insatiable appetites of the thirsty spectators who swarm upon celebrity culture. We are tantalizingly shown that even we, the humble spectators, can triumph in fame through the lens of reality television. The celebrity machinery oils itself on the media creation of third- and fourth-rate celebrities who have their fifteen minutes of fame – crammed together on desert islands, stuffing insects into their mouths as they bad-mouth their once beloved ‘best friend’ and vote them off the show. Reality survival, it seems, comes at a cost. And then, when they finally emerge into the ‘real world’ of the hyperreal, they throng and mingle with other reality stars under the glare of the media spotlight in the vain hope that together they can populate an illusory world of celebrity.

The world of reality television is another limb on the body of phantom performance. In the last decade a multitude of reality shows have cropped up on our television screens, and they all have one thing in common: they involve being constantly watched. Popular shows such as Big Brother put strangers together to live under round-the-clock surveillance. These strangers are even videoed in their beds as they sleep or fondle and kiss other contestants. Sex lives are ogled alongside the tears and on-screen breakdowns. Then the television psychologists are wheeled out to offer ‘expert commentary’ on the contestants’ states for mass consumption. Underneath all this glamour and glitz is the subtle message that intrusive surveillance is a normal feature of contemporary societies. In fact, it even masquerades as something cool that can be shared online, and which can make us famous. The brute reality, however, is that such reality shows normalize what would otherwise be blatantly unconstitutional intrusion, making surveillance not only routine but a potentially enjoyable part of our modern lives. We are being conditioned into monitoring and sharing our own lives for others to see. Our phantom performances can make any one of us into an enviable star.

Social media is now rife with home-grown videos in which everyone from toddler to teenager to retiree is making their performances visible to the image-hungry collective. Selfies, too, are the fashionable new rage as we perform in front of ourselves. This trend has become so pervasive that the number of selfie-related deaths has been increasing each year. In 2015 more people died from taking selfies than from shark attacks.[i] A dedicated Wikipedia page has been established to record some of the ongoing ‘selfie deaths.’ Here are a few examples:

Two young men died in the Ural Mountains after they pulled the pin from a live hand grenade to take a selfie. The phone with the picture remained as evidence of the circumstance of their deaths. (Russia, January 2015)

An 18-year-old died when she attempted to take the “ultimate selfie”, posing with a friend on top of a train in the north-eastern Romanian city of Iași; her leg touched a live wire overhead and she was electrocuted with 27,000 volts. (Romania, May 2015)

A 19-year-old from Houston died after trying to take an Instagram selfie while holding a loaded gun to his head. He accidentally fired the gun and shot himself in the throat. (USA, September 2015)

A 17-year-old student, Andrey Retrovsky from Vologda, Russia, fell to his death attempting to take a selfie while hanging from a rope from a nine-story building. The rope snapped. Retrovsky was known for taking ‘extreme’ selfies and posting them to his Instagram account. (Russia, September 2015)

Selfie deaths, it seems, are global – and not a rare occurrence. Our phantom performances come at a cost. In a world where the image is iconic, more and more people are losing themselves in a reality where a sense of achievement comes from catching the ‘ultimate selfie.’

The drive for inner fulfilment, transcendence, and growth has been waved aside in favor of the pixelated image. We fear not being seen. We dread being anonymous. Even being a spectral ghost is preferable to being dead.

We Are the Image

The new perspective on the world is pixelated. We are awash with images that lack substance yet are routinely fetishized as iconic. Signs lack immanence; they are fleeting and transient like never before. That is why corporations spend millions trying to find a logo that will stick around long enough to be implanted into our minds. Images are becoming signs of the disappearance of the real. Images are the new believable reality; now no one cares that the original behind the image has quietly slipped away. The world exists as if in a play of phantom appearances. The image has taken center stage within the space of the new real. We are now the image.

Yet the danger here is that in being given the image with its glamour and glitz we are in return giving up the critical and intellectual tools that help us cope with a complex world. Where once we had the faculty of separating illusion from reality, we now have a simplified hyperreal world where everything can be explained away by post-truth platitudes. Does it even matter anymore that Las Vegas, with its mock Eiffel Tower and its pseudo-canals of Venice, is far from the reality of France or Venice? How many people care? Or that the fantasy worlds within the various Disney theme parks are merging with the entertainment-saturated lives outside? Would it truly matter if we were all living within a controlled environment, as depicted in the film The Truman Show? Or maybe, just maybe, such films are actually trying to tell us something – to wake us up?

The danger now is that our cultural spectacles – our celebrity culture and spectral images – are making any other alternative seem dull to us. It may be that in an age of simplified gratification any complex reality is boring. What the ‘real’ presents us with may no longer be enough. In its place we are perhaps seeking a false magic.

We have lost touch with that essential something that can work like magic in our lives. As one thinker recently stated:

We live in changing times whereby humanity is undergoing a transformation…We need to understand phenomena at deeper levels, and not just accept what we are told, or what is fed to us through well-structured social institutions and channels. We must learn to accept that our thinking is a great tangible spiritual force for change. 2

The notion that our ideas, our vision, our projections onto the world can be a ‘great tangible spiritual force for change’ is eluding us. Never before has it been so important to trust in the power of the human spirit, and to put forth, with honesty and integrity, the innate human power. The alternative is that we slide into the slipstream of our own phantom performances – we become the image.

 

Extract from the book Bardo Times: hyperreality, high-velocity, simulation, automation, mutation – a hoax?

Endnotes 

Hedges, Chris. 2010. Empire of Illusion: The End of Literacy and the Triumph of Spectacle. New York: Nation Books, p. 15.

Gulbekian, S.E. 2004. In the Belly of the Beast: Holding Your Own in Mass Culture. Charlottesville, VA: Hampton Roads, p. 251.

[i] See http://www.telegraph.co.uk/technology/11881900/More-people-have-died-by-taking-selfies-this-year-than-by-shark-attacks.html

How to Avert a Digital Dystopia

By Jumana Abu-Ghazaleh

Source: OneZero

“What I find [ominous] is how seldom, today, we see the phrase ‘the 22nd century.’ Almost never. Compare this with the frequency with which the 21st century was evoked in popular culture during, say, the 1920s.”

—William Gibson, famed science-fiction author, in an interview on dystopian fiction.

The 2010s are almost over. And it doesn’t quite feel right.

When the end of 2009 came into view, the end of the 2000s felt like a relatively innocuous milestone. The current moment feels so much more, what’s the word?

Ah, yes: dystopian.

Looking back, “dystopia” might have been the watchword of the 2010s. Black Mirror debuted close to the beginning of the decade, and early in its run, it was sometimes critiqued for how over-the-top it all felt. Now, at the end of the decade, it’s regularly critiqued as made obsolete by reality.

And it’s not just prestige TV like Black Mirror reflecting the decade’s mood of incipient collapse. Of the 2010s’ top 10 highest-grossing films, by my count at least half involve an apocalypse either narrowly averted or, in fact, taking place (I’m looking at you, Avengers movies).

People have reasons to wallow. I get it. The existential threat of climate change alone — and seeing efforts to mitigate it slow down precisely as it becomes more pressing — could fuel whole libraries of dystopian fiction.

Meanwhile, our current tech landscape — the monopolies, the wild spread of disinformation, the sense that your most private data could go public whenever, with no recourse, all the things that risk making Black Mirror feel quaint — truly feels dystopian.

Since no one in a position to actually do something about our dystopian reality seems to be admitting it — no business leaders, politicians, or legacy media — it makes sense that you might seek the catharsis of acknowledgment from pop culture instead. And yet, the most popular end-of-the-world fiction isn’t about actual imminent threats from climate or tech. It’s about Thanos coming to snap half of life out of existence. Or Voldemort threatening to destroy us Muggles.

Maybe that kind of pop culture, which acknowledges dystopia but not the actual threats we currently face, gives us a feeling of control: Sure, Equifax could leak my social security number and face zero consequences, but there are no Hunger Games. Wow — it really could be so much worse! Maybe we enjoy watching distant, imaginary dystopias because they distract us from oncoming, real dystopias.

But let’s look at those actual potential dystopias for a moment and think about what we need to do to avert them.

I’d suggest the big four U.S. tech giants — Amazon, Facebook, Apple, Google — each have a distinct possible dystopia associated with them. If we don’t turn around our current reality, we will likely get all four — after all, for all the antagonistic rhetoric among the giants, they are rather co-dependent. Let’s look at what we might have, ahem, look forward to — unless we demand the tech giants deliver on the utopia they purportedly set out to achieve when their respective founders raised their rounds of millions. I would argue not only that we can, but that we must hold them accountable.

“Mad Max,” or, slowly then all at once: starring Apple

“‘How did you go bankrupt?’ Bill asked. ‘Two ways,’ Mike said. ‘Gradually and then suddenly.’”

—Ernest Hemingway, The Sun Also Rises.

When you think of Mad Max, you probably think of an irradiated, post-apocalyptic desert hellscape. You’re also not thinking of Mad Max.

In the original 1979 film, the apocalypse hasn’t quite yet happened. There’s been a substantial social breakdown, but things are getting worse in slow motion. There are still functioning towns. Our protagonist, Max, is a working-class cop; and while there’s reason to believe a big crash is coming, or has even begun, society is still hanging on. (It’s only in the sequels that we’re well into the post-apocalyptic landscape people are thinking of when they say “Mad Max.”)

A relatively subtle dystopia, where things gradually decline in the background, is also a good day-to-day description of a society overrun by algorithms, even without the attention-grabbing mega-scandals of a Cambridge Analytica or massive data breach. A kind of dystopia “light” — and Apple is its poster child.

After all, Apple has a genuinely better track record than some of the other tech giants on a few key privacy issues. But it’s also genuinely aware of the value of promulgating that vision of itself — and that can lead Apple users into danger.

In January, Apple purchased a multistory billboard outside the Consumer Electronics Show in Las Vegas, with this message: “What happens on your iPhone, stays on your iPhone.” Sounds great — but it’s deeply misleading, and as journalist Mark Wilson noted, Apple’s mismatch between rhetoric and behavior fuels the nightmare that is our current data security crisis:

“[iPhone] contents are encrypted by default […] But that doesn’t stop the 2 million or so apps in the App Store from spying on iPhone users and selling details of their private lives. “Tens of millions of people have data taken from them — and they don’t have the slightest clue,” says [the] founder of [the] cybersecurity firm Guardian […] The Wall Street Journal studied 70 iOS apps […] and found several that were delivering deeply private information, including heart rate and fertility data, to Facebook.” [Emphasis mine.]

A tech giant claiming it’s the path to salvation while effectively creating a trap for those who believe it sounds ironically familiar, given Apple’s famous evocation of Big Brother.

After all, when people talk about habit-forming technology in terms so terrifying they’ve convinced Silicon Valley executives to limit their children’s access to their own products, let’s be real: They’re talking about iPhones.

When academic child psychology researcher Jean Twenge talks about a possible teenage mental health epidemic fueled by social media, we know what’s at the heart of it: She’s talking about iPhones.

All those aforementioned horror stories, and a huge slice of those algorithms you’ve heard so much about, are likely first reaching you on smartphones that, with world market share above 50%, are largely, you guessed it, iPhones. (And none of these stories even mention the Apple workers overseas at facilities like Foxconn who create our iPhones and who really are living in a kind of explicit dystopia.)

What happens on your iPhone almost certainly doesn’t stay on your iPhone. But who created that surveillance capitalism running it all in the first place?

Enter Google.

Black Mirror’s “Nosedive,” or, welcome to surveillance capitalism: starring Google

“We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”

—Google’s then-CEO Eric Schmidt, in a 2011 interview.

You’ve probably heard it before: “If you’re not paying, you’re the product.” This is usually said in reference to ostensibly “free” services like Facebook or Gmail. It’s a creepy thought. And, according to Shoshana Zuboff, professor emeritus at Harvard and economic analyst of what she’s termed “surveillance capitalism,” the selling of your personal information undermines autonomy. It’s worse than you being the product: “You are not the product. You are the abandoned carcass.”

Google, according to Zuboff, is the original inventor of surveillance capitalism. In its early “Don’t Be Evil” days, the idea of accessing people’s private Google searches and selling them was considered unthinkable. Then Google realized it could use search data for targeting purposes — and it never stopped creating opportunities to surveil its users:

“Google’s new methods were prized for their ability to find data that users had opted to keep private and to infer extensive personal information that users did not provide. These operations were designed to bypass user awareness. […] In other words, from the very start Google’s breakthrough depended upon a one-way mirror: surveillance.”

Twenty years later, surveillance capitalism has become so ubiquitous that it’s hard to live in Western society without being surveilled constantly by private actors.

As far as I know, no mass popular culture has really yet captured this reality, but one small metaphor that kind of hits on its effects is a Black Mirror episode called “Nosedive.”

In “Nosedive,” everyday people’s lived experience is very clearly the picked-apart carcass for an entire economic and social order; a kind of surveillance-driven social credit score affects every aspect of your daily life, from customer service to government resources to friendships, all based on your app usage and, most creepily, how other people rate you in the app.

If surveillance capitalism has been the engine powering our economy in the background for nearly two decades, it’s now having its coming-out party. Increasingly, Google isn’t just surveilling us in private. With its “smart cities” initiatives, the company will literally be making city management decisions instead of citizens: Sidewalk Labs, a Google sister company, plans to develop “the most innovative district in the entire world” in the Quayside neighborhood of Toronto, and Google itself plans to siphon every bit of data about how Quayside residents live and breathe and move via ubiquitous monitoring sensors, data that will likely inform (for a fee, naturally) how other cities develop.

Much like Apple, Google takes pains to present itself as a conscientious corporate citizen. It might be paternalistic, or antidemocratic, but it has learned that being seen as responsive to its workers and the broader public is important to its brand, largely thanks to the courageous and persistent efforts of those workers and of consumer advocates in civil society.

Not so much with Amazon.

“Elysium,” or, dystopia for some, Prime Day for others: starring Amazon

“[The New York Times] claims that our intentional approach is to create a soulless, dystopian workplace where no fun is had and no laughter heard. Again, I don’t recognize this Amazon and I very much hope you don’t either.”

—Jeff Bezos, August 17, 2015 letter to staff after the New York Times investigation into working conditions at the company.

In 2015, Jeff Bezos felt the need to set the record straight: The New York Times was wrong about Amazon. Working there did not feel like a dystopia.

The years since have only validated the New York Times story, which focused on life for coders and executives at Amazon. Notably, when the Times and other investigative journalists have probed life for the far more numerous warehouse workers employed by Amazon, Bezos has largely stayed silent.

In fact, the further down the corporate ladder you go at Amazon, the more likely it seems that Jeff Bezos will stay quiet on any controversy. Just this month, in a report published almost exactly four years after Bezos’ “Amazon is not a dystopia” declaration, the New York Times uncovered almost a dozen previously unreported deaths allegedly caused by Amazon’s decentralized delivery network. Rather than defend itself out loud, Amazon has kept quiet while repeating the same argument in the courts: Those delivery people aren’t Amazon workers at all, and thus Amazon is not liable.

Amazon, like every major tech giant, has a key role in the dystopia of surveillance capitalism — the monopolylike market share of Amazon Web Services, and Amazon’s involvement in increasingly ubiquitous facial recognition software, represent their own deeply dystopian trends. But the most visible dystopia Amazon creates, for all to see, is dystopia in the workplace.

In many ways, Amazon is the single company that best explains the appeal of an Andrew Yang figure to a certain slice of economically alienated young voters. Speaking near Amazon’s HQ in Seattle, Yang explicitly talked about the surveillance of Amazon workers, and about how long those jobs will last in any case:

“All the Amazon employees [here] are like, ‘Oh shit, is Jeff watching me right now?’… [Amazon will] open up a fulfillment warehouse that employs, let’s call it 20,000 people. How many retail workers worked at the malls that went out of business because of Amazon? [The] greatest thing would be if Jeff Bezos just stood up one day and said, ‘Hey, the truth is we are one of the primary organizations automating away millions of American jobs.’ […] I have friends who work at Amazon and they say point-blank that ‘we are told we are going to be trying to get rid of our own jobs.’”

You can flat-out disagree with Yang’s proposed solutions, but a lot of his appeal stems from the fact that he’s diagnosing a problem that broad swaths of people don’t feel is being talked about. Yang validates his supporters’ concerns that they are, in fact, living in a dystopia of the corporate overlord variety.

In the movie Elysium, most work is done in warehouses, under constant surveillance, with workers creating the very automation systems that surveil and punish them. The movie takes place in a company-town-like setting, with a rigid class divide and no such thing as social mobility. Meanwhile, the ruling class in Elysium lives in space, having left everyone else behind to work on Earth, a planet now fully ravaged by climate change.

That might sound particularly far-fetched, but given Bezos’ explicit intention to colonize space because “we are in the process of destroying this planet,” it suddenly doesn’t feel so off the mark. And in an era when governors and mayors openly genuflect to Amazon, preemptively giving up vast swaths of democratic power for the mere possibility that Amazon might open an office there, it’s hard not to feel like we’re already in an Elysium-flavored dystopia.

Amazon has its dystopia picked out, flavor and all. But what happens when the biggest social network in the world can’t decide which dystopia it wants to be when it grows up?

Pick a dystopia — any dystopia!: starring Facebook

“Understanding who you serve is always a very important problem, and it only gets harder the more people that you serve.”

—Mark Zuckerberg, 2014 interview with the New York Times.

Ready Player One is one of the more popular recent dystopian novels.

The bleak future it depicts is relatively straightforward: In the face of economic and ecological collapse, the vast majority of human interaction and commercial activity happens over a shared virtual reality space called Oasis.

In Oasis, the downtrodden masses compete in enormous multiplayer video games, hoping to win enough prizes and gain sufficient corporate sponsorship to scrape out a decent existence. Imagine a version of The Matrix, where people choose to constantly log into unreality because actual reality has gotten so unbearably terrible, electing to let the real world waste away. Horrific.

Ready Player One is also the book that Oculus founder and former Facebook employee Palmer Luckey used to give new hires working on virtual reality, to get them “excited” about the “potential” of their work.

Sound beyond parody? In so many ways, Facebook is unique among the tech giants: It’s not hiding the specter of dystopia. It’s amplifying dystopia.

It’s hard to pick a popular dystopia Facebook isn’t invested in.

Surveillance capitalism? Google invented it, but Facebook has taken it to a whole new level with its social and emotional contagion experiments and relentless tracking of even nonusers.

1984? Sure, Facebook says, quietly patenting technology that lets your phone record you without warning.

Brave New World? Lest we forget, Facebook literally experimented with making depression contagious in 2014.

28 Days Later, or any of the various other mass-violence-as-disease horror movies like The Happening? Facebook has been used to spread mass genocidal panics far more terrifying than any apocalyptic Hollywood film.

What about the seemingly way-out-there dystopias — something like THX 1138 or a particularly gnarly Black Mirror episode where a brain can have its thoughts directly read, or even electronically implanted? It won’t comfort you to know that Facebook just acquired CTRL-Labs, which is developing a wearable brain-computer interface, raising questions about literal thought rewriting, brain hacking, and psychological “discontinuity.”

Roger McNamee, an early advisor to Zuckerberg and arguably Facebook’s most important early investor, has become blunt about it: Facebook has become a dystopia. It’s up to the rest of us to catch up.

We spent the 2010s on dystopia—let’s spend the 2020s on utopia instead

“Plan for the worst, hope for the best, and maybe wind up somewhere in the middle.” —Bright Eyes, “Loose Leaves”

People generally seem to think dystopias are possible, but utopias are not. No one ridicules you for conceiving of a dystopia.

I think part of that is because it gives us an easy out. Dystopias paralyze us. They overwhelm. They make us feel small and powerless. Envisioning dystopia is like getting married while anticipating the divorce: all we can do is make sure it’s amicable.

Is there room for a utopian counterweight? There’s not only room, there’s an urgent need if we want to look forward (as opposed to despondently) to the 22nd century. We cannot avert or undo dystopias without believing in their counterparts.

But we need to make the utopian alternative feel real, accessible, and achievable. We need to be rooting not for the lesser of two evils, but for something actually good.

Dystopias — real, about-to-unfold dystopias — have been averted before. The threat of nuclear apocalypse during the Cold War. The shrinking hole in the ozone layer (which is both distinct from, and has lessons to teach us about, the climate crisis). We didn’t land in utopia, but it was only by hitching our wagons to a utopian vision that we averted the worst.

In 2017, cultural historian Jill Lepore penned a kind of goodbye letter to dystopian fiction, calling for a renewal of utopian imagination. “Dystopia,” she lamented, “used to be a fiction of resistance; it’s become a fiction of submission.” Dystopian narratives once served as stark warnings of what might be in store for us if we do nothing, spurring us on to devise a brighter future. Today, dystopian fiction is so prevalent and comes in so many unsavory flavors that our civic imaginations are understandably confined to identifying the one we deem most likely to inevitably happen, and to come to terms with it.

But we don’t have to.

A new decade is on the way. Let’s spend the 2020s exercising our utopian imaginations — the muscles we use to envision dystopia are now all too well-developed, and a body that exercises only one set of muscles quickly grows off-balance.

Dystopias disempower. We are tiny, inconsequential — how could we do anything about them? Utopias, on the other hand, are rhetorical devices calling upon us to build. They invite our participation. Because a utopia where we don’t matter is a contradiction in terms.

Let’s envision a world where those creating algorithms are thinking not only about their reach, but also about their impact. A world in which we are not the carcass left behind by surveillance capitalism. A world in which calling for ethical norms and standards is in itself a utopian act.

Let’s spend the next decade fighting for what we actually want: A world in which the powerful few are held to a higher standard; an industry in which ethics aren’t an afterthought, and the phrase “unintended consequences” doesn’t absolve actors from the fallout of their very deliberate acts.

Let’s actualize the utopia which, ironically enough, the tech giants themselves so enthusiastically promised us when they set out to change the world.

Let’s spend this next decade asking for what we actually want.

The Future of the Spectacle

…or How the West Learned to Stop Worrying and Love the Reality Police

By CJ Hopkins

Source: Off-Guardian

If you want a vision of the future, don’t imagine “a boot stamping on a human face — for ever,” as Orwell suggested in 1984. Instead, imagine that human face staring mesmerized into the screen of some kind of nifty futuristic device on which every word, sound, and image has been algorithmically approved for consumption by the Defense Advanced Research Projects Agency (“DARPA”) and its “innovation ecosystem” of “academic, corporate, and governmental partners.”

The screen of this futuristic device will offer a virtually unlimited range of “non-divisive” and “hate-free” content, none of which will falsify or distort the “truth,” or in any way deviate from “reality.”

Western consumers will finally be free to enjoy an assortment of news, opinion, entertainment, and educational content (like this Guardian podcast about a man who gave birth, or MSNBC’s latest bombshell about Donald Trump’s secret Russian oligarch backers) without having their enjoyment totally ruined by discord-sowing alternative journalists like Aaron Maté or satirists like myself.

“Fake news” will not appear on this screen. All the news will be “authentic.” DARPA and its partners will see to that. You won’t have to worry about being “influenced” by Russians, Nazis, conspiracy theorists, socialists, populists, extremists, or whomever.

Persons of Malicious Intent will still be able to post their content (because of “freedom of speech” and all that stuff), but they will do so down in the sewers of the Internet where normal consumers won’t have to see it.

Anyone who ventures down there looking for it (i.e., such “divisive” and “polarizing” content) will be immediately placed on an official DARPA watchlist for “potential extremists,” or “potential white supremacists,” or “potential Russians.”

Once that happens, their lives will be over (i.e., the lives of the potentially extremist fools who have logged onto whatever dark web platform will still be posting essays like this, not the lives of the Persons of Malicious Intent, who never had any lives to begin with, and who by that time will probably be operating out of some heavily armed, off-the-grid compound in Idaho).

Their schools, employers, and landlords will be notified. Their photos and addresses will be published online. Anyone who ever said two words to them (or, God help them, appears in a photograph with them) will have 24 hours to publicly denounce them, or be placed on DARPA’s watchlist themselves.

Meanwhile, up where the air is clean, Western consumers will sit in their cubicles, or stagger blindly down the sidewalk like zombies, or come barrel-assing at you on their pink corporate scooters, staring down at the screens of their devices, where normal reality will be unfolding.

They will stare at their screens at their dinner tables, in restaurants, in bed, and everywhere else. Every waking hour of their lives will be spent consuming the all-consuming, smiley, happy, global capitalist Spectacle, every empty moment of which will be monitored and pre-approved by DARPA.

What a relief that will finally be, not to have to question anything, or wonder what is real and what isn’t. When the corporate media tell us the Russians hacked an election, or the Vermont power grid, or are blackmailing the president with an FSB pee-tape, or that the non-corporate media are all “propaganda peddlers,” or that the Labour Party is a hive of anti-Semites, or that some boogeyman has WMDs, or is yanking little babies out of their incubators, or gratuitously gassing them, or attacking us with crickets, or that someone secretly met with Julian Assange in the Ecuadorian embassy, or that we’re being attacked by Russian spy whales, and suddenly self-radicalized Nazi terrorists, or it’s time for the “International Community” to humanitarianly intervene because “our house is burning,” and our world is on fire, and there are “concentration camps,” and a “coup in Great Britain”…

…or whatever ass-puckering apocalyptic panic the global capitalist ruling classes determine they need to foment that day, we will know that this news has been algorithmically vetted and approved by DARPA and its corporate, academic, and government partners, and thus, is absolutely “real” and “true,” or we wouldn’t be seeing it on the screen of our devices.

If you think this vision is science fiction, or dystopian satire, think again. Or read this recent article in Bloomberg, “U.S. Unleashes Military to Fight Fake News, Disinformation.”

Here’s the lede to get you started …

“Fake news and social media posts are such a threat to U.S. security that the Defense Department is launching a project to repel ‘large-scale, automated disinformation attacks’…the Defense Advanced Research Projects Agency (DARPA) wants custom software that can unearth fakes hidden among more than 500,000 stories, photos, video and audio clips. If successful, the system after four years of trials may expand to detect malicious intent and prevent viral fake news from polarizing society…”

What could be more reassuring than the knowledge that DARPA and its corporate partners will be scanning the entire Internet for content created with “malicious intent,” or which has the potential to “polarize” society, and making sure we never see that stuff? If they can’t do it, I don’t know who can.

They developed the Internet, after all.

I’m not exactly sure how they did it, but Yasha Levine wrote a book about it, which I think we’re still technically allowed to read.

Anyway, according to the Bloomberg article, DARPA and its corporate partners won’t have the system up and running in time for the 2020 elections, so the Putin-Nazis will probably win again.

Which means we are looking at four more years of relentless Russia and fascism hysteria, and fake news and divisive content hysteria, and anti-Semitism and racism hysteria, and … well, basically, general apocalyptic panic over anything and everything you can possibly think of.

Believe me, I know, that prospect is exhausting … but the global capitalist ruling classes need to keep everyone whipped up into a shrieking apoplectic frenzy over anything other than global capitalism until they can win the War on Populism and globally implement the New Normality, after which the really serious reality policing can finally begin.

I don’t know, call me crazy, or a Person of Malicious Intent, but I think I’d prefer that boot in the face.

It’s Just An Illusion – The Management of Perception

By Kingsley L. Dennis

A New Aesthetic

By Damaris Zehner

Source: Resilience

There are all these good ideas – intensive agriculture, organic farming, permaculture, the local food movement. But why is most food still not grown this way, if it really is better? Why don’t farmers switch to sustainable land use methods? It seems to me there are at least four reasons.

First is the conservative nature of farming. Any activity that involves a large and long investment for an uncertain outcome is going to be conservative; no one wants to experiment when a year’s income is riding on the results. Farmers tend to stick to what has seemed to work. The psychology of previous investment plays a part in their choices as well. Once you’ve bought the huge combine, well, you have to use it.

Even when things don’t work so well, farmers will keep doing them if there are financial incentives to do so. This is the second reason. Government programs have tended to encourage big agribusinesses and have been less friendly to smaller, more varied farms.

Third, farmers love their machines. All Americans do. We fall every time for the promise that new technology will make our lives easier, more fun, more productive, and more sophisticated, and people with outdated technology, whether cell phones or tractors, get made fun of. Many people don’t have the time or the patience for more manual ways of working. I knew of a horse farmer who recently complained that he wouldn’t hire young men on his farm because they got impatient with the horses and, as he put it, just wanted to be roaring off with an internal combustion engine. (His workers were young women.) These young men have become habituated to the speed and power made possible by fossil fuels and get rattled when asked to move more slowly.

Finally, there is an unconscious but still powerful reason why farmers don’t want to stop spraying and switch to more natural methods of food production. It is a mistaken aesthetic that dictates how people see and judge the land around them, that tells us what looks beautiful and productive and “American” – that is, efficient, high-tech, and gleaming with the promise of the future. Perfect, undisturbed expanses of commodity crops, synchronized lines of combines churning through thousand-acre wheat fields, shiny factories, and brightly colored grocery stores are our proof that we are not a third-world nation, or Amish, or hippies – that we are still orthodox worshipers of the god of progress.

I don’t dispute the attraction of the aesthetic. Honestly, the land around here looks pretty good. Or at least it looks pretty. But the cost of those perfect fields and vast expanses of monoculture may be more than we can pay. Our aesthetics are as damaging to the environment as our greed or carelessness. So we need to move toward a new aesthetic.

Before we can do so, we need to ask ourselves: how much of the world are we responsible for tidying up? Nature is messy by our standards. A patch of disturbed earth becomes populated with a swirling mob of what we’d call weeds – dock, plantain, dandelion, mulberry, crabgrass, lamb’s quarters, and a hundred plants I don’t have a name for. And that bothers us. We spray, mow, and weed, in the process disturbing the natural succession of plants. We say that keeping our lawns, gardens, and fields as pure monoculture is more efficient and attractive. I drove with farmers past fields of soybeans shortly after the introduction of Roundup herbicide, and they talked about how beautiful the thick carpet of identical plants was. They’re not wrong. The lush uniformity is beautiful. But I’m not sure we have the right to expect the same sort of beauty from nature that we can create within our houses. Should a farm field look like wall-to-wall carpeting? Should every molehill be leveled, every fence row scorched, whatever the cost, just because we think it looks nicer?

We have neighbors down the road whose property has been described as a doll’s house because of its detailed perfection. It’s a good description – they treat their two acres as if they were as entirely under their control as a doll’s house. The fences have lines of brown under them where the mower can’t reach and herbicide has been sprayed. Their lawn is grass only, no violets or dandelions. Their mature hardwood trees are all pollarded to a matching height. It’s pretty, I suppose. It’s also horrifying as an illustration of their attitude toward natural beauty. To speak in hyperbolic terms, those friendly neighbors are conducting an all-out war on nature, with policies of scorched earth and ethnic cleansing, and the result is extreme totalitarianism. This stands in striking contrast to the permaculture sites I’ve visited, which look like a hodgepodge of annuals, perennials, weeds, and small creatures and don’t involve any mowing. I suspect that everyone’s first reaction to seeing permaculture in action is, in fact, “Why don’t they mow?” They have their reasons, and they have a different aesthetic.

I admit I like the cleanliness and order that humans impose. I’m all right with keeping my house clean, but I have to decide how far my household extends. If I find insects on my kitchen counter, I kill them. But should I kill the insects in my yard? All the insects in the world? Because if farmers adopt a policy of insect genocide, as most do, it’s going to have costs to the surroundings – which include me. When the summer crop-dusting airplanes fly overhead carpet-bombing bugs and weeds, I have to run to bring the laundry inside and shut the windows if it’s windy, which it usually is, because – call me a crazy tree-hugger – I prefer my sheets and towels to smell of fresh air and not the toxin du jour. That’s where my aesthetic differs from the farmers’.

Farmers around here will tell you that they are aiming for efficiency, and I do appreciate that harvesting crops is easier when the equipment is not clogged with morning glory vines and ragweed stalks. I also understand that weeds and other plants compete with the crops and lower farmers’ yields, so there is a financial as well as an aesthetic motivation for them to keep their fields clean. But there’s no question that these farmers are also driven by the false aesthetic of human-imposed purity. I watch while they grub out a small patch of trees that they had no problem maneuvering around, just so the field looks “clean.”

It’s a competitive aesthetic, too. People in this small community will gossip about and criticize landowners whose fields aren’t clean – I hear them every year talking about whose land isn’t yet sprayed, tilled, or ditched. People from out of our area have asked me when they’ve come over to visit, “Whose land is that down the road? It looks bad.” What they mean when they say that farmers are not keeping their land “clean” is that farmers are not leaving a toe-hold for nature on their property. Rabbits and deer have no right to a corridor of shelter; killdeer and quail have to keep packing up and moving as their surroundings are cut down; coyotes are shot. And once we’ve expunged the aborigines, we can live the mindless imperialist lifestyle we like.

I have some sympathy, I guess. I don’t want coyotes eating my goats or rabbits ruining my garden. But I have to ask the question again: how much of the natural world do we have the right to control at the same level that we control our houses and yards? If we are going to live in a better balance with nature than we do now, we have to change not only our acquisitiveness and our focus on profit and exploitation; we also have to learn to see beauty in what we now consider messiness.

Twitter Suspends Accounts For Propaganda, Has Literal Propagandist As High-Level Executive

By Caitlin Johnstone

Source: CaitlinJohnstone.com

Middle East Eye’s Ian Cobain has published an exclusive titled “Twitter executive for Middle East is British Army ‘psyops’ soldier”, exposing the fact that Twitter’s senior editorial executive for Europe, the Middle East and Africa also works for an actual, literal propaganda unit in the British military called the 77th Brigade. Which is mighty interesting, considering the fact that Twitter constantly suspends accounts from non-empire-aligned nations based on the allegation that they are engaging in propaganda.

“The senior Twitter executive with editorial responsibility for the Middle East is also a part-time officer in the British Army’s psychological warfare unit,” Cobain writes. “Gordon MacMillan, who joined the social media company’s UK office six years ago, has for several years also served with the 77th Brigade, a unit formed in 2015 in order to develop ‘non-lethal’ ways of waging war. The 77th Brigade uses social media platforms such as Twitter, Instagram and Facebook, as well as podcasts, data analysis and audience research to wage what the head of the UK military, General Nick Carter, describes as ‘information warfare’.”

https://twitter.com/IanCobain/status/1178590025128251392

MacMillan’s presence in a government psyops unit was not a secret; until Middle East Eye began raising questions on the matter, it was right there on his LinkedIn profile. This is not something that anyone considering him for promotion was likely to have been unaware of. According to his (now-edited) LinkedIn page, MacMillan has been in his current position as Head of Editorial EMEA since July 2016. According to Middle East Eye, MacMillan was already a captain in the 77th Brigade by the end of 2016. His current rank there is being hidden behind a wall of government secrecy.

When questioned by Middle East Eye about MacMillan’s work in the British Army’s online propaganda program, Twitter hilariously responded, “Twitter is an open, neutral and rigorously independent platform. We actively encourage all our employees to pursue external interests in line with our commitment to healthy corporate social responsibility, and we will continue to do so.”

That’s very nice of Twitter, isn’t it? They encourage their employees to pursue wholesome external interests, whether that be tennis, volunteering at a soup kitchen, or moonlighting at a military program explicitly devoted to online psychological warfare. You know, just everyday socially responsible pastime stuff.

The fact that Twitter not only employs known propagandists but actively promotes them to executive positions is a very large and inconvenient plot hole in their “open, neutral and rigorously independent platform” story. Especially since, as I documented recently, the mass purges of foreign Twitter accounts we’ve been seeing more and more of lately always exclusively target governments and groups which are not in alignment with the interests of the US-centralized power alliance of which the UK is a part. We’ve seen mass suspensions of accounts from Cuba, China, Russia, Iran, Venezuela, and the Catalan independence movement on allegations of “coordinated influence operations” and “covert, manipulative behaviors”, yet Twitter currently employs a high-level executive for whom coordinated influence operations and covert, manipulative behaviors on behalf of the British government are a known vocation.

“On September 20 Twitter deleted a large number of accounts, including in MacMillan’s area of responsibility. How many of those were designated by the British state?” asks Moon of Alabama of this new report.

How many indeed?

This is just one more item on the ever-growing mountain of evidence that these giant, immensely influential social media platforms we’ve all been herded into are nothing other than state propaganda for the digital age. True, they operate in a way which disregards the official lines that are drawn between government power and corporate power and the lines that are drawn between nations, but then, so do our rulers. We are living in a globe-spanning corporate oligarchic empire, and these government-aligned Silicon Valley giants are a major part of that empire’s propaganda engine.

The real power of that empire and that oligarchy lies in their invisible and unacknowledged nature. Officially we all live in separate, sovereign nations run by democratically elected officials; unofficially we live in a massive transnational empire ruled by a loose alliance of plutocrats and opaque government agencies where military propagandists are employed by social media monopolies to manipulate public narratives. The official mask exists only on the level of narrative, while the unofficial reality is what’s actually happening. Yet whenever you try to publicly discuss the threat that is being posed by oligarchic narrative control online, you get told by establishment loyalists and libertarians that Twitter is just a simple private business running things in a way that is entirely separate from government censorship and state propaganda.

All we clear-eyed rebels can do is keep documenting the evidence of what’s going on and pointing to it as loudly as we can. So once again for the people in the back: Twitter employs literal government propagandists as high-level executives while purging accounts from unabsorbed governments for circulating unauthorized narratives. This is a fact. Remember it.