Cyberpunk is Dead

By John Semley

Source: The Baffler

“It was an embarrasser; what did I want? I hadn’t thought that far ahead. Me, caught without a program!”
—Bruce Bethke, “Cyberpunk” (1983)

Held annually in a downtown L.A. convention center so massive and glassy that it served as a futurist backdrop for the 1993 sci-fi action film Demolition Man and as an intergalactic “Federal Transport Hub” in Paul Verhoeven’s 1997 space-fascism satire Starship Troopers, the Electronic Entertainment Expo, a.k.a. “E3,” is the trade show of the future. Sort of.

With “electronic entertainment” now surpassing both music and movies (and, indeed, the total earnings of music and movies combined), the future of entertainment, or at least of entertainment revenue, is the future of video games. Yet it’s a future that’s backward-looking, its gaze locked in the rearview even as the medium propels itself forward.

Highlights of E3’s 2019 installment included more details around a long-gestating remake of the popular PlayStation 1-era role-playing game Final Fantasy VII, a fifth entry in the demon-shooting franchise Doom, a mobile remake of jokey kids side-scroller Commander Keen, and playable adaptations of monster-budget movie franchises like Star Wars and The Avengers. But no title at E3 2019 garnered as much attention as Cyberpunk 2077, the unveiling of which was met with a level of slavish mania one might reserve for a stadium rock concert, or the ceremonial reveal of an efficacious new antibiotic.

An extended trailer premiere worked to whet appetites. Skyscrapers stretched upward, slashed horizontally with long windows of light and decked out with corporate branding for companies called “DATA INC.” and “softsys.” There were rotating wreaths of bright neon billboards advertising near-futuristic gizmos and gee-gaws, and, at the street level, sketchy no-tell motels and cars of the flying, non-flying, and self-piloting variety. In a grimy, high-security bunker, a man with a buzzcut, his face embedded with microchips, traded blows with another, slightly larger man with a buzzcut, whose fists were robotically augmented like the cyborg Special Forces brawler Jax from Mortal Kombat. The trailer smashed to its title, and to wild applause from congregated gamers and industry types.

Then, to a chug-a-lug riff provided by Swedish straight-edge punkers Refused (recording under the nom de guerre SAMURAI) that sounded like the sonic equivalent of a can of Monster energy drink, an enormous freight-style door lifted, revealing, through a haze of pumped-out fog, a vaguely familiar silhouette: a tall, lean-muscular stalk, scraggly hair cut just above the shoulders. Over the PA system, in smoothly undulating, bass-heavy movie trailer tones, a canned voice announced: “Please welcome . . . Keanu Reeves.” Applause. Pitchy screams. Hysterics in the front row prostrating themselves in Wayne’s World “we’re not worthy!” fashion. “I gotta talk to ya about something!” Reeves roared through the din. Dutifully reading from a teleprompter, he plugged Cyberpunk 2077’s customizable characters and its “vast open world with a branching storyline,” set in “a metropolis of the future where body modification has become an obsession.”

More than just stumping for Cyberpunk 2077, Reeves lent his voice and likeness to the game as a non-playable character (NPC) named “Johnny Silverhand,” who is described in the accompanying press materials as a “legendary rockerboy.” A relative newbie to the world of blockbuster Xbox One games, Reeves told the audience at E3 that Cyberpunk piqued his interest because he’s “always drawn to fascinating stories.” The comment is a bit rich—OK, yes, this is a trade show pitch, but still—considering that such near-futuristic, bodily augmented, neon-bathed dystopias are hardly new ground for Reeves. His appearance in Cyberpunk 2077 serves more to lend the game some genre cred, given Reeves’s starring roles in canonical sci-fi films such as Johnny Mnemonic (1995) and the considerably more fantastic Matrix trilogy (1999-2003)—now a quadrilogy, with a fourth installment announced just recently. Like many of E3 2019’s other top-shelf titles, Cyberpunk 2077 looked forward by reflecting back, conjuring its tech-noir scenario from the nostalgic ephemera of cyberpunk futures past.

This was hardly lost among all the uproar and excitement. Author William Gibson, a doyen of sci-fi’s so-called “cyberpunk” subgenre, offered his own withering appraisal of Cyberpunk 2077, tweeting that the game was little more than a cloned Grand Theft Auto, “skinned-over with generic 80s retro-future” upholstery. “[B]ut hey,” Gibson added, a bit glibly, “that’s just me.” One would imagine that, at least in the burrows of cyberpunk fandom, Gibson’s criticism carries considerable weight.

After all, the author’s 1984 novel Neuromancer is a core text in cyberpunk literature. Gibson also wrote the screenplay for Johnny Mnemonic, adapted from one of his own short stories, which likewise developed the aesthetic and thematic template for the cyberpunk genre: future dystopias in which corporations rule, computer implants (often called “wetware”) permit access to expansive virtual spaces that unfold before the user like a walk-in World Wide Web, scrappy gangs of social misfits unite to hack the bad guys’ mainframes, and samurai swords proliferate, along with Yakuza heavies, neon signs advertising noodle bars in Kanji, and other fetish objects imported from Japanese pop culture. Gibson dissing Cyberpunk 2077 is a bit like Elvis Presley clawing out of his grave to disparage the likeness of an aspiring Elvis impersonator.

Gibson’s snark speaks to a deeper malaise that has beset cyberpunk. Once a lively genre that offered a clear, if goofy, vision of the future, its structures of control, and the oppositional forces undermining those authoritarian edifices, cyberpunk has now been clouded by a kind of self-mythologizing nostalgia. This problem was diagnosed as early as 1991 by novelist Lewis Shiner, himself an early cyberpunk-lit affiliate.

“What cyberpunk had going for it,” Shiner wrote in a New York Times op-ed titled “Confessions of an Ex-Cyberpunk,” “was the idea that technology did not have to be intimidating. Readers in their teens and 20’s responded powerfully to it. They were tired of hearing how their home computers were tempting them into crime, how a few hackers would undermine Western civilization. They wanted fiction that could speak to the sense of joy and power that computers gave them.”

That sense of joy had been replaced, in Shiner’s estimation, by “power fantasies” (think only of The Matrix, in which Reeves’s moonlighting hacker becomes a reality-bending god), which offer “the same dead-end thrills we get from video games and blockbuster movies” (enter, in due time, the video games and blockbuster movies). Where early cyberpunk offerings rooted through the scrap heap of genre, history, and futurist prognostication to cobble together a genre that felt vital and original, its modern iterations have recourse only to the canon of cyberpunk itself, smashing together tropes, clichés, and old-hat ideas that, echoing Gibson’s complaint, feel pathetically unoriginal.

As Refused (in their pre-computer game rock band iteration) put it on the intro to their 1998 record The Shape of Punk to Come: “They told me that the classics never go out of style, but . . . they do, they do.”

Blade Ran

The word was minted by author Bruce Bethke, who titled a 1980 short story about teenage hackers “Cyberpunk.” But cyberpunk’s origins can be fruitfully traced back to 1968, when Philip K. Dick published Do Androids Dream of Electric Sheep?, a novel that updated the speculative fiction of Isaac Asimov’s Robot series for the psychedelic era. It’s ostensibly a tale about a bounty hunter named Rick Deckard chasing rogue androids in a post-apocalyptic San Francisco circa 1992. But like Dick’s better stories, it used its ready-made pulp sci-fi premise to flick at bigger questions about the nature of sentience and empathy, playing to a readership whose conceptions of consciousness were expanding.

Ridley Scott brought Dick’s story to the big screen with a loose 1982 film adaptation, Blade Runner, which cast Harrison Ford as Deckard and pushed its drizzly setting ahead to 2019. With its higher-order questions about what it means to think, to feel, and to be free—and about who, or what, is entitled to such conditions—Blade Runner effectively set a cyberpunk template: the billboards, the neon, the high-collared jackets, the implants, the distinctly Japanese-influenced mise-en-scène extrapolated from Japan’s 1980s-era economic dominance. It is said that William Gibson saw Blade Runner in theaters while writing Neuromancer and suffered something of a crisis of confidence. “I was afraid to watch Blade Runner,” Gibson told The Paris Review in 2011. “I was right to be afraid, because even the first few minutes were better.” Yet Gibson deepened the framework established by Blade Runner with a crucial invention that would come to define cyberpunk as much as drizzle and dumpsters and sky-high billboards. He added another dimension—literally.

Henry Case, Gibson establishes early on, “lived for the bodiless exultation of cyberspace.” As delineated in Neuromancer, cyberspace is an immersive, virtual dimension. It’s a fully realized realm of data—“bright lattices of logic unfolding across that colorless void”—which hackers can “jack into” using strapped-on electrodes. That the matrix is “bodiless” is a key concept, both of Neuromancer and of cyberpunk generally. It casts the Gibsonian idea of cyberspace against another of the genre’s hallmarks: the high-tech body mods flogged by Keanu Reeves during the Cyberpunk 2077 E3 demo.

Early in Neuromancer, Gibson describes these sorts of robotic, cyborg-like implants and augmentations. A bartender called Ratz has a “prosthetic arm jerking monotonously” that is “cased in grubby pink plastic.” The same bartender has implanted teeth: “a webwork of East European steel and brown decay.” Gibson’s intense, earthy descriptions of these body modifications cue the reader into the fundamental appeal of Neuromancer’s matrix, in which the body itself becomes utterly immaterial. Authors from Neal Stephenson (Snow Crash) to Ernest Cline (Ready Player One, which is like a dorkier Snow Crash, if such a thing is conceivable) further developed this idea of what theorist Fredric Jameson called “a whole parallel universe of the nonmaterial.”

As envisioned in Stephenson’s Snow Crash, circa 1992, this parallel universe takes shape less as some complex architecture of unfathomable data, and more as an immersive, massively multiplayer online role-playing game (MMORPG). Stephenson’s “Metaverse”—a “moving illustration drawn by [a] computer according to specifications coming down the fiber-optic cable”—is not a supplement to our real, three-dimensional world of physical bodies, but a substitute for it. Visitors navigate the Metaverse using virtual avatars, which are infinitely customizable. As Snow Crash’s hero-protagonist, Hiro Protagonist (the book, it should be noted, is something of a satire), describes it: “Your avatar can look any way you want it to . . . If you’re ugly, you can make your avatar beautiful. If you’ve just gotten out of bed, your avatar can still be wearing beautiful clothes and professionally applied makeup. You can look like a gorilla or a dragon or a giant talking penis in the Metaverse.”

Beyond Meatspatial Reasoning

The Metaverse seems to predict the wide-open, utopian optimism of the internet: that “sense of joy and power” Lewis Shiner was talking about. It echoes early 1990s blather about the promise of a World Wide Web free from corporate or government interests, where users could communicate with others across the globe, forge new identities in chat rooms, and sample from a smorgasbord of lo-res pornographic images. Key to this promise was, to some extent, forming new identities and relationships by leaving one’s physical form behind (or jacked into a computer terminal in a storage locker somewhere).

Liberated from such bulky earthly trappings, we’d be free to pursue grander, more consequential adventures inside what Gibson, in Neuromancer, calls “the nonspace of the mind.” Elsewhere in cyberpunk-lit, bodies are seen as impediments to the purer experience of virtuality. After a character in Cory Doctorow’s Down and Out in the Magic Kingdom unplugs from a bracingly real simulation immersing him in the life of Abraham Lincoln, he curses the limitations of “the stupid, blind eyes; the thick, deaf ears.” Or, as Case puts it in Neuromancer, the body is little more than “meat.”

In Stephenson’s Metaverse, virtual bodies don’t even obey the tedious laws of physics that govern our non-virtual world. In order to manage the high amount of pedestrian traffic within the Metaverse and prevent users from bumping around endlessly, the complicated computer programming permits avatars simply to pass through one another. “When things get this jammed together,” Hiro explains, “the computer simplifies things by drawing all of the avatars ghostly and translucent so you can see where you’re going.” Bodies—or their virtual representations—waft through one another, as if existing in the realm of pure spirit. There is an almost Romantic bent here (Neuromancer = “new romancer”). If the imagination, to the Romantics, opened up a gateway to deep spiritual truth, here technology serves much the same purpose. Philip K. Dick may have copped something of the 1960s psychedelic era’s ethos of expanding the mind to explore the radiant depths of the individual soul, spirit, or whatever, but cyberpunk pushed that ethos outside, creating a shared mental non-space accessible by anyone with the means—a kind of Virtual Commons, or what Gibson calls a “consensual hallucination.”

Yet outside this hallucination, bodies still persist. And in cyberpunk, the physical configurations of these bodies tend to express their own utopian dimension. Bruce Bethke claimed that “cyberpunk” resulted from a deliberate effort to “invent a new term that grokked the juxtaposition of punk attitudes and high technology.” Subsequent cyberpunk did something a bit different, not juxtaposing but dovetailing those “punk attitudes” with high-tech. (“Low-life, high-tech” is a kind of cyberpunk mantra.) Neuromancer’s central heist narrative gathers a cast of characters—hacker Henry Case, a cybernetically augmented “Razorgirl” named Molly Millions, a drug-addled thief, a Rastafari pilot—that can be described as “ragtag.” The major cyberpunk blockbusters configure their anti-authoritarian blocs along similar lines.

In Paul Verhoeven’s cyberpunk-y action satire Total Recall, a mighty construction worker-cum-intergalactic-spy (Arnold Schwarzenegger) joins a Martian resistance led by sex workers, physically deformed “mutants,” little people, and others whose physical identities mirror their economic alienation and opposition to a menacing corporate-colonial overlord named Cohaagen.

In Johnny Mnemonic, Keanu Reeves’s businesslike “mnemonic courier” (someone who ferries information using computer implants embedded in the brain) is joined by a vixenish bodyguard (Dina Meyer’s Jane, herself a version of Neuromancer’s Molly Millions), a burly doctor (Henry Rollins), and a group of street urchin-like “Lo-Teks” engaged in an ongoing insurgency against the mega-corporation Pharmakom. Both Mnemonic and Recall rely on cheap twists, in which a figure integral to the central intrigue turns out to be something ostensibly less- or other-than-human. Total Recall has Kuato, a half-formed clairvoyant mutant who appears as a tumorous growth wriggling in the abdomen of his brother. Even more ludicrously, Mnemonic’s climax reveals that the Lo-Teks’ leader is not the resourceful J-Bone (Ice-T), but rather Jones, a computer-augmented dolphin. In cyberpunk, the body’s status as “dead meat” to be transcended through computer hardware and neurological implantation offers a corollary sense of freedom.

The idea of the cybernetic body as a metaphor for the politicized human body was theorized in 1985, cyberpunk’s early days, by philosopher and biologist Donna Haraway. Dense and wildly eclectic, by turns exciting and exasperating, Haraway’s “Cyborg Manifesto” is situated as an ironic myth, designed to smash existing oppositions between science and nature, mind and body. Haraway was particularly interested in developing an imagistic alternative to the idea of the “Goddess,” so common to the feminism of the time. Where the Goddess was backward-looking in orientation, attempting to connect women to some prelapsarian, pre-patriarchal state of nature, the cyborg was a myth of the future, or at least of the present. “Cyborg imagery,” she writes, “can suggest a way out of the maze of dualisms in which we have explained our bodies and our tools to ourselves.” Haraway visualizes the cyborg, part machine and part flesh, as a being that threatens existing borders and assumes responsibility for building new ones.

Though they are not quite identical concepts, Haraway’s figure of the cyborg and the thematics of cyberpunk share much in common. A character like Gibson’s Molly Millions, for example, could be described as a cyborg, even if she is still essentially gendered as female (the gender binary was one of the many “dualisms” Haraway believed the cyborg could collapse). Cyborgs and cyberpunk are connected in their resistance to an old order, be it political and economic (as in Neuromancer, Johnny Mnemonic, etc.) or metaphysical (as in Haraway). The cyborg and the cyberpunk both dream of new futures, new social relationships, new bodies, and whole new categories of conceptions and ways of being.

The historical problem is that, for the most part, these new categories and these new relationships failed to materialize, as cyberpunk’s futures were usurped and commodified by the powers they had hoped to oppose.

Not Turning Japanese

In an introduction to the Penguin Galaxy hardcover reissue of Neuromancer, sci-fi-fantasy writer Neil Gaiman ponders precisely how the 1980s cyberpunk visions came to shape the future. “I wonder,” he writes, “to what extent William Gibson described a future, and how much he enabled it—how much the people who read and loved Neuromancer made the future crystallize around his vision.”

It’s a paradox that dogs most great sci-fi writers, whose powers for Kuato-style clairvoyance have always struck me as exaggerated. After all, it’s not as if, say, Gene Roddenberry literally saw into the future, observed voice-automated assistants of the Siri and Alexa variety, and then invented his starship’s speaking computers. It’s more that other people saw the Star Trek technology and went about inventing it. The same is true of Gibson’s matrix or Stephenson’s Metaverse, or the androids of Asimov and Dick. And the realization of many technologies envisioned by cyberpunk—including the whole concept of the internet, which now operates not as an escapist complement to reality, but an essential part of its fabric, like water or heat—has occurred not because of scrappy misfits and high-tech lowlifes tinkering in dingy basements, but because of gargantuan corporate entities. Or rather, the cyberpunks have become the corporate overlords, making the transition from the Lo-Teks to Pharmakom, from Kuato to Cohaagen. In the process, the genre and all its aspirations have been reduced to so much dead meat. This is what Shiner was reacting to when, in 1991, he renounced his cyberpunk affiliations, or when Bruce Bethke, who coined the term, began referring to “cyberpunk” as “the c-word.”

The commodification of the cool is a classic trick of capitalism, which has the frustrating ability to mutate faster than the forces that oppose it. Yet even this move toward commodification and corporatization is anticipated in much cyberpunk. “Power,” for Neuromancer’s Henry Case, “meant corporate power.” Gibson goes on: “Case had always taken it for granted that the real bosses, the kingpins in a given industry, would be both more and less than people.” For Case (and, it follows, Gibson, at least at the time of his writing), this power had “attained a kind of immortality” by evolving into an organism. Taking out one-or-another malicious CEO hardly matters when lines of substitutes are waiting in the wings to assume the role.

It’s here that cyberpunk critiques another kind of body. Not the ruddy human form that can be augmented and perfected by prosthetics and implants, but the economic body. Regarding the economy as a holistic organism—or a constituent part of one—is an idea that dates back at least as far as Adam Smith’s “invisible hand.” The rhetoric of contemporary economics is similarly biological. An edifying 2011 argument in Al Jazeera by Paul Rosenberg looked at the power of such symbolic conceptions of the economy. “The organic metaphor,” Rosenberg writes, “tells people to accept the economy as it is, to be passive, not to disturb it, to take a laissez faire attitude—leave it alone.”

This idea calls back to another of cyberpunk’s key aesthetic influences: the “body economic” of Japan in the 1980s. From the 2019 setting of 1982’s Blade Runner, to the conspicuous appearance of yakuza goons in Gibson’s stories, to Stephenson’s oddly anachronistic use of “Nipponese” in Snow Crash, cyberpunk’s speculative futures proceed from the economic ascendancy of 1980s Japan, and the attendant anxiety that Japan would eventually eclipse America as an economic powerhouse. This idea, that Japan somehow is (or was) the future, has persisted all the way up to Cyberpunk 2077’s aesthetic template, and its foregrounding of villains like the shadowy Arasaka Corporation. It suggests that, even as it unfolds nearly sixty years in our future, the blockbuster video game is still obsessed with a vision of the future past.

Indeed, it’s telling that as the robust Japanese economy receded in the 1990s, its burly body giving up the proverbial ghost, Japanese cinema became obsessed with avenging spirits channeled into the present by various technologies (a haunted video cassette in Hideo Nakata’s Ringu, the internet itself in Kiyoshi Kurosawa’s Kairo, etc.). But in the 1980s, Japan’s economic and technological dominance seemed like a foregone conclusion. In a 2001 Time article, Gibson called Japan cyberpunk’s “de facto spiritual home.” He goes on:

I remember my first glimpse of Shibuya, when one of the young Tokyo journalists who had taken me there, his face drenched with the light of a thousand media-suns—all that towering, animated crawl of commercial information—said, “You see? You see? It is Blade Runner town.” And it was. It so evidently was.

Gibson’s analysis features one glaring mistake. His insistence that “modern Japan simply was cyberpunk” is tethered to its actual history as an economic and technological powerhouse circa the 1980s, and not to its own science-fictional preoccupations. “It was not that there was a cyberpunk movement in Japan or a native literature akin to cyberpunk,” he writes. Except there so evidently was.

The Rusting World

Even beyond the limp, Orwellian connotations, 1984 was an auspicious year for science fiction. There was Neuromancer, yes. But 1984 also saw the first collected volume of Akira, a manga written and illustrated by Katsuhiro Otomo. Originally set, like Blade Runner, in 2019, Akira imagines a cyberpunk-y Neo-Tokyo, in which motorcycle-riding gangs do battle with oppressive government forces. Its 1988 anime adaptation was even more popular, in both Japan and the West. (The film’s trademark cherry red motorcycle has been repeatedly referenced in the grander cyberpunk canon, appearing in Steven Spielberg’s film adaptation of Ready Player One and, if pre-release hype is to be believed, in Cyberpunk 2077 itself.) In 2018, the British Film Institute hailed Akira, accurately, as “a vital cornerstone of the cyberpunk genre.”

Japan has plenty of other, non-Akira cyberpunk touchstones. As a cinematic subgenre, Japanese cyberpunk feels less connected to the “cyber” and more to the spirit of “punk,” whether in the showcasing of actual Japanese punk rock bands (as in 1982’s Burst City) or the films’ own commitment to a rough-hewn, low-budget, underground aesthetic. Chief among the latter category of films is Shinya Tsukamoto’s Tetsuo: The Iron Man, which was shot on 16mm over a grueling year-and-a-half, mostly in and around Tetsuo actress and cinematographer Kei Fujiwara’s apartment, which also housed most of the film’s cast and crew.

Compared with the Western cyberpunk classics, Tsukamoto’s vision of human-machine hybridization is demonstrably more nightmarish. The film follows two characters, credited as the Salaryman (Tomorowo Taguchi) and the Guy (a.k.a. “The Metal Fetishist,” played by writer/director/producer/editor Tsukamoto himself), bound by horrifying mutations, which see their flesh and internal organs sprouting mechanical hardware.

In its own way, Tetsuo works as a cyberpunk-horror allegory for the Japanese economy. As the Salaryman and the Fetishist learn to accept the condition of their mechanization, they merge together, absorbing all the inorganic matter around them, growing enormously like a real-world computer virus or some terrifying industrial Katamari. Their mission resonates like a perverse inversion of Japan’s post-industrial promise. As Tsukamoto’s Fetishist puts it: “We can rust the whole world and scatter it into the dust of the universe.”

Like Haraway’s development of the cyborg as a metaphoric alternative to the New Age “goddess,” Tetsuo’s titular Iron Man can offer a similar corrective. If cyberpunk has become hopelessly obsessed with its own nostalgia, recycling all its 1980s bric-a-brac endlessly, then we need a new model. Far from the visions of Gibson, in which technology provides an outlet for a scrappy utopian impulse that jeopardizes larger corporate-political dystopias, Tetsuo is more pessimistic. It sees the body—both the individual physical body and the grander corpus of political economy—as being machine-like. Yet, as Rosenberg notes in his Al Jazeera analysis of economic rhetoric, it may be more useful to conceive of the economy not as a “body” or an organism but as a machine. The body metaphor is conservative, “with implications that tend toward passivity and acceptance of whatever ills there may be.” Machines, by contrast, can be fixed, greased, re-oriented. They are, unlike bodies, a thing separate from us, and so subject to our designs.

Cybernetic implants and cyborg technology are not some antidote to corporate hegemony. The human does not meld with technology to transcend the limitations of humanity. Rather, technology and machinery pose direct threats to precisely that condition. We cannot, in Tsukamoto’s film, hack our way to a better future, or technologically augment our way out of collective despair. Technology—and the mindless rush to reproduce it—are, to Tsukamoto, the very conditions of that despair. Even at thirty years old, Tetsuo offers a chilling vision not of the future, or of 1980s Japan, but of right now: a present where the liberating possibilities of technology have been turned inside-out; where hackers become CEOs whose platforms despoil democracy; where automation offers not the promise of increased wealth and leisure time, but joblessness, desperation, and the wholesale redundancy of the human species; where the shared hallucination of the virtual feels less than consensual.

There’s nothing utopian about the model of cyberpunk developed in Tetsuo: The Iron Man. It is purely dystopian. But this defeatism offers clarity. And in denying the collaborative, collectivist, positive vision of a technological future in favor of a vision of identity-destroying, soul-obliterating horror, Tsukamoto’s stone-cold classic of Japanese cyberpunk invites us to imagine our own anti-authoritarian, anti-corporate arrangements. The enduring canon of American-style cyberpunk may have grown rusty. It has been caught, as Bethke put it in his genre-naming story, “without a program.” But the genre’s gnarlier, Japanese iterations have plenty to offer, embodying sci-fi’s dream of imagining a far-off future as a deep, salient critique of the present. It is only when we accept this cruel machinery of the present that we can freely contemplate how best to tinker with its future.

Left to peddle such a despairing vision in a packed-out L.A. convention center, even cyberpunk’s postmortem poster boy Keanu Reeves would be left with little to say but a resigned, bewildered, “Whoa . . .”

Whose Dystopia Is It Anyway?

Reason writers debate which fictional dystopia best predicted our current moment.

By Mike Riggs, Katherine Mangu-Ward, Todd Krainin, Nick Gillespie, Jesse Walker, Robby Soave, Eric Boehm, Christian Britschgi, Peter Suderman & Brian Doherty

Source: Reason

With social media platforms seemingly unable to distinguish Russian trolls from red-blooded Americans, the last two years have felt like a Deckardian purgatory. The frequency with which intellectual elites accuse their detractors of laboring on behalf of an always-approaching-never-arriving foreign power, meanwhile, smacks of Orwell. And if the proliferation of opioids in the American heartland doesn’t sound like “delicious soma,” what does? (Marijuana? Alcohol? Twitter?)

“We live in Philip K. Dick’s future, not George Orwell’s or Aldous Huxley’s,” George Washington University’s Henry Farrell recently argued in the Boston Review. Despite being a poor prognosticator of what future technologies would look like and do, Dick, Farrell writes, “captured with genius the ontological unease of a world in which the human and the abhuman, the real and the fake, blur together.”

But the universe of possibilities is much larger than just Orwell, Huxley, or Dick. Below, Reason‘s editorial staffers make the case for nearly a dozen other Nostradamii of the right now, ranging from Edgar Allan Poe to Monty Python’s Terry Gilliam. As for why we’re debating dystopias, and not utopias: Because there is no bad in a utopia, and because no dystopia could persist for long without at least a little good, it’s safe to assume that if you’re living in an imperfect world—and you very much are—it’s a dystopian one.

Dick wasn’t wrong, but Edgar Allan Poe got there first, writes Nick Gillespie:

At the core of Philip K. Dick’s work is a profound anxiety about whether we are autonomous individuals or being programmed by someone or something else. In Do Androids Dream of Electric Sheep?, are the characters human or Nexus-6 androids? In The Three Stigmata of Palmer Eldritch and A Scanner Darkly, you’re never quite sure what’s real and what’s the product of too much “Chew-Z” and “Substance D,” hallucinogenic, mind-bending drugs that erode the already-thin line between reality and insanity.

Which is to say that Dick’s alternately funny and terrifying galaxy is a subset of the universe created by Edgar Allan Poe a century earlier. Poe’s protagonists—not really the right word for them, but close enough—are constantly struggling with basic questions of what is real and what is the product of their own demented minds.

This dilemma is front and center in Poe’s only novel, The Narrative of Arthur Gordon Pym (1838), which tells the story of a stowaway who ships out on the Grampus and endures mutiny, shipwreck, cannibalism, and worse. It becomes harder and harder for Pym to trust his senses about the most basic facts, such as what side of a piece of paper has writing on it. The conclusion—not really the right word for the book’s end, but close enough—dumps Pym’s epistemological problem into the reader’s lap in violent and hysterical fashion. A friend told me he threw the book across the room in disbelief when he read its final page, which anticipates the frustration so many of us feel while following the news these days. Just when you think reality can’t get any stranger or less believable, it does exactly that, in both Poe’s fictional world and our real one.

2018’s turn toward hamfisted authoritarianism echoes Terry Gilliam’s Brazil, says Christian Britschgi:

No-knock raids by masked, militarized police officers. A ludicrously inefficient bureaucracy. Crackdowns on unlicensed repairmen. If all this sounds eerily familiar, you may have seen it coming in 1985’s Brazil.

Set in a repressive near-future Britain, the film tells the story of lowly civil servant Samuel Lowry, who wants nothing more than to hide in the comically inefficient bureaucratic machine that employs him, all while doing his level best to quietly resist both a narcissistic culture demanding he rise higher, and a brutish security apparatus looking to punish anyone who steps out of line.

Directed by Monty Python alum Terry Gilliam, Brazil is surreal, ridiculous, and often just plain silly. Yet there is something chilling about the film’s depiction of the state as a bumbling, byzantine bureaucracy that can’t help but convert every aspect of life into an endless series of permission slips, reinforced by a system of surveillance, disappearance, and torture.

Evil and inefficiency are intimately intertwined in Brazil—with the whole plot set in motion by a literal bug in the system that sends jackbooted thugs to raid the wrong house and arrest the wrong man. While the regime in Brazil lacks a central, dictatorial figure at the top of the pyramid, there is definitely something distinctly current about the world it depicts, with every application of force complemented by an equal element of farce. Trump’s first crack at imposing a travel ban, for instance, proved incredibly draconian and cruel precisely because of how rushed, sloppy, and incoherent the actual policy was.

Fortunately, our own world does manage to be far less authoritarian than the one depicted in Brazil and has mercifully better-functioning technology as well. The parallels can still give you pause, however, when you consider what direction we might be headed in.

The current moment definitely tilts toward Ray Bradbury’s Fahrenheit 451, says Eric Boehm:

We are not living in a world where government agents raid homes to set books ablaze, but “there is more than one way to burn a book, and the world is full of people running about with lit matches,” as Ray Bradbury warned in a coda appended to post-1979 editions of his 1953 classic.

Specifically, Bradbury was warning about the dangers of authoritarian political correctness. In that coda, he relates anecdotes about an undergrad at Vassar College asking if he’d consider revising The Martian Chronicles to include more female characters, and a publishing house asking him to remove references to the Christian god in a short story they sought to reprint.

More generally, though, Bradbury was commenting on the common misunderstanding of Fahrenheit 451 as a story about an authoritarian government burning books. It is that, of course, but it’s really about how cultural decay allows authoritarianism to flourish. It was only after people had decided for themselves that books were dangerous that the government stepped in to enforce the consensus, Guy Montag’s boss tells him in one of the novel’s best scenes. “Technology, mass exploitation, and minority pressure carried the trick,” Captain Beatty explains. “Politics? One column, two sentences, a headline! Whirl man’s mind around so fast…that the centrifuge flings off all unnecessary, time-wasting thought!”

In place of literature and high culture, Bradbury’s dystopia has an eerily accurate portrayal of reality television. Montag’s wife is obsessed with the “parlor family” who inhabit the wall-sized television screens in the living room, and clearly has a closer attachment to them than to her husband. The ubiquity of those screens—and how the government exploits them—is on full display near the end of the story, when Montag is on the lam for revolting against orders to burn books, and messages are flashed across every parlor screen in the city telling people to look for the dangerous runaway fireman.

We might not live in Montag’s specific version of Bradbury’s dystopia, but we exist somewhere on the timeline that leads there—which is exactly what Bradbury, and Captain Beatty, are trying to tell us.

Wrong book! We’re really living in Neal Stephenson’s Snow Crash, says Katherine Mangu-Ward:

It’s 1992. Computers are running Windows 3.1. Mobile phones are rare and must be carried in a suitcase. A few nerds in Illinois are getting pretty close to inventing the first web browser, but they’re not quite there yet.

This is the year Neal Stephenson publishes Snow Crash, a novel whose action centers around a global fiber optic network, which can be accessed wirelessly via tiny computers and wearables. On this network, users are identifiable by their avatars, a Sanskrit word that Stephenson’s novel popularized; those avatars may or may not be reliable indications of what they are like in real life. Many of the characters work as freelancers, coding, delivering goods, or collecting information piecemeal. They are compensated in frictionless micropayments, some of which take place in encrypted online digital currency. Intellectual property is the most valuable kind of property, but knowledge is stored in vast digital libraries that function as fully searchable encyclopedias and compendia. Plus there’s this really cool digital map where you can zoom in and see anywhere on the planet.

Basically what I’m saying here is that every other entry in the feature is baloney. We are living in the world Neal Stephenson hallucinated after spending too much time in the library in the early 1990s. End of story.

Is it a dystopia? Sure, if you want to get technical about it: Our antihero, Hiro Protagonist (!), is beset by all manner of typical Blade Runner–esque future deprivations, including sub-optimal housing, sinister corporate villains, and a runaway virus that threatens to destroy all of humanity.

But in addition to the this-guy-must-have-a-secret-time-machine prescience of the tech, the book offers a gritty/pretty vision of anarcho-capitalism that’s supremely compelling—when they’re in meatspace, characters pop in and out of interestingly diverse autonomous quasi-state entities, and the remnants of the U.S. government are just one of the governance options.

Stephenson’s semi-stateless cyberpunk vision is no utopia, that’s for darn sure. But the ways in which it anticipated our technological world are astonishing, and I wouldn’t mind if our political reality inched a little closer to Snow Crash‘s imagined future as well.

Katherine is off by three years. 1989’s Back to the Future: Part II is actually the key to understanding 2018, says Robby Soave:

Back to the Future: Part II has always been the least-appreciated entry in the series: It’s the most confusing and kid-unfriendly, lacking both the originality of the first film and the emotional beats of the third. But almost 30 years after its release, the middle installment of Robert Zemeckis’s timeless time-travel epic is newly relevant: not for accurately depicting the future, but for warning us what life would be like with a buffoonish, bullying billionaire in charge.

2015, the furthest point in the future visited by Marty McFly and “Doc” Emmett Brown, has come and gone, and we still don’t have flying cars, hoverboards, or jackets that dry themselves. But we do have a president who seems ripped from the film’s alternate, hellish version of Hill Valley in 1985, where the loathsome Biff Tannen has become a powerful mogul after traveling into the past and using his knowledge of the future to rig a series of events in his favor.

The similarities between Trump and alternate-reality Biff are so numerous that Back to the Future writer Bob Gale has retroactively (and spuriously) claimed the 45th president as inspiration for the character. Biff buys Hill Valley’s courthouse and turns it into a casino hotel. Biff is a crony capitalist who weaponizes patriotism for personal enrichment (“I just want to say one thing: God bless America”). Biff is a paunchy playboy with two supermodel ex-wives, a bad temper, and even worse hair. There’s no escaping Biff: He’s a media figure, a businessman, a civic leader, and even a member of the family.

“Biff is corrupt, and powerful, and married to your mother!” Doc Brown laments to Marty. Millions of Americans no doubt feel the same way about a man who similarly possesses the uncanny ability to commandeer our attention and insert himself into every facet of modern life. Sometimes it’s hard to avoid the feeling that we’re simply living through the wrong timeline—thanks, McFly.

We may not have hoverboards, but America is teeming with the legal “Orb” from Woody Allen’s Sleeper, observes Todd Krainin:

The world never recovered after Albert Shanker, president of the United Federation of Teachers, acquired a nuclear warhead. Two hundred years later, in the year 2173, the territory once known as the United States is ruled by The Leader, the avuncular figurehead of a police state that brainwashes, surveils, and pacifies every citizen.

Every citizen except for our hero, Miles Monroe. Cryogenically frozen in the late 20th century, Monroe is thawed out in the 22nd. As the only person alive with no biometric record, Monroe is essentially an undocumented immigrant from the past, making him the ideal secret weapon for an underground revolutionary movement.

“What kind of government you guys got here?” asks a bewildered Monroe, after learning the state will restructure his brain. “This is worse than California!”

Monroe’s quest to take down the worse-than-Sacramento government takes him through a world that’s amazingly prescient for a film that aims for slapstick comedy. He gets high on the orb (space age marijuana), crunches on a 15-foot-long stalk of techno-celery produced on an artificial farm (GMOs), impersonates a domestic assistant (Alexa), and joins a crunchy underground (#Resist), in order to defeat The Leader (guess who).

Sleeper‘s most memorable invention is the Orgasmatron, a computerized safe space that provides instant climaxes for a frigid and frightened populace. It’s basically the internet porn and sex robot for today’s intimacy-averse millennials.

At the high point of the film, Monroe attempts to clone The Leader from his nose. This in a film released 23 years before real doctors cloned Dolly the sheep from the cell of a mammary gland.

By the film’s end, Monroe is faced with the prospect of replacing The Leader with a revolutionary band of eco-Marxists. But some things never change.

“Political solutions don’t work,” he prophesies. “It doesn’t matter who’s up there. They’re all terrible.”

For a journalism outlet, we’ve been embarrassingly slow to recognize that Orson Scott Card’s Ender’s Game explains the media world we live in, argues Peter Suderman:

In a 2004 feature for Time, Lev Grossman explored a new form of web-based journalism that was then radically reshaping both the political and media landscapes: blogs. Grossman profiled several bloggers, most of whom were young and relatively unknown, with little experience in or connection to mainstream journalism. Yet “blogs showcase some of the smartest, sharpest writing being published,” Grossman wrote. In particular, bloggers were influencing some pretty big national conversations about U.S. military actions and politics.

From the vantage of 2018, all this might seem like old news: The mainstream media has adopted and amplified many blogging practices. But even in 2004, the idea of user-produced, semi-anonymous journalism, posted directly to the net with no editorial filter, had been in circulation for years as a sci-fi conceit—perhaps most prominently in Orson Scott Card’s 1985 novel, Ender’s Game.

In the book, a child genius named Ender Wiggin is sent to an orbiting military academy to prepare for a military invasion. While he’s away, his adolescent siblings—themselves unusually gifted—hatch a plan to manipulate world politics by posting pseudonymous political arguments on “the nets.” These essays are read by citizens and politicians alike, and both siblings develop powerful followings. Eventually, they help prevent the world from exploding into planetary war, and pave the way for mankind’s colonial expansion into space.

Card’s narrative was too compact, its assumptions about the influence of online writing too simplistic. But it previewed the ways in which the internet would expand the reach and influence of little-known writers—especially political pundits—who lack conventional journalistic training or credentials. Today’s internet-based media landscape is neither a utopia nor a dystopia, but a lively, raucous, fascinating, and occasionally frustrating extrapolation of what Card imagined before any of it existed in the real world.

This year is definitely one of Heinlein’s “crazy years,” says Brian Doherty:

Robert Heinlein was one of the first science fiction writers to create a fictional structure that seemed to privilege prediction, with his “Future History” sequence, collected in the volume The Past Through Tomorrow.

Prediction was not Heinlein’s purpose—storytelling was. But his “Future History” chart started off with the “Crazy Years”: “Considerable technical advance during this period, accompanied by a gradual deterioration of mores, orientation, and social institutions, terminating in mass psychoses in the sixth decade, and the interregnum.” Heinlein made this prediction in 1941, so the “sixth decade” meant the 1950s.

Did he really predict the Trump era? Heinlein fans have seen in wild ideological excesses on both left and right a clear sign that we are, collectively, losing our minds. Instapundit‘s Glenn Reynolds thinks we are certainly in Heinlein’s Crazy Years, noting that saying so has become a cliché among Heinlein fans. He sees as evidence totemic but useless responses to policy crises, and a social networking age that allows for tighter epistemic bubbles for information consumers and producers. Factually, the internet makes it stunningly easier for anyone to have opinions about politics and policy far better informed by accurate facts and trends than in any previous era. That so many might choose not to do so shows why predictions of “crazy years” can seem so eternally prescient: People can just be crazy (colloquially).

A lot of the “crazy” news these days that might lead to the never-witty declaration that it’s “not The Onion” comes from unusual personal qualities of our president; some comes from excesses of the desire to control others’ thought and expression. But if “crazy” means dangerous, then recent trends in crime domestically and wealth and health worldwide indicate we are mucking along well enough.

Indeed, as per the title of Heinlein’s anthology, the past is tomorrow and probably always will be. That times of technological advance will be followed by “gradual deterioration” (read: changes) in mores, orientation, and social institutions is the kind of golden prediction of the dystopia we are eternally moving into (and always moving through) with which it’s hard to lose.

Long before the 2016 Flyover Takeover, Walker Percy predicted a frayed nation would disassemble itself, writes Mike Riggs:

It’s the 1980s, and liberals have taken “In God We Trust” off the penny, while “knotheads”—conservatives—have mired the U.S. in a 15-year war with Ecuador. Liberals love “dirty movies from Sweden,” knotheads gravitate toward “clean” films, like The Sound of Music, Flubber, and Ice Capades of 1981. America’s big cities, meanwhile, are shells of themselves. “Wolves have been seen in downtown Cleveland, like Rome during the black plague.” Political polarization has even led to a change in international relations: “Some southern states have established diplomatic ties with Rhodesia. Minnesota and Oregon have their own consulates in Sweden.”

Our guide through the social hellscape of Love in the Ruins is Thomas More, a descendant of Sir Thomas More (author of 1516’s Utopia) and a lecherous Catholic psychiatrist with an albumin allergy who nevertheless chugs egg-white gin fizzes like water. A stand-in for Percy, More is a keen social taxonomist and a neutral party in the culture war. He notes that liberals tend to favor science and secularism; conservatives, business and God. But “though the two make much of their differences, I do not notice a great deal of difference between the two.” In the bustling Louisiana town of Paradise, wealthy knotheads and wealthy leftists live side by side, in nice houses, with new cars parked in their driveways, just as they currently do in Manhattan, Georgetown, and Palm Beach. One group may go to church on Sundays, the other bird watching, but they are more like each other than they are the “dropouts from, castoffs of, and rebels against our society” who live in the swamp on the edge of town.

Yet even the wealthy must bear the brunt of social friction. A local golf course magnate alternates between depression and indignation as the poor of Paradise challenge his decision to automate the jobs at his country club.

Love in the Ruins is the most radical timeline extending from the King assassination, Kent State, and the Tate Murders, three historical moments that helped undo the World War II–era fantasy—ever more childish in hindsight—of America as a cohesive unit. We were not one then, and are not now. Percy saw 2018 coming from a four-decade mile.

You are all wrong, says Jesse Walker:

Identity has never been as fluid, fungible, and multiple as it is today. That guy you’re arguing with on Twitter might actually be a crowd of people. That crowd of people you’re arguing with might actually be just one guy. Trolls try on a persona for an hour, then discard it for something new. Bots adopt a persona and stick with it, but without an actual mind in command. Your identity might be stolen altogether, leaving you to learn that an entity that looks like you has been spending money, sending messages, or otherwise borrowing your life. You might even wake one day to discover that someone has inserted your head onto someone else’s body, all so a stranger can live out a fantasy.

You can decide for yourself how much of that is a utopia and how much is a dystopia. All I know is that at some point we started living in Being John Malkovich.

It’s Time for Some Anti-Science Fiction


Source: The Hipcrime Vocab

Why must positive depictions of the future always be dependent upon some sort of new technology?

Neal Stephenson is a very successful and well-known science fiction writer. He’s also very upset that the pace of technological innovation has seemingly slowed down and that we are unable to come up with truly transformative “big ideas” anymore. He believes this is the reason why we are so glum and pessimistic nowadays. Indeed, the science fiction genre, once identified with space exploration and utopias of post-scarcity and abundant leisure time, has come to be dominated by depictions of the future as a hellhole of extreme inequality, toxic environmental pollution, overcrowded cities, oppressive totalitarian governments, and overall political and social breakdown. Think of movies like The Hunger Games, Elysium, The Giver, and Snowpiercer.

This pessimism is destructive and corrosive, believes Stephenson. According to the BBC:

Acclaimed science-fiction writer Neal Stephenson saw this bleak trend in his own work, but didn’t give it much thought until he attended a conference on the future a couple years ago. At the time, Stephenson said that science fiction guides innovation because young readers later grow up to be scientists and engineers.

But fellow attendee Michael Crow, president of Arizona State University (ASU), “took a more sort of provocative stance, that science fiction actually needed to supply ideas that scientists and engineers could actually implement”, Stephenson says. “[He] basically told me that I needed to get off my duff and start writing science fiction in a more constructive and optimistic vein.”

“We want to create a more open, optimistic, ambitious and engaged conversation about the future,” project director Ed Finn says. According to his argument, negative visions of the future as perpetuated in pop culture are limiting people’s abilities to dream big or think outside the box. Science fiction, he says, should do more. “A good science fiction story can be very powerful,” Finn says. “It can inspire hundreds, thousands, millions of people to rally around something that they want to do.”

Basically, Stephenson wants to bring back the kind of science fiction that made us actually long for the future rather than dread it. Stephenson means to counter this techno-pessimism by inviting a number of well-known science fiction writers to come up with more positive, even utopian, visions of the future, where we once again come up with “big ideas” that inspire the scientists and engineers in their white lab coats. He apparently believes that it is the duty of science fiction authors to act as, in the words of one commentator, “the first draft of the future.” Indeed, much of modern technology and space exploration was presaged by authors like H.G. Wells and Jules Verne. From the BBC article above, here are some of the positive future scenarios depicted in the book:

  •     Environmentalists fight to stop entrepreneurs from building the first extreme tourism destination hotel in Antarctica.
  •     People vie for citizenship on a near-zero-gravity moon of Mars, which has become a hub for innovation.
  •     Animal activists use drones to track elephant poachers.
  •     A crew crowd-funds a mission to the Moon to set up an autonomous 3D printing robot to create new building materials.
  •     A 20km tall tower spurs the US steel industry, sparks new methods of generating renewable energy and houses The First Bar in Space.

The whole idea behind Project Hieroglyph, as I understand it, is to depict more positive futures than the ones being depicted in current science fiction and media. That seems like a good idea. But my question is: why must these positive futures always involve more intensive application of technology? Why are we unable to envision a better future in any other way besides more technology, more machines, more inventions, more people, more economic growth, and so on? Haven’t we already been down that road?

Or to put it another way, why must science fiction writers assume that more technological innovation will produce a better society when our modern society is the result of previous technological innovations, and is seen by many people as a dystopia (with many non-scientifically-minded people actually longing for a collapse of some sort)? Perhaps, to paraphrase former President Reagan in the context of our current crisis, technology is not the solution to the problem; technology is the problem.

***

It’s worth pointing out that many of the increasingly dystopian elements of our present circumstances have been brought about by the application of technology.

Economists have pinpointed technology as a key driver of inequality, thanks to the hollowing out of the middle class: the automation of the routine tasks that underpinned the industrial and service economy has left only high-end and low-end jobs remaining, while the “superstar effect” lets a few well-paid superstars capture all the gains because technology allows them to be everywhere at once. Fast supercomputers have allowed the rich to game the stock market casino, where the average stock is now held for just fractions of a second, while global telecommunications has led to reassigning jobs anywhere in the world where the very cheapest workers can be found. America’s manufacturing jobs are now done by Chinese workers and its service jobs by Indian workers half a world away, even as the old industrial heartland looks suspiciously like what is depicted in The Hunger Games. Rather than a world of abundant leisure, stressed-out workers take their laptops to the beach, fearful of losing their jobs if they don’t, while millions have given up even looking for work anymore. A permanently underemployed underclass distracts itself with Netflix, smartphones, and computer games, and takes expensive drugs promoted by pharmaceutical companies to deal with its depression.

Global supply chains, supertankers, the “warehouse on wheels,” and online shopping have hollowed out local main street economies and led to monopolies in every industry across the board. Small family farmers have been kicked off the land worldwide and replaced by gargantuan, fossil-fuel-powered agricultural factories owned by agribusinesses, churning out bland processed food based around wheat, corn, and soy, causing soaring obesity rates worldwide and runaway population growth.

Banks have merged into just a handful of entities that are “too big to fail” and send trillions around the world at the speed of light. Gains are privatized while losses and risk are socialized, and the public sphere is sold off to profiteers at fire-sale prices. A small financial aristocracy controls the system and hamstrings the world with debt. Just eighty people control as much wealth as half of the planet’s population, and in the world’s biggest economy just three people take in as much income as half the workforce. There are now more prisoners in America than farmers.

A now-global transnational elite of owner-oligarchs criss-crosses the world in Gulfstream jets and million-dollar yachts and hides its money in offshore accounts beyond the reach of increasingly impotent national governments, while smaller local governments can’t keep potholes filled, streets plowed, and streetlights on for ordinary citizens. Many of the world’s great cities have become “elite citadels” where regular citizens can no longer afford to live. This elite controls bond markets, funds political campaigns, and owns a monopolized media that normalizes this state of affairs using sophisticated propaganda tools enhanced by cutting-edge psychological research enabled by MRI scanners. That media, controlled by a small handful of corporations, panders to the lowest common denominator while keeping people in a constant state of fear and panic. Advertising preys on our insecurities and desire for status to make us buy more, enabled by abundant credit. The Internet, once the hope for a more democratic future, has ended up as a shopping mall, entertainment delivery system, and spying/tracking apparatus rather than a force for democracy and revolution.

Security cameras peer at us from every street corner and store counter, and shocking revelations about the power and reach of the national security state, as fantastic as anything dreamed up by dystopian science fiction writers, have become so commonplace that people hardly notice anymore. Anonymous people in gridded glass office towers read our every email, listen to our every phone call, and track our every move using our cell phones. Facial recognition and corporate-promoted “smart” technology now promise to track and permanently record literally every move you make.

Remote-controlled drones patrol the skies of global conflict zones and vaporize people half a world away without their pilots ever seeing their victims’ faces. High-tech fighter jets allow us to “cleanly” drop bombs without the messiness of a real war. Private mercenaries are a burgeoning industry, and global arms sales continue to increase even in a stagnant global economy, with arms companies often selling to both sides. By some accounts one in ten Americans is employed in some sort of “guard labor,” that is, in keeping their fellow citizens in line. The number of failed states continues to grow in the Middle East and Africa, and citizens in democracies are marching in the streets.

Not that the national security state has nothing to fear, after all: technology has enabled individual terrorists and non-state actors to produce devastating weapons capable of destroying economies and killing thousands, as 9/11 demonstrated. A single “superempowered” individual can kill millions with a suitcase-sized nuclear bomb, an engineered virus, or some other bioterrorism weapon. The latest concern is “cyberwarfare,” which could destroy the technological infrastructure we are now utterly dependent upon and kill millions. Non-state actors can wreak as much havoc as armies thanks to modern technology, and there are a lot of disgruntled people out there.

And then there is the environmental devastation, of which climate change is the most overwhelming example, but which includes everything from burned-down Amazonian rainforest to polluted mangroves in Thailand, collapsed fish stocks, dissolving coral reefs, and oceans full of jellyfish. Half the world’s terrestrial biodiversity has been eliminated in the past fifty years, and we’ve lost so much polar ice that Earth’s gravity is measurably affected. In China, the world’s economic success story, the haze is so thick that people can’t see the tops of the skyscrapers they already have, and there are “cancer villages.” The skies may be a bit clearer in America thanks to deindustrialization, but drought in the Southwest and increasingly powerful hurricanes are reminders that no one is immune. Entire countries and major cities look set to be submerged under rising oceans, and the first climate refugees are already on the move from places like Africa and Southeast Asia, feeding an anti-immigrant backlash in developed countries.

This is not some future dystopia, by the way; this is where technology has led us right now. Today. Current headlines. Maybe dystopias are so popular because that is where technology has brought us so far in the twenty-first century. I’m skeptical that Project Hieroglyph and its fostering of “big ideas” will do much to change that.

Thus my fundamental question, given all of the above: why is it always assumed that the path to utopia runs through the widespread deployment of even more innovation and technology? Is it realistic to believe that colonies on Mars, drones, intelligent robots, skyscrapers, and space elevators will solve any of this?

I’ve written before about the fact that the technology we already possess today was expected to deliver a utopia by numerous writers and thinkers of the past. “The coming of the wireless era will make war impossible, because it will make war ridiculous,” declared Marconi in 1912. H.G. Wells, a committed socialist who lived during perhaps the greatest period of invention before or since (railroads, the harnessing of electricity, radio communication, internal combustion engines, powered flight, antibiotics), very frequently depicted utopian societies brought about through the application of ever greater technology. Science fiction authors still seem to conceive of utopias as being brought about exclusively by “technological progress.” But with hindsight, is that realistic anymore?

Maybe it’s time for some anti-science fiction.

***

The classic example of this kind of anti-science fiction is William Morris’ utopian novel News From Nowhere.

Morris was a key figure in the Arts and Crafts movement, a reaction to factory-based mass production and the subsequent deskilling of the workforce. People no longer collectively made the world of goods and buildings around them; those things were now made by a small number of people performing deskilled, alienated labor in giant factories, with the profits accruing to a tiny handful of capitalist owners. Morris wanted another way.

In Morris’ future London there is very little in the way of centralized institutions. People work when they want to and do what they want to. Money is not used. Life is lived at a leisurely pace. Though written during the transformative changes of the Industrial Revolution, Morris’ London looks less like a World’s Fair and more like a bucolic, pastoral London that had long since vanished under the smoke of the factories. Technology plays a very small role, yet people are much happier.

Morris’ work was written partially in response to a book entitled Looking Backward by Edward Bellamy, which was extraordinarily popular in the late nineteenth century but is almost forgotten today. Bellamy’s year-2000 utopia had the means of production brought under centralized control, with people serving time in an “industrial army” for twenty years and then retiring to a life of leisure and material abundance brought about by production for use rather than capitalist profit.

Morris felt that this still subordinated workers to machines, rather than depicting a society organized around the maximization of human well-being, including well-being in work. Here is Morris in a speech:

“Before I leave this matter of the surroundings of life, I wish to meet a possible objection. I have spoken of machinery being used freely for releasing people from the more mechanical and repulsive part of necessary labour; it is the allowing of machines to be our masters and not our servants that so injures the beauty of life nowadays. And, again, that leads me to my last claim, which is that the material surroundings of my life should be pleasant, generous, and beautiful; that I know is a large claim, but this I will say about it, that if it cannot be satisfied, if every civilised community cannot provide such surroundings for all its members, I do not want the world to go on.”

Morris’ book shows that utopias need not be high-tech. It also shows that real utopias are brought about by the underlying philosophy of a society and its corresponding social relations. It seems to me that Stephenson’s utopias are all predicated on the continuation of the philosophy and social relations of our current society: more growth, more technology, faster innovation, more debt, corporate control, trickle-down economics, private property, absentee ownership, anarchic markets, autonomous utility-maximizing consumers, etc. They are yoked to our idea of “progress” as simply the application of more and faster technology.

By contrast, Morris’ utopia has the technological level we would associate with a “dystopian” post-collapse society, yet everyone seems a whole lot happier.

***

Now I don’t mean to suggest that any utopia should necessarily be a place where we have reverted to some sort of pre-industrial level of technology. We don’t need to depict utopias where everyone lives like the Amish (although that would be an interesting avenue of exploration). I merely wish to point out that a future utopia need not be the exclusive domain of science fiction authors, and need not be predicated on some sort of new wonder technology or on space exploration. For example, in an article entitled Is It Possible to Imagine Utopia Anymore? the author writes:

Recently, though, we may have finally hit Peak Dystopia…All of which suggests there might be an opening for a return to Utopian novels — if such a thing as “Utopian novels” actually existed anymore…In college, as part of a history class, I read Edward Bellamy’s Looking Backwards, a Utopian science-fiction novel published in 1888. The book — an enormous success in its time, nearly as big as Uncle Tom’s Cabin — is interesting now less as literature than as a historical document, and it’s certainly telling that, in the midst of the industrial revolution, a novel promising a future socialist landscape of increased equality and reduced labor so gripped the popular imagination. We might compare Bellamy’s book to current visions of Utopia if I could recall even a single Utopian novel or film from the past five years. Or ten years. Or 20. Wikipedia lists dozens of contemporary dystopian films and novels, yet the most recent entry in its rather sparse “List of Utopian Novels” is Island by Aldous Huxley, published in 1962*. The closest thing to a recent Utopian film I can think of is Spike Jonze’s Her, though that vision of the future — one in which human attachment to sentient computers might become something close to meaningful — hardly seems like a fate we should collectively strive for, but rather one we might all be resigned to placidly accept

Many serious contemporary authors have tackled dystopia: David Foster Wallace’s Infinite Jest, Gary Shteyngart’s Super Sad True Love Story, Cormac McCarthy’s The Road, and so on. But the closest thing we have to a contemporary Utopian novel is what we could call the retropia: books like Michael Chabon’s Telegraph Avenue (about a funky throwback Oakland record store) or Jonathan Lethem’s Fortress of Solitude (about 1970s Brooklyn) that fondly recall a bygone era, by way of illustrating what we’ve lost since — “the lost glories of a vanished world,” as Chabon puts it. Lethem’s more recent Dissident Gardens is also concerned with utopia, but mostly in so far as it gently needles the revolutionaries of yesteryear.

Indeed, the closest things we have to utopias on TV today are shows like Mad Men, which are set in the era when Star Trek was on the air rather than in a utopia inspired by Star Trek itself. For many Americans, utopia lies not in the future but in the past: the 1950s era of widespread prosperity, full employment, single-earner households, more leisure, guaranteed pensions, social mobility, inexpensive housing, wide-open roads and spaces, and rising living standards. As this article points out:

When I first heard about the project, my cynical heart responded skeptically. After all, much of the Golden Age science fiction Stephenson fondly remembers was written in an era when, for all its substantial problems, the U.S. enjoyed a greater degree of democratic consensus. Today, Congress can barely pass a budget, let alone agree on collective investments.

If someone asked me to depict a more positive future than the one we have, deploying more technology is just about the last thing I would reach for to bring it about. In fact, the future I would depict would almost certainly include less technology, or rather technology playing a smaller role in our lives. I would focus more on social relations that would make us happy to be alive, where we eat good food, spend time doing what we want instead of what we’re forced to do, and don’t have to be medicated just to make it through another day in our high-pressure classrooms and cubicles. I might even depict a future with no television, inspired by Jerry Mander’s 1978 treatise Four Arguments for the Elimination of Television (hey, remember, this is fiction after all!).

Rather, it would depict different political, economic, and social relations first, with new technology playing only a supporting role, not a starring one. Organizing society around the needs of productive enterprise, growth, and profits (and nothing else) is, I believe, the reason we feel so depressed about the future, and the reason dystopias resonate with a demoralized general public that rolls its collective eyes at the exhortations of science fiction writers with an agenda**. The problem with science fiction is its single-minded conflation of technology with progress.

Personally, my utopia would look more like life on the Greek island of Ikaria,*** as described in this article from The New York Times (which reads an awful lot like News from Nowhere):

Seeking to learn more about the island’s reputation for long-lived residents, I called on Dr. Ilias Leriadis, one of Ikaria’s few physicians, in 2009. On an outdoor patio at his weekend house, he set a table with Kalamata olives, hummus, heavy Ikarian bread and wine. “People stay up late here,” Leriadis said. “We wake up late and always take naps. I don’t even open my office until 11 a.m. because no one comes before then.” He took a sip of his wine. “Have you noticed that no one wears a watch here? No clock is working correctly. When you invite someone to lunch, they might come at 10 a.m. or 6 p.m. We simply don’t care about the clock here.”

Pointing across the Aegean toward the neighboring island of Samos, he said: “Just 15 kilometers over there is a completely different world. There they are much more developed. There are high-rises and resorts and homes worth a million euros. In Samos, they care about money. Here, we don’t. For the many religious and cultural holidays, people pool their money and buy food and wine. If there is money left over, they give it to the poor. It’s not a ‘me’ place. It’s an ‘us’ place.”

Ikaria’s unusual past may explain its communal inclinations. The strong winds that buffet the island — mentioned in the “Iliad” — and the lack of natural harbors kept it outside the main shipping lanes for most of its history. This forced Ikaria to be self-sufficient. Then in the late 1940s, after the Greek Civil War, the government exiled thousands of Communists and radicals to the island. Nearly 40 percent of adults, many of them disillusioned with the high unemployment rate and the dwindling trickle of resources from Athens, still vote for the local Communist Party. About 75 percent of the population on Ikaria is under 65. The youngest adults, many of whom come home after college, often live in their parents’ home. They typically have to cobble together a living through small jobs and family support.

Leriadis also talked about local “mountain tea,” made from dried herbs endemic to the island, which is enjoyed as an end-of-the-day cocktail. He mentioned wild marjoram, sage (flaskomilia), a type of mint tea (fliskouni), rosemary and a drink made from boiling dandelion leaves and adding a little lemon. “People here think they’re drinking a comforting beverage, but they all double as medicine,” Leriadis said. Honey, too, is treated as a panacea. “They have types of honey here you won’t see anyplace else in the world,” he said. “They use it for everything from treating wounds to curing hangovers, or for treating influenza. Old people here will start their day with a spoonful of honey. They take it like medicine.”

Over the span of the next three days, I met some of Leriadis’s patients. In the area known as Raches, I met 20 people over 90 and one who claimed to be 104. I spoke to a 95-year-old man who still played the violin and a 98-year-old woman who ran a small hotel and played poker for money on the weekend.

On a trip the year before, I visited a slate-roofed house built into the slope at the top of a hill. I had come here after hearing of a couple who had been married for more than 75 years. Thanasis and Eirini Karimalis both came to the door, clapped their hands at the thrill of having a visitor and waved me in. They each stood maybe five feet tall. He wore a shapeless cotton shirt and a battered baseball cap, and she wore a housedress with her hair in a bun. Inside, there was a table, a medieval-looking fireplace heating a blackened pot, a nook of a closet that held one woolen suit coat, and fading black-and-white photographs of forebears on a soot-stained wall. The place was warm and cozy. “Sit down,” Eirini commanded. She hadn’t even asked my name or business but was already setting out teacups and a plate of cookies. Meanwhile, Thanasis scooted back and forth across the house with nervous energy, tidying up.

The couple were born in a nearby village, they told me. They married in their early 20s and raised five children on Thanasis’s pay as a lumberjack. Like that of almost all of Ikaria’s traditional folk, their daily routine unfolded much the way Leriadis had described it: Wake naturally, work in the garden, have a late lunch, take a nap. At sunset, they either visited neighbors or neighbors visited them. Their diet was also typical: a breakfast of goat’s milk, wine, sage tea or coffee, honey and bread. Lunch was almost always beans (lentils, garbanzos), potatoes, greens (fennel, dandelion or a spinachlike green called horta) and whatever seasonal vegetables their garden produced; dinner was bread and goat’s milk. At Christmas and Easter, they would slaughter the family pig and enjoy small portions of larded pork for the next several months.

During a tour of their property, Thanasis and Eirini introduced their pigs to me by name. Just after sunset, after we returned to their home to have some tea, another old couple walked in, carrying a glass amphora of homemade wine. The four nonagenarians cheek-kissed one another heartily and settled in around the table. They gossiped, drank wine and occasionally erupted into laughter.

No robot babysitters or mile-high skyscrapers required.

* No mention of Ernest Callenbach’s Ecotopia, published in 1975?

** ASU is steeped in Department of Defense funding, and DARPA (the Defense Advanced Research Projects Agency) was present at a conference about the book entitled “Can We Imagine Our Way to a Better Future?” held in Washington, D.C. I’m guessing the event did not take place in the more run-down parts of the city. Cui bono?

*** Ironically, Icaria was used as the name of a utopian science fiction novel, Voyage to Icaria, and inspired an actual utopian community.