Now That We’ve Incentivized Sociopaths–Guess What Happens Next

By Charles Hugh Smith

Source: Of Two Minds

As long as central banks create and distribute trillions in conscience-free credit to conscience-free financiers and corporations, the incentives for sociopathy only increase.

“Sociopath” is a word we now encounter regularly in the mainstream media, but what does it mean? Here is a list of 16 traits, many of which are visible in lionized corporate and political leaders and entrepreneurs.

One key trait is a lack of moral responsibility or conscience; the sociopath feels no remorse if he/she takes advantage of people or exploits them.

Sociopaths project superficial charm, intelligence, and confidence, and are adept at massaging or misrepresenting reality, up to and including outright lying, to persuade others or get their way.

Like all psychological syndromes (autism, bipolar disorder, etc.), sociopathy spans a wide spectrum of traits, some of which may offer adaptive benefits (and hence their continued presence in the human genome). In other words, an individual can have a few of the traits in greater or lesser proportions.

Thus the modern BBC Sherlock Holmes (played by Benedict Cumberbatch) describes himself as a “high-functioning sociopath” (though many contest this diagnosis of the original Holmes in Arthur Conan Doyle’s stories).

Anyone who has read Walter Isaacson’s biography of Steve Jobs can readily see manifestations of sociopathy in Jobs: his famous “reality distortion field,” his refusal to accept that he’d fathered a daughter, his lack of empathy, his wild emotional swings (from verbal abuse to weeping), his dietary extremes, his charm, so quickly turned on or off, his uneven parenting, and so on. His obsessive-compulsive behavior was also on full display. Yet Jobs is lauded and even worshiped as a genius and unparalleled entrepreneur. Was this the result of his sociopathological traits, or something that arose despite them?

The ledger of costs and benefits of Jobs’ output is weighted by the global benefits of the products he shepherded to market and the hundreds of billions of dollars in sales and net worth he generated for investors while the head of Apple. Though narcissistic in many ways (with the resulting negative effects on many of his intimates), Jobs was clearly focused on creating “insanely great” products that would benefit customers and users. Despite his sociopathological traits, there is no evidence he set out to deceive anyone with the objective of exploiting their good will or belief in his vision to skim billions of dollars from unwary investors.

But the ledgers of others manifesting sociopathy are far less beneficial, as the billions of dollars they generated were in essence a form of fraud.

The rise and fall of WeWork is a recent textbook example of sociopathy reaping enormous financial gains for the sociopaths without creating any actual value. There are plenty of media accounts of the founders’ excesses (including the goal of becoming the world’s first trillionaire), some of which we might have expected to raise flags in venture capitalists, board members, etc., but these traits were overlooked in the rush for all involved to garner billions of dollars in fees and net worth when WeWork went public.

This example (among many) illustrates that sociopathy is incentivized in our socio-political-economic system, and sociopathic “winners” are lionized as epitomes of ambitious success. (The entire charade of the stock market rising due to Federal Reserve-enabled stock buybacks is an institutionalized example of sociopathy.)

Correspondent Tom D. recently summarized the core dynamic and consequence of this systemic incentivization of sociopathy:

I’ve been a successful business owner, but I’m not a sociopath–I deliver value to my customers, my investors, and I don’t move forward if I see anyone being substantially hurt by my actions.

My peers and I look at organizations such as WeWork, see the rewards reaped by the sociopathic leaders, and realize we are at a constitutional disadvantage working within such a system.

I could never conceive of taking a $700-900m payday at the expense of investors for whom I’ve generated no value whatsoever.

I simply could not do it.

If ‘out-sociopathing’ the sociopaths is what it takes to ‘succeed’ in today’s business climate, I’ll fail.

So I don’t try.

From the sociopath’s standpoint, that’s probably a feature not a bug–one that helps keep effective competition out of the marketplace.

I wonder how much of civilizational decline is simply due to good people accepting their lot and opting out.

If the system incentivizes conscience-free sociopaths more than it incentivizes those creating real value, the system will eventually fall into the equivalent of Gresham’s law (“bad money drives out good money”): the con-men and fraudsters will drive out entrepreneurs with a conscience who create real value for customers, investors and society at large.

If we look at recent IPOs and compare them to the Apple IPO, it seems we’ve already reached that point. Apple went public as a highly profitable company. Uber, Lyft, Beyond Meat and WeWork (whose IPO collapsed when its fraud was revealed) are all unprofitable, in some cases losing billions of dollars with little prospect of eventual profits.

Venture capital folks explain this by noting that the flood of central bank credit-money-creation has generated trillions of dollars of liquid capital seeking “the next big thing” that will “disrupt” existing models and therefore generate billions in profits.

This pinpoints one key source of the incentivization of sociopaths: central banks’ creation of trillions of dollars of conscience-free capital seeking a quick profit anywhere on the planet, by any means available.

Conscience-free capital is an easy mark for a conscience-free sociopath. It’s a marriage made in heaven, a perfect match.

Those with a conscience are essentially squeezed out of the system. The choice is binary: either play and lose or opt out.

I’ve written about “opting out” since 2009; it was one of the few options available to commoners in the final decline of the Western Roman Empire. If we feel we’re at a systemic disadvantage, i.e., the system is rigged against us, opting out makes much more sense than sacrificing ourselves in a fruitless battle to stay alive in a system that incentivizes amoral sociopaths.

If we consider what generates outsized success in our rapidly changing economy, we find a variety of factors supporting “winner take most” asymmetric gains. As economist Michael Spence has observed, those who develop new business models earn outsized gains because new forms of capital and labor that are scarce create the most value.

Many of these new business models disintermediate existing models, obsoleting entire layers of middlemen and management.

Netflix is a good example: the move from mailing DVDs to streaming content obsoleted cable companies. Now Disney is disrupting Netflix by launching its own streaming service at $6.99 a month, offering content that cable subscribers had to pay $60+ a month to access via a “premium” cable add-on, most of which they didn’t even use.

In contrast, WeWork sold itself as a “tech innovator” when in fact it was simply a commercial real estate packager, leasing large spaces and chopping them up into small spaces with common areas and a few services.

How does our system incentivize sociopathy? By focusing exclusively on short-term gains reaped from IPOs (initial public offerings) and by blindly seeking “the next disruptor that will generate billions,” the system is easy prey for charming sociopaths who can tell a good (if not quite truthful) story.

The amoral sociopath with the story attracts amoral sociopaths in venture capital, banking and politics, as these fields are all focused on short-term, outsized, quickly skimmed gains, regardless of the consequences to investors or society at large.

What would change this incentivization of sociopathy? Ending the Federal Reserve’s delivery of trillions of dollars in conscience-free capital to sociopaths and limiting the VC-IPO flim-flam machine would be a start, but given Wall Street’s dependence on these profits and the millions the Street gives to political campaigns, this is politically unfeasible. Any such regulation that reaches Congress will be watered down or larded with loopholes.

There may be no way to excise the incentives for sociopathy, because the incentives all favor the sociopaths’ most fertile ground: the Federal Reserve’s money spigot of nearly free money for the most sociopathological financiers and corporations; amoral, conscience-free greed; the worship of short-term gains, regardless of consequences; and the extreme profitability of rigged games and Big Con PR (“we’re only evil when it’s profitable, which is, well, all the time”).

As long as central banks create and distribute trillions in conscience-free credit to conscience-free financiers and corporations, the incentives for sociopathy only increase, and the incentives for everyone else to opt out increase proportionately.

What happens next? The dead wood of sociopathy is ignited by a random lightning strike, and the entire financial system (and the economy it feeds) burns to the ground in an uncontrollable conflagration of blowback, consequence and karma.

Cyberpunk is Dead

By John Semley

Source: The Baffler

“It was an embarrasser; what did I want? I hadn’t thought that far ahead. Me, caught without a program!”
—Bruce Bethke, “Cyberpunk” (1983)

Held annually in a downtown L.A. convention center so massive and glassy that it served as a futurist backdrop for the 1993 sci-fi action film Demolition Man and as an intergalactic “Federal Transport Hub” in Paul Verhoeven’s 1997 space-fascism satire Starship Troopers, the Electronic Entertainment Expo, a.k.a. “E3,” is the trade show of the future. Sort of.

With “electronic entertainment” now surpassing both music and movies (and, indeed, the combined earnings of the two), the future of entertainment, or at least of entertainment revenue, is the future of video games. Yet it’s a future that’s backward-looking, its gaze locked in the rearview as the medium hurtles forward.

Highlights of E3’s 2019 installment included more details about a long-gestating remake of the popular PlayStation 1-era role-playing game Final Fantasy VII, a fifth entry in the demon-shooting franchise Doom, a mobile remake of the jokey kids’ side-scroller Commander Keen, and playable adaptations of monster-budget movie franchises like Star Wars and The Avengers. But no title at E3 2019 garnered as much attention as Cyberpunk 2077, the unveiling of which was met with a level of slavish mania one might reserve for a stadium rock concert, or the ceremonial reveal of an efficacious new antibiotic.

An extended trailer premiere worked to whet appetites. Skyscrapers stretched upward, slashed horizontally with long windows of light and decked out with corporate branding for companies called “DATA INC.” and “softsys.” There were rotating wreaths of bright neon billboards advertising near-futuristic gizmos and gewgaws, and, at the street level, sketchy no-tell motels and cars of the flying, non-flying, and self-piloting variety. In a grimy, high-security bunker, a man with a buzzcut, his face embedded with microchips, traded blows with another, slightly larger man with a buzzcut, whose fists were robotically augmented like the cyborg Special Forces brawler Jax from Mortal Kombat. The trailer smashed to its title, and to wild applause from congregated gamers and industry types.

Then, to a chug-a-lug riff provided by Swedish straight-edge punkers Refused (recording under the nom de guerre SAMURAI) that sounded like the sonic equivalent of a can of Monster energy drink, an enormous freight-style door lifted, revealing, through a haze of pumped-out fog, a vaguely familiar silhouette: a tall, lean-muscular stalk, scraggly hair cut just above the shoulders. Over the PA system, in smoothly undulating, bass-heavy movie trailer tones, a canned voice announced: “Please welcome . . . Keanu Reeves.” Applause. Pitchy screams. Hysterics in the front row prostrating themselves in Wayne’s World “we’re not worthy!” fashion. “I gotta talk to ya about something!” Reeves roared through the din. Dutifully reading from a teleprompter, he plugged Cyberpunk 2077’s customizable characters and its “vast open world with a branching storyline,” set in “a metropolis of the future where body modification has become an obsession.”

More than just stumping for Cyberpunk 2077, Reeves lent his voice and likeness to the game as a non-playable character (NPC) named “Johnny Silverhand,” who is described in the accompanying press materials as a “legendary rockerboy.” A relative newbie to the world of blockbuster Xbox One games, Reeves told the audience at E3 that Cyberpunk piqued his interest because he’s “always drawn to fascinating stories.” The comment is a bit rich—OK, yes, this is a trade show pitch, but still—considering that such near-futuristic, bodily augmented, neon-bathed dystopias are hardly new ground for Reeves. His appearance in Cyberpunk 2077 serves more to lend the game some genre cred, given Reeves’s starring roles in canonical sci-fi films such as Johnny Mnemonic (1995) and the considerably more fantastic Matrix trilogy (1999-2003), now a quadrilogy, with a fourth installment announced just recently. Like many of E3 2019’s other top-shelf titles, Cyberpunk 2077 looked forward by reflecting back, conjuring its tech-noir scenario from the nostalgic ephemera of cyberpunk futures past.

This was hardly lost amid all the uproar and excitement. Author William Gibson, a doyen of sci-fi’s so-called “cyberpunk” subgenre, offered his own withering appraisal of Cyberpunk 2077, tweeting that the game was little more than a cloned Grand Theft Auto, “skinned-over with generic 80s retro-future” upholstery. “[B]ut hey,” Gibson added, a bit glibly, “that’s just me.” One would imagine that, at least in the burrows of cyberpunk fandom, Gibson’s criticism carries considerable weight.

After all, the author’s 1984 novel Neuromancer is a core text in cyberpunk literature. Gibson also wrote the screenplay for Johnny Mnemonic, adapted from one of his own short stories, which likewise developed the aesthetic and thematic template for the cyberpunk genre: future dystopias in which corporations rule, computer implants (often called “wetware”) permit access to expansive virtual spaces that unfold before the user like a walk-in World Wide Web, scrappy gangs of social misfits unite to hack the bad guys’ mainframes, and samurai swords proliferate, along with Yakuza heavies, neon signs advertising noodle bars in Kanji, and other fetish objects imported from Japanese pop culture. Gibson dissing Cyberpunk 2077 is a bit like Elvis Presley clawing out of his grave to disparage the likeness of an aspiring Elvis impersonator.

Gibson’s snark speaks to a deeper malaise that has beset cyberpunk. Once a lively genre that offered a clear, if goofy, vision of the future, its structures of control, and the oppositional forces undermining those authoritarian edifices, cyberpunk has now been clouded by a kind of self-mythologizing nostalgia. This problem was diagnosed as early as 1991 by novelist Lewis Shiner, himself an early cyberpunk-lit affiliate.

“What cyberpunk had going for it,” Shiner wrote in a New York Times op-ed titled “Confessions of an Ex-Cyberpunk,” “was the idea that technology did not have to be intimidating. Readers in their teens and 20s responded powerfully to it. They were tired of hearing how their home computers were tempting them into crime, how a few hackers would undermine Western civilization. They wanted fiction that could speak to the sense of joy and power that computers gave them.”

That sense of joy had been replaced, in Shiner’s estimation, by “power fantasies” (think only of The Matrix, in which Reeves’s moonlighting hacker becomes a reality-bending god), which offer “the same dead-end thrills we get from video games and blockbuster movies” (enter, in due time, the video games and blockbuster movies). Where early cyberpunk offerings rooted through the scrap heap of genre, history, and futurist prognostication to cobble together a genre that felt vital and original, its modern iterations have recourse only to the canon of cyberpunk itself, smashing together tropes, clichés, and old-hat ideas that, echoing Gibson’s complaint, feel pathetically unoriginal.

As Refused (in their pre-computer game rock band iteration) put it on the intro to their 1998 record The Shape of Punk to Come: “They told me that the classics never go out of style, but . . . they do, they do.”

Blade Ran

The word was minted by author Bruce Bethke, who in 1980 wrote a short story about teenage hackers titled “Cyberpunk” (it saw print in 1983). But cyberpunk’s origins can be fruitfully traced back to 1968, when Philip K. Dick published Do Androids Dream of Electric Sheep?, a novel that updated the speculative fiction of Isaac Asimov’s Robot series for the psychedelic era. It’s ostensibly a tale about a bounty hunter named Rick Deckard chasing rogue androids in a post-apocalyptic San Francisco circa 1992. But like Dick’s better stories, it used its ready-made pulp sci-fi premise to flick at bigger questions about the nature of sentience and empathy, playing to a readership whose conceptions of consciousness were expanding.

Ridley Scott brought Dick’s story to the big screen with a loose 1982 film adaptation, Blade Runner, which cast Harrison Ford as Deckard and pushed its drizzly setting ahead to 2019. With its higher-order questions about what it means to think, to feel, and to be free—and about who, or what, is entitled to such conditions—Blade Runner effectively set a cyberpunk template: the billboards, the neon, the high-collared jackets, the implants, the distinctly Japanese-influenced mise-en-scène extrapolated from Japan’s 1980s-era economic dominance. It is said that William Gibson saw Blade Runner in theaters while writing Neuromancer and suffered something of a crisis of confidence. “I was afraid to watch Blade Runner,” Gibson told The Paris Review in 2011. “I was right to be afraid, because even the first few minutes were better.” Yet Gibson deepened the framework established by Blade Runner with a crucial invention that would come to define cyberpunk as much as drizzle and dumpsters and sky-high billboards. He added another dimension—literally.

Henry Case, Gibson establishes early on, “lived for the bodiless exultation of cyberspace.” As delineated in Neuromancer, cyberspace is an immersive, virtual dimension. It’s a fully realized realm of data—“bright lattices of logic unfolding across that colorless void”—which hackers can “jack into” using strapped-on electrodes. That the matrix is “bodiless” is a key concept, both of Neuromancer and of cyberpunk generally. It casts the Gibsonian idea of cyberspace against another of the genre’s hallmarks: the high-tech body mods flogged by Keanu Reeves during the Cyberpunk 2077 E3 demo.

Early in Neuromancer, Gibson describes these sorts of robotic, cyborg-like implants and augmentations. A bartender called Ratz has a “prosthetic arm jerking monotonously” that is “cased in grubby pink plastic.” The same bartender has implanted teeth: “a webwork of East European steel and brown decay.” Gibson’s intense, earthy descriptions of these body modifications cue the reader into the fundamental appeal of Neuromancer’s matrix, in which the body itself becomes utterly immaterial. Authors from Neal Stephenson (Snow Crash) to Ernest Cline (Ready Player One, which is like a dorkier Snow Crash, if such a thing is conceivable) further developed this idea of what theorist Fredric Jameson called “a whole parallel universe of the nonmaterial.”

As envisioned in Stephenson’s Snow Crash, circa 1992, this parallel universe takes shape less as some complex architecture of unfathomable data, and more as an immersive, massively multiplayer online role-playing game (MMORPG). Stephenson’s “Metaverse”—a “moving illustration drawn by [a] computer according to specifications coming down the fiber-optic cable”—is not a supplement to our real, three-dimensional world of physical bodies, but a substitute for it. Visitors navigate the Metaverse using virtual avatars, which are infinitely customizable. As Snow Crash’s hero-protagonist, Hiro Protagonist (the book, it should be noted, is something of a satire), describes it: “Your avatar can look any way you want it to . . . If you’re ugly, you can make your avatar beautiful. If you’ve just gotten out of bed, your avatar can still be wearing beautiful clothes and professionally applied makeup. You can look like a gorilla or a dragon or a giant talking penis in the Metaverse.”

Beyond Meatspatial Reasoning

The Metaverse seems to predict the wide-open, utopian optimism of the internet: that “sense of joy and power” Lewis Shiner was talking about. It echoes early 1990s blather about the promise of a World Wide Web free from corporate or government interests, where users could communicate with others across the globe, forge new identities in chat rooms, and sample from a smorgasbord of lo-res pornographic images. Key to this promise was, to some extent, forming new identities and relationships by leaving one’s physical form behind (or jacked into a computer terminal in a storage locker somewhere).

Liberated from such bulky earthly trappings, we’d be free to pursue grander, more consequential adventures inside what Gibson, in Neuromancer, calls “the nonspace of the mind.” Elsewhere in cyberpunk-lit, bodies are seen as impediments to the purer experience of virtuality. After a character in Cory Doctorow’s Down and Out in the Magic Kingdom unplugs from a bracingly real simulation immersing him in the life of Abraham Lincoln, he curses the limitations of “the stupid, blind eyes; the thick, deaf ears.” Or, as Case puts it in Neuromancer, the body is little more than “meat.”

In Stephenson’s Metaverse, virtual bodies don’t even obey the tedious laws of physics that govern our non-virtual world. In order to manage the high amount of pedestrian traffic within the Metaverse and prevent users from bumping around endlessly, the complicated computer programming permits avatars simply to pass through one another. “When things get this jammed together,” Hiro explains, “the computer simplifies things by drawing all of the avatars ghostly and translucent so you can see where you’re going.” Bodies—or their virtual representations—waft through one another, as if existing in the realm of pure spirit. There is an almost Romantic bent here (Neuromancer = “new romancer”). If the imagination, to the Romantics, opened up a gateway to deep spiritual truth, here technology serves much the same purpose. Philip K. Dick may have copped something of the 1960s psychedelic era’s ethos of expanding the mind to explore the radiant depths of the individual soul, spirit, or whatever, but cyberpunk pushed that ethos outside, creating a shared mental non-space accessible by anyone with the means—a kind of Virtual Commons, or what Gibson calls a “consensual hallucination.”

Yet outside this hallucination, bodies still persist. And in cyberpunk, the physical configurations of these bodies tend to express their own utopian dimension. Bruce Bethke claimed that “cyberpunk” resulted from a deliberate effort to “invent a new term that grokked the juxtaposition of punk attitudes and high technology.” Subsequent cyberpunk did something a bit different, not juxtaposing but dovetailing those “punk attitudes” with high-tech. (“Low-life, high-tech” is a kind of cyberpunk mantra.) Neuromancer’s central heist narrative gathers a cast of characters—hacker Henry Case, a cybernetically augmented “Razorgirl” named Molly Millions, a drug-addled thief, a Rastafari pilot—that can be described as “ragtag.” The major cyberpunk blockbusters configure their anti-authoritarian blocs along similar lines.

In Paul Verhoeven’s cyberpunk-y action satire Total Recall, a mighty construction worker-cum-intergalactic-spy (Arnold Schwarzenegger) joins a Martian resistance led by sex workers, physically deformed “mutants,” little people, and others whose physical identities mirror their economic alienation and opposition to a menacing corporate-colonial overlord named Cohaagen.

In Johnny Mnemonic, Keanu Reeves’s businesslike “mnemonic courier” (someone who ferries information using computer implants embedded in the brain) is joined by a vixenish bodyguard (Dina Meyer’s Jane, herself a version of Neuromancer’s Molly Millions), a burly doctor (Henry Rollins), and a group of street urchin-like “Lo-Teks” engaged in an ongoing insurgency against the mega-corporation Pharmakom. Both Mnemonic and Recall rely on cheap twists, in which a figure integral to the central intrigue turns out to be something ostensibly less- or other-than-human. Total Recall has Kuato, a half-formed clairvoyant mutant who appears as a tumorous growth wriggling in the abdomen of his brother. Even more ludicrously, Mnemonic’s climax reveals that the Lo-Teks’ leader is not the resourceful J-Bone (Ice-T), but rather Jones, a computer-augmented dolphin. In cyberpunk, the body’s status as “dead meat” to be transcended through computer hardware and neurological implantation offers a corollary sense of freedom.

The idea of the cybernetic body as a metaphor for the politicized human body was theorized in 1985, in cyberpunk’s early days, by philosopher and biologist Donna Haraway. Dense and wildly eclectic, by turns exciting and exasperating, Haraway’s “Cyborg Manifesto” is situated as an ironic myth, designed to smash existing oppositions between science and nature, mind and body. Haraway was particularly interested in developing an imagistic alternative to the idea of the “Goddess,” so common to the feminism of the time. Where the Goddess was backward-looking in orientation, attempting to connect women to some prelapsarian, pre-patriarchal state of nature, the cyborg was a myth of the future, or at least of the present. “Cyborg imagery,” she writes, “can suggest a way out of the maze of dualisms in which we have explained our bodies and our tools to ourselves.” Haraway visualizes the cyborg, part machine and part flesh, as a being that threatens existing borders and assumes responsibility for building new ones.

Though they are not quite identical concepts, Haraway’s figure of the cyborg and the thematics of cyberpunk share much in common. A character like Gibson’s Molly Millions, for example, could be described as a cyborg, even if she is still essentially gendered as female (the gender binary was one of the many “dualisms” Haraway believed the cyborg could collapse). Cyborgs and cyberpunk are connected in their resistance to an old order, be it political and economic (as in Neuromancer, Johnny Mnemonic, etc.) or metaphysical (as in Haraway). The cyborg and the cyberpunk both dream of new futures, new social relationships, new bodies, and whole new categories of conceptions and ways of being.

The historical problem is that, for the most part, these new categories and these new relationships failed to materialize, as cyberpunk’s futures were usurped and commodified by the powers they had hoped to oppose.

Not Turning Japanese

In an introduction to the Penguin Galaxy hardcover reissue of Neuromancer, sci-fi-fantasy writer Neil Gaiman ponders precisely how the 1980s cyberpunk visions came to shape the future. “I wonder,” he writes, “to what extent William Gibson described a future, and how much he enabled it—how much the people who read and loved Neuromancer made the future crystallize around his vision.”

It’s a paradox that dogs most great sci-fi writers, whose powers of Kuato-style clairvoyance have always struck me as exaggerated. After all, it’s not as if, say, Gene Roddenberry literally saw into the future, observed voice-automated assistants of the Siri and Alexa variety, and then invented his starship’s speaking computers. It’s more that other people saw the Star Trek technology and went about inventing it. The same is true of Gibson’s matrix or Stephenson’s Metaverse, or the androids of Asimov and Dick. And the realization of many technologies envisioned by cyberpunk—including the whole concept of the internet, which now operates not as an escapist complement to reality, but as an essential part of its fabric, like water or heat—has occurred not because of scrappy misfits and high-tech lowlifes tinkering in dingy basements, but because of gargantuan corporate entities. Or rather, the cyberpunks have become the corporate overlords, making the transition from the Lo-Teks to Pharmakom, from Kuato to Cohaagen. In the process, the genre and all its aspirations have been reduced to so much dead meat. This is what Shiner was reacting to when, in 1991, he renounced his cyberpunk affiliations, or when Bruce Bethke, who coined the term, began referring to “cyberpunk” as “the c-word.”

The commodification of the cool is a classic trick of capitalism, which has the frustrating ability to mutate faster than the forces that oppose it. Yet even this move toward commodification and corporatization is anticipated in much cyberpunk. “Power,” for Neuromancer’s Henry Case, “meant corporate power.” Gibson goes on: “Case had always taken it for granted that the real bosses, the kingpins in a given industry, would be both more and less than people.” For Case (and, it follows, Gibson, at least at the time of his writing), this power had “attained a kind of immortality” by evolving into an organism. Taking out one-or-another malicious CEO hardly matters when lines of substitutes are waiting in the wings to assume the role.

It’s here that cyberpunk critiques another kind of body. Not the ruddy human form that can be augmented and perfected by prosthetics and implants, but the economic body. Regarding the economy as a holistic organism—or a constituent part of one—is an idea that dates back at least as far as Adam Smith’s “invisible hand.” The rhetoric of contemporary economics is similarly biological. An edifying 2011 argument in Al Jazeera by Paul Rosenberg looked at the power of such symbolic conceptions of the economy. “The organic metaphor,” Rosenberg writes, “tells people to accept the economy as it is, to be passive, not to disturb it, to take a laissez faire attitude—leave it alone.”

This idea calls back to another of cyberpunk’s key aesthetic influences: the “body economic” of Japan in the 1980s. From the 2019 setting of 1982’s Blade Runner, to the conspicuous appearance of yakuza goons in Gibson’s stories, to Stephenson’s oddly anachronistic use of “Nipponese” in Snow Crash, cyberpunk’s speculative futures proceed from the economic ascendancy of 1980s Japan, and the attendant anxiety that Japan would eventually eclipse America as an economic powerhouse. This idea, that Japan somehow is (or was) the future, has persisted all the way up to Cyberpunk 2077’s aesthetic template, and its foregrounding of villains like the shadowy Arasaka Corporation. It suggests that, even as it unfolds nearly sixty years in our future, the blockbuster video game is still obsessed with a vision of the future past.

Indeed, it’s telling that as the robust Japanese economy receded in the 1990s, its burly body giving up the proverbial ghost, Japanese cinema became obsessed with avenging spirits channeled into the present by various technologies (a haunted video cassette in Hideo Nakata’s Ringu, the internet itself in Kiyoshi Kurosawa’s Kairo, etc.). But in the 1980s, Japan’s economic and technological dominance seemed like a foregone conclusion. In a 2001 Time article, Gibson called Japan cyberpunk’s “de facto spiritual home.” He goes on:

I remember my first glimpse of Shibuya, when one of the young Tokyo journalists who had taken me there, his face drenched with the light of a thousand media-suns—all that towering, animated crawl of commercial information—said, “You see? You see? It is Blade Runner town.” And it was. It so evidently was.

Gibson’s analysis features one glaring mistake. His insistence that “modern Japan simply was cyberpunk” is tethered to its actual history as an economic and technological powerhouse circa the 1980s, and not to its own science-fictional preoccupations. “It was not that there was a cyberpunk movement in Japan or a native literature akin to cyberpunk,” he writes. Except there so evidently was.

The Rusting World

Even beyond the limp, Orwellian connotations, 1984 was an auspicious year for science fiction. There was Neuromancer, yes. But 1984 also saw the first collected volume of Akira, a manga written and illustrated by Katsuhiro Otomo. Originally set, like Blade Runner, in 2019, Akira imagines a cyberpunk-y Neo-Tokyo, in which motorcycle-riding gangs do battle with oppressive government forces. Its 1988 anime adaptation was even more popular, in both Japan and the West. (The film’s trademark cherry red motorcycle has been repeatedly referenced in the grander cyberpunk canon, appearing in Steven Spielberg’s film adaptation of Ready Player One and, if pre-release hype is to be believed, in Cyberpunk 2077 itself.) In 2018, the British Film Institute hailed Akira, accurately, as “a vital cornerstone of the cyberpunk genre.”

Japan has plenty of other, non-Akira cyberpunk touchstones. As a cinematic subgenre, Japanese cyberpunk feels less connected to the “cyber” and more to the spirit of “punk,” whether in the showcasing of actual Japanese punk rock bands (as in 1982’s Burst City) or the films’ own commitment to a rough-hewn, low-budget, underground aesthetic. Chief among the latter category of films is Shinya Tsukamoto’s Tetsuo: The Iron Man, which was shot on 16mm over a grueling year-and-a-half, mostly in and around Tetsuo actress and cinematographer Kei Fujiwara’s apartment, which also housed most of the film’s cast and crew.

Compared to the Western cyberpunk classics, Tsukamoto’s vision of human-machine hybridization is demonstrably more nightmarish. The film follows two characters, credited as the Salaryman (Tomorowo Taguchi) and the Guy (a.k.a. “The Metal Fetishist,” played by writer/director/producer/editor Tsukamoto himself), bound by horrifying mutations, which see their flesh and internal organs sprouting mechanical hardware.

In its own way, Tetsuo works as a cyberpunk-horror allegory for the Japanese economy. As the Salaryman and the Fetishist learn to accept the condition of their mechanization, they merge together, absorbing all the inorganic matter around them, growing enormously like a real-world computer virus or some terrifying industrial Katamari. Their mission resonates like a perverse inversion of Japan’s post-industrial promise. As Tsukamoto’s Fetishist puts it: “We can rust the whole world and scatter it into the dust of the universe.”

Like Haraway’s development of the cyborg as a metaphoric alternative to the New Age “goddess,” Tetsuo’s titular Iron Man can offer a similar corrective. If cyberpunk has become hopelessly obsessed with its own nostalgia, recycling all its 1980s bric-a-brac endlessly, then we need a new model. Far from the visions of Gibson, in which technology provides an outlet for a scrappy utopian impulse that jeopardizes larger corporate-political dystopias, Tetsuo is more pessimistic. It sees the body—both the individual physical body and the grander corpus of political economy—as being machine-like. Yet, as Rosenberg notes in his Al Jazeera analysis of economic rhetoric, it may be more useful to conceive of the economy not as a “body” or an organism but as a machine. The body metaphor is conservative, “with implications that tend toward passivity and acceptance of whatever ills there may be.” Machines, by contrast, can be fixed, greased, re-oriented. They are, unlike bodies, a thing separate from us, and so subject to our designs.

Cybernetic implants and cyborg technology are not some antidote to corporate hegemony. The human does not meld with technology to transcend the limitations of humanity. Rather, technology and machinery pose direct threats to precisely that condition. We cannot, in Tsukamoto’s film, hack our way to a better future, or technologically augment our way out of collective despair. Technology—and the mindless rush to reproduce it—is, to Tsukamoto, the very condition of that despair. Even at thirty years old, Tetsuo offers a chilling vision not of the future, or of 1980s Japan, but of right now: a present where the liberating possibilities of technology have been turned inside-out; where hackers become CEOs whose platforms despoil democracy; where automation offers not the promise of increased wealth and leisure time, but joblessness, desperation, and the wholesale redundancy of the human species; where the shared hallucination of the virtual feels less than consensual.

There’s nothing utopian about the model of cyberpunk developed in Tetsuo: The Iron Man. It is purely dystopian. But this defeatism offers clarity. And in denying the collaborative, collectivist, positive vision of a technological future in favor of a vision of identity-destroying, soul-obliterating horror, Tsukamoto’s stone-cold classic of Japanese cyberpunk invites us to imagine our own anti-authoritarian, anti-corporate arrangements. The enduring canon of American-style cyberpunk may have grown rusty. It has been caught, as Bethke put it in his genre-naming story, “without a program.” But the genre’s gnarlier, Japanese iterations have plenty to offer, embodying sci-fi’s dream of imagining a far-off future as a deep, salient critique of the present. It is only when we accept this cruel machinery of the present that we can freely contemplate how best to tinker with its future.

Left to peddle such a despairing vision in a packed-out L.A. convention center, even cyberpunk’s postmortem poster boy Keanu Reeves would be left with little to say but a resigned, bewildered, “Woah . . .”

Algorithmic Feudalism

By Michael Krieger

Source: Liberty Blitzkrieg

Stiegler insists, however, that authentic thinking and calculative thinking are not mutually exclusive; indeed, mathematical rationality is one of our major prosthetic extensions. But the catastrophe of the digital age is that the global economy, powered by computational “reason” and driven by profit, is foreclosing the horizon of independent reflection for the majority of our species, in so far as we remain unaware that our thinking is so often being constricted by lines of code intended to anticipate, and actively shape, consciousness itself. 

– Via TruthDig: Fighting the Unprecedented ‘Proletarianization’ of the Human Mind

As the share price of Google parent company Alphabet soared to new highs in the U.S. equity market last week, several articles were published detailing just how out of control and dangerous this tech behemoth has become.

First, we learned Google is in the process of secretly sucking up the personalized healthcare data of up to 50 million Americans without the permission of patients or doctors. This was followed by a detailed report in the Wall Street Journal outlining how the search giant meddles with its algorithms far more aggressively than its executives have led people to believe. Despite these revelations, or more likely because of them, the stock price jumped to record levels. This is the world we live in.

We should’ve known right away that a tech company with the motto “don’t be evil” would quickly and without any hesitation embrace as much evil as possible. Although pushback against America’s most dangerous tech giants (Google, Facebook and Amazon) has been growing, it hasn’t amounted to anything serious, and investors don’t expect much if the share price is any indication. Perhaps after seeing zero bank executives jailed after last decade’s financial crime spree, coupled with Boeing executives likewise facing no real repercussions despite killing hundreds out of profit-obsessed negligence, we’ve come to embrace our sociopathic, depraved overlords. Give me liberty, or give me new highs in the S&P 500.

It’s important to note that while much of the recent focus on tech giants revolves around market dominance and anti-competitiveness, the real danger posed is far more extensive. Particularly since the post-election “panic of 2016,” these companies have begun to more earnestly morph into digital information gatekeepers in the name of empire and the national security state.

Day by day, tweaked algorithm by tweaked algorithm, and with each new thought criminal banished from major digital platforms, we’ve seen not only dissident views marginalized, but we’ve also lost a capacity to access information we’re looking for should tech company CEOs or their national security state partners deem it inappropriate. The powers that be have determined the internet permitted too much freedom of thought and opinion, so the tech giants stand ready to bluntly throw the hammer down in order to reverse that trend and regain narrative control. The algorithm will be used to get you in line, and if you don’t comply, the algorithm will destroy you.

More from TruthDig:

Stiegler believes that digital technology, in the hands of technocrats whom he calls “the new barbarians,” now threatens to dominate our tertiary memory, leading to a historically unprecedented “proletarianization” of the human mind. For Stiegler, the stakes today are much higher than they were for Marx, from whom this term is derived: proletarianization is no longer a threat posed to physical labor but to the human spirit itself…

Stiegler firmly believes that a distinction must always be upheld between “authentic thinking” and “computational cognitivism” and that today’s crisis lies in confusing the latter for the former: we have entrusted our rationality to computational technologies that now dominate everyday life, which is increasingly dependent on glowing screens driven by algorithmic anticipations of their users’ preferences and even writing habits (e.g., the repugnantly named “predictive text” feature that awaits typed-in characters to regurgitate stock phrases)… As Stiegler’s translator, the philosopher and filmmaker Daniel Ross, puts it, our so-called post-truth age is one “where calculation becomes so hegemonic as to threaten the possibility of thinking itself.” 

This is the true crux of what we’re dealing with, and so we find ourselves at a terrifying transition point in the entire historical human experience should we fail to correct it. As a consequence of their dominant market shares in core areas of our modern digital world like e-commerce (Amazon), human-to-human communication (Facebook) and information access (Google), tech giants now have the capacity to replace human curiosity and thought with opaque and ever-changing algorithms.

Here’s some of what the WSJ revealed in its investigation published last week:

More than 100 interviews and the Journal’s own testing of Google’s search results reveal:

• Google made algorithmic changes to its search results that favor big businesses over smaller ones, and in at least one case made changes on behalf of a major advertiser, eBay Inc., contrary to its public position that it never takes that type of action. The company also boosts some major websites, such as Amazon.com Inc. and Facebook Inc., according to people familiar with the matter.

• Google engineers regularly make behind-the-scenes adjustments to other information the company is increasingly layering on top of its basic search results. These features include auto-complete suggestions, boxes called “knowledge panels” and “featured snippets,” and news results, which aren’t subject to the same company policies limiting what engineers can remove or change.

• Despite publicly denying doing so, Google keeps blacklists to remove certain sites or prevent others from surfacing in certain types of results. These moves are separate from those that block sites as required by U.S. or foreign law, such as those featuring child abuse or copyright infringement, and from changes designed to demote spam sites, which attempt to game the system to appear higher in results.

• In auto-complete, the feature that predicts search terms as the user types a query, Google’s engineers have created algorithms and blacklists to weed out more-incendiary suggestions for controversial subjects, such as abortion or immigration, in effect filtering out inflammatory results on high-profile topics. 

• Google employees and executives, including co-founders Larry Page and Sergey Brin, have disagreed on how much to intervene on search results and to what extent. Employees can push for revisions in specific search results, including on topics such as vaccinations and autism. 

• To evaluate its search results, Google employs thousands of low-paid contractors whose purpose the company says is to assess the quality of the algorithms’ rankings. Even so, contractors said Google gave feedback to these workers to convey what it considered to be the correct ranking of results, and they revised their assessments accordingly, according to contractors interviewed by the Journal. The contractors’ collective evaluations are then used to adjust algorithms.
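None of this reporting includes actual code, but the two distinct moves the Journal describes, outright removal versus quiet demotion, are easy to illustrate. Below is a minimal, purely hypothetical Python sketch of a post-ranking blacklist pass; every name and number in it (the blacklists, the scores, the domains) is invented for illustration and implies nothing about how Google’s systems actually work.

```python
# Purely hypothetical sketch of blacklist-based filtering and demotion
# applied after an "organic" ranking pass. Invented names and numbers;
# this does not reflect any real search engine's implementation.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    score: float  # relevance score assigned by the core ranking algorithm

REMOVE_BLACKLIST = {"banned-example.com"}            # never shown at all
DEMOTE_BLACKLIST = {"disfavored-example.com": 0.1}   # score multiplier

def domain(url: str) -> str:
    # Crude domain extraction, good enough for this illustration.
    return url.split("/")[2] if "//" in url else url.split("/")[0]

def apply_blacklists(results: list[Result]) -> list[Result]:
    kept = []
    for r in results:
        d = domain(r.url)
        if d in REMOVE_BLACKLIST:
            continue  # silently dropped: the user never sees it
        r.score *= DEMOTE_BLACKLIST.get(d, 1.0)  # quietly pushed down the page
        kept.append(r)
    return sorted(kept, key=lambda r: r.score, reverse=True)

if __name__ == "__main__":
    organic = [
        Result("https://banned-example.com/story", 0.99),
        Result("https://disfavored-example.com/story", 0.95),
        Result("https://ordinary-example.com/story", 0.90),
    ]
    for r in apply_blacklists(organic):
        print(r.url, round(r.score, 3))
    # Prints ordinary-example first (0.9), disfavored-example last (0.095);
    # banned-example never appears at all.
```

The point of the sketch is the asymmetry of visibility: a removed result simply never appears, while a demoted one still exists but sinks below the fold, which is far harder for an outside observer to detect.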

This comes down to power and control, and the tech giants are now maturing into their predictable role as algorithmic gatekeepers of a new digital feudalism. Google has the power to shape your mind by limiting what you have access to, while at the same time wielding the power to destroy your livelihood with a tweak of an algorithm. Although a lot of the most nefarious stuff is still being conducted at the margins so the masses don’t realize what’s happening, stealth censorship will continue to be rolled out until the internet most people use becomes for all practical purposes an information gulag where nothing but shameless propaganda is pumped onto screens by hidden algorithms tweaked (for your own good) by billionaires.

A perfect example of this can be seen in how YouTube hides one of the most popular videos ever made regarding the attacks of September 11, 2001. The short clip, made by James Corbett, is titled 9/11: A Conspiracy Theory and has over 3.2 million views. Nevertheless, here’s what YouTube spits out if you search by the exact title of the video.

Keep scrolling and you still won’t find it. This isn’t YouTube helping users find the information they want; it’s YouTube hiding content from its users. Moreover, the only reason I’m aware of the censoring of this particular item is that I’m familiar with the video from years ago. You can be certain this sort of thing is more common than you realize and will only get worse.

The internet was supposed to free information while connecting people and ideas across borders. This promise is being lost with each passing day, and rectifying the situation is one of the most significant challenges we face. Should we fail, we can look forward to a future where humanity consists of little more than digitally lobotomized automatons responding like lab rats to algorithms created by tech CEOs and their national security state partners.


Facebook and YouTube remove posts naming CIA impeachment whistleblower

By Kevin Reed

Source: WSWS.org

Multiple media sources reported on Friday that the social media platforms Facebook and YouTube were removing posts that identified by name the CIA whistleblower behind the Congressional impeachment inquiry of President Donald Trump.

In an email statement, Facebook said, “Any mention of the potential whistleblower’s name violates our coordinating harm policy, which prohibits content ‘outing of witness, informant or activist’,” adding, “We are removing any and all mentions of the potential whistleblower’s name and will revisit this decision should their name be widely published in the media or used by public figures in debate.”

CNN also reported that YouTube issued a statement saying that it was using a combination of artificial intelligence software and human monitors to find and delete videos with the name of the “Ukrainegate” whistleblower. “The removals, the spokesperson added, would affect the titles and descriptions of videos as well as the video’s actual content,” the CNN report said.
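The statement suggests a matching layer, however sophisticated the classifiers behind it, that scans every searchable field of a video for a banned name. As a purely hypothetical sketch, assuming a simple term list and plain-text fields, the core check could look like the Python below; real platforms presumably layer machine-learning models and human review on top of anything this crude, and none of these names refer to real APIs.

```python
# Hypothetical sketch of name-based content flagging across a video's
# title, description, and (speech-to-text) transcript. The term list,
# function names, and fields are invented for illustration only.

import re

BANNED_TERMS = ["some banned name"]  # placeholder; not a real moderation list

_PATTERN = re.compile("|".join(re.escape(t) for t in BANNED_TERMS), re.IGNORECASE)

def flag_for_removal(title: str, description: str, transcript: str) -> bool:
    """Return True if any banned term appears in any searchable field."""
    return any(_PATTERN.search(field) for field in (title, description, transcript))

# Matching "the video's actual content" implies transcribing the audio first,
# which is presumably where the "artificial intelligence software" comes in.
print(flag_for_removal("Ordinary title", "Ordinary description",
                       "...and then some banned name appeared..."))  # True
```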

The World Socialist Web Site has independently confirmed that Facebook is deleting posts containing the name of alleged CIA whistleblower Eric Ciaramella.

Facebook’s claim that any content posted on its platform naming Ciaramella constitutes “outing” the whistleblower is absurd. The alleged identity of the career CIA analyst who filed a complaint regarding the July 25 phone call between President Trump and Ukrainian President Volodymyr Zelensky has been known since October 30 when the pro-Republican Real Clear Politics website published his name.

When his name was published by Real Clear Politics, the whistleblower’s attorneys—in typical CIA fashion—said they could “neither confirm nor deny” that Ciaramella was their client.

Ciaramella is a plausible candidate for being the whistleblower, given his background as a registered Democrat and CIA analyst with expertise in Ukraine and Russia. He worked under both Obama National Security Advisor Susan Rice and Trump National Security Advisor H.R. McMaster. In mid-2017 he was sent back to the CIA amid accusations that he was leaking anti-Trump information to the media.

While Ciaramella’s name has been widely circulated by Republican political figures and right-wing news sites (former CIA analyst and Trump aide Fred Fleitz said “everyone knows who he is”), the Democrats and their allies in the media at the New York Times, Washington Post and major television networks have not made his name public.

Even an article in the New York Times on Friday that reported on Facebook’s censoring of posts by the right-wing website Breitbart did not include Ciaramella’s name. By scrubbing posts mentioning allegations that are widely shared and reported, Facebook and YouTube are joining these corporate media organizations in blocking the public from having access to important information.

The latest heavy-handed social media censorship—so obviously being carried out in the service of the Democratic Party impeachment inquiry and the CIA—actually helps the Trump administration, the Republican Party and the extreme right-wing political forces defending the White House, allowing them to adopt the false posture of advocating free flow of information, even as Trump continues to demonize the media as the “enemies of the people.”

The mass scrubbing of all social media content by Facebook and YouTube that mentions the name Eric Ciaramella is part of the broader censorship efforts by the technology monopolies, in collaboration with the intelligence state, and sets the stage for even more draconian attacks on freedom of expression.

This must be seen within the context of the drive by a substantial section of the ruling establishment for the social media platforms to “step up to the plate” and, as Hillary Clinton said last week, take down “false, deceptive or deliberately misleading content” or “pay a price.” Leading figures within the Democratic Party, including presidential candidate Elizabeth Warren, Congresswoman Alexandria Ocasio-Cortez and Congresswoman Rashida Tlaib, have been campaigning for social media censorship that will block what they call “untruthful statements.”

As explained on the World Socialist Web Site, the increasing calls for censorship on social media are part of a protracted campaign by the US intelligence apparatus, under conditions of a growing movement of the working class and young people and increasing interest and support for socialism, to suppress left-wing, antiwar and progressive political viewpoints.

Furthermore, the WSWS has pointed out that what is determined as “fake” or “real” is not to be decided by the government or giant tech monopolies: “All the dishonesty of the campaign for internet censorship is contained in the failure to answer, much less consider, one central question: Who is to determine what is true and what is false?”

The publication of the name of the CIA analyst who submitted his complaint memo to the heads of the House and Senate Intelligence Committees in August is not a crime. In fact, his identity is of substantial consequence, given that his complaint became the starting point of an effort to remove a sitting president through impeachment.

Automatons – Life Inside the Unreal Machine

By Kingsley L. Dennis

Source: Waking Times

/ɔːˈtɒmət(ə)n/

noun

a moving mechanical device made in imitation of a human being.

a machine which performs a range of functions according to a predetermined set of coded instructions.

used in similes and comparisons to refer to a person who seems to act in a mechanical or unemotional way.

“Don’t you wish you were free, Lenina?”

“I don’t know what you mean. I am free. Free to have the most wonderful time. Everybody’s happy nowadays.”

He laughed. “Yes, ‘Everybody’s happy nowadays.’ We have been giving the children that at five. But wouldn’t you like to be free to be happy in some other way, Lenina? In your own way, for example; not in everybody else’s way.”

“I don’t know what you mean,” she repeated.

Aldous Huxley, Brave New World

Are we turning into a mass of unaware sleepwalkers? Our eyes are seemingly open, and yet we live as if asleep, the dream becoming our waking life. It seems that more and more people, in the highly technologized nations at least, are in danger of succumbing to the epidemic of uniformity. People follow cycles of fashion and wear stupid clothes when they think it is the ‘in thing,’ and hyper-budget films take marketing to a whole new level, forcing parents to rush out to buy the merchandise because their kids are screaming for it. And if one child in the class doesn’t have the latest toy like all their classmates, then they are ostracized for this lack. Which means that poor mummy and daddy have to make sure they get their hands on these gadgets. Put the two items together – zombie-like sleepwalking and uniformity – and what do you get? Welcome to the phenomenon of Black Fridays, which have become the latest manifestation of national Zombie Days.

Unless you’ve been living in a cave somewhere (or living a normal, peaceful existence) then you will know what this event is – but let me remind you anyway of what a Black Friday is. It is a day when members of the public are infected with the ‘must buy’ and ‘act like an idiot’ virus that turns them into screaming, raging hordes banging on the doors of hyper-market retailers hours before they open. Many of these hordes sleep outside all night to get early entry. Then when the doors are finally opened they go rushing in, fighting and screaming as if re-enacting a scene from Game of Thrones. Those that survive the fisticuffs come away with trolleys full of boxes too big to carry. This display of cultural psychosis, often dubbed idiocracy, is a condition nurtured by societies based on high consumption with even higher inequalities of wealth distribution. In other words, a culture conditioned to commodity accumulation will buy with fervour when things are cheap: although conditioned to buy, people lack the financial means to satiate the desire. Many people suffer from a condition psychologists have named ‘miswanting,’ which means that we desire things we don’t like and like things we don’t desire. What this is really saying is that we tend to ‘want badly’ rather than having genuine need. What we are witnessing in these years is an epidemic of idiocracy, and it’s propagating faster than post-war pregnancies. And yet we are programmed by our democratic societies not to think differently. In this respect, many people also suffer from a condition known as ‘confirmation bias.’

Confirmation bias is our conditioned tendency to pick and choose the information that confirms our pre-existing beliefs or ideas. Two people may look at the same evidence and yet interpret it according to how it fits into and validates their own thinking. That’s why so many debates go nowhere: people generally don’t wish to be swayed away from ideas they have invested so much time and effort in upholding. It’s too much of a shock to realize that what we thought was true, or valid, is not the case. To lose the safety and security of our ideas would be too much for many people. It is now well understood in psychology that we like to confirm our existing beliefs; after all, it makes us feel right!

Many of our online social media platforms operate on this principle, picking and choosing the items of news, events, and so on that their algorithms have deemed we are most likely to want to see. As convenient as this may seem, it is unlikely to be in our best interests in the long term. The increasing automation of the world around us is set to establish a new ecology in our hyperreality. We will be forced to acknowledge that algorithms and intelligent software will soon, if they aren’t already, be running nearly everything in our daily lives. Historian Yuval Harari believes that ‘the twenty-first century will be dominated by algorithms. “Algorithm” is arguably the single most important concept in our world. If we want to understand our life and our future, we should make every effort to understand what an algorithm is.’1 Algorithms already follow our shopping habits, recommend products for us, recognize patterns in our online behavior, help us drive our cars, fly our planes, trade our economies, coordinate our public transport, organize our energy distribution, and a lot, lot more that we are just not really aware of. One of the signs of living in a hyperreality is that we are surrounded by an invisible coded environment, written in languages we don’t understand, making our lives more abstracted from reality.
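Harari’s injunction to understand what an algorithm is can be made concrete with a toy model. The sketch below is purely illustrative – the story list, the recommend function, and the explore parameter are all invented for this example – but it shows, under those assumptions, how a feed that learns from clicks drifts toward serving only what a reader already believes: confirmation bias, automated.

    # Toy model of an engagement-driven news feed (illustrative only).
    # The feed "learns" a reader's leaning from clicks, then mostly serves
    # stories that match it, narrowing what the reader ever sees.
    from collections import Counter
    import random

    STORIES = [  # (headline, leaning) pairs, invented for this example
        ("tax cuts are working", "right"), ("tax cuts have failed", "left"),
        ("open the borders", "left"), ("close the borders", "right"),
        ("markets are booming", "right"), ("wages are stagnant", "left"),
    ]

    def recommend(click_history, explore=0.1):
        """Serve a story matching past clicks, with rare exploration."""
        if click_history and random.random() > explore:
            favourite = Counter(click_history).most_common(1)[0][0]
            return random.choice([s for s in STORIES if s[1] == favourite])
        return random.choice(STORIES)

    # A reader who only ever clicks on stories confirming one leaning.
    clicks = []
    for _ in range(100):
        headline, leaning = recommend(clicks)
        if leaning == "right":      # the reader's pre-existing belief
            clicks.append(leaning)  # each click trains the feed further

    print(Counter(clicks))  # the feed converges on a single worldview

Nothing in this sketch is intelligent; it is a few lines of bookkeeping. That is rather the point – the invisible coded environment does not need to be sophisticated to shape what we see.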

Modern societies are adapting to universal computing infrastructures that will usher in new arrangements and relations. Of course, these are only the early years, and there is already plenty of uncertainty and unpredictability. As it is said, industrialization didn’t turn us into machines, and automation isn’t going to turn us into automatons – which is more or less correct; after all, being human is not that simple. Yet new dependencies and relations will form as algorithms continue to create and establish what can be called ‘pervasive assistance.’ Again, it is a question of being alert so that we don’t feel compelled simply to give ourselves over to our algorithms. The last thing we want is a bunch of psychologists earning yet more money from some new disease called ‘algorithmic dependency syndrome’ or similar.

It needs stating that by automating the world we also run the risk of being distanced from our own responsibilities. And this also implies, importantly, the responsibility we have to ourselves – to transcend our own limitations and to develop our human societies for the better. We should not forget that we are here to mature as a species, and we should not allow the world of automation to distract us from this. Literature and film have already portrayed such possibilities. Examples include David Brin’s science-fiction novel Kiln People (2002) and the film Surrogates (2009, adapted from Robert Venditti’s comic series), both of which show how automation may provide a smokescreen for people to disappear behind their surrogate substitutes.

Algorithms are the new signals that code an unseen territory all around us. In a world of rapidly increasing automation and digital identities we’ll have to keep our wits about us in order to retain what little of our identities we have left. We want to make sure that we don’t get lost in our emoji messages and our flirtatious smileys; or, even worse, lose our lives in the ‘death cult’ of the selfie. Identities by their very nature are constructs; in fact, we can go so far as to call them fake. They are constructed from layers of ongoing conditioning with which a person identifies. This identity functions as a filter to interpret incoming perceptions. The limited range of perceptions available to us almost guarantees that identities fall into a knowable set of archetypes. We would be wise to remember that who we are is not always the same as what we project. And yet some people on social media are unable to distinguish their public image from their personal identity, which starts to sound a bit scary. The philosopher Jean Baudrillard, never averse to saying what he thought, stated it another way:

We are in a social trance: vacant, withdrawn, lacking meaning in our own eyes. Abstracted, irresponsible, enervated. They have left us the optic nerve, but all the others have been disabled… All that is left is the mental screen of indifference, which matches the technical in-difference of the images.2

Baudrillard would probably be the first to agree that breathing is often a disguise to make us think that someone is alive. After all, don’t we breathe automatically without thinking about it?

We must not make the human spirit obsolete just because our technological elites are dreaming of a trans-human future. Speaking of such futures, inventor and futurist Ray Kurzweil predicts that in the 2030s human brains will be able to connect to the cloud and use it just as we use cloud computing today. That is, we will be able to transfer emails and photos directly from the cloud to our brains, as well as back up our thoughts and memories. How will this futuristic scenario be possible? Well, Kurzweil says that nanobots – tiny robots constructed from DNA strands – will be swimming around in our brains. And the result? According to Kurzweil we’re going to be funnier, sexier, and better at expressing our loving sentiments. Well, that’s okay then – nanobot my brain up! Not only will being connected to the computing cloud make us sexier and funnier humans, it will even take us closer to our gods, says Kurzweil: ‘So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world – it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.’ It’s hard to argue with such a bargain – a few nanobots in the brain to become godlike? I can imagine a lot of people will be signing up for this. There may even be a hefty monthly charge for those wanting more than 15GB of back-up headspace. Personally, I prefer the headspace that’s ad infinitum and priceless. I hope I’m not in the minority.

Looking at the choices on offer so far, it seems there is the zombie option, which comes with add-on idiocracy (basic model), and the trans-human nanobot sexy-god upgrade (pricey). But then let’s not forget that in an automated world it may be the sentient robots that come out on top. Now, that would be an almost perfect demonstration of a simulation reality.

Life in Imitation

There are those who believe that self-awareness is going to be the end game of artificial intelligence – the explosive ‘wow factor’ that really throws everything into high gear. The new trend is deep machine learning, to the point where machines will program not only themselves but also other machines. Cognitive computer scientists are attempting to recapture the essence of human consciousness in the hope of back-engineering this complexity into machine code. It’s a noble endeavor, if only for their persistence. The concern here is that if machines do finally achieve sentience, then the next thing we’ll need to roll out will be machine psychologists. Consciousness, after all, comes at a price. There is no free lunch when it comes to possessing a wide-awake brain. With conscious awareness come responsibilities, such as values, ethics, morality, compassion, forgiveness, empathy, goodness, and good old-fashioned love. And I personally like the love part (gives me a squishy feeling every time).

It may not actually be the sentient robots we need to worry about; it’s the mindless ones we need to be cautious of (of course, we could say the same thing about ourselves). One of the methods used in training such robots is, in the words of their trainers, to provide them with enough ‘intrinsic motivation.’ Not only will this help the robots to learn their environments, it is also hoped that it will foster attention in them and so develop sufficient situational awareness. If I were to write a science-fiction scenario on this I would make it so that the sentient robots end up being more human than we are, while humans turn into their automated counterparts. Funny, maybe – but more in the funny-bone-hurting sort of way than the laugh-out-loud variety. Or perhaps it’s already been done. It appears that we are attempting to imbue our devices with the very qualities we are still striving to possess ourselves. Humans are naturally vulnerable; it is part of our organic make-up. Whatever we create may inherit those vulnerabilities. However, this is not a discussion of the pros and cons of smart machines and artificial intelligence (there are many more qualified discussions of that huge topic).

While we are creating, testing, worrying, or arguing over machines and their like, we are taking our attention away from the center – ourselves. The trick to surviving in the ‘unreal machine’ of life is to become more human, the very antithesis of the robotic. Technology can assist us in interacting and participating more fully with our environments. The question, as always, is the uses to which such tools are put – and by whom. Such tools can help us realize our dreams, or they can entrap us in theirs. Algorithms, smart machines, intelligent infrastructure, and automated processes: these are all going to come about and be part of our transforming world. And in many respects they will make life more comfortable for us. Yet within this comfort zone we still need to strive for our betterment. We should not allow an automated environment to deprive us of our responsibility, and need, to find meaning and significance in our world. Our technologies should force us to acknowledge our human qualities and uplift them, not turn us into an imitation of our machines.

Another metaphor for the simulated ‘robotic’ creature is the golem. The golem legend speaks of a creature fashioned from clay, a Cabbalistic motif that has appeared frequently in literary and cinematic form (Frankenstein being its most famous descendant). The Cabbalistic automaton that is the golem – the word means ‘unformed’ – has often been used to dramatize the struggle between mechanical limitation and human feeling. This struggle depicts the tension between cogs and consciousness; between entrapment in matter and the spirit of redemption and liberation. It is a myth that speaks of the hubris of humanity fashioning its own creatures and ‘magically’ bestowing life upon them: the act of creating a ‘sacred machine’ from the parts and pieces of a material world, imbuing it with human traits, and, through this human likeness, requiring it to fulfil human chores and work as a slave. Sound familiar? The Cabbalistic humanoid – the sentient robot – is forever doomed, almost like the divine nature of Man trapped within the confines and limitations of a material reality. It represents the conflict of being torn between a fixed fate and freedom.

Our material reality may be the ultimate unreal machine. We are the cogs, the clay golem, the imperfect creature fashioned by another. Our fears of automation may only be a reflection of our own automation. We struggle toward some form of release whilst unaware that the binds that mechanize us are forever tightening.

We have now shifted through the zombie-idiocracy model (basic) and the trans-human nanobot sexy-god model (pricey), to arrive at the realization that it is us – and not our sentient robots – who are likely to be the automatons (tragic). And this is the biblical fall from grace; the disconnection from our god(s). We have come loose from Central Source and we have lost our way.

We are now living in the hyperreal realm where zombies, cyborgs, and golem robots all reside – but it is not the place for the genuine human. Things are going to have to change. Not only do we have to retain our humanity, we also must remain sane. With our continuing modern technologies, our augmented reality and bioengineering, the difference between fiction and reality will blur even further. And this blurring is likely to become more prominent as people increasingly try to reshape reality to fit around their own imaginative fictions. Staying sane, grounded, and balanced is going to be a very, very good option for the days to come.

We are going to be sharing our planetary space with the new smart machines. I am reminded of the Dr. Seuss book Horton Hears a Who!, with its refrain, ‘a person’s a person, no matter how small.’ Size doesn’t count – but being human does. And staying human in these years will be the hard task allotted to us.

Phantom Performances – The Rise of the Spectacle

By Kingsley L. Dennis

Source: Waking Times

/ˈspɛktək(ə)l/

noun

a visually striking performance or display

an event or scene regarded in terms of its visual impact

“Now the death of God combined with the perfection of the image has brought us to a whole new state of expectation. We are the image.” ~John Ralston Saul, Voltaire’s Bastards

“Magical thinking is the currency not only of celebrity culture, but also of totalitarian culture.” ~Chris Hedges, Empire of Illusion

Welcome to the spectacle – or rather, to the kind of spectacle that has become the face of entertainment pervading our westernized cultures. The spectacle succeeds not so much by fooling us into believing its lies are real, but because it is we who ask to be fooled. We seek to suspend our sense of reality, to pursue a space of escape. The spectacle pulls us in because we lend our willingness to its agenda. If we are honest, in this post-truth age, we will admit to living in an age of spectacle. And it is from this that many of us receive our interpretation of reality. Since the middle of the 20th century the ‘western spectacle’ has taken the form of media advertisement and propaganda. We may think that we’ve only recently arrived at the age of the spectacle, where Disneylandification is becoming the norm, Super Bowls are interspersed with scantily-clad singers, and TV programs appear in the slots between advertisements. Yet the whole spectacle show has been a form of function creep ever since telecommunications first emerged as a social phenomenon. The image has been with humanity since the first dawn of our arising, from cave paintings to hieroglyphics to cuneiform clay tablets. The major difference is that today the spectacle of the image has not only gone global; it has also gotten inside our heads.

Western cultures especially (and the US specifically) have now made the image, the spectacle, and hence the illusion so grand, so vivid, and so persuasively realistic that they are becoming our basis of reality. We swing from one illusion to its alternative, which is yet another grand spectacle; just as we swing from the political left to the right, believing each side is distinctly different. Yet each is part of the same bubble that customizes our lives – they form part of our news, our heroes, our tragedies, and our dreams. We now serve a mosaic of ideals carefully crafted as a patchwork of phantom performances. Nothing is ever real anymore except the painful extremes that pervade our daily existence: the violence, the suffering, the deprivation, the inequality, the disease. Only these fragments of great pain remain real, and from them many of us seek refuge in a plenitude of diversions, distractions, and triviality.

Western civilization has chosen to play itself out upon a grand stage where the performance – of invented storylines and scripts – runs the show. We move through social realities that are an entanglement of signs, virtual connections, and social media status. It’s all about who is going to be the next ‘influencer.’ We are encouraged to project back into the world our entertainment-mediatized fantasies. People begin to act out their imaginary landscapes, often in violent and distorted ways, as when young students massacre their classmates before going to eat at McDonald’s. This is the hyperreal that distorts a stable reality, making it harder to gain a grounded perspective on things. People are increasingly being guided by the false totems of media-militarized entertainment.

The media spectacle gives us our modern guiding images, much as in the Middle Ages the images of religious torment or salvation depicted in stained glass windows and paintings acted to control and influence the social behavior of our ancestors. For many of us the white-bearded god above is dead, so we have media depictions of heroes, adventurers, MacGyvers, celebrity cosmetic makeovers, beauty pageants, talk shows and reality television to be our social guides. An illusory sensate reality has been erected that runs on pseudo-lives and phantom performances. Such phantom performances mask our personal failures and conveniently hide them behind a curtain of the unreal. People prefer to watch the rich and famous on television rather than face the domestic unhappiness of their own lives. Why have plain ice when you can have bubblegum-flavored ice-cream?

Luckily for those of us who live in the west, we inhabit a world of easy correction where we can make ourselves better if we buy certain products, ingest certain foods, and hang out in the right yoga gyms. For every situation there is seemingly a commercial solution. We have not been abandoned, after all. In the realm of hyperreality, our fantasies are no longer an impediment to success. On the contrary, our fantasies are the portals through which we enter. All we need is for the world of the media to give us our dream. Everybody has talent, as the reality shows tell us – ‘Britain’s Got Talent,’ ‘America’s Got Talent’ – in fact, we’ve all got talent! We are all of us hidden, unique performers, and the world ‘out there’ is begging for our arrival. This is not to be confused with the manipulation by greedy commercial enterprises that are ready to discard you as soon as your ‘talent’ no longer sells.

Yet the truth of the matter is that the spectacle of celebrity culture seeks commodities, not real individuals or souls. It doesn’t want us to seek any form of transcendence, illumination, or real growth. It is a world that seeks only those who feed the phantom and encourage others to do the same. It is the ‘real’ that gets pushed into a black hole – to become a figment of the imagination, whilst imaginary dreams take its place. Celebrity culture thrives on the very lack of inner reflection. There is no ‘going within’ unless it is a form of medication going down our throats. If we are brutally honest, the celebrity spectacle is an ugly specter that can be as cruel as it is superficial.

The Spectacle of Celebrity Culture

No one achieves celebrity status on their own. It is a stage performance that requires a horde of cultural enablers: media, marketers, promoters, agents, handlers, and a host of hungry and gullible people. It is a veritable stage of actors, with each person in it to gain something for themselves. They seek attention, satisfaction, fame, wealth, or a combination of these. Celebrity culture has come to dominate how many of us define our sense of belonging. It has come to define how we relate to the world around us, and in this respect it has disfigured our notions of social belonging and community. Celebrity culture funds and feeds the movies inside our heads as we invent our roles and behavior. It is a culture in which very few participants are ever real, even for a day.

We idolize celebrities and often project them as idealized forms of ourselves. And yet through this substitution we move further away from any real self-actualization. The transcendent – the Real – does not do substitutes. By throwing our fantasies onto others we diminish our own power. In the words of one serious journalist:

We are chained to the flickering shadows of celebrity culture, the spectacle of the arena and the airwaves, the lies of advertising, the endless personal dramas, many of them completely fictional, that have become the staple of news, celebrity gossip, New Age mysticism, and pop psychology… in contemporary culture the fabricated, the inauthentic, and the theatrical have displaced the natural, the genuine, and the spontaneous, until reality itself has been converted into stagecraft.1

We are subtly pushed through the well-structured stagecraft whilst all the time thinking that it is real. Our contemporary ‘death of the gods’ has been replaced by a divine adoration of celebrities and celebrity culture. Celebrity items, like holy relics, are paraded, idolized, and sold for vast sums. People rush for autographs, only to sell them later on eBay for an unhealthy profit. Celebrity personal possessions are sold off at prestigious auction houses for astronomical prices, so that aging people can wear the clothes of their idols: the glitzy suits Elvis wore on stage in Las Vegas before dying on his bathroom floor; the dress Marilyn Monroe wore to show her knickers to the world above a subway vent. Everything is up for grabs – the profane is made sacred, and then sacrificed as celebrity talismans. It all engenders a performance of hysteria, leading sometimes to stalking, to what is nowadays referred to as ‘trolling,’ and even to celebrities’ private photos being hacked and shared online. It has happened to Emma Watson, Jennifer Lawrence, Kate Upton, Jessica Alba, Kate Hudson, Scarlett Johansson… and the list goes on, and on, and on.

The world of celebrity culture thrusts us into a moral void. People are valued for their appearance and their skin-deep beauty rather than their humanity. Such a culture focuses upon onanistic desires and means of self-gratification. The cult of self ‘has within it the classic traits of psychopaths: superficial charm, grandiosity, and self-importance; a need for constant stimulation, a penchant for lying, deception, and manipulation, and the inability to feel remorse or guilt.’2 The cult of self also promotes the right to get whatever we wish, and celebrity media plays into this, often at the cost of the celebrity, who suffers social media harassment and online trolling. Celebrity public life is not a sacred space; it has become a theatre of performance open to all spectators. And those spectators who surround themselves with celebrity culture tend to live in the present, fed by an endless stream of packaged information. They live on credit promises, ignorant of the future prospect of unmanageable debt. They are hostage to a culture that keeps them enthralled, like a television commercial replete with pleasing jingles. They navigate their purchases through well-known brands, eyeing the famous logos as guides. It is an image-saturated reality, bright and tantalizing, offering comfort and satisfaction on every level – until the credit runs out. Then the person becomes an outlaw to the very system that fattened them up like a foie gras duck.

These are the trivial diversions that for many are necessary, and which exist in cultures that prize shallow entertainment above substance. We may wonder whether consumerist celebrity culture is a compensation for the loss of our true freedom of spirit and well-being. And celebrities too are often trapped within their own fairy-tale prisons. They are skillfully controlled by their handlers and pushed in front of the media – all this to feed the insatiable appetites of the thirsty spectators who swarm upon celebrity culture. We are tantalizingly shown that even we, the humble spectators, can triumph in fame through the lens of reality television. The celebrity machinery oils itself on the media creation of third- and fourth-rate celebrities who have their fifteen minutes of fame – crammed together on desert islands, stuffing insects into their mouths as they bad-mouth their once beloved ‘best friend’ and vote them off the show. Reality survival, it seems, comes at a cost. And when they finally emerge into the ‘real world’ of the hyperreal, they throng and mingle with other reality stars under the glare of the media spotlight in the vain hope that together they can populate an illusory world of celebrity.

The world of reality television is another limb on the body of phantom performance. In the last decade a multitude of reality shows have cropped up on our television screens, and they all have one thing in common – they involve being constantly watched. Popular shows such as Big Brother put strangers together to live under round-the-clock surveillance. These strangers are even videoed in their beds as they sleep, or as they fondle and kiss other contestants. Sex lives are ogled alongside the tears and on-screen breakdowns. Then the television psychologists are wheeled out to offer ‘expert commentary’ on the contestants’ states for mass consumption. Underneath all this glamour and glitz is the subtle message that intrusive surveillance is a normal feature of contemporary societies. In fact, it even masquerades as something cool that can be shared online, and which can make us famous. The brute reality is that such shows normalize what would otherwise be blatant, unconstitutional intrusion, making surveillance not only routine but a potentially enjoyable part of our modern lives. We are being conditioned into monitoring and sharing our own lives for others to see. Our phantom performances can make any one of us into an enviable star.

Social media is now rife with home-grown videos in which everyone from toddler to teenager to retiree makes their performances visible to the image-hungry collective. Selfies too are the new fashionable rage as we perform in front of ourselves. This trend has become so pervasive that the number of selfie-related deaths increases each year. In 2015 more people died from taking selfies than from shark attacks.[i] A dedicated Wikipedia page has been established to record some of the ongoing ‘selfie deaths.’ Here are a few examples:

Two young men died in the Ural Mountains after they pulled the pin from a live hand grenade to take a selfie. The phone with the picture survived as evidence of the circumstances of their deaths. (Russia, January 2015)

An 18-year-old died when she attempted to take the ‘ultimate selfie,’ posing with a friend on top of a train in the north-eastern city of Iași, when her leg touched a live overhead wire that electrocuted her with 27,000 volts. (Romania, May 2015)

A 19-year-old from Houston died after trying to take an Instagram selfie while holding a loaded gun to his head. He accidentally fired the gun and shot himself in the throat. (USA, September 2015)

A 17-year-old student, Andrey Retrovsky from Vologda, Russia, fell to his death attempting to take a selfie while hanging from a rope from a nine-story building. The rope snapped. Retrovsky was known for taking ‘extreme’ selfies and posting them to his Instagram account. (Russia, September 2015)

Selfie deaths, it seems, are global – and not a rare occurrence. Our phantom performances come at a cost. In a world where the image is iconic, more and more people are losing themselves in a reality where a sense of achievement comes from catching the ‘ultimate selfie.’

The drive for inner fulfilment, transcendence, and growth has been waved aside in favor of the pixelated image. We fear not being seen. We dread being anonymous. Even being a spectral ghost is preferable to being dead.

We Are the Image

The new perspective on the world is pixelated. We are awash with images without substance, images routinely fetishized as iconic. Signs lack immanence; they are fleeting and transient like never before. That is why corporations spend millions trying to find a logo that will stick around long enough to be implanted into our minds. Images are becoming signs of the disappearance of the real. Images are the new believable reality; no one cares that the original behind the image has quietly slipped away. The world exists as if in a play of phantom appearances. The image has taken center stage within the space of the new real. We are now the image.

Yet the danger here is that in being given the image, with its glamour and glitz, we are in return giving up the critical and intellectual tools that help us cope with a complex world. Where once we had the faculty of separating illusion from reality, we now have a simplified hyperreal world where everything can be explained away by a plethora of post-truth platitudes. Does it even matter anymore that Las Vegas, with its mock Eiffel Tower and its pseudo-canals of Venice, is far from the reality of France or Venice? How many people care? Or that the fantasy worlds within the various Disney theme parks are merging with the entertainment-saturated lives outside them? Would it truly matter if we were all living within a controlled environment, as depicted in the film The Truman Show? Or maybe, just maybe, such films are actually trying to tell us something – to wake us up?

The danger now is that our cultural spectacles – our celebrity culture and spectral images – are making any other alternative seem dull to us. It may be that in an age of simplified gratification any complex reality is boring. What the ‘real’ presents us with may no longer be enough. In its place we are perhaps seeking a false magic.

We have lost touch with that essential something that can work like magic in our lives. As one thinker recently stated:

We live in changing times whereby humanity is undergoing a transformation… We need to understand phenomena at deeper levels, and not just accept what we are told, or what is fed to us through well-structured social institutions and channels. We must learn to accept that our thinking is a great tangible spiritual force for change.2

The notion that our ideas, our vision, our projections onto the world can be a ‘great tangible spiritual force for change’ is eluding us. Never before has it been so important to trust in the power of the human spirit, and to put forth, with honesty and integrity, our innate human power. The alternative is that we slide into the slipstream of our own phantom performances – we become the image.


Extract from the book Bardo Times: hyperreality, high-velocity, simulation, automation, mutation – a hoax?

Endnotes 

Hedges, Chris. 2010. Empire of Illusion: The End of Literacy and the Triumph of Spectacle. New York: Nation Books, p. 15.

Gulbekian, S.E. 2004. In the Belly of the Beast: Holding Your Own in Mass Culture. Charlottesville, VA: Hampton Roads, p. 251.

[i] See http://www.telegraph.co.uk/technology/11881900/More-people-have-died-by-taking-selfies-this-year-than-by-shark-attacks.html

How To Defeat The Empire

By Caitlin Johnstone

Source: CaitlinJohnstone.com

One of the biggest and most consistent challenges of my young career so far has been finding ways to talk about solutions to our predicament in a way that people will truly hear. I talk about these solutions constantly, and some readers definitely get it, but others will see me going on and on about a grassroots revolution against the establishment narrative control machine and then say “Okay, but what do we do?” or “You talk about problems but never offer any solutions!”

Part of the difficulty is that I don’t talk much about the old attempts at solutions we’ve already tried that people have been conditioned to listen for. I don’t endorse politicians, I don’t advocate starting a new political party, I don’t support violent revolution, I don’t say that capitalism contains the seeds of its own destruction and the proletariat will inevitably rise up against the bourgeoisie, and in general I don’t put much stock in the idea that our political systems are in and of themselves sufficient for addressing our biggest problems in any meaningful way.

What I do advocate, over and over and over again in as many different ways as I can come up with, is a decentralized guerrilla psywar against the institutions which enable the powerful to manipulate the way ordinary people think, act and vote.

I talk about narrative and propaganda all the time because they are the root of all our problems. As long as the plutocrat-controlled media are able to manufacture consent for the status quo upon which those plutocrats built their respective empires, there will never be the possibility of a successful revolution. People will never rebel against a system while they’re being successfully propagandized not to. It will never, ever happen.

Most people who want drastic systemic changes to the way power operates in our society utterly fail to take this into account. Most of them are aware to some extent that establishment propaganda is happening, but they fail to fully appreciate its effects, its power, and the fact that it’s continually getting more sophisticated. They continue to talk about the need for a particular political movement, for this or that new government policy, or even for a full-fledged revolution, without ever turning and squarely focusing on the elephant in the room: none of these things will ever happen as long as most people are successfully propagandized into being uninterested in making them happen.

It’s like trying to light a fire without first addressing the problem that you’re standing in pouring rain. Certainly we can all agree that a fire is sorely needed, because it’s cold and wet and miserable out here, but we’re never going to get one going while the kindling is getting soaked and we can’t even get a match lit. The first order of business must necessarily be to find a way to protect our fire-starting area from the downpour of establishment propaganda.

A decentralized guerrilla psywar against the propaganda machine is the best solution to this problem.

By psywar I mean a grassroots psychological war against the establishment propaganda machine, with the goal of weakening public trust in pro-empire narratives. People only believe sources of information that they trust, and propaganda cannot operate without belief. Right now trust in the mass media is at an all-time low, while our ability to network and share information is at an all-time high. Our psywar is fought with the goal of using our unprecedented ability to circulate information to continue to kill public trust in the mass media, not with lies and propaganda, but with truth. If we can expose journalistic malpractice and the glaring plot holes in establishment narratives about things like war, Julian Assange, Russia, etc., we will make the mass media look less trustworthy.

By decentralized I mean we should each take responsibility for weakening public trust in the propaganda machine in our own way, rather than depending on centralized groups and organizations. The more centralized an operation is, the easier it is for establishment manipulators to infiltrate and undermine it. This doesn’t mean that organizing is bad, it just means a successful grassroots psywar won’t depend on it. If we’re each watching for opportunities to weaken public trust in the official narrative makers on our own personal time and in our own unique way using videos, blogs, tweets, art, paper literature, conversations and demonstrations, we’ll be far more effective.

By guerrilla I mean constantly attacking different fronts in different ways, never staying with the same line of attack for long enough to allow the propagandists to develop a counter-narrative. If they build up particularly strong armor around one area, put it aside and expose their lies on an entirely different front. The propagandists are lying constantly, so there is never any shortage of soft targets. The only consistency should be in attacking the propaganda machine as visibly as possible.

As far as how to go about that attack, my best answer is that I’m leading by example here. I’m only ever doing the thing that I advocate, so if you want to know what I think we should all do, just watch what I do. I’m only ever using my own unique set of skills, knowledge and assets to attack the narrative control engine at whatever points I perceive to be the most vulnerable on a given day.

So do what I do, but keep in mind that each individual must sort out the particulars for themselves. We’ve each got our own strengths and abilities that we bring to the psywar: some of us are funny, some are artistic, some are really good at putting together information and presenting it in a particular format, some are good at finding and boosting other people’s high-quality attacks. Everyone brings something to the table. The important thing is to do whatever will draw the most public interest and attention to what you’re doing. Don’t shy away from speaking loud and shining bright.

It isn’t necessary to come up with your own complete How It Is narrative of exactly what is happening in our world right now; with the current degree of disinformation and government opacity, that’s too difficult to do with any completeness anyway. All you need to do is wake people up in as many ways as possible to the fact that they’re being manipulated and deceived. Every newly opened pair of eyes makes a difference, and anything you can do to help facilitate that is energy well spent.

Without an effective propaganda machine, the empire cannot rule. Once we’ve crippled public trust in that machine, we’ll exist in a very different world already, and the next step will present itself from there. Until then, the attack on establishment propaganda should be our foremost priority.

How to Avert a Digital Dystopia

By Jumana Abu-Ghazaleh

Source: OneZero

“What I find [ominous] is how seldom, today, we see the phrase ‘the 22nd century.’ Almost never. Compare this with the frequency with which the 21st century was evoked in popular culture during, say, the 1920s.”

—William Gibson, famed science-fiction author, in an interview on dystopian fiction.

The 2010s are almost over. And it doesn’t quite feel right.

When the end of 2009 came into view, the end of the 2000s felt like a relatively innocuous milestone. The current moment feels so much more, what’s the word?

Ah, yes: dystopian.

Looking back, “dystopia” might have been the watchword of the 2010s. Black Mirror debuted close to the beginning of the decade, and early in its run, it was sometimes critiqued for how over-the-top it all felt. Now, at the end of the decade, it’s regularly critiqued as made obsolete by reality.

And it’s not just prestige TV like Black Mirror reflecting the decade’s mood of incipient collapse. Of the 2010s’ top 10 highest-grossing films, at least half by my count involve an apocalypse either narrowly averted or, in fact, taking place (I’m looking at you, Avengers movies).

People have reasons to wallow. I get it. The existential threat of climate change alone — and seeing efforts to mitigate it slow down precisely as it becomes more pressing — could fuel whole libraries of dystopian fiction.

Meanwhile, our current tech landscape — the monopolies, the wild spread of disinformation, the sense that your most private data could go public whenever, with no recourse, all the things that risk making Black Mirror feel quaint — truly feels dystopian.


Since no one in a position to actually do something about our dystopian reality seems to be admitting it — no business leaders, politicians or legacy media — it makes sense that you might get catharsis of acknowledgment from pop culture instead. And yet, the most popular end-of-the-world fiction isn’t about actual imminent threats from climate or tech. It’s about Thanos coming to snap half of life out of existence. Or Voldemort threatening to destroy us Muggles.

Maybe that kind of pop culture, which acknowledges dystopia but not the actual threats we currently face, gives us a feeling of control: Sure, Equifax could leak my social security number and face zero consequences, but there are no Hunger Games. Wow — it really could be so much worse! Maybe we enjoy watching distant, imaginary dystopias because they distract us from oncoming, real dystopias.

But let’s look at those actual potential dystopias for a moment and think about what we need to do to avert them.

I’d suggest the big four U.S. tech giants — Amazon, Facebook, Apple, Google — each have a distinct possible dystopia associated with them. If we don’t turn around our current reality, we will likely get all four — after all, for all the antagonistic rhetoric among the giants, they are rather co-dependent. Let’s look at what we might have, ahem, look forward to — unless we demand the tech giants deliver on the utopia they purportedly set out to achieve when their respective founders raised their rounds of millions. I would argue not only that we can, but that we must hold them accountable.

“Mad Max,” or, slowly then all at once: starring Apple

“‘How did you go bankrupt?’ Bill asked. ‘Two ways,’ Mike said. ‘Gradually and then suddenly.’”

—Ernest Hemingway, The Sun Also Rises.

When you think of Mad Max, you probably think of an irradiated, post-apocalyptic desert hellscape. You’re also not thinking of Mad Max.

In the original 1979 film, the apocalypse hasn’t quite yet happened. There’s been a substantial social breakdown, but things are getting worse in slow motion. There are still functioning towns. Our protagonist, Max, is a working-class cop; and while there’s reason to believe a big crash is coming, or has even begun, society is still hanging on. (It’s only in the sequels that we’re well into the post-apocalyptic landscape people are thinking of when they say “Mad Max.”)

A relatively subtle dystopia, where things gradually decline in the background, is also a good day-to-day description of a society overrun by algorithms, even without the attention-grabbing mega-scandals of a Cambridge Analytica or massive data breach. A kind of dystopia “light” — and Apple is its poster child.

After all, Apple has a genuinely better track record than some of the other tech giants on a few key privacy issues. But it’s also genuinely aware of the value of promulgating that vision of itself — and that can lead Apple users into danger.

In January, Apple purchased a multistory billboard outside the Consumer Electronics Show in Las Vegas, with this message: “What happens on your iPhone, stays on your iPhone.” Sounds great — but it’s deeply misleading, and as journalist Mark Wilson noted, Apple’s mismatch between rhetoric and behavior fuels the nightmare that is our current data security crisis:

“[iPhone] contents are encrypted by default […] But that doesn’t stop the 2 million or so apps in the App Store from spying on iPhone users and selling details of their private lives. “Tens of millions of people have data taken from them — and they don’t have the slightest clue,” says [the] founder of [the] cybersecurity firm Guardian […] The Wall Street Journal studied 70 iOS apps […] and found several that were delivering deeply private information, including heart rate and fertility data, to Facebook.” [Emphasis mine.]

A tech giant that is claiming it’s the path to salvation, while effectively creating a trap for those who believe it, sounds ironically familiar given Apple’s famous evocation of Big Brother.

After all, when people talk about habit-forming technology in terms so terrifying they’ve convinced Silicon Valley executives to limit their children’s access to their own products, let’s be real: They’re talking about iPhones.

When the academic psychologist Jean Twenge talks about a possible teenage mental health epidemic fueled by social media, we know what’s at the heart of it: She’s talking about iPhones.

All those aforementioned horror stories, and a huge slice of those algorithms you’ve heard so much about, are likely first reaching you on smartphones that, with U.S. market share above 50%, are largely, you guessed it, iPhones. (And none of these stories even mention the Apple workers overseas, at facilities like Foxconn, who build our iPhones and who really are living in a kind of explicit dystopia.)

What happens on your iPhone almost certainly doesn’t stay on your iPhone. But who created that surveillance capitalism running it all in the first place?

Enter Google.

“Black Mirror”: “Nosedive,” or, welcome to surveillance capitalism: starring Google

“We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”

—Google’s then-CEO Eric Schmidt, in a 2011 interview.

You’ve probably heard it before: “If you’re not paying, you’re the product.” This is usually said in reference to ostensibly “free” services like Facebook or Gmail. It’s a creepy thought. And according to Shoshana Zuboff, professor emerita at Harvard and economic analyst of what she’s termed “surveillance capitalism,” the selling of your personal information undermines autonomy. It’s worse than you being the product: “You are not the product. You are the abandoned carcass.”

Google, according to Zuboff, is the original inventor of surveillance capitalism. In its early “Don’t Be Evil” days, the idea of accessing people’s private Google searches and selling them was considered unthinkable. Then Google realized it could use search data for targeting purposes — and it never stopped creating opportunities to surveil its users:

“Google’s new methods were prized for their ability to find data that users had opted to keep private and to infer extensive personal information that users did not provide. These operations were designed to bypass user awareness. […] In other words, from the very start Google’s breakthrough depended upon a one-way mirror: surveillance.”

Twenty years later, surveillance capitalism has become so ubiquitous that it’s hard to live in Western society without being surveilled constantly by private actors.

As far as I know, no mass popular culture has really yet captured this reality, but one small metaphor that kind of hits on its effects is a Black Mirror episode called “Nosedive.”

In “Nosedive,” everyday people’s lived experience is very clearly the picked-apart carcass of an entire economic and social order: a kind of surveillance-driven social credit score affects every aspect of your daily life, from customer service to government resources to friendships, all based on your app usage and, most creepily, on how other people rate you in the app.

If surveillance capitalism has been the engine powering our economy in the background for nearly two decades, it’s now having its coming-out party. Increasingly, Google isn’t just surveilling us in private. With its “designing smart cities” initiatives, the company will literally be making city management decisions instead of citizens: Sidewalk Labs, a Google sister company, plans to develop “the most innovative district in the entire world” in the Quayside neighborhood of Toronto, and Google itself plans to siphon every bit of data about how Quayside residents live and breathe and move, via ubiquitous monitoring sensors that will likely inform — for a fee, naturally — how other cities develop.


Much like Apple, Google takes pains to present itself as a conscientious corporate citizen. It might be paternalistic, or antidemocratic — but it has learned that being seen as responsive to its workers and the broader public is important to its brand, largely thanks to the courageous and persistent efforts of those workers and of consumer advocates in civil society.

Not so much with Amazon.

“Elysium,” or, dystopia for some, Prime Day for others: starring Amazon

“[The New York Times] claims that our intentional approach is to create a soulless, dystopian workplace where no fun is had and no laughter heard. Again, I don’t recognize this Amazon and I very much hope you don’t either.” —Jeff Bezos, August 17, 2015 letter to staff after the New York Times investigation into working conditions at the company.

In 2015, Jeff Bezos felt the need to set the record straight: The New York Times was wrong about Amazon. Working there did not feel like a dystopia.

The years since have only validated the New York Times story, which focused on life for coders and executives at Amazon. Notably, when the Times and other investigative journalists have probed life for the far more numerous warehouse workers employed by Amazon, Bezos has largely stayed silent.

In fact, the further down the corporate ladder you get at Amazon, the more likely it seems that Jeff Bezos will stay quiet on any controversy. Just this month, in a report published almost exactly four years after Bezos’ “Amazon is not a dystopia” declaration, the New York Times has uncovered almost a dozen previously unreported deaths allegedly caused by Amazon’s decentralized delivery network. Rather than defend itself out loud, Amazon has kept quiet while repeating the same argument in the courts: Those delivery people aren’t Amazon workers at all, and thus Amazon is not liable.

Amazon, like every major tech giant, has a key role in the dystopia of surveillance capitalism — the monopoly-like market share of Amazon Web Services, and Amazon’s involvement in increasingly ubiquitous facial recognition software, represent their own deeply dystopian trends. But the most visible dystopia Amazon creates, for all to see, is dystopia in the workplace.

In many ways, Amazon is the single company that best explains the appeal of an Andrew Yang figure to a certain slice of economically alienated young voters. When speaking near Amazon’s HQ in Seattle, Yang explicitly talked about the surveillance of Amazon workers, and questioned how reliable those jobs are in any case:

“All the Amazon employees [here] are like, ‘Oh shit, is Jeff watching me right now?’… [Amazon will] open up a fulfillment warehouse that employs, let’s call it 20,000 people. How many retail workers worked at the malls that went out of business because of Amazon? [The] greatest thing would be if Jeff Bezos just stood up one day and said, ‘Hey, the truth is we are one of the primary organizations automating away millions of American jobs.’ […] I have friends who work at Amazon and they say point-blank that ‘we are told we are going to be trying to get rid of our own jobs.’”

You can flat-out disagree with Yang’s proposed solutions, but a lot of his appeal stems from the fact that he’s diagnosing a problem that broad swaths of people don’t feel is being talked about. Yang validates his supporters’ concerns that they are, in fact, living in a dystopia of the corporate overlord variety.

In the movie Elysium, most work is done in warehouses, under constant surveillance, with workers building the very automation systems that surveil and punish them. The movie takes place in a company-town-like setting, with a rigid class structure and no such thing as social mobility. Meanwhile, the ruling class of Elysium lives in space, having left everyone else behind to work on Earth, a planet now fully ravaged by climate change.

That might sound particularly far-fetched, but given Bezos’ explicit intention to colonize space because “we are in the process of destroying this planet,” it suddenly doesn’t feel so off the mark. And in an era when governors and mayors openly genuflect to Amazon, preemptively giving up vast swaths of democratic power for the mere possibility that Amazon might host an office building there, it’s hard not to feel like we’re already in an Elysium-flavored dystopia.

Amazon has its dystopia picked out, flavor and all. But what happens when the biggest social network in the world can’t decide which dystopia it wants to be when it grows up?

Pick a dystopia — any dystopia!: starring Facebook

“Understanding who you serve is always a very important problem, and it only gets harder the more people that you serve.”

—Mark Zuckerberg, 2014 interview with the New York Times.

Ready Player One is one of the more popular recent dystopian novels.

The bleak future it depicts is relatively straightforward: In the face of economic and ecological collapse, the vast majority of human interaction and commercial activity happens in a shared virtual reality space called the OASIS.

In the OASIS, the downtrodden masses compete in enormous multiplayer video games, hoping to win enough prizes and gain sufficient corporate sponsorship to scrape out a decent existence. Imagine a version of The Matrix in which people choose to constantly log into unreality because actual reality has gotten so unbearably terrible, electing to let the real world waste away. Horrific.

Ready Player One is also the book that Oculus founder and former Facebook employee Palmer Luckey used to give new hires working on virtual reality, to get them “excited” about the “potential” of their work.

Sound beyond parody? In so many ways, Facebook is unique among the tech giants: It’s not hiding the specter of dystopia. It’s amplifying dystopia.

It’s hard to pick a popular dystopia Facebook isn’t invested in.

Surveillance capitalism? Google invented it, but Facebook has taken it to a whole new level with its social and emotional contagion experiments and relentless tracking of even nonusers.

1984? Sure, Facebook says, quietly patenting technology that lets your phone record you without warning.

Brave New World? Lest we forget, Facebook literally experimented with making depression contagious in 2014.

28 Days Later, or any of the various other mass-violence-as-disease horror movies like The Happening? Facebook has been used to spread mass genocidal panics far more terrifying than any apocalyptic Hollywood film.

What about the seemingly way-out-there dystopias — something like THX 1138, or a particularly gnarly Black Mirror episode where a brain can have its thoughts directly read, or even electronically implanted? It won’t comfort you to know that Facebook just acquired CTRL-Labs, which is developing a wearable brain-computer interface, raising questions about literal thought rewriting, brain hacking, and psychological “discontinuity.”

Roger McNamee, an early Zuckerberg advisor and arguably Facebook’s most important early investor, has become blunt about it: Facebook has become a dystopia. It’s up to the rest of us to catch up.

We spent the 2010s on dystopia—let’s spend the 2020s on utopia instead

“Plan for the worst, hope for the best, and maybe wind up somewhere in the middle.” —Bright Eyes, “Loose Leaves”

People generally seem to think dystopias are possible, but utopias are not. No one ridicules you for conceiving of a dystopia.

I think part of that is because it gives us an easy out. Dystopias paralyze us. They overwhelm. They make us feel small and powerless. Envisioning dystopia is like getting married anticipating the divorce: all we can do is make sure it’s amicable.

Is there room for a utopian counterweight? There’s not only room, there’s an urgent need if we want to look forward (as opposed to despondently) to the 22nd century. We cannot avert or undo dystopias without believing in their counterparts.

But we need to make the utopian alternative feel real, accessible, and achievable. We need to be rooting not for the lesser of two evils, but for something actually good.

Dystopias — real, about-to-unfold dystopias — have been averted before. The threat of nuclear apocalypse during the Cold War. The shrinking hole in the ozone layer (which is both distinct from, and has lessons to teach us about, the climate crisis). We didn’t land in utopia, but it was only by hitching our wagons to a utopian vision that we averted the worst.

In 2017, cultural historian Jill Lepore penned a kind of goodbye letter to dystopian fiction, calling for a renewal of utopian imagination. “Dystopia,” she lamented, “used to be a fiction of resistance; it’s become a fiction of submission.” Dystopian narratives once served as stark warnings of what might be in store for us if we do nothing, spurring us to devise a brighter future. Today, dystopian fiction is so prevalent, and comes in so many unsavory flavors, that our civic imaginations are understandably confined to identifying the one we deem most likely to happen, and to coming to terms with it.

But we don’t have to.

A new decade is on the way. Let’s spend the 2020s exercising our utopian imaginations — the muscles we use to envision dystopia are now all too well-developed, and a body that exercises only one set of muscles quickly grows off-balance.

Dystopias disempower. We are tiny, inconsequential — how could we do anything about them? Utopias, on the other hand, are rhetorical devices calling upon us to build. They invite our participation. Because a utopia where we don’t matter is a contradiction in terms.

Let’s envision a world where those creating algorithms are thinking not only about their reach, but also about their impact. A world in which we are not the carcass left behind by surveillance capitalism. A world in which calling for ethical norms and standards is in itself a utopian act.

Let’s spend the next decade fighting for what we actually want: A world in which the powerful few are held to a higher standard; an industry in which ethics aren’t an afterthought, and the phrase “unintended consequences” doesn’t absolve actors from the fallout of their very deliberate acts.

Let’s actualize the utopia which, ironically enough, the tech giants themselves so enthusiastically promised us when they set out to change the world.

Let’s spend this next decade asking for what we actually want.