Writing Nameless Things: An Interview with Ursula K. Le Guin


(Editor’s note: in light of the recent passing of Ursula K. Le Guin, we should appreciate even more the wisdom and insight communicated through her novels, as well as in this article from last November, one of Le Guin’s last published in-depth interviews.)

David Streitfeld interviews Ursula K. Le Guin

Source: Los Angeles Review of Books

GREAT HONORS ARE flowing to Ursula K. Le Guin. Last year, the Library of America began a publishing program devoted to her work, a rare achievement for a living writer. The second and third volumes, containing much of her classic early SF, are now out. Her collected shorter fiction has been published in two volumes by Saga Press. In 2014, she received the National Book Foundation Medal for Distinguished Contribution to American Letters. This year, once again, she was on the betting list for the Nobel Prize in Literature. Le Guin lives quietly in Portland, Oregon, with her husband of many decades, Charles.

¤

DAVID STREITFELD: How’s your health?

URSULA K. LE GUIN: Okay.

How’s your mood?

Okay. [Laughs.] One slows down increasingly in one’s upper 80s, believe me. I’ve dropped most of my public obligations. I say, “No, thank you,” a lot. It’s too bad. I love reading at Powell’s Books. I’m a ham. Their audiences are great. But it is just physically impossible.

Much of the work in these two new Library of America volumes was done in a short span of time — a few years during the late 1960s and early ’70s. You were on fire, writing The Left Hand of Darkness (1969) and The Dispossessed (1974) practically back to back. That was a period when you also wrote the first Earthsea novels.

I worked just as hard before that and just as hard after. The work of that period isn’t all my significant work. There’s pretty good stuff after.

You were also raising three young children.

I had a child under age five for seven or eight years. Number three came along slightly unexpectedly, about the time number two was beginning to go off to kindergarten. I could not possibly have done it if Charles had not been a full-time parent. Over and over I’ve said it — two people can do three jobs but one person cannot do two. Well, sometimes they do, but it’s a killer.

How did you pace yourself?

I was very careful in those years not to work to a deadline. I never promised a book — ever. I left myself what leeway I could in what I did when. My actual time to work on my writing was going to be limited to what was left after the needs of my kids. I don’t want to be pollyannish, but the fact is both jobs were very rewarding. They were immediately rewarding. I enjoy writing and I enjoyed the kids.

I remember you once said that having kids doesn’t make the writing easier but it makes it better. Still, it took a lot of juggling.

When I discovered I was pregnant the third time, I went through a bad patch. How are we going to do this whole thing all over again? Pregnancy can be pretty devouring. But it was an easy pregnancy, a great baby, and we were really glad we did. There was all this vitality in the house.

It was clearly a time of great fecundity in all sorts of ways.

Apparently I could do it on both fronts. I was healthy and the kids were healthy. That makes such a difference. But it all didn’t seem remarkable. I was of a generation when women were expected to have kids.

When did you write?

After the kids were put to bed, or left in their bed with a book. My kids went to bed much earlier than most kids do now. I was appalled to learn my grandchildren were staying up to 11:00. That would have driven me up the wall. We kept old-fashioned hours — 8:00 p.m., 9:00 p.m. I would go up to the attic, and work 9:00 to midnight. If I was tired, it was a little tough. But I was kind of gung-ho to do it. I like to write. It’s exciting, something I’m really happy doing.

Does being in the Library of America make you feel you’ve joined the immortals? You’re now up there with all the greats — Twain, Poe, Wharton.

I grew up with a set of Mark Twain in the house. Collections of authors’ work were not such a big deal. And my agent was hesitant about the contract, since the pay upfront was less than she’s used to settling for. She’s a good agent. Her job is to make money. What I did not realize is that being published in the Library of America is a real and enduring honor. Especially while you’re still alive. Philip Roth and I make a peculiar but exclusive club.

The first book of yours in the Library of America came out last year. It was called “The Complete Orsinia,” and had some of your less famous work.

I bullied Library of America into doing it first. I didn’t realize I was bullying them, but I was. They were very good-natured about it.

Malafrena (1979), the novel that is the volume’s centerpiece, takes place during a failed revolution in the early 19th century in an imaginary European country somewhere near Hungary.

It’s one of my works that is neither fantasy nor science fiction. So what do you call it? It’s not alternative history because it’s fully connected to real European history. There is no name for it. That’s my problem, I do nameless things.

It’s been a long journey for some of these books. Fifty years ago, they were originally published as SF paperbacks.

I’m not remotely ashamed of their origins, but I am not captivated by them either the way some people are. Some people are fascinated by the pulps — there’s something remote and glamorous in the whole idea of a 25-cent book. I am in the middle of rereading Michael Chabon’s The Amazing Adventures of Kavalier & Clay. Michael is enthralled by the whole comic book thing. That is perfectly understandable and I enjoy his fascination, but my mind doesn’t work that way. I am into content. Presentation is something that just has to be there.

Fifty years ago, science fiction and fantasy were marginal genres. They weren’t respectable. In 1974, you gave a talk entitled “Why Are Americans Afraid of Dragons?”

There’s a tendency in American culture to leave the imagination to kids — they’ll grow out of it and grow up to be good businessmen or politicians.

Hasn’t that changed? We seem inundated with fantasy now.

But much of it is derivative; you can mash a lot of orcs and unicorns and intergalactic wars together without actually imagining anything. One of the troubles with our culture is we do not respect and train the imagination. It needs exercise. It needs practice. You can’t tell a story unless you’ve listened to a lot of stories and then learned how to do it.

You’ve been concerned recently about some of the downsides of the imagination.

I feel fine as far as literature is concerned. The place where the unbridled imagination worries me is when it becomes part of nonfiction — where you’re allowed to lie in a memoir. You’re encouraged to follow the “truth” instead of the facts. I’m not a curmudgeon, I’m just a scientist’s daughter. I really like facts. I have a huge respect for them. But there’s an indifference toward factuality that is encouraged in a lot of nonfiction. It worries me for instance when writers put living people into a novel, or even rather recently dead people. There’s a kind of insolence, a kind of colonization of that person by the author. Is that right? Is that fair? And then, when we get these biographers where they are sort of making it up as they go along, I don’t want to read that. I find myself asking, what is it, a novel, a biography?

How do you feel about ebooks these days?

When I started writing about ebooks and print books, a lot of people were shouting, “The book is dead, the book is dead, it’s all going to be electronic.” I got tired of it. What I was trying to say is that now we have two ways of publishing, and we’re going to use them both. We had one, now we have two. How can that be bad? Creatures live longer if they can do things in different ways. I think I’ve been fairly consistent on that. But the tone of my voice might have changed. I was going against a trendy notion. There’s this joke I heard. You know what Gutenberg’s second book was, after the Bible? It was a book about how the book was dead.

You’re now a member of the American Academy of Arts and Letters.

I almost wasn’t. It’s so embarrassing. Either the letter got lost in the mail or I tossed it thinking it was junk, but in either case I never got the invitation. They waited and waited and waited and finally got in touch with my agent, who immediately got in touch with me. I wrote them and said, “I wasn’t pulling a Dylan.” But they must have wondered.

It’s another honor, a significant one. What does it mean to you?

To paraphrase Mary Godwin’s line about the vindication of the rights of women, it’s a vindication of the rights of science fiction. To have my career recognized on this level makes it a lot harder for the diehards and holdouts to say, “Genre fiction isn’t literature.”

Do they still say that?

You’d be surprised.

You once clarified your political stance by saying, “I am not a progressive. I think the idea of progress an invidious and generally harmful mistake. I am interested in change, which is an entirely different matter.” Why is the idea of progress harmful? Surely in the great sweep of time, there has been progress on social issues because people have an idea or even an ideal of it.

I didn’t say progress was harmful, I said the idea of progress was generally harmful. I was thinking more as a Darwinist than in terms of social issues. I was thinking about the idea of evolution as an ascending staircase with amoebas at the bottom and Man at the top or near the top, maybe with some angels above him. And I was thinking of the idea of history as ascending infallibly to the better — which, it seems to me, is how the 19th and 20th centuries tended to use the word “progress.” We leave behind us the Dark Ages of ignorance, the primitive ages without steam engines, without airplanes/nuclear power/computers/whatever is next. Progress discards the old, leads ever to the new, the better, the faster, the bigger, et cetera. You see my problem with it? It just isn’t true.

How does evolution fit in?

Evolution is a wonderful process of change — of differentiation and diversification and complication, endless and splendid; but I can’t say that any one of its products is “better than” or “superior to” any other in general terms. Only in specific ways. Rats are more intelligent and more adaptable than koala bears, and those two superiorities will keep rats going while the koalas die out. On the other hand, if there were nothing around to eat but eucalyptus, the rats would be gone in no time and the koalas would thrive. Humans can do all kinds of stuff bacteria can’t do, but if I had to bet on really long-term global survival, my money would go to the bacteria.

In your 2014 acceptance speech for the National Book Foundation medal, you said, “Hard times are coming.”

I certainly didn’t foresee Donald Trump. I was talking about longer-term hard times than that. For 30 years I’ve been saying, we are making the world uninhabitable, for God’s sake. For 30 years!

And then, right after the election, you came up with a new model of resistance that elevates not the warrior but water: “The flow of a river is a model for me of courage that can keep me going — carry me through the bad places, the bad times. A courage that is compliant by choice and uses force only when compelled.”

It’s rooted firmly in Lao Tzu and the Tao Te Ching. He goes very deep in me, back to my teenage years.

Is this a notion that comes out of an earlier work?

Most of my real work was fictional, where you don’t express things like that directly. You build it in. Like in my novel The Lathe of Heaven (1971). George, the hero, is kind of watery. He goes with the flow, as they used to say. I was dubious about publishing that piece about water as a blog entry. It was so direct, and sounded like I was trying to be some sort of guru.

You are direct.

I like to hide it in fiction when I can. But I hardly ever write fiction anymore.

For a year or two, you thought you never would again.

But then I suddenly went and wrote a little story called “Calx” for Catamaran, and then in September a long story called “Pity and Shame.” I should have remembered what all good SF writers know: prediction is not our game.

Are you getting weary of being honored and lionized?

Always remember, you’re talking to a woman. And for a woman, any literary award, honors, notice of any sort has been an uphill climb. And if she insists upon flouting convention and writing SF and fantasy and indescribable stuff, it’s even harder.

And now?

I don’t think the rewards have been overdone. I think I’ve earned them. They are welcome and useful to me because they shore up my self-esteem, which wobbles as you get old and can’t do what you used to do.

Saturday Matinee: Natural City

“Natural City” (2003) is a dystopian science fiction film from South Korean director Min Byeong-cheon. The plot focuses on two cops, R and Noma, who (not unlike the blade runners of Blade Runner) must hunt down renegade cyborgs. The rogue cyborgs are designed for roles ranging from military commandos to companion “dolls” and have a limited three-year lifespan, though black market technology enables the transfer of a cyborg’s mind into the brain of a human host. This breakthrough drives R to find Cyon, an orphaned prostitute who could potentially host the mind of Ria, a doll he’s fallen deeply in love with and who has only a few days left before expiration. Eventually, R must make a difficult decision that tests his divided personal and professional loyalties.


Saturday Matinee: Land of the Blind

“Land of the Blind” (2006) is a British-American political satire directed by Robert Edwards and starring Ralph Fiennes, Donald Sutherland, Tom Hollander and Lara Flynn Boyle. The story is set in an unnamed place and time where an idealistic soldier named Joe strikes up an illicit friendship with a political prisoner who involves him in a coup d’état. But in the post-revolutionary world, Joe and his former friend fall into a bitter feud that escalates until Joe’s co-conspirators conclude they must erase him from history.

The Pentagon’s New Wonder Weapons for World Dominion

Or Buck Rogers in the 21st Century

By Alfred McCoy

Source: The Unz Review

[This piece has been adapted and expanded from Alfred W. McCoy’s new book, In the Shadows of the American Century: The Rise and Decline of U.S. Global Power.]

Not quite a century ago, on January 7, 1929, newspaper readers across America were captivated by a brand-new comic strip, Buck Rogers in the 25th Century. It offered the country its first images of space-age death rays, atomic explosions, and inter-planetary travel.

“I was twenty years old,” World War I veteran Anthony “Buck” Rogers told readers in the very first strip, “surveying the lower levels of an abandoned mine near Pittsburgh… when suddenly… gas knocked me out. But I didn’t die. The peculiar gas… preserved me in suspended animation. Finally, another shifting of strata admitted fresh air and I revived.”

Staggering out of that mine, he finds himself in the 25th century surrounded by flying warriors shooting ray guns at each other. A Mongol spaceship overhead promptly spots him on its “television view plate” and fires its “disintegrator ray” at him. He’s saved from certain death by a flying woman warrior named Wilma who explains to him how this all came to be.

“Many years ago,” she says, “the Mongol Reds from the Gobi Desert conquered Asia from their great airships held aloft by gravity Repellor Rays. They destroyed Europe, then turned toward peace-loving America.” As their disintegrator beams boiled the oceans, annihilated the U.S. Navy, and demolished Washington, D.C. in just three hours, “government ceased to exist, and mobs, reduced to savagery, fought their way out of the cities to scatter and hide in the country. It was the death of a nation.” While the Mongols rebuilt 15 cities as centers of “super scientific magnificence” under their evil emperor, Americans led “hunted lives in the forests” until their “undying flame of freedom” led them to recapture “lost science” and “once more strike for freedom.”

After a year of such cartoons filled with the worst of early-twentieth-century Asian stereotypes, just as Wilma is clinging to the airship of the Mongol Viceroy as it speeds across the Pacific, a mysterious metallic orb appears high in the sky and fires death rays, sending the Mongol ship “hissing into the sea.” With her anti-gravity “inertron” belt, the intrepid Wilma dives safely into the waves only to have a giant metal arm shoot out from the mysterious orb and pull her on board to reveal — “Horrors! What strange beings!” — Martians!

With that strip, Buck Rogers in the 25th Century moved from Earth-bound combat against racialized Asians into space wars against monsters from other planets that, over the next 70 years, would take the strip into comic books, radio broadcasts, feature films, television serials, video games, and the country’s collective consciousness. It would offer defining visions of space warfare for generations of Americans.

Back in the 21st Century

Now imagine us back in the 21st century. It’s 2030 and an American “triple canopy” of pervasive surveillance systems and armed drones already fills the heavens from the lower stratosphere to the exo-atmosphere. It can deliver its weaponry anywhere on the planet with staggering speed, knock out enemy satellite communications at a moment’s notice, or follow individuals biometrically for great distances. It’s a wonder of the modern age. Along with the country’s advanced cyberwar capacity, it’s also the most sophisticated military information system ever created and an insurance policy for global dominion deep into the twenty-first century.

That is, in fact, the future as the Pentagon imagines it and it’s actually under development, even though most Americans know little or nothing about it. They are still operating in another age, as was Mitt Romney during the 2012 presidential debates when he complained that “our Navy is smaller now than at any time since 1917.”

With words of withering mockery, President Obama shot back: “Well, Governor, we also have fewer horses and bayonets, because the nature of our military’s changed… the question is not a game of Battleship, where we’re counting ships. It’s what are our capabilities.” Obama then offered just a hint of what those capabilities might be: “We need to be thinking about cyber security. We need to be talking about space.”

Indeed, working in secrecy, the Obama administration was presiding over a revolution in defense planning, moving the nation far beyond bayonets and battleships to cyberwarfare and the future full-scale weaponization of space. From stratosphere to exosphere, the Pentagon is now producing an armada of fantastical new aerospace weapons worthy of Buck Rogers.

In 2009, building on advances in digital surveillance under the Bush administration, Obama launched the U.S. Cyber Command. Its headquarters were set up inside the National Security Agency (NSA) at Fort Meade, Maryland, and a cyberwar center staffed by 7,000 Air Force employees was established at Lackland Air Base in Texas. Two years later, the Pentagon moved beyond conventional combat on air, land, or sea to declare cyberspace both an offensive and defensive “operational domain.” In August 2017, despite his wide-ranging attempt to purge the government of anything connected to Barack Obama’s “legacy,” President Trump implemented his predecessor’s long-delayed plan to separate that cyber command from the NSA in a bid to “strengthen our cyberspace operations.”

And what is all this technology being prepared for? In study after study, the intelligence community, the Pentagon, and related think tanks have been unanimous in identifying the main threat to future U.S. global hegemony as a rival power with an expanding economy, a strengthening military, and global ambitions: China, the home of those denizens of the Gobi Desert who would, in that old Buck Rogers fable, destroy Washington four centuries from now. Given that America’s economic preeminence is fading fast, breakthroughs in “information warfare” might indeed prove Washington’s best bet for extending its global hegemony further into this century — but don’t count on it, given the history of techno-weaponry in past wars.

Techno-Triumph in Vietnam

Ever since the Pentagon with its 17 miles of corridors was completed in 1943, that massive bureaucratic maze has presided over a creative fusion of science and industry that President Dwight Eisenhower would dub “the military-industrial complex” in his farewell address to the nation in 1961. “We can no longer risk emergency improvisation of national defense,” he told the American people. “We have been compelled to create a permanent armaments industry of vast proportions” sustained by a “technological revolution” that is “complex and costly.” As part of his own contribution to that complex, Eisenhower had overseen the creation of both the National Aeronautics and Space Administration, or NASA, and a “high-risk, high-gain” research unit called the Advanced Research Projects Agency, or ARPA, that later added the word “Defense” to its name and became DARPA.

For 70 years, this close alliance between the Pentagon and major defense contractors has produced an unbroken succession of “wonder weapons” that at least theoretically gave it a critical edge in all major military domains. Even when defeated or fought to a draw, as in Vietnam, Iraq, and Afghanistan, the Pentagon’s research matrix has demonstrated a recurring resilience that could turn disaster into further technological advance.

The Vietnam War, for example, was a thoroughgoing tactical failure, yet it would also prove a technological triumph for the military-industrial complex. Although most Americans remember only the Army’s soul-destroying ground combat in the villages of South Vietnam, the Air Force fought the biggest air war in military history there and, while it too failed dismally and destructively, it turned out to be a crucial testing ground for a revolution in robotic weaponry.

To stop truck convoys that the North Vietnamese were sending through southern Laos into South Vietnam, the Pentagon’s techno-wizards combined a network of sensors, computers, and aircraft in a coordinated electronic bombing campaign that, from 1968 to 1973, dropped more than a million tons of munitions — equal to the total tonnage for the whole Korean War — in that limited area. At a cost of $800 million a year, Operation Igloo White laced that narrow mountain corridor with 20,000 acoustic, seismic, and thermal sensors that sent signals to four EC-121 communications aircraft circling ceaselessly overhead.

At a U.S. air base just across the Mekong River in Thailand, Task Force Alpha deployed two powerful IBM 360/65 mainframe computers, equipped with history’s first visual display monitors, to translate all those sensor signals into “an illuminated line of light” and so launch jet fighters over the Ho Chi Minh Trail where computers discharged laser-guided bombs automatically. Bristling with antennae and filled with the latest computers, its massive concrete bunker seemed, at the time, a futuristic marvel to a visiting Pentagon official who spoke rapturously about “being swept up in the beauty and majesty of the Task Force Alpha temple.”

However, after more than 100,000 North Vietnamese troops with tanks, trucks, and artillery somehow moved through that sensor field undetected for a massive offensive in 1972, the Air Force had to admit that its $6 billion “electronic battlefield” was an unqualified failure. Yet that same bombing campaign would prove to be the first crude step toward a future electronic battlefield for unmanned robotic warfare.

In the pressure cooker of history’s largest air war, the Air Force also transformed an old weapon, the “Firebee” target drone, into a new technology that would rise to significance three decades later. By 1972, the Air Force could send an “SC/TV” drone, equipped with a camera in its nose, up to 2,400 miles across communist China or North Vietnam while controlling it via a low-resolution television image. The Air Force also made aviation history by test firing the first missile from one of those drones.

The air war in Vietnam was also an impetus for the development of the Pentagon’s global telecommunications satellite system, another important first. After the Initial Defense Satellite Communications System launched seven orbital satellites in 1966, ground terminals in Vietnam started transmitting high-resolution aerial surveillance photos to Washington — something NASA called a “revolutionary development.” Those images proved so useful that the Pentagon quickly launched an additional 21 satellites and soon had the first system that could communicate from anywhere on the globe. Today, according to an Air Force website, the third phase of that system provides secure command, control, and communications for “the Army’s ground mobile forces, the Air Force’s airborne terminals, Navy ships at sea, the White House Communications Agency, the State Department, and special users” like the CIA and NSA.

At great cost, the Vietnam War marked a watershed in Washington’s global information architecture. Turning defeat into innovation, the Air Force had developed the key components — satellite communications, remote sensing, computer-triggered bombing, and unmanned aircraft — that would merge 40 years later into a new system of robotic warfare.

The War on Terror

Facing another set of defeats in Afghanistan and Iraq, the twenty-first-century Pentagon again accelerated the development of new military technologies. After six years of failing counterinsurgency campaigns in both countries, the Pentagon discovered the power of biometric identification and electronic surveillance to help pacify sprawling urban areas. And when President Obama later conducted his troop “surge” in Afghanistan, that country became a frontier for testing and perfecting drone warfare.

Launched as an experimental aircraft in 1994, the Predator drone was deployed in the Balkans that very year for photo-reconnaissance. In 2000, it was adapted for real-time surveillance under the CIA’s Operation Afghan Eyes. It would be armed with the tank-killing Hellfire missile for the agency’s first lethal strike in Kandahar, Afghanistan, in October 2001. Seven years later, the Air Force introduced the larger MQ-9 “Reaper” drone with a flying range of 1,150 miles when fully loaded with Hellfire missiles and GBU-30 bombs, allowing it to strike targets almost anywhere in Europe, Africa, or Asia. To fulfill its expanding mission as Washington’s global assassin, the Air Force plans to have 346 Reapers in service by 2021, including 80 for the CIA.

Between 2004 and 2010, total flying time for all unmanned aerial vehicles rose sharply from just 71 hours to 250,000 hours. By 2011, there were already 7,000 drones in a growing U.S. armada of unmanned aircraft. So central had they become to its military power that the Pentagon was planning to spend $40 billion to expand their numbers by 35% over the following decade. To service all this growth, the Air Force was training 350 drone pilots, more than all its bomber and fighter pilots combined.

Miniature or monstrous, hand-held or runway-launched, drones were becoming so commonplace and so critical for so many military missions that they emerged from the war on terror as one of America’s wonder weapons for preserving its global power. Yet the striking innovations in drone warfare are, in the long run, likely to be overshadowed by stunning aerospace advances in the stratosphere and exosphere.

The Pentagon’s Triple Canopy

As in Vietnam, despite bitter reverses on the ground in Iraq and Afghanistan, Washington’s recent wars have been catalysts for the fusion of aerospace, cyberspace, and artificial intelligence into a new military regime of robotic warfare.

To effect this technological transformation, starting in 2009 the Pentagon planned to spend $55 billion annually to develop robotics for a data-dense interface of space, cyberspace, and terrestrial battle space. Through an annual allocation for new technologies reaching $18 billion in 2016, the Pentagon had, according to the New York Times, “put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power,” exemplified by future drones that will be capable of identifying and eliminating enemy targets without recourse to human overseers. By 2025, the United States will likely deploy advanced aerospace and cyberwarfare to envelop the planet in a robotic matrix theoretically capable of blinding entire armies or atomizing an individual insurgent.

During 15 years of nearly limitless military budgets for the war on terror, DARPA has spent billions of dollars trying to develop new weapons systems worthy of Buck Rogers that usually die on the drawing board or end in spectacular crashes. Through this astronomically costly process of trial and error, Pentagon planners seem to have come to the slow realization that established systems, particularly drones and satellites, could in combination create an effective aerospace architecture.

Within a decade, the Pentagon apparently hopes to patrol the entire planet ceaselessly via a triple-canopy aerospace shield that would reach from sky to space and be secured by an armada of drones with lethal missiles and Argus-eyed sensors, monitored through an electronic matrix and controlled by robotic systems. It’s even possible to take a tour of the super-secret realm where future space wars would be fought, should the Pentagon’s dreams become reality, by exploring both DARPA’s websites and those of its various defense contractors.

Drones in the Lower Stratosphere

At the bottom tier of this emerging aerospace shield in the lower stratosphere (about 30,000 to 60,000 feet high), the Pentagon is working with defense contractors to develop high-altitude drones that will replace manned aircraft. To supersede the manned U-2 surveillance aircraft, for instance, the Pentagon has been preparing a projected armada of 99 Global Hawk drones at a mind-boggling cost of $223 million each, seven times the price of the current Reaper model. Its extended 116-foot wingspan (bigger than that of a Boeing 737) is geared to operating at 60,000 feet. Each Global Hawk is equipped with high-resolution cameras, advanced electronic sensors, and efficient engines for a continuous 32-hour flight, which means that it can potentially survey up to 40,000 square miles of the planet’s surface daily. With its enormous bandwidth needed to bounce a torrent of audio-visual data between satellites and ground stations, however, the Global Hawk, like other long-distance drones in America’s armada, may prove vulnerable to a hostile hack attack in some future conflict.

The sophistication, and limitations, of this developing aerospace technology were exposed in December 2011 when an advanced RQ-170 Sentinel drone suddenly landed in Iran, whose officials then released photos of its dart-shaped, 65-foot wingspan meant for flights up to 50,000 feet. Under a highly classified “black” contract, Lockheed Martin had built 20 of these espionage drones at a cost of about $200 million with radar-evading stealth and advanced optics that were meant to provide “surveillance support to forward-deployed combat forces.”

So what was this super-secret drone doing in hostile Iran? By simply jamming its GPS navigation system, whose signals are notoriously susceptible to hacking, Iranian engineers took control of the drone and landed it at an Iranian base at the same elevation as its home field in neighboring Afghanistan. Although Washington first denied the capture, the event sent shock waves down the Pentagon’s endless corridors.

In the aftermath of this debacle, the Defense Department worked with one of its top contractors, Northrop Grumman, to accelerate development of its super-stealth RQ-180 drone with an enormous 130-foot wingspan, an extended range of 1,200 miles, and 24 hours of flying time. Its record cost, $300 million a plane, could be thought of as inaugurating a new era of lavishly expensive war-fighting drones.

Simultaneously, the Navy’s dart-shaped X-47B surveillance and strike drone has proven capable both of in-flight refueling and of carrying up to 4,000 pounds of bombs or missiles. Three years after it passed its most crucial test with a joystick landing on the deck of an aircraft carrier, the USS George H.W. Bush, in July 2013, the Navy announced that this experimental drone would enter service sometime after 2020 as the “MQ-25 Stingray” aircraft.

Dominating the Upper Stratosphere

To dominate the higher altitudes of the upper stratosphere (about 70,000 to 160,000 feet), the Pentagon has pushed its contractors to the technological edge, spending billions of dollars on experimentation with fanciful, futuristic aircraft.

For more than 20 years, DARPA pursued the dream of a globe-girding armada of solar-powered drones that could fly ceaselessly at 90,000 feet and would serve as the equivalent of low-flying satellites, that is, as platforms for surveillance intercepts or signals transmission. With an arching 250-foot wingspan covered with ultra-light solar panels, the “Helios” drone achieved a world-record altitude of 98,000 feet in 2001 before breaking up in a spectacular crash two years later. Nonetheless, DARPA launched the ambitious “Vulture” project in 2008 to build solar-powered aircraft with huge wingspans of 300 to 500 feet capable of ceaseless flight at 90,000 feet for five years at a time. After DARPA abandoned the project as impractical in 2012, Google and Facebook took over the technology with the goal of building future platforms for their customers’ Internet connections.

Since 2003, both DARPA and the Air Force have struggled to shatter the barrier for suborbital speeds by developing the dart-shaped Falcon Hypersonic Cruise Vehicle. Flying at an altitude of 100,000 feet, it was expected to “deliver 12,000 pounds of payload at a distance of 9,000 nautical miles from the continental United States in less than two hours.” Although the first test launches in 2010 and 2011 crashed in midflight, they did briefly reach an amazing 13,000 miles per hour, 22 times the speed of sound.

As often happens, failure produced progress. In the wake of the Falcon’s crashes, DARPA has applied its hypersonics to develop a missile capable of penetrating China’s air-defenses at an altitude of 70,000 feet and a speed of Mach 5 (about 3,300 miles per hour).

Simultaneously, Lockheed’s secret “Skunk Works” experimental unit is using the hypersonic technology to develop the SR-72 unmanned surveillance aircraft as a successor to its SR-71 Blackbird, the world’s fastest manned aircraft. When operational by 2030, the SR-72 is supposed to fly at about 4,500 mph, double the speed of its manned predecessor, with an extreme stealth fuselage making it undetectable as it crosses any continent in an hour at 80,000 feet scooping up electronic intelligence.

Space Wars in the Exosphere

In the exosphere, 200 miles above Earth, the age of space warfare dawned in April 2010 when the Defense Department launched the robotic X-37B spacecraft, just 29 feet long, into orbit for a seven-month mission. By removing pilots and their costly life-support systems, the Air Force’s secretive Rapid Capabilities Office had created a miniaturized, militarized space drone with thrusters to elude missile attacks and a cargo bay for possible air-to-air missiles. By the time the second X-37B prototype landed in June 2012, its flawless 15-month flight had established the viability of “robotically controlled reusable spacecraft.”

In the exosphere where these space drones will someday roam, orbital satellites will be the prime targets in any future world war. The vulnerability of U.S. satellite systems became obvious in 2007 when China used a ground-to-air missile to shoot down one of its own satellites in orbit 500 miles above the Earth. A year later, the Pentagon accomplished the same feat, firing an SM-3 missile from a Navy cruiser to score a direct hit on a U.S. satellite 150 miles high.

Unsuccessful in developing an advanced F-6 satellite, despite spending over $200 million in an attempt to split the module into more resilient microwave-linked components, the Pentagon has opted instead to upgrade its more conventional single-module satellites, such as the Navy’s five interconnected Mobile User Objective Systems (MUOS) satellites. These were launched between 2013 and 2016 into geostationary orbits for communications with aircraft, ships, and motorized infantry.

Reflecting its role as a player in the preparation for future and futuristic wars, the Joint Functional Component Command for Space, established in 2006, operates the Space Surveillance Network. To prevent a high-altitude attack on America, this worldwide system of radar and telescopes in 29 remote locations like Ascension Island and Kwajalein Atoll makes about 400,000 observations daily, monitoring every object in the skies.

The Future of Wonder Weapons

By the mid-2020s, if the military’s dreams are realized, the Pentagon’s triple-canopy shield should be able to atomize a single “terrorist” with a missile strike or, with equal ease, blind an entire army by knocking out all of its ground communications, avionics, and naval navigation. It’s a system that, were it to work as imagined, just might allow the United States a diplomatic veto of global lethality, an equalizer for any further loss of international influence.

But as in Vietnam, where aerospace wonders could not prevent a searing defeat, history offers some harsh lessons when it comes to technology trumping insurgencies, no less the fusion of forces (diplomatic, economic, and military) whose sum is geopolitical power. After all, the Third Reich failed to win World War II even though it had amazingly advanced “wonder weapons,” including the devastating V-2 missile, the unstoppable Me-262 jet fighter, and the ship-killing Hs-293 guided missile.

Washington’s dogged reliance on and faith in military technology to maintain its hegemony will certainly guarantee endless combat operations with uncertain outcomes in the forever war against terrorists along the ragged edge of Asia and Africa and incessant future low-level aggression in space and cyberspace. Someday, it may even lead to armed conflict with rivals China and Russia.

Whether the Pentagon’s robotic weapon systems will offer the U.S. an extended lease on global hegemony or prove a fantasy plucked from the frames of a Buck Rogers comic book, only the future can tell. Whether, in that moment to come, America will play the role of the indomitable Buck Rogers or the Martians he eventually defeated is another question worth asking. One thing is likely, however: that future is coming far more quickly and possibly far more painfully than any of us might imagine.

Saturday Matinee: Soldier/Demon With a Glass Hand

“Soldier” and “Demon With a Glass Hand” (both 1964) are two classic Outer Limits episodes with screenplays by science fiction author Harlan Ellison. Both take place in the shared universe of the Earth-Kyba War, a backdrop Ellison also used for a series of stories collected in the graphic novel anthology “Night and the Enemy” as well as the short story “The Human Operators,” which became an episode of the new Outer Limits. “Soldier” is notable as the screenplay over which Ellison sued the producers of The Terminator for plagiarism. “Demon With a Glass Hand” features an excellent performance from Robert Culp as a man who carries the burden of being humanity’s last chance for survival. Most of the episode’s action takes place in the Bradbury Building (best known for the final scenes of Blade Runner), which adds to its creepy atmosphere.

Initial Thoughts on Blade Runner 2049

Upon hearing early reports of a planned Blade Runner sequel a couple of years ago, I felt both anticipation and dread. I considered the original a singular vision that didn’t necessarily need a sequel, yet I could understand the desire to re-immerse oneself in the compelling world it introduced. Re-experiencing the film through its Director’s Cut and Final Cut versions in subsequent years seemed to me as satisfying as watching sequels, since even the relatively minor changes had a significant impact on its meaning, and the richness of the sound and production design allows for the discovery of new details with every viewing. Also, one’s subjective experience of watching even the same movie can be vastly different depending on one’s age and other circumstances.

One of my earliest cinematic memories was seeing the first Star Wars film as a toddler. At around the same time I remember staying up late with my parents to watch the network television premiere of 2001: A Space Odyssey. Though I was too young to fully comprehend those films’ narratives, the spectacle and sounds definitely left an impression and established a lifelong appreciation for the sci-fi genre and its mind-expanding possibilities.

Flash forward to an evening sometime in early 1982. After viewing a commercial for Blade Runner I instantly knew it was a movie I had to see. In the short trailer there were glimpses of flying cars over vast cityscapes, the guy who played Han Solo in a trench coat, and bizarre humanoid robots in settings as strange yet detailed as those of 2001: A Space Odyssey. My parents, responsible as they were, refused to give in to my incessant demands to see Blade Runner, and that summer I must have been the only kid who reluctantly agreed to see E.T. as a compromise. I probably did enjoy it more than I expected to, but I might have enjoyed it more had I not viewed it as a weak Blade Runner substitute and had I actually paid attention to the entire film.

Back then our family usually saw films at drive-in theaters, and the one we went to that night had two screens: one showing E.T. and the other, to my delight and frustration, showing Blade Runner. Even without sound and at a distorted angle, I was awestruck by the establishing shots of LA in 2019 (which I glanced over to witness just as E.T.’s ship was landing on the screen in front of us), and for the entire duration of the films my eyes switched back and forth between screens. Even without my understanding anything about its plot, Blade Runner made the most fantastical elements of E.T. pale in comparison. Judging from the box-office receipts of its theatrical run, the majority must have thought otherwise, since Blade Runner earned a relatively meager $28 million while E.T. was the breakout hit of the year with nearly $360 million.

Within a few years I’d seen portions of Blade Runner on cable TV at a friend’s house, and I finally saw the complete film after my family got a VCR; it was one of the first videos I rented. The film served as a gateway to many other interests, such as cyberpunk, film noir, and electronic music, but most importantly to an appreciation of the novels of Philip K. Dick. Like a psychedelic drug, they inspired philosophical questioning regarding the nature of reality, consciousness, society and what it means to be human.

This background, which is probably not too dissimilar to other stories of obsessive fandom, outlines how one’s immersion in media is rooted not just in the work itself but in how it resonates with and shapes aspects of one’s identity and personal narrative, as much as other memories do. There’s also a nostalgia factor involved because, like a souvenir or any object with sentimental value, revisiting such media can recapture a sense of the feelings and sensations associated with the initial experience, and sometimes the milieu of the content as well. Nostalgia is a longing for the past, even a past one never directly experienced, one that never was or never will be, and it is often prompted by loneliness and disconnectedness. Because it can sometimes provide comfort and hope, it’s a feeling too often exploited by the marketing industry as well as by media producers such as those behind reboots and sequels. Though Blade Runner 2049 may not have been created solely to cash in on nostalgia for the original, as with most big studio sequels it’s still a factor.

The type of nostalgia evoked by Blade Runner is singular, for it envisions a (near, and soon to be past) future through the lens of the early eighties, combining a pastiche of styles from previous eras. The film also serves as a meditation on the importance of memory and its relation to identity and the human experience. In a sense, being a longtime fan of the film is like having nostalgia for distilled nostalgia. Also unique is the fact that it took 35 years for the sequel to get made, just a couple of years shy of the year in which the original takes place. The long delay is largely due to Blade Runner being so far ahead of its time that it took over a decade for it to be widely regarded as a science fiction masterwork, and an additional decade and a half to develop plans for a sequel. But perhaps now is the ideal time for a follow-up, as aspects of our world become more dystopian and there’s a greater need for nostalgic escape, even through narratives predicting dystopia.

While the future world of the original Blade Runner was definitely grim, it was also oddly alluring due to its depiction of a chaotic globalized culture, exotic yet functional-looking technology, and a hybrid retro/futuristic aesthetic shaped by sources as diverse as punk rock fashion, Heavy Metal magazine, film noir and Futurism, among many others. The imagery of Blade Runner 2049 expands on the original by visualizing how the future (or alternate-reality) LA has evolved over the course of 30 years, as well as the environmentally and socially devastating impact of trying to sustain a technocratic corporate global system for so long.

Blade Runner opens with shots of oil refineries in the city intercut with close-ups of a replicant’s eye. 2049 opens with a close-up of an eye and transitions to an overhead shot of an endless array of solar panels, indicating a post-peak-oil world. Despite the use of cleaner energy, the world of 2049 is far from clean, with the entirety of San Diego depicted as a massive dumping ground for Los Angeles. Scavengers survive off the scraps, which are recycled into products assembled by masses of orphaned child laborers in dilapidated sweatshop factories.

The Los Angeles of Blade Runner 2049 looks (and is) even colder and more foreboding than before. Gone are the Art Deco-inspired architecture and furnishings, replaced by Brutalist architecture and fluorescent-lit utilitarian interiors (with a few exceptions such as Deckard’s residence, Stelline Corporation headquarters and the Wallace Corporation building). Aerial shots reveal a vast elevated sprawl of uniform city blocks largely consisting of dark flat rooftops with glimmers of light emanating from below, visible only in the deep but narrow chasms between.

One of the more prominent structures is the LAPD headquarters, which looks like an armored watchtower, signaling its role as a hub of the future surveillance-state panopticon. Though an imposing feature of the city’s skyline, it’s dwarfed by larger structures housing even more powerful institutions. Just as a massive ziggurat owned by the Tyrell Corporation dominated the cityscape of the first film, by 2049 the Wallace Corporation has bought out the Tyrell Corporation and not only claims the ziggurat but has constructed an absurdly large pyramid behind it. Protecting the entire coastline of the city is a giant sea wall, presumably built to prevent mass flooding from rising sea levels.

In a referential nod to the original film, city scenes in 2049 display some of the same ads, such as Atari, Coca-Cola and Pan Am, but even more distracting are the product placements for Sony, one of the companies that produced the new film. Such details might work as “Easter eggs” for fans (and shareholders), but they take away from the verisimilitude of the world depicted in the film, where the Wallace Corporation so thoroughly dominates the economy and society that there probably wouldn’t be much room for competitors large enough to afford mass advertising.

While the background characters in the city of the first film seemed rude or largely indifferent to one another, 30 years later the citizens are more outwardly hostile. This could reflect increasing social tensions from economic stratification, as well as hostility toward replicants, since the protagonist of this film is openly identified as one. Speaking of which, Ryan Gosling turns in an excellent performance as the new blade runner, Officer K (aka Joe, an obvious reference to Joseph K from Kafka’s “The Trial”).

Ironically, the replicants and other forms of AI in 2049 seem a little more self-aware and human-like, while the humans and social institutions have become correspondingly android-like. From the perspective of the future CEOs (and some today), both replicants and non-wealthy humans (known as “little people” in cityspeak) exist to be exploited for labor and money and then “retired” when no longer needed. Reflecting this brutal reality is the largely grey and drab color scheme of the landscapes, interiors, and fashions. Adding to the mood is the soundtrack, which, while at times evoking the calmer and more subtle Vangelis music of the original, is more often louder and harsher, sometimes blending with the noisy diegetic (background) soundscape.

2049’s screenplay is almost a meta-sequel, introducing plot elements seemingly designed to address problems and inconsistencies in the original that fans and critics have pointed out through the years. Numerous references to Blade Runner, while nostalgic and crowd-pleasing, are almost distracting enough to break the spell of the film (at least for those who’ve re-watched the original enough times to memorize every detail). Fortunately, just as frequently, new revelations, concepts and hints at potential new directions pull one back in. I especially appreciated the further exploration of the origins and impact of false memories, with their parallels to the creation and consumption of media, and the way the film expanded the scope of the story beyond the city. Also surprising were references to films partly inspired by the works of Philip K. Dick, such as Ghost in the Shell, The Matrix, and Her, as well as stylistic influences from more contemporary aesthetic subcultures such as glitch and vaporwave.

As with most sequels, the main draw for fans is the chance to see familiar faces from the original, and 2049 doesn’t disappoint too much. Judging from the posters, trailers, interviews, etc., it was clear Harrison Ford would make a return, though unfortunately he doesn’t appear until the majority of the film has passed. Nevertheless, the reappearance of Ford’s character Deckard is memorable: Officer K finds him living as a disheveled hermit in an abandoned casino, surrounded by copious amounts of alcohol and ancient pop culture detritus. Deckard is apparently as much of a drinker as in the first film, but now not just to block out the pain of the past and present but to escape to an idealized past. Though his involvement in the plot seems too brief, it nevertheless plays a pivotal role in resolving the central mystery of the film and providing additional metacommentary.

Ford’s performance is arguably more compelling than his work in the original, though his character’s lack of charisma in the first film could be seen as intentional. Deckard’s character arc here, like those of the other two iconic roles from the ’80s Ford has reprised, cements his status as our culture’s archetype of the deadbeat father. This seems inevitable in hindsight, because for a generation of latchkey kids (many with actual deadbeat dads), stars such as Harrison Ford were virtually surrogate father figures. Thus, it makes sense that the beloved characters Ford drifted away from for so long would be written as variations on a long-absent deadbeat parent in their last installments.

An interesting detail about the way Deckard is characterized in the film is that he seems more in line with a typical baby boomer of today than with the Gen Xer who would actually be that age 32 years from now. For example, people in their seventies today are probably more likely to be nostalgic for Elvis and Sinatra than a seventy-something person in 2049, who in our actual timeline would more likely have spent formative years listening to grunge or hip-hop. A possible subtextual meaning might be that, like false memories, nostalgia for media of enduring cultural value transcends lived experience. The referencing of “real” pop-culture figures within the world of the film seemed anachronistic at first, but the way it was done was interesting and worked with the themes and aesthetic (I suppose it’s preferable to having something like the Beastie Boys’ “Sabotage” shoehorned into the film, as in the Star Trek reboots). Getting back to the point, in the original Blade Runner, nostalgia permeated the film through its themes, production design, costumes and soundtrack. In Blade Runner 2049, nostalgia is a subtext of the repeated callbacks to the original film, Officer K’s idealized retro relationship with his AI girlfriend Joi, and Deckard’s hideout within the ruins of a city once associated with fun and glamour. The simulacra of iconic figures from the past like Elvis and Marilyn Monroe (and Ford) haunting the deserted casino like ghosts reinforce the idea of media and culture’s ability to “implant” memories and the nostalgia that results.

As for the finale, I was disappointed that it strayed so far from the unconventional conclusion of the original’s showdown between Roy Batty and Deckard. One could argue it’s a reflection of the state of the world (in film and in reality), but it would have been nice to see a little more creativity and risk-taking. Though viscerally exciting and suspenseful, it isn’t distinguishable enough from countless modern action films to be truly memorable. More satisfying is the epilogue, which parallels the contemplative nature of the original while reconnecting to the film’s recurring themes.

In a sense, the writers and director of Blade Runner 2049 were in a catch-22: a film too unlike the original Blade Runner, or too similar to it, would provoke criticism from fans. What director Denis Villeneuve and co-writers Hampton Fancher and Michael Green have managed to pull off is a balancing act: a film that’s unique in many ways yet interwoven with the original; nostalgic, but not in an obvious or overly sentimental way. Both have their flaws, but while I admire the thought and craft put into the sequel, I prefer the originality, tone, texture and atmosphere of Blade Runner. Blade Runner 2049 will likely satisfy most sci-fi fans, but I’m not sure it proves a sequel was necessary or that it stands alone as a classic.

Though not given the recognition it deserved in its time, Blade Runner was a groundbreaking and visionary film, raising the bar for intellectual depth, moral complexity, production design and special effects to a degree not seen since 2001: A Space Odyssey. Its influence can be spotted in countless dystopian science fiction films made since. Though it’s too early to tell how influential Blade Runner 2049 will be, it doesn’t seem to have pushed the genre forward to a similar extent (of course, contemporaneous opinions can seem wildly off the mark in hindsight). Regardless, it’s an above-average science fiction film by any reasonable standard, so it’s unfortunate that, judging from disappointing initial box-office reports, it seems to be following in the footsteps of the first Blade Runner pretty closely in that regard as well. Time will tell whether it achieves a similar cult status in years to come. Perhaps in 35 years?


You Want a Picture of the Future? Imagine a Boot Stamping on Your Face

By John W. Whitehead

Source: The Rutherford Institute

“The Internet is watching us now. If they want to. They can see what sites you visit. In the future, television will be watching us, and customizing itself to what it knows about us. The thrilling thing is, that will make us feel we’re part of the medium. The scary thing is, we’ll lose our right to privacy. An ad will appear in the air around us, talking directly to us.”—Director Steven Spielberg, Minority Report

We have arrived, way ahead of schedule, in the dystopian future dreamed up by such science fiction writers as George Orwell, Aldous Huxley, Margaret Atwood and Philip K. Dick.

Much like Orwell’s Big Brother in 1984, the government and its corporate spies now watch our every move.

Much like Huxley’s Brave New World, we are churning out a society of watchers who “have their liberties taken away from them, but … rather enjoy it, because they [are] distracted from any desire to rebel by propaganda or brainwashing.”

Much like Atwood’s The Handmaid’s Tale, the populace is now taught to “know their place and their duties, to understand that they have no real rights but will be protected up to a point if they conform, and to think so poorly of themselves that they will accept their assigned fate and not rebel or run away.”

And in keeping with Philip K. Dick’s darkly prophetic vision of a dystopian police state—which became the basis for Steven Spielberg’s futuristic thriller Minority Report, released 15 years ago—we are now trapped in a world in which the government is all-seeing, all-knowing and all-powerful, and if you dare to step out of line, dark-clad police SWAT teams and pre-crime units will crack a few skulls to bring the populace under control.

Minority Report is set in the year 2054, but it could just as well have taken place in 2017.

Seemingly taking its cue from science fiction, technology has moved so fast in the short time since Minority Report premiered in 2002 that what once seemed futuristic no longer occupies the realm of science fiction.

Incredibly, as the various nascent technologies employed and shared by the government and corporations alike—facial recognition, iris scanners, massive databases, behavior prediction software, and so on—are incorporated into a complex, interwoven cyber network aimed at tracking our movements, predicting our thoughts and controlling our behavior, Spielberg’s unnerving vision of the future is fast becoming our reality.

Both worlds—our present-day reality and Spielberg’s celluloid vision of the future—are characterized by widespread surveillance, behavior prediction technologies, data mining, fusion centers, driverless cars, voice-controlled homes, facial recognition systems, cybugs and drones, and predictive policing (pre-crime) aimed at capturing would-be criminals before they can do any damage.

Surveillance cameras are everywhere. Government agents listen in on our telephone calls and read our emails. Political correctness—a philosophy that discourages diversity—has become a guiding principle of modern society.

The courts have shredded the Fourth Amendment’s protections against unreasonable searches and seizures. In fact, SWAT teams battering down doors without search warrants and FBI agents acting as a secret police that investigate dissenting citizens are common occurrences in contemporary America.

We are increasingly ruled by multi-corporations wedded to the police state. Much of the population is hooked on drugs, whether illegal or prescribed by doctors. And bodily privacy and integrity have been utterly eviscerated by a prevailing view that Americans have no rights over what happens to their bodies during an encounter with government officials, who are allowed to search, seize, strip, scan, spy on, probe, pat down, taser, and arrest any individual at any time and for the slightest provocation.

All of this has come about with little more than a whimper from a clueless American populace composed largely of nonreaders and television and internet zombies. But we have been warned about such an ominous future in novels and movies for years.

The following 15 films may be the best representation of what we now face as a society.

Fahrenheit 451 (1966). Adapted from Ray Bradbury’s novel and directed by François Truffaut, this film depicts a futuristic society in which books are banned and firemen, ironically, are called on to burn contraband books—451 degrees Fahrenheit being the temperature at which paper burns. Montag is a fireman who develops a conscience and begins to question his book burning. The film is an apt metaphor for our obsessively politically correct society, in which virtually everyone now pre-censors speech. Here, a brainwashed people addicted to television and drugs do little to resist their governmental oppressors.

2001: A Space Odyssey (1968). The plot of Stanley Kubrick’s masterpiece, based on an Arthur C. Clarke short story, revolves around a space voyage to Jupiter. The astronauts soon learn, however, that the fully automated ship is orchestrated by a computer system—known as HAL 9000—which has become an autonomous thinking being that will even murder to retain control. The idea is that at some point in human evolution, technology in the form of artificial intelligence will become autonomous and human beings will become mere appendages of technology. In fact, we are already seeing this development in the massive databases generated and controlled by the government, administered by secretive agencies such as the National Security Agency, which sweep up data from websites and other devices, collecting information on average citizens. We are being watched from cradle to grave.

Planet of the Apes (1968). Based on Pierre Boulle’s novel, this film follows astronauts who crash on a planet where apes are the masters and humans are treated as brutes and slaves. While fleeing from gorillas on horseback, astronaut Taylor is shot in the throat, captured and housed in a cage. From there, Taylor begins a journey that reveals the planet was once controlled by technologically advanced humans who destroyed their own civilization. Taylor’s trek to the ominous Forbidden Zone reveals the startling fact that he was on planet earth all along. Descending into a fit of rage at what he sees in the final scene, Taylor screams: “We finally really did it. You maniacs! You blew it up! Damn you.” The lesson is obvious here, but will we listen? The script, although rewritten, was initially drafted by Rod Serling and retains Serling’s Twilight Zone-ish ending.

THX 1138 (1971). George Lucas’s directorial debut is a somber view of a dehumanized society totally controlled by a police state. The people are force-fed drugs to keep them passive, and they no longer have names, only letter/number combinations such as THX 1138. Any citizen who steps out of line is quickly brought into compliance by robotic police equipped with “pain prods”—electro-shock batons. Sound like tasers?

A Clockwork Orange (1971). Director Stanley Kubrick presents a future ruled by sadistic punk gangs and a chaotic government that cracks down on its citizens only sporadically. Alex is a violent punk who finds himself caught in the grinding, crushing wheels of injustice. The film may well portray the future of a western society that grinds to a halt as oil supplies diminish, environmental crises mount, chaos rules, and the only thing left is brute force.

Soylent Green (1973). In a futuristic, overpopulated New York City, the people depend on synthetic foods manufactured by the Soylent Corporation. A policeman investigating a murder discovers the grisly truth about what soylent green is really made of. The theme is a world ruled by ruthless corporations whose only goals are greed and profit. Sound familiar?

Blade Runner (1982). In a 21st-century Los Angeles, a world-weary cop tracks down a handful of renegade “replicants” (synthetically produced human slaves). Life is now dominated by mega-corporations, and people sleepwalk along rain-drenched streets. This is a world where human life is cheap, and where anyone can be exterminated at will by the police (or blade runners). Based upon a Philip K. Dick novel, this exquisite Ridley Scott film questions what it means to be human in an inhuman world.

Nineteen Eighty-Four (1984). The best adaptation of Orwell’s dark tale, this film visualizes the total loss of freedom in a world dominated by technology and its misuse, and the crushing inhumanity of an omniscient state. The government controls the masses by controlling their thoughts, altering history and changing the meaning of words. Winston Smith is a doubter who turns to self-expression through his diary and then begins questioning the ways and methods of Big Brother before being re-educated in a most brutal fashion.

Brazil (1985). Sharing a vision of the near future similar to that of 1984 and Franz Kafka’s novel The Trial, this is arguably director Terry Gilliam’s best work, one that merges the fantastic with stark reality. Here, a mother-dominated, hapless clerk takes refuge in flights of fantasy to escape the drabness of ordinary life. Caught within the chaotic tentacles of a police state, the clerk longs for freer, more innocent times, and that longing lies just beneath the film’s vicious surface.

They Live (1988). John Carpenter’s bizarre sci-fi social satire action film assumes the future has already arrived. John Nada is a homeless person who stumbles across a resistance movement and finds a pair of sunglasses that enables him to see the real world around him. What he discovers is a world controlled by ominous beings who bombard the citizens with subliminal messages such as “obey” and “conform.” Carpenter manages to make an effective political point about the underclass—that is, everyone except those in power. The point: we, the prisoners of our devices, are too busy sucking up the entertainment trivia beamed into our brains and attacking each other to mount an effective resistance movement.

The Matrix (1999). The story centers on computer programmer Thomas A. Anderson, secretly a hacker known by the alias “Neo,” who begins a relentless quest to learn the meaning of “The Matrix”—cryptic references that appear on his computer. Neo’s search leads him to Morpheus, who reveals the truth: the present reality is not what it seems, and Anderson is actually living in the future—2199. Humanity is at war against technology, which has taken the form of intelligent beings, and Neo has been living inside the Matrix, an illusory world that appears to be set in the present in order to keep humans docile and under control. Neo soon joins Morpheus and his cohorts in a rebellion against the machines, which use SWAT-team tactics to keep things under control.

Minority Report (2002). Based on a short story by Philip K. Dick and directed by Steven Spielberg, the film is set in 2054, when PreCrime, a specialized police unit, apprehends criminals before they can commit their crimes. Captain Anderton is the chief of the Washington, DC, PreCrime force, which uses future visions generated by “pre-cogs” (mutated humans with precognitive abilities) to stop murders. Soon Anderton becomes the focus of an investigation when the pre-cogs predict he will commit a murder. But the system can be manipulated. This film raises the issue of the danger of technology operating autonomously—which will happen eventually if it has not already occurred. To a hammer, all the world looks like a nail. In the same way, to a police state computer, we all look like suspects. In fact, before long, we all may be mere extensions or appendages of the police state—all suspects in a world commandeered by machines.

V for Vendetta (2006). This film depicts a society ruled by a corrupt and totalitarian government where everything is run by an abusive secret police. A vigilante named V dons a mask and leads a rebellion against the state. The subtext here is that authoritarian regimes through repression create their own enemies—that is, terrorists—forcing government agents and terrorists into a recurring cycle of violence. And who is caught in the middle? The citizens, of course. This film has a cult following among various underground political groups such as Anonymous, whose members wear the same Guy Fawkes mask as that worn by V.

Children of Men (2006). This film portrays a futuristic world without hope since humankind has lost its ability to procreate. Civilization has descended into chaos and is held together by a military state and a government that attempts to keep its totalitarian stronghold on the population. Most governments have collapsed, leaving Great Britain as one of the few remaining intact societies. As a result, millions of refugees seek asylum only to be rounded up and detained by the police. Suicide is presented as a viable option, with a suicide kit called Quietus promoted on billboards, on television and in newspapers. But hope for a new day comes when a woman becomes inexplicably pregnant.

Land of the Blind (2006). This dark political satire is based on several historical incidents in which tyrannical rulers were overthrown by new leaders who proved just as evil as their predecessors. Maximilian II is a demented fascist ruler of a troubled land named Everycountry who has two main interests: tormenting his underlings and running his country’s movie industry. Citizens who are perceived as questioning the state are sent to “re-education camps” where the state’s concept of reality is drummed into their heads. Joe, a prison guard, is emotionally moved by the prisoner and renowned author Thorne and eventually joins a coup to remove the sadistic Maximilian, replacing him with Thorne. But soon Joe finds himself the target of the new government.

All of these films—and the writers who inspired them—understood what many Americans, caught up in their partisan, flag-waving, zombified states, are still struggling to come to terms with: that there is no such thing as a government organized for the good of the people. Even the best intentions among those in government inevitably give way to the desire to maintain power and control at all costs.

Eventually, as I point out in my book Battlefield America: The War on the American People, even the sleepwalking masses (who remain convinced that all of the bad things happening in the police state—the police shootings, the police beatings, the raids, the roadside strip searches—are happening to other people) will have to wake up.

Sooner or later, the things happening to other people will start happening to us and our loved ones.

When that painful reality sinks in, it will hit with the force of a SWAT team crashing through your door, a taser being aimed at your stomach, and a gun pointed at your head. And there will be no channel to change, no reality to alter, and no manufactured farce to hide behind.

As George Orwell warned, “If you want a picture of the future, imagine a boot stamping on a human face forever.”

End-times for humanity


Humanity is more technologically powerful than ever before, and yet we feel ourselves to be increasingly fragile. Why?

By Claire Colebrook

Source: Aeon

The end of the world is a growth industry. You can almost feel Armageddon in the air: from survivalist and ‘prepper’ websites (survivopedia.com, doomandbloom.net, prepforshtf.com) to new academic disciplines (‘disaster studies’, ‘Anthropocene studies’, ‘extinction studies’), human vulnerability is in vogue.

The panic isn’t merely about civilisational threats, but existential ones. Beyond doomsday proclamations about mass extinction, climate change, viral pandemics, global systemic collapse and resource depletion, we seem to be seized by an anxiety about losing the qualities that make us human. Social media, we’re told, threatens our capacity for empathy and genuine connection. Then there’s the disaster porn and apocalyptic cinema, in which zombies, vampires, genetic mutants, artificial intelligence and alien invaders are oh-so-nearly human that they cast doubt on the value and essence of the category itself.

How did we arrive at this moment in history, in which humanity is more technologically powerful than ever before, and yet we feel ourselves to be increasingly fragile? The answer lies in the long history of how we’ve understood the quintessence of ‘the human’, and the way this category has fortified itself by feeding on the fantasy of its own collapse. Fears about the frailty of human wisdom go back at least as far as Ancient Greece and the fable of Plato’s cave, in which humans are held captive and can only glimpse the shadows of true forms flickering on the stone walls. We prisoners struggle to turn towards the light and see the source (or truth) of images, and we resist doing so. In another Platonic dialogue, the Phaedrus, Socrates worries that the very medium of knowledge – writing – might discourage us from memorising and thinking for ourselves. It’s as though the faculty of reason that defines us is also something we’re constantly in danger of losing, and even tend to avoid.

This paradoxical logic of loss – in which we value that which we’re at the greatest risk of forsaking – is at work in how we’re dealing with our current predicament. It’s only by confronting how close we are to destruction that we might finally do something; it’s only by embracing the vulnerability of humanity itself that we have any hope of establishing a just future. Or so say the sages of pop culture, political theory and contemporary philosophy. Ecological destruction is what will finally force us to act on the violence of capitalism, according to Naomi Klein in This Changes Everything: Capitalism vs the Climate (2014). The philosopher Martha Nussbaum has long argued that an attempt to secure humans from fragility and vulnerability explains the origins of political hierarchies from Plato to the present; it is only if we appreciate our own precarious bodily life, and the emotions and fears that attach to being human animals, that we can understand and overcome racism, sexism and other irrational hatreds. Disorder and potential destruction are actually opportunities to become more robust, argues Nassim Nicholas Taleb in Antifragile (2012) – and in Thank You for Being Late (2016), the New York Times’ columnist Thomas Friedman claims that the current, overwhelming ‘age of accelerations’ is an opportunity to take a pause. Meanwhile, Oxford University’s Future of Humanity Institute pursues research focused on avoiding existential catastrophes, at the same time as working on technological maturity and ‘superintelligence’.

It’s here that one can discern how tightly fragility and virility are knit together. ‘Humanity’ is a hardened concept, but a brittle one. History suggests that the more we define ‘the human’ as a subject of intellect, mastery and progress – the more ‘we’ insist on global unity under the umbrella of a supposedly universal kinship – the less possible it becomes to imagine any other mode of existence as human. The apocalypse is typically depicted as humanity reduced to mere life, fragile, exposed to all forms of exploitation and the arbitrary exercise of power. But these dystopian future scenarios are nothing worse than the conditions in which most humans live as their day-to-day reality. By ‘end of the world’, we usually mean the end of our world. What we don’t tend to ask is who gets included in the ‘we’, what it cost to attain our world, and whether we were entitled to such a world in the first place.

Stories about the end of time have a long history, from biblical eschatology to medieval plague narratives. But our fear of a peculiarly ‘human’ apocalypse really begins with the 18th-century Enlightenment. This was the intellectual birthplace of the modern notion of ‘humanity’, a community of fellow beings united by shared endowments of reason and rights. This humanist ideal continues to inform progressive activism and democratic discourse to this day. However, it’s worth taking a moment to go back to René Descartes’s earlier declaration of ‘I think, therefore I am’, and ask how it was possible for an isolated self to detach their person from the world, and devote writing, reading and persuasion to the task of defending an isolated and pure ego. Or fast-forward a few centuries to 1792, and consider how Mary Wollstonecraft had the time to read about the rights of man, and then demand the rights of woman.

The novelist Amitav Ghosh provides a compelling answer in his study of global warming, The Great Derangement (2016). Colonisation, empire and climate change are inextricably intertwined as practices, he says. The resources of what would become the Third World were crucial in creating the comfortable middle-class existences of the modern era, but those resources could not be made available to all: ‘the patterns of life that modernity engenders can only be practised by a small minority … Every family in the world cannot have two cars, a washing machine and a refrigerator – not because of technical or economic limitations but because humanity would asphyxiate in the process.’

Ghosh disputes one crucial aspect of the story of humanity: that it should involve increasing progress and inclusion until we all reap the benefits. But I’d add a further strand to this dissenting narrative: the Enlightenment conception of rights, freedom and the pursuit of happiness simply wouldn’t have been imaginable if the West had not enjoyed a leisured ease and technological sophistication that allowed for an increasingly liberal middle class. The affirmation of basic human freedoms could become widespread moral concerns only because modern humans were increasingly comfortable at a material level – in large part thanks to the economic benefits afforded by the conquest, colonisation and enslavement of others. So it wasn’t possible to be against slavery and servitude (in the literal and immediate sense) until large portions of the globe had been subjected to the industries of energy-extraction. The rights due to ‘us all’, then, relied on ignoring the fact that these favourable conditions had been purchased at the expense of the lives of other humans and non-humans. A truly universal entitlement to security, dignity and rights came about only because the beneficiaries of ‘humanity’ had secured their own comfort and status by rendering those they deemed less than human even more fragile.

What’s interesting about the emergence of this 18th-century humanism isn’t only that it required a prior history of the abjection it later rejected. It’s also that the idea of ‘humanity’ continued to have an ongoing relation to that same abjection. After living off the wealth extracted from the bodies and territories of ‘others’, Western thought began to extend the category of ‘humanity’ to capture more and more of these once-excluded individuals, via abolitionism, women’s suffrage and movements to expand the franchise. In a strange way these shifts resemble the pronouncements of today’s tech billionaires, who, having extracted unimaginable amounts of value from the mechanics of global capitalism, are now calling for Universal Basic Income to offset the impacts of automation and artificial intelligence. Mastery can afford to correct itself only from a position of leisured ease, after all.

But there’s a twist. While everyone’s ‘humanity’ had become inherent and unalienable, certain people still got to be more fully ‘realised’ as humans than others. As the circle of humanity grew to capture the vulnerable, the risk that ‘we’ would slip back into a semi-human or non-human state seemed more present than before – and so justified demands for an ever more elevated and robust conception of ‘the human’.

One can see this dynamic at work in the 18th-century discussions about slavery. By then the practice itself had become morally repugnant, not only because it dehumanised slaves, but because the very possibility of enslavement – of some humans not realising their potential as rational subjects – was considered pernicious for humanity as a whole. In A Vindication of the Rights of Woman (1792), for example, Wollstonecraft compared women to slaves, but insisted that slavery would allow no one to be a true master. ‘We’ are all rendered more brutal and base by enslaving others, she said. ‘[Women] may be convenient slaves,’ Wollstonecraft wrote, ‘but slavery will have its constant effect, degrading the master and the abject dependent.’

These statements assumed that an entitlement to freedom was the natural condition of the ‘human’, and that real slavery and servitude were no longer genuine threats to ‘us’. When Jean-Jacques Rousseau argued in The Social Contract (1762) that ‘man is born free, and everywhere he is in chains’, he was certainly not most concerned about those who were literally in chains; likewise William Blake’s notion of ‘mind-forg’d manacles’ implies that the true horror is not physical entrapment but a capacity to enslave oneself by failing to think. It’s thus at the very moment of abolition, when slavery is reduced to a mere symbol of fragility, that it becomes a condition that imperils the potency of humanity from within.

I’m certainly not suggesting that there is something natural or inevitable about slavery. What I’m arguing is that the very writers who argued against slavery, who argued that slavery was not fitting for humans in their very nature, nevertheless saw the unnatural and monstrous potential for slavery as far too proximate to humans in their proper state. Yet rather than adopt a benevolence towards the world in light of this vulnerability in oneself, the opposite has tended to be the case. It is because humans can fail to reach their rational potential and be ‘everywhere in chains’ that they must ever more vigilantly secure their future. ‘Humanity’ was to be cherished and protected precisely because it was so precariously elevated above mere life. The risk of debasement to ‘the human’ turned into a force that solidified and extended the category itself. And so slavery was not conceived as a historical condition for some humans, subjected by ruthless, inhuman and overpowering others; it was an ongoing insider threat, a spectre of fragility that has justified the drive for power.

How different are the stories we tell ourselves today? Movies are an interesting barometer of the cultural mood. In the 1970s, cinematic disaster tales routinely featured parochial horrors such as shipwrecks (The Poseidon Adventure, 1972), burning skyscrapers (The Towering Inferno, 1974), and man-eating sharks (Jaws, 1975). Now, they concern the whole of humanity. What threatens us today are not localised incidents, but humans. The wasteland of Interstellar (2014) is one of resource depletion following human over-consumption; the world reduced to enslaved existence in Elysium (2013) is a result of species-bifurcation, as some humans seize the only resources left, while those left on Earth enjoy a life of indentured labour. That the world will end (soon) seems to be so much a part of the cultural imagination that we entertain ourselves by imagining how, not whether, it will play out.

But if you look closely, you’ll see that most ‘end of the world’ narratives end up becoming ‘save the world’ narratives. Popular culture might heighten the scale and intensity of catastrophe, but it does so with the payoff of a more robust and final triumph. Interstellar pits the frontier spirit of space exploration over a miserly and merely survivalist bureaucracy, culminating with a retired astronaut risking it all to save the world. Even the desolate cinematic version (2009) of Cormac McCarthy’s novel The Road (2006) concludes with a young boy joining a family. The most reduced, enslaved, depleted and lifeless terrains are still opportunities for ‘humanity’ to confront the possibility of non-existence in order to achieve a more resilient future.

Such films hint at a desire for new ways of being. In Avatar (2009), a militaristic and plundering West invades the moon Pandora in order to mine ‘unobtanium’; they are ultimately thwarted by the indigenous Na’vi, whose attitude to nature is not one of acquisition but of symbiotic harmony. Native ecological wisdom and attunement is what ultimately leads to victory over the instrumental reason of the self-interested invaders. In Mad Max: Fury Road (2015), a resource-depleted future world is controlled by a rapacious, parasitic, and wasteful elite. But salvation comes from the revolutionary return of a group of ecologically attuned and other-directed women, all blessed with a mythic wisdom that enables ultimate triumph over the violent self-interest of the literally blood-sucking tyrant family. These stories rely on quasi-indigenous and feminist images of community to offer alternatives to Western hyper-extraction; both resolve their disaster narratives with the triumph of intuitive and holistic modes of existence over imperialism and militarism. They not only depict the post-post-apocalyptic future in joyous terms, but do so by appealing to a more benevolent and ecologically attuned humanity.

These films whisper: take a second glance at the present, and what looks like a desperate situation might actually be an occasion for enhancement. The very world that appears to be at the brink of destruction is really a world of opportunity. Once again, the self-declared universal humanity of the Enlightenment – that same humanity that enslaved and colonised on the grounds that ‘we’ would all benefit from the march of reason and progress – has started to appear as both fragile and capable of ethical redemption. It’s our own weakness, we seem to say, that endows humanity with a right to ultimate mastery.

What contemporary post-apocalyptic culture fears isn’t the end of ‘the world’ so much as the end of ‘a world’ – the rich, white, leisured, affluent one. Western lifestyles are reliant on what the French philosopher Bruno Latour has referred to as a ‘slowly built set of irreversibilities’, requiring the rest of the world to live in conditions that ‘humanity’ regards as unliveable. And nothing could be more precarious than a species that contracts itself to a small portion of the Earth, draws its resources from elsewhere, transfers its waste and violence, and then declares that its mode of existence is humanity as such.

To define humanity as such by this specific form of humanity is to see the end of that humanity as the end of the world. If everything that defines ‘us’ relies upon such a complex, exploitative and appropriative mode of existence, then of course any diminution of this hyper-humanity is deemed to be an apocalyptic event. ‘We’ have lost our world of security, we seem to be telling ourselves, and will soon be living like all those peoples on whom we have relied to bear the true cost of what it means for ‘us’ to be ‘human’.

The lesson that I take from this analysis is that the ethical direction of fragility must be reversed. The more invulnerable and resilient humanity insists on trying to become, the more vulnerable it must necessarily be. But rather than looking at the apocalypse as an inhuman horror show that might befall ‘us’, we should recognise that what presents itself as ‘humanity’ has always outsourced its fragility to others. ‘We’ have experienced an epoch of universal ‘human’ benevolence, a globe of justice and security as an aspiration for all, only by intensifying and generating utterly fragile modes of life for other humans. So the supposedly galvanising catastrophes that should prompt ‘us’ to secure our stability are not only things that many humans have already lived through, but perhaps shouldn’t be excluded from how we imagine our own future.

This is why contemporary disaster scenarios still depict a world and humans, but this world is not ‘the world’, and the humans who are left are not ‘humanity’. The ‘we’ of humanity, the ‘we’ that imagines itself to be blessed with favourable conditions that ought to extend to all, is actually the most fragile of historical events. If today ‘humanity’ has started to express a sense of unprecedented fragility, this is not because a life of precarious, exposed and vulnerable existence has suddenly and accidentally interrupted a history of stability. Rather, it reveals that the thing calling itself ‘humanity’ is better seen as a hiatus and an intensification of an essential and transcendental fragility.