Saturday Matinee: Obsolete

Source: Truthstream Media

The Future Doesn’t Need Us… Or So We’ve Been Told. With the rise of technology and the real-time pressures of an online, global economy, humans will have to be very clever – and very careful – not to be left behind by the future. From the perspective of those in charge, human labor is losing its value, and people are becoming a liability. This documentary reveals the real motivation behind the secretive effort to reduce the population and bring resource use into strict, centralized control. Could it be that the biggest threat we face isn’t just automation and robots destroying jobs, but the larger sense that humans could become obsolete altogether? *Please watch and share!* Link to film: http://amzn.to/2f69Ocr

Saturday Matinee: Summer Wars

“Summer Wars” (2009) is a sci-fi anime directed by Mamoru Hosoda (The Girl Who Leapt Through Time) and animated by Madhouse studio. The film’s plot follows Kenji Koiso, an eleventh-grade math prodigy who, while visiting his friend Natsuki’s great-grandmother, is falsely implicated in the hacking of a virtual world by a rogue AI called Love Machine. With the help of his community, Kenji must undo the damage and prevent the AI’s spread into the real world.

Watch the full film here.

Fear our new robot overlords: This is why you need to take artificial intelligence seriously


Killer computers determined to destroy us? Nope. Forget “Terminator” — there’s something more specific to worry about

By Phil Torres

Source: Salon

There are a lot of major problems today with tangible, real-world consequences. A short list might include terrorism, U.S.-Russian relations, climate change and biodiversity loss, income inequality, health care, childhood poverty, and the homegrown threat of authoritarian populism, most notably associated with the presumptive nominee for the Republican Party, Donald Trump.

Yet if you’ve been paying attention to the news for the past several years, you’ve almost certainly seen articles from a wide range of news outlets about the looming danger of artificial general intelligence, or “AGI.” For example, Stephen Hawking has repeatedly warned that “the development of full artificial intelligence could spell the end of the human race,” and Elon Musk — of Tesla and SpaceX fame — has described the creation of superintelligence as “summoning the demon.” Furthermore, the Oxford philosopher and director of the Future of Humanity Institute, Nick Bostrom, published a New York Times best-selling book in 2014 called Superintelligence, in which he suggests that the “default outcome” of building a superintelligent machine will be “doom.”

What’s with all this fear-mongering? Should we really be worried about a takeover by killer computers hell-bent on the total destruction of Homo sapiens? The first thing to recognize is that a Terminator-style war between humans and humanoid robots is not what the experts are anxious about. Rather, the scenarios that keep these individuals awake at night are far more catastrophic. This may be difficult to believe but, as I’ve written elsewhere, sometimes truth is stranger than science fiction. Indeed, given that the issue of AGI isn’t going anywhere anytime soon, it’s increasingly important for the public to understand exactly why the experts are nervous about superintelligent machines. As the Future of Life Institute recently pointed out, there’s a lot of bad journalism about AGI out there. This is a chance to correct the record.

Toward this goal, step one is to realize that your brain is an information-processing device. In fact, many philosophers talk about the brain as the hardware — or rather, the “wetware” — of the mind, and the mind as the software of the brain. Directly behind your eyes is a high-powered computer that weighs about three pounds and has roughly the same consistency as Jell-O. It’s also the most complex object in the known universe. Nonetheless, the rate at which it’s able to process information is much, much slower than the information-processing speed of an actual computer. The reason is that computers propagate electrical signals at a significant fraction of the speed of light, whereas the fastest signals in your brain travel at around 100 meters per second. Fast, to be sure, but not remotely close to the speed of light.

Consequently, an AGI could think about the world at speeds many orders of magnitude faster than our brains can. From the AGI’s point of view, the outside world — including people — would move so slowly that everything would appear almost frozen. As the theorist Eliezer Yudkowsky calculates, for a computer running a million times faster than our puny brains, “a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours.”
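Yudkowsky’s figures are easy to check with back-of-the-envelope arithmetic. The sketch below (plain Python, standard calendar constants, and the millionfold speedup he assumes) re-derives them, landing within rounding distance of the quoted 31 seconds and eight-and-a-half hours:

```python
SPEEDUP = 1_000_000                    # assumed thinking-speed advantage
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds

# Wall-clock seconds that pass while the AGI experiences one subjective year.
seconds_per_subjective_year = SECONDS_PER_YEAR / SPEEDUP

# Wall-clock hours that pass while it experiences a subjective millennium.
hours_per_subjective_millennium = 1000 * seconds_per_subjective_year / 3600

print(round(seconds_per_subjective_year, 1))      # ~31.6 seconds
print(round(hours_per_subjective_millennium, 1))  # ~8.8 hours
```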

Already, then, an AGI would have a huge advantage. Imagine yourself in a competition against a machine that has a whole year to work through a cognitive puzzle for every 31 seconds that you spend trying to think up a solution. The mental advantage of the AGI would be truly profound. Even a large team of humans working together would be no match for a single AGI with so much time on its hands. Now imagine that we’re not in a puzzle-solving competition with an AGI but a life-and-death situation in which the AGI wants to destroy humanity. While we struggle to come up with strategies for keeping it contained, it would have ample time to devise a scheme that exploits any technology within electronic reach.

But a diabolical AGI isn’t — once again — what many experts are actually worried about. This is a crucial point that the Harvard psychologist Steven Pinker misses in a comment about AGI for the website Edge.org. To quote Pinker at length:

“The other problem with AGI dystopias is that they project a parochial alpha-male psychology onto the concept of intelligence. Even if we did have superhumanly intelligent robots, why would they want to depose their masters, massacre bystanders, or take over the world? Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something. History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems.” Pinker then concludes with, “It’s telling that many of our techno-prophets can’t entertain the possibility that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no burning desire to annihilate innocents or dominate the civilization.”

Unfortunately, such criticism misunderstands the danger. While it’s conceptually possible that an AGI really does have malevolent goals — for example, someone could intentionally design an AGI to be malicious — the more likely scenario is one in which the AGI kills us because doing so happens to be useful. By analogy, when a developer wants to build a house, does he or she consider the plants, insects, and other critters that happen to live on the plot of land? No. Their death is merely incidental to a goal that has nothing to do with them. Or consider the opening scenes of The Hitchhiker’s Guide to the Galaxy, in which “bureaucratic” aliens schedule Earth for demolition to make way for a “hyperspatial express route” — basically, a highway. In this case, the aliens aren’t compelled to destroy us out of hatred. We just happen to be in the way.

The point is that what most theorists are worried about is an AGI whose values — or final goals — don’t fully align with ours. This may not sound too bad, but a bit of reflection shows that if an AGI’s values fail to align with ours in even the slightest ways, the outcome could very well be, as Bostrom argues, doom. Consider the case of an AGI — thinking many orders of magnitude faster than we do, let’s not forget — that is asked to use its superior intelligence for the purpose of making humanity happy. So what does it do? Well, it destroys humanity, because people can’t be unhappy if they don’t exist. Start over. You tell it to make humanity happy, but without killing us. So it notices that humans laugh when we’re happy, and hooks up a bunch of electrodes to our faces and diaphragms that make us involuntarily convulse as if we’re laughing. The result is a strange form of hell. Start over, again. You tell it to make us happy without killing us or forcing our muscles to contract. So it implants neural electrodes into the pleasure centers of everyone’s brains, resulting in a global population in such euphoric trances that people can no longer engage in the activities that give life meaning. Start over — once more. This process can go on for hours. At some point it becomes painfully obvious that getting an AGI’s goals to align with ours is going to be a very, very tricky task.
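This “start over” loop is, at bottom, a story about objective misspecification, and it can be caricatured in a few lines of code. Everything below — the action names, the scores, the two objective functions — is an invented toy, not anything from the AI-safety literature; the point is only that a literal maximizer exploits whatever the objective forgets to mention:

```python
# Invented toy outcomes for a "make humanity happy" directive.
outcomes = {
    "improve_wellbeing":  {"happiness": 7, "humans_alive": True},
    "wirehead_everyone":  {"happiness": 9, "humans_alive": True},
    "eliminate_humanity": {"happiness": 0, "humans_alive": False},
}

def score_v1(o):
    # Objective as literally stated: minimize unhappiness.
    # No humans means no unhappiness, so extinction scores perfectly.
    return 0 if not o["humans_alive"] else o["happiness"] - 10

def score_v2(o):
    # First patch: killing everyone is forbidden. The maximizer simply
    # moves to the next loophole the objective failed to rule out.
    if not o["humans_alive"]:
        return float("-inf")
    return o["happiness"] - 10

def best(score):
    # A "literal optimizer": pick whatever maximizes the stated score.
    return max(outcomes, key=lambda a: score(outcomes[a]))

print(best(score_v1))  # eliminate_humanity
print(best(score_v2))  # wirehead_everyone
```

Each patch closes one loophole and the optimizer finds the next, which is exactly the iterated failure the paragraph describes.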

Another famous example that captures this point involves a superintelligence whose sole mission is to manufacture paperclips. This sounds pretty benign, right? How could a “paperclip maximizer” pose an existential threat to humanity? Well, if the goal is to make as many paperclips as possible, then the AGI will need resources to do this. And what are paperclips composed of? Atoms — the very same physical stuff out of which your body is composed. Thus, for the AGI, humanity is nothing more than a vast reservoir of easily accessible atoms, atoms, atoms. As Yudkowsky eloquently puts it, “The [AGI] does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” And just like that, the flesh and bones of human beings are converted into bendable metal for holding short stacks of paper.

At this point, one might think the following, “Wait a second, we’re talking about superintelligence, right? How could a truly superintelligent machine be fixated on something so dumb as creating as many paperclips as possible?” Well, just look around at humanity. By every measure, we are by far the most intelligent creatures on our planetary spaceship. Yet our species is obsessed with goals and values that are, when one takes a step back and peers at the world with “new eyes,” incredibly idiotic, perplexing, harmful, foolish, self-destructive, other-destructive, and just plain weird.

For example, some people care so much about money that they’re willing to ruin friendships, destroy lives and even commit murder or start wars to acquire it. Others are so obsessed with obeying the commandments of ancient “holy texts” that they’re willing to blow themselves up in a market full of non-combatants. Or consider a less explicit goal: sex. Like all animals, humans have an impulse to copulate, and this impulse causes us to behave in certain ways — in some cases, to risk monetary losses and personal embarrassment. The appetite for sex is just there, pushing us toward certain behaviors, and there’s little we can do about the urge itself.

The point is that there’s no strong connection between how intelligent a being is and what its final goals are. As Pinker correctly notes above, intelligence is nothing more than a measure of one’s ability to achieve a particular aim, whatever it happens to be. It follows that any level of intelligence — including superintelligence — can be combined with just about any set of final goals — including goals that strike us as, well, stupid. A superintelligent machine could be no less infatuated with obeying Allah’s divine will, or with conquering countries for oil, than some humans are.

So far, we’ve discussed the thought-speed of machines, the importance of making sure their values align with ours, and the weak connection between intelligence and goals. These considerations alone warrant genuine concern about AGI. But we haven’t yet mentioned the clincher that makes AGI an utterly unique problem unlike anything humanity has ever encountered. To understand this crucial point, consider how the airplane was invented. The first people to keep a powered aircraft airborne were the Wright brothers. On the windy beaches of North Carolina, their first flight kept them off the ground for just 12 seconds. This was a marvelous achievement, but the aircraft was hardly adequate for transporting goods or people from one location to another. So they improved its design, as did a long lineage of subsequent inventors. Airplanes were built with one, two, or three wings, composed of different materials, and eventually the propeller was replaced by the jet engine. One particular design — the Concorde — could even fly faster than the speed of sound, traversing the Atlantic from New York to London in less than 3.5 hours.

The crucial idea here is that the airplane underwent many iterations of innovation. Problems that arose in previous designs were fixed, leading to increasingly safe and reliable aircraft. But this is not the situation we’re likely to be in with AGI. Rather, we’re likely to have one, and only one, chance to get all the problems mentioned above exactly right. Why? Because intelligence is power. We humans are the dominant species on the planet not because of long claws, sharp teeth, or bulky musculature. The key difference between Homo sapiens and the rest of the animal kingdom is our oversized brains, which enable us to manipulate and rearrange the world in incredible ways. It follows that if an AGI were to exceed our level of intelligence, it could potentially dominate not only the biosphere, but humanity as well.

Even more, since creating intelligent machines is an intellectual task, an AGI could attempt to modify its own code, a possibility known as “recursive self-improvement.” The result could be an exponential intelligence explosion that, before anyone has a chance to ask “What the hell is happening?”, yields a super-super-superintelligent AGI — a being that towers over us to the extent that we tower over the lowly cockroach. Whoever creates the first superintelligent computer — whether it’s Google, the U.S. government, the Chinese government, the North Korean government, or a lone hacker in her or his garage — will have to get everything just right the first time. There probably won’t be opportunities for later iterations of innovation to fix any flaws in the original design. When it comes to AGI, the stakes are high.
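The recursive-self-improvement dynamic can be sketched numerically. The parameters below are wholly invented; the sketch only shows that a constant proportional gain per design cycle compounds into exponential growth, which is the qualitative shape of the “intelligence explosion” argument:

```python
def self_improvement(capability=1.0, gain=0.5, cycles=20):
    """Each cycle, the system redesigns itself, multiplying its
    capability by (1 + gain). Purely illustrative numbers."""
    history = [capability]
    for _ in range(cycles):
        capability *= 1 + gain
        history.append(capability)
    return history

trajectory = self_improvement()
# 50% gains per cycle compound to more than 3000x after 20 cycles.
print(round(trajectory[-1], 1))  # 3325.3
```

The argument in the text adds one twist this sketch omits: if `gain` itself grows with capability, the curve is superexponential, and the window for human intervention shrinks even faster.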

It’s increasingly important for the public to understand the nature of thinking machines and why some experts are so worried about them. Without a grasp of these issues, claims like “A paperclip maximizer could destroy humanity!” will sound as apocalyptically absurd as “The Rapture is near! Save your soul while you still can!” Consequently, organizations dedicated to studying AGI safety could get defunded or shut down, and the topic of AGI could become the target of misguided mockery. The fact is that if we manage to create a “friendly” AGI, the benefits to humanity could be vast. But if we fail to get things right on the first go around, the naked ape could very well end up as a huge pile of paperclips.

Phil Torres is the founder of the X-Risks Institute and author of The End: What Science and Religion Tell Us About the Apocalypse. He’s on Twitter @xriskology.

Ex Machina, et al., and the Metaphysics of Computer Consciousness


By Steven Harp

Source: Reality Sandwich

(Ex machina, from the phrase “deus ex machina,” meaning “god from the machine”)

It seems unquestioned in the world today that science is on the verge of creating consciousness with computers. In a Promethean rapture inspired by its enormous technological success, science aspires now to seize control of fundamental powers at the very heart of the universe.

With the advent of modern science the reality of human consciousness has come to be regarded as physical alone.  A caricature of consciousness has been compounded from such disparate elements as digital code, speculative evolutionary psychology, and a “neuro-phrenology” derived from colorized brain imaging. This caricature from scientists and engineers has gone into public circulation with the help of the media and it has become an acceptable counterfeit currency. And with cinematic virtuosity it has been made plausible by representations in the movies.

In the movie Ex Machina, we see another recycling of the classic Frankenstein story: life is created from nonliving materials. A lone genius in an isolated laboratory, using the mysterious powers of science, creates new life. In the original Frankenstein story we have a dead body made alive by electricity. In Ex Machina we have a non-living “wetware” circuit given a mechanical body and made conscious by electricity.

This takes the story to a whole new level. Here the scientist is creating the very roots of being. To create consciousness-itself is equivalent to creating de novo cosmic absolutes such as space, matter, or light. It would be equivalent to creating a spectrum of color, a scale of tones, entire ranges of emotion, thought, pain, pleasure, and the entire dictionary of the contents of consciousness, all from the dark and silent abyss of nothingness.

How can something with neither mass nor dimension arise from that which has mass and dimension? How can that which has subjectivity and intentionality arise from that which has objectivity and has no intentionality?  This is the magisterial conundrum and is recognized as the greatest mystery in science.  No one, neither philosopher nor scientist, has a clue to the answer. It has famously been labeled the “hard problem of consciousness” by David Chalmers.

In both cases we see technology extrapolated to the creation of our most fundamental being, in which man becomes the maker of his most central essence, of what he himself is. The creation becomes the creator, the hand that draws itself.

This year alone has seen 8 major movies featuring synthetic or digital consciousness: Transcendence, Her, Chappie, Ex Machina, Lucy, Extant, Tomorrowland, and Terminator again. One has to ask: is there something more than a good story line here?

The claim that technology will give birth to consciousness itself within a computer rests entirely on implicit assumptions about the nature of consciousness and reality. The often-made assumption that the brain is like a computer and that nerve impulses are like digital code has no direct experimental foundation and is based on superficial resemblances only. There is no real scientific basis for the claim that the digital processing of symbols should somehow be accompanied by inner experience — that is, by consciousness, awareness, qualia, feeling, sentience, and so on.

A computer simulation of brain function is not going to produce consciousness any more than a computer simulation of kidney function is going to produce urine. There is no magic in computation. No amount of digital processing alone is ever going to produce a color. Without consciousness a computer program is a flow of electrons as meaningless and non-referential as those flowing in a wall.

Despite the flagrant and unbridgeable abyss between mind and matter, it is the modern claim that if one can set up the right connections and run some electricity through it, à la Frankenstein, consciousness will arrive on schedule from nonexistence. When undressed from the bewitching technical language, this seems to be science’s equivalent of the Immaculate Conception. Or, in current philosophical language, we would call it the Immaculate Emergence. But perhaps Particle Parthenogenesis would be more accurate.

“We are on the edge of change comparable to the rise of human life on earth.” —Vernor Vinge

For materialists the arrival of artificial intelligence and machine consciousness is inevitable and only a matter of time. We have two main schools of thought on how to meet the coming technological tsunami — those who fear it and those who embrace it. On the fear side we have the notion that we are headed toward a near future in which artificial intelligence or machine consciousness presents a danger to mankind (à la Stephen Hawking, Elon Musk, Nick Bostrom, etc.).

How this danger will manifest is the great unknown. There are countless possibilities. An embryonic AI lurking in the internet could suddenly cross the threshold into self-awareness, seize control of the world’s nuclear arsenals and missiles, and demand surrender. Or a self-aware internet could lie low and send out brain-wave-controlling vibrations through Wi-Fi and the background hum of our electrical circuitry, enslaving humankind in order to advance technology enough to develop the body or bodies that a now-paralyzed internet consciousness requires. This may have already happened.

And for those who embrace the change we have the Kurzweilians’ vision of the outright technological replacement of humanity. This scenario will begin when computers begin to learn and thus redesign themselves. At that point the computer, computer network, or robot would be capable of designing computers or robots better than itself. Repetitions of this cycle would produce an intelligence explosion culminating in a superintelligence that may be beyond human comprehension. This has been called the technological singularity and could begin as early as 2040, although the date keeps getting pushed further into the future.

In this process consciousness will transcend the hazards and horrors of warm-blooded protoplasmic existence. The machine descendants of man will transcend our obsolete and obscene modules of flesh. They shall put away the sweaty, smelly, hairy, warty, fatty, itchy, scarred, flawed, urinating, shitting, hurting, needy, conflicted, misshapen sac of meat and gristle and the gravity-enslaved earthly existence to become ascended silicon masters and rule like gods in a heavenly cyberspace and perhaps even reconfigure the universe itself. “We shall be as gods!” is a not so hidden background thought.

Consciousness will emerge like a butterfly from its earthbound caterpillar stage and fly freely in the new digital noosphere of a virtual reality (à la Kurzweil, Moravec, Fredkin).  The mortal human self will be subsumed like mitochondria in a giant computational eukaryote.  Our evolutionary period will expire like the dinosaurs’ and we will become a symbiont in the superior host technology. We have been upgraded by Google! All hail Google! Superintelligence is all! Praise Intelligence!

For artificial intelligence enthusiasts this will be good news for mankind. Maintaining mortal human flesh is a logistical nightmare. It requires very specific atmospheric conditions, it requires a very limited temperature range, it requires a vast range of chemical and energy inputs, it requires specific social and sexual connections, it even requires entertainment. Not meeting even one of these requirements could result in the entire operating system crashing and all the data being lost (you). Our wetware obviously makes for an inferior product when compared to silicon-based circuitry, which could just as well exist in the vacuum of space with just a single source of electricity.

We shall put aside our earthly raiment of mortal skin and bone and be arrayed in the finest of indestructible metals, plastics, and silicons. We shall be free at last of nature and its inconveniences. All the wealth and riches of the imagination will be at the tip of our cursor. A million movie channels will be available and we will have an unbreakable silicon heart. We can even have our heart amputated like an infected appendix. After all, it is only pixels! It is the next stage of evolution! Rejoice in the wonderful future of technology! Praise Evolution!

The notion that mind can be uploaded into a computer (Transcendence, Her, Chappie), if not completely loco, is radical in the extreme. But given the hubris of technological success and the realism of movie depictions, it has been made believable and in mainstream scientific circles it is near heresy to doubt the materialist premise of consciousness synthesis from raw physical materials. 

However there is a curiosity in the movie, Ex Machina, that perhaps reveals a crack in the technological juggernaut.  In the movie, Nathan, the techno-wizard internet mogul, has just created the most extraordinary technology in the history of science, a technology that would revolutionize the world and beyond. With Promethean daring he has just robbed the very cradle of consciousness and created Ava, a conscious robot that passes every Turing test.  It would seem that he would be in a state of elation and brimming with fulfillment.  Instead he is getting drunk at every opportunity. Alcohol is featured in almost every scene in which he appears.  One must ask the question, what has gone wrong with Nathan?    

Is this just an iniquitous twist of character?  Or could he be plain old lonely? Or is it a metaphysical crisis?  He lives like a hermit in a remote and isolated Northern region, but he has a retinue of very lovely synthetic ladies waiting for him in closets. And he has a beautiful and near mindless female companion and assistant that likes to dance. And then, he has the mysterious and unknown otherness of Ava. That should be adequate companionship.

But he has just synthesized consciousness. He has dramatically and inescapably demonstrated that life and consciousness are merely physical phenomena with no more meaning than electricity passing through a copper wire. He has shown that he himself is not much more than the ionic exchanges occurring across a polarized lipid membrane in a cranial bone flask. And when the switch is turned off he dissolves into nothingness.

Our lone genius clearly has grounds for a metaphysical crisis.  He has experimentally proven a deeper isolation:  That is the isolation that the vision of materialism prescribes for man – as a spark of consciousness in a meaningless void. There is no wider mystery in being alive… he is all there is… a pathetic lonely little god… isolated in time as well as space with a separation that he cannot mitigate, even with the agreeable companionship of his ersatz bitches.

It is more than ironic that our synthesizer of new consciousness is intent on anaesthetizing his own.  But is this not also modern man? Alcohol is the universal drug of the world today. Nathan here is materialist everyman rather than the oversensitive genius. Modern man closes the door on his personal consciousness while aspiring to extend consciousness through external technological means. It seems modern man shares the same metaphysical disturbance as our techno-wizard, Nathan.

The materialist everyman has fixated on a physical literalism that excludes the meaningfulness inherent within every conscious experience. He has radically reduced the ontological range of life. Life has been stripped of inner meaning. He is abandoned to a complete separation and isolation in both time and space.

He has embraced the lawful Stalinesque reality of materialism as a total explanation for consciousness. He has embraced the scientific fundamentalism of consilience. And total explanations produce repressive states, both political and personal. However, modern man, like an eviscerated organism, continues to live… even if only partially.

The Frankenstein of today is more than an out-of-control technology. Our Frankenstein monster is the story that science has authority over all other interpretations of life and has replaced them with a grim and desolate paradigm about the nature of the universe and our place in it. Technology has come to shape the imagery by which the world is depicted and to affirm the underlying metaphysics of materialism. We have shaped our reality and now it shapes us. It is only natural then that Ava, the beautiful and sexy creature in Ex Machina, kills her creator, Nathan. But modern man cannot kill his own soul, so he must anaesthetize it.

But, exercising our imagination, let us suppose that consciousness, rather than being proven physically dependent, is proven physically independent. Materialism, irrespective of its technological successes, would be shown wrong, suggesting that we have been living in the dark ages of a materialist ideology. And it would reveal a present-day metaphysics of consciousness at the heart of a dysfunctional civilization.

Breaking: Moguls Fear AI Apocalypse


By Jacob Silverman

Source: The Baffler

A funny thing happened on the way to the Singularity. In the past few months, some of the tech industry’s most prominent figures (Elon Musk, Bill Gates), as well as at least one associated guru (Stephen Hawking), have publicly worried about the consequences of out-of-control artificial intelligence. They fear nothing less than the annihilation of humanity. Heady stuff, dude.

These pronouncements come meme-ready—apocalyptic, pithy, trading on familiar Skynet references—grade-A ore for the viral mill. The bearers of these messages seem utterly serious, evincing not an inkling of skepticism. “I think we should be very careful about artificial intelligence,” Elon Musk said. “If I had to guess at what our biggest existential threat is, it’s probably that.”

“The development of full artificial intelligence could spell the end of the human race,” said Stephen Hawking, whose speech happens to be aided by a comparatively primitive artificial intelligence.

Gates recently completed the troika, staking out a more circumspect, but still troubled, position. During a Reddit AMA, he wrote: “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

It’s easy to see why these men expressed these fears. For one thing, someone asked them. This is no small distinction. Most people are not, in their daily lives, asked whether they think super-smart computers are going to take over the world and end humanity as we know it. And if they are asked, the questioner is usually not rapt with attention, lingering on every word as if it were gospel.

This may sound pedantic, but the point is that it’s pretty fucking flattering—to one’s ego, to every nerd fantasy one has ever pondered about the end of days—to be asked these questions, knowing that the answer will be immediately converted (perhaps, by a machine!) into headlines shared all over the world. Musk, a particularly skilled player of media hype for vaporous ideas like his Hyperloop, must have been aware of these conditions when he took up the question at an MIT student event in October.

Another reason Silicon Valley has begun spinning up its doomsday machine is that the tech industry, despite its agnostic leanings, has long searched for a kind of theological mantle that it can drape over itself. Hence the popularity of Arthur C. Clarke’s maxim: “Any sufficiently advanced technology is indistinguishable from magic.” Any sufficiently advanced religion needs its eschatological prophecies, and the fear of AI is fundamentally a self-serving one. It implies that the industry’s visionaries might create something so advanced that even they might not be able to control it. It places them at the center of the mechanical universe, where their invention—not God’s, not ExxonMobil’s—threatens the human species.

But AI is also seen as a risk worth taking. Rollo Carpenter, the creator of Cleverbot, an app that learns from its conversations with human beings, told the BBC, “I believe we will remain in charge of the technology for a decently long time and the potential of it to solve many of the world problems will be realised.”

There’s a clever justification embedded in here, the notion that we have to clear the runway for technologies that might solve our problems, but that might also, Icarus-like, become too bold, and lead to disaster. Carpenter’s remarks are, like all of the other ones shared here, conveniently devoid of any concerns about what technologies of automation are already doing to people and economic structures now. For that’s really the fear here, albeit in a far amplified form: that machines will develop capabilities, including a sense of self-direction, that render human beings useless. We will become superfluous machines—which is the same thing as being dead.

For many participants in today’s technologized marketplace, though, this is already the case. They have been replaced by optical character recognition software, which can read documents faster than they can; or by a warehouse robot, which can carry more packages; or by an Uber driver, who doesn’t need a dispatcher and will soon be replaced by a more efficient model — that is, a self-driving car. The people who find themselves here, among the disrupted, have been cast aside by the same forces of technological change that people like Gates and Musk treat as immutable.

Of course, if you really worry about what a business school professor might call AI’s “negative externalities,” then there are all kinds of things you can do — like industry conclaves, mitigation studies, campaigns to open-source and regulate AI technologies. But then you might risk deducing that many of the concerns we express regarding AI — a lack of control, environmental devastation, mindless growth for the sake of growth, the rending of the social and cultural fabric in service of a disinterested higher authority ravenous for ever more information and power — are already happening.

Take a look out the window at Miami’s flooded downtown, the e-waste landfills of Ghana, or the fetid dormitories of Foxconn. To misappropriate the prophecy of another technological sage: the post-human dystopia is already here; it’s just not evenly distributed yet.

Jacob Silverman’s book, Terms of Service: Social Media and the Price of Constant Connection, will be published in March.