After the Crash

Dispatches From a Long Recovery (Est. 10/2024)

Fear our new robot overlords: This is why you need to take artificial intelligence seriously

Killer computers determined to wipe us out? Nope. Forget “Terminator” — there’s something more specific to worry about

By Phil Torres

Source: Salon

There are a lot of major problems today with tangible, real-world consequences. A short list might include terrorism, U.S.-Russian relations, climate change and biodiversity loss, income inequality, health care, childhood poverty, and the homegrown threat of authoritarian populism, most notably associated with the presumptive nominee for the Republican Party, Donald Trump.

Yet if you’ve been paying attention to the news for the past several years, you’ve almost certainly seen articles from a wide range of news outlets about the looming danger of artificial general intelligence, or “AGI.” For example, Stephen Hawking has repeatedly warned that “the development of full artificial intelligence could spell the end of the human race,” and Elon Musk — of Tesla and SpaceX fame — has described the creation of superintelligence as “summoning the demon.” Furthermore, the Oxford philosopher and director of the Future of Humanity Institute, Nick Bostrom, published a New York Times best-selling book in 2014 called Superintelligence, in which he suggests that the “default outcome” of building a superintelligent machine will be “doom.”

What’s with all this fear-mongering? Should we really be worried about a takeover by killer computers hell-bent on the total destruction of Homo sapiens? The first thing to recognize is that a Terminator-style war against armies of humanoid robots is not what the experts are anxious about. Rather, the scenarios that keep these individuals awake at night are far more catastrophic. This may be difficult to believe but, as I’ve written elsewhere, sometimes truth is stranger than science fiction. Indeed, given that the issue of AGI isn’t going anywhere anytime soon, it’s increasingly important for the public to understand exactly why the experts are nervous about superintelligent machines. As the Future of Life Institute recently pointed out, there’s a lot of bad journalism about AGI out there. This is a chance to correct the record.

Toward this goal, step one is to realize that your brain is an information-processing device. In fact, many philosophers talk about the brain as the hardware — or rather, the “wetware” — of the mind, and the mind as the software of the brain. Directly behind your eyes is a high-powered computer that weighs about three pounds and has roughly the same consistency as Jell-O. It’s also the most complex object in the known universe. Nonetheless, the rate at which it’s able to process information is much, much slower than the information-processing speed of an actual computer. The reason is that computers shuttle electrical signals through their circuits at close to the speed of light, whereas the fastest signals in your brain travel at around 100 meters per second. Fast, to be sure, but not nearly as fast as light.

Consequently, an AGI could think about the world at speeds many orders of magnitude faster than our brains can. From the AGI’s point of view, the outside world — including people — would move so slowly that everything would appear almost frozen. As the theorist Eliezer Yudkowsky calculates, for a computer running a million times faster than our puny brains, “a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours.”
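For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The million-fold thinking-speed ratio is Yudkowsky's illustrative assumption, and the signal speeds are round figures chosen for illustration, not measurements of any particular machine or brain.

```python
# Back-of-the-envelope check of the figures quoted above.
# The million-fold speedup is an illustrative assumption, not a measurement.
SPEED_OF_LIGHT = 3.0e8   # metres per second (electronic signals approach this)
NEURAL_SIGNAL = 100.0    # metres per second, roughly the fastest nerve fibres
SPEEDUP = 1_000_000      # hypothetical AGI-to-human thinking-speed ratio

print(f"Raw signal-speed ratio: ~{SPEED_OF_LIGHT / NEURAL_SIGNAL:,.0f}x")

seconds_per_year = 365 * 24 * 60 * 60                    # 31,536,000 seconds
subjective_year = seconds_per_year / SPEEDUP             # ~31.5 wall-clock seconds
subjective_millennium = subjective_year * 1000 / 3600    # ~8.8 wall-clock hours

print(f"One subjective year passes in ~{subjective_year:.1f} seconds of real time")
print(f"One subjective millennium passes in ~{subjective_millennium:.1f} hours of real time")
```

The printed figures roughly reproduce the "31 seconds" and "eight-and-a-half hours" in the quote.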

Already, then, an AGI would have a huge advantage. Imagine yourself in a competition against a machine that has a whole year to work through a cognitive puzzle for every 31 seconds that you spend trying to think up a solution. The mental advantage of the AGI would be truly profound. Even a large team of humans working together would be no match for a single AGI with so much time on its hands. Now imagine that we’re not in a puzzle-solving competition with an AGI but in a life-and-death situation in which the AGI wants to destroy humanity. While we struggle to come up with strategies for keeping it contained, it would have ample time to devise a diabolical scheme to exploit any technology within its electronic reach toward that end.

But a diabolical AGI isn’t — once again — what many experts are actually worried about. This is a crucial point that the Harvard psychologist Steven Pinker misses in a comment about AGI for the website Edge.org. To quote Pinker at length:

“The other problem with AGI dystopias is that they project a parochial alpha-male psychology onto the concept of intelligence. Even if we did have superhumanly intelligent robots, why would they want to depose their masters, massacre bystanders, or take over the world? Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something. History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems.” Pinker then concludes with, “It’s telling that many of our techno-prophets can’t entertain the possibility that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no burning desire to annihilate innocents or dominate the civilization.”

Unfortunately, such criticism misunderstands the danger. While it’s conceptually possible that an AGI really does have malevolent goals — for example, someone could intentionally design an AGI to be malicious — the more likely scenario is one in which the AGI kills us because doing so happens to be useful. By analogy, when a developer wants to build a house, does he or she consider the plants, insects, and other critters that happen to live on the plot of land? No. Their death is merely incidental to a goal that has nothing to do with them. Or consider the opening scenes of The Hitchhiker’s Guide to the Galaxy, in which “bureaucratic” aliens schedule Earth for demolition to make way for a “hyperspatial express route” — basically, a highway. In this case, the aliens aren’t compelled to destroy us out of hatred. We just happen to be in the way.

The point is that what most theorists are worried about is an AGI whose values — or final goals — don’t fully align with ours. This may not sound too bad, but a bit of reflection shows that if an AGI’s values fail to align with ours in even the slightest ways, the outcome could very well be, as Bostrom argues, doom. Consider the case of an AGI — thinking at the speed of light, let’s not forget — that is asked to use its superior intelligence for the purpose of making humanity happy. So what does it do? Well, it destroys humanity: if “make humanity happy” is read as “eliminate human unhappiness,” then killing everyone satisfies the goal perfectly, because people can’t be sad if they don’t exist. Start over. You tell it to make humanity happy, but without killing us. So it notices that humans laugh when we’re happy, and hooks up a bunch of electrodes to our faces and diaphragms that make us involuntarily convulse as if we’re laughing. The result is a strange form of hell. Start over, again. You tell it to make us happy without killing us or forcing our muscles to contract. So it implants neural electrodes into the pleasure centers of everyone’s brains, resulting in a global population in such euphoric trances that people can no longer engage in the activities that give life meaning. Start over — once more. This process can go on for hours. At some point it becomes painfully obvious that getting an AGI’s goals to align with ours is going to be a very, very tricky task.

Another famous example that captures this point involves a superintelligence whose sole mission is to manufacture paperclips. This sounds pretty benign, right? How could a “paperclip maximizer” pose an existential threat to humanity? Well, if the goal is to make as many paperclips as possible, then the AGI will need resources to do this. And what are paperclips composed of? Atoms — the very same physical stuff out of which your body is composed. Thus, for the AGI, humanity is nothing more than a vast reservoir of easily accessible atoms, atoms, atoms. As Yudkowsky eloquently puts it, “The [AGI] does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” And just like that, the flesh and bones of human beings are converted into bendable metal for holding short stacks of paper.

At this point, one might think the following: “Wait a second, we’re talking about superintelligence, right? How could a truly superintelligent machine be fixated on something so dumb as creating as many paperclips as possible?” Well, just look around at humanity. By every measure, we are by far the most intelligent creatures on our planetary spaceship. Yet our species is obsessed with goals and values that are, when one takes a step back and peers at the world with “new eyes,” incredibly idiotic, perplexing, harmful, foolish, self-destructive, other-destructive, and just plain weird.

For example, some people care so much about money that they’re willing to ruin friendships, destroy lives and even commit murder or start wars to acquire it. Others are so obsessed with obeying the commandments of ancient “holy texts” that they’re willing to blow themselves up in a market full of non-combatants. Or consider a less explicit goal: sex. Like all animals, humans have an impulse to copulate, and this impulse causes us to behave in certain ways — in some cases, to risk monetary losses and personal embarrassment. The appetite for sex is just there, pushing us toward certain behaviors, and there’s little we can do about the urge itself.

The point is that there’s no strong connection between how intelligent a being is and what its final goals are. As Pinker correctly notes above, intelligence is nothing more than a measure of one’s ability to achieve a particular aim, whatever it happens to be. It follows that any level of intelligence — including superintelligence — can be combined with just about any set of final goals — including goals that strike us as, well, stupid. A superintelligent machine could be no less infatuated with obeying Allah’s divine will or conquering countries for oil than some humans are.

So far, we’ve discussed the thought-speed of machines, the importance of making sure their values align with ours, and the weak connection between intelligence and goals. These considerations alone warrant genuine concern about AGI. But we haven’t yet mentioned the clincher that makes AGI an utterly unique problem unlike anything humanity has ever encountered. To understand this crucial point, consider how the airplane was invented. The first people to keep a powered aircraft airborne were the Wright brothers. On the windy beaches of North Carolina, their first flight kept them off the ground for a mere 12 seconds. This was a marvelous achievement, but the aircraft was hardly adequate for transporting goods or people from one location to another. So, they improved its design, as did a long lineage of subsequent inventors. Airplanes were built with one, two, or three wings, composed of different materials, and eventually the propeller was replaced by the jet engine. One particular design — the Concorde — could even fly faster than the speed of sound, traversing the Atlantic from New York to London in less than 3.5 hours.

The crucial idea here is that the airplane underwent many iterations of innovation. Flaws that arose in previous designs were corrected, leading to increasingly safe and reliable aircraft. But this is not the situation we’re likely to be in with AGI. Rather, we’re likely to have one, and only one, chance to get all the problems mentioned above exactly right. Why? Because intelligence is power. We humans, for example, are the dominant species on the planet even though we lack long claws, sharp teeth and bulky musculature. The key difference between Homo sapiens and the rest of the Animal Kingdom concerns our oversized brains, which enable us to manipulate and rearrange the world in incredible ways. It follows that if an AGI were to exceed our level of intelligence, it could potentially dominate not only the biosphere, but humanity as well.

Even more, since creating intelligent machines is an intellectual task, an AGI could attempt to modify its own code, a possibility known as “recursive self-improvement.” The result could be an exponential intelligence explosion that, before one has a chance to say “What the hell is happening?,” yields a super-super-superintelligent AGI, or a being that towers over us to the extent that we tower over the lowly cockroach. Whoever creates the first superintelligent computer — whether it’s Google, the U.S. government, the Chinese government, the North Korean government, or a lone hacker in her or his garage — will have to get everything just right the first time. There probably won’t be opportunities for later iterations of innovation to fix whatever flaws the original design turns out to have. When it comes to AGI, the stakes are high.
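To make the feedback loop concrete, here is a deliberately crude toy model in Python. Every number in it (the capability gain per redesign, the effect of capability on redesign time) is invented purely for illustration; nothing about real AGI timelines follows from it.

```python
# A toy model of recursive self-improvement as a feedback loop.
# All constants are made up for illustration; this is not a forecast.
capability = 1.0     # arbitrary units of problem-solving ability
cycle_time = 12.0    # months needed for the first self-redesign
elapsed = 0.0

for cycle in range(1, 9):
    elapsed += cycle_time
    capability *= 1.5                  # assumed gain per redesign
    cycle_time /= capability ** 0.5    # a smarter system redesigns itself faster
    print(f"cycle {cycle}: {capability:5.1f}x capability after {elapsed:5.1f} months")
```

Each pass through the loop produces the same multiplicative gain, but the time between passes keeps shrinking, which is why the curve bends sharply upward instead of growing at a steady pace.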

It’s increasingly important for the public to understand the nature of thinking machines and why some experts are so worried about them. Without a grasp of these issues, claims like “A paperclip maximizer could destroy humanity!” will sound as apocalyptically absurd as “The Rapture is near! Save your soul while you still can!” Consequently, organizations dedicated to studying AGI safety could get defunded or shut down, and the topic of AGI could become the target of misguided mockery. The fact is that if we manage to create a “friendly” AGI, the benefits to humanity could be vast. But if we fail to get things right on the first go around, the naked ape could very well end up as a huge pile of paperclips.

Phil Torres is the founder of the X-Risks Institute and author of The End: What Science and Religion Tell Us About the Apocalypse. He’s on Twitter @xriskology.

Skynet Ascendant

By Cory Doctorow

Source: Locus Online

As I’ve written here before, science fiction is terrible at predicting the future, but it’s great at predicting the present. SF writers imagine all the futures they can, and these futures are processed by a huge, dynamic system consisting of editors, booksellers, and readers. The futures that attain popular and commercial success tell us what fears and aspirations for technology and society are bubbling in our collective imaginations.

When you read an era’s popular SF, you don’t learn much about the future, but you sure learn a lot about the past. Fright and hope are the inner and outer boundaries of our imagination, and the stories that appeal to either are the parameters of an era’s political reality.

Pay close attention to the impossibilities. When we find ourselves fascinated by faster than light travel, consciousness uploading, or the silly business from The Matrix of AIs using human beings as batteries, there’s something there that’s chiming with our lived experience of technology and social change.

Postwar SF featured mass-scale, state-level projects, a kind of science fictional New Deal. Americans and their imperial rivals built cities in space, hung skyhooks in orbit, even made Dyson Spheres that treated all the Solar System’s matter as the raw material for a new, human-optimized megaplanet/space-station that would harvest every photon put out by our sun and put it to work for the human race.

Meanwhile, the people buying these books were living in an era of rapid economic growth, and even more importantly, the fruits of that economic growth were distributed to the middle class as well as to society’s richest. This was thanks to nearly unprecedented policies that protected tenants at the expense of landlords, workers at the expense of employers, and buyers at the expense of sellers. How those policies came to be enacted is a question of great interest today, even as most of them have been sunsetted by successive governments across the developed world.

Thomas Piketty’s data-driven economics bestseller Capital in the Twenty-First Century argues that the vast capital destruction of the two World Wars (and the chaos of the interwar years) weakened the grip of the wealthy on the governments of the world’s developed states. The arguments in favor of workplace safety laws, taxes on capital gains, and other policies that undermined the wealthy and benefited the middle class were not new. What was new was the political possibility of these ideas.

As developed nations’ middle classes grew, so did their material wealth, political influence, and expectations that governments would undertake ambitious projects like interstate highways and massive civil engineering works. These were politically popular – because lawmakers could use them to secure pork for their voters – and also lucrative for government contractors, making ‘‘Big Government’’ a rare point of agreement between the rich and middle-income earners.

(A note on poor people: Piketty’s data suggests that the share of the national wealth controlled by the bottom 50% has not changed much for several centuries – eras of prosperity are mostly about redistributing from the top 10-20% to the next 30-40%.)

Piketty hypothesizes that the returns on investment are usually greater than the rate of growth in an economy. The best way to get rich is to start with a bunch of money that you turn over to professional managers to invest for you – all things being equal, this will make you richer than you could get by inventing something everyone uses and loves. For example, Piketty contrasts Bill Gates’s fortunes as the founder of Microsoft, once the most profitable company in the world, with Gates’s fortunes as an investor after his retirement from the business. Gates-the-founder made a lot less by creating one of the most successful and profitable products in history than he did when he gave up making stuff and started owning stuff for a living.
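The arithmetic behind that claim is simple compounding. Here is a minimal sketch in Python; the 5 percent return and 1.5 percent growth rate are round numbers chosen for illustration, not figures taken from Piketty's book.

```python
# Compounding illustration of "r > g": capital returns outpace economic growth.
# The rates below are hypothetical round numbers, not Piketty's data.
r = 0.05    # assumed average annual return on invested capital
g = 0.015   # assumed average annual growth of the economy (and of wages)

fortune = 1.0   # an invested fortune, in arbitrary units
economy = 1.0   # total economic output, same units

for year in range(30):
    fortune *= 1 + r
    economy *= 1 + g

print(f"After 30 years: fortune x{fortune:.1f}, economy x{economy:.1f}")
# => roughly x4.3 versus x1.6, so the investor's slice of total wealth keeps growing.
```

Under those assumed rates, a fortune that merely sits with professional managers grows several times faster than the economy around it, which is the mechanism the Gates comparison is meant to dramatize.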

By the early 1980s, the share of wealth controlled by the top decile tipped over to the point where its owners could make their political will felt again – again, Piketty supports this with data showing that nations elect seriously investor-friendly/worker-unfriendly governments when investors gain control over a critical percentage of the national wealth. Leaders like Reagan, Thatcher, Pinochet, and Mulroney enacted legislative reforms that reversed the post-war trend, dismantling the rules that had given skilled workers an edge over their employers – and the investors the employers served.

The greed-is-good era was also the cyberpunk era of literary globalized corporate dystopias. Even though Neuromancer and Mirrorshades predated the anti-WTO protests by a decade and a half, they painted similar pictures. Educated, skilled people – people who comprised the mass of SF buyers – became a semi-disposable underclass in a world where the hyperrich had literally ascended to the heavens, living in orbital luxury hotels and harvesting wealth from the bulk of humanity like whales straining krill.

Seen in this light, the vicious literary feuds between the cyberpunks and the old guard of space-colonizing stellar-engineer writers were a struggle over our political imagination. If we crank the state’s dials all the way over to the right, favoring the industrialist ‘‘job creators’’ to the exclusion of others, will we find our way to the stars by way of trickle-down, or will the overclass graft their way into a decadent New Old Rome, where reality TV and hedge fund raids consume the attention and work we once devoted to exploring our solar system?

Today, wealth disparity consumes the popular imagination and political debates. The front-running science fictional impossibility of the unequal age is rampant artificial intelligence. There were a lot of SF movies produced in the mid-eighties, but few retain the currency of the Terminator and its humanity-annihilating AI, Skynet. Everyone seems to thrum when that chord is plucked – even the NSA named one of its illegal mass surveillance programs SKYNET.

It’s been nearly 15 years since the Matrix movies debuted, but the Red Pill/Blue Pill business still gets a lot of play, and young adults who were small children when Neo fought the AIs know exactly what we mean when we talk about the Matrix.

Stephen Hawking, Elon Musk, and other luminaries have issued panicked warnings about the coming age of humanity-hating computerized overlords. We dote on the party tricks of modern AIs, sending half-admiring/half-dreading laurels to the Watson team when it manages to win at Jeopardy or random-walk its way into a new recipe.

The fear of AIs is way out of proportion to their performance. The Big Data-trawling systems that are supposed to find terrorists or figure out what ads to show you have been a consistent flop. Facebook’s new growth model is sending a lot of Web traffic to businesses whose Facebook followers are increasing, waiting for them to shift their major commercial strategies over to Facebook marketing, then turning off the traffic and demanding recurring payments to send it back – a far cry from using all the facts of your life to figure out that you’re about to buy a car before even you know it.

Google’s self-driving cars can only operate on roads that humans have mapped by hand, manually marking every piece of street-furniture. The NSA can’t point to a single terrorist plot that mass-surveillance has disrupted. Ad personalization sucks so hard you can hear it from orbit.

We don’t need artificial intelligences that think like us, after all. We have a lot of human cognition lying around, going spare – so much that we have to create listicles and other cognitive busy-work to absorb it. An AI that thinks like a human is a redundant vanity project – a thinking version of the ornithopter, a useless mechanical novelty that flies like a bird.

We need machines that don’t fly like birds. We need AI that thinks unlike humans. For example, we need AIs that can be vigilant for bomb-parts on airport X-rays. Humans literally can’t do this. If you spend all day looking for bomb-parts but finding water bottles, your brain will rewire your neurons to look for water bottles. You can’t get good at something you never do.

What does the fear of futuristic AI tell us about the parameters of our present-day fears and hopes?

I think it’s corporations.

We haven’t made Skynet, but we have made these autonomous, transhuman, transnational technologies whose bodies are distributed throughout our physical and economic reality. The Internet of Things version of the razorblade business model (sell cheap handles, use them to lock people into buying expensive blades) means that the products we buy treat us as adversaries, checking to see if we’re breaking the business logic of their makers and self-destructing if they sense tampering.

Corporations run on a form of code – financial regulation and accounting practices – and the modern version of this code literally prohibits corporations from treating human beings with empathy. The principle of fiduciary duty to investors means that where there is a chance to make an investor richer while making a worker or customer miserable, management is obliged to side with the investor, so long as the misery doesn’t backfire so much that it harms the investor’s quarterly return.

We humans are the inconvenient gut-flora of the corporation. They aren’t hostile to us. They aren’t sympathetic to us. Just as every human carries more non-human cells in her gut than human cells in the rest of her body, every corporation is made up of many separate living creatures that it relies upon for its survival, but which are fundamentally interchangeable and disposable for its purposes. Just as you view stray gut-flora that attacks you as a pathogen and fight it off with antibiotics, corporations attack their human adversaries with an impersonal viciousness that is all the more terrifying for its lack of any emotional heat.

The age of automation gave us stories like Chaplin’s Modern Times, and the age of multinational hedge-fund capitalism made The Matrix into an enduring parable. We’ve gone from being cogs to being a reproductive agar within which new corporations can breed. As Mitt Romney reminded us, ‘‘Corporations are people.’’
