Make Way for the Killer Robots: The Government Is Expanding Its Power to Kill

By John & Nisha Whitehead

Source: The Rutherford Institute

“Crush! Kill! Destroy!”—The Robot, Lost in Space

The purpose of a good government is to protect the lives and liberties of its people.

Unfortunately, we have gone so far in the opposite direction from the ideals of a good government that it’s hard to see how this trainwreck can be redeemed.

It gets worse by the day.

For instance, despite an outcry by civil liberties groups and concerned citizens alike, in an 8-3 vote on Nov. 29, 2022, the San Francisco Board of Supervisors approved a proposal to allow police to arm robots with deadly weapons for use in emergency situations.

This is how the slippery slope begins.

According to the San Francisco Police Department’s draft policy, “Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD.”

Yet as investigative journalist Sam Biddle points out, this is “what nearly every security agency says when it asks the public to trust it with an alarming new power: We’ll only use it in emergencies—but we get to decide what’s an emergency.”

A last-minute amendment to the SFPD policy limits the authority to deploy robots as a deadly force option to high-ranking officers, and only after they have used alternative force or de-escalation tactics, or concluded that those alternatives would not subdue the suspect.

In other words, police now have the power to kill with impunity using remote-controlled robots.

These robots, often acquired by local police departments through federal grants and military surplus programs, signal a tipping point in the final shift from a Mayberry style of community policing to a technologically driven version of law enforcement dominated by artificial intelligence, surveillance, and militarization.

It’s only a matter of time before these killer robots intended for use as a last resort become as common as SWAT teams.

Frequently justified as vital tools necessary to combat terrorism and deal with rare but extremely dangerous criminal situations, such as those involving hostages, SWAT teams—which first appeared on the scene in California in the 1960s—have now become intrinsic parts of local law enforcement operations, thanks in large part to substantial federal assistance and the Pentagon’s military surplus recycling program, which allows the transfer of military equipment, weapons and training to local police for free or at sharp discounts.

Consider this: In 1980, there were roughly 3,000 SWAT team-style raids in the U.S. By 2014, that number had grown to more than 80,000 SWAT team raids per year.

Given the widespread use of these SWAT teams and the eagerness with which police agencies have embraced them, it’s likely those raids number upwards of 120,000 by now.

There are few communities without a SWAT team today.

No longer reserved exclusively for deadly situations, SWAT teams are now increasingly deployed for relatively routine police matters, with some SWAT teams being sent out as much as five times a day. In the state of Maryland alone, 92 percent of 8,200 SWAT missions were used to execute search or arrest warrants.

For example, police in both Baltimore and Dallas have used SWAT teams to bust up poker games. A Connecticut SWAT team swarmed a bar suspected of serving alcohol to underage individuals. In Arizona, a SWAT team was used to break up an alleged cockfighting ring. An Atlanta SWAT team raided a music studio, allegedly out of a concern that it might have been involved in illegal music piracy.

A Minnesota SWAT team raided the wrong house in the middle of the night, handcuffed the three young children, held the mother on the floor at gunpoint, shot the family dog, and then “forced the handcuffed children to sit next to the carcass of their dead and bloody pet for more than an hour” while they searched the home.

A California SWAT team drove an armored Lenco Bearcat into Roger Serrato’s yard, surrounded his home with paramilitary troops wearing face masks, threw a fire-starting flashbang grenade into the house, and then, when Serrato appeared at a window, unarmed and wearing only his shorts, held him at bay with rifles. Serrato died of asphyxiation from being trapped in the flame-filled house. Incredibly, the father of four had done nothing wrong. The SWAT team had misidentified him as someone involved in a shooting.

These incidents are just the tip of the iceberg.

Nationwide, SWAT teams have been employed to address an astonishingly trivial array of nonviolent criminal activity or mere community nuisances: angry dogs, domestic disputes, improper paperwork filed by an orchid farmer, and misdemeanor marijuana possession, to give a brief sampling.

If these raids are becoming increasingly common and widespread, you can chalk it up to the “make-work” philosophy, by which police justify the acquisition of sophisticated military equipment and weapons and then rationalize their frequent use.

Mind you, SWAT teams originated as specialized units that were supposed to be dedicated to defusing extremely sensitive, dangerous situations (that language is almost identical to the language being used to rationalize adding armed robots to local police agencies). They were never meant to be used for routine police work such as serving a warrant.

As the role of paramilitary forces has expanded, however, to include involvement in nondescript police work targeting nonviolent suspects, the mere presence of SWAT units has actually injected a level of danger and violence into police-citizen interactions that was not present as long as these interactions were handled by traditional civilian officers. 

Indeed, a study by Princeton University concludes that militarizing police and SWAT teams “provide no detectable benefits in terms of officer safety or violent crime reduction.” The study, the first systematic analysis on the use and consequences of militarized force, reveals that “police militarization neither reduces rates of violent crime nor changes the number of officers assaulted or killed.”

In other words, warrior cops aren’t making us or themselves any safer.

Americans are now eight times more likely to die in a police confrontation than they are to be killed by a terrorist.

The problem, as one reporter rightly concluded, is “not that life has gotten that much more dangerous, it’s that authorities have chosen to respond to even innocent situations as if they were in a warzone.”

Now add killer robots into that scenario.

How long before these armed, militarized robots, authorized to use lethal force against American citizens, become as commonplace as SWAT teams and just as deadly?

Likewise, how long before mistakes are made, technology gets hacked or goes haywire, robots are deployed based on false or erroneous information, and innocent individuals get killed in the line of fire?

And who will shoulder the blame and the liability for rogue killer robots? Given the government’s track record when it comes to sidestepping accountability for official misconduct through the use of qualified immunity, it’s completely feasible that they’d get a free pass here, too.

In the absence of any federal regulations or guidelines to protect Americans against what could eventually become autonomous robotic SWAT teams equipped with artificial intelligence, surveillance and lethal weapons, “we the people” are left defenseless.

We’re gaining ground fast on the kind of autonomous, robotic assassins that Terminator envisioned would be deployed by 2029.

If these killer robots follow the same trajectory as militarized weapons, which, having been deployed to local police agencies as part of the Pentagon’s 1033 recycling program, are turning America into a battlefield, it’s just a matter of time before they become the first line of defense in interactions between police and members of the public.

Some within the robotics industry have warned against weaponizing general-purpose robots, which could be used “to invade civil rights or to threaten, harm, or intimidate others.”

Yet it may already be too late for that.

As Sam Biddle writes for The Intercept, “As with any high-tech toy, the temptation to use advanced technology may surpass whatever institutional guardrails the police have in place.”

There are thousands of police robots across the country, and those numbers are growing exponentially. It won’t take much in the way of weaponry and programming to convert these robots to killer robots, and it’s coming.

The first time police used a robot as a lethal weapon was in 2016, when Dallas police deployed a bomb-disposal robot armed with an explosive device to kill a sniper who had shot and killed five police officers.

This scenario has been repeatedly trotted out by police forces eager to add killer robots to their arsenal of deadly weapons. Yet as Paul Scharre, author of Army of None: Autonomous Weapons and the Future of War, recognizes, presenting a scenario in which the only two options are to use a robot for deadly force or put law enforcement officers at risk sets up a false choice that rules out any consideration of non-lethal options.

As Biddle concludes:

“Once a technology is feasible and permitted, it tends to linger. Just as drones, mine-proof trucks, and Stingray devices drifted from Middle Eastern battlefields to American towns, critics note that … police’s claims that lethal robots would only be used in one-in-a-million public emergencies aren’t borne out by history. The recent past is littered with instances of technologies originally intended for warfare mustered instead against, say, constitutionally protected speech, as happened frequently during the George Floyd protests.”

This gradual dismantling of cultural, legal and political resistance to what was once considered unthinkable is what Liz O’Sullivan, a member of the International Committee for Robot Arms Control, refers to as “a well-executed playbook to normalize militarization.”

It’s the boiling frog analogy all over again, and yet there’s more at play than just militarization or suppressing dissent.

There’s a philosophical underpinning to this debate over killer robots that we can’t afford to overlook, and that is the government’s expansion of its power to kill the citizenry.

Although the government was established to protect the inalienable rights to life, liberty and the pursuit of happiness of the American people, the Deep State has been working hard to strip us of any claims to life and liberty, while trying to persuade us that happiness can be found in vapid pursuits, entertainment spectacles and political circuses.

Having claimed the power to kill through the use of militarized police who shoot first and ask questions later, SWAT team raids, no-knock raids, capital punishment, targeted drone attacks, grisly secret experiments on prisoners and unsuspecting communities, weapons of mass destruction, endless wars, etc., the government has come to view “we the people” as collateral damage in its pursuit of absolute power.

As I make clear in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, we are at a dangerous crossroads.

Not only are our lives in danger. Our very humanity is at stake.

Automatons – Life Inside the Unreal Machine

By Kingsley L. Dennis

Source: Waking Times

/ɔːˈtɒmət(ə)n/

noun

a moving mechanical device made in imitation of a human being.

a machine which performs a range of functions according to a predetermined set of coded instructions.

used in similes and comparisons to refer to a person who seems to act in a mechanical or unemotional way.

“Don’t you wish you were free, Lenina?”

“I don’t know what you mean. I am free. Free to have the most wonderful time. Everybody’s happy nowadays.”

He laughed. “Yes, ‘Everybody’s happy nowadays.’ We have been giving the children that at five. But wouldn’t you like to be free to be happy in some other way, Lenina? In your own way, for example; not in everybody else’s way.”

“I don’t know what you mean,” she repeated.

Aldous Huxley, Brave New World

Are we turning into a mass of unaware sleepwalkers? Our eyes are seemingly open, and yet we are living as if asleep, and the dream becomes our waking lives. It seems that more and more people, in the highly technologized nations at least, are in danger of succumbing to the epidemic of uniformity. People follow cycles of fashion and wear stupid clothes when they think it is the ‘in thing’; and hyper-budget films take marketing to a whole new level, forcing parents to rush out to buy the merchandise because their kids are screaming for it. And if one child in the class doesn’t have the latest toy like all their classmates, then they are ostracized for this lack. Which means that poor mummy and daddy have to make sure they get their hands on these gadgets. Put the two items together – zombies and uniformity – and what do you get? Welcome to the phenomenon of Black Fridays, which have become the latest manifestation of national Zombie Days.

Unless you’ve been living in a cave somewhere (or living a normal, peaceful existence) then you will know what this event is – but let me remind you anyway of what a Black Friday is. It is a day when members of the public are infected with the ‘must buy’ and ‘act like an idiot’ virus that turns them into screaming, raging hordes banging on the doors of hyper-market retailers hours before they open. Many of these hordes sleep outside all night to get early entry. Then, when the doors are finally opened, they go rushing in, fighting and screaming, as if re-enacting a scene from Game of Thrones. Those that do survive the fisticuffs come away with trolleys full of boxes too big to carry. This display of cultural psychosis, commonly labeled idiocracy, is also a condition nurtured by societies based on high consumption with even higher inequalities of wealth distribution. In other words, a culture conditioned to commodity accumulation will buy with fervour when things are cheap. This is because, although conditioned to buy, people lack the financial means to satiate this desire. Many people suffer from a condition which psychologists have named ‘miswanting,’ which means that we desire things we don’t like and like things we don’t desire. What this is really saying is that we tend to ‘want badly’ rather than having genuine need. What we are witnessing in these years is an epidemic of idiocracy, and it’s propagating faster than post-war pregnancies. And yet we are programmed by our democratic societies to not think differently. In this respect, many people also suffer from a condition known as ‘confirmation bias.’

Confirmation bias is our conditioned tendency to pick and choose the information that confirms our pre-existing beliefs or ideas. Two people may look at the same evidence and yet interpret it according to how it fits into and validates their own thinking. That’s why so many debates go nowhere: people generally don’t wish to be pried away from ideas they have invested so much time and effort in upholding. It’s too much of a shock to realize that what we thought was true, or valid, is not the case. To lose the safety and security of our ideas would be too much for many people. It is now well understood in psychology that we like to confirm our existing beliefs; after all, it makes us feel right!

Many of our online social media platforms adhere to this principle by picking and choosing those items of news, events, etc. that their algorithms have deemed we are most likely to want to see. As convenient as it may seem, it is unlikely to be in our best interests in the long term. The increasing automation of the world around us is set to establish a new ecology in our hyperreality. We will be forced to acknowledge that algorithms and intelligent software will soon, if they aren’t already, be running nearly everything in our daily lives. Historian Yuval Harari believes that ‘the twenty-first century will be dominated by algorithms. “Algorithm” is arguably the single most important concept in our world. If we want to understand our life and our future, we should make every effort to understand what an algorithm is.’1 Algorithms already follow our shopping habits, recommend products for us, pattern-recognize our online behavior, help us drive our cars, fly our planes, trade our economies, coordinate our public transport, organize our energy distribution, and do a lot, lot more that we are just not really aware of. One of the signs of living in a hyperreality is that we are surrounded by an invisible coded environment, written in languages we don’t understand, making our lives more abstracted from reality.

Modern societies are adapting to universal computing infrastructures that will usher in new arrangements and relations. Of course, these are only the early years, although there is already a lot of uncertainty and unpredictability. As it is said, industrialization didn’t turn us into machines, and automation isn’t going to turn us into automatons. Which is more or less correct; after all, being human is not that simple. Yet new dependencies and relations will form as algorithms continue to create and establish what can be called ‘pervasive assistance.’ Again, it is a question of being alert so that we don’t feel compelled simply to give ourselves over to our algorithms. The last thing we want is a bunch of psychologists earning yet more money from a new disease of ‘algorithmic dependency syndrome’ or something similar.

It needs stating that by automating the world we also run the risk of being distanced from our own responsibilities. And this also implies, importantly, the responsibility we have to ourselves – to transcend our own limitations and to develop our human societies for the better. We should not forget that we are here to mature as a species, and we should not allow the world of automation to distract us from this. Literature and film have already portrayed such possibilities. One example is David Brin’s science-fiction novel Kiln People (2002), which clearly showed how automation may provide a smokescreen for people to disappear behind their surrogate substitutes; the film Surrogates (2009) explores a similar premise.

Algorithms are the new signals that code an unseen territory all around us. In a world of rapidly increasing automation and digital identities we’ll have to keep our wits about us in order to retain what little of our identities we have left. We want to make sure that we don’t get lost in our emoji messages, our smilies of flirtation; or, even worse, lose our lives in the ‘death cult’ of the selfie. Identities by their very nature are constructs; in fact, we can go so far as to call them fake. They are constructed from layers of ongoing conditioning which a person identifies with. This identity functions as a filter to interpret incoming perceptions. The limited degree of perceptions available to us almost guarantees that identities fall into a knowable range of archetypes. We would be wise to remember that who we are is not always the same as what we project. And yet some people on social media are unable to distinguish their public image from their personal identity, which starts to sound a bit scary. Philosopher Jean Baudrillard, never shy about saying what he thought, stated it another way:

We are in a social trance: vacant, withdrawn, lacking meaning in our own eyes. Abstracted, irresponsible, enervated. They have left us the optic nerve, but all the others have been disabled…All that is left is the mental screen of indifference, which matches the technical in-difference of the images.2

Baudrillard would probably be the first to agree that breathing is often a disguise to make us think that someone is alive. After all, don’t we breathe automatically without thinking about it?

We must not make the human spirit obsolete just because our technological elites are dreaming of a trans-human future. Speaking of such futures, inventor and futurist Ray Kurzweil predicts that in the 2030s human brains will be able to connect to the cloud and to use it just like we use cloud computing today. That is, we will be able to transfer emails and photos directly from the cloud to our brain as well as back up our thoughts and memories. How will this futuristic scenario be possible? Well, Kurzweil says that nanobots – tiny robots constructed from DNA strands – will be swimming around in our brains. And the result? According to Kurzweil we’re going to be funnier, sexier, and better at expressing our loving sentiments. Well, that’s okay then – nanobot my brain up! Not only will being connected to the computing cloud make us sexier and funnier humans, it will even take us closer to our gods, says Kurzweil: ‘So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world – it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.’ It’s hard to argue with such a bargain – a few nanobots in our brain to become godlike? I can imagine a lot of people will be signing up for this. There may even be a hefty monthly charge for those wanting more than 15GB of back-up headspace. Personally, I prefer the headspace that’s ad infinitum and priceless. I hope I’m not in the minority.

Looking at the choices on offer so far it seems that there is the zombie option, which comes with add-on idiocracy (basic model), and the trans-human nanobot sexy-god upgrade (pricy). But then let’s not forget that in an automated world it may be the sentient robots that come out on top. Now, that would be an almost perfect demonstration of a simulation reality.

Life in Imitation

There are those who believe that self-awareness is going to be the end game of artificial intelligence – the explosive ‘wow factor’ that really throws everything into high gear. The new trend now is deep machine learning, to the point where machines will program not only themselves but also other machines. Cognitive computer scientists are attempting to recapture the essence of human consciousness in the hope of back-engineering this complexity into machine code. It’s a noble endeavor, if only for their persistence. The concern here is that if machines do finally achieve sentience then the next thing we’ll need to roll out will be machine psychologists. Consciousness, after all, comes at a price. There is no free lunch when it comes to possessing a wide-awake brain. With conscious awareness come responsibilities, such as values, ethics, morality, compassion, forgiveness, empathy, goodness, and good old-fashioned love. And I personally like the love part (gives me a squishy feeling every time).

It may not actually be the sentient robots we need to worry about; it’s the mindless ones we need to be cautious of (of course, we could say the same thing about ourselves). One of the methods used in training such robots is, in the words of their trainers, to provide them with enough ‘intrinsic motivation.’ Not only will this help the robots to learn their environments, it is also hoped that it will foster the attention they need to acquire sufficient situational awareness. If I were to write a science-fiction scenario on this I would make it so that the sentient robots end up being more human than we are, and humans turn into their automated counterparts. Funny, maybe – but more so in the funny-bone-hurting sort of way rather than the laugh-out-loud variety. Or perhaps it’s already been done. It appears that we are attempting to imbue our devices with qualities we are also striving to possess for ourselves. Humans are naturally vulnerable; it is part of our organic make-up. Whatever we create may inherit those vulnerabilities. However, this is not a discussion on the pros and cons of smart machines and artificial intelligence (there are many more qualified discussions on that huge topic).

While we are creating, testing, worrying, or arguing over machines and their like we are taking our attention away from the center – ourselves. The trick to surviving in the ‘unreal machine’ of life is to become more human, the very antithesis of the robotic. Technology can assist us in interacting and participating to a better degree with our environments. The question, as always, is the uses to which such tools are put – and by whom. Such tools can help us realize our dreams, or they can entrap us in theirs. Algorithms, smart machines, intelligent infrastructure, and automated processes: these are all going to come about and be a part of our transforming world. And in many respects, they will make life more comfortable for us. Yet within this comfort zone we still need to strive and seek for our betterment. We should not allow an automated environment to deprive us of our responsibility, and need, to find meaning and significance in our world. Our technologies should force us to acknowledge our human qualities and to uplift them, not turn us into an imitation of them.

Another metaphor for the simulated ‘robotic’ creature is the golem. The golem legend speaks of a creature fashioned from clay, a Cabbalistic motif which has appeared frequently in literary and cinematic form (such as Frankenstein). The Cabbalistic automaton that is the golem, whose name means ‘unformed,’ has often been used to show the struggle between mechanical limitation and human feelings. This struggle depicts the tension that combines cogs and consciousness; the entrapment in matter and the spirit of redemption and liberation. This is a myth that speaks of the hubris in humanity fashioning its own creatures and ‘magically’ bestowing life upon them. It is the act of creating a ‘sacred machine’ from the parts and pieces of a material world and then imbuing them with human traits. And through this human likeness they are required to fulfil human chores and work as slaves. Sound familiar? The Cabbalistic humanoid – the sentient robot – is forever doomed, almost like the divine nature of Man trapped within the confines and limitations of a material reality. They represent the conflict of being torn between a fixed fate and freedom.

Our material reality may be the ultimate unreal machine. We are the cogs, the clay golem, the imperfect creature fashioned by another. Our fears of automation may only be a reflection of our own automation. We struggle to express some form of release whilst unaware that the binds that mechanize us are forever tightening.

We have now shifted through the zombie-idiocracy model (basic), the trans-human nanobot sexy-god model (pricy), to arrive at the realization that it is we – and not our sentient robots – who are likely to be the automatons (tragic). And this is the biblical fall from grace; the disconnection from our god(s). We have come loose from Central Source and we have lost our way.

We are now living in the hyperreal realm where zombies, cyborgs, and golem robots all reside – but it is not the place for the genuine human. Things are going to have to change. Not only do we have to retain our humanity, we also must remain sane. With our continuing modern technologies, our augmented reality and bioengineering, the difference between fiction and reality will blur even further. And this blurring is likely to become more prominent as people increasingly try to reshape reality to fit around their own imaginative fictions. Staying sane, grounded, and balanced is going to be a very, very good option for the days to come.

We are going to be sharing our planetary space with the new smart machines. I am reminded of the Dr. Seuss book Horton Hears a Who! that has the refrain, ‘a person’s a person no matter how small.’ Size doesn’t count – but being human does. And staying human in these years will be the hard task allotted to us.

Luddism and Economic Ideology


Source: the HipCrime Vocab

Smithsonian Magazine has a very good feature on the Luddites, well worth a read. There are many elements you just don’t read about in many economic histories; for example, the 40-hour work week was not brought down from the mountaintop by Moses and inscribed in stone tablets, despite what you may have heard elsewhere:

At the turn of 1800, the textile industry in the United Kingdom was an economic juggernaut that employed the vast majority of workers in the North. Working from home, weavers produced stockings using frames, while cotton-spinners created yarn. “Croppers” would take large sheets of woven wool fabric and trim the rough surface off, making it smooth to the touch.

These workers had great control over when and how they worked—and plenty of leisure. “The year was chequered with holidays, wakes, and fairs; it was not one dull round of labor,” as the stocking-maker William Gardiner noted gaily at the time. Indeed, some “seldom worked more than three days a week.” Not only was the weekend a holiday, but they took Monday off too, celebrating it as a drunken “St. Monday.”

Croppers in particular were a force to be reckoned with. They were well-off—their pay was three times that of stocking-makers—and their work required them to pass heavy cropping tools across the wool, making them muscular, brawny men who were fiercely independent. In the textile world, the croppers were, as one observer noted at the time, “notoriously the least manageable of any persons employed.”

The introduction of machinery in cloth manufacture did not make these people’s lives better. In fact, it made them a lot worse:

“They [the merchant class] were obsessed with keeping their factories going, so they were introducing machines wherever they might help,” says Jenny Uglow, a historian and author of In These Times: Living in Britain Through Napoleon’s Wars, 1793-1815.

The workers were livid. Factory work was miserable, with brutal 14-hour days that left workers—as one doctor noted—“stunted, enfeebled, and depraved.” Stocking-weavers were particularly incensed at the move toward cut-ups. It produced stockings of such low quality that they were “pregnant with the seeds of its own destruction,” as one hosier put it: Pretty soon people wouldn’t buy any stockings if they were this shoddy. Poverty rose as wages plummeted.

Yes, you read that right: the introduction of “labor-saving” technology made the amount these people worked increase dramatically. It also made their work much, much more unpleasant. It transferred control to a smaller circle of wealthy people and took it away from the workers themselves. It made the rich richer, increased poverty, and tore society apart.

But more technology is always good, right?

And since history is written by the victors, “Luddite” is a term now inextricably wound up with the knee-jerk rejection of new technology. But the Luddites weren’t opposed to new technology at all! What they were fighting against was the economic conditions that took away their autonomy and turned them into mendicants in their own country:

The workers tried bargaining. They weren’t opposed to machinery, they said, if the profits from increased productivity were shared. The croppers suggested taxing cloth to make a fund for those unemployed by machines. Others argued that industrialists should introduce machinery more gradually, to allow workers more time to adapt to new trades.

The plight of the unemployed workers even attracted the attention of Charlotte Brontë, who wrote them into her novel Shirley. “The throes of a sort of moral earthquake,” she noted, “were felt heaving under the hills of the northern counties.”

[…]

At heart, the fight was not really about technology. The Luddites were happy to use machinery—indeed, weavers had used smaller frames for decades. What galled them was the new logic of industrial capitalism, where the productivity gains from new technology enriched only the machines’ owners and weren’t shared with the workers.

In fact, the Luddites actually spared the machines that were used by employers who treated workers fairly. Funny how you never hear that in most popular descriptions of the Luddite revolt:

The Luddites were often careful to spare employers who they felt dealt fairly. During one attack, Luddites broke into a house and destroyed four frames—but left two intact after determining that their owner hadn’t lowered wages for his weavers. (Some masters began posting signs on their machines, hoping to avoid destruction: “This Frame Is Making Full Fashioned Work, at the Full Price.”)

Unlike today, labor actually fought back against these attempts to destroy their way of life:

As a form of economic protest, machine-breaking wasn’t new. There were probably 35 examples of it in the previous 100 years, as the author Kirkpatrick Sale found in his seminal history Rebels Against the Future. But the Luddites, well-organized and tactical, brought a ruthless efficiency to the technique: Barely a few days went by without another attack, and they were soon breaking at least 175 machines per month. Within months they had destroyed probably 800, worth £25,000—the equivalent of $1.97 million today.

Rather than the “natural course” of free-market economics, once again it was government intervention, including brutal state violence, that made modern capitalism possible:

Parliament was now fully awakened, and began a ferocious crackdown. In March 1812, politicians passed a law that handed out the death penalty for anyone “destroying or injuring any Stocking or Lace Frames, or other Machines or Engines used in the Framework knitted Manufactory.” Meanwhile, London flooded the Luddite counties with 14,000 soldiers.

By winter of 1812, the government was winning. Informants and sleuthing finally tracked down the identities of a few dozen Luddites. Over a span of 15 months, 24 Luddites were hanged publicly, often after hasty trials, including a 16-year-old who cried out to his mother on the gallows, “thinking that she had the power to save him.” Another two dozen were sent to prison and 51 were sentenced to be shipped off to Australia.

But wait, isn’t capitalism all about “freedom and liberty”? Freedom and liberty for some, I guess.

The problem, then as now, was not technology itself, but the economic relations that it unfolded against. What I found most interesting is that even back then, the emerging pseudoscience of economics was used to justify the harsh treatment of the workers and the bottomless greed of capitalists, in particular the “sacred text” of modern Neoclassical economics, Adam Smith’s The Wealth of Nations:

For the Luddites, “there was the concept of a ‘fair profit,’” says Adrian Randall, the author of Before the Luddites. In the past, the master would take a fair profit, but now he adds, “the industrial capitalist is someone who is seeking more and more of their share of the profit that they’re making.” Workers thought wages should be protected with minimum-wage laws. Industrialists didn’t: They’d been reading up on laissez-faire economic theory in Adam Smith’s The Wealth of Nations, published a few decades earlier.

“The writings of Dr. Adam Smith have altered the opinion, of the polished part of society,” as the author of a minimum wage proposal at the time noted. Now, the wealthy believed that attempting to regulate wages “would be as absurd as an attempt to regulate the winds.”

It seems as though nothing’s really changed. Using economic “science” to justify social inequality and private ownership goes back to the very beginnings of the Market.

When Robots Take All of Our Jobs, Remember the Luddites (Smithsonian Magazine). Smithsonian wrote about this before, see also: What the Luddites Really Fought Against

As the above history shows, there is nothing “natural” or normal about extreme busyness and brutally long working hours. It is entirely an artificial creation:

A nice post at the HBR blog…describes how being busy is now celebrated as a symbol of high status. This is not natural. Marshall Sahlins has shown that in hunter-gatherer societies (which were the human condition for nine-tenths of our existence) people typically worked for only around 20 hours a week. In pre-industrial societies, work was task-oriented; people did as much as necessary and then stopped. Max Weber wrote:

“Man does not ‘by nature’ wish to earn more and more money, but simply to live as he is accustomed to live and to earn as much as is necessary for that purpose. Wherever modern capitalism has begun its work of increasing the productivity of human labour by increasing its intensity, it has encountered the immensely stubborn resistance of this leading trait of pre-capitalistic labour.” (The Protestant Ethic and the Spirit of Capitalism, p. 24)

The backward-bending supply curve of labour was normal.

E.P. Thompson has described how pre-industrial working hours were irregular, with Mondays usually taken as holidays. He, and writers such as Sidney Pollard and Stephen Marglin, have shown how the working day as we know it was imposed by ruthless discipline, reinforced by Christian moralists. (There’s a clue in the title of Weber’s book). Marglin quotes Andrew Ure, author of The Philosophy of Manufactures in 1835:

“The main difficulty [faced by Richard Arkwright] did not, to my apprehension, lie so much in the invention of a proper mechanism for drawing out and twisting cotton into a continuous thread, as in…training human beings to renounce their desultory habits of work and to identify themselves with the unvarying regularity of the complex automaton. To devise and administer a successful code of factory discipline, suited to the necessities of factory diligence, was the Herculean enterprise, the noble achievement of Arkwright…It required, in fact, a man of a Napoleon nerve and ambition to subdue the refractory tempers of workpeople accustomed to irregular paroxysms of diligence.”

Today, though, such external discipline is no longer so necessary because many of us – more so in the UK and US than elsewhere – have internalized the capitalist imperative that we work long hours… which just vindicates a point made by Bertrand Russell back in 1932:

“The conception of duty, speaking historically, has been a means used by the holders of power to induce others to live for the interests of their masters rather than for their own.”

Against busyness (Stumbling and Mumbling)

Honestly, the five-day workweek is outmoded and ridiculous. It’s more of a babysitting operation for adults than anything else. It’s as silly as arguing that we need over two decades of formal education in order to do our jobs.

I was reminded of this over the holidays. In the U.S. we get virtually no time off from our jobs, unlike most other countries (East Asia might be an exception). But Christmas/New Year’s is a rare exception, and we have several four-day weeks in a row (without pay for some of us, of course). Those weeks are so much more pleasant, and I would even say productive, than the rest of the year. Every year at this time I think to myself, “Why isn’t every week a four-day workweek?” Some places do have such an arrangement, but only by packing the week into four long, ten-hour days. I don’t know about you, but towards the end of ten hours in a row of “work” I doubt anyone’s accomplishing much of anything. Is 32 hours a week really not enough to keep society functioning in the twenty-first century?

Not only that, but many people use whatever little vacation they do have in order to take the whole time period at the end of the year off. This is typical in Europe, but rarer here. In any case, while going to work I noticed that there was hardly any traffic. The roads were empty. There were plenty of seats on the bus. The streets and sidewalks were empty. There was no waiting in the restaurants and cafes. There was plenty of room for everything. There was a laid-back feeling everywhere. It was so pleasant. I couldn’t help but think to myself, “why isn’t every week like this?” If more people could stay home and work less, it very well could be. Instead we’re trapped on a treadmill. Working less would actually pay dividends in terms of reduced traffic, less crowding, less pollution, and better health outcomes due to less stress and more time to exercise.

There’s also a simple logic problem at work here. If we say the 40-hour week is inviolable and set-in-stone for the rest of time, and we do not wish to increase the problem of unemployment, then literally no labor-saving technology will ever save labor! We might as well dispense with the creation of any labor-saving technology, since by the above logic, it cannot save labor. You could equivocate and say that it frees us from doing “lower” level work and allows us to do “higher” level work, as when ditch diggers become factory workers, or something. That may have been a valid argument a hundred years ago, but in an age when most of us are low-paid service workers or useless paper-pushers, it’s pretty hard to make that case with any seriousness anymore.

***

I often refer to economics as a religion, with its practitioners as priests. So it’s interesting to read that sentiment in other contexts. This is from Chris Dillow’s blog, from which the above passage about work was taken:

“The social power, i.e. the multiplied productive force”, wrote Marx, appears to people “not as their own united power but as an alien force existing outside them, of the origin and end of which they are ignorant, which they thus cannot control.”

I was reminded of this by a fine passage in The Econocracy in which the authors show that “the economy” in the sense we now know it is a relatively recent invention and that economists claim to be experts capable of understanding this alien force:

“As increasing areas of political and social life are colonized by economic language and logic, the vast majority of citizens face the struggle of making informed democratic choices in a language they have never been taught. (p19)”

This leads to the sort of alienation which Marx described. This is summed up by respondents to a YouGov survey cited by Earle, Moran and Ward-Perkins, who said: “Economics is out of my hands so there is no point discussing it.”

In one important sense such an attitude is absurd. Every time you decide what to buy, or how much to save, or what job to do or how long to work, economics is in your hands and you are making an economic decision.

This suggests to me two different conceptions of what economics is. In one conception – that of Earle, Moran and Ward-Perkins – economists claim to be a priestly elite who understand “the economy”. As Alasdair MacIntyre said, such a claim functions as a demand for power and wealth:

“Civil servants and managers alike [he might have added economists-CD] justify themselves and their claims to authority, power and money by invoking their own competence as scientific managers (After Virtue, p 86).”

There is, though, a second conception of what economists should do. Rather than exploit alienation for their own advantage, we should help people mitigate it…

Economists in an alienated society (Stumbling and Mumbling)

This makes a point I often refer to – this depiction of “The Economy” as some sort of “natural” force that we have no control over, subject to its own inexorable logic. We saw above how the writings of Adam Smith provided the ideological justification for the wealthy merchants to screw over the workers. It cemented the perception that the economy was just a natural force with its own internal logic that could no more be regulated than could the wind or the tides. And over the course of several hundred years, we have intentionally designed our political institutions such that government cannot “interfere” in the “natural workings” of the economy. Doing so would only make all of us worse off, or so goes the argument.

There is a telling passage in this column by Noah Smith:

…Even now, when economic models have become far more complex than anything in [Milton] Friedman’s time, economists still go back to Friedman’s theory as a mental touchstone — a fundamental intuition that guides the way they make their models. My first macroeconomics professor believed in it deeply and instinctively, and would even bring it up in department seminars.

Unfortunately, intuition based on incorrect theories can lead us astray. Economists have known for a while that this theory doesn’t fit the facts. When people get a windfall, they tend to spend some of it immediately. So economists have tried to patch up Friedman’s theory, using a couple of plausible fixes….

Milton Friedman’s Cherished Theory Is Laid to Rest (Bloomberg)

Yes, you read that right, economists knew for a long time that a particular theory did not accord with the observed facts, but they didn’t discard it because it was necessary for the complex mathematical models that they use to supposedly describe reality. Rather, instead of discarding it, they tried to “patch it up,” because it told them what they wanted to hear. Note how his economics professor “believed deeply” in the theory, much as how people believe in the Good Book.

Nice “science” you got there.

That methodology ought to tell you everything you need to know about economic “science.” One wonders how many other approaches economists take that such thinking applies to.

Friedman was, of course, the author of “Capitalism and Freedom,” which, as we saw above, is quite an ironic title. Friedman’s skill was coming up with ideas that the rich wanted to hear, and then coming up with the requisite economic “logic” to justify them, from deregulation, to privatization, to globalization, to the elimination of minimum wages and suppression of unions. His most famous idea was that the sole purpose of a firm is to make money for its shareholders, and that all other responsibilities were ‘unethical.’ The resulting “libertarian” economics was promoted tirelessly, including a series on PBS, by wealthy organizations and right-wing think tanks with bottomless funding, as it still is today (along with its even more extreme cousin, “Austrian” economics). One thing the Luddites did not have to contend with was the power of the media to shape society, one reason why such revolts would be unthinkable today (along with the panopticon police states constructed by capitalist regimes beginning with Great Britain—“freedom” indeed!).

Smith himself has written about what he calls 101ism:

We all know basically what 101ism says. Markets are efficient. Firms are competitive. Partial-equilibrium supply and demand describes most things. Demand curves slope down and supply curves slope up. Only one curve shifts at a time. No curve is particularly inelastic or elastic; all are somewhere in the middle (straight lines with slopes of 1 and -1 on a blackboard). Etc.

Note that 101 classes don’t necessarily teach that these things are true! I would guess that most do not. Almost all 101 classes teach about elasticity, and give examples with perfectly elastic and perfectly inelastic supply and demand curves. Most teach about market failures and monopolies. Most at least mention general equilibrium.

But for some reason, people seem to come away from 101 classes thinking that the cases that are the easiest to draw on the board are – God only knows why – the benchmark cases.

101ism (Noahpinion)

But the best criticism I’ve read lately is from James Kwak, who has written an entire book on the subject: Economism: Bad Economics and the Rise of Inequality. He’s written several posts on the topic, but this post is a good introduction to the concept. Basically, he argues that modern economics allows policies that benefit the rich at the expense of the rest of society to masquerade as objective “scientific” truths thanks to the misapplication of economic ideology. As we saw above, that goes back to the very beginnings of “free market” economics in the nineteenth century:

In policy debates and public relations campaigns…what you are … likely to hear is that a minimum wage must increase unemployment—because that’s what the model says. This conviction that the world must behave the way it does on the blackboard is what I call economism. This style of thinking is influential because it is clear and logical, reducing complex issues to simple, pseudo-mathematical axioms. But it is not simply an innocent mistake made by inattentive undergraduates. Economism is Economics 101 transformed into an ideology—an ideology that is particularly persuasive because it poses as a neutral means of understanding the world.

In the case of low-skilled labor, it’s clear who benefits from a low minimum wage: the restaurant and hotel industries. In their PR campaigns, however, these corporations can hardly come out and say they like their labor as cheap as possible. Instead, armed with the logic of supply and demand, they argue that raising the minimum wage will only increase unemployment and poverty. Similarly, megabanks argue that regulating derivatives will starve the real economy of capital; multinational manufacturing companies argue that new trade agreements will benefit everyone; and the wealthy argue that lower taxes will increase savings and investment, unleashing economic growth.

In each case, economism allows a private interest to pretend that its preferred policies will really benefit society as a whole. The usual result is to increase inequality or to legitimize the widening gulf between rich and poor in contemporary society.

Economics 101, Economism, and Our New Gilded Age (The Baseline Scenario)
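To make that “blackboard” reasoning concrete, here is a minimal sketch of the Economics 101 partial-equilibrium story the passage describes, written as a toy calculation. The linear supply and demand curves and every number in it are invented for illustration; they come from no study and describe no real labor market.

# Toy Econ 101 labor market: the "blackboard" model criticized above.
# All coefficients are invented for illustration only.

def labor_demand(wage, a=100.0, b=4.0):
    """Hours of labor firms want to hire at a given wage (slopes down)."""
    return max(a - b * wage, 0.0)

def labor_supply(wage, c=10.0, d=5.0):
    """Hours of labor workers offer at a given wage (slopes up)."""
    return max(c + d * wage, 0.0)

# Market-clearing wage solves a - b*w = c + d*w, i.e. w* = (a - c) / (b + d).
w_star = (100.0 - 10.0) / (4.0 + 5.0)   # 10.0
q_star = labor_demand(w_star)           # 60.0 hours employed

# Impose a wage floor above the clearing wage and reread the same curves.
w_min = 12.0
employed = labor_demand(w_min)          # 52.0 hours demanded
offered = labor_supply(w_min)           # 70.0 hours offered
surplus = offered - employed            # 18.0 hours of "unemployment"

print(f"clearing wage {w_star:.1f}, employment {q_star:.1f}")
print(f"floor {w_min:.1f}: employment {employed:.1f}, surplus labor {surplus:.1f}")

The “minimum wages cause unemployment” conclusion drops straight out of the two straight lines assumed at the start; whether real labor markets actually look like those lines is exactly the empirical question that economism, as Kwak defines it, skips over.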

All of the above reinforces a couple of points I often like to make:

1.) Capitalism was a creation of government from day one. There is nothing “natural” or “free” about markets.

2.) It is sustained by a particular ideology which poses as a science but is anything but.

There is no fundamental reason we need to work 40 hours a week. There is no reason we have to go into debt just to get a job. There is no benefit to extreme wealth inequality; it’s not due to any sort of “merit.” And on and on. Economic “logic” is destroying society along with the natural world and preventing any adaptive response to these crises. But its power over the hearts and minds of society seems to be unassailable, at least until it all falls apart.

Saturday Matinee: Obsolete

Source: Truthstream Media

The Future Doesn’t Need Us… Or So We’ve Been Told. With the rise of technology and the real-time pressures of an online, global economy, humans will have to be very clever – and very careful – not to be left behind by the future. From the perspective of those in charge, human labor is losing its value, and people are becoming a liability. This documentary reveals the real motivation behind the secretive effort to reduce the population and bring resource use into strict, centralized control. Could it be that the biggest threat we face isn’t just automation and robots destroying jobs, but the larger sense that humans could become obsolete altogether? *Please watch and share!* Link to film: http://amzn.to/2f69Ocr

Will Robots Take Your Job?

Walmart Robots

By Nick Srnicek and Alex Williams

Source: ROAR

In recent months, a range of studies has warned of an imminent job apocalypse. The most famous of these—a study from Oxford—suggests that up to 47 percent of US jobs are at high risk of automation over the next two decades. Its methodology—assessing likely developments in technology and matching them up to the tasks typically deployed in jobs—has since been replicated for a number of other countries. One study finds that 54 percent of EU jobs are likely automatable, while the chief economist of the Bank of England has argued that 45 percent of UK jobs are similarly under threat.
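The “task-matching” methodology behind those headline figures can be illustrated with a toy calculation. The sketch below is only a crude stand-in for the Oxford study’s actual statistical model: the job names, task lists, time shares and automatability scores are all invented for illustration.

# Toy task-based automation-risk scoring. Each job is treated as a weighted
# bundle of tasks, each task gets a guessed probability of being automatable,
# and the job's risk is the weighted average. All numbers are invented.

# task -> assumed probability that the task can be automated
TASK_RISK = {
    "scan items": 0.95,
    "drive fixed routes": 0.85,
    "answer routine queries": 0.80,
    "negotiate with clients": 0.20,
    "care for patients": 0.10,
}

# job -> list of (task, share of working time spent on that task)
JOBS = {
    "cashier": [("scan items", 0.7), ("answer routine queries", 0.3)],
    "truck driver": [("drive fixed routes", 0.8), ("negotiate with clients", 0.2)],
    "nurse": [("care for patients", 0.9), ("answer routine queries", 0.1)],
}

def job_risk(tasks):
    """Weighted average of task-level automatability."""
    return sum(TASK_RISK[task] * share for task, share in tasks)

for job, tasks in JOBS.items():
    risk = job_risk(tasks)
    label = "high risk" if risk > 0.70 else "lower risk"
    print(f"{job:12s} {risk:.2f} ({label})")

Summing the employment counts of every occupation scored “high risk” in some such way is how figures like “47 percent of US jobs” are produced; the contestable part is the task-level probabilities themselves.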

This is not simply a rich-country problem, either: low-income economies look set to be hit even harder by automation. The low-skill, low-wage and routine jobs that have been outsourced from rich capitalist countries to poorer economies are themselves highly susceptible to automation. Research by Citi suggests that 69 percent of jobs in India are at risk, 77 percent in China, and a full 85 percent of current jobs in Ethiopia. It would seem that we are on the verge of a mass job extinction.

Nothing New?

For many economists, however, there is nothing to worry about. If we look at the history of technology and the labor market, past experience suggests that automation has not caused mass unemployment. Automation has always changed the labor market. Indeed, one of the primary characteristics of the capitalist mode of production has been to revolutionize the means of production—to really subsume the labor process and reorganize it in ways that more efficiently generate value. The mechanization of agriculture is an early example, as is the use of the cotton gin and spinning jenny. With Fordism, the assembly line turned complex manufacturing jobs into a series of simple and efficient tasks. And in the era of lean production, the computerized management of long commodity chains has turned the production process into a more and more heavily automated system.

In every case, we have not seen mass unemployment. Instead we have seen some jobs disappear while others have been created, replacing the lost jobs and also supplying the new jobs needed for a growing population. The only times we see massive unemployment tend to be the result of cyclical factors, as in the Great Depression, rather than some secular trend towards higher unemployment resulting from automation. On the basis of these considerations, most economists believe that the future of work will likely be the same as the past: some jobs will disappear, but others will be created to replace them.

In typical economist fashion, however, these accounts neglect the broader social context of earlier historical periods. Capitalism may not have seen a massive upsurge in unemployment, but this was not a necessary outcome. Rather, it depended upon unique circumstances of earlier moments—circumstances that are missing today. In the earliest periods of automation, there was a major effort by the labor movement to reduce the working week. It was a successful project that reduced the week from around 60 hours at the turn of the century down to 40 hours during the 1930s, and very nearly even down to 30 hours. In this context, it was no surprise that Keynes would famously extrapolate to a future where we all worked 15 hours. He was simply looking at the existing labor movement. With reduced work per person, the remaining work was spread around more evenly. The impact of technology at that time was therefore heavily muted by a 33 percent reduction in the amount of work per person.

Today, by contrast, we have no such movement pushing for a reduced working week, and the effects of automation are likely to be much more serious. Similar considerations hold for the postwar era. With most Western economies left in ruins, and massive American support for the revitalization of these economies, the postwar era saw incredibly high levels of economic growth. With the further addition of full employment policies, this period also saw incredibly high levels of job growth and a compact between trade unions and capital to maintain a sufficient supply of good jobs. This led to healthy wage growth and, subsequently, healthy growth in aggregate demand to stimulate the economy and keep jobs coming. Moreover, this was a period when nearly 50 percent of the potential labor force was confined to the household.

Under these unique circumstances, it is no wonder that capitalism was able to create enough jobs even as automation continued to transform the labor process. Today, we have sluggish economic growth, no commitment to full employment (even as we have commitments to harsh welfare policies), stagnant wage growth, and a major influx of women into the labor force. The context for a wave of automation is drastically different from what it was before.

Likewise, the types of technology that are being developed and potentially introduced into the labor process are significantly different from earlier technologies. Whereas earlier waves of automation affected what economists call “routine work” (work that can be laid out in a series of explicit steps), today’s technology is beginning to affect non-routine work. The difference is between a factory job on an assembly line and driving a car in the chaotic atmosphere of the modern urban environment. Research from economists like David Autor and Maarten Goos shows that the decline of routine jobs in the past 40 years has played a significant role in increased job polarization and rising inequality. While these jobs are gone, and highly unlikely to come back, the next wave of automation will affect the remaining sphere of human labor. An entire range of low-wage jobs are now potentially automatable, involving both physical and mental labor.

Given that it is quite likely that new technologies will have a larger impact on the labor market than earlier waves of technological change, what is likely to happen? Will robots take your job? While one side of the debate warns of imminent apocalypse and the other yawns from the historical repetition, both tend to neglect the political economy of automation—particularly the role of labor. Put simply, if the labor movement is strong, we are likely to see more automation; if the labor movement is weak, we are likely to see less automation.

Workers Fight Back

In the first scenario, a strong labor movement is able to push for higher and higher wages (particularly relative to globally stagnant productivity growth). The rising cost of labor then means that machines become relatively cheap in comparison. We can already see this in China, where real wages have been surging for more than 10 years, making Chinese labor increasingly expensive. The result is that China has become the world’s biggest investor in industrial robots, and numerous companies—most famously Foxconn—have stated their intention to move towards increasingly automated factories.

This is the archetype of a highly automated world, but in order to be achievable under capitalism it requires that the power of labor be strong, given that the relative costs of labor and machines are key determinants for investment. What then happens under these circumstances? Do we get mass unemployment as robots take all the jobs? The simple answer is no. Rather than mass decimation of jobs, most workers who have their jobs automated end up moving into new sectors.

In the advanced capitalist economies this has been happening over the past 40 years, as workers move from routine jobs to non-routine jobs. As we saw earlier, the next wave of automation is different, and therefore its effects on the labor market are also different. Some job sectors are likely to take heavy hits under this scenario. Jobs in retail and transport, for instance, will likely be heavily affected. In the UK, there are currently 3 million retail workers, but estimates by the British Retail Consortium suggest this may decrease by a million over the next decade. In the US, there are 3.4 million cashiers alone—nearly all of whose work could be automated. The transport sector is similarly large, with 3.7 million truck drivers in the US, most of whose jobs could be incrementally automated as self-driving trucks become viable on public roads. Large numbers of workers in such sectors are likely to be pushed out of their jobs if mass automation takes place.

Where will they go? The story that Silicon Valley likes to tell us is that we will all become freelance programmers and software developers and that we should all learn how to code to succeed in their future utopia. Unfortunately they seem to have bought into their own hype and missed the facts. In the US, 1.8 percent of all jobs require knowledge of programming. This compares to the agricultural sector, which creates about 1.5 percent of all American jobs, and to the manufacturing sector, which employs 8.1 percent of workers in this deindustrialized country. Perhaps programming will grow? The facts here are little better. The Bureau of Labor Statistics (BLS) projects that by 2024 jobs involving programming will be responsible for a tiny 2.2 percent of the jobs available. If we look at the IT sector as a whole, according to Citi, it is expected to take up less than 3 percent of all jobs.

What about the people needed to take care of the robots? Will we see a massive surge in jobs here? Presently, robot technicians and engineers take up less than 0.1 percent of the job market—by 2024, this will dwindle even further. We will not see a major increase in jobs taking care of robots or in jobs involving coding, despite Silicon Valley’s best efforts to remake the world in its image.

This continues a long trend of new industries being very poor job creators. We all know how few employees worked at Instagram and WhatsApp when they were sold for billions to Facebook. But low levels of employment are a widespread sectoral problem. Research from Oxford has found that in the US, only 0.5 percent of the labor force moved into new industries (like streaming sites, web design and e-commerce) during the 2000s. The future of work does not look like a bunch of programmers or YouTubers.

In fact, the fastest-growing job sectors are not ones that require high levels of education at all. The belief that we will all become high-skilled and well-paid workers is ideological mystification at its purest. The fastest-growing sector, by far, is the healthcare industry. In the US, the BLS estimates that this sector will create 3.8 million new jobs between 2014 and 2024. This will increase its share of employment from 12 percent to 13.6 percent, making it the largest employment sector in the country. The occupations of “healthcare support” and “healthcare practitioner” alone will contribute 2.3 million jobs, or 25 percent of all new jobs expected to be created.

There are two main reasons why this sector will be such a magnet for workers forced out of other sectors. In the first place, the demographics of high-income economies all point towards a significantly growing elderly population. Fewer births and longer lives (typically with chronic conditions rather than infectious diseases) will put more and more pressure on our societies to take care of the elderly, and force more and more people into care work. In the second place, this sector is not amenable to automation; it is one of the last bastions of human-centric skills like creativity, knowledge of social context and flexibility. This means the demand for labor in this sector is unlikely to decrease, as productivity remains low, skills remain human-centric, and demographics keep it growing.

In the end, under the scenario of a strong labor movement, we are likely to see wages rise, which will cause automation to rapidly proceed in certain sectors, while workers are forced to struggle for jobs in a low-paying healthcare sector. The result is the continued elimination of middle-wage jobs and the increased polarization of the labor market as more and more are pushed into the low-wage sectors. On top of this, a highly educated generation that was promised secure and well-paying jobs will be forced to find lower-skilled jobs, putting downward pressure on wages—generating a “reserve army of the employed”, as Robert Brenner has put it.

Workers Fall Back

Yet what happens if the labor movement remains weak? Here we have an entirely different future of work awaiting us. In this case, we end up with stagnant wages, and workers remain relatively cheap compared to investment in new equipment. The consequences of this are low levels of business investment, and subsequently, low levels of productivity growth. Absent any economic reason to invest in automation, businesses fail to increase the productivity of the labor process. Perhaps unexpectedly, under this scenario we should expect high levels of employment as businesses seek to maximize the use of cheap labor rather than investing in new technology.

This is more than a hypothetical scenario, as it rather accurately describes the situation in the UK today. Since the 2008 crisis, real wages have stagnated and even fallen. Real average weekly earnings have been rising since 2014, but even after eight years they have yet to return to their pre-crisis levels. This has meant that businesses have had incentives to hire cheap workers rather than invest in machines—and the low levels of investment in the UK bear this out. Since the crisis, the UK has seen long periods of decline in business investment—the most recent being a 0.4 percent decline between Q1 2015 and Q1 2016. The result of low levels of investment has been virtually zero growth in productivity: from 2008 to 2015, growth in output per worker averaged 0.1 percent per year. Almost all of the UK’s recent growth has come from throwing more bodies into the economic machine, rather than improving the efficiency of the economy. Even relative to slow productivity growth across the world, the UK is particularly struggling.

With cheap wages, low investment and low productivity, we see that companies have instead been hiring workers. Indeed, the employment rate in the UK has reached its highest level on record—74.2 percent as of May 2016. Likewise, unemployment is low at 5.1 percent, especially when compared to the UK’s neighbors in Europe, who average nearly double that level. So, somewhat surprisingly, an environment with a weak labor movement leads here to high levels of employment.

What is the quality of these jobs, however? We have already seen that wages have been stagnant, and that two-thirds of net job creation since 2008 has been in self-employed jobs. There has also been a major increase in zero-hour contracts (employment arrangements that do not guarantee workers any hours). Estimates are that up to 5 percent of the labor force is in such situations, with over 1.7 million zero-hour contracts in use. Full-time employment is down as well: as a percentage of all jobs, it has been cut from its pre-crisis level of 65 percent to 63 percent and has refused to budge even as the economy grows (slowly). The percentage of involuntary part-time workers—those who would prefer a full-time job but cannot find one—more than doubled after the crisis, and has barely begun to recover since.

Likewise with temporary employees: involuntary temporary workers as a percentage of all temporary workers rose from below 25 percent to over 40 percent during the crisis, only partly recovering to around 35 percent today. There is a vast number of workers who would prefer to work in more permanent and full-time jobs, but who can no longer find them. The UK is increasingly becoming a low-wage and precarious labor market—or, in the Tories’ view, a competitive and flexible labor market. This, we would argue, is the future that obtains with a weak labor movement: low levels of automation, perhaps, but at the expense of wages (and aggregate demand), permanent jobs and full-time work. We may not get a fully automated future, but the alternative looks just as problematic.

These are therefore the two poles of possibility for the future of work. On the one hand, a highly automated world where workers are pushed out of much low-wage non-routine work and into lower-wage care work. On the other hand, a world where humans beat robots but only through lower wages and more precarious work. In either case, we need to build up the social systems that will enable people to survive and flourish in the midst of these significant changes. We need to explore ideas like a Universal Basic Income, we need to foster investment in automation that could eliminate the worst jobs in society, and we need to recover that initial desire of the labor movement for a shorter working week.

We must reclaim the right to be lazy—which is neither a demand to be lazy nor a belief in the natural laziness of humanity, but rather the right to refuse domination by a boss, by a manager, or by a capitalist. Will robots take our jobs? We can only hope so.

Note: All uncited figures either come directly from, or are based on authors’ calculations of, data from the Bureau of Labor Statistics, O*NET and the Office for National Statistics.

A Universal Basic Income Is The Bipartisan Solution To Poverty We’ve Been Waiting For

What if the government simply paid everyone enough so that no one was poor? It’s an insane idea that’s gaining an unlikely alliance of supporters.

By Ben Schiller

Source: FastCoexist.com

There’s a simple way to end poverty: the government just gives everyone enough money, so nobody is poor. No ifs, buts, conditions, or tests. Everyone gets the minimum they need to survive, even if they already have plenty.

This, in essence, is “universal minimum income” or “guaranteed basic income”—where, instead of multiple income assistance programs, we have just one: a single payment to all citizens, regardless of background, gender, or race. It’s a policy idea that sounds crazy at first, but actually begins to make sense when you consider some recent trends.

The first is that work isn’t what it used to be. Many people now struggle through a 50-hour week and still don’t have enough to live on. There are many reasons for this—including the heartlessness of employers and the weakness of unions—but it’s a fact. Work no longer pays. The wages of most American workers have stagnated or declined since the 1970s. About 25% of workers (including 40% of those in restaurants and food service) now need public assistance to top up what they earn.

The second: it’s likely to get worse. Robots already do many menial tasks. In the future, they’ll do more sophisticated jobs as well. A study last year from Carl Frey and Michael Osborne at Oxford University found that 47% of jobs are at risk of computerization over the next two decades. That includes positions in transport and logistics, office and administration, sales and construction, and even law, financial services and medicine. Of course, it’s possible that people who lose their jobs will find others. But it’s also feasible we’re approaching an era when there will simply be less to do.

The third is that traditional welfare is both not what it used to be and not very efficient. The value of welfare for families with children is now well below what it was in the 1990s, for example. The move towards means-testing, workfare—which was signed into law by Bill Clinton in 1996—and other forms of conditionality has killed the universal benefit. And not just in the U.S. It’s now rare anywhere in the world that people get a check without having to do something in return. Whatever the rights and wrongs of this, it makes the income assistance system more complicated and expensive to manage. Up to 10% of the income assistance budget now goes to administering its distribution.

For these reasons and others, the idea of a basic income for everyone is becoming increasingly popular. There has been a flurry of reports and papers about it recently, and, unusually, the idea has advocates across the political spectrum.

The libertarian right likes basic income because it hates bureaucracy and thinks people should be responsible for themselves. Rather than giving out food stamps and health care (which are in-kind services), it thinks people should get cash, because cash is fungible and you do what you like with it.

The left likes basic income because it thinks society is unequal and basic income is redistributive. It evens up the playing field for people who haven’t had good opportunities in life by establishing a floor under the poorest. The “precariat” goes from being perpetually insecure to knowing it has something to live on. That, in turn, should raise well-being and produce more productive citizens.

The technology elite, like Netscape’s Marc Andreessen, also likes the idea. “As a VC, I like the fact that a lot of the political establishment is ignoring or dismissing this idea,” Albert Wenger, of Union Square Ventures, told a TED audience recently, “because what we see in startups is that the most powerful innovative ideas are ones truly dismissed by the incumbents.” A minimum income would allow us to “embrace automation rather than be afraid of it” and let more of us participate in the era of “digital abundance,” he says.

The exact details of basic income still need to be worked out, but it might work something like this: Instead of welfare payments, subsidies for health care, and tax credits for the working poor, we would take that money and use it to fund a single payment that would give everyone the chance to live reasonably. Switzerland recently held a referendum on a basic income, with a proposed amount of $2,800 per month, though the measure was unsuccessful.

But would it actually work? The evidence from actual experiments is limited, though it’s more positive than not. A pilot in the 1970s in Manitoba, Canada, showed that a “Mincome” not only ended poverty but also reduced hospital visits and raised high-school completion rates. There seemed to be a community-affirming effect, which showed itself in people making use of free public services more responsibly.

Meanwhile, there were eight “negative income tax” trials in the U.S. in the ’70s, in which people received payments and the government clawed back most of the money in taxes based on their other income. The results of those trials were more mixed. They reduced poverty, but people also worked slightly less than normal. To some, this is the major drawback of basic income: it could make people lazier than they would otherwise be. That would certainly be a problem, though it’s questionable whether, in the future, there will be as much employment anyway. The age of robots and artificial intelligence seems likely to hollow out many jobs, perhaps changing how we view notions of laziness and productivity altogether.
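
To make the clawback mechanism concrete, here is a minimal sketch of how a negative income tax payment is typically computed: a guaranteed floor that shrinks as earned income rises. The $12,000 guarantee and 50 percent clawback rate are assumptions chosen purely for illustration, not parameters from the 1970s trials or from this article.

# Illustrative sketch of a simplified negative income tax (NIT) calculation.
# The guarantee and clawback rate are hypothetical, chosen only for illustration.

def nit_payment(earned_income: float,
                guarantee: float = 12_000.0,    # assumed annual income floor
                clawback_rate: float = 0.5):    # assumed reduction per dollar earned
    """Return the annual NIT payment: the guarantee minus a share of earnings,
    never falling below zero."""
    return max(0.0, guarantee - clawback_rate * earned_income)

if __name__ == "__main__":
    for income in (0, 8_000, 20_000, 30_000):
        print(f"earned ${income:>6,} -> payment ${nit_payment(income):>9,.2f}")

Under these assumed parameters, someone with no earnings receives the full $12,000, support phases out at 50 cents per dollar earned, and the payment reaches zero at $24,000 of income, which is how such schemes reduce poverty while letting the benefit taper off for higher earners.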

Experiments outside the U.S. have been more encouraging. One in Namibia cut poverty from 76% to 37%, increased non-subsidized incomes, raised education and health standards, and cut crime levels. Another, involving 6,000 people in India, paid people $7 a month—about a third of subsistence levels. It, too, proved successful.

“The important thing is to create a floor on which people can start building some security. If the economic situation allows, you can gradually increase the income to where it meets subsistence,” says Guy Standing, a professor of development studies at the School of Oriental and African Studies, in London, who was involved with the pilot. “Even that modest amount had incredible effects on people’s savings, economic status, health, in children going to school, in the acquisition of items like school shoes, so people felt in control of their lives. The amount of work people were doing increased as well.”

Given the gridlock in Congress, it’s unlikely we’ll see basic income here for a while. Though the idea has supporters in both left- and right-leaning think tanks, it’s doubtful actual politicians could agree to redesign much of the federal government if they can’t agree on much else. But the idea could take off in poorer countries that have more of a blank slate and suffer from less polarization. Perhaps we’ll re-import the concept one day, once the developing world has perfected it?