National (In)Security In the United States of Inequality

By Rajan Menon

Source: Unz Review

So effectively has the Beltway establishment captured the concept of national security that, for most of us, it automatically conjures up images of terrorist groups, cyber warriors, or “rogue states.” To ward off such foes, the United States maintains a historically unprecedented constellation of military bases abroad and, since 9/11, has waged wars in Afghanistan, Iraq, Syria, Libya, and elsewhere that have gobbled up nearly $4.8 trillion. The 2018 Pentagon budget already totals $647 billion — four times what China, second in global military spending, shells out and more than the next 12 countries combined, seven of them American allies. For good measure, Donald Trump has added another $200 billion to projected defense expenditures through 2019.

Yet to hear the hawks tell it, the United States has never been less secure. So much for bang for the buck.

For millions of Americans, however, the greatest threat to their day-to-day security isn’t terrorism or North Korea, Iran, Russia, or China. It’s internal — and economic. That’s particularly true for the 12.7% of Americans (43.1 million of them) classified as poor by the government’s criteria: an income below $12,140 for a one-person household, $16,460 for a family of two, and so on… until you get to the princely sum of $42,380 for a family of eight.

Savings aren’t much help either: a third of Americans have no savings at all and another third have less than $1,000 in the bank. Little wonder that the share of families struggling to cover the cost of food alone rose from 11% (36 million) in 2007 to 14% (48 million) in 2014.

The Working Poor

Unemployment can certainly contribute to being poor, but millions of Americans endure poverty when they have full-time jobs or even hold down more than one job. The latest figures from the Bureau of Labor Statistics show that there are 8.6 million “working poor,” defined by the government as people who live below the poverty line despite being employed at least 27 weeks a year. Their economic insecurity doesn’t register in our society, partly because working and being poor don’t seem to go together in the minds of many Americans — and unemployment has fallen reasonably steadily. After approaching 10% in 2009, it’s now at only 4%.

Help from the government? Bill Clinton’s 1996 welfare “reform” program, concocted in partnership with congressional Republicans, imposed time limits on government assistance while tightening eligibility criteria for it. So, as Kathryn Edin and Luke Shaefer show in their disturbing book, $2.00 a Day: Living on Almost Nothing in America, many who desperately need help don’t even bother to apply. And things will only get worse in the age of Trump. His 2019 budget includes deep cuts in a raft of anti-poverty programs.

Anyone seeking a visceral sense of the hardships such Americans endure should read Barbara Ehrenreich’s 2001 book Nickel and Dimed: On (Not) Getting By in America. It’s a gripping account of what she learned when, posing as a “homemaker” with no special skills, she worked for two years in various low-wage jobs, relying solely on her earnings to support herself. The book brims with stories about people who had jobs but, out of necessity, slept in rent-by-the-week fleabag motels, flophouses, or even in their cars, subsisting on vending machine snacks for lunch, hot dogs and instant noodles for dinner, and forgoing basic dental care or health checkups. Those who managed to get permanent housing would choose poor, low-rent neighborhoods close to work because they often couldn’t afford a car. To maintain even such a barebones lifestyle, many worked more than one job.

Though politicians prattle on about how times have changed for the better, Ehrenreich’s book still provides a remarkably accurate picture of America’s working poor. Over the past decade the proportion of people who exhausted their monthly paychecks just to pay for life’s essentials actually increased from 31% to 38%. In 2013, 71% of the families that had children and used food pantries run by Feeding America, the largest private organization helping the hungry, included at least one person who had worked during the previous year. And in America’s big cities, chiefly because of a widening gap between rent and wages, thousands of working poor remain homeless, sleeping in shelters, on the streets, or in their vehicles, sometimes along with their families. In New York City, no outlier when it comes to homelessness among the working poor, a third of the families with children using homeless shelters include at least one employed adult.

The Wages of Poverty

The working poor cluster in certain occupations. They are salespeople in retail stores, servers or preparers of fast food, custodial staff, hotel workers, and caregivers for children or the elderly. Many make less than $10 an hour and lack any leverage, union or otherwise, to press for raises. In fact, the percentage of unionized workers in such jobs remains in the single digits — and in retail and food preparation, it’s under 4.5%. That’s hardly surprising, given that private sector union membership has fallen by 50% since 1983 to only 6.7% of the workforce.

Low-wage employers like it that way and — Walmart being the poster child for this — work diligently to make it ever harder for employees to join unions. As a result, they rarely find themselves under any real pressure to increase wages, which, adjusted for inflation, have stood still or even decreased since the late 1970s. When employment is “at-will,” workers may be fired or the terms of their work amended on the whim of a company and without the slightest explanation. Walmart announced this year that it would hike its hourly wage to $11 and that’s welcome news. But this had nothing to do with collective bargaining; it was a response to the drop in the unemployment rate, cash flows from the Trump tax cut for corporations (which saved Walmart as much as $2 billion), an increase in minimum wages in a number of states, and pay increases by an arch competitor, Target. It was also accompanied by the shutdown of 63 of Walmart’s Sam’s Club stores, which meant layoffs for 10,000 workers. In short, the balance of power almost always favors the employer, seldom the employee.

As a result, though the United States has a per-capita income of $59,500 and is among the wealthiest countries in the world, 12.7% of Americans (that’s 43.1 million people) are officially impoverished. And that’s generally considered a significant undercount. The Census Bureau establishes the poverty rate by figuring out an annual no-frills family food budget, multiplying it by three, adjusting it for household size, and pegging it to the Consumer Price Index. That, many economists believe, is a woefully inadequate way of estimating poverty. Food prices haven’t risen dramatically over the past 20 years, but the cost of other necessities like medical care (especially if you lack insurance) and housing has risen: 10.5% and 11.8% respectively between 2013 and 2017, compared to an only 5.5% increase for food.

Include housing and medical expenses in the equation and you get the Supplementary Poverty Measure (SPM), published by the Census Bureau since 2011. It reveals that a larger number of Americans are poor: 14% or 45 million in 2016.
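The official method described above is simple enough to sketch in a few lines. A minimal sketch in Python follows; the per-person food budget is an illustrative assumption chosen to roughly reproduce the one-person threshold cited earlier, not an actual Census figure:

```python
def poverty_threshold(annual_food_budget, cpi_ratio=1.0):
    """Sketch of the official measure: a no-frills annual food budget,
    multiplied by three and pegged to the Consumer Price Index.
    (The real thresholds also adjust for household size and family
    composition, which this one-person sketch omits.)"""
    return annual_food_budget * 3 * cpi_ratio

# An assumed per-person food budget near $4,047/year roughly
# reproduces the one-person threshold cited earlier ($12,140):
print(round(poverty_threshold(4047)))  # 12141
```

Because food is a shrinking share of household budgets while housing and medical costs climb, any measure anchored to food alone tends to understate need, which is exactly the critique the SPM addresses.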

Dismal Data

For a fuller picture of American (in)security, however, it’s necessary to delve deeper into the relevant data, starting with hourly wages, which are the way more than 58% of adult workers are paid. The good news: only 1.8 million, or 2.3% of them, subsist at or below minimum wage. The not-so-good news: one-third of all workers earn less than $12 an hour and 42% earn less than $15. That’s $24,960 and $31,200 a year. Imagine raising a family on such incomes, figuring in the cost of food, rent, childcare, car payments (since a car is often a necessity simply to get to a job in a country with inadequate public transportation), and medical costs.
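Those annual figures follow from the standard full-time assumption of 40 hours a week, 52 weeks a year, which a couple of lines of Python confirm:

```python
# Full-time, year-round work: 40 hours/week x 52 weeks = 2,080 hours.
HOURS_PER_YEAR = 40 * 52

for wage in (12, 15):
    print(f"${wage}/hr -> ${wage * HOURS_PER_YEAR:,}/yr")
# $12/hr -> $24,960/yr
# $15/hr -> $31,200/yr
```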

The problem facing the working poor isn’t just low wages, but the widening gap between wages and rising prices. The government has increased the hourly federal minimum wage more than 20 times since it was set at 25 cents under the 1938 Fair Labor Standards Act. Between 2007 and 2009 it rose to $7.25, but over the past decade that sum lost nearly 10% of its purchasing power to inflation, which means that, in 2018, someone would have to work 41 additional days to make the equivalent of the 2009 minimum wage.
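The “41 additional days” figure can be checked with back-of-envelope arithmetic. Treating the purchasing-power loss as roughly 10% and counting calendar rather than working days (both assumptions on my part, not stated in the source) lands close to the cited number:

```python
# If a wage loses fraction f of its purchasing power, matching the old
# real income requires working f / (1 - f) proportionally more.
f = 0.10                   # ~10% purchasing-power loss since 2009
extra = 365 * f / (1 - f)  # scaled over calendar days (an assumption)
print(round(extra))        # 41
```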

Workers in the lowest 20% have lost the most ground, their inflation-adjusted wages falling by nearly 1% between 1979 and 2016, compared to a 24.7% increase for the top 20%. This can’t be explained by lackluster productivity since, between 1985 and 2015, it outstripped pay raises, often substantially, in every economic sector except mining.

Yes, states can mandate higher minimum wages and 29 have, but 21 have not, leaving many low-wage workers struggling to cover the costs of two essentials in particular: health care and housing.

Even when it comes to jobs that offer health insurance, employers have been shifting ever more of its cost onto their workers through higher deductibles and out-of-pocket expenses, as well as by requiring them to cover more of the premiums. The percentage of workers who paid at least 10% of their earnings to cover such costs — not counting premiums — doubled between 2003 and 2014.

This helps explain why, according to the Bureau of Labor Statistics, only 11% of workers in the bottom 10% of wage earners even enrolled in workplace healthcare plans in 2016 (compared to 72% in the top 10%). As a restaurant server who makes $2.13 an hour before tips — and whose husband earns $9 an hour at Walmart — put it, after paying the rent, “it’s either put food in the house or buy insurance.”

The Affordable Care Act, or ACA (aka Obamacare), provided subsidies to help people with low incomes cover the cost of insurance premiums, but workers with employer-supplied healthcare, no matter how low their wages, weren’t covered by it. Now, of course, President Trump, congressional Republicans, and a Supreme Court in which right-wing justices are going to be even more influential will be intent on poleaxing the ACA.

It’s housing, though, that takes the biggest bite out of the paychecks of low-wage workers. The majority of them are renters. Ownership remains for many a pipe dream. According to a Harvard study, between 2001 and 2016, renters who made $30,000-$50,000 a year and paid more than a third of their earnings to landlords (the threshold for qualifying as “rent burdened”) increased from 37% to 50%. For those making only $15,000, that figure rose to 83%.

In other words, in an ever more unequal America, the number of low-income workers struggling to pay their rent has surged. As the Harvard analysis shows, this is, in part, because the number of affluent renters (with incomes of $100,000 or more) has leapt and, in city after city, they’re driving the demand for, and building of, new rental units. As a result, the high-end share of new rental construction soared from a third to nearly two-thirds of all units between 2001 and 2016. Not surprisingly, new low-income rental units dropped from two-fifths to one-fifth of the total and, as the pressure on renters rose, so did rents for even those modest dwellings. On top of that, in places like New York City, where demand from the wealthy shapes the housing market, landlords have found ways — some within the law, others not — to get rid of low-income tenants.

Public housing and housing vouchers are supposed to make housing affordable to low-income households, but the supply of public housing hasn’t remotely matched demand. Consequently, waiting lists are long and people in need languish for years before getting a shot — if they ever do. Only a quarter of those who qualify for such assistance receive it. As for those vouchers, getting them is hard to begin with because of the massive mismatch between available funding for the program and the demand for the help it provides. And then come the other challenges: finding landlords willing to accept vouchers or rentals that are reasonably close to work and not in neighborhoods euphemistically labelled “distressed.”

The bottom line: more than 75% of “at-risk” renters (those for whom rent eats up 30% or more of their earnings) do not receive assistance from the government. The real “risk” for them is becoming homeless, which means relying on shelters or family and friends willing to take them in.

President Trump’s proposed budget cuts will make life even harder for low-income workers seeking affordable housing. His 2019 budget proposal slashes $6.8 billion (14.2%) from the Department of Housing and Urban Development (HUD) by, among other things, scrapping housing vouchers and assistance to low-income families struggling to pay heating bills. The president also seeks to slash funds for the upkeep of public housing by nearly 50%. In addition, the deficits that his rich-come-first tax “reform” bill is virtually guaranteed to produce will undoubtedly set the stage for yet more cuts in the future. In other words, in what’s becoming the United States of Inequality, the very phrases “low-income workers” and “affordable housing” have ceased to go together.

None of this seems to have troubled HUD Secretary Ben Carson who happily ordered a $31,000 dining room set for his office suite at the taxpayers’ expense, even as he visited new public housing units to make sure that they weren’t too comfortable (lest the poor settle in for long stays). Carson has declared that it’s time to stop believing the problems of this society can be fixed merely by having the government throw extra money at them — unless, apparently, the dining room accoutrements of superbureaucrats aren’t up to snuff.

Money Talks

The levels of poverty and economic inequality that prevail in America are not intrinsic to either capitalism or globalization. Most other wealthy market economies in the 36-nation Organization for Economic Cooperation and Development (OECD) have done far better than the United States in reducing them without sacrificing innovation or creating government-run economies.

Take the poverty gap, which the OECD defines as the difference between a country’s official poverty line and the average income of those who fall below it. The United States has the second largest poverty gap among wealthy countries; only Italy does worse.

Child poverty? In the World Economic Forum’s ranking of 41 countries — from best to worst — the U.S. placed 35th. Child poverty has declined in the United States since 2010, but a Columbia University report estimates that 19% of American kids (13.7 million) nevertheless lived in families with incomes below the official poverty line in 2016. If you add in the number of kids in low-income households, that number increases to 41%.

As for infant mortality, according to the government’s own Centers for Disease Control, the U.S., with 6.1 deaths per 1,000 live births, has the absolute worst record among wealthy countries. (Finland and Japan do best with 2.3.)

And when it comes to the distribution of wealth, among the OECD countries only Turkey, Chile, and Mexico do worse than the U.S.

It’s time to rethink the American national security state with its annual trillion-dollar budget. For tens of millions of Americans, the source of deep workaday insecurity isn’t the standard roster of foreign enemies, but an ever-more entrenched system of inequality, still growing, that stacks the political deck against the least well-off Americans. They lack the bucks to hire big-time lobbyists. They can’t write lavish checks to candidates running for public office or fund PACs. They have no way of manipulating the myriad influence-generating networks that the elite uses to shape taxation and spending policies. They are up against a system in which money truly does talk — and that’s the voice they don’t have. Welcome to the United States of Inequality.

 

Rajan Menon, a TomDispatch regular, is the Anne and Bernard Spitzer Professor of International Relations at the Powell School, City College of New York, and Senior Research Fellow at Columbia University’s Saltzman Institute of War and Peace Studies. He is the author, most recently, of The Conceit of Humanitarian Intervention 

Suicide? American Society is Murdering Us

By Ted Rall

Source: CounterPunch

They say that 10 million Americans seriously consider committing suicide every year. In 1984, when I was 20, I was one of them.

Most people who kill themselves feel hopeless. They are miserable and distraught and can’t imagine how or if their lives will ever improve. That’s how I felt. Within a few months I got expelled from college, dumped by a girlfriend I foolishly believed I would marry, fired from my job and evicted from my apartment. I was homeless, bereft, broke. I didn’t have enough money for more than a day of cheap food. And I had no prospects.

I tried in vain to summon up the guts to jump off the roof of my dorm. I went down to the subway but couldn’t make myself jump in front of a train. I wanted to. But I couldn’t.

Obviously things got better. I’m writing this.

Things got better because my luck changed. But — why did it have to? Isn’t there something wrong with a society in which life or death turns on luck?

I wish I could tell my 20-year-old self that suicide isn’t necessary, that there is another way, that there will be plenty of time to be dead in the end. I’ve seen those other ways when I’ve traveled overseas.

In Thailand and Central Asia and the Caribbean and all over the world you will find Americans whose American lives ran hard against the shoals of bankruptcy, lost love, addiction or social shame. Rather than off themselves, they gathered their last dollars and headed to the airport and went somewhere else to start over. They showed up at some dusty ex-pat bar in the middle of nowhere with few skills other than speaking English and asked if they could crash in the back room in between washing dishes. Eventually they scraped together enough money to conduct tours for Western tourists, maybe working as a divemaster or taking rich vacationers deep-sea fishing. They weren’t rich themselves; they were OK and that was more than enough.

You really can start over. But maybe not in this uptight, stuck-up, class-stratified country.

I remembered that in 2015 when I suffered another setback. Unbeknownst to me, the Los Angeles Times — where I had worked as a cartoonist since 2009 – had gotten itself into a corrupt business deal with the LAPD, which I routinely criticized in my cartoons. A piece-of-work police chief leveraged his department’s financial influence on the newspaper by demanding that the idiot ingénue publisher, his political ally, fire me as a favor. But mere firing wasn’t enough for these two goons. They published not one, but two articles, lying about me in an outrageous attempt to destroy my journalistic credibility. I’m suing but the court system is slower than molasses in the pre-climate change Arctic.

Suicide crossed my mind many times during those dark weeks and months. Although I had done nothing wrong, the Times’ smears made me feel ashamed. I was angry: at the Times editors who should have quit rather than carry out such shameful orders, at the media outlets who refused to cover my story, at the friends and colleagues who didn’t support me. Though many people stood by me, I felt alone. I couldn’t imagine salvaging my reputation — as a journalist, your reputation for truthtelling and integrity is your most valuable asset, essential both to doing your job and to getting new ones.

As my LA Times nightmare unfolded, however, I remembered the Texas-born bartender who had reinvented himself in Belize after his wife left him and a family court judge ordered him to pay 90% of his salary in alimony. I thought about the divemaster in Cozumel running away from legal trouble back in the States that he refused to describe. If my career were to crumble away, I could split.

You can opt out of BS without having to opt out of life.

Up 30% since 1999, suicide has become an accelerating national epidemic — 1.4 million Americans tried to kill themselves in a single year, 2015 — but the only time the media focuses on suicide is when it claims the lives of celebrities like Kate Spade and Anthony Bourdain. While the media has made inroads by covering high-profile suicides discreetly, so as to minimize suicidal ideation and avoid inspiring others to follow suit, it’s frustrating that no one seems to want to identify the societal and political factors that might allow this trend to be reversed.

Experts believe that roughly half of men who commit suicide suffer from undiagnosed mental illness such as a severe personality disorder or clinical depression. Men commit suicide in substantially higher numbers than women. The healthcare insurance business isn’t much help. One in five Americans is mentally ill, yet 60% of them get no treatment at all.

Then there’s stress. Journalistic outlets and politicians don’t target the issue of stress in any meaningful way other than to foolishly, insipidly advise people to avoid it. If you subject millions of people to inordinate stress, some of them, the fragile ones, will take their own lives. We should be working to create a society that minimizes rather than increases stress.

It doesn’t require a lot of heavy lifting to come up with major sources of stress in American society. People are working longer hours but earning lower pay. Even people with jobs are terrified of getting laid off without a second’s notice. The American healthcare system, designed to fatten for-profit healthcare corporations, is a sick joke. When you lose your job or get sick, that shouldn’t be your problem alone. We’re social creatures. We must help each other personally, locally and through strong safety-net social programs.

Loneliness and isolation are likely leading causes of suicide; technology is alienating us from one another even from those who live in our own homes. This is a national emergency. We have to discuss it, then act.

Life in the United States has become vicious and brutal, too much to take even for this nation founded upon the individualistic principles of rugged libertarian pioneers. Children are pressured to exhibit fake joy and success on social media. Young adults are burdened with gigantic student loans they strongly suspect they will never be able to repay. The middle-aged are divorced, outsourced, downsized and repeatedly told they are no longer relevant. And the elderly are thrown away or warehoused, discarded and forgotten by the children they raised.

We don’t have to live this way. It’s a choice. Like the American ex-pats I run into overseas, American society can opt out of crazy-making capitalism without having to opt out of life.

America is Disneyland

By Chris Kanthan

Source: Activist Post

Disneyland is the Happiest Place on Earth! Millions of families visit the theme park every year to enjoy the magical place of rides, spectacular shows and cheerful cartoon figures. Everything is clean, perfect and joyful. Unless … you realize that Cinderella might actually be homeless. That’s right, 10% of Disneyland’s employees are homeless, many more are on food stamps, and 75% struggle to make ends meet.

Does this ring familiar? Think of America. Behind the façade of being the greatest country on Earth with the largest GDP and the wealthiest billionaires, there are tens of millions of Americans who are left behind just like Disney’s employees.

This neo-feudalistic model isn’t isolated to Disney or Walmart, it’s systemic. For example, the bus driver at Apple – which has $280 billion in cash – is forced to sleep in a van because he can’t afford the Silicon Valley rent; Facebook’s cafeteria workers live in a garage; and thousands of American Airlines’ employees are forced to depend on food stamps.

America is being eaten alive by corporate greed; and Disneyland has been taken over by Scrooge.

Let’s look at some Disney Inc. statistics.

Total profit per year: $9 billion

Total employees: 200,000

Notice that the profit reflects what’s left after all the expenses, including the salaries, have been paid. So, in a utopian world, Disney’s management would do the math ($9 billion / 200,000 = $45,000) and send a check for $45K to every employee, Mickey included. That kind of profit-sharing would really make Disneyland the happiest place on Earth. Does that happen? No way!

Does Cinderella get a check for perhaps $20K, $10K, $5K or even $1K? Nope, nope, nope, nope. Cinderella gets nada, zero, zilch. She should be content with the $12/hour salary and must smile happily for the kids.

In Disneyland, Cinderella never gets to meet her prince.

Disney’s CEO gets paid $46 million a year, which translates to $23,000 an hour. Imagine Disney’s CEO coming to work on Jan 2nd. He wishes a few people “happy new year,” orders coffee, sits at his desk, makes a few phone calls … and he has already made more money than Ariel will make during the rest of the year.

Of course, the CEO should get paid more, but does he deserve a salary equivalent to that of 2,000 Disney employees? If the CEO doesn’t show up for work for a day, Disneyland will continue running. If 2,000 employees take a day off, the park will be shut down.
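The arithmetic in the last two paragraphs is easy to verify. The 2,000-hour work year and the $12/hour comparison wage below are assumptions drawn from figures earlier in the piece:

```python
ceo_annual = 46_000_000              # CEO pay cited above
hours = 2_000                        # assumed full-time work year
print(ceo_annual // hours)           # 23000 -> $23,000 an hour

worker_annual = 12 * hours           # an assumed $12/hour employee
print(round(ceo_annual / worker_annual))  # 1917 -> roughly 2,000 workers
```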

In the 1960s, the CEO-to-worker salary ratio was 25. Today it’s often 600 or more, sometimes even more than 1,000 (for example, at Walmart). Much of the executive compensation comes in the form of stock options and bonuses based on stock performance. In a rational, unrigged world, CEOs would have to increase revenues and profits to earn those bonuses. Not anymore.

Now, the CEOs simply use a no-brainer tool to boost stock prices: stock buybacks, or share repurchases. This involves a firm using corporate profits (or even borrowed money) to buy its own stock. By the way, this practice was illegal until the early 1980s.

Since 2007, US corporations have spent trillions of dollars on stock buybacks. In 2018 alone, they will spend $800 billion on this financial engineering tool (which has also led to a massive stock market bubble). They won’t use the billions to hire Americans, boost wages or innovate new products. Instead, the CEOs will buy yachts and tell you that Chinese or Mexicans stole your jobs.

Do the low-wage employees of Disneyland get any shares or stock options? A silly question, indeed.

Thus we have a situation where American employers ruthlessly exploit American workers. This isn’t a good model for a country. China and Mexico don’t make us poor; predatory capitalism does.

Paying good wages to hardworking employees is not socialism or communism. Henry Ford understood this when he more than doubled the wages of his workers in 1914.

However, a hundred years later, maximizing profit has become a fundamentalist dogma. You can imagine a conversation among factory-farming executives:

Guy #1: Why the heck are these chickens roaming out in the farms? We would save so much money if we lock them up in cages.

Guy #2: Brilliant idea! Let’s lock up five chickens in a cage. We will save more. More is always better.

Guy #3: I really don’t understand why we feed them expensive salads and healthy stuff. Let’s feed them cheap GMO corn and GMO soy from my friends at Monsanto.

Guy #4: Experts tell me that if we give them caffeine and anti-depressants, the chickens will stay awake longer, eat more, and get fatter.

Guy #5: And when they get sick, load them up with antibiotics and steroids.

Guy #6: These stupid chickens are also so small. Let’s drug them with some growth hormones. I am getting a lot of pressure from the private equity funds about profits per chicken.

Apart from being inhumane and psychopathic, this system forgets or ignores the fact that we have to eat these chickens. Sick chicken = sick people. Call it Karma or “revenge of the chickens.”

Similarly, poor workers = poor country. And you can imagine a similar conversation among corporate executives regarding workers – “cut their wages and benefits”, “make them work overtime”, “hire part-time employees rather than full-time” and so on.

You can’t grow the economy if American workers don’t get paid enough, especially by profitable multi-billion-dollar corporations. Two-thirds of our GDP is based on consumer spending. It’s no wonder that over the last ten years the US economy grew cumulatively by only a dismal 35%. Compare that to China, which grew by an astounding 200% during the same period.

And it’s not a coincidence that China’s average wages have more than doubled over the same period.

The solution for low wages primarily lies in the hands of corporate elites. Labor unions are almost non-existent in the private sector these days, and the government doesn’t have much control over corporate America — in fact, corporations control the U.S. political system. A free market doesn’t have to translate into cancerous greed and extreme exploitation. A free market also means that corporations are free to share their profits with their employees. Finally, a free market can and must also incorporate patriotism, responsibility to society, and strategies for sustainable prosperity.

 

Chris Kanthan is the author of a new book, Deconstructing the Syrian War. Chris lives in the San Francisco Bay Area, has traveled to 35 countries, and writes about world affairs, politics, economy and health. His other book is Deconstructing Monsanto. Follow him on Twitter: @GMOChannel

 

 

A 2% Financial Wealth Tax Would Provide A $12,000 Annual Stipend To Every American Household

Careful analysis reveals a number of excellent arguments for the implementation of a Universal Basic Income.

By Paul Buchheit

Source: Nation of Change

It’s not hard to envision the benefits in work opportunities, stress reduction, child care, entrepreneurial activity, and artistic pursuits for American households with an extra $1,000 per month. It’s also very easy to justify a financial wealth tax, given that the dramatic stock market surge in recent years is largely due to an unprecedented degree of technological and financial productivity that derives from the work efforts and taxes of ALL Americans. A 2% annual tax on financial wealth is a small price to pay for the great fortunes bestowed on the most fortunate Americans.

The REASONS? Careful analysis reveals a number of excellent arguments for the implementation of a Universal Basic Income (UBI).

(1) Our Jobs are Disappearing

A 2013 Oxford study determined that nearly HALF of American jobs are at risk of being replaced by computers, AI, and robots. Society simply can’t keep up with technology. As for the skeptics who cite the Industrial Revolution and its job-enhancing aftermath (which actually took 60 years to develop), the McKinsey Global Institute says that society is being transformed at a pace “ten times faster and at 300 times the scale” of the radical changes of two hundred years ago.

(2) Half of America is Stressed Out or Sick

Half of Americans are in or near poverty, unable to meet emergency expenses, living from paycheck to paycheck, and getting physically and emotionally ill because of it. Numerous UBI experiments have led to increased well-being for their participants. A guaranteed income reduces the debilitating effects of inequality. As one recipient put it, “It takes me out of depression…I feel more sociable.”

(3) Children Need Our Help

This could be the best reason for monthly household stipends. Parents, especially mothers, are unable to work outside the home because of the all-important need to care for their children. Because we currently lack a UBI, more and more children are facing hunger and health problems and educational disadvantages.

(4) We Need More Entrepreneurs

A sudden influx of $12,000 per year for 126 million households will greatly stimulate the economy, potentially allowing millions of Americans to TAKE RISKS that could lead to new forms of innovation and productivity.

Perhaps most significantly, a guaranteed income could relieve some of the pressure on our newest generation of young adults, who are deep in debt, underemployed, increasingly unable to live on their own, and ill-positioned to take the entrepreneurial chances that are needed to spur innovative business growth. No other group of Americans could make more productive use of an immediate boost in income.

(5) We Need the Arts & Sciences

A recent Gallup poll found that nearly 70% of workers don’t feel ‘engaged’ (enthusiastic and committed) in their jobs. The work chosen by UBI recipients could unleash artistic talents and creative impulses that have been suppressed by personal financial concerns, leading, very possibly, to a repeat of the 1930s, when the Works Progress Administration hired thousands of artists and actors and musicians to help sustain the cultural needs of the nation.

Arguments against

The usual uninformed and condescending opposing argument is that UBI recipients will waste the money, spending it on alcohol and drugs and other ‘temptation’ goods. Not true. Studies from the World Bank and the Brooks World Poverty Institute found that money going to poor families is used primarily for essential needs, and that the recipients experience greater physical and mental well-being as a result of their increased incomes. Other arguments against the workability of the UBI are countered by the many successful experiments conducted in the present and recent past: Finland, Canada, the Netherlands, Kenya, India, Great Britain, Uganda, Namibia, and in the U.S. in Alaska and California.

How to pay for it

Largely because of the stock market, U.S. financial wealth has surged to $77 trillion, with the richest 10% owning over three-quarters of it. Just a 2 percent tax on total financial wealth would generate enough revenue to provide a $12,000 annual stipend to every American household (including those of the richest families).
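The arithmetic behind that claim is easy to check. The sketch below simply multiplies out the figures the article itself gives (total financial wealth, tax rate, household count, stipend); none of the numbers are independently verified here.

```python
# Back-of-the-envelope check of the article's wealth-tax arithmetic.
# All inputs come from the article's own figures, not outside sources.

TOTAL_FINANCIAL_WEALTH = 77e12   # $77 trillion in U.S. financial wealth, per the article
TAX_RATE = 0.02                  # proposed 2% annual tax on financial wealth
HOUSEHOLDS = 126e6               # roughly 126 million U.S. households, per the article
ANNUAL_STIPEND = 12_000          # $1,000 per month per household

revenue = TAX_RATE * TOTAL_FINANCIAL_WEALTH   # about $1.54 trillion per year
cost = HOUSEHOLDS * ANNUAL_STIPEND            # about $1.51 trillion per year

print(f"Revenue: ${revenue / 1e12:.2f} trillion")
print(f"Cost:    ${cost / 1e12:.2f} trillion")
print(f"Margin:  ${(revenue - cost) / 1e9:.0f} billion")
```

On the article's numbers, the tax raises roughly $1.54 trillion against a roughly $1.51 trillion stipend bill, so the proposal just covers its cost, with little slack for administration or for any shrinkage of the tax base.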

It’s easy to justify a wealth tax. Over half of all basic research is paid for by our tax dollars. All the technology in our phones and computers started with government research and funding. Pharmaceutical companies wouldn’t exist without decades of support from the National Institutes of Health. Yet the tech and pharmaceutical companies claim patents on the products paid for and developed by the American people.

The collection of a wealth tax would not be simple, since only about half of U.S. financial wealth is held directly in equities and liquid assets (Table 5-2). But it’s doable. As Thomas Piketty notes, “A progressive tax on net wealth is better than a progressive tax on consumption because, first, net wealth is better defined for very wealthy individuals.”

And certainly a financial industry that knows how to package worthless loans into A-rated mortgage-backed securities should be able to figure out how to tax the investment companies that manage the rest of our ever-increasing national wealth.

 

Wasted lives: The worldwide tragedy of youth suicide

Principles of goodness together with the golden seed of social justice – sharing – need to be the guiding ideals of a radically redesigned socio-economic paradigm.

By Graham Peebles

Source: Nation of Change

The pressures of modern life are colossal; for young people – those under 25 years of age – they are perhaps greater than at any other time. Competition in virtually every aspect of contemporary life, a culture obsessed with image and material success, and the ever-increasing cost of living are creating a cocktail of anxiety and self-doubt that drives some people to take their own lives and many more to self-abuse of one kind or another.

Amongst this age group today, suicide constitutes the second highest cause of death after road/traffic accidents, and is the most common cause of death in female adolescents aged 15–19 years. This fact is an appalling reflection on our society and the materialistic values driven into the minds of children throughout the world.

The World Health Organization (WHO) estimates that in total “close to 800,000 people die due to suicide every year, which is one person every 40 seconds. Many more attempt suicide,” and those who have attempted suicide are the ones at greatest risk of trying again. Whilst these figures are startling, WHO acknowledges that suicide is widely under-reported. In some countries (throughout Sub-Saharan Africa, for example) where stigma still attaches to suicide, it is not always recorded as the cause of death when in fact it should be, meaning the overall suicide figures are without doubt a great deal higher.

Unless there is fundamental change in the underlying factors that cause suicide, the WHO forecasts that by 2020 – a mere three years away, someone, somewhere will take their own life every 20 seconds. This worldwide issue, WHO states, is increasing year on year; it is a symptom of a certain approach to living – a divisive approach that believes humanity is inherently greedy and selfish and has both created, and is perpetuated by, an unjust socio-economic system which is at the root of many of our problems.

Sliding into despair

Suicide is a global matter and is something that can no longer be dismissed, nor its societal causes ignored. It is the final act in a painful journey of anguish; it signifies a desperate attempt by the victim to be free of the pain they feel, and which, to them, is no longer bearable. It is an attempt to escape inner conflict and emotional agony, persecution or intimidation. It may follow a pattern of self-harm, alcohol or drug abuse, and is, in many cases but not all, related to depression, which blights the lives of more than 300 million people worldwide and is debilitating and deeply painful. As William Styron states in Darkness Visible, “The pain of severe depression is quite unimaginable to those who have not suffered it, and it kills in many instances because its anguish can no longer be borne. The prevention of many suicides will continue to be hindered until there is a general awareness of the nature of this pain.”

Any suicide is a tragedy and a source of great sadness, particularly if the victim is a teenager, or someone in their twenties, who had their whole life ahead of them but for some reason or another could not face it. As with all age groups, mental illness amongst young people is cited as the principal reason for, or an impelling cause of, suicide, as is suffering from an untreated illness such as anxiety, anorexia or bulimia; alcohol and drug abuse are also regularly mentioned, as well as isolation.

All of these factors are effects, the result of the environment in which people – young and not so young – are living: family life, the immediate society, the broader national and world society. The values and codes of behavior that these encourage, and, flowing from this environment, the manner in which people treat one another together with their prevailing attitudes. It must be here that, setting aside any individual pre-disposition, the underlying causes leading to mental illness or alcohol/drug dependency in the first place are rooted.

Unsurprisingly, young people who are unemployed for a long time, who have been subjected to physical or sexual abuse, or who come from broken families in which there is continuous anxiety due to job insecurity and low wages are at heightened risk of suicide, as are homeless people, young gay and bisexual men, and those locked up in prisons or young offenders’ institutions. In addition, WHO states that, “Experiencing conflict, […] loss and a sense of isolation are strongly associated with suicidal behavior.”

Lack of hope is another key factor. Absence of hope leads to despair, and from despair flows all manner of negative thoughts and destructive actions, including suicide. In Japan, where suicide is the leading cause of death among people aged between 15 and 39 (death by suicide in Japan is around twice that of America, France and Canada, and three times that of Germany and the U.K.), the BBC reports that “young people are killing themselves because they have lost hope and are incapable of seeking help.” Suicides began to increase dramatically in Japan in 1998 after the Asian financial crisis and climbed again after the 2008 worldwide economic crash. Economic insecurity is thought to be the cause, driven by “the practice of employing young people on short-term contracts.”

Hope is extremely important, hope that life will improve, that circumstances will change, that people will be kinder and that life will be gentler. That one’s life has meaning. Interestingly, in the aftermath of Princess Diana’s death in 1997, suicides in Britain increased by almost 20 per cent, and cases of self-harm rose by 44 per cent. To many people she was a symbol of compassion and warmth in a brittle, hostile world, and somehow engendered hope.

The list of those most vulnerable to suicide is general and no doubt incomplete; suicide is an individual act and flows from specific circumstances and a particular state of mind. Generalizations miss the subtleties of each desperate cry. Some suicides are spontaneous acts, spur of the moment decisions (as is often the case in Asian countries, where poison is the most common method of suicide), others may be drawn out over years, in the case of the alcoholic for example, punctuated perhaps by times of relief and optimism, only to collapse under the weight of life’s intense demands once more.

It is these constant pressures that are often the principal causes of the slide into despair and the desire to escape the agony of daily life. They are all-pervasive, hard to resist, impossible, apparently, to escape. Firstly, we are all faced with the practical demands of earning a living, paying the rent or mortgage, buying food, and covering the energy bills. Secondly, there are the more subtle pressures, closely related to our ability to meet the practical demands of the day: the pressure to succeed, to make something of one’s life, to be strong – particularly if you’re a young man – to be sexually active, to be popular, to know what you want and have the strength to get it; to have the confidence to dream and the determination to fulfill your dreams. And if you don’t know what you want, if you don’t have ‘dreams’ in a world of dreamers, this is seen as weakness, which will inevitably result in ‘failure’. And by failure is meant material inadequacy as well as unfulfilled potential and perhaps loneliness, because who would want to be with a ‘failure’?

These and other expectations and pressures constitute the relentless demands faced by us all, practical and psychological, and our ability to meet them colours the way we see ourselves and determines, to a degree, how others see us. The images of what we should be, how we should behave, what we should think and aspire to, the values we should adopt and the belief system we should accept are thrust into the minds of everyone from birth. They are narrow, inhibiting, prescribed and deeply unhealthy.

The principal tools of this process of psychological and sociological conditioning are the media and education, as well as parents and peers, all of whom have themselves fallen foul of the same methodology.

Beyond reward and punishment

Step outside the so-called norm, stand out as someone different, and risk being persecuted, bullied and socially excluded. The notion of individuality has been outwardly championed but systematically and institutionally denied. Our education systems are commonly built on two interconnected foundations – conformity and competition – and reinforced through methods, subtle and crude, of reward and punishment. All of which stifles true individuality, which needs a quiet, loving space, free from judgment in which to flower. For the most sensitive, vulnerable and uncertain, the pressure to conform, to compete and succeed, is often too much to bear. Depression, self-doubt, anxiety, self-harm, addiction and, for some, suicide, are the dire consequences.

There are many initiatives aimed at preventing suicide amongst young people – alcohol/drug services, mental health treatment, reducing access to the means of suicide – and these are of tremendous value. However, if the trend of increased suicides among young people is to be reversed, it is necessary to dramatically reduce the pressures on them and inculcate altogether more inclusive values. This means changing the environments in which life is lived, most notably the socio-economic environment that infects all areas of society. Worldwide, life is dominated by the neoliberal economic system, an extreme form of capitalism that has infiltrated every area of life. Under this decrepit, unjust model everything is classed as a commodity, everyone as a consumer, and inequality is guaranteed, with wealth and power concentrated in the hands of a tiny percentage of the population – 1% of 1%, or less in fact. All facets of life have become commercialized, from health care to the supply of water and electricity, and the schooling of our children. The educational environment has been poisoned by the divisive values of the marketplace, with competition at the forefront; yet competition has no place in schools and universities, except perhaps on the sports field. Streaming and selection should be vetoed totally, and testing, until final exams (which should be coursework-based), scrapped.

All that divides within our societies should be called out and rejected, cooperation inculcated instead of competition in every area of human endeavor, including crucially the political-economic sphere; tolerance encouraged, unity built in all areas of society, local, national and global. These principles of goodness together with the golden seed of social justice – sharing – need to be the guiding ideals of a radically redesigned socio-economic paradigm, one that meets the needs of all to live dignified, fulfilled lives, promotes compassion, and, dare I say, cultivates love. Only then, will the fundamental causes of suicide, amongst young people in particular, but men and women of all ages, be eradicated.

Against meaninglessness and precarity: the crisis of work


By David Frayne

Source: ROAR Magazine

If work is vital for income, social inclusion and a sense of identity, then one of the most troubling contradictions of our time is that the centrality of work in our societies persists even when work is in a state of crisis. The steady erosion of stable and satisfying employment makes it less and less clear whether modern jobs can offer the sense of moral agency, recognition and pride required to secure work as a source of meaning and identity. The standardization, precarity and dubious social utility that characterize many modern jobs are a major source of modern misery.

Mass unemployment is also now an enduring structural feature of capitalist societies. The elimination of huge quantities of human labor by the development of machine technologies is a process that has spanned centuries. However, perhaps due to high-profile developments like Apple’s Siri computer assistant or Amazon’s delivery drones, the discussion around automation has once again been ignited.

An often-cited study by Carl Frey and Michael Osborne anticipates an escalation of technological unemployment over the coming years. Occupations at high risk include the likes of models, cooks and construction workers, thanks to advances such as digital avatars, burger flipping machines and the ability to manufacture prefabricated buildings in factories with robots. It is also anticipated that advances in artificial intelligence and machine learning will allow an increasing quantity of cognitive work tasks to become automated.

What all of this means is that we are steadily becoming a society of workers without work: a society of people who are materially, culturally and psychologically bound to paid employment, but for whom there are not enough stable and meaningful jobs to go around. Perversely, the most pressing problem for many people is no longer exploitation, but the absence of opportunities to be sufficiently and dependably exploited. The impact of this problem in today’s epidemic of anxiety and exhaustion should not be underestimated.

What makes the situation all the crueler is the pervasive sense that the precarious victims of the crisis are somehow personally responsible for their fate. In the UK, barely a week goes by without a smug reaffirmation of the work ethic in the media, or some story that constructs unemployment as a form of deviance. The UK television show Benefits Street comes to mind, but perhaps the most outrageous example in recent times was not from the world of trash TV, but from Dr. Adam Perkins’ thesis, The Welfare Trait. Published last year, Perkins’ book tackled what he defined as the “employment-resistant personality”. Joblessness is explained in terms of an inter-generationally transmitted psychological disorder. Perkins’ study is the most polished product of the ideology of work one can imagine. His study is so dazzled by its own claims to scientific objectivity, so impervious to its own grounding in the work ethic, that it beggars belief.

It seems we find ourselves at a rift. On the one hand, work has been positioned as a central source of income, solidarity and social recognition, whereas on the other, the promise of stable, meaningful and satisfying employment crumbles around us. The crucial question: how should societies adjust to this deepening crisis of work?

 


This is an excerpt from David Frayne’s “Towards a Post-Work Society”, which will appear in ROAR Issue #2, The Future of Work, scheduled for release in June/July.

Will Robots Take Your Job?


By Nick Srnicek and Alex Williams

Source: ROAR

In recent months, a range of studies has warned of an imminent job apocalypse. The most famous of these—a study from Oxford—suggests that up to 47 percent of US jobs are at high-risk of automation over the next two decades. Its methodology—assessing likely developments in technology, and matching them up to the tasks typically deployed in jobs—has been replicated since then for a number of other countries. One study finds that 54 percent of EU jobs are likely automatable, while the chief economist of the Bank of England has argued that 45 percent of UK jobs are similarly under threat.

This is not simply a rich-country problem, either: low-income economies look set to be hit even harder by automation. As low-skill, low-wage and routine jobs have been outsourced from rich capitalist countries to poorer economies, these jobs are also highly susceptible to automation. Research by Citi suggests that for India 69 percent of jobs are at risk, for China 77 percent, and for Ethiopia a full 85 percent of current jobs. It would seem that we are on the verge of a mass job extinction.

Nothing New?

For many economists however, there is nothing to worry about. If we look at the history of technology and the labor market, past experiences would suggest that automation has not caused mass unemployment. Automation has always changed the labor market. Indeed, one of the primary characteristics of the capitalist mode of production has been to revolutionize the means of production—to really subsume the labor process and reorganize it in ways that more efficiently generate value. The mechanization of agriculture is an early example, as is the use of the cotton gin and spinning jenny. With Fordism, the assembly line turned complex manufacturing jobs into a series of simple and efficient tasks. And with the era of lean production, we have had the computerized management of long commodity chains turn the production process into a more and more heavily automated system.

In every case, we have not seen mass unemployment. Instead we have seen some jobs disappear while others have been created, replacing the lost jobs and also providing the new jobs necessary for a growing population. The only times we see massive unemployment tend to be the result of cyclical factors, as in the Great Depression, rather than some secular trend towards higher unemployment resulting from automation. On the basis of these considerations, most economists believe that the future of work will likely be the same as the past: some jobs will disappear, but others will be created to replace them.

In typical economist fashion, however, these thoughts neglect the broader social context of earlier historical periods. Capitalism may not have seen a massive upsurge in unemployment, but this is not a necessary outcome. Rather, it was dependent upon unique circumstances of earlier moments – circumstances that are missing today. In the earliest periods of automation, there was a major effort by the labor movement to reduce the working week. It was a successful project that reduced the week from around 60 hours at the turn of the century down to 40 hours during the 1930s, and very nearly down to 30 hours. In this context, it was no surprise that Keynes would famously extrapolate to a future where we all worked 15 hours a week. He was simply looking at the existing labor movement. With reduced work per person, the remaining work would be spread around more evenly. The impact of technology at that time was therefore heavily muted by a 33 percent reduction in the amount of work per person.

Today, by contrast, we have no such movement pushing for a reduced working week, and the effects of automation are likely to be much more serious. Similar issues hold for the postwar era. With most Western economies left in ruins, and massive American support for the revitalization of these economies, the postwar era saw incredibly high levels of economic growth. With the further addition of full employment policies, this period also saw incredibly high levels of job growth and a compact between trade unions and capital to maintain a sufficient amount of good jobs. This led to healthy wage growth and, subsequently, healthy growth in aggregate demand to stimulate the economy and keep jobs coming. Moreover, this was a period where nearly 50 percent of the potential labor force was constrained to the household.

Under these unique circumstances, it is no wonder that capitalism was able to create enough jobs even as automation continued to transform the labor process. Today, we have sluggish economic growth, no commitments to full employment (even as we have commitments to harsh welfare policies), stagnant wage growth, and a major influx of women into the labor force. The context for a wave of automation is drastically different from the way it was before.

Likewise, the types of technology that are being developed and potentially introduced into the labor process are significantly different from earlier technologies. Whereas earlier waves of automation affected what economists call “routine work” (work that can be laid out in a series of explicit steps), today’s technology is beginning to affect non-routine work. The difference is between a factory job on an assembly line and driving a car in the chaotic atmosphere of the modern urban environment. Research from economists like David Autor and Maarten Goos shows that the decline of routine jobs in the past 40 years has played a significant role in increased job polarization and rising inequality. While these jobs are gone, and highly unlikely to come back, the next wave of automation will affect the remaining sphere of human labor. An entire range of low-wage jobs are now potentially automatable, involving both physical and mental labor.

Given that it is quite likely that new technologies will have a larger impact on the labor market than earlier waves of technological change, what is likely to happen? Will robots take your job? While one side of the debate warns of imminent apocalypse and the other yawns from the historical repetition, both tend to neglect the political economy of automation—particularly the role of labor. Put simply, if the labor movement is strong, we are likely to see more automation; if the labor movement is weak, we are likely to see less automation.

Workers Fight Back

In the first scenario, a strong labor movement is able to push for higher and higher wages (particularly relative to globally stagnant productivity growth). But the rising cost of labor means that machines become relatively cheap in comparison. We can already see this in China, where real wages have been surging for more than 10 years, thereby making Chinese labor increasingly less cheap. The result is that China has become the world’s biggest investor in industrial robots, and numerous companies—most famously Foxconn—have all stated their intentions to move towards increasingly automated factories.

This is the archetype of a highly automated world, but in order to be achievable under capitalism it requires that the power of labor be strong, given that the relative costs of labor and machines are key determinants for investment. What then happens under these circumstances? Do we get mass unemployment as robots take all the jobs? The simple answer is no. Rather than mass decimation of jobs, most workers who have their jobs automated end up moving into new sectors.

In the advanced capitalist economies this has been happening over the past 40 years, as workers move from routine jobs to non-routine jobs. As we saw earlier, the next wave of automation is different, and therefore its effects on the labor market are also different. Some job sectors are likely to take heavy hits under this scenario. Jobs in retail and transport, for instance, will likely be heavily affected. In the UK, there are currently 3 million retail workers, but estimates by the British Retail Consortium suggest this may decrease by a million over the next decade. In the US, there are 3.4 million cashiers alone—nearly all of whose work could be automated. The transport sector is similarly large, with 3.7 million truck drivers in the US, most of whose jobs could be incrementally automated as self-driving trucks become viable on public roads. Large numbers of workers in such sectors are likely to be pushed out of their jobs if mass automation takes place.

Where will they go? The story that Silicon Valley likes to tell us is that we will all become freelance programmers and software developers and that we should all learn how to code to succeed in their future utopia. Unfortunately they seem to have bought into their own hype and missed the facts. In the US, 1.8 percent of all jobs require knowledge of programming. This compares to the agricultural sector, which creates about 1.5 percent of all American jobs, and to the manufacturing sector, which employs 8.1 percent of workers in this deindustrialized country. Perhaps programming will grow? The facts here are little better. The Bureau of Labor Statistics (BLS) projects that by 2024 jobs involving programming will be responsible for a tiny 2.2 percent of the jobs available. If we look at the IT sector as a whole, according to Citi, it is expected to take up less than 3 percent of all jobs.

What about the people needed to take care of the robots? Will we see a massive surge in jobs here? Presently, robot technicians and engineers take up less than 0.1 percent of the job market—by 2024, this will dwindle even further. We will not see a major increase in jobs taking care of robots or in jobs involving coding, despite Silicon Valley’s best efforts to remake the world in its image.

This continues a long trend of new industries being very poor job creators. We all know about how few employees worked at Instagram and WhatsApp when they were sold for billions to Facebook. But the low levels of employment are a widespread sectoral problem. Research from Oxford has found that in the US, only 0.5 percent of the labor force moved into new industries (like streaming sites, web design and e-commerce) during the 2000s. The future of work does not look like a bunch of programmers or YouTubers.

In fact, the fastest growing job sectors are not for jobs that require high levels of education at all. The belief that we will all become high-skilled and well-paid workers is ideological mystification at its purest. The fastest growing job sector, by far, is the healthcare industry. In the US, the BLS estimates this sector to create 3.8 million new jobs between 2014 and 2024. This will increase its share of employment from 12 percent to 13.6 percent, making it the biggest employing sector in the country. The jobs of “healthcare support” and “healthcare practitioner” alone will contribute 2.3 million jobs—or 25 percent of all new jobs expected to be created.

There are two main reasons why this sector will be such a magnet for workers forced out of other sectors. In the first place, the demographics of high-income economies all point towards a significantly growing elderly population. Fewer births and longer lives (typically with chronic conditions rather than infectious diseases) will put more and more pressure on our societies to take care of the elderly, and force more and more people into care work. Yet this sector is not amenable to automation; it is one of the last bastions of human-centric skills like creativity, knowledge of social context and flexibility. This means the demand for labor is unlikely to decrease in this sector, as productivity remains low, skills remain human-centric, and demographics make it grow.

In the end, under the scenario of a strong labor movement, we are likely to see wages rise, which will cause automation to rapidly proceed in certain sectors, while workers are forced to struggle for jobs in a low-paying healthcare sector. The result is the continued elimination of middle-wage jobs and the increased polarization of the labor market as more and more are pushed into the low-wage sectors. On top of this, a highly educated generation that was promised secure and well-paying jobs will be forced to find lower-skilled jobs, putting downward pressure on wages—generating a “reserve army of the employed”, as Robert Brenner has put it.

Workers Fall Back

Yet what happens if the labor movement remains weak? Here we have an entirely different future of work awaiting us. In this case, we end up with stagnant wages, and workers remain relatively cheap compared to investment in new equipment. The consequences of this are low levels of business investment, and subsequently, low levels of productivity growth. Absent any economic reason to invest in automation, businesses fail to increase the productivity of the labor process. Perhaps unexpectedly, under this scenario we should expect high levels of employment as businesses seek to maximize the use of cheap labor rather than investing in new technology.

This is more than a hypothetical scenario, as it rather accurately describes the situation in the UK today. Since the 2008 crisis, real wages have stagnated and even fallen. Real average weekly earnings have been rising since 2014, but even after eight years they have yet to return to their pre-crisis levels. This has meant that businesses have had incentives to hire cheap workers rather than invest in machines – and the low levels of investment in the UK bear this out. Since the crisis, the UK has seen long periods of decline in business investment – the most recent being a 0.4 percent decline between Q1 2015 and Q1 2016. The result of low levels of investment has been virtually zero growth in productivity: from 2008 to 2015, growth in output per worker has averaged 0.1 percent per year. Almost all of the UK’s recent growth has come from throwing more bodies into the economic machine, rather than improving the efficiency of the economy. Even relative to slow productivity growth across the world, the UK is particularly struggling.

With cheap wages, low investment, and low productivity, companies have instead been hiring workers. Indeed, the UK employment rate has reached its highest level on record: 74.2 percent as of May 2016. Likewise, unemployment is low at 5.1 percent, especially compared with the UK's European neighbors, where the average is nearly double that. So, somewhat surprisingly, an environment with a weak labor movement leads here to high levels of employment.

What is the quality of these jobs, however? We have already seen that wages have been stagnant and that two-thirds of net job creation since 2008 has been in self-employment. There has also been a major increase in zero-hour contracts (employment arrangements that guarantee workers no hours at all). Estimates suggest that up to 5 percent of the labor force is in such arrangements, with over 1.7 million zero-hour contracts in circulation. Full-time employment is down as well: as a percentage of all jobs, it has fallen from its pre-crisis level of 65 percent to 63 percent and has refused to budge even as the economy grows (slowly). The percentage of involuntary part-time workers, those who would prefer a full-time job but cannot find one, more than doubled after the crisis and has barely begun to recover since.

Likewise with temporary employees: involuntary temporary workers as a percentage of all temporary workers rose from below 25 percent to over 40 percent during the crisis, and have only partly recovered, to around 35 percent today. A vast number of workers would prefer more permanent and full-time jobs but can no longer find them. The UK is increasingly becoming a low-wage and precarious labor market (or, in the Tories' view, a competitive and flexible one). This, we would argue, is the future that obtains with a weak labor movement: low levels of automation, perhaps, but at the expense of wages (and aggregate demand), permanent jobs, and full-time work. We may not get a fully automated future, but the alternative looks just as problematic.

These are therefore the two poles of possibility for the future of work. On the one hand, a highly automated world where workers are pushed out of much low-wage non-routine work and into lower-wage care work. On the other hand, a world where humans beat robots but only through lower wages and more precarious work. In either case, we need to build up the social systems that will enable people to survive and flourish in the midst of these significant changes. We need to explore ideas like a Universal Basic Income, we need to foster investment in automation that could eliminate the worst jobs in society, and we need to recover that initial desire of the labor movement for a shorter working week.

We must reclaim the right to be lazy—which is neither a demand to be lazy nor a belief in the natural laziness of humanity, but rather the right to refuse domination by a boss, by a manager, or by a capitalist. Will robots take our jobs? We can only hope so.

Note: All uncited figures either come directly from, or are based on authors’ calculations of, data from the Bureau of Labor Statistics, O*NET and the Office for National Statistics.

Is the US Economy Heading for Recession?