After the Crash

Dispatches From a Long Recovery (Est. 10/2024)


Who Profits from the Pandemic?

West Virginia National Guard members reporting to a Charleston nursing home to assist with Covid-19 testing. April 6, 2020. (U.S. Army National Guard, Edwin L. Wriston)

By Pepe Escobar

Source: Consortium News

You don’t need to read Michel Foucault’s work on biopolitics to understand that neoliberalism – in deep crisis since at least 2008 – is a control/governing technique in which surveillance capitalism is deeply embedded.

But now, with the world-system collapsing at breathtaking speed, neoliberalism is at a loss to deal with the next stage of dystopia, ever present in our hyper-connected angst: global mass unemployment.

Henry Kissinger, anointed oracle/gatekeeper of the ruling class, is predictably scared. He claims that “sustaining the public trust is crucial to social solidarity.” He’s convinced the Hegemon should “safeguard the principles of the liberal world order.” Otherwise, “failure could set the world on fire.”

That’s so quaint. Public trust is dead across the spectrum. The liberal world “order” is now social Darwinist chaos. Just wait for the fire to rage.

The numbers are staggering. The Japan-based Asian Development Bank (ADB), in its annual economic report, may not have been exactly original. But it did note that the impact of the “worst pandemic in a century” will be as high as $4.1 trillion, or 4.8 percent of global GDP.

This is an underestimation, as “supply disruptions, interrupted remittances, possible social and financial crises, and long-term effects on health care and education are excluded from the analysis.”

We cannot even start to imagine the cataclysmic social consequences of the crash. Entire sub-sectors of the global economy may not be recomposed at all.

The International Labor Organization (ILO) forecasts a conservative additional 24.7 million people unemployed worldwide – especially in aviation, tourism and hospitality.

The global aviation industry is a humongous $2.7 trillion business. That’s 3.6 percent of global GDP. It employs 2.7 million people. When you add air transport and tourism — everything from hotels and restaurants to theme parks and museums — it accounts for a minimum of 65.5 million jobs around the world.

According to the ILO, income losses for workers may range from $860 billion to an astonishing $3.4 trillion. “Working poverty” will be the new normal – especially across the Global South.

“Working poor,” in ILO terminology, means employed people living in households with a per capita income below the poverty line of $2 a day. As many as an additional 35 million people worldwide will become working poor in 2020.

Turning to the prospects for global trade, it’s enlightening that this report on how the economy may rebound centers on the notoriously hyperactive merchants and traders of Yiwu in eastern China – the world’s busiest small-commodity business hub.

Their experience spells out a long and difficult recovery. With the rest of the world in a coma, Lu Ting, chief China economist at Nomura in Hong Kong, stresses that China faces a 30 percent decline in external demand at least until next fall.

Neoliberalism in Reverse?

In the next stage, the strategic competition between the U.S. and China will be no-holds-barred, as emerging narratives of China’s new, multifaceted global role – on trade, technology, cyberspace, climate change – will set in, even more far-reaching than the New Silk Roads. That will also be the case in global public health policies. Get ready for an accelerated Hybrid War between the “Chinese virus” narrative and the Health Silk Road.

The latest report by the China Institute of International Studies would be quite helpful for the West — hubris permitting — to understand how Beijing adopted key measures putting the health and safety of the general population first.

Now, as the Chinese economy slowly picks up, hordes of fund managers from across Asia are tracking everything from trips on the metro to noodle consumption to preview what kind of economy may emerge post-lockdown.

In contrast, across the West, the prevailing doom and gloom elicited a priceless editorial from The Financial Times. Like James Brown in the 1980 pop epic The Blues Brothers, the City of London seems to have seen the light, or at least gives the impression it really means it. Neoliberalism in reverse. New social contract. “Secure” labor markets. Redistribution.

Cynics won’t be fooled. The cryogenic state of the global economy spells out a vicious Great Depression 2.0 and an unemployment tsunami. The plebs eventually reaching for the pitchforks and the AR-15s en masse is now a distinct possibility. Might as well start throwing a few breadcrumbs to the beggars’ banquet.

That may apply to European latitudes. But the American story is in a class by itself.

For decades, we were led to believe that the world-system put in place after WWII provided the U.S. with unrivalled structural power. Now, all that’s left is structural fragility, grotesque inequalities, unpayable Himalayas of debt, and a rolling crisis.

No one is fooled anymore by the Fed’s magic quantitative easing powers, or the acronym salad – TALF, ESF, SPV – built into the Fed/U.S. Treasury exclusive obsession with big banks, corporations and the Goddess of the Market, to the detriment of the average American.

Only a few months ago, a serious discussion revolved around the $2.5 quadrillion derivatives market imploding and collapsing the global economy on the back of skyrocketing oil prices, in case the Strait of Hormuz – for whatever reason – was shut down.

Now it’s about Great Depression 2.0: the whole system crashing as a result of the shutdown of the global economy. The questions are absolutely legitimate: is the political and social cataclysm of the global economic crisis a larger catastrophe than Covid-19 itself? And will it provide an opportunity to end neoliberalism and usher in a more equitable system, or something even worse?

‘Transparent’ BlackRock

Wall Street, of course, lives in an alternative universe. In a nutshell, Wall Street turned the Fed into a hedge fund. The Fed is going to own at least two thirds of all U.S. Treasury bills in the market before the end of 2020.

The U.S. Treasury will be buying every security and loan in sight while the Fed will be the banker – financing the whole scheme.

So essentially this is a Fed/Treasury merger. A behemoth dispensing loads of helicopter money.

And the winner is BlackRock—the biggest money manager on the planet, with tentacles everywhere, managing the assets of over 170 pension funds, banks, foundations and insurance companies and, in fact, a great deal of the money in private equity and hedge funds. BlackRock — promising to be fully “transparent” — will buy these securities and manage those dodgy SPVs on behalf of the Treasury.

BlackRock, founded in 1988 by Larry Fink, may not be as big as Vanguard, but it’s the top investor in Goldman Sachs, along with Vanguard and State Street, and with $6.5 trillion in assets, bigger than Goldman Sachs, JP Morgan and Deutsche Bank combined.

Now, BlackRock is the new operating system (OS) of the Fed and the Treasury. The world’s biggest shadow bank – and no, it’s not Chinese.

Compared to this high-stakes game, mini-scandals such as the one around Georgia Senator Kelly Loeffler are peanuts. Loeffler allegedly profited from inside information on Covid-19 from the CDC to make a stock market killing. Loeffler is married to Jeffrey Sprecher – who happens to be the chairman of the NYSE, installed by Goldman Sachs.

While corporate media followed this story like headless chickens, post-Covid-19 plans, in Pentagon parlance, “move forward” by stealth.

The price? A meager $1,200 check per person for a month. Everyone knows that, based on median income, a typical American family would need $12,000 to survive for two months. Treasury Secretary Steven Mnuchin, in an act of supreme effrontery, allows them a mere 10 percent of that. So American taxpayers will be left with a tsunami of debt while selected Wall Street players grab the whole loot, part of an unparalleled transfer of wealth upwards, complete with bankruptcies en masse of small and medium businesses.

Fink’s letter to his shareholders almost gives the game away: “I believe we are on the edge of a fundamental reshaping of finance.”

And right on cue, he forecasted that, “in the near future – and sooner than most anticipate – there will be a significant reallocation of capital.”

He was referring, then, to climate change. Now it refers to Covid-19.

Implant Our Nanochip, Or Else?

The game ahead for the elites, taking advantage of the crisis, might well contain these four elements: a social credit system, mandatory vaccination, a digital currency and a Universal Basic Income (UBI). This is what used to be called, according to the decades-old, time-tested CIA playbook, a “conspiracy theory.” Well, it might actually happen.

A social credit system is something China already set up in 2014. Before the end of 2020, every Chinese citizen will be assigned his or her own credit score – a de facto “dynamic profile,” elaborated with extensive use of AI and the internet of things (IoT), including ubiquitous facial recognition technology. This implies, of course, 24/7 surveillance, complete with Blade Runner-style roving robotic birds.

The U.S., the U.K., France, Germany, Canada, Russia and India may not be far behind. Germany, for instance, is tweaking its universal credit rating system, SCHUFA. France has an ID app very similar to the Chinese model, verified by facial recognition.

Mandatory vaccination is Bill Gates’s dream, working in conjunction with the WHO, the World Economic Forum (WEF) and Big Pharma. He wants “billions of doses” to be enforced across the Global South. And it could be a cover for everyone getting a digital implant.

Here it is, in his own words. At 34:15: “Eventually what we’ll have to have is certificates of who’s a recovered person, who’s a vaccinated person…Because you don’t want people moving around the world where you’ll have some countries that won’t have it under control, sadly. You don’t want to completely block off the ability for people to go there and come back and move around.”

Then comes the last sentence, which was erased from the official TED video. This was noted by Rosemary Frei, who has a master’s in molecular biology and is an independent investigative journalist in Canada. Gates says: “So eventually there will be this digital immunity proof that will help facilitate the global reopening up.”

This “digital immunity proof” is crucial to keep in mind, something that could be misused by the state for nefarious purposes.

The three top candidates to produce a coronavirus vaccine are the American biotech firm Moderna and Germany’s CureVac and BioNTech.

Digital cash might then become an offspring of blockchain. Not only the U.S., but China and Russia are also interested in a national crypto-currency. A global currency – of course controlled by central bankers – may soon be adopted in the form of a basket of currencies, and would circulate virtually. Endless permutations of the toxic cocktail of IoT, blockchain technology and the social credit system could loom ahead.

Spain has already announced that it is introducing UBI, and wants it to be permanent. It’s a form of insurance for the elite against social uprisings, especially if millions of jobs never come back.

So the key working hypothesis is that Covid-19 could be used as cover for the usual suspects to bring in a new digital financial system and a mandatory vaccine with a “digital identity” nanochip with dissent not tolerated: what Slavoj Zizek calls the “erotic dream” of every totalitarian government.

Yet underneath it all, amid so much anxiety, a pent-up rage seems to be gathering strength, to eventually explode in unforeseeable ways. As much as the system may be changing at breakneck speed, there’s no guarantee even the 0.1 percent will be safe.

A 2% Financial Wealth Tax Would Provide A $12,000 Annual Stipend To Every American Household

Careful analysis reveals a number of excellent arguments for the implementation of a Universal Basic Income.

By Paul Buchheit

Source: Nation of Change

It’s not hard to envision the benefits in work opportunities, stress reduction, child care, entrepreneurial activity, and artistic pursuits for American households with an extra $1,000 per month. It’s also very easy to justify a financial wealth tax, given that the dramatic stock market surge in recent years is largely due to an unprecedented degree of technological and financial productivity that derives from the work efforts and taxes of ALL Americans. A 2% annual tax on financial wealth is a small price to pay for the great fortunes bestowed on the most fortunate Americans.

The REASONS? Careful analysis reveals a number of excellent arguments for the implementation of a Universal Basic Income (UBI).

(1) Our Jobs are Disappearing

A 2013 Oxford study determined that nearly HALF of American jobs are at risk of being replaced by computers, AI, and robots. Society simply can’t keep up with technology. As for the skeptics who cite the Industrial Revolution and its job-enhancing aftermath (which actually took 60 years to develop), the McKinsey Global Institute says that society is being transformed at a pace “ten times faster and at 300 times the scale” of the radical changes of two hundred years ago.

(2) Half of America is Stressed Out or Sick

Half of Americans are in or near poverty, unable to meet emergency expenses, living from paycheck to paycheck, and getting physically and emotionally ill because of it. Numerous UBI experiments have led to increased well-being for their participants. A guaranteed income reduces the debilitating effects of inequality. As one recipient put it, “It takes me out of depression…I feel more sociable.”

(3) Children Need Our Help

This could be the best reason for monthly household stipends. Parents, especially mothers, are unable to work outside the home because of the all-important need to care for their children. Because we currently lack a UBI, more and more children are facing hunger and health problems and educational disadvantages.

(4) We Need More Entrepreneurs

A sudden influx of $12,000 per year for 126 million households will greatly stimulate the economy, potentially allowing millions of Americans to TAKE RISKS that could lead to new forms of innovation and productivity.

Perhaps most significantly, a guaranteed income could relieve some of the pressure on our newest generation of young adults, who are deep in debt, underemployed, increasingly unable to live on their own, and ill-positioned to take the entrepreneurial chances that are needed to spur innovative business growth. No other group of Americans could make more productive use of an immediate boost in income.

(5) We Need the Arts & Sciences

A recent Gallup poll found that nearly 70% of workers don’t feel ‘engaged’ (enthusiastic and committed) in their jobs. The work chosen by UBI recipients could unleash artistic talents and creative impulses that have been suppressed by personal financial concerns, leading, very possibly, to a repeat of the 1930s, when the Works Progress Administration hired thousands of artists and actors and musicians to help sustain the cultural needs of the nation.

Arguments against

The usual uninformed and condescending opposing argument is that UBI recipients will waste the money, spending it on alcohol and drugs and other ‘temptation’ goods. Not true. Studies from the World Bank and the Brooks World Poverty Institute found that money going to poor families is used primarily for essential needs, and that the recipients experience greater physical and mental well-being as a result of their increased incomes. Other arguments against the workability of the UBI are countered by the many successful experiments conducted in the present and recent past: Finland, Canada, the Netherlands, Kenya, India, Great Britain, Uganda, Namibia, and in the U.S. in Alaska and California.

How to pay for it

Largely because of the stock market, U.S. financial wealth has surged to $77 trillion, with the richest 10% owning over three-quarters of it. Just a 2 percent tax on total financial wealth would generate enough revenue to provide a $12,000 annual stipend to every American household (including those of the richest families).
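
As a rough back-of-envelope check on that claim, the following sketch uses only the figures cited in this article (roughly $77 trillion in U.S. financial wealth, a 2 percent annual tax, a $12,000 annual stipend, and the approximately 126 million households mentioned above):

# Back-of-envelope check of the 2 percent wealth-tax claim (a sketch, not an official costing).
# Assumptions, all taken from figures cited in this article:
#   total U.S. financial wealth of about $77 trillion,
#   a flat 2 percent annual tax on that wealth,
#   a $12,000 annual stipend for each of roughly 126 million households.

TOTAL_FINANCIAL_WEALTH = 77e12      # dollars
TAX_RATE = 0.02
HOUSEHOLDS = 126e6
STIPEND_PER_HOUSEHOLD = 12_000      # dollars per year

revenue = TOTAL_FINANCIAL_WEALTH * TAX_RATE        # about $1.54 trillion per year
cost = HOUSEHOLDS * STIPEND_PER_HOUSEHOLD          # about $1.51 trillion per year

print(f"Wealth-tax revenue:  ${revenue / 1e12:.2f} trillion per year")
print(f"Cost of the stipend: ${cost / 1e12:.2f} trillion per year")
print(f"Margin:              ${(revenue - cost) / 1e9:.0f} billion per year")

On those figures the tax just covers the stipend, with a margin of roughly $28 billion a year.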

It’s easy to justify a wealth tax. Over half of all basic research is paid for by our tax dollars. All the technology in our phones and computers started with government research and funding. Pharmaceutical companies wouldn’t exist without decades of support from the National Institutes of Health. Yet the tech and pharmaceutical companies claim patents on the products paid for and developed by the American people.

The collection of a wealth tax would not be simple, since only about half of U.S. financial wealth is held directly in equities and liquid assets (Table 5-2). But it’s doable. As Thomas Piketty notes, “A progressive tax on net wealth is better than a progressive tax on consumption because first, net wealth is better defined for very wealthy individuals…”

And certainly a financial industry that knows how to package worthless loans into A-rated mortgage-backed securities should be able to figure out how to tax the investment companies that manage the rest of our ever-increasing national wealth.

 

Newsletter: From Neoliberal Injustice To Economic Democracy

By Kevin Zeese and Margaret Flowers

Source: Dissident Voice

The work to transform society involves two parallel paths: resisting harmful systems and institutions and creating new systems and institutions to replace them. Our focus in this article is on positive work that people are doing to change current systems in ways that reduce the wealth divide, meet basic needs, ensure sustainability, create economic and racial justice and provide people with greater control over their lives.

When we and others organized the Occupation of Washington, DC in 2011, we subtitled the encampment ‘Stop the Machine, Create a New World’, to highlight both aspects of movement tasks — resistance and creation. One Popular Resistance project, It’s Our Economy, reports on economic democracy and new forms of ownership and economic development.

Throughout US history, resistance movements have coincided with the growth of economic democracy alternatives such as worker cooperatives, mutual aid and credit unions. John Curl writes about this parallel path in “For All the People,” which we summarized in “Cooperatives and Community Work are Part of American DNA.”

Mahatma Gandhi’s program of nonviolent resistance, satyagraha, had two components: obstructive resistance and constructive programs. Gandhi promoted Swaraj, a form of “self-rule” that would bring independence not just from the British Empire but also from the state through building community-based systems of self-sufficiency. He envisioned economic democracy at the village level. With his approach, economics is tied to ethics and justice — an economy that hurts the moral well-being of an individual or nation is immoral and business and industry should be measured not by shareholder profit but by their impact on people and community.

Today, we suffer from an Empire Economy. We can use Swaraj to break free from it. Many people are working to build a new economy and many cities are putting in place examples of economic democracy. One example of attempted citywide transformation is Cooperation Jackson in Jackson, Mississippi.

Economic Democracy in response to neoliberalism

In his new book, Out of the Wreckage: A New Politics for an Age of Crisis, George Monbiot argues that a toxic ideology of greed and self-interest, resulting in extreme competition and individualism, rules the current economic and political culture. It is built on a misrepresentation of human nature. Evolutionary biology and psychology show that humans are actually supreme altruists and cooperators. Monbiot argues that the economy and government can be radically reorganized from the bottom up, enabling people to take back control and overthrow the forces that have thwarted human ambitions for a more just and equal society.

In an interview with Mark Karlin, Monbiot describes how neoliberalism arose over decades, beginning in the 1930s and 40s with John Maynard Keynes, Friedrich Hayek and others, and is now losing steam, as ideologies do. Monbiot says we need a new “Restoration Story.”

We are in the midst of writing that new story as people experience the injustice of the current system, with economic and racial inequality, destruction of the environment and never-ending wars. Indeed, we are further ahead in creating the new Restoration Story than we realize.

Cooperatives

New research from the University of Wisconsin–Madison’s Center for Cooperatives (UWCC) has found there are 39,594 cooperatives in the United States, excluding the housing sector, and there are 7 million employer businesses that remain “potential co-op candidates.” These cooperatives account for more than $3 trillion in assets, more than $500 billion in annual revenue and sustain nearly two million jobs. This May, the Office of Management and Budget approved including co-op questions in the Economic Census, so that next year the US should have more accurate figures. The massive growth of cooperatives impacts many segments of the economy, including banking, food, energy, transit and housing, among others.

In cooperatives, workers or consumers decide directly how their businesses operate and work together to achieve their goals; it is a culture change from the competitive extreme capitalist view dominated by self-interest.

In Energy Democracy: Advancing Equity in Clean Energy Solutions, editors Denise Fairchild and Al Weinrub describe energy cooperatives that are creating a new model for how we organize the production and distribution of energy, which is decentralized, multi-racial and multi-class.

Lyn Benander of Co-op Power, a network of many cooperatives in New England and New York, writes that they transform not just energy but also their communities:

First, people come together across class and race to make change in their community by using their power as investors, workers, consumers, and citizens ready to take action together. Then, they work together to build community-owned enterprises with local capital and local jobs to serve local energy needs. It’s a proven strategy for making a real difference.

In Lancaster, CA, the mayor has turned the town into a solar energy capital where they produce power not just for themselves, but also to sell to other cities. They are also moving to create manufacturing jobs in electric buses, which more cities are buying, and energy storage. Research finds that rooftop solar and net-metering programs reduce electricity prices for all utility customers, not just those with solar panels. The rapid growth of rooftop solar is creating well-paying jobs at a rate that’s 17 times faster than the total U.S. economy. Rooftop solar, built on existing structures, such as homes and schools, puts energy choices in the hands of customers rather than centralized monopolies, thereby democratizing energy.

Including housing cooperatives would greatly increase the number of cooperatives. According to the National Association of Housing Cooperatives, “Housing cooperatives offer the more than one million families who live in them several benefits such as: a collective and democratic ownership structure, limited liability, lower costs and non-profit status.”  Residents of a mobile home park in Massachusetts decided to create a housing cooperative to put the residents in charge of the community when the owner planned to sell it.

Related to this are community land trusts. A section of land is owned in a trust run as a non-profit that represents the interests of local residents and businesses. Although the land is owned by the trust, buildings can be bought and sold. The trust lowers prices and can prevent gentrification.

Universal Basic Income

Another tool gaining greater traction is a universal basic income.  James King writes in People’s Policy Project that “. . . a universal basic income (UBI) – a cash payment made to every person in the country with no strings attached – is becoming increasingly popular in experimental policy circles. . . payments  [would be] large enough to guarantee a minimum standard of living to every person independent of work. In the US, that would be roughly $12,000 per person based on the poverty line.”

The wealth divide has become so extreme in the United States that nearly half of all people are living in poverty. A small UBI would provide peace of mind, financial security and the possibility of saving money and building some wealth. A report released this week by the Roosevelt Institute found that, on a conservative analysis, a UBI of $1,000 per month would grow the economy by 12.56 percent after an eight-year implementation; this translates to a total growth of $2.48 trillion.
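
As a quick sanity check (a sketch, assuming the 12.56 percent figure and the $2.48 trillion figure refer to the same GDP baseline), the two numbers imply the size of the economy the report is measuring against:

# Implied baseline behind the Roosevelt Institute figures quoted above.
# Assumption: the 12.56 percent growth and the $2.48 trillion refer to the same GDP baseline.

GROWTH_SHARE = 0.1256        # 12.56 percent growth after eight years
GROWTH_DOLLARS = 2.48e12     # $2.48 trillion of additional output

implied_baseline_gdp = GROWTH_DOLLARS / GROWTH_SHARE
print(f"Implied baseline GDP: ${implied_baseline_gdp / 1e12:.1f} trillion")   # about $19.7 trillion

That works out to roughly $19.7 trillion, broadly in line with U.S. GDP at the time the report was published.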

Public Finance

Another major area of economic democracy is the finance sector. At the end of 2016 there were 2,479 credit unions with assets under $20 million in the United States. Members who bank at credit unions are part of a cooperative bank where the members vote for the board and participate in other decisions.

Another economic democracy approach is a public bank, where a city, state or even the national government creates a bank using public dollars such as taxes and fee revenues. Public banks save millions of dollars that are usually paid in fees to Wall Street banks, and the savings can be used to fund projects such as infrastructure, transit, housing, healthcare and education, among other social needs. Public banks can also partner with community banks or credit unions to fund local projects. This could help to offset one of the negative impacts of Dodd-Frank, which has been a reduction in community banks. In testimony, the Secretary of the Treasury, Steven Mnuchin, said we could “end up in a world where we have four big banks in this country.”

North Dakota is the only state with a public bank, and it has the most diverse, locally-owned banking system in the country. Stacy Mitchell writes that “North Dakota has six times as many locally owned financial institutions per person as the rest of the nation. And these local banks and credit unions control a resounding 83 percent of deposits in the state, more than twice the 30 percent market share such banks have nationally.” Public banking campaigns are making progress in many parts of the country, among them Oakland, Los Angeles, Philadelphia, and Santa Fe.

Mutual Aid

When crises occur, no matter what their cause, people can work together cooperatively and outside of slow and unresponsive state systems to meet their needs. This is happening in Athens, Greece, which has been wracked by financial crisis and austerity for years. People have formed “networks of resistance” that meet in community assemblies organized around needs of the community, such as health care and food. They started with time banks as a base for a new non-consumer society.

Similar efforts are underway in Puerto Rico following the devastation of Hurricane Maria. A group called El Llamado is coordinating more than 20 mutual aid efforts, and providing political education and support for self-organizing at the same time.

As George Monbiot describes it, this is consistent with the truth about what human beings are:

We survived despite being weaker and slower than both our potential predators and most of our prey. We did so through developing, to an extraordinary degree, a capacity for mutual aid. As it was essential to our survival, this urge to cooperate was hard-wired into our brains through natural selection.

As we face more crises, whether in lack of access to health care, education, housing, food or economic and climate disasters, let’s remember that we have the capacity to meet our needs collectively.  In fact, every day, people are putting in place a new economic democracy that allows people to participate based on economic and racial justice as well as real democracy. As these alternatives are put in place, they may become dominant in our economy, communities and politics and bring real democracy and security to our lives.

 

Kevin Zeese and Margaret Flowers are co-directors of Popular Resistance. Read other articles by Kevin Zeese and Margaret Flowers.

The United States of Work

Employers exercise vast control over our lives, even when we’re not on the job. How did our bosses gain power that the government itself doesn’t hold?

By Miya Tokumitsu

Source: New Republic

Work no longer works. “You need to acquire more skills,” we tell young job seekers whose résumés at 22 are already longer than their parents’ were at 32. “Work will give you meaning,” we encourage people to tell themselves, so that they put in 60 hours or more per week on the job, removing them from other sources of meaning, such as daydreaming or social life. “Work will give you satisfaction,” we insist, even though it requires abiding by employers’ rules, and the unwritten rules of the market, for most of our waking hours. At the very least, work is supposed to be a means to earning an income. But if it’s possible to work full time and still live in poverty, what’s the point?

Even before the global financial crisis of 2008, it had become clear that if waged work is supposed to provide a measure of well-being and social structure, it has failed on its own terms. Real household wages in the United States have remained stagnant since the 1970s, even as the costs of university degrees and other credentials rise. Young people find an employment landscape defined by unpaid internships, temporary work, and low pay. The glut of degree-holding young workers has pushed many of them into the semi- or unskilled labor force, making prospects even narrower for non–degree holders. Entry-level wages for high school graduates have in fact fallen. According to a study by the Federal Reserve Bank of New York, these lost earnings will depress this generation’s wages for their entire working lives. Meanwhile, those at the very top—many of whom derive their wealth not from work, but from returns on capital—vacuum up an ever-greater share of prosperity.

Against this bleak landscape, a growing body of scholarship aims to overturn our culture’s deepest assumptions about how work confers wealth, meaning, and care throughout society. In Private Government: How Employers Rule Our Lives (and Why We Don’t Talk About It), Elizabeth Anderson, a professor of philosophy at the University of Michigan, explores how the discipline of work has itself become a form of tyranny, documenting the expansive power that firms now wield over their employees in everything from how they dress to what they tweet. James Livingston, a historian at Rutgers, goes one step further in No More Work: Why Full Employment Is a Bad Idea. Instead of insisting on jobs for all or proposing that we hold employers to higher standards, Livingston argues, we should just scrap work altogether.

Livingston’s vision is the more radical of the two; his book is a wide-ranging polemic that frequently delivers the refrain “Fuck work.” But in original ways, both books make a powerful claim: that our lives today are ruled, above all, by work. We can try to convince ourselves that we are free, but as long as we must submit to the increasing authority of our employers and the labor market, we are not. We therefore fancy that we want to work, that work grounds our character, that markets encompass the possible. We are unable to imagine what a full life could be, much less to live one. Even more radically, both books highlight the dramatic and alarming changes that work has undergone over the past century—insisting that, in often unseen ways, the changing nature of work threatens the fundamental ideals of democracy: equality and freedom.

Anderson’s most provocative argument is that large companies, the institutions that employ most workers, amount to a de facto form of government, exerting massive and intrusive power in our daily lives. Unlike the state, these private governments are able to wield power with little oversight, because the executives and boards of directors that rule them are accountable to no one but themselves. Although they exercise their power to varying degrees and through both direct and “soft” means, employers can dictate how we dress and style our hair, when we eat, when (and if) we may use the toilet, with whom we may partner and under what arrangements. Employers may subject our bodies to drug tests; monitor our speech both on and off the job; require us to answer questionnaires about our exercise habits, off-hours alcohol consumption, and childbearing intentions; and rifle through our belongings. If the state held such sweeping powers, Anderson argues, we would probably not consider ourselves free men and women.

Employees, meanwhile, have few ways to fight back. Yes, they may leave the company, but doing so usually necessitates being unemployed or migrating to another company and working under similar rules. Workers may organize, but unions have been so decimated in recent years that their clout is greatly diminished. What’s more, employers are swift to fire anyone they suspect of speaking to their colleagues about organizing, and most workers lack the time and resources to mount a legal challenge to wrongful termination.

It wasn’t supposed to be this way. As corporations have worked methodically to amass sweeping powers over their employees, they have held aloft the beguiling principle of individual freedom, claiming that only unregulated markets can guarantee personal liberty. Instead, operating under relatively few regulations themselves, these companies have succeeded at imposing all manner of regulation on their employees. That is to say, they use the language of individual liberty to claim that corporations require freedom to treat workers as they like.

Anderson sets out to discredit such arguments by tracing them back to their historical origins. The notion that personal freedom is rooted in free markets, for instance, originated with the Levellers in seventeenth-century England, when working conditions differed substantially from today’s. The Levellers believed that a market society was essential to liberate individuals from the remnants of feudal hierarchies; their vision of utopia was a world in which men could meet and interact on terms of equality and dignity. Their ideas echoed through the writing and politics of later figures like John Locke, Adam Smith, Thomas Paine, and Abraham Lincoln, all of whom believed that open markets could provide the essential infrastructure for individuals to shape their own destiny.

An anti-statist streak runs through several of these thinkers, particularly the Levellers and Paine, who viewed markets as the bulwark against state oppression. Paine and Smith, however, would hardly qualify as hard-line contemporary libertarians. Smith believed that public education was essential to a fair market society, and Paine proposed a system of social insurance that included old-age pensions as well as survivor and disability benefits. Their hope was not for a world of win-or-die competition, but one in which open markets would allow individuals to make the fullest use of their talents, free from state monopolies and meddlesome bosses.

For Anderson, the latter point is essential; the notion of lifelong employment under a boss was anathema to these earlier visions of personal freedom. Writing in the 1770s, Smith assumes that independent actors in his market society will be self-employed, and uses butchers and bakers as his exemplars; his “pin factory,” meant to illustrate division of labor, employs only ten people. These thinkers could not envision a world in which most workers spend most of their lives performing wage labor under a single employer. In an address before the Wisconsin State Agricultural Society in 1859, Lincoln stated, “The prudent, penniless beginner in the world labors for wages awhile, saves a surplus with which to buy tools or land for himself, then labors on his own account another while, and at length hires another new beginner to help him.” In other words, even well into the nineteenth century, defenders of an unregulated market society viewed wage labor as a temporary stage on the way to becoming a proprietor.

Lincoln’s scenario does not reflect the way most people work today. Yet the “small business owner” endures as an American stock character, conjured by politicians to push through deregulatory measures that benefit large corporations. In reality, thanks to a lack of guaranteed, nationalized health care and threadbare welfare benefits, setting up a small business is simply too risky a venture for many Americans, who must rely on their employers for health insurance and income. These conditions render long-term employment more palatable than a precarious existence of freelance gigs, which further gives companies license to oppress their employees.

The modern relationship between employer and employee began with the rise of large-scale companies in the nineteenth century. Although employment contracts date back to the Middle Ages, preindustrial arrangements bore little resemblance to the documents we know today. Like modern employees, journeymen and apprentices often served their employers for years, but masters performed the same or similar work in proximity to their subordinates. As a result, Anderson points out, working conditions—the speed required of workers and the hazards to which they might be exposed—were kept in check by what the masters were willing to tolerate for themselves.

The Industrial Revolution brought radical changes, as companies grew ever larger and management structures more complex. “Employers no longer did the same kind of work as employees, if they worked at all,” Anderson observes. “Mental labor was separated from manual labor, which was radically deskilled.” Companies multiplied rapidly in size. Labor contracts now bonded workers to massive organizations in which discipline, briefs, and decrees flowed downward, but whose leaders were unreachable by ordinary workers. Today, fast food workers or bank tellers would be hard-pressed to petition their CEOs at McDonald’s or Wells Fargo in person.

Despite this, we often speak of employment contracts as agreements between equals, as if we are living in Adam Smith’s eighteenth-century dream world. In a still-influential paper from 1937 titled “The Nature of the Firm,” the economist and Nobel laureate Ronald Coase established himself as an early observer and theorist of corporate concerns. He described the employment contract not as a document that handed the employer unaccountable powers, but as one that circumscribed those powers. In signing a contract, the employee “agrees to obey the directions of an entrepreneur within certain limits,” he emphasized. But such characterizations, as Anderson notes, do not reflect reality; most workers agree to employment without any negotiation or even communication about their employer’s power or its limits. The exceptions to this rule are few and notable: top professional athletes, celebrity entertainers, superstar academics, and the (increasingly small) groups of workers who are able to bargain collectively.

Yet because employment contracts create the illusion that workers and companies have arrived at a mutually satisfying agreement, the increasingly onerous restrictions placed on modern employees are often presented as “best practices” and “industry standards,” framing all sorts of behaviors and outcomes as things that ought to be intrinsically desired by workers themselves. Who, after all, would not want to work on something in the “best” way? Beyond employment contracts, companies also rely on social pressure to foster obedience: If everyone in the office regularly stays until seven o’clock every night, who would risk departing at five, even if it’s technically allowed? Such social prods exist alongside more rigid behavioral codes that dictate everything from how visible an employee’s tattoo can be to when and how long workers can break for lunch.

Many workers, in fact, have little sense of the legal scope of their employer’s power. Most would be shocked to discover that they could be fired for being too attractive, declining to attend a political rally favored by their employer, or finding out that their daughter was raped by a friend of the boss—all real-life examples cited by Anderson. Indeed, it is only after dismissal for such reasons that many workers learn of the sweeping breadth of at-will employment, the contractual norm that allows American employers to fire workers without warning and without cause, except for reasons explicitly deemed illegal.

In reality, the employment landscape is even more dire than Anderson outlines. The rise of staffing or “temp” agencies, for example, undercuts the very idea of a direct relationship between worker and employer. In The Temp Economy: From Kelly Girls to Permatemps in Postwar America, sociologist Erin Hatton notes that millions of workers now labor under subcontracting arrangements, which give employers even greater latitude to abuse employees. For years, Walmart—America’s largest retailer—used a subcontracting firm to hire hundreds of cleaners, many from Eastern Europe, who worked for months on end without overtime pay or a single day off. After federal agents raided dozens of Walmarts and arrested the cleaners as illegal immigrants, company executives used the subcontracting agreement to shirk responsibility for their exploitation of the cleaners, claiming they had no knowledge of their immigration status or conditions.

By any reasonable standard, much “temp” work is not even temporary. Employees sometimes work for years in a single workplace, even through promotions, without ever being granted official status as an employee. Similarly, “gig economy” platforms like Uber designate their workers as contractors rather than employees, a distinction that exempts the company from paying them minimum wage and overtime. Many “permatemps” and contractors perform the same work as employees, yet lack even the paltry protections and benefits awarded to full-time workers.

A weak job market, paired with the increasing precarity of work, means that more and more workers are forced to make their living by stringing together freelance assignments or winning fixed-term contracts, subjecting those workers to even more rules and restrictions. On top of their actual jobs, contractors and temp workers must do the additional work of appearing affable and employable not just on the job, but during their ongoing efforts to secure their next gig. Constantly pitching, writing up applications, and personal branding on social media requires a level of self-censorship, lest a controversial tweet or compromising Facebook photo sink their job prospects. Forced to anticipate the wishes not of a specific employer, but of all potential future employers, many opt out of participating in social media or practicing politics in any visible capacity. Their public personas are shaped not by their own beliefs and desires, but by the demands of the labor market.


For Livingston, it’s not just employers but work itself that is the problem. We toil because we must, but also because our culture has trained us to see work as the greatest enactment of our dignity and personal character. Livingston challenges us to turn away from such outmoded ideas, rooted in Protestant ideals. Like Anderson, he sweeps through centuries of labor theory with impressive efficiency, from Marx and Hegel to Freud and Lincoln, whose 1859 speech he also quotes. Livingston centers on these thinkers because they all found the connection between work and virtue troubling. Hegel believed that work causes individuals to defer their desires, nurturing a “slave morality.” Marx proposed that “real freedom came after work.” And Freud understood the Protestant work ethic as “the symptom of repression, perhaps even regression.”

Nor is it practical, Livingston argues, to exalt work: There are simply not enough jobs to keep most adults employed at a living wage, given the rise of automation and increases in productivity. Besides, the relation between income and work is arbitrary. Cooking dinner for your family is unpaid work, while cooking dinner for strangers usually comes with a paycheck. There’s nothing inherently different in the labor involved—only in the compensation. Anderson argues that work impedes individual freedom; Livingston points out that it rarely pays enough. As technological advances continue to weaken the demand for human labor, wages will inevitably be driven down even further. Instead of idealizing work and making it the linchpin of social organization, Livingston suggests, why not just get rid of it?

Livingston belongs to a cadre of thinkers, including Kathi Weeks, Nick Srnicek, and Alex Williams, who believe that we should strive for a “postwork” society in one form or another. Strands of this idea go back at least as far as Keynes’s 1930 essay on “Economic Possibilities for our Grandchildren.” Not only would work be eliminated or vastly reduced by technology, Keynes predicted, but we would also be unburdened spiritually. Devotion to work was, he deemed, one of many “pseudo-moral principles” that “exalted some of the most distasteful of human qualities into the position of the highest virtues.”

Since people in this new world would no longer have to earn a salary, they would, Livingston envisions, receive some kind of universal basic income. UBI is a slippery concept, adaptable to both the socialist left and libertarian right, but it essentially entails distributing a living wage to every member of society. In most conceptualizations, the income is indeed basic—no cases of Dom Pérignon—and would cover the essentials like rent and groceries. Individuals would then be free to choose whether and how much they want to work to supplement the UBI. Leftist proponents tend to advocate pairing UBI with a strong welfare state to provide nationalized health care, tuition-free education, and other services. Some libertarians view UBI as a way to pare down the welfare state, arguing that it’s better simply to give people money to buy food and health care directly, rather than forcing them to engage with food stamp and Medicaid bureaucracies.

According to Livingston, we are finally on the verge of this postwork society because of automation. Robots are now advanced enough to take over complex jobs in areas like agriculture and mining, eliminating the need for humans to perform dangerous or tedious tasks. In practice, however, automation is a double-edged sword, with the capacity to oppress as well as unburden. Machines often accelerate the rate at which humans can work, taxing rather than liberating them. Conveyor belts eliminated the need for workers to pass unfinished products along to their colleagues—but as Charlie Chaplin and Lucille Ball so hilariously demonstrated, the belts also increased the pace at which those same workers needed to turn wrenches and wrap chocolates. In retail and customer service, a main function of automation has been not to eliminate work, but to eliminate waged work, transferring much of the labor onto consumers, who must now weigh and code their own vegetables at the supermarket, check out their own library books, and tag their own luggage at the airport.

At the same time, it may be harder to automate some jobs that require a human touch, such as floristry or hairstyling. The same goes for the delicate work of caring for the young, sick, elderly, or otherwise vulnerable. In today’s economy, the demand for such labor is rising rapidly: “Nine of the twelve fastest-growing fields,” The New York Times reported earlier this year, “are different ways of saying ‘nurse.’” These jobs also happen to be low-paying, emotionally and physically grueling, dirty, hazardous, and shouldered largely by women and immigrants. Regardless of whether employment is virtuous or not, our immediate goal should perhaps be to distribute the burdens of caregiving, since such work is essential to the functioning of society and benefits us all.


A truly work-free world is one that would entail a revolution from our present social organizations. We could no longer conceive of welfare as a last resort—as the “safety net” metaphor implies—but would be forced to treat it as an unremarkable and universal fact of life. This alone would require us to support a massive redistribution of wealth, and to reclaim our political institutions from the big-money interests that are allergic to such changes. Tall orders indeed—but as Srnicek and Williams remind us in their book, Inventing the Future: Postcapitalism and a World Without Work, neoliberals pulled off just such a revolution in the postwar years. Thanks to their efforts, free-market liberalism replaced Keynesianism as the political and economic common sense all around the world.

Another possible solution to the current miseries of unemployment and worker exploitation is the one Livingston rejects in his title: full employment. For anti-work partisans, full employment takes us in the wrong direction, and UBI corrects the course. But the two are not mutually exclusive. In fact, rather than creating new jobs, full employment could require us to reduce our work hours drastically and spread them throughout the workforce—a scheme that could radically de-center waged work in our lives. A dual strategy of pursuing full employment while also demanding universal benefits—including health care, childcare, and affordable housing—would maximize workers’ bargaining power to ensure that they, and not just owners of capital, actually get to enjoy the bounty of labor-saving technology.

Nevertheless, Livingston’s critiques of full employment are worth heeding. As with automation, it can all go wrong if we use the banner of full employment to create pointless roles—what David Graeber has termed “bullshit jobs,” in which workers sit in some soul-sucking basement office for eight hours a day—or harmful jobs, like building nuclear weapons. If we do not have a deliberate politics rooted in universal social justice, then full employment, a basic income, and automation will not liberate us from the degradations of work.

Both Livingston and Anderson reveal how much of our own power we’ve already ceded in making waged work the conduit for our ideals of liberty and morality. The scale and coordination of the institutions we’re up against in the fight for our emancipation is, as Anderson demonstrates, staggering. Employers hold the means to our well-being, and they have the law on their side. Individual efforts to achieve a better “work-life balance” for ourselves and our families miss the wider issue we face as waged employees. Livingston demonstrates the scale at which we should be thinking: Our demands should be revolutionary, our imaginations wide. Standing amid the wreckage of last year’s presidential election, what other choice do we have?

 

Miya Tokumitsu is a lecturer in art history at the University of Melbourne and a contributing editor at Jacobin. She is the author of Do What You Love: And Other Lies About Success and Happiness.

Will Robots Take Your Job?

Walmart Robots

By Nick Srnicek and Alex Williams

Source: ROAR

In recent months, a range of studies has warned of an imminent job apocalypse. The most famous of these—a study from Oxford—suggests that up to 47 percent of US jobs are at high risk of automation over the next two decades. Its methodology—assessing likely developments in technology, and matching them up to the tasks typically deployed in jobs—has been replicated since then for a number of other countries. One study finds that 54 percent of EU jobs are likely automatable, while the chief economist of the Bank of England has argued that 45 percent of UK jobs are similarly under threat.

This is not simply a rich-country problem, either: low-income economies look set to be hit even harder by automation. As low-skill, low-wage and routine jobs have been outsourced from rich capitalist countries to poorer economies, these jobs are also highly susceptible to automation. Research by Citi suggests that for India 69 percent of jobs are at risk, for China 77 percent, and for Ethiopia a full 85 percent of current jobs. It would seem that we are on the verge of a mass job extinction.

Nothing New?

For many economists, however, there is nothing to worry about. If we look at the history of technology and the labor market, past experience suggests that automation has not caused mass unemployment. Automation has always changed the labor market. Indeed, one of the primary characteristics of the capitalist mode of production has been to revolutionize the means of production—to really subsume the labor process and reorganize it in ways that more efficiently generate value. The mechanization of agriculture is an early example, as is the use of the cotton gin and spinning jenny. With Fordism, the assembly line turned complex manufacturing jobs into a series of simple and efficient tasks. And with the era of lean production, we have had the computerized management of long commodity chains turn the production process into a more and more heavily automated system.

In every case, we have not seen mass unemployment. Instead we have seen some jobs disappear while others have been created, replacing the lost jobs and providing the new jobs necessary for a growing population. The only times we see massive unemployment tend to be the result of cyclical factors, as in the Great Depression, rather than of some secular trend towards higher unemployment resulting from automation. On the basis of these considerations, most economists believe that the future of work will likely be the same as the past: some jobs will disappear, but others will be created to replace them.

In typical economist fashion, however, these thoughts neglect the broader social context of earlier historical periods. Capitalism may not have seen a massive upsurge in unemployment, but this is not a necessary outcome. Rather, it was dependent upon unique circumstances of earlier moments—circumstances that are missing today. In the earliest periods of automation, there was a major effort by the labor movement to reduce the working week. It was a successful project that reduced the week from around 60 hours at the turn of the century to 40 hours during the 1930s, and very nearly even down to 30 hours. In this context, it was no surprise that Keynes would famously extrapolate to a future where we all worked 15 hours. He was simply extrapolating from the existing labor movement. Reduced work per person meant that the remaining work was spread around more evenly. The impact of technology at that time was therefore heavily muted by a 33 percent reduction in the amount of work per person.

Today, by contrast, we have no such movement pushing for a reduced working week, and the effects of automation are likely to be much more serious. Similar issues hold for the postwar era. With most Western economies left in ruins, and massive American support for the revitalization of these economies, the postwar era saw incredibly high levels of economic growth. With the further addition of full employment policies, this period also saw incredibly high levels of job growth and a compact between trade unions and capital to maintain a sufficient amount of good jobs. This led to healthy wage growth and, subsequently, healthy growth in aggregate demand to stimulate the economy and keep jobs coming. Moreover, this was a period where nearly 50 percent of the potential labor force was constrained to the household.

Under these unique circumstances, it is no wonder that capitalism was able to create enough jobs even as automation continued to transform the labor process. Today, we have sluggish economic growth, no commitments to full employment (even as we have commitments to harsh welfare policies), stagnant wage growth, and a major influx of women into the labor force. The context for a wave of automation is drastically different from the way it was before.

Likewise, the types of technology being developed and introduced into the labor process are significantly different from earlier technologies. Whereas earlier waves of automation affected what economists call “routine work” (work that can be laid out in a series of explicit steps), today’s technology is beginning to affect non-routine work: the difference between a factory job on an assembly line and driving a car through the chaos of a modern city. Research from economists like David Autor and Maarten Goos shows that the decline of routine jobs over the past 40 years has played a significant role in job polarization and rising inequality. Those jobs are gone and highly unlikely to come back, and the next wave of automation will affect the remaining sphere of human labor. An entire range of low-wage jobs, involving both physical and mental labor, is now potentially automatable.

Given that new technologies are quite likely to have a larger impact on the labor market than earlier waves of technological change, what is likely to happen? Will robots take your job? While one side of the debate warns of imminent apocalypse and the other yawns at the historical repetition, both tend to neglect the political economy of automation, particularly the role of labor. Put simply: if the labor movement is strong, we are likely to see more automation; if it is weak, we are likely to see less.

Workers Fight Back

In the first scenario, a strong labor movement is able to push for higher and higher wages (particularly relative to globally stagnant productivity growth). The rising cost of labor means that machines become relatively cheap by comparison. We can already see this in China, where real wages have been surging for more than a decade, steadily eroding the country’s labor-cost advantage. The result is that China has become the world’s biggest investor in industrial robots, and numerous companies, most famously Foxconn, have stated their intention to move towards increasingly automated factories.

This is the archetype of a highly automated world, but for it to be achievable under capitalism the power of labor must be strong, since the relative costs of labor and machines are key determinants of investment. What then happens under these circumstances? Do we get mass unemployment as robots take all the jobs? The simple answer is no. Rather than a mass destruction of jobs, most workers whose jobs are automated end up moving into new sectors.
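
To make that investment logic concrete, here is a minimal sketch in Python using entirely hypothetical wage and machine-cost figures (none of them drawn from the text above): it compares the annual cost of the workers a machine would replace with the machine's amortized purchase price plus upkeep, and automation only becomes rational once labor is the more expensive option.

# Toy illustration of the automation decision described above.
# All figures are hypothetical; only the comparison logic matters.

def annual_machine_cost(purchase_price, service_life_years, upkeep_per_year):
    """Amortized yearly cost of an automated workstation."""
    return purchase_price / service_life_years + upkeep_per_year

def annual_labor_cost(hourly_wage, hours_per_year, workers_replaced):
    """Yearly cost of the workers the machine would replace."""
    return hourly_wage * hours_per_year * workers_replaced

def worth_automating(hourly_wage, purchase_price=250_000,
                     service_life_years=8, upkeep_per_year=10_000,
                     hours_per_year=2_000, workers_replaced=2):
    machine = annual_machine_cost(purchase_price, service_life_years, upkeep_per_year)
    labor = annual_labor_cost(hourly_wage, hours_per_year, workers_replaced)
    return labor > machine

# A labor movement that pushes wages up flips the decision:
for wage in (6, 9, 12, 15):
    print(f"wage ${wage}/h -> automate: {worth_automating(wage)}")

On these made-up numbers the machine costs $41,250 a year, so the decision flips somewhere between $9 and $12 an hour of labor cost. That is the same dynamic the Chinese example illustrates: rising wages, not technology alone, tip firms into automating.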

In the advanced capitalist economies this has been happening over the past 40 years, as workers move from routine jobs to non-routine jobs. As we saw earlier, the next wave of automation is different, and therefore its effects on the labor market are also different. Some job sectors are likely to take heavy hits under this scenario. Jobs in retail and transport, for instance, will likely be heavily affected. In the UK, there are currently 3 million retail workers, but estimates by the British Retail Consortium suggest this may decrease by a million over the next decade. In the US, there are 3.4 million cashiers alone—nearly all of whose work could be automated. The transport sector is similarly large, with 3.7 million truck drivers in the US, most of whose jobs could be incrementally automated as self-driving trucks become viable on public roads. Large numbers of workers in such sectors are likely to be pushed out of their jobs if mass automation takes place.

Where will they go? The story that Silicon Valley likes to tell us is that we will all become freelance programmers and software developers and that we should all learn how to code to succeed in their future utopia. Unfortunately they seem to have bought into their own hype and missed the facts. In the US, 1.8 percent of all jobs require knowledge of programming. This compares to the agricultural sector, which creates about 1.5 percent of all American jobs, and to the manufacturing sector, which employs 8.1 percent of workers in this deindustrialized country. Perhaps programming will grow? The facts here are little better. The Bureau of Labor Statistics (BLS) projects that by 2024 jobs involving programming will be responsible for a tiny 2.2 percent of the jobs available. If we look at the IT sector as a whole, according to Citi, it is expected to take up less than 3 percent of all jobs.

What about the people needed to take care of the robots? Will we see a massive surge in jobs here? Presently, robot technicians and engineers take up less than 0.1 percent of the job market—by 2024, this will dwindle even further. We will not see a major increase in jobs taking care of robots or in jobs involving coding, despite Silicon Valley’s best efforts to remake the world in its image.

This continues a long trend of new industries being very poor job creators. We all know about how few employees worked at Instagram and WhatsApp when they were sold for billions to Facebook. But the low levels of employment are a widespread sectoral problem. Research from Oxford has found that in the US, only 0.5 percent of the labor force moved into new industries (like streaming sites, web design and e-commerce) during the 2000s. The future of work does not look like a bunch of programmers or YouTubers.

In fact, the fastest-growing job sectors are not those requiring high levels of education at all. The belief that we will all become high-skilled and well-paid workers is ideological mystification at its purest. The fastest-growing sector, by far, is the healthcare industry. In the US, the BLS estimates that this sector will create 3.8 million new jobs between 2014 and 2024, increasing its share of employment from 12 percent to 13.6 percent and making it the biggest employing sector in the country. The categories of “healthcare support” and “healthcare practitioner” alone will contribute 2.3 million jobs, or 25 percent of all new jobs expected to be created.

There are two main reasons why this sector will be such a magnet for workers forced out of other sectors. In the first place, the demographics of high-income economies all point towards a significantly growing elderly population. Fewer births and longer lives (typically with chronic conditions rather than infectious diseases) will put more and more pressure on our societies to take care of the elderly, and push more and more people into care work. In the second place, this sector is not amenable to automation; it is one of the last bastions of human-centric skills like creativity, knowledge of social context and flexibility. The demand for labor here is therefore unlikely to decrease: productivity remains low, the skills remain human-centric, and demographics keep the sector growing.

In the end, under the scenario of a strong labor movement, we are likely to see wages rise, causing automation to proceed rapidly in certain sectors while workers are forced to compete for jobs in a low-paying healthcare sector. The result is the continued elimination of middle-wage jobs and the increasing polarization of the labor market as more and more workers are pushed into low-wage sectors. On top of this, a highly educated generation that was promised secure and well-paying jobs will be forced into lower-skilled work, putting downward pressure on wages and generating a “reserve army of the employed”, as Robert Brenner has put it.

Workers Fall Back

Yet what happens if the labor movement remains weak? Here we have an entirely different future of work awaiting us. In this case, we end up with stagnant wages, and workers remain relatively cheap compared to investment in new equipment. The consequences of this are low levels of business investment, and subsequently, low levels of productivity growth. Absent any economic reason to invest in automation, businesses fail to increase the productivity of the labor process. Perhaps unexpectedly, under this scenario we should expect high levels of employment as businesses seek to maximize the use of cheap labor rather than investing in new technology.

This is more than a hypothetical scenario; it rather accurately describes the situation in the UK today. Since the 2008 crisis, real wages have stagnated and even fallen. Real average weekly earnings have been rising since 2014, but eight years on they have yet to return to their pre-crisis levels. Businesses have therefore had an incentive to hire cheap workers rather than invest in machines, and the UK's low levels of investment bear this out. Since the crisis, the UK has seen long periods of declining business investment, most recently a 0.4 percent fall between Q1 2015 and Q1 2016. The result has been virtually zero productivity growth: from 2008 to 2015, growth in output per worker averaged 0.1 percent per year. Almost all of the UK's recent growth has come from throwing more bodies into the economic machine rather than from improving the efficiency of the economy. Even relative to the slow productivity growth seen across the world, the UK stands out as a laggard.

With cheap wages, low investment and low productivity, companies have instead been hiring workers. Indeed, the UK employment rate has reached a record high of 74.2 percent as of May 2016. Likewise, unemployment is low at 5.1 percent, especially when compared with the UK's European neighbors, where the average is nearly double that. So, somewhat surprisingly, an environment with a weak labor movement here leads to high levels of employment.

What is the quality of these jobs, however? We have already seen that wages have been stagnant, and that two-thirds of net job creation since 2008 has been in self-employment. There has also been a major increase in zero-hour contracts (employment arrangements that do not guarantee workers any hours at all). Estimates suggest that up to 5 percent of the labor force is in such arrangements, with over 1.7 million zero-hour contracts in use. Full-time employment is down as well: as a percentage of all jobs, it has fallen from a pre-crisis level of 65 percent to 63 percent and refused to budge even as the economy grows (slowly). The share of involuntary part-time workers, those who would prefer a full-time job but cannot find one, more than doubled after the crisis and has barely begun to recover since.

Likewise with temporary employees: involuntary temporary workers as a percentage of all temporary workers rose from below 25 percent to over 40 percent during the crisis, only partly recovering to around 35 percent today. There is a vast number of workers who would prefer to work in more permanent and full-time jobs, but who can no longer find them. The UK is increasingly becoming a low-wage and precarious labor market—or, in the Tories’ view, a competitive and flexible labor market. This, we would argue, is the future that obtains with a weak labor movement: low levels of automation, perhaps, but at the expense of wages (and aggregate demand), permanent jobs and full-time work. We may not get a fully automated future, but the alternative looks just as problematic.

These are therefore the two poles of possibility for the future of work. On the one hand, a highly automated world where workers are pushed out of much low-wage non-routine work and into lower-wage care work. On the other hand, a world where humans beat robots but only through lower wages and more precarious work. In either case, we need to build up the social systems that will enable people to survive and flourish in the midst of these significant changes. We need to explore ideas like a Universal Basic Income, we need to foster investment in automation that could eliminate the worst jobs in society, and we need to recover that initial desire of the labor movement for a shorter working week.

We must reclaim the right to be lazy—which is neither a demand to be lazy nor a belief in the natural laziness of humanity, but rather the right to refuse domination by a boss, by a manager, or by a capitalist. Will robots take our jobs? We can only hope so.

Note: All uncited figures either come directly from, or are based on authors’ calculations of, data from the Bureau of Labor Statistics, O*NET and the Office for National Statistics.

A Universal Basic Income Is The Bipartisan Solution To Poverty We’ve Been Waiting For

Basic income banner illustration by Molly Crabapple.

What if the government simply paid everyone enough so that no one was poor? It’s an insane idea that’s gaining an unlikely alliance of supporters.

By Ben Schiller

Source: FastCoexist.com

There’s a simple way to end poverty: the government just gives everyone enough money, so nobody is poor. No ifs, buts, conditions, or tests. Everyone gets the minimum they need to survive, even if they already have plenty.

This, in essence, is “universal minimum income” or “guaranteed basic income”—where, instead of multiple income assistance programs, we have just one: a single payment to all citizens, regardless of background, gender, or race. It’s a policy idea that sounds crazy at first, but actually begins to make sense when you consider some recent trends.

The first is that work isn’t what it used to be. Many people now struggle through a 50-hour week and still don’t have enough to live on. There are many reasons for this—including the heartlessness of employers and the weakness of unions—but it’s a fact. Work no longer pays. The wages of most American workers have stagnated or declined since the 1970s. About 25% of workers (including 40% of those in restaurants and food service) now need public assistance to top up what they earn.

The second: it’s likely to get worse. Robots already do many menial tasks. In the future, they’ll do more sophisticated jobs as well. A study last year from Carl Frey and Michael Osborne at Oxford University found that 47% of jobs are at risk of computerization over the next two decades. That includes positions in transport and logistics, office and administration, sales and construction, and even law, financial services and medicine. Of course, it’s possible that people who lose their jobs will find others. But it’s also feasible we’re approaching an era when there will simply be less to do.

The third is that traditional welfare is both not what it used to be and not very efficient. The value of welfare for families with children, for example, is now well below what it was in the 1990s. The move towards means-testing, workfare (signed into law by Bill Clinton in 1996) and other forms of conditionality has killed the universal benefit. And not just in the U.S.: it is now rare anywhere in the world for people to get a check without having to do something in return. Whatever the rights and wrongs of this, it makes the income assistance system more complicated and expensive to manage. Up to 10% of the income assistance budget now goes to administering its distribution.

For these reasons and others, the idea of a basic income for everyone is becoming increasingly popular. There has been a flurry of reports and papers about it recently, and, unusually, the idea has advocates across the political spectrum.

The libertarian right likes basic income because it hates bureaucracy and thinks people should be responsible for themselves. Rather than giving out food stamps and health care (which are in-kind services), it thinks people should get cash, because cash is fungible and you do what you like with it.

The left likes basic income because it thinks society is unequal and basic income is redistributive. It evens up the playing field for people who haven’t had good opportunities in life by establishing a floor under the poorest. The “precariat” goes from being perpetually insecure to knowing it has something to live on. That, in turn, should raise well-being and produce more productive citizens.

The technology elite, like Netscape’s Marc Andreessen, also likes the idea. “As a VC, I like the fact that a lot of the political establishment is ignoring or dismissing this idea,” Albert Wenger, of Union Square Ventures, told a TED audience recently, “because what we see in startups is that the most powerful innovative ideas are ones truly dismissed by the incumbents.” A minimum income would allow us to “embrace automation rather than be afraid of it” and let more of us participate in the era of “digital abundance,” he says.

The exact details of basic income still need to be worked out, but it might work something like this: instead of welfare payments, subsidies for health care, and tax credits for the working poor, we would pool that money and use it to fund a single payment that gives everyone the chance to live reasonably. Switzerland recently held an (unsuccessful) referendum on a basic income; the proposed amount was $2,800 per month.
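
As a purely illustrative sketch of the consolidation described above, the toy calculation below pools the budgets of a few invented income-assistance programs, deducts an assumed 10 percent administrative overhead, and divides what remains equally among a notional adult population. The program names and every dollar figure are assumptions for the example, not actual budget data.

# Hypothetical illustration of folding several programs into one payment.
# Program names and amounts are invented; none are real budget figures.

programs = {             # annual spending, in billions of dollars (assumed)
    "cash_assistance": 300,
    "food_support": 100,
    "housing_subsidies": 50,
    "working_poor_tax_credits": 70,
}
admin_overhead = 0.10    # assumed share of spending lost to administration
adults = 250_000_000     # assumed number of eligible adults

pooled = sum(programs.values()) * 1e9       # convert billions to dollars
payable = pooled * (1 - admin_overhead)     # most overhead disappears with one program
per_person_per_month = payable / adults / 12

print(f"${per_person_per_month:,.0f} per adult per month")   # roughly $156

Even on these generous assumptions, the resulting payment is an order of magnitude below the roughly $2,800 a month of the Swiss proposal, a reminder that the financing question, not the mechanism, is the hard part of any concrete design.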

But would it actually work? The evidence from actual experiments is limited, though it’s more positive than not. A pilot in the 1970s in Manitoba, Canada, showed that a “Mincome” not only ended poverty but also reduced hospital visits and raised high-school completion rates. There seemed to be a community-affirming effect, which showed itself in people making use of free public services more responsibly.

Meanwhile, there were eight “negative income tax” trials in the U.S. in the ’70s, where people received payments and the government clawed back much of the money in taxes based on their other income. The results of those trials were more mixed. They reduced poverty, but people also worked slightly less than normal. To some, this is the major drawback of basic income: it could make people lazier than they would otherwise be. That would certainly be a problem, though it’s questionable whether, in the future, there will be as much employment anyway. The age of robots and artificial intelligence seems likely to hollow out many jobs, perhaps changing how we view notions of laziness and productivity altogether.
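
The clawback in those trials is easiest to see as a simple schedule: each household receives a guaranteed amount minus some fraction of its other income, so the payment phases out as earnings rise. The sketch below uses an invented $1,000 monthly guarantee and a 50 percent taper, chosen only to show the shape of the schedule, not the parameters of any actual trial.

# Stylized negative income tax: a guaranteed floor that tapers off
# as earned income rises. Guarantee and taper rate are invented.

GUARANTEE = 1_000   # monthly payment with no other income (hypothetical)
TAPER = 0.50        # share of each earned dollar clawed back (hypothetical)

def nit_payment(earned_income):
    """Monthly top-up under the stylized negative income tax."""
    return max(0.0, GUARANTEE - TAPER * earned_income)

for earned in (0, 500, 1_000, 2_000, 3_000):
    payment = nit_payment(earned)
    print(f"earned ${earned:>5,} -> payment ${payment:>5,.0f}, total ${earned + payment:>5,.0f}")

Under such a schedule the top-up disappears at $2,000 of earnings (the guarantee divided by the taper rate), but every extra earned dollar still raises total income, which is what distinguishes a negative income tax from means-tested benefits that cut off abruptly.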

Experiments outside the U.S. have been more encouraging. One in Namibia cut poverty from 76% to 37%, increased non-subsidized incomes, raised education and health standards, and cut crime. Another, involving 6,000 people in India, paid people $7 a month, about a third of subsistence level. It, too, proved successful.

“The important thing is to create a floor on which people can start building some security. If the economic situation allows, you can gradually increase the income to where it meets subsistence,” says Guy Standing, a professor of development studies at the School of Oriental and African Studies, in London, who was involved with the pilot. “Even that modest amount had incredible effects on people’s savings, economic status, health, in children going to school, in the acquisition of items like school shoes, so people felt in control of their lives. The amount of work people were doing increased as well.”

Given the gridlock in Congress, it’s unlikely we’ll see basic income here for a while. Though the idea has supporters in both left and right-leaning think-tanks, it’s doubtful actual politicians could agree to redesign much of the federal government if they can’t agree on much else. But the idea could take off in poorer countries that have more of a blank slate and suffer from less polarization. Perhaps we’ll re-import the concept one day once the developing world has perfected it?

A Libertarian Argument for Universal Basic Income

It’s Time to Start Believing Again – Why Basic Income Could and Should be the Next Global Political Movement

Source: Thought Infection

Things change slowly and then all at once. 

If there is one great consistency about change in the 21st century, it is that things seem to change almost imperceptibly right up until they become inevitable. Good examples of this effect can be found in the world of technology, such as the rise of the internet, the fall of film cameras, or the explosive growth of the green energy industry. In all of these cases the exponential nature of technological advances led many to discount major changes that eventually disrupted entire industries. While this effect is best understood in the world of technology, I think this kind of change can also be seen in the social and political spheres.

Political movements must by necessity start with a minority of individuals working very hard for very many years to push an issue forward. For a long time it can appear that little or no progress is being made, but below the surface opinions and minds are slowly shifting. This slow progression continues in the background, almost imperceptibly, until some sort of tipping point is reached and a sudden shift in public and political sentiment occurs. A good example of this effect is the momentous shift from the deep and vitriolic hatred of gay people only a few decades ago towards increasing acceptance today.

In addition to the energy provided by a small group of dedicated individuals, flashpoint social or political change also requires room to maneuver. That room for new ideas can be created by the collapse of an incumbent ideology or, in the case of the greatest shifts, by a wider, systemic loss of faith in the system. When people become embittered with things as they are, they will inevitably start looking to those offering alternative views.

A person without belief is a power vacuum. 

I think we currently stand at a time when conditions are set for the next global political movement to take hold. We are seeing clear symptoms of a systemic erosion of faith in the political and economic systems as they stand today.

Economic hardship and unemployment have become endemic across large parts of the developed world. Those who do work find themselves squeezed between longer working hours, higher on-the-job demands, rising costs of living, and the loss of both job security and benefits.

Times feel tough, and people are starting to ask why. Did we have some sort of disaster? Are our crops failing, or our industries falling apart? What has happened to make institutions like education and health care too expensive to support?

Thomas Piketty, in his recent book, provides strong evidence that the economic pathology of the current geopolitical situation may simply be the symptom of a larger economic disease. When capital out-competes labour, the result is inevitably increasing wealth disparity and the associated economic problems we see today. People can see that the economic gains our collective hard work creates are going disproportionately into the hands of the wealthy. People can see that the game is rigged against them, and they don’t really want to play any more.

At the same time as these economic realities are being thrust upon workers around the world, people are increasingly detached from mainstream politics. Little real change has happened despite perpetual political promises to deliver it. Political detachment combined with economic hardship is a dangerous mix, and is credited with fueling the rise of extreme political groups like Golden Dawn in Greece and other far-right parties in the UK and France. The rise of more extremist politics is also apparent in the increasingly polarized and broken political landscape of the United States.

The disengagement of the public from the political sphere is particularly strong among those who are also disproportionately affected by the economic slowdown: the young. It is an underappreciated fact that there are now more millennials in the United States than baby boomers. Whichever politician figures out how to engage the millennial generation politically is going to run the world.

From my perspective, there seems to be a clear build-up of political tension across the globe. While we can argue about specific economic and political maladies that have led us to this point, I think the simple fact is that people are losing faith in the system as a whole. As people lose faith, governments become more detached and fearful of their citizens, leading more people to lose faith in the system, and thus a vicious cycle of political breakdown is perpetuated.

So how do we stop this?

The answer is surprisingly simple – We need to believe again.

People need to believe that the world will be better for their children than it was for them. This is the magic that drives people to get up in the morning and go to school and work, to put in the long hours of hard work, to make discoveries, to invent new technologies, and improve the world. The economy will flourish only as long as people truly believe they can better their own life, and that of their children.

Without faith in the global economic and political system, we have nothing. 

Believe it or not, there just might be one simple medicine which (while it would not solve all of our problems) could go a long way towards solving the twin problems of political and economic breakdown.

Basic income.

There is a long list of reasons that basic income makes for sensible economic policy, which I will not go through here. Suffice it to say that basic income would (1) give workers the leverage to demand more from work, (2) give individuals and innovators the means to do their thing, (3) give corporations more incentive to automate their production, and (4) generally support the consumption economy. (Some worry that such a basic income might lead to less incentive to work, but I say that if you need to use starvation as a threat to get people to work for you, then your business is not profitable enough.)

Perhaps most importantly, basic income would be the solution to restore the faith of the common individual in the current system of global capitalism. By institutionalizing the social contract in the form of a cash dividend for everyone, basic income would finally enshrine the promise that a rich and successful society must first deliver a minimal living standard to everyone.

Serious, realistic types might rush to play down the importance of belief in the political system. Who cares whether the rabble believes in what the government and politicians do, as long as it is functional? But these people are completely missing the central truth of the matter: belief is the only power in the world that matters. My dollar is only worth what we collectively agree it to be worth, and the same goes for our societies. If we fail to create societies that inspire belief, then we are lost. If we do not find a way to fill the vacuum left by eroding belief, then someone else will.

It is time for something we can believe in: it is time for basic income.
