Loneliness

By Yogi Prateado

Source: Adbusters

The Silicon Valley in which I live, a culture infused with the cocaine high of technological breakthroughs, grates against my earthly sensibilities.

Riding the crest of adrenaline, discovery, and money, what many in the fair Bay Area know is not, in fact, what is. This temporary party atmosphere, around until catastrophe hits, is the last hurrah of capitalism. Whether technology will trap us in a surveillance state, or liberate us from mediating political, economic, and social predators, dangles in the hands of deliberate planning and meta-organizing on the part of those developers.

As users and citizens, we are all developers.

Not knowing the implications of one’s discoveries is very different from saying that there aren’t any. The neoteny of tech bros and gals is part of the enforced juvenilization of tech “campuses” and a society that values brain plasticity over wisdom.

After all, wisdom doesn’t sell. You can’t fake wisdom; there’s no wise way to put lipstick on a pig’s face, but there is money to be made from exploiting our mammalian dispositions.

Thus, tech people are predisposed to think, act, and do as children do. They are rewarded for doing so. But this has consequences for where we are going as a culture, and as a planet.

Enforced childhood and adolescence—having other people clean your clothes, make your food, and take you to work—creates “first world problems”, obsessions with gourmet food, and infantile competition. By disconnecting life from work and work from life, millennials are entranced by expensive eating out, Instagramming meals, and living off rich meat, sugar, and dairy (internal parasite-cultivating, climate change-causing) indulgences. Meanwhile, the products you make hand the keys of ultimate social control over to the highest bidder.

Thus, working in the tech industry becomes a self-fulfilling prophecy. Young minds are inculcated into believing that what they are doing is indeed a good thing, a helpful thing for society, and are rewarded by their superiors with a glamorous job that treats them like a kid, pays well, and provides a roulette wheel of opportunity.

At the same time, those highest on the food chain realize their techno optimism must be tempered with social intelligence. Zuckerberg’s Harvard Address talks about the need for universal health care (duh) and Universal Basic Income, simple solutions needed before we can begin to talk about truth, fairness, justice, diversity, or intelligence. Here, the boy billionaire unfortunately does have a point, one he’s cynically selling to the masses as he consolidates money and power.

It’s now a given that there are multiple intelligences, but in the West (particularly the self-interested, individualistic US), this is still some sort of revelation. It has yet to be integrated into schools, politics, commerce, or tech, let alone science, as many conveniently believe in one scale of intelligence (usually the one they excel at).

As Maslow knew, until we have basic welfare (food, water, shelter), the people have no hope of participating in democracy. Since 1970, Rawls and every reasonable political theorist has agreed with this, yet we still expect a polity without a society. “Society” is fragmented, violent, scared, and unequal: millionaire 20-year-olds dodging and ignoring homeless 60-year-olds in the street.

The illusion of progress is one of the most pernicious veils currently enthralling our eyes.

The high of “being part of the solution” or “doing well while doing good” is a strong opiate indeed. Since Marx, drug metaphors have been used to describe how capitalism co-opts and metabolizes any opposition, novelty, or expression of freedom. Like an out-of-control nanobot army, capitalism turns color into grey goo, turns freedom into products, and commodifies all authentic expression. To quote Wright, we are in a technology trap, where every problem demands a technofix.

But the will-to-power techno-optimism concept doesn’t pan out. What cosmology could support the absurd conclusion that a problem’s old template can be used to “innovate” a new solution? The notion that there is a template—the homogenization of the mind globally through damaging the climate, spreading uniform media, and the colonialism of language and culture—is the problem.

We’re talking out of both sides of our mouths, saying we value diversity then quenching it, saying we’re open to difference then suppressing it. When it does pop up, diversity is first a novelty then a perversion, objectified and commodified as it fights to exist in a white capitalist heteropatriarchy. As Erica Wohldmann says, “Complete control is merely an illusion so we might as well be comfortable.” Comfort requires being pushed back, eating and being eaten, giving and taking. Symbiosis—sharing.

We have been stingy with sharing, forgetting our knowledge is limited.
It’s time to come home to fallibility, responsibility, and the truth about our poor state of affairs. Delaying the operation makes the sickness worse and decreases the chance for a resilient recovery.

Our ancestors only owned as much as they could carry. Live simply so that others may simply live. But what of this simplicity at the societal, political, legal, medical, global level?

Decentralization is the first priority; no more anonymous policy-making. If a decision doesn’t directly affect you, you have no right to legislate over it. No more long arms of the government, no more far-reaching corporations preying on the people thousands of miles from their headquarters.

How do the internet and digital technology play into this? Well, we need a localized technology. Can we still have global epistemic cultures with a non-commercial and non-domination clause?

A moratorium on new technologies will allow us room to assess, democratize, and redistribute the existing technologies. We need to remove oil drilling and gas mining from our technological arsenal. Prioritize technologies that heal social, economic, racial, and gender-centred wounds. The precious resources—human and natural—that we command should be directed to the most urgent problems with an interdisciplinary, cross-cultural plan of attack, not one made to make money. Then we collectively apply solutions, not with the bull-in-a-china-shop attitude of big tech, but with care and empathy.

We need to shut the 10,000 Pandora’s boxes opened by technology, and doing that will require technologies. But those technologies must be different from the single, aggressive one we have. An indigenous science, a feminist science, a postcolonial science—all are needed for any hope of change.

We need to go where it scares us, to know our courage. May we be humbled by the sublime, overwhelmed before the creations which created us. May we work in inner and outer service only towards the true liberation of all beings.

Share more, use less.
Evolve ourselves.

Will Robots Take Your Job?


By Nick Srnicek and Alex Williams

Source: ROAR

In recent months, a range of studies has warned of an imminent job apocalypse. The most famous of these—a study from Oxford—suggests that up to 47 percent of US jobs are at high risk of automation over the next two decades. Its methodology—assessing likely developments in technology and matching them up to the tasks typically performed in jobs—has since been replicated for a number of other countries. One study finds that 54 percent of EU jobs are likely automatable, while the chief economist of the Bank of England has argued that 45 percent of UK jobs are similarly under threat.

This is not simply a rich-country problem, either: low-income economies look set to be hit even harder by automation. The low-skill, low-wage and routine jobs that have been outsourced from rich capitalist countries to poorer economies are themselves highly susceptible to automation. Research by Citi suggests that 69 percent of jobs in India are at risk, 77 percent in China, and a full 85 percent in Ethiopia. It would seem that we are on the verge of a mass job extinction.

Nothing New?

For many economists, however, there is nothing to worry about. If we look at the history of technology and the labor market, past experience suggests that automation has not caused mass unemployment. Automation has always changed the labor market. Indeed, one of the primary characteristics of the capitalist mode of production has been to revolutionize the means of production—to really subsume the labor process and reorganize it in ways that more efficiently generate value. The mechanization of agriculture is an early example, as is the use of the cotton gin and spinning jenny. With Fordism, the assembly line turned complex manufacturing jobs into a series of simple and efficient tasks. And with the era of lean production, the computerized management of long commodity chains has turned the production process into a more and more heavily automated system.

In every case, we have not seen mass unemployment. Instead we have seen some jobs disappear while others have been created, replacing the lost jobs and providing the new ones needed for a growing population. The only times we see massive unemployment tend to be the result of cyclical factors, as in the Great Depression, rather than some secular trend towards higher unemployment resulting from automation. On the basis of these considerations, most economists believe that the future of work will likely be the same as the past: some jobs will disappear, but others will be created to replace them.

In typical economist fashion, however, these thoughts neglect the broader social context of earlier historical periods. Capitalism may not have seen a massive upsurge in unemployment, but this is not a necessary outcome. Rather, it was dependent upon unique circumstances of earlier moments—circumstances that are missing today. In the earliest periods of automation, there was a major effort by the labor movement to reduce the working week. It was a successful project that reduced the week from around 60 hours at the turn of the century to 40 hours during the 1930s, and very nearly to 30 hours. In this context, it was no surprise that Keynes would famously extrapolate to a future where we all worked 15 hours a week. He was simply looking at the existing labor movement. With less work per person, the remaining work was spread around more evenly. The impact of technology at that time was therefore heavily muted by a 33 percent reduction in the amount of work per person.

Today, by contrast, we have no such movement pushing for a reduced working week, and the effects of automation are likely to be much more serious. Similar issues hold for the postwar era. With most Western economies left in ruins, and massive American support for their revitalization, the postwar era saw incredibly high levels of economic growth. With the further addition of full employment policies, this period also saw incredibly high levels of job growth and a compact between trade unions and capital to maintain a sufficient supply of good jobs. This led to healthy wage growth and, subsequently, healthy growth in aggregate demand to stimulate the economy and keep jobs coming. Moreover, this was a period when nearly 50 percent of the potential labor force was confined to the household.

Under these unique circumstances, it is no wonder that capitalism was able to create enough jobs even as automation continued to transform the labor process. Today, we have sluggish economic growth, no commitments to full employment (even as we have commitments to harsh welfare policies), stagnant wage growth, and a major influx of women into the labor force. The context for a new wave of automation is drastically different from the way it was before.

Likewise, the types of technology that are being developed and potentially introduced into the labor process are significantly different from earlier technologies. Whereas earlier waves of automation affected what economists call “routine work” (work that can be laid out in a series of explicit steps), today’s technology is beginning to affect non-routine work. The difference is between a factory job on an assembly line and driving a car in the chaotic atmosphere of the modern urban environment. Research from economists like David Autor and Maarten Goos shows that the decline of routine jobs in the past 40 years has played a significant role in increased job polarization and rising inequality. While these jobs are gone, and highly unlikely to come back, the next wave of automation will affect the remaining sphere of human labor. An entire range of low-wage jobs are now potentially automatable, involving both physical and mental labor.

Given that it is quite likely that new technologies will have a larger impact on the labor market than earlier waves of technological change, what is likely to happen? Will robots take your job? While one side of the debate warns of imminent apocalypse and the other yawns from the historical repetition, both tend to neglect the political economy of automation—particularly the role of labor. Put simply, if the labor movement is strong, we are likely to see more automation; if the labor movement is weak, we are likely to see less automation.

Workers Fight Back

In the first scenario, a strong labor movement is able to push for higher and higher wages (particularly relative to globally stagnant productivity growth). But the rising cost of labor means that machines become relatively cheap in comparison. We can already see this in China, where real wages have been surging for more than 10 years, thereby making Chinese labor increasingly less cheap. The result is that China has become the world’s biggest investor in industrial robots, and numerous companies—most famously Foxconn—have all stated their intentions to move towards increasingly automated factories.

This is the archetype of a highly automated world, but in order to be achievable under capitalism it requires that the power of labor be strong, given that the relative costs of labor and machines are key determinants for investment. What then happens under these circumstances? Do we get mass unemployment as robots take all the jobs? The simple answer is no. Rather than mass decimation of jobs, most workers who have their jobs automated end up moving into new sectors.

In the advanced capitalist economies this has been happening over the past 40 years, as workers move from routine jobs to non-routine jobs. As we saw earlier, the next wave of automation is different, and therefore its effects on the labor market are also different. Some job sectors are likely to take heavy hits under this scenario. Jobs in retail and transport, for instance, will likely be heavily affected. In the UK, there are currently 3 million retail workers, but estimates by the British Retail Consortium suggest this may decrease by a million over the next decade. In the US, there are 3.4 million cashiers alone—nearly all of whose work could be automated. The transport sector is similarly large, with 3.7 million truck drivers in the US, most of whose jobs could be incrementally automated as self-driving trucks become viable on public roads. Large numbers of workers in such sectors are likely to be pushed out of their jobs if mass automation takes place.

Where will they go? The story that Silicon Valley likes to tell us is that we will all become freelance programmers and software developers and that we should all learn how to code to succeed in their future utopia. Unfortunately they seem to have bought into their own hype and missed the facts. In the US, 1.8 percent of all jobs require knowledge of programming. This compares to the agricultural sector, which creates about 1.5 percent of all American jobs, and to the manufacturing sector, which employs 8.1 percent of workers in this deindustrialized country. Perhaps programming will grow? The facts here are little better. The Bureau of Labor Statistics (BLS) projects that by 2024 jobs involving programming will be responsible for a tiny 2.2 percent of the jobs available. If we look at the IT sector as a whole, according to Citi, it is expected to take up less than 3 percent of all jobs.

What about the people needed to take care of the robots? Will we see a massive surge in jobs here? Presently, robot technicians and engineers take up less than 0.1 percent of the job market—by 2024, this will dwindle even further. We will not see a major increase in jobs taking care of robots or in jobs involving coding, despite Silicon Valley’s best efforts to remake the world in its image.

This continues a long trend of new industries being very poor job creators. We all know about how few employees worked at Instagram and WhatsApp when they were sold for billions to Facebook. But the low levels of employment are a widespread sectoral problem. Research from Oxford has found that in the US, only 0.5 percent of the labor force moved into new industries (like streaming sites, web design and e-commerce) during the 2000s. The future of work does not look like a bunch of programmers or YouTubers.

In fact, the fastest-growing job sectors do not require high levels of education at all. The belief that we will all become high-skilled and well-paid workers is ideological mystification at its purest. The fastest-growing job sector, by far, is the healthcare industry. In the US, the BLS estimates that this sector will create 3.8 million new jobs between 2014 and 2024. This will increase its share of employment from 12 percent to 13.6 percent, making it the biggest employing sector in the country. The occupations of “healthcare support” and “healthcare practitioner” alone will contribute 2.3 million jobs—or 25 percent of all new jobs expected to be created.

There are two main reasons why this sector will be such a magnet for workers forced out of other sectors. In the first place, the demographics of high-income economies all point towards a significantly growing elderly population. Fewer births and longer lives (typically with chronic conditions rather than infectious diseases) will put more and more pressure on our societies to take care of the elderly, and force more and more people into care work. Second, this sector is not amenable to automation; it is one of the last bastions of human-centric skills like creativity, knowledge of social context and flexibility. This means the demand for labor is unlikely to decrease in this sector, as productivity remains low, skills remain human-centric, and demographics make it grow.

In the end, under the scenario of a strong labor movement, we are likely to see wages rise, which will cause automation to rapidly proceed in certain sectors, while workers are forced to struggle for jobs in a low-paying healthcare sector. The result is the continued elimination of middle-wage jobs and the increased polarization of the labor market as more and more are pushed into the low-wage sectors. On top of this, a highly educated generation that was promised secure and well-paying jobs will be forced to find lower-skilled jobs, putting downward pressure on wages—generating a “reserve army of the employed”, as Robert Brenner has put it.

Workers Fall Back

Yet what happens if the labor movement remains weak? Here we have an entirely different future of work awaiting us. In this case, we end up with stagnant wages, and workers remain relatively cheap compared to investment in new equipment. The consequences of this are low levels of business investment, and subsequently, low levels of productivity growth. Absent any economic reason to invest in automation, businesses fail to increase the productivity of the labor process. Perhaps unexpectedly, under this scenario we should expect high levels of employment as businesses seek to maximize the use of cheap labor rather than investing in new technology.

This is more than a hypothetical scenario, as it rather accurately describes the situation in the UK today. Since the 2008 crisis, real wages have stagnated and even fallen. Real average weekly earnings have been rising since 2014, but even after eight years they have yet to return to their pre-crisis level. This has meant that businesses have had incentives to hire cheap workers rather than invest in machines—and the low levels of investment in the UK bear this out. Since the crisis, the UK has seen long periods of decline in business investment—the most recent being a 0.4 percent decline between Q1 2015 and Q1 2016. The result of low investment has been virtually zero growth in productivity: from 2008 to 2015, growth in output per worker averaged 0.1 percent per year. Almost all of the UK’s recent growth has come from throwing more bodies into the economic machine, rather than improving the efficiency of the economy. Even relative to slow productivity growth across the world, the UK is particularly struggling.

With cheap wages, low investment and low productivity, we see that companies have instead been hiring workers. Indeed, the employment rate in the UK has reached its highest level on record—74.2 percent as of May 2016. Likewise, unemployment is low at 5.1 percent, especially when compared to the UK’s neighbors in Europe, who average nearly double that level. So, somewhat surprisingly, an environment with a weak labor movement leads here to high levels of employment.

What is the quality of these jobs, however? We have already seen that wages have been stagnant, and that two-thirds of net job creation since 2008 has been in self-employment. Yet there has also been a major increase in zero-hour contracts (employment arrangements that do not guarantee workers any hours). Estimates are that up to 5 percent of the labor force is in such situations, with over 1.7 million zero-hour contracts in circulation. Full-time employment is down as well: as a percentage of all jobs, its pre-crisis level of 65 percent has been cut to 63 percent and has refused to budge even as the economy grows (slowly). The percentage of involuntary part-time workers—those who would prefer a full-time job but cannot find one—more than doubled after the crisis, and has barely begun to recover since.

Likewise with temporary employees: involuntary temporary workers as a percentage of all temporary workers rose from below 25 percent to over 40 percent during the crisis, only partly recovering to around 35 percent today. There is a vast number of workers who would prefer to work in more permanent and full-time jobs, but who can no longer find them. The UK is increasingly becoming a low-wage and precarious labor market—or, in the Tories’ view, a competitive and flexible labor market. This, we would argue, is the future that obtains with a weak labor movement: low levels of automation, perhaps, but at the expense of wages (and aggregate demand), permanent jobs and full-time work. We may not get a fully automated future, but the alternative looks just as problematic.

These are therefore the two poles of possibility for the future of work. On the one hand, a highly automated world where workers are pushed out of much low-wage non-routine work and into lower-wage care work. On the other hand, a world where humans beat robots but only through lower wages and more precarious work. In either case, we need to build up the social systems that will enable people to survive and flourish in the midst of these significant changes. We need to explore ideas like a Universal Basic Income, we need to foster investment in automation that could eliminate the worst jobs in society, and we need to recover that initial desire of the labor movement for a shorter working week.

We must reclaim the right to be lazy—which is neither a demand to be lazy nor a belief in the natural laziness of humanity, but rather the right to refuse domination by a boss, by a manager, or by a capitalist. Will robots take our jobs? We can only hope so.

Note: All uncited figures either come directly from, or are based on authors’ calculations of, data from the Bureau of Labor Statistics, O*NET and the Office for National Statistics.

The world wide cage


Technology promised to set us free. Instead it has trained us to withdraw from the world into distraction and dependency

By Nicholas Carr

Source: Aeon

It was a scene out of an Ambien nightmare: a jackal with the face of Mark Zuckerberg stood over a freshly killed zebra, gnawing at the animal’s innards. But I was not asleep. The vision arrived midday, triggered by the Facebook founder’s announcement – in spring 2011 – that ‘The only meat I’m eating is from animals I’ve killed myself.’ Zuckerberg had begun his new ‘personal challenge’, he told Fortune magazine, by boiling a lobster alive. Then he dispatched a chicken. Continuing up the food chain, he offed a pig and slit a goat’s throat. On a hunting expedition, he reportedly put a bullet in a bison. He was ‘learning a lot’, he said, ‘about sustainable living’.

I managed to delete the image of the jackal-man from my memory. What I couldn’t shake was a sense that in the young entrepreneur’s latest pastime lay a metaphor awaiting explication. If only I could bring it into focus, piece its parts together, I might gain what I had long sought: a deeper understanding of the strange times in which we live.

What did the predacious Zuckerberg represent? What meaning might the lobster’s reddened claw hold? And what of that bison, surely the most symbolically resonant of American fauna? I was on to something. At the least, I figured, I’d be able to squeeze a decent blog post out of the story.

The post never got written, but many others did. I’d taken up blogging early in 2005, just as it seemed everyone was talking about ‘the blogosphere’. I’d discovered, after a little digging on the domain registrar GoDaddy, that ‘roughtype.com’ was still available (an uncharacteristic oversight by pornographers), so I called my blog Rough Type. The name seemed to fit the provisional, serve-it-raw quality of online writing at the time.

Blogging has since been subsumed into journalism – it’s lost its personality – but back then it did feel like something new in the world, a literary frontier. The collectivist claptrap about ‘conversational media’ and ‘hive minds’ that came to surround the blogosphere missed the point. Blogs were crankily personal productions. They were diaries written in public, running commentaries on whatever the writer happened to be reading or watching or thinking about at the moment. As Andrew Sullivan, one of the form’s pioneers, put it: ‘You just say what the hell you want.’ The style suited the jitteriness of the web, that needy, oceanic churning. A blog was critical impressionism, or impressionistic criticism, and it had the immediacy of an argument in a bar. You hit the Publish button, and your post was out there on the world wide web, for everyone to see.

Or to ignore. Rough Type’s early readership was trifling, which, in retrospect, was a blessing. I started blogging without knowing what the hell I wanted to say. I was a mumbler in a loud bazaar. Then, in the summer of 2005, Web 2.0 arrived. The commercial internet, comatose since the dot-com crash of 2000, was up on its feet, wide-eyed and hungry. Sites such as MySpace, Flickr, LinkedIn and the recently launched Facebook were pulling money back into Silicon Valley. Nerds were getting rich again. But the fledgling social networks, together with the rapidly inflating blogosphere and the endlessly discussed Wikipedia, seemed to herald something bigger than another gold rush. They were, if you could trust the hype, the vanguard of a democratic revolution in media and communication – a revolution that would change society forever. A new age was dawning, with a sunrise worthy of the Hudson River School.

Rough Type had its subject.

The greatest of the United States’ homegrown religions – greater than Jehovah’s Witnesses, greater than the Church of Jesus Christ of Latter-Day Saints, greater even than Scientology – is the religion of technology. John Adolphus Etzler, a Pittsburgher, sounded the trumpet in his testament The Paradise Within the Reach of All Men (1833). By fulfilling its ‘mechanical purposes’, he wrote, the US would turn itself into a new Eden, a ‘state of superabundance’ where ‘there will be a continual feast, parties of pleasures, novelties, delights and instructive occupations’, not to mention ‘vegetables of infinite variety and appearance’.

Similar predictions proliferated throughout the 19th and 20th centuries, and in their visions of ‘technological majesty’, as the critic and historian Perry Miller wrote, we find the true American sublime. We might blow kisses to agrarians such as Jefferson and tree-huggers such as Thoreau, but we put our faith in Edison and Ford, Gates and Zuckerberg. It is the technologists who shall lead us.

Cyberspace, with its disembodied voices and ethereal avatars, seemed mystical from the start, its unearthly vastness a receptacle for the spiritual yearnings and tropes of the US. ‘What better way,’ wrote the philosopher Michael Heim in ‘The Erotic Ontology of Cyberspace’ (1991), ‘to emulate God’s knowledge than to generate a virtual world constituted by bits of information?’ In 1999, the year Google moved from a Menlo Park garage to a Palo Alto office, the Yale computer scientist David Gelernter wrote a manifesto predicting ‘the second coming of the computer’, replete with gauzy images of ‘cyberbodies drift[ing] in the computational cosmos’ and ‘beautifully laid-out collections of information, like immaculate giant gardens’.

The millenarian rhetoric swelled with the arrival of Web 2.0. ‘Behold,’ proclaimed Wired in an August 2005 cover story: we are entering a ‘new world’, powered not by God’s grace but by the web’s ‘electricity of participation’. It would be a paradise of our own making, ‘manufactured by users’. History’s databases would be erased, humankind rebooted. ‘You and I are alive at this moment.’

The revelation continues to this day, the technological paradise forever glittering on the horizon. Even money men have taken sidelines in starry-eyed futurism. In 2014, the venture capitalist Marc Andreessen sent out a rhapsodic series of tweets – he called it a ‘tweetstorm’ – announcing that computers and robots were about to liberate us all from ‘physical need constraints’. Echoing Etzler (and Karl Marx), he declared that ‘for the first time in history’ humankind would be able to express its full and true nature: ‘we will be whoever we want to be.’ And: ‘The main fields of human endeavour will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure.’ The only thing he left out was the vegetables.

Such prophecies might be dismissed as the prattle of overindulged rich guys, but for one thing: they’ve shaped public opinion. By spreading a utopian view of technology, a view that defines progress as essentially technological, they’ve encouraged people to switch off their critical faculties and give Silicon Valley entrepreneurs and financiers free rein in remaking culture to fit their commercial interests. If, after all, the technologists are creating a world of superabundance, a world without work or want, their interests must be indistinguishable from society’s. To stand in their way, or even to question their motives and tactics, would be self-defeating. It would serve only to delay the wonderful inevitable.

The Silicon Valley line has been given an academic imprimatur by theorists from universities and think tanks. Intellectuals spanning the political spectrum, from Randian right to Marxian left, have portrayed the computer network as a technology of emancipation. The virtual world, they argue, provides an escape from repressive social, corporate and governmental constraints; it frees people to exercise their volition and creativity unfettered, whether as entrepreneurs seeking riches in the marketplace or as volunteers engaged in ‘social production’ outside the marketplace. As the Harvard law professor Yochai Benkler wrote in his influential book The Wealth of Networks (2006):

This new freedom holds great practical promise: as a dimension of individual freedom; as a platform for better democratic participation; as a medium to foster a more critical and self-reflective culture; and, in an increasingly information-dependent global economy, as a mechanism to achieve improvements in human development everywhere.

Calling it a revolution, he said, is no exaggeration.

Benkler and his cohort had good intentions, but their assumptions were bad. They put too much stock in the early history of the web, when the system’s commercial and social structures were inchoate, its users a skewed sample of the population. They failed to appreciate how the network would funnel the energies of the people into a centrally administered, tightly monitored information system organised to enrich a small group of businesses and their owners.

The network would indeed generate a lot of wealth, but it would be wealth of the Adam Smith sort – and it would be concentrated in a few hands, not widely spread. The culture that emerged on the network, and that now extends deep into our lives and psyches, is characterised by frenetic production and consumption – smartphones have made media machines of us all – but little real empowerment and even less reflectiveness. It’s a culture of distraction and dependency. That’s not to deny the benefits of having easy access to an efficient, universal system of information exchange. It is to deny the mythology that shrouds the system. And it is to deny the assumption that the system, in order to provide its benefits, had to take its present form.

Late in his life, the economist John Kenneth Galbraith coined the term ‘innocent fraud’. He used it to describe a lie or a half-truth that, because it suits the needs or views of those in power, is presented as fact. After much repetition, the fiction becomes common wisdom. ‘It is innocent because most who employ it are without conscious guilt,’ Galbraith wrote in 1999. ‘It is fraud because it is quietly in the service of special interest.’ The idea of the computer network as an engine of liberation is an innocent fraud.

I love a good gizmo. When, as a teenager, I sat down at a computer for the first time – a bulging, monochromatic terminal connected to a two-ton mainframe processor – I was wonderstruck. As soon as affordable PCs came along, I surrounded myself with beige boxes, floppy disks and what used to be called ‘peripherals’. A computer, I found, was a tool of many uses but also a puzzle of many mysteries. The more time you spent figuring out how it worked, learning its language and logic, probing its limits, the more possibilities it opened. Like the best of tools, it invited and rewarded curiosity. And it was fun, head crashes and fatal errors notwithstanding.

In the early 1990s, I launched a browser for the first time and watched the gates of the web open. I was enthralled – so much territory, so few rules. But it didn’t take long for the carpetbaggers to arrive. The territory began to be subdivided, strip-malled and, as the monetary value of its data banks grew, strip-mined. My excitement remained, but it was tempered by wariness. I sensed that foreign agents were slipping into my computer through its connection to the web. What had been a tool under my own control was morphing into a medium under the control of others. The computer screen was becoming, as all mass media tend to become, an environment, a surrounding, an enclosure, at worst a cage. It seemed clear that those who controlled the omnipresent screen would, if given their way, control culture as well.

‘Computing is not about computers any more,’ wrote Nicholas Negroponte of the Massachusetts Institute of Technology in his bestseller Being Digital (1995). ‘It is about living.’ By the turn of the century, Silicon Valley was selling more than gadgets and software: it was selling an ideology. The creed was set in the tradition of US techno-utopianism, but with a digital twist. The Valley-ites were fierce materialists – what couldn’t be measured had no meaning – yet they loathed materiality. In their view, the problems of the world, from inefficiency and inequality to morbidity and mortality, emanated from the world’s physicality, from its embodiment in torpid, inflexible, decaying stuff. The panacea was virtuality – the reinvention and redemption of society in computer code. They would build us a new Eden not from atoms but from bits. All that is solid would melt into their network. We were expected to be grateful and, for the most part, we were.

Our craving for regeneration through virtuality is the latest expression of what Susan Sontag in On Photography (1977) described as ‘the American impatience with reality, the taste for activities whose instrumentality is a machine’. What we’ve always found hard to abide is that the world follows a script we didn’t write. We look to technology not only to manipulate nature but to possess it, to package it as a product that can be consumed by pressing a light switch or a gas pedal or a shutter button. We yearn to reprogram existence, and with the computer we have the best means yet. We would like to see this project as heroic, as a rebellion against the tyranny of an alien power. But it’s not that at all. It’s a project born of anxiety. Behind it lies a dread that the messy, atomic world will rebel against us. What Silicon Valley sells and we buy is not transcendence but withdrawal. The screen provides a refuge, a mediated world that is more predictable, more tractable, and above all safer than the recalcitrant world of things. We flock to the virtual because the real demands too much of us.

‘You and I are alive at this moment.’ That Wired story – under the headline ‘We Are the Web’ – nagged at me as the excitement over the rebirth of the internet intensified through the fall of 2005. The article was an irritant but also an inspiration. During the first weekend of October, I sat at my Power Mac G5 and hacked out a response. On Monday morning, I posted the result on Rough Type – a short essay under the portentous title ‘The Amorality of Web 2.0’. To my surprise (and, I admit, delight), bloggers swarmed around the piece like phagocytes. Within days, it had been viewed by thousands and had sprouted a tail of comments.

So began my argument with – what should I call it? There are so many choices: the digital age, the information age, the internet age, the computer age, the connected age, the Google age, the emoji age, the cloud age, the smartphone age, the data age, the Facebook age, the robot age, the posthuman age. The more names we pin on it, the more vaporous it seems. If nothing else, it is an age geared to the talents of the brand manager. I’ll just call it Now.

It was through my argument with Now, an argument that has now careered through more than a thousand blog posts, that I arrived at my own revelation, if only a modest, terrestrial one. What I want from technology is not a new world. What I want from technology are tools for exploring and enjoying the world that is – the world that comes to us thick with ‘things counter, original, spare, strange’, as Gerard Manley Hopkins once described it. We might all live in Silicon Valley now, but we can still act and think as exiles. We can still aspire to be what Seamus Heaney, in his poem ‘Exposure’, called inner émigrés.

A dead bison. A billionaire with a gun. I guess the symbolism was pretty obvious all along.

Forget Techno-Optimism: We Can’t Innovate Our Way Out of Inequality


By Chris Lehmann

Source: In These Times

Toward the end of his 250-page hymn to digital-age innovation, The Industries of the Future, Alec Ross pauses to offer a rare cautionary note. Silicon Valley may have incubated all the wonders and conveniences one can imagine—and oh, so many more! But for the international business elites looking to remake their emerging market economies in the Valley’s gleaming, khaki-clad image, there’s some bad news: It can no longer be done. A “decades-long head start” has granted too great a competitive advantage to the charmed peninsula along the Northern California coast.

Not to worry, though! On-the-make tech globalists can still make a go of it, provided they’re prepared to embrace “specific cultural and labor market characteristics that can contradict both a society’s norms and the more controlling impulses of government leaders.”

Stripped of the vague and glowing techno-babble, this is a prescription for good old-fashioned neoliberal market discipline. Everywhere Ross looks across the radically transformed world of digital commerce, the benign logic of market triumphalism wins the day. When Terry Gou—the Taiwanese CEO of Foxconn, the vast Chinese electronics sweatshop that doubles as an incubator for worker suicides—plans to eliminate the headache of supervising an unstable human workforce by replacing it with “the first fully automated plant” in manufacturing history, why, he’s simply “responding to pure market forces”: i.e., an increase in Chinese wages that cuts into Foxconn’s ridiculously broad profit margins. And you and I might see the so-called sharing economy as a means to casualize service workers into nonunion, benefit-free gigs that transfer economic value on a massive scale to a rentier class of Silicon Valley app marketers. But bouncy New Economy cheerleaders like Ross see “a way of making a market out of anything, and a microentrepreneur out of anyone.”

When confronted with the spiraling of income inequality in the digital age, Ross, like countless other prophets of better living through software, sagely counsels that “rapid progress often comes with greater instability.” Sure, the “wealthy generally benefit over the short term,” but remember, kids: “Innovations have the potential to become cheaper over time and spread throughout the greater population.”

Ross first stormed into political prominence as an architect of Barack Obama’s “technology and innovation plan” during his 2008 presidential campaign, and he then spent four years captaining his own charmed, closed circle of tech triumphalism as the State Department’s “senior advisor for innovation” under Secretary of State Hillary Clinton. This renders The Industries of the Future something more than another breathless, Tom Friedman-style tour of the wonderments being hatched in startups, trade confabs and gadget factories. Ross’ book is also a tech-policy playbook for the likely Democratic presidential nominee, who has spared no effort in soliciting the policy input—and landing the campaign donations—of the Silicon Valley mogul set. As such, it should give any Hillary-curious supporter of economic justice considerable pause.

To be sure, Ross raises some vague concerns about how, for example, the runaway growth of the sharing economy drains workers of job security, healthcare benefits, pensions and the like. He avers that “as the sharing economy grows … the safety net needs to grow with it,” but, much like his politically savvy boss, he offers nothing in the way of policy specifics besides the inarguable yet unactionable truism that if the sharing economy “generates enormous amounts of wealth for the platform owners, then the platform owners can and should help pay for added costs to society.”

The larger point for Ross, in any event, is that the innovative megafirms of tomorrow will come to spontaneously serve the public good. Not to mention that many IPO investors “are pension funds,” Ross coos, which “manage the retirement funds for people in the working class like teachers, police officers, and other civil servants.” Never mind, of course, that the neoliberal logic of the Uber model means that we’re creating a workforce that’s unlikely ever to come within shouting distance of a pension benefit again.

This kind of terminal Silicon Valley myopia also accounts for the vast economic and political blindspots that continually undermine Ross’ relentlessly chipper TED patter. To take just one instructive instance: in a book that devotes considerable real estate to the innovations of “fintech” (the streamlining of global digital currency exchanges and investment transactions), nowhere does the author acknowledge the pivotal role that tech-savvy Wall Street analysts—the “quants,” as they’re known in Street argot—played in stoking the early-aughts housing bubble that led to the near-meltdown of the global economy.

That’s because it’s an axiomatic faith for this brand of techno-prophecy that innovation can never actually make anything worse—in just the same fashion that the quants were insisting, right up until the end, that there could never be a downturn in the national housing market. If this is the kind of wisdom Hillary Clinton relied on to promote her global innovation agenda at the State Department, one shudders to think of how it might run riot through the White House come next January.

Related Video:

Glenn Greenwald Stands by the Official Narrative


By William A. Blunden

Source: Dissident Voice

Glenn Greenwald has written an op-ed piece for the Los Angeles Times. In this editorial he asserts that American spies are motivated primarily by the desire to thwart terrorist plots, and that their inability to do so (i.e., the attacks in Paris), coupled with the associated embarrassment, motivates a public relations campaign against Ed Snowden. Greenwald further concludes that recent events are being opportunistically leveraged by spy masters to pressure tech companies into installing back doors in their products. Over the course of this article what emerges is a worldview with a remarkable tendency to accept events at face value, a stance that’s largely at odds with Snowden’s own documents and statements.

For example, Greenwald states that American spies have a single overriding goal, to “find and stop people who are plotting terrorist attacks.” To a degree this concurs with the official posture of the intelligence community. Specifically, the Office of the Director of National Intelligence specifies four topical missions in its National Intelligence Strategy: Cyber Intelligence, Counterterrorism, Counterproliferation, and Counterintelligence.

Yet Snowden himself dispels this notion. In an open letter to Brazil he explained that “these [mass surveillance] programs were never about terrorism: they’re about economic spying, social control, and diplomatic manipulation. They’re about power.”

And the public record tends to support Snowden’s observation. If the NSA is truly focused on combatting terrorism, it has an odd habit of spying on oil companies in Brazil and Venezuela. In addition, anyone who does their homework understands that the CIA has a long history of overthrowing governments. This has absolutely nothing to do with stopping terrorism and much more to do with catering to powerful business interests in places like Iran (British Petroleum), Guatemala (United Fruit), and Chile (ITT Corporation). The late Michael Ruppert characterized the historical links between spies and the moneyed elite as follows: “The CIA is Wall Street, and Wall Street is the CIA.”1

The fact that Greenwald appears to accept the whole “stopping terrorism” rationale is extraordinary all by itself. But things get even more interesting…

Near the end of his article Greenwald notes that the underlying motivation behind the recent uproar of spy masters “is to depict Silicon Valley as terrorist-helpers for the crime of offering privacy protections to Internet users, in order to force those companies to give the U.S. government ‘backdoor’ access into everyone’s communications.”

But if history shows anything, it’s that the perception of an adversarial relationship between government spies and corporate executives has often concealed secret cooperation. Has Greenwald never heard of Crypto AG, or RSA, or even Google? These are companies who at the time of their complicity marketed themselves as protecting user privacy. In light of these clandestine arrangements Cryptome’s John Young comments that it’s “hard to believe anything crypto advocates have to say due to the far greater number of crypto sleazeball hominids reaping rewards of aiding governments than crypto hominid honorables aiding one another.”

It’s as if Greenwald presumes that the denizens of Silicon Valley, many of whose origins are deeply entrenched in government programs, have magically turned over a new leaf. As though the litany of past betrayals can conveniently be overlooked because things are different. Now tech vendors are here to defend our privacy. Or at least that’s what they’d like us to believe. In the aftermath of the PRISM scandal, which was disclosed by none other than Greenwald and Snowden, the big tech of Silicon Valley is desperate to portray itself as a victim of big government.

You see, the envoys of the Bay Area’s new economy have formulated a convincing argument. That’s what they get paid to do. The representatives of Silicon Valley explain in measured tones that tech companies have stopped working with spies because it’s bad for their bottom line. Thus aligning the interests of private capital with user privacy. But the record shows that spies often serve private capital. To help open up markets and provide access to resources in foreign countries. And make no mistake there’s big money to be made helping spies. Both groups do each other a lot of favors.

And so a question for Glenn Greenwald: what, pray tell, is there to prevent certain CEOs in Silicon Valley from betraying us yet again, secretly via covert backdoors, while engaged in a reassuring Kabuki theater with government officials about overt backdoors? Giving voice to public outrage while making deals behind closed doors. It’s not as if that hasn’t happened before, during an earlier debate about allegedly strong cryptography. Subtle zero-day flaws are, after all, plausibly deniable.

How can the self-professed advocate of adversarial journalism be so credulous? How could a company like Apple, despite its bold public rhetoric, resist overtures from spy masters any more than Mohammad Mosaddegh, Jacobo Árbenz, or Salvador Allende? Doesn’t adversarial journalism mean scrutinizing corporate power as well as government power?

Glenn? Hello?

Methinks Mr. Greenwald has some explaining to do. Whether he actually responds with anything other than casual dismissal remains to be seen.

  1. Michael C. Ruppert, Crossing the Rubicon: The Decline of the American Empire at the End of the Age of Oil, New Society Publishers, 2004, Chapter 3, page 53.

Bill Blunden is an independent investigator whose current areas of inquiry include information security, anti-forensics, and institutional analysis. He is the author of several books, including The Rootkit Arsenal and Behold a Pale Farce: Cyberwar, Threat Inflation, and the Malware-Industrial Complex. He is the lead investigator at Below Gotham Labs.