After the Crash

Dispatches From a Long Recovery (Est. 10/2024)

The Crisis of the Now: Distracted and Diverted from the Ever-Encroaching Police State

By John W. Whitehead

Source: The Rutherford Institute

“When a population becomes distracted by trivia, when cultural life is redefined as a perpetual round of entertainments, when serious public conversation becomes a form of baby talk, when, in short, a people become an audience and their public business a vaudeville act, then a nation finds itself at risk: culture-death is a clear possibility.”—Author Neil Postman

Caught up in the spectacle of the forthcoming 2016 presidential elections, Americans (never very good when it comes to long-term memory) have not only largely forgotten last year’s hullabaloo over militarized police, police shootings of unarmed citizens, asset forfeiture schemes, and government surveillance but are also generally foggy about everything that has happened since.

Then again, so much is happening on a daily basis that it’s understandable if the average American has a hard time keeping up with and remembering all of the “events,” manufactured or otherwise, which occur like clockwork and keep us distracted, deluded, amused, and insulated from reality while the government continues to amass more power and authority over the citizenry.

In fact, when we’re being bombarded with wall-to-wall news coverage and news cycles that change every few days, it’s difficult to stay focused on one thing—namely, holding the government accountable to abiding by the rule of law—and the powers-that-be understand this. As investigative journalist Mike Adams points out:

This psychological bombardment is waged primarily via the mainstream media which assaults the viewer by the hour with images of violence, war, emotions and conflict. Because the human nervous system is hard wired to focus on immediate threats accompanied by depictions of violence, mainstream media viewers have their attention and mental resources funneled into the never-ending ‘crisis of the NOW’ from which they can never have the mental breathing room to apply logic, reason or historical context.

Consider if you will the regularly scheduled trivia and/or distractions in the past year alone that have kept us tuned into the various breaking news headlines and entertainment spectacles and tuned out to the government’s steady encroachments on our freedoms:

Americans were riveted when the Republican presidential contenders went head-to-head for the second time in a three-hour debate that put Carly Fiorina in a favored position behind Donald Trump; Hillary Clinton presented the softer side of her campaign image during an appearance on The Tonight Show with Jimmy Fallon; scientists announced the discovery of what they believed to be a new pre-human species, Homo naledi, that existed 2.8 million years ago; an 8.3 magnitude earthquake hit Chile; massive wildfires burned through 73,000 acres in California; a district court judge reversed NFL player Tom Brady’s four-game suspension; tennis superstar Serena Williams lost her chance at a calendar grand slam; and President Obama and Facebook mogul Mark Zuckerberg tweeted their support for a Texas student arrested for bringing a homemade clock to school.

That was preceded by the first round of the Republican presidential debates; an immigration crisis in Europe; the relaxing of Cuba-U.S. relations; the first two women soldiers graduating from Army Ranger course; and three Americans being hailed as heroes for thwarting a train attack in France. Before that, there was the removal of the Confederate flag from the South Carolina statehouse; shootings at a military recruiting center in Tennessee and a movie theater in Louisiana; the Boy Scouts’ decision to end its ban on gay adult leaders; the first images sent by the New Horizons spacecraft of Pluto; and the victory over Japan of the U.S. in the Women’s World Cup soccer finals.

No less traumatic and distracting were the preceding months’ newsworthy events, which included a shooting at a Charleston, S.C., church; the trial and sentencing of Boston Marathon bomber suspect Dzhokhar Tsarnaev; the U.S. Supreme Court’s affirmation of same-sex marriage, Obamacare, lethal injection drugs and government censorship of Confederate flag license plates; and an Amtrak train crash in Philadelphia that left more than 200 injured and eight dead.

Also included in the mix of distressing news coverage was the death of 25-year-old Freddie Gray while in police custody and the subsequent riots in Baltimore and city-wide lockdown; the damning report by the Dept. of Justice into discriminatory and abusive practices by the Ferguson police department; the ongoing saga of Hillary Clinton’s use of a private email account while serving as secretary of state; the apparently deliberate crash by a copilot of a German jetliner in the French Alps, killing all 150 passengers and crew; the New England Patriots’ fourth Super Bowl win; a measles outbreak in Disneyland; the escalating tensions between New York police and Mayor Bill de Blasio over his seeming support for anti-police protesters; and a terror attack at the Paris office of satire magazine Charlie Hebdo.

Rounding out the year’s worth of headline-worthy news stories were protests over grand jury refusals to charge police for the deaths of Eric Garner and Michael Brown; the disappearance of an AirAsia flight over the Java Sea; an Ebola outbreak that resulted in several victims being transported to the U.S. for treatment; reports of domestic violence among NFL players; a security breach at the White House in which a man managed to jump the fence, cross the lawn and enter the main residence; and the reported beheading of American journalist Steven Sotloff by ISIS.

That doesn’t even begin to touch on the spate of entertainment news that tends to win the battle for Americans’ attention: Bruce Jenner’s transgender transformation to Caitlyn Jenner; the death of Whitney Houston’s daughter Bobbi Kristina Brown; Kim Kardashian’s “break the internet” nude derriere photo; sexual assault allegations against Bill Cosby; the suicide of Robin Williams; the cancellation of the comedy The Interview in movie theaters after alleged terror hack threats; the wedding of George Clooney to Amal Alamuddin; the wedding of Angelina Jolie and Brad Pitt; the ALS ice bucket challenge; and the birth of a baby girl to Prince William and Kate.

As I point out in my book Battlefield America: The War on the American People, these sleight-of-hand distractions, diversions and news spectacles are how the corporate elite controls a population by entrapping them in the “crisis of the NOW,” either inadvertently or intentionally, advancing their agenda without much opposition from the citizenry.

Professor Jacques Ellul studied this phenomenon of overwhelming news, short memories and the use of propaganda to advance hidden agendas. “One thought drives away another; old facts are chased by new ones,” wrote Ellul.

“Under these conditions there can be no thought. And, in fact, modern man does not think about current problems; he feels them. He reacts, but he does not understand them any more than he takes responsibility for them. He is even less capable of spotting any inconsistency between successive facts; man’s capacity to forget is unlimited. This is one of the most important and useful points for the propagandists, who can always be sure that a particular propaganda theme, statement, or event will be forgotten within a few weeks.”

But what exactly has the government (aided and abetted by the mainstream media) been doing while we’ve been so cooperatively fixated on whatever current sensation happens to be monopolizing the so-called “news” shows?

If fully disclosed, consistently reported on and properly digested by the citizenry, the sheer volume of the government’s activities, which undermine the Constitution and in many instances are outright illegal, would inevitably give rise to a sea change in how business is conducted in our seats of power.

Surely Americans would be concerned about the Obama administration’s plans to use behavioral science tactics to “nudge” citizens to comply with the government’s public policy and program initiatives? There would be no end to the uproar if Americans understood the ramifications of the government’s plan to train non-medical personnel—teachers, counselors and other lay people—in “mental health first aid” so that they can screen, identify and report individuals suspected of suffering from mental illness. The problem, of course, arises when these very same mental health screeners misdiagnose opinions or behavior involving lawful First Amendment activities as mental illness, resulting in involuntary detentions in psychiatric wards for the unfortunate victims.

Parents would be livid if they had any inkling about the school-to-prison pipeline, namely, how the public schools are being transformed from institutions of learning to prison-like factories, complete with armed police and surveillance cameras, aimed at churning out compliant test-takers rather than independent-minded citizens. And once those same young people reach college, they will be indoctrinated into believing that they have a “right” to be free from acts and expressions of intolerance with which they might disagree.

Concerned citizens should be up in arms over the government’s end-run tactics to avoid abiding by the rule of law, whether by outsourcing illegal surveillance activities to defense contractors, outsourcing inhumane torture to foreign countries, causing American citizens to disappear into secret interrogation facilities, or establishing policies that would allow the military to indefinitely detain any citizen—including journalists—considered a belligerent or enemy.

And one would hope American citizens would be incensed about being treated like prisoners in an electronic concentration camp, their every movement monitored, tracked and recorded by a growing government surveillance network that runs the gamut from traffic cameras and police body cameras to facial recognition software. Or outraged that we will be forced to fund a $93 billion drone industry that will be used to spy on our movements and activities, not to mention the fact that private prisons are getting rich (on our taxpayer dollars) by locking up infants, toddlers, children and pregnant women.

Unfortunately, while 71% of American voters are “dissatisfied” with the way things are going in the United States, that discontent has yet to bring about any significant changes in the government, nor has it caused the citizenry to get any more involved in their government beyond the ritualistic election day vote.

Professor Morris Berman suggests that the problems plaguing us as a nation—particularly as they relate to the government—have less to do with our inattention to corruption than our sanctioning, tacit or not, of such activities. “It seems to me,” writes Berman, “that the people do get the government they deserve, and even beyond that, the government who they are, so to speak.”

In other words, if we end up with a militarized police state, it will largely be because we welcomed it with open arms. In fact, according to a recent poll, almost a third of Americans would support a military coup “to take control from a civilian government which is beginning to violate the constitution.”

So where does that leave us?

As legendary television journalist Edward R. Murrow warned, “Unless we get up off our fat surpluses and recognize that television in the main is being used to distract, delude, amuse, and insulate us, then television and those who finance it, those who look at it, and those who work at it, may see a totally different picture too late.”

The Role of Dystopian Fiction in a Dystopian World

By Luther Blissett and J. F. Sebastian of Arkesoul

A few years ago, Neal Stephenson wrote a widely shared article called “Innovation Starvation” for the World Policy Institute. He began the piece lamenting our inability to fulfill the hopes and dreams of mid-20th-century mainstream American society. Looking back at the majority of sci-fi visions of the era, it’s clear many thought we’d be living in a utopian golden age and exploring other planets by now. In reality, the pace of technological innovation has seemingly slowed compared to the first half of the 20th century, which saw the creation of cars, airplanes, electronic computers, and more. Stephenson also cites the Deepwater Horizon oil spill and the Fukushima disaster as examples of how we’ve collectively lost our ability to “execute on the big stuff”.

Stephenson’s explanation for this predicament is twofold: outdated bureaucratic structures that discourage risk-taking and innovation, and the failure of cultural creatives to provide “big visions” which dispute the notion that we have all the technology we’ll ever need. While there’s much to be said about archaic, inefficient (and corrupt) bureaucracies, there’s also a compelling argument here about the cultural importance of storytelling and art and how best to employ them. One of the solutions offered by Stephenson, in this regard, is Project Hieroglyph, which he describes as “an effort to produce an anthology of new SF that will be in some ways a conscious throwback to the practical techno-optimism of the Golden Age.”

While Project Hieroglyph may be a noble endeavor, one could argue that it’s based on a flawed premise. The role of science fiction has never been just to supply grand visions for a better future, but to make sense of the present. There seems to be an assumption that the optimism of the Golden Age had a causal relationship with a perceived technological golden age, when it may simply have been a reflection of it—just as dystopian sci-fi reflects and strongly resonates with the world today. Stephenson may be correct in his view that much SF today is written in a “generally darker, more skeptical and ambiguous tone”, but this more nuanced perspective does not necessarily signify a belief that “we have all the technology we’ll ever need”. Rather, it reflects decades of collective experience with the unforeseen and cumulative effects of technologies. Nor does such fiction focus only on the destructive effects of technology, as large a component of the narrative as they may be simply because they make for better drama; the subtext is often intended as critique rather than celebration. For example, the archetypal hacker protagonists of technocratic cyberpunk dystopias employ technology for more positive ends (though some question whether good SF, as in speculative fiction, needs to involve new technology at all).

A particularly positive function for dystopian sci-fi is its use as rhetorical shorthand. It’s increasingly common in public discourse on major issues of the day to invoke dystopian references. Disastrous social effects of peak oil or post-collapse are often characterized as Mad Max scenarios. Various negative aspects of genetic modification and pharmaceutical development conjure Brave New World. Anxiety over out-of-control AI and the resultant devaluing of human life brings to mind films as varied as Blade Runner, The Matrix and The Terminator. The expanding police/surveillance state is reminiscent of 1984 and numerous classics which have followed in its footsteps, including V for Vendetta and Brazil. General fears of duplicitous, psychopathic power elites and social manipulation have elevated They Live from relatively obscure B-movie to cult classic. The entry of the term “zombie apocalypse” into the popular lexicon may in part stem from fear (and uncomfortable recognition) of images of viral social disintegration and martial law-enforced containment efforts depicted throughout various media. The burgeoning omnipotence of multinational corporations and hackers in Mr. Robot may have been the stuff of cyberpunk dystopias such as Neuromancer and Max Headroom 30 years ago, yet it still has much to contribute to the public discourse as contemporary drama. Such visions may not prevent (or have not prevented) the scenarios they warn us of, but they have provided a vocabulary and framework for understanding such problems, and who’s to say how much worse things could be had such cautionary memes never existed?

The prophetic nature of storytelling, inasmuch as it derives from the minds of authors, artists and commentators who coexist with the tensions and contexts particular to their epochs, resonates with the oughts, ifs, and whats inherent to our daily lives. The cautionary element of narrative is a natural product of the human mind, and among the most valuable things we share from our mental reserves with the world. To creatively delve into experience and concoct problems and solutions from it is an axiom analogous to the categorical imperative—purely, and in the abstract, an expression of what rationality involves. Yet oftentimes we find material that confronts cultural malaise, of all things pathological in our society: censorship, conformity, bureaucracy, authoritarianism, militarism, and capital marketing; things which underpin issues that, if left untouched, can engulf the real brilliance of our spirit.

Stephenson fails to see this point. SF, like any form of intelligent culture, denounces and opposes systems of oppression, and even shows us the how, when, and why—the frameworks, the makings of apparent utopias into dystopias. Dystopian storytelling can serve the efforts of downtrodden creators with utopian ideals as effectively as utopian stories can reframe a societal trajectory led by beneficiaries of real-world dystopia (though it may be experienced as utopia by a privileged few). SF does not only conjure visions of better futures. It lends us vocabularies and syntaxes with which to understand, and to impede, the fallenness of a confused and ever more isolated humanity. These are languages that pervade our interiorities and allow the exterior to change.

At the core, SF is prophecy through reasoned extrapolation and artistic intuition. This is what SF stands for when properly aligned with the subjectivities of the oppressed, and not with the voices of oppression: true testaments of a space and a time; visions of the future that are careful not to repeat the mistakes of the past; and tools for our personal and collective flourishing.

Skynet Ascendant

By Cory Doctorow

Source: Locus Online

As I’ve written here before, science fiction is terrible at predicting the future, but it’s great at predicting the present. SF writers imagine all the futures they can, and these futures are processed by a huge, dynamic system consisting of editors, booksellers, and readers. The futures that attain popular and commercial success tell us what fears and aspirations for technology and society are bubbling in our collective imaginations.

When you read an era’s popular SF, you don’t learn much about the future, but you sure learn a lot about the past. Fright and hope are the inner and outer boundaries of our imagination, and the stories that appeal to either are the parameters of an era’s political reality.

Pay close attention to the impossibilities. When we find ourselves fascinated by faster than light travel, consciousness uploading, or the silly business from The Matrix of AIs using human beings as batteries, there’s something there that’s chiming with our lived experience of technology and social change.

Postwar SF featured mass-scale, state-level projects, a kind of science fictional New Deal. Americans and their imperial rivals built cities in space, hung skyhooks in orbit, even made Dyson Spheres that treated all the Solar System’s matter as the raw material for a new, human-optimized megaplanet/space-station that would harvest every photon put out by our sun and put it to work for the human race.

Meanwhile, the people buying these books were living in an era of rapid economic growth, and even more importantly, the fruits of that economic growth were distributed to the middle class as well as to society’s richest. This was thanks to nearly unprecedented policies that protected tenants at the expense of landlords, workers at the expense of employers, and buyers at the expense of sellers. How those policies came to be enacted is a question of great interest today, even as most of them have been sunsetted by successive governments across the developed world.

Thomas Piketty’s data-driven economics bestseller Capital in the Twenty-First Century argues that the vast capital destruction of the two World Wars (and the chaos of the interwar years) weakened the grip of the wealthy on the governments of the world’s developed states. The arguments in favor of workplace safety laws, taxes on capital gains, and other policies that undermined the wealthy and benefited the middle class were not new. What was new was the political possibility of these ideas.

As developed nations’ middle classes grew, so did their material wealth, political influence, and expectations that governments would build ambitious projects like interstate highways and massive civil engineering works. These were politically popular – because lawmakers could use them to secure pork for their voters – and also lucrative for government contractors, making ‘‘Big Government’’ a rare point of agreement between the rich and middle-income earners.

(A note on poor people: Piketty’s data suggests that the share of the national wealth controlled by the bottom 50% has not changed much for several centuries – eras of prosperity are mostly about redistributing from the top 10-20% to the next 30-40%.)

Piketty hypothesizes that the returns on investment are usually greater than the rate of growth in an economy. The best way to get rich is to start with a bunch of money that you turn over to professional managers to invest for you – all things being equal, this will make you richer than you could get by inventing something everyone uses and loves. For example, Piketty contrasts Bill Gates’s fortunes as the founder of Microsoft, once the most profitable company in the world, with Gates’s fortunes as an investor after his retirement from the business. Gates-the-founder made a lot less by creating one of the most successful and profitable products in history than he did when he gave up making stuff and started owning stuff for a living.
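That r > g dynamic can be made concrete with some back-of-the-envelope compounding arithmetic. The rates below are hypothetical round numbers chosen purely for illustration, not figures drawn from Piketty’s data:

```python
# Illustrative sketch of r > g: a fortune compounding at a (hypothetical)
# return on capital versus an economy compounding at a (hypothetical)
# growth rate. Rates and horizon are assumptions for illustration only.

def compound(principal: float, rate: float, years: int) -> float:
    """Value of `principal` growing at `rate` per year for `years` years."""
    return principal * (1 + rate) ** years

r = 0.05   # assumed annual return on invested capital
g = 0.015  # assumed annual growth of the wider economy

fortune = compound(100, r, 50)  # an inherited fortune, left with managers
economy = compound(100, g, 50)  # the economy everyone else lives in

# The fortune's share of the (growing) pie multiplies over five-fold.
print(round(fortune / economy, 1))  # → 5.4
```

The point of the toy numbers is the shape of the curve, not the specific values: as long as the return on capital exceeds growth, owning stuff compounds faster than the economy that making stuff depends on, which is the mechanism behind Gates-the-investor outpacing Gates-the-founder.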

By the early 1980s, the share of wealth controlled by the top decile tipped over to the point where they could make their political will felt again – again, Piketty supports this with data showing that nations elect seriously investor-friendly/worker-unfriendly governments when investors gain control over a critical percentage of the national wealth. Leaders like Reagan, Thatcher, Pinochet, and Mulroney enacted legislative reforms that reversed the post-war trend, dismantling the rules that had given skilled workers an edge over their employers – and the investors the employers served.

The greed-is-good era was also the cyberpunk era of literary globalized corporate dystopias. Even though Neuromancer and Mirrorshades predated the anti-WTO protests by a decade and a half, they painted similar pictures. Educated, skilled people – people who comprised the mass of SF buyers – became a semi-disposable underclass in a world where the hyperrich had literally ascended to the heavens, living in orbital luxury hotels and harvesting wealth from the bulk of humanity like whales straining krill.

Seen in this light, the vicious literary feuds between the cyberpunks and the old guard of space-colonizing stellar engineer writers can be seen as a struggle over our political imagination. If we crank the state’s dials all the way over to the right, favoring the industrialist ‘‘job creators’’ to the exclusion of others, will we find our way to the stars by way of trickle-down, or will the overclass graft their way into a decadent New Old Rome, where reality TV and hedge fund raids consume the attention and work we once devoted to exploring our solar system?

Today, wealth disparity consumes the popular imagination and political debates. The front-running science fictional impossibility of the unequal age is rampant artificial intelligence. There were a lot of SF movies produced in the mid-eighties, but few retain the currency of the Terminator and its humanity-annihilating AI, Skynet. Everyone seems to thrum when that chord is plucked – even the NSA named one of its illegal mass surveillance programs SKYNET.

It’s been nearly 15 years since the Matrix movies debuted, but the Red Pill/Blue Pill business still gets a lot of play, and young adults who were small children when Neo fought the AIs know exactly what we mean when we talk about the Matrix.

Stephen Hawking, Elon Musk, and other luminaries have issued panicked warnings about the coming age of humanity-hating computerized overlords. We dote on the party tricks of modern AIs, sending half-admiring/half-dreading laurels to the Watson team when it manages to win at Jeopardy or random-walk its way into a new recipe.

The fear of AIs is way out of proportion to their performance. The Big Data-trawling systems that are supposed to find terrorists or figure out what ads to show you have been a consistent flop. Facebook’s new growth model is sending a lot of Web traffic to businesses whose Facebook followers are increasing, waiting for them to shift their major commercial strategies over to Facebook marketing, then turning off the traffic and demanding recurring payments to send it back – a far cry from using all the facts of your life to figure out that you’re about to buy a car before even you know it.

Google’s self-driving cars can only operate on roads that humans have mapped by hand, manually marking every piece of street-furniture. The NSA can’t point to a single terrorist plot that mass-surveillance has disrupted. Ad personalization sucks so hard you can hear it from orbit.

We don’t need artificial intelligences that think like us, after all. We have a lot of human cognition lying around, going spare – so much that we have to create listicles and other cognitive busy-work to absorb it. An AI that thinks like a human is a redundant vanity project – a thinking version of the ornithopter, a useless mechanical novelty that flies like a bird.

We need machines that don’t fly like birds. We need AI that thinks unlike humans. For example, we need AIs that can be vigilant for bomb-parts on airport X-rays. Humans literally can’t do this. If you spend all day looking for bomb-parts but finding water bottles, your brain will rewire your neurons to look for water bottles. You can’t get good at something you never do.

What does the fear of futuristic AI tell us about the parameters of our present-day fears and hopes?

I think it’s corporations.

We haven’t made Skynet, but we have made these autonomous, transhuman, transnational technologies whose bodies are distributed throughout our physical and economic reality. The Internet of Things version of the razorblade business model (sell cheap handles, use them to lock people into buying expensive blades) means that the products we buy treat us as adversaries, checking to see if we’re breaking the business logic of their makers and self-destructing if they sense tampering.

Corporations run on a form of code – financial regulation and accounting practices – and the modern version of this code literally prohibits corporations from treating human beings with empathy. The principle of fiduciary duty to investors means that where there is a chance to make an investor richer while making a worker or customer miserable, management is obliged to side with the investor, so long as the misery doesn’t backfire so much that it harms the investor’s quarterly return.

We humans are the inconvenient gut-flora of the corporation. They aren’t hostile to us. They aren’t sympathetic to us. Just as every human carries a hundred times more non-human cells in her gut than she has in the rest of her body, every corporation is made up of many separate living creatures that it relies upon for its survival, but which are fundamentally interchangeable and disposable for its purposes. Just as you view stray gut-flora that attacks you as a pathogen and fight it off with antibiotics, corporations attack their human adversaries with an impersonal viciousness that is all the more terrifying for its lack of any emotional heat.

The age of automation gave us stories like Chaplin’s Modern Times, and the age of multinational hedge-fund capitalism made The Matrix into an enduring parable. We’ve gone from being cogs to being a reproductive agar within which new corporations can breed. As Mitt Romney reminded us, ‘‘Corporations are people.’’

Marrying robots, killing with drones, and making empty selfies

by Edward Curtin

Source: Intrepid Report

Today everything has become a spectacle, including writing. My title probably caught your eye, as it was intended. But now I would like to tell you a personal story about a man whose brilliant work foreshadowed and dissected the issues of my title before it existed. In this he was prophetic, and it is why his work is so important. He always insisted that true artists were able to uncover society’s conflicts before they emerged consciously. Though a psychologist by profession, he was in this sense an artist as well.

His name, Rollo May, has disappeared from public discourse in this era of biological psychology and psychiatry. This great American thinker and writer was the man who introduced existential psychology to the United States. And though he died twenty-one years ago, his prescient voice begs to be heard in our current conditions.

From his first important book in 1950—The Meaning of Anxiety—he examined key underlying issues that have plagued this country ever since: the worship of technology as a death cult; the loss of a genuine sense of self; sex obsessions leading to lovelessness and impotence; and violence yoked to a lack of compassion.

In book after book, he reiterated one of his central themes: that full passionate life is only possible when one refuses to block off from consciousness the frightful emotions of anxiety, guilt, and despair. In this, his life’s work ran against the grain of the emerging zeitgeist of happy pills, mood stabilizers, and the happiness industry. “After despair,” he wrote, “the one thing left is possibility.” For possibility (Latin, posse, to be able) means power, and true power only comes to those who dare to be weak and freely embrace their personal destinies and the truth of their political and cultural conditions. I think it is not an exaggeration to say that we are presently living in an era of despair, and to embrace that reality is a hard but necessary pill to swallow. May is a wonderful guide.

While topical, in many ways his message is timeless as well. But I would like to tell you about some things I learned from him years ago that speak to our current condition. And it seems fitting that I should begin these thoughts on a day when a prominent, mainstream website has published an article arguing that humans should be able to marry robots and that the day of those blissful conjugal ties lies in our not-too-distant future. So I will proceed with those lovely words ringing in my mind: “I now pronounce you robot and wife.”

It was during the closing years of the Cold War when he and I sat down for a long conversation about his thought. Cold War rhetoric and nuclear saber rattling dominated the news and a strong anti-nuclear movement was astir. I had been deeply impressed with May’s paradoxical thinking ever since I had read his award-winning Love and Will in 1969, a year in which I had been forced out of a college teaching position for “heretical” thinking and opposition to the Vietnam war. In his work, which was not openly political, I nevertheless found a voice of deep wisdom and prophetic power. He seemed to be unearthing hidden springs of the madness sweeping the country, and in so doing also addressing the future, and, of course, me. I was feeling particularly vulnerable, yet paradoxically intensely strong, as I had recently declared myself a conscientious objector from war and the Marine Corps. It was a time like today when death and destruction were in the air, and, as Yeats puts it: “Things fall apart; the center cannot hold/Mere anarchy is loosed upon the world/The blood-dimmed tide is loosed, and everywhere/The ceremony of innocence is drowned/The best lack all conviction, while the worst/Are full of passionate intensity.”

The first thing I noticed about May the day we met was that he seemed painfully vulnerable, as though he had so opened himself to existence that the slightest breeze could blow him away. Yet when he began talking I sensed a fierceness, as well, as I recalled a favorite quote of his from Beethoven: “I will seize fate by the throat.”

So I asked him, “In reading your works one of the things that strikes me is the vitality you draw from an awareness of death. Most people would call this morbid and depressing, and yet it seems to bring you joy. I wonder how this began for you?”

“Well,” he answered without hesitation but in his ruminative way, “I’ve had some long bouts with killing illnesses. I had tuberculosis for five or six years. I had malaria fever when I was in Greece. And I’ve had several other bouts with death. If most people would call the consciousness of death depressive, I think they are the ones who have the—what I would call—masochistic or neurotic viewpoint. All through human history mortality has been faced directly, and out of it (and this is especially true of the ancient Greeks) people got the sense of the value of life from the fact that we are mortal. Now our age is afraid of death and we repress it and we think the only wise thing is to think about living, which strikes me as itself very sick. It’s because we’ve wedded ourselves to technology, and technology is really a study of death. You say ‘vitality.’ You can’t speak of technology as having vitality. Vitality is the human being’s contribution, and he ought to use technology to make his life richer. But we have become identified with it.”

Presto! Back to the present/future! As if on cue, a refutation of May’s dismissal of machines having life walks in my door. I see the mailman deliver our mail, so I get up and fetch it. An invitation has arrived for a public lecture at the college where I teach—a lecture by the futurist Ray Kurzweil, the man of “Singularity” fame, the prognosticator of a coming day when, he says, artificial intelligence will surpass human intelligence and human biology will disappear into the machine. Ray has a plan never to die, so he takes 130-plus supplements a day to keep himself alive until he is able to upload his consciousness onto a hard drive and become one with the machine for a happy immortality as bits of information. Sounds like a great hereafter. And Ray has a backup plan in case the pills don’t do the trick and keep him going until he impregnates the machine: he’ll be fresh frozen at the Alcor Life Extension Foundation, where he expects to be defrosted like a frozen burrito in no more than fifty years.

May said to me, “I’m very much against the quantitative views of human life. You could live exceptionally as Pascal did and die in your middle forties. As Kierkegaard did also. The length of life I don’t think is relevant. The idea that we are going to prolong life for two hundred years seems to me to be the most misplaced goal in the whole technological, crazy scheme.”

It looks like Rollo had a point: the worship of technology is a death cult. He could see it then, and today it is carrying us to our doom unless we change course. “More and more,” he wrote, “the question is being asked whether society as a whole is psychotic, and the pause after the question is a sign that the answer could be yes as well as no.” There was, he then felt, a fear of psychosis on a very broad scale, and at the heart of this fear is a loss of faith in the reality of the self, as well as a widespread feeling that one can never be sure anything is real. This sense of unreality has increased exponentially since then, and the issue of self-identity has become a hall of mirrors in our reality-media funhouse. “As in a Kafka novel, everything is waiting for us, but we ourselves do not appear.” But one thing does appear today, as it did then, though in a slightly different guise, and it grows larger and larger as people’s faith in themselves grows smaller and their sense of impotence increases: the possibility of nuclear warfare and world destruction, a new cold war started by the United States by encircling Russia and setting Ukraine ablaze. The ultimate technological death cult is, of course, nuclear weapons.

May made the connections. Like the great sociologist C. Wright Mills, he knew that our destinies are personal and social, and to deny one is to deny the other. By being existential he meant understanding the individual not as an atomized self but as a person-in-the-world. Mills called it the sociological imagination; May preferred the term paradoxical. But they were on the same page. One’s sense of self—self-identity—is rooted in social and historical conditions.

Starting with Man’s Search for Himself in the 1950s and continuing until his death in 1994, May repeatedly explored the reasons why there was an increasing loss of a genuine sense of self resulting in widespread identity confusion and a growing apathy linked to a lack of compassion. He clearly described the anxiety and loneliness that ate at so many people who “not only do not know what they want; they often do not have any clear idea of what they feel.” Feeling only empty and bored and lacking a real sense of self, they conform to hollow cultural values and mores while consuming the goods and services that a consumer culture offers to fill them up. Consuming, they are consumed. This powerless dependency, rooted in a lack of self-identity and the need to be liked, leads to painful anxiety, despair, and powerlessness resulting in acquiescence to social ills. This is today’s selfie/media culture in a nutshell, what Christopher Lasch once called the culture of narcissism.

I obviously couldn’t ask him when we talked, but I can imagine his response to today’s trends of people marrying robots, selfie photos, Facebook, avatars and second lives in cyberspace, the growth of pornography, sex with machines, the sexual saturation of culture, electronic warfare, drone killings, etc.—a bemused laugh and a comment suggesting the tragedy of it all. In Love and Will he wrote that “the contemporary paradoxes in sex and love have one thing in common, namely the banalization of sex and love. By anesthetizing feeling in order to perform better, by employing sex as a tool to prove prowess and identity, by using sensuality to hide sensitivity, we have emasculated sex and left it vapid and empty. The banalization of sex is well-aided and abetted by our mass communication. . . . They oversimplify love and sex, treating the topic like a combination of learning to play tennis and buying life insurance. In this process, we have robbed sex of its power by sidestepping eros (the creative life force); and we have ended by dehumanizing both.” He predicted that this technical approach to sex would lead to sex obsessions, lovelessness, and increased sexual impotence. And here we are—Viagra, big butts, enhanced this and enhanced that—all in the service of sexual satisfaction produced by the cult of technique and devoid of passion.

“Shooting” yourself with a phone camera, sex with a robot or a machine, and killing with drones—this is life today. We have become separated from our humanity by our machines. We worship our images and in so doing can’t grasp the death and destruction caused by our drones and foreign wars. Others don’t exist in this solipsistic culture. May saw it coming and explained why. He saw that violence was yoked to a lack of compassion and that this lack of compassion (to suffer with others) was connected to our flight from death and emotions we consider negative. He saw this form of thinking as an effort to control life that was self-defeating and could only lead to more violence.

“Paradoxical thinking,” he told me, “seems to me to be the only kind that gets to the root of human existence. I don’t think analytical thinking does. It leaves out too much. You remember Heraclitus. I think he’s quite right that we always think in terms of positive/negative. We think like electricity, thus both the negative and positive pole and the oscillation back and forth, and human thinking is a play with opposites.”

Since he has written so much about the breakdown of our traditional myths and symbols, I asked him if there was any one word or symbol that he thought encompassed the body of his work.

After a long pause, he said, “No, I think that’s impossible for any person who writes to say. I think you could say it much better than I could because we’re so much in it. All I know is that I think paradoxically.” And without pause or any word from me, he continued. “Well, if you wanted to push me, I would say that what I think is the basic, well, the basic symbol of my life, I would say that it is compassion. That’s what matters most to me. I grew up in a rather difficult family, quite difficult. I did not have a good childhood. I was quite lonely as a child. And I did suffer a good deal.”

Out of this childhood pain, he learned early to be a therapist for his family, and he felt that these experiences gave him an acute sensitivity to others’ feelings. In his memoir Paulus, about his friend Paul Tillich, the great Protestant theologian, he wrote words that could equally apply to himself: “Someone has to mediate, to make a connection through his own life between opposites.” For out of his wounds, May created a powerful body of writings, and out of a torn self, a paradox of wholeness.

For us today, in the era of apathy, depression, and indifference to the suffering and deaths of “others” everywhere, May’s work begs to be resurrected. He urges us to care again, and to let our care and compassion lead us to act to stop the violence that we are taught to ignore. Don’t look away, I can hear him say, face fully all dimensions of the human experience, the negative and positive; remember that despair and joy are linked to the possibility of freedom; reject the cult of death that hides within technological obsessiveness; and remember that love brings the intimation of our mortality but also our greatest joys and passions.

And if he were still sitting across from me—and you—today, he’d probably also say with a grin, “Above all, don’t marry a robot.”

Edward Curtin is a sociologist and writer who teaches at Massachusetts College of Liberal Arts and has published widely.

Catching Up With the Unabomber. When Does the End Justify the Means?


By Brian Whitney

Source: Disinfo.com

The Unabomber, Ted Kaczynski, was no fan of technology. To expose the world to his anti-technology philosophy, from 1978 to 1995 Kaczynski sent 16 bombs to universities and airlines, killing three people and injuring 23, before he was eventually caught and sent to prison, where he remains today. At one time, he was possibly the most famous criminal in the world.

He said of technology’s role:

The system does not and cannot exist to satisfy human needs. Instead, it is human behavior that has to be modified to fit the needs of the system. It is the fault of technology, because the system is guided not by ideology but by technical necessity.

In his essay Industrial Society and Its Future, Kaczynski argued that while his bombings were “a bit” extreme, they were quite necessary to attract attention to the loss of human freedom caused by modern technology. His book Technological Slavery: The Collected Writings of Theodore J. Kaczynski, a.k.a. “The Unabomber” breaks down his philosophy for those of us who know him only through corporate news stations.

Was the Unabomber crazy, or just so sane he was blowing our minds?

I talked to David Skrbina, a confidant of Kaczynski and a philosophy professor at the University of Michigan. Skrbina wrote the introduction to Technological Slavery.

Can you tell me a bit about how you and Kaczynski began to communicate? Are you still in touch with him today?

Back in 2003, I began work on a new course at the University of Michigan: Philosophy of Technology. Surprisingly, such a course had never been offered before, at any of our campuses. I wanted to remedy that deficiency.

I then began to pull together recent and relevant material for the course, focusing on critical approaches to technology. These, to me, were more insightful and more interesting, and were notably under-analyzed among current philosophers of technology. Most of them are either neutral toward modern technology, or positively embrace it, or accept its presence resignedly. As I found out, very few philosophers of the past four decades adopted anything like a critical stance. This, for me, was highly revealing.

Anyway, I was well aware of Kaczynski’s manifesto, “Industrial society and its future,” which was published in late 1995 at the height of the Unabomber mania. I was very impressed with its analysis, even though most of the ideas were not new to me (many were reiterations of arguments by Jacques Ellul, for example—see his 1964 book The Technological Society). But the manifesto was clear and concise, and made a compelling argument.

After Kaczynski was arrested in 1996, and after a year-long trial process, he was stashed away in a super-max prison in Colorado. The media then decided that, in essence, the story was over. Case closed. No need to cover Kaczynski or his troubling ideas ever again.

By 2003, I suspected he was still actively researching and writing, but I had heard nothing of substance about him in years. So I decided to write to him personally, hoping to get some follow-up material that might be useful in my new course. Fortunately, he replied. That began a long string of letters, all on the problem of technology. To date, I’ve received something over 100 letters from him.

Most of the letters occurred in the few years prior to, and just after, the publication of Technological Slavery. Several of his more important and detailed replies to me were included in that book—about 100 pages worth.

We’ve had less occasion to communicate in the past couple years. My most recent letter from him was in late 2014.

You have said that his ideas “threaten to undermine the power structure of our technological order. And since the system’s defenders are unable to defeat the ideas, they choose to attack the man who wrote them.” Can you expand on that?

The present military and economic power of the US government, and governments everywhere, rests on advanced technology. Governments, by their very nature, function to manipulate and coerce people—both their own citizens, and any other non-citizens whom they declare to be of interest. Governments have a monopoly on force, and this force is manifest through technological structures and systems.

Therefore, all governments—and in fact anyone who would seek to exert power in the world—must embrace modern technology. American government, at all levels, is deeply pro-tech. So too are our corporations, universities, and other organized institutions. Technology is literally their life-blood. They couldn’t oppose it in any substantial way without committing virtual suicide.

So when a Ted Kaczynski comes along and reminds everyone of the inherent and potentially catastrophic problems involved with modern technology, “the system” doesn’t want you to hear it. It will do everything possible to distort or censor such discussion. As you may recall, during the final years of the Unabomber episode, there was very little—astonishingly little—discussion of the actual ideas of the manifesto. Now and then, little passages would be quoted in the newspapers, but that was it; no follow-up, no discussion, no analysis.

Basically, the system’s defenders had no counterarguments. The data, empirical observation, and common sense all were on the side of Kaczynski. There was no rational case to be made against him.

The only option for the defenders was an ad hominem attack: to portray Kaczynski as a sick murderer, a crazed loner, and so on. That was the only way to ‘discredit’ his ideas. Of course, as we know, the ad hominem tactic is a logical fallacy. Kaczynski’s personal situation, his mental state, or even his extreme actions, have precisely zero bearing on the strength of his arguments.

The system’s biggest fear was—and still is—that people will believe that he was right. People might begin, in ways small or large, to withdraw from, or to undermine, the technological basis of society. This cuts to the heart of the system. It poses a fundamental threat, to which the system has few options, apart from on-going propaganda efforts, or brute force.

What do you think of the fact that when our government, or any figure in authority such as a police officer, kills in the name of the established belief system, it is thought of as just. But when a guy like Kaczynski kills in the name of his belief system, he is thought of as a deranged psychopath?

As I mentioned, governmental authorities have a monopoly on force. Whenever they use it, it is, almost by definition, ‘right.’ Granted, police can be convicted of ‘excessive force.’ But such cases, as we know, are very rare. And militaries can never be so convicted.

At best, if the public is truly appalled by some lethal action of our police or military, they may vote in a more ‘pacifist’ administration. But even that rarely works. People were disgusted by the war-monger George W. Bush, and so they voted in the “anti-war” Obama. Ironically, he continued on with much the same killing. And through foreign aid and UN votes, Obama continues to support and defend murderous regimes around the world. So much for pacifism.

Let’s keep in mind: Kaczynski killed three people. This was tragic and regrettable, but still, it was just three people. American police kill that many citizens every other day, on average. The same with Obama’s drone operators. Technology kills many times that number, every day—even every hour. Let’s keep things in perspective.

Kaczynski killed in order to gain the notoriety necessary to get the manifesto into the public eye. And it worked. When it was published, the Washington Post sold something like 1.2 million copies that day—still a record. He devised a plan, executed it, and thereby caused millions of people to contemplate the problem of technology in a way they never had before.

Does the end justify the means? It’s too early to tell. If Kaczynski’s actions ultimately have some effect on averting technological disaster, there will be no doubt: his actions were justified. They may yet save millions of lives, not to mention much of the natural world. Time will tell.

You recently wrote a book, The Metaphysics of Technology. Can you tell us a little about that?

Sure. In thinking about the problem of technology, it struck me that there was very little philosophical analysis about what, exactly, technology is. We’ve had many action plans, ranging from tepid and mild (think Sherry Turkle), to Bill Joy’s thesis of “relinquishment” of key technologies, to Kaczynski’s total revolution. But if we don’t really understand what we’re dealing with, our actions are likely to be misguided and ineffectual. In short, we need a true metaphysics of technology.

On my view, technology advances with a tremendous, autonomous power. Humans are the implementers of this power, but we can’t really guide it and we certainly can’t stop it. In effect, it functions as a law of nature. It advances with an evolutionary force, and that’s why we are heading toward disaster.

I see technology much as the ancient Greeks did—as a combination of two potent entities, Technê and Logos (hence ‘techno-logy’). For them, these were quasi-divine forces. Logos was the guiding intelligence behind all order in the universe. Technê was the process by which all things—manmade and otherwise—came into being. These were not mere mythology; they were rational conclusions regarding the operation of the cosmos.

Like the Greeks, I argue that technê is a universal process. All order in the universe is a form of technê. Hence my coining of the term ‘Pantechnikon’—the universe as an orderly construction, manifesting a kind of intelligence, or Logos. Our modern, human technology is on a continuum with all order in the universe. (Harvard astrophysicist Eric Chaisson has argued for precisely the same point, incidentally; see his 2006 book Epic of Evolution.)

The net effect of all this is not good news for us. Technology is like a wave moving through the Earth, and the universe. For a long while, we were at the peak of that wave. Now we’re on the downside. Technology is rapidly heading toward true autonomy. Our opportunity to slow or redirect it is rapidly vanishing. If technology achieves true autonomy—we can take Kurzweil’s singularity date of 2045 as a rough guide—then it’s game over for us. We will likely either become more or less enslaved, or else wiped out. And then technology will continue on its merry way without us.

This is not mere speculation on my part, incidentally. Within the past year, Stephen Hawking, Elon Musk, and Bill Gates have all come out with related concerns. They don’t understand the metaphysics behind it, but they’re seeing the same trend.

How has your experience communicating with Kaczynski changed you as a person and as a philosopher?

As a philosopher, not that much. Kaczynski generally avoids philosophy and metaphysics, preferring practical issues. In a sense, we are operating on different planes, even as we are working on the same problem.

As a person, I have a greater understanding of the basis for the ‘extreme’ actions that he took. It’s not often in life that you get a chance to communicate with someone with such a total commitment to their cause. It’s impressive.

Also, the media treatment of his whole case has been enlightening. When his book, Technological Slavery, came out in 2010, I expected that there would be at least some media coverage. But there was none. The most famous “American terrorist” publishes a complete book from a super-max prison—and it’s not news? Seriously? Compare this topic to the garbage shown on our national evening news programs, and it’s a joke. NPR, 60 Minutes, Wired magazine, etc.—all decided it wasn’t newsworthy. Very telling.

One last thing: Expect to hear from Kaczynski again soon. His second book is nearing completion. The provisional title is “Anti-Tech Revolution: Why and How.” But don’t look for it on your evening news.

Things are not as simple as you think.


Buy Technological Slavery, by Ted Kaczynski, and The Metaphysics of Technology by David Skrbina. Kaczynski does not profit from his book.

Brian Whitney’s latest book is Raping the Gods.

Related articles: Ted and the CIA Part 1 & 2 by David Kaczynski

http://blog.timesunion.com/kaczynski/ted-and-the-cia-part-1/271/

http://blog.timesunion.com/kaczynski/ted-and-the-cia-part-2/285/

The Internet Doesn’t Exist


By Jacob Silverman

Source: The Baffler

The Internet has been very busy. In just the last week, Caitlyn Jenner broke the Internet, but she also united it. The FCC made war on the Internet. The Internet shamed a couple. The Internet had a dark side, Nikki Finke was barred from the Internet, the Supreme Court made the Internet less safe for women, the Internet named a famous fetus, the Internet did stuff with a superhero movie, and the Internet changed. A girl also won the Internet, Jack White had a difficult history with the Internet, and the Internet “shafted” a Canadian journalist.

“The Internet” is the universal straw man, a hero or villain for every occasion. The Internet, the Internet, the Internet—this decentralized communications network has long been granted a proper noun and practically a degree of sentience. Yet few people talk about “the Telephone” as if it were some person or place, though perhaps they once did. This eagerness to grant the Internet some degree of autonomy—to make it into an actor, an entity—stems in part from its apparent abstraction. Where does all this information come from? As Ray Bradbury famously said, “To hell with you and to hell with the Internet. It’s distracting. It’s meaningless; it’s not real. It’s in the air somewhere.”

Bradbury wasn’t just slipping into kneejerk techno-fear. He was also guilty of the same fallacy that crops up again and again in digital journalism: the assumption that the Internet is some monolithic mass, a discrete population or interest group. “It’s distracting,” Bradbury said, without specifying what “it” was.

But in another, more important way, Bradbury was absolutely right: the Internet doesn’t exist.

A couple of years ago, Rachel Law, then a grad student at Parsons, had this to say: “The ‘Internet’ does not exist. Instead, it is many overlapping filter bubbles which selectively curate us into data objects to be consumed and purchased by advertisers.” As she also said, a bit less academically, “Browsing is now determined by your consumer profile and what you see, hear and the feeds you receive are tailored from your friends’ lists, emails, online purchases, etc.”

What we call the Internet—and what web writers so lazily draw on for their work—is less a hive mind or a throng or a gathering place and more a personalized set of online maneuvers guided by algorithmic recommendations. When we look at our browser windows, we see our own particular interests, social networks, and purchasing histories scrambled up to stare back at us. But because we haven’t found a shared discourse to talk about this complex arrangement of competing influences and relationships, we reach for a term to contain it all. Enter “the Internet.”

The Internet is a linguistic trope but also an ideology and even a business plan. If your job is to create content out of (mostly) nothing, then you can always turn to something/someone that “the Internet” is mad or excited about. And you don’t have to worry about alienating readers because “the Internet” is so general, so vast and all-encompassing, that it always has room. This form of writing is widely adaptable. Now it’s common to see stories where “Facebook” or “Twitter” stands in for the Internet, offering approval or judgment on the latest viral schlock. Choose your (anec)data carefully, and Twitter can tell any story you want.

We fall back on “the Internet” because it gives us a rhetorical life raft to hang onto amidst an overwhelming tide of information or a piece of sardonic shorthand to utter with a wink and a grimace, much like “never read the comments.” It also reflects a strange irony about today’s culture: despite being highly distributed, and despite offering an outlet for every subculture and niche interest and political quirk, what we think of the Internet often does feel rather uniform and monolithic.

This impression is partly based in fact; the tech and media industries are currently undergoing a kind of recentralization, exemplified by the rise of massive platforms like Facebook and recent mega-deals, such as Verizon buying AOL or Charter Communications (who?) snapping up Time Warner Cable. Attention is increasingly being manipulated and auctioned off by a handful of big conglomerates. The relegation of Twitter to also-ran in the social media sweepstakes—the loser to Facebook in the rush to industry monopoly—also reflects this centralization. That a company with hundreds of millions of users can seem like a failure only shows how bad the market is at apportioning value. (But there I go falling into abstractions again—as if there is anything called “the market.”)

“The Internet” is easy, a convenient reference point and an essential concept for web journalists tasked with surfacing monetizable content from this great informational morass. Digital culture, or writing about “what people are talking about on the Internet,” is considered its own beat now. But in the same way that someone born in the 1980s might not think of himself as a millennial—an arbitrary distinction crafted by demographers and marketers—a user of an online service is not necessarily from, or part of, the Internet. Even some of the subcultures often held up as part of the Internet are mostly notional. Is “Black Twitter” a specific, homogenous entity, as it’s so often described in news coverage? Or is it more something that people do, a set of social relations acted out by varying groups of mostly black Twitter users?

The more we write about what takes place online as if it occurred in some other world, the more we fail to relate this communication system, and everything that happens through it, to the society around us. To understand the Internet, we have to destroy it as an idea.

Saturday Matinee: The Fuck-It Point


Synopsis by Savage Revival:

A film about civilization, why we should bring it down and why most civilized people don’t.

[THE FUCK-IT POINT]
‘When you have had enough. When you decide to take matters into your own hands and don’t care what’s going to happen to you. When you know that from now on you will resist with whatever tactic you think is most effective.’

Towards a Critical Public Pedagogy of Predatory Anthropocene


By Michael B. McDonald

Source: The Hampton Institute

In 2015, a group of scientists published “The trajectory of the Anthropocene: The Great Acceleration.” They showed that rising consumption and increasing rates of impact on Earth Systems began after the Second World War. It was the expansion of economic activity, charged by increasing resource use, that created new technologies that expanded rates of consumption. This celebrated new socio-economic phase, called the Great Acceleration, was supposed to lead to full employment and a bright future for all. It was also the beginning of a new phase of world capitalism accelerated by increasing urbanization. By 2008 humanity had officially entered a new urban phase in which 50% of the earth’s population lives in urban spaces. More cities will be built in the next thirty years than in all previous human history. Earth System scientists have shown that all of these changes are having unprecedented impacts on the Earth. Human life is changing the Earth; they call this the Anthropocene.

But the Great Acceleration led neither to full employment nor to a bright future. In fact, it has led to massive inequality, with a very small percentage of people controlling a staggering amount of wealth. In 2010, OECD countries held 18% of the earth’s population but accounted for 74% of GDP. And only 0.1% controlled this vast wealth, through a system that I call predatory anthropocene.

The system of predatory anthropocene can be found in changes to the global economy and a fundamental shift in the way the economy works through its transformation of subjective, social and environmental ecologies, what Felix Guattari called the Three Ecologies. One aspect of this change has been called semiocapitalism, the blending of imagination, ideas, language and capital. Semiocapitalism works by capturing evolutionary life. Belonging, for instance, is now produced by the consumption of psycho-social products that gain economic value in consumption and are financed by increasing debt. The GDP of the United States is now 70% consumption.

Making community through mass consumption is eroding the anthropological basis upon which human life is built. We need a language for this. Perhaps we need to recognize that the communicative and biological systems of the human species have habitats. The biosphere sustains biological life, while the ethnosphere sustains communicational life. The biosphere is quite well known, the ethnosphere less so. Wade Davis suggests that the ethnosphere is a global quilt of local cultures, a band of cultural life functioning in tandem with the biosphere for the creation, organization, and expression of human communicational life.[1] The ethnosphere is a collection of languages, ideas, and dreams. It is the anthropological rituals that have accompanied human evolution and organized social reproduction; it is the institution of language [2] in all its complexity, but it is also beyond language.[3] When people talk about humanity in general, they mean the biosphere and ethnosphere, the cultures of the world in their physical, expressive, and subjective dimensions. But now ethnosphere complexity is being reduced by global commodities: unique cultures consumed by Hollywood hegemony, human imagination consumed by consumer products, dreams replaced by corporate-produced and globalized desires. A single system is producing hegemony in ways no single system was ever before capable of. It is necessary for us to see that our species is under threat from a monster system we have created, a monstrous, cancerous, predatory system poisoning the Earth. Henry Giroux has argued that:

What makes American society distinct in the present historical moment are a culture and social order that have not only lost their moral bearings but produce levels of symbolic and real violence whose visibility and existence set a new standard for cruelty, humiliation, and mechanizations of a mad war machine, all of which serves the interest of the political and corporate walking dead-the new avatars of death and cruelty-that plunder the social order and wreak ecological devastation. We now live in a world overrun with flesh-eating zombies, parasites who have a ravenous appetite for global destruction and civic catastrophe. (2014, xi-xii)[4]

Because I follow Guattari’s cybernetic view, I am less certain than Giroux appears to be that it is possible to tell zombies from non-zombies in a period where (a) agribusiness replaces agriculture and transforms all aspects of domestic life, (b) stretches of suburbs wipe out, without social discussion, the farmland that has laid the foundation of human flourishing, (c) debt mounts without slowing and without discourse in the public sphere, and (d) waves of fellow humans are dislocated every day by military, economic, and environmental calamities. None of this is news; we watch all of it studiously, staring at our displays, unmoved by the misery and pain we see on the faces and hear in the cries of fellow humans. Too many of us escape our responsibility to confront this pain by fleeing to walled-in communities whose walls are maintained not by bricks but by the capacity to carry mortgage debt (which machinically contributes to predatory anthropocene), in the hope of living in relative safety while the poor (who cannot access debt) are left in decaying city centers. But as foreclosures swept across America after the housing bubble burst, suburban safety was shown to be precarious. It is important to notice that we know all of this and collectively do very little to change it. We sign petitions on Facebook, but we still shop at malls built on farmland, and we clearly have little access to empathy. I am not saying this to be critical of you. I am truly stuck. After many years of being inspired by Adbusters, semio-politics, and culture jamming, I am not sure what the next step is. I feel free space disappearing. I am looking for options.

This difficulty in expressing empathy tells us something about hegemony under semiocapitalism. We now know that empathy is not something we develop but something we shut down. Vittorio Gallese, in “The Manifold Nature of Interpersonal Relations: The Quest for a Common Mechanism,” has shown that for us to “know that another human being is suffering or rejoicing, is looking for food or shelter, is about to attack or kiss us, we do not need verbal language” (Virno 2008: 175); we only need the activation of what Gallese called mirror neurons, a “class of premotor neurons [that] was discovered in the macaque monkey brain that discharged not only when the monkey executes goal-related hand actions like grasping objects, but also when observing other individuals (monkeys or humans) executing similar actions” (Gallese: 522). Experiments successfully showed that mirror neurons exist in the human brain as well, “positioned in the ventral part of the inferior frontal lobe, consisting of two areas, 44 and 45, both of which belong to the Broca region” (Virno: 177). Mirror neurons allow us to experience what we see. When we see someone doing something we have never done, our brain reacts as if we are doing it, what Gallese calls “embodied simulation.” This means that empathy is not something we need to develop; it is functioning in our brains whether we like it or not. But as Paolo Virno points out, humans are clearly adept at seeing other humans as not-human in order to override “embodied simulation.” We are constantly unmoved watching violent death in both fiction and non-fiction, and constantly enacting laws to restrict sexuality and eroticism in the social sphere. In this context there is little doubt that a public pedagogy of human negation is taking place, one that values violence and negates the erotic energy that produces new human life!
What this means is that “every naturalist thinker must acknowledge one given fact: the human animal is capable of not recognizing another human animal as being one of its own kind.” How does this public pedagogy of negation occur? Virno argues that verbal language “distinguishes itself from other communicative codes, as well as from cognitive prelinguistic performance, because it is able to negate any type of semantic content” (176). Through language we are able to negate others as not-human, shutting down the empathy produced by mirror neurons. But all is not lost: as Paulo Freire points out, pedagogies of dehumanization can be countered through critical pedagogy. That we might learn to negate dehumanization is our hope, to dissolve the oppressor-oppressed binary through the creation of new anti-predatorial significations. Virno suggests that while language introduced human-negation into communication, it also provides us the technology to negate negation. In this way critical pedagogy is the negation-of-negation, but only when it is used in this way. I make one amendment to Virno’s suggestion: it is necessary to go beyond the notion of linguistic negation to identify the ways that negation operates in the production of subjectivity, not just in linguistic negation but in the complex existential negations that occur within machinic semiotics. It is necessary to see the ways that the production of aesthetic systems produces collective subjectivities that produce We-ness as well as Other-ness.

Cultural technologies produce cultural workers who reproduce subjectivity-producing systems, which in turn produce subjects who reduce the ethnosphere and pollute the biosphere. Theodor Adorno was right to be concerned about the culture industry, just as Walter Benjamin saw with clear sight the dangers of the absorption of aesthetics into politics. They both saw that the industrialization of the satisfaction of desire, what we might call affective capitalism, has significant socio-political-economic impacts. There is a real danger when anthropological rituals developed for social life are replaced by capitalist products. The production and satisfaction of desire on the marketplace constantly undermines love of the local, replaces belonging with possession of the same mass-manufactured private property, and replaces environmentally embedded anthropological bonds with capital-resource-consuming exchange. The production of subjectivity is consumed by the factory, negating living, thus extending the contradictions of capitalism beyond the factory into all aspects of lived time. Giroux has called this a “new kind of authoritarianism that does not speak in the jingoistic discourse of empowerment, exceptionalism, or nationalism. Instead, it defines itself in the language of cruelty, suffering, and fear, and it does so with a sneer and an unbridled disdain for those considered disposable. Neoliberal society mimics the search for purity we have seen in other totalitarian societies” (2014, xvii). And it does so through the production of subjectivity, in the distribution of social subjection and the institution of machinic enslavement.
Together these form the contents of the public pedagogy of the culture industry: the negation of lived time that blocks access to mirror neurons and limits our ability to negate the negations of the neoliberal culture industry, thus limiting our ability to resist through the production of life-affirming social machines, liberatory and collectively produced social subjectivations, and life-affirming machinic enslavements.
I, Terminator

Some people, however, argue that the changes I call predatory anthropocene are a step forward for humanity. Luciano Floridi, for instance, imagines a new humanity as interconnected informational organisms (inforgs) active in “sharing with biological agents and engineered artifacts, a global environment ultimately made of information” (2011, 9). Collectively these inforgs produce an infosphere that either replaces or contributes to the ethnosphere. But Floridi does not account for political economy and therefore misses that his dream of the infosphere is enslaved by the algorithms of capitalism.

Franco ‘Bifo’ Berardi, however, shows that inforgs are not liberated informational workers but the ‘cognitariat’ (exploited proletarians of information), controlled by the automatisms of machinic enslavement: no longer disciplined but subjectively captured within the new means of control. Machinic enslavement is not discipline, but it is nonetheless controlling. No longer is there a need for an authority to hover over your shoulder to keep you in line. Machinic enslavement leads you into accepting the circuits of capture and control embedded cybernetically in the modes of production, exchange, and consumption. In the machinic enslavement of predatory anthropocene your only value lies in economic consumption, and control is located in your desire to fulfill your consumptive role. Desire (libidinal, economic, social) is no longer a site of liberation but a mechanism of discipline. This is power within predatory anthropocene.

Floridi’s infosphere and its cognitarians are colonizers machinically enslaving dreams and desires. Their colonization does not in fact produce the infosphere but instead a nightmarish mechanosphere. The mechanosphere converts the anthropological ethnosphere into capitalist products; cognitive capitalism “produces and domesticates the living on a scale never before seen” (Boutang 2011, 48). Felix Guattari and Franco Berardi “emphasize that entire circuits and overlapping and communicating assemblages integrate cognitive labor and the capitalistic exploitation of its content”[5] in a model they call semiocapitalism, which captures “the mind, language and creativity as its primary tools for the production of value” (Berardi 2009, 21). Our language is being transformed into capitalist value; our words, dreams, desires, and subjectivities are lost to the mechanosphere, “the authoritarian disimagination machine that affirms everyone as a consumer and reduces freedoms to unchecked self-interest while reproducing subjects who are willingly complicit with the plundering of the environment, resources, and public goods by the financial elite” (Giroux 2014, xxi).

Predatory anthropocene not only massively increases Earth-system impacts but also creates massive inequality. In early 2015 Oxfam released Working for the Few, a terrifying document showing that “Almost half of the world’s wealth is now owned by just one percent of the population,” that “The bottom half of the world’s population owns the same as the richest 85 people in the world,” and that this already extreme economic disparity is getting worse.

But we do not tell stories of predatory anthropocene to our children. Instead we tell myths of consumption: stories of gleeful elves happily working in non-unionized factories making toys for unproblematically good children, all supported by a covert group of elf spies that complicit parents move around the house for weeks. This is the childhood public pedagogy of predatory anthropocene, where domestic life is machinically enslaved to global capitalism, domesticated to surveillance-of-consumption, young lives converted into effective consumer-citizens. Perhaps it is time to start telling our children the very true story of predatory anthropocene, the killer system that we have created and released into our world but refuse to name, refuse to accept, and spend a great deal of money and words denying. There is no sense in denying predatory anthropocene; we need to talk of the monster that is killing our planet, we need to develop a critical pedagogy of predatory anthropocene, to learn to negate the negation.

Notes

[1] Davis, Wade. (2007). Light at the Edge of the World: A Journey Through the Realm of Vanishing Cultures. Vancouver, BC: Douglas & McIntyre Ltd.

[2] Virno, Paolo. (2008). Multitude: Between Innovation and Negation. Los Angeles, CA: Semiotext(e) Foreign Agents Series.

[3] Here I am thinking of post-Spinozist philosophers who argue for a semiotics beyond linguistic signification and even beyond logocentric significations, including Gilles Deleuze, Felix Guattari, Michel Foucault, Maurizio Lazzarato, and Rosi Braidotti. Most compelling is the Deleuze and Guattari suggestion, which Lazzarato has picked up in Signs and Machines and Governing by Debt, that there is a machinic order as well as a logocentric order. My argument here is that predatory anthropocene functions through a machinic order that is little impacted by traditional semiotics, by political sloganeering, or even by radical critique; that there must be a politics of doing, or of dropping out of predatory anthropocene, in the way that Franco ‘Bifo’ Berardi suggests in After the Future.

[4] Giroux, Henry. (2014). Zombie Politics and Culture in the Age of Casino Capitalism. New York: Peter Lang Press.

[5] Genosko, Gary. (2012). Remodeling Communication: From WWII to WWW. Toronto: University of Toronto Press, p. 150.
