Political issues in the Ebola crisis


By Patrick Martin

Source: WSWS.org

The report that a healthcare worker in Dallas, Texas, one of those who treated Ebola victim Thomas Eric Duncan before his death, has herself contracted the disease is a significant and troubling event. Dr. Thomas Frieden, director of the US Centers for Disease Control and Prevention, admitted in a television interview Sunday, “It’s deeply concerning that this infection occurred.”

While Frieden claimed that current protocols for treating Ebola patients were effective in preventing the spread of the disease, arguing that there must have been “a breach of protocol,” no actual explanation has been given for how the healthcare worker became infected. She was not one of the 48 primary contacts with Duncan who were being monitored for possible exposure, but worked in a more peripheral role. Her infection was only detected when she contracted a fever and reported it herself.

There is a growing number of such cases, including doctors and nurses in the affected regions of Liberia, Sierra Leone and Guinea who were well aware of the procedures, and an NBC News photographer whose infection has caused the quarantining of the entire reporting team, led by Dr. Nancy Snyderman, the network’s chief medical correspondent. These cases suggest that, despite the repeated assurances from health officials, there is much that is not known about how the disease is transmitted.

What is certain is that the Ebola outbreak in West Africa is a catastrophe for the people of that region. More than 8,000 people have been infected and more than 4,000 have died, with no signs that the epidemic has been curtailed. The heroic efforts of doctors, nurses and aid workers have been sabotaged by the collapse of the healthcare systems of these countries, among the poorest in the world. Only 20 percent of the affected population in West Africa has access to a treatment center.

It is almost impossible to overstate the dimensions of the disaster. Until this year, Ebola was a disease of remote rural areas that had killed only 1,500 people in 20 previous outbreaks over 40 years. Now the disease has reached urban centers like Monrovia, capital of Liberia, a city of one million, and individuals infected with the virus have travelled from the region only to fall ill in the United States, Spain and Brazil. There are well-founded fears that Ebola could become a global plague, particularly if it reaches more densely populated countries like Nigeria, or the impoverished billions of South and East Asia.

The impotent global response to the immense tragedy in West Africa is a serious warning. The Ebola crisis has proven to be a test of the ability of capitalism, as a world system, to deal with an acute and deadly threat. The profit system has failed. A society organized on the basis of production for private gain and divided into antagonistic nation-states, with a handful of imperialist powers dominating the rest, is incapable of the systematic, energetic and humane response that this crisis requires.

It is no accident that the Ebola outbreak takes place in countries that are former colonies of imperialist powers. Guinea was a French colony, Sierra Leone a British colony, and Liberia a de facto US colony since its founding by freed American slaves. Despite their nominal independence, each country remains dominated by giant corporations and banks based in the imperialist countries, which extract vast profits from the mineral wealth and other natural resources. Guinea is the world’s largest bauxite exporter, Sierra Leone depends on diamond exports, and Liberia has long been the fiefdom of Firestone Rubber (now Bridgestone).

These countries are unable to provide even rudimentary healthcare services to their populations, not because they lack resources, but because they are exploited and oppressed by a global economic system controlled by Wall Street and other financial and commodity markets. This economic system is so unequal that the 85 richest individuals on the planet control more wealth than the poorest three billion people, nearly half of humanity.

Economic development, particularly over the past 40 years, has created an interconnected and globalized world. Thousands of people travel every day between West Africa and other parts of the world. The revolution in transportation and communications means that what happens in West Africa today can affect Dallas, Boston, Madrid and Rio de Janeiro tomorrow. This makes the Ebola epidemic not a regional event, but a world event.

But the response to the Ebola crisis is carried out by national governments driven by competing national interests, and concerned, not with the danger of the virus to the world’s people, but with how it affects the interests of the ruling class in each nation. Thus there are calls in the United States and Europe for imposing an embargo on travelers from Liberia, Sierra Leone and Guinea, although health experts warn that such an action would cause the economic collapse of these countries, vastly worsening the epidemic and making its global spread more rather than less likely.

Equally reactionary is the Obama administration’s decision to send 4,000 US troops to Liberia, ostensibly to build health treatment facilities. Why are heavily armed soldiers chosen for such a mission? They are not construction workers or healthcare providers. If healthcare workers and journalists have become infected, despite taking every precaution, then certainly soldiers could themselves fall victim to the disease, and bring the virus home with them. The real agenda of Washington is to secure a basis for its Africa Command (AFRICOM), up to now excluded from the continent by local opposition, thus advancing the interests of American imperialism against its rivals, particularly China.

The potential dangers of a disease like Ebola spreading from rural Africa to the world have long been understood by epidemiologists and other scientists. It has been the subject of specialized studies and best-selling books. The issue has even penetrated popular culture through films from The Andromeda Strain to Outbreak and 28 Days Later. But the profit system has been incapable of generating a serious effort to forestall an entirely predictable crisis.

The detection of Ebola in the mid-1970s should have been the occasion for the launching of an intensive effort to study the virus, analyze how it is transmitted and develop antidotes and a vaccine. This did not take place, in large measure, as a report last month suggested, because the giant pharmaceutical companies that control medical research saw little profit in saving the lives of impoverished villagers in rural Africa (see “Profit motive big hurdle for Ebola drugs”).

What little research has been conducted on possible cures and vaccines was funded by the US Pentagon, for dubious reasons: at best, to protect US soldiers who might be deployed to the jungles of central Africa as an imperialist invasion force; at worst, to determine whether the virus could be weaponized for use against potential enemies.

What would a serious response to the Ebola crisis look like? It would entail a massive, internationally coordinated effort that calls on resources on the scale necessary both to save as many as possible of those under immediate threat and to prevent the development of an outbreak on a global scale.

It would mean the mobilization of doctors, nurses, public health workers and scientists from America, Europe, Russia, China and the rest of the world to fight back against a deadly threat to the entire human race. And it would mean taking control of this response out of the hands of the national military establishments, particularly the Pentagon, and the giant pharmaceutical firms, one of the most corrupt and rapacious detachments of big business.

Beyond Palliative Care


By arranjames

Source: Synthetic Zero

Not all that long ago the curators of this blog started talking about the possibility of the palliative care of the Earth. Recently dmf posted up a podcast dealing with the same topic. I haven’t listened to it yet so won’t be drawing on it in this post. I wanted to take a few minutes to experiment with the senses of the phrase “palliative care of the Earth”.

First of all, what is palliative care? Like all attempts at sense it is a contested battleground rife with bullet holes and no-man’s lands, with various armies massed and pressing on it. One such army is the global institutional Roman legion that is the World Health Organisation. The WHO loves definitions. One could almost assume it employed nothing but glossophiliacs who spent their days and nights writing endless variations on definitions and who, in their frenzied madness, ended up trying to murder the words they were seeking to play midwife to. The WHO definition is long. And vague. You can read it here. Operationalising a little, we can extract the fundamentals: palliative care seeks to make life as liveable as possible for the dying body and for the bodies who will mourn it.

The Earth as a system of ecosystems, an ecological metasystem, is considered as a body composed of bodies that play habitat and inhabitant, catalyst and anticatalyst, metabolism and metabolite, and so on, to one another [1]. Not all of these bodies are living, but as with any machinic assemblage this Earth emerges as a necessarily heterogenetic improvisation (the imposition of unpredictability) that depends on both organic and machinic kinds [2]. A cyborg of a different order than Robocop, the Earth is more akin to the Half-Faced Man, a machine that wants to be human. This isn’t to say the Earth wants to be human, or that it wants anything in any way we’d recognise as desire, although interspecies sexuality clearly indicates a queer promiscuity among nonhuman organisms, but that the Earth has assembled in such a way that the organic has come out of the inorganic. From a certain perspective: so what? It’s all just interlocking mechanism. Well, fine. But its dying is what.

But the Earth won’t die. Not yet. Far more likely, if we stop being so anthropophobic, we’re talking about ourselves. It is we who are dying. It is the palliative care of the human that we should really consider. We open with a discussion of the dying Earth because it is this dying that is killing us: a vicarious species-suicide? These are dark thoughts that imply a loathing so great in our species that we’d take out everything else just to slit our own throats once and for all. But we’re not that grand; we’re all too limited, all too human still. Like smokers in the 1950s, we didn’t know what we were doing, then we did and did nothing about it, then everyone said it was too late. We’re not quite sure of the periodicity. We don’t know if it is too late. What we do know is that we’ve had a mass terminal diagnosis and there is no consensus on the prognosis. What are we dying of then, if not some anthropathology [3]?

Does the species have a body? Or is the species also a hallucination? Hallucinations can die too: ask a “schizophrenic” on Clozapine. The WHO is an ensemble of equipment and technique in the same way Guattari once spoke of the unconscious. It is almost as if the WHO invented health (“a total state”) and must administer it. What is it that the WHO wants to say about palliative care? It has things to say about the reduction of suffering; the affirmation of life and death; it seeks neither the hastening nor the postponing of death; it looks to psychology and spirituality; produces support systems; is multidisciplinary; is life-enhancing; it’s never too early to start.

How does this map onto humanity? It is just a change of scale, in a sense, where “humanity” stands in for “person”. So it is the reduction of the suffering of the species and the enhancement of its existence. This follows nicely from Lacanian ideas that we live both by the reality-principle and jouissance, by both aversion and hedonics. We’re also not talking about killing ourselves off, so no reproductions of Zapffe’s conclusion to ‘The Last Messiah’: we aren’t about to go forth to be fruitless and let the Earth be silent after us (as if it would be). I think it’s safe to say fuck Messiahs, especially last ones.

We’re also looking to the psychology and spirituality of humanity? Doesn’t this translate quite well to looking at the cognitive biases and metacognitive illusions, and the affects and emotions in their normativity and pragmatics? Support systems like what? New technologies and alternative energy sources? Sure. But it can’t be limited to that: what if extinction is much closer than we think? Well, think about it for a second. The process has already begun. And I’m not just talking about Tim Morton’s plutonium, or irreversible glacial melting, or any other particular doomsday protocol. If we’ve been paying attention to the three ecologies then we should have spotted that multiple extinctions have been in process for a long time now. Systems of systems have been disintegrating within whatever it is, or was, that we called the human for decades. By 2050 or so even the strange hominid form will have been eradicated, recorded in images that no creature surviving us will care much about.

So palliative care is about easing our way into dying off. It is about quietly doing our best to assemble societies in which we can humanely coexist with wild being until our time’s really up. That was certainly my feeling two years ago when I wrote a post on extinction. Back in 2012 I declared that

Any post-nihilistic pragmatics will require that we operate consciously within catastrophic time and that we surrender the impossible task of removing precariousness from the human condition. These are the same project in fact, given that the former reveals to us the anthropocentrism of the latter…the benign revelation that precariousness is the condition of all things. If this garners the accusation of privileging the perspective of extinction and heat death then this is a necessary part of the pragmatic ethics of a self-management of extinction. As I have said before, the task now is to think the ethics of palliative care for the species. The dream of species-being is realised at last.

Today I wonder at the sadness of that post. At the time I’d thought of it as realistic, hard-headed, unsentimental. All that. But ultimately, I think it was a depressive position. If 2050 is the time limit then maybe it is too late, and maybe we should be looking at harm-reduction and palliation. But for me this could lead us to a politics of the worst in which we settle for the ‘least of all possible evils’, a mode of thought that Eyal Weizman has convincingly shown to be at work in some of the worst atrocities in modern history. In trying to create the conditions of the least possible harm at the scale of the species we might actually end up with a resigned sigh in the face of forces we might yet be able to do something about. As such, the only way to “self-manage our extinction” has to be truly palliative in that it doesn’t just avoid suffering but also seeks jouissance. In fact, I’d concretise the program into what David Roden has been talking about in terms of a speculative posthumanism, in which posthuman beings that emerge out of the human bear as much intuitive relation to us as we do to our ancestral forebears. The jouissance of this is less Lacanian and more about a sweeping mutative recombinatory innovation in the normativity of posthumanity.

In fact the pessimist and transhumanist programs belong together when we view harm reduction without the depressive targeting system, when we dare to accelerate palliative philosophy into a praxis of assisted dying. What is born from the uneven unity of these programs is what I’m (stupidly; deliriously; in a state of panic) calling transpessimism: the speculative conviction that humanity must become extinct by becoming something else.

[1] From wikipedia:

System of systems is a collection of task-oriented or dedicated systems that pool their resources and capabilities together to create a new, more complex system which offers more functionality and performance than simply the sum of the constituent systems. Currently, systems of systems is a critical research discipline for which frames of reference, thought processes, quantitative analysis, tools, and design methods are incomplete. The methodology for defining, abstracting, modeling, and analyzing system of systems problems is typically referred to as system of systems engineering.

[2] I’ve stolen the term “heterogenetic improvisation” from David Malvinni’s study of Roma music: “heterogenic improvisation…the divided interval where improvisation originates out of otherness while identifying with itself…grows out of a desire for a purely involved performance, a symbiosis of listener and sound, via an identification of the same with its unpredictable mutation” (pp. 47-48).

[3] The term “anthropathology”, a neologism combining “anthro”, pertaining to the human, and “pathology”, pertaining to disease, was coined in 2007 by Colin Feltham, a practising counsellor and counselling theorist turned pessimist philosopher. Feltham defines the condition of anthropathology at length in his book-length treatment of the “condition” (as valid as most psychopathological categories) but also presents a condensed definition as follows:

‘the marked, universal tendency of human beings individually and collectively towards suffering, deceptiveness, irrationality, destructiveness and dysfunction, including an extreme difficulty in perceiving and freeing ourselves from this state’ (What’s Wrong With Us? The Anthropathology Thesis, 2007, p. 256).

It’s Time for Some Anti-Science Fiction


Source: The Hipcrime Vocab

Why must positive depictions of the future always be dependent upon some sort of new technology?

Neal Stephenson is a very successful and well-known science fiction writer. He’s also very upset that the pace of technological innovation has seemingly slowed down and that we are unable to come up with truly transformative “big ideas” anymore. He believes this is the reason why we are so glum and pessimistic nowadays. Indeed, the science fiction genre, once identified with space exploration and utopias of post-scarcity and abundant leisure time, has come to be dominated by depictions of the future as a hellhole of extreme inequality, toxic environmental pollution, overcrowded cities, oppressive totalitarian governments, and overall political and social breakdown. Think of movies like The Hunger Games, Elysium, The Giver, and Snowpiercer.

This pessimism is destructive and corrosive, believes Stephenson. According to the BBC:

Acclaimed science-fiction writer Neal Stephenson saw this bleak trend in his own work, but didn’t give it much thought until he attended a conference on the future a couple years ago. At the time, Stephenson said that science fiction guides innovation because young readers later grow up to be scientists and engineers.

But fellow attendee Michael Crow, president of Arizona State University (ASU), “took a more sort of provocative stance, that science fiction actually needed to supply ideas that scientists and engineers could actually implement”, Stephenson says. “[He] basically told me that I needed to get off my duff and start writing science fiction in a more constructive and optimistic vein.”

“We want to create a more open, optimistic, ambitious and engaged conversation about the future,” project director Ed Finn says. According to his argument, negative visions of the future as perpetuated in pop culture are limiting people’s abilities to dream big or think outside the box. Science fiction, he says, should do more. “A good science fiction story can be very powerful,” Finn says. “It can inspire hundreds, thousands, millions of people to rally around something that they want to do.”

Basically, Stephenson wants to bring back the kind of science fiction that made us actually long for the future rather than dread it. He means to counter this techno-pessimism by inviting a number of well-known science fiction writers to contribute more positive, even utopian, visions of the future, in which we once again come up with “big ideas” that inspire the scientists and engineers in their white labcoats. He apparently believes that it is the duty of science fiction authors to act as, in the words of one commentator, “the first draft of the future.” Indeed, much of modern technology and space exploration was presaged by authors like H.G. Wells and Jules Verne. From the BBC article above, here are some of the positive future scenarios depicted in the book:

  •     Environmentalists fight to stop entrepreneurs from building the first extreme tourism destination hotel in Antarctica.
  •     People vie for citizenship on a near-zero-gravity moon of Mars, which has become a hub for innovation.
  •     Animal activists use drones to track elephant poachers.
  •     A crew crowd-funds a mission to the Moon to set up an autonomous 3D printing robot to create new building materials.
  •     A 20km tall tower spurs the US steel industry, sparks new methods of generating renewable energy and houses The First Bar in Space.

The whole idea behind Project Hieroglyph, as I understand it, is to depict more positive futures than the ones found in current science fiction and media. That seems like a good idea. But my question is: why must these positive futures always involve more intensive application of technology? Why are we unable to envision a better future in any way other than more technology, more machines, more inventions, more people, more economic growth? Haven’t we already been down that road?

Or to put it another way, why must science fiction writers assume that more technological innovation will produce a better society when our modern society is the result of previous technological innovations, and is seen by many people as a dystopia (with many non-scientifically-minded people actually longing for a collapse of some sort)? Perhaps, to paraphrase former president Reagan, in the context of our current crisis, technology is not the solution to the problem, technology is the problem.

***

It’s worth pointing out that many of the increasingly dystopian elements of our present circumstances have been brought about by the application of technology.

Economists have pinpointed technology as a key driver of inequality: the automation of the routine tasks that underpinned the industrial/service economy has hollowed out the middle class, leaving only high-end and low-end jobs, while the “superstar effect” lets a few well-paid superstars capture all the gains because technology allows them to be everywhere at once. Fast supercomputers have allowed the rich to game the stock market casino, where the average stock is now held for just fractions of a second, while global telecommunications has made it possible to reassign jobs to wherever the very cheapest workers can be found. America’s manufacturing jobs are now done by Chinese workers and its service jobs by Indian workers half a world away, even as the old industrial heartland looks suspiciously like what is depicted in The Hunger Games. Rather than a world of abundant leisure, stressed-out workers take their laptops to the beach, fearful of losing their jobs if they don’t, while millions have given up looking for work altogether. A permanently underemployed underclass distracts itself with Netflix, smartphones and computer games, and takes expensive drugs promoted by pharmaceutical companies to deal with its depression.

Global supply chains, supertankers, the “warehouse on wheels” and online shopping have hollowed out local main street economies and led to monopolies in every industry across the board. Small family farmers have been kicked off the land worldwide and replaced by gargantuan, fossil-fuel-powered agricultural factories owned by agribusinesses churning out bland processed food based around wheat, corn and soy, contributing to soaring obesity rates worldwide and runaway population growth.

Banks have merged into just a handful of entities that are “too big to fail” and send trillions around the world at the speed of light. Gains are privatized while losses and risks are socialized, and the public sphere is sold off to profiteers at fire-sale prices. A small financial aristocracy controls the system and hamstrings the world with debt. Just eighty people control as much wealth as half of the planet’s population, and in the world’s biggest economy just three people gain as much income as half the workforce. There are now more prisoners in America than farmers.

A now-global transnational elite of owner-oligarchs criss-crosses the world in Gulfstream jets and million-dollar yachts and hides its money in offshore accounts beyond the reach of increasingly impotent national governments, while smaller local governments can’t keep potholes filled, streets plowed and streetlights on for ordinary citizens. Many of the world’s great cities have become “elite citadels,” impossible for regular citizens to live in. This elite controls bond markets, funds political campaigns and owns a monopolized media that normalizes this state of affairs using sophisticated propaganda tools enhanced by cutting-edge psychological research enabled by MRI scanners. The media is controlled by a small handful of corporations and panders to the lowest common denominator while keeping people in a constant state of fear and panic. Advertising preys on our insecurities and desire for status to make us buy more, enabled by abundant credit. The Internet, once the hope for a more democratic future, has ended up as a shopping mall, an entertainment delivery system and a spying/tracking apparatus rather than a force for democracy and revolution.

Security cameras peer at us from every street corner and store counter, and shocking revelations about the power and reach of the national security state, as fantastic as anything dreamed up by dystopian science fiction writers, have become so commonplace that people hardly notice anymore. Anonymous people in gridded glass office towers read our every email, listen to our every phone call and track our every move using our cell phones. New “facial recognition” systems and corporate-promoted “smart” technology promise to track and permanently record literally every move you make.

Remote-control drones patrol the skies of global conflict zones and vaporize people half a world away without their pilots ever seeing their faces. High-tech fighter jets allow us to “cleanly” drop bombs without the messiness of a real war. Private mercenaries are a burgeoning industry and global arms sales continue to increase even in a stagnant global economy with arms companies often selling to both sides. By some accounts one in ten Americans is employed in some sort of “guard labor,” that is, keeping their fellow citizens in line. The number of failed states continues to increase in the Middle East and Africa and citizens in democracies are marching in the streets.

Not that there’s nothing for the national security state to fear, after all: technology has enabled individual terrorists and non-state actors to produce devastating weapons capable of destroying economies and killing thousands, as 9/11 demonstrated. A single “superempowered” individual can kill millions with a nuclear bomb the size of a suitcase, an engineered virus or some other bioterrorism weapon. The latest concern is “cyberwarfare,” which could destroy the technological infrastructure we are now utterly dependent upon and kill millions. “Non-state actors” can wreak as much havoc as armies thanks to modern technology, and there are a lot of disgruntled people out there.

And then there is the environmental devastation, of which climate change is the most overwhelming, but which includes everything from burned-down Amazonian rainforest, to polluted mangroves in Thailand, to collapsed fish stocks, dissolving coral reefs and oceans full of jellyfish. Half the world’s terrestrial biodiversity has been eliminated in the past fifty years, and we’ve lost so much polar ice that Earth’s gravity is measurably affected. In China, the world’s economic success story, the haze is so thick that people can’t see the tops of the skyscrapers they already have, and there are “cancer villages.” The skies may be a bit clearer in America thanks to deindustrialization, but things like drought in the Southwest and increasingly powerful hurricanes are reminders that no one is immune. Entire countries and major cities look to be submerged under rising oceans, and the first climate refugees are already on the move from places like Africa and Southeast Asia, leading to anti-immigrant backlash in developed countries.

This is not some future dystopia, by the way; this is where technology has led us right now. Today. Current headlines. Maybe dystopias are so popular because that is where technology has taken us in the early twenty-first century. I’m skeptical that Project Hieroglyph and its fostering of “big ideas” will do much to change that.

Thus my fundamental question is, given the above, why is it always assumed that the path to utopia goes through a widespread deployment of even more innovation and technology? Is it realistic to believe that colonies on Mars, drones, intelligent robots, skyscrapers and space elevators will solve any of this?

I’ve written before about the fact that the technology we already have in our possession today was expected to deliver a utopia by numerous writers and thinkers of the past. “The coming of the wireless era will make war impossible, because it will make war ridiculous,” declared Marconi in 1912. H.G. Wells, a committed socialist who lived during perhaps the greatest period of invention before or since (railroads, the harnessing of electricity, radio communication, internal combustion engines, powered flight, antibiotics), very frequently depicted utopian societies brought about through the application of greater technology. Science fiction authors still seem to conceive of utopias as being exclusively brought about by “technological progress.” But given hindsight, is that realistic anymore?

Maybe it’s time for some anti-science fiction.

***

The classic example of this is William Morris’ utopian novel News From Nowhere.

Morris was a key figure in the Arts and Crafts movement, which was a reaction to factory-based mass production and the subsequent deskilling of the workforce. People no longer collectively made the world of goods and buildings around them; rather, these were now made by a small number of people using deskilled, alienated labor in giant factories, with the profits accruing to a tiny handful of capitalist owners. Morris wanted another way.

In Morris’ future London there is very little in the way of centralized institutions. People work when they want to and do what they want to. Money is not used. Life is lived at a leisurely pace. Though written during the transformative changes of the Industrial Revolution, Morris’ London looks less like a World’s Fair and more like a bucolic, pastoral London that had long since vanished under the smoke of the factories. Technology plays a very small role, yet people are much happier.

Morris’ work was written partially in response to a book entitled Looking Backward by Edward Bellamy, which was extraordinarily popular in the late nineteenth century, but almost forgotten today. Bellamy’s year 2000 utopia had the means of production brought under centralized control, with people serving time in an “industrial army” for twenty years and then retiring to a life of leisure and  material abundance brought about by production for use rather than capitalist profit.

Morris still felt that this subordinated workers to machines rather than depicting a society organized for the maximization of human well-being, work included. Here is Morris in a speech:

“Before I leave this matter of the surroundings of life, I wish to meet a possible objection. I have spoken of machinery being used freely for releasing people from the more mechanical and repulsive part of necessary labour; it is the allowing of machines to be our masters and not our servants that so injures the beauty of life nowadays. And, again, that leads me to my last claim, which is that the material surroundings of my life should be pleasant, generous, and beautiful; that I know is a large claim, but this I will say about it, that if it cannot be satisfied, if every civilised community cannot provide such surroundings for all its members, I do not want the world to go on.”

Morris’ book shows that utopias need not be high-tech. It also shows that real utopias are brought about by the underlying philosophy of a society and its corresponding social relations. It seems to me like Stephenson’s utopias are all predicated on the continuation of the philosophy and social relations of our current society – more growth, more technology, faster innovation, more debt, corporate control, trickle-down economics, private property, absentee ownership, anarchic markets, autonomous utility-maximizing consumers, etc. It is yoked to our ideas of “progress” as simply an application of more and faster technology.

By contrast, Morris’ utopia has the technological level we would associate with a “dystopian” post-collapse society, yet everyone seems a whole lot happier.

***

Now I don’t mean to suggest that any utopia should necessarily be a place where we have reverted to some sort of pre-industrial level of technology. We don’t need to depict utopias as living like the Amish (although that would be an interesting avenue of exploration). I merely wish to point out that a future utopia need not be exclusively the domain of science fiction authors, and need not be predicated on some sort of new wonder technology or space exploration. For example, in an article entitled Is It Possible to Imagine Utopia Anymore? the author writes:

Recently, though, we may have finally hit Peak Dystopia…All of which suggests there might be an opening for a return to Utopian novels — if such a thing as “Utopian novels” actually existed anymore…In college, as part of a history class, I read Edward Bellamy’s Looking Backward, a Utopian science-fiction novel published in 1888. The book — an enormous success in its time, nearly as big as Uncle Tom’s Cabin — is interesting now less as literature than as a historical document, and it’s certainly telling that, in the midst of the industrial revolution, a novel promising a future socialist landscape of increased equality and reduced labor so gripped the popular imagination. We might compare Bellamy’s book to current visions of Utopia if I could recall even a single Utopian novel or film from the past five years. Or ten years. Or 20. Wikipedia lists dozens of contemporary dystopian films and novels, yet the most recent entry in its rather sparse “List of Utopian Novels” is Island by Aldous Huxley, published in 1962*. The closest thing to a recent Utopian film I can think of is Spike Jonze’s Her, though that vision of the future — one in which human attachment to sentient computers might become something close to meaningful — hardly seems like a fate we should collectively strive for, but rather one we might all be resigned to placidly accept.

Many serious contemporary authors have tackled dystopia: David Foster Wallace’s Infinite Jest, Gary Shteyngart’s Super Sad True Love Story, Cormac McCarthy’s The Road, and so on. But the closest thing we have to a contemporary Utopian novel is what we could call the retropia: books like Michael Chabon’s Telegraph Avenue (about a funky throwback Oakland record store) or Jonathan Lethem’s Fortress of Solitude (about 1970s Brooklyn) that fondly recall a bygone era, by way of illustrating what we’ve lost since —  “the lost glories of a vanished world,” as Chabon puts it. Lethem’s more recent Dissident Gardens is also concerned with utopia, but mostly in so far as it gently needles the revolutionaries of yesteryear.

Indeed, the closest things we have to utopias on TV today are shows like Mad Men, which takes place during the era when Star Trek was on TV, rather than a utopia inspired by Star Trek itself. For many Americans, their version of utopia is not in the future but in the past: the 1950s era of widespread prosperity, full employment, single-earner households, more leisure, guaranteed pensions, social mobility, inexpensive housing, wide-open roads and spaces, and increasing living standards. As this article points out:

When I first heard about the project, my cynical heart responded skeptically. After all, much of the Golden Age science fiction Stephenson fondly remembers was written in an era when, for all its substantial problems, the U.S. enjoyed a greater degree of democratic consensus. Today, Congress can barely pass a budget, let alone agree on collective investments.

If someone asked me to depict a more positive future than the one we have, deploying more technology is just about the last thing I would do to bring it about. In fact, the future I would depict would almost certainly include less technology, or rather technology playing a smaller role in our lives. I would focus more on social relations that would make us happy to be alive, where we eat good food, spend time doing what we want instead of what we’re forced to, and don’t have to be medicated just to make it through another day in our high-pressure classrooms and cubicles. I might even depict a future with no television, inspired by Jerry Mander’s 1978 treatise Four Arguments for the Elimination of Television (hey, remember this is fiction after all!).

Rather, it would depict different political, economic and social relations first, with new technology playing only a supporting, not a starring, role. Organizing society around the needs of productive enterprise, growth and profits (and nothing else) is, I believe, the reason we feel so depressed about the future, and why dystopias resonate with a demoralized general public that rolls its collective eyes at the exhortations of science fiction writers with an agenda**. The problem with science fiction is its single-minded conflation of technology with progress.

Personally my utopia would be something more like life on the Greek island of Ikaria*** according to this article from The New York Times (which reads an awful lot like News from Nowhere):

Seeking to learn more about the island’s reputation for long-lived residents, I called on Dr. Ilias Leriadis, one of Ikaria’s few physicians, in 2009. On an outdoor patio at his weekend house, he set a table with Kalamata olives, hummus, heavy Ikarian bread and wine. “People stay up late here,” Leriadis said. “We wake up late and always take naps. I don’t even open my office until 11 a.m. because no one comes before then.” He took a sip of his wine. “Have you noticed that no one wears a watch here? No clock is working correctly. When you invite someone to lunch, they might come at 10 a.m. or 6 p.m. We simply don’t care about the clock here.”

Pointing across the Aegean toward the neighboring island of Samos, he said: “Just 15 kilometers over there is a completely different world. There they are much more developed. There are high-rises and resorts and homes worth a million euros. In Samos, they care about money. Here, we don’t. For the many religious and cultural holidays, people pool their money and buy food and wine. If there is money left over, they give it to the poor. It’s not a ‘me’ place. It’s an ‘us’ place.”

Ikaria’s unusual past may explain its communal inclinations. The strong winds that buffet the island — mentioned in the “Iliad” — and the lack of natural harbors kept it outside the main shipping lanes for most of its history. This forced Ikaria to be self-sufficient. Then in the late 1940s, after the Greek Civil War, the government exiled thousands of Communists and radicals to the island. Nearly 40 percent of adults, many of them disillusioned with the high unemployment rate and the dwindling trickle of resources from Athens, still vote for the local Communist Party. About 75 percent of the population on Ikaria is under 65. The youngest adults, many of whom come home after college, often live in their parents’ home. They typically have to cobble together a living through small jobs and family support.

Leriadis also talked about local “mountain tea,” made from dried herbs endemic to the island, which is enjoyed as an end-of-the-day cocktail. He mentioned wild marjoram, sage (flaskomilia), a type of mint tea (fliskouni), rosemary and a drink made from boiling dandelion leaves and adding a little lemon. “People here think they’re drinking a comforting beverage, but they all double as medicine,” Leriadis said. Honey, too, is treated as a panacea. “They have types of honey here you won’t see anyplace else in the world,” he said. “They use it for everything from treating wounds to curing hangovers, or for treating influenza. Old people here will start their day with a spoonful of honey. They take it like medicine.”

Over the span of the next three days, I met some of Leriadis’s patients. In the area known as Raches, I met 20 people over 90 and one who claimed to be 104. I spoke to a 95-year-old man who still played the violin and a 98-year-old woman who ran a small hotel and played poker for money on the weekend.

On a trip the year before, I visited a slate-roofed house built into the slope at the top of a hill. I had come here after hearing of a couple who had been married for more than 75 years. Thanasis and Eirini Karimalis both came to the door, clapped their hands at the thrill of having a visitor and waved me in. They each stood maybe five feet tall. He wore a shapeless cotton shirt and a battered baseball cap, and she wore a housedress with her hair in a bun. Inside, there was a table, a medieval-looking fireplace heating a blackened pot, a nook of a closet that held one woolen suit coat, and fading black-and-white photographs of forebears on a soot-stained wall. The place was warm and cozy. “Sit down,” Eirini commanded. She hadn’t even asked my name or business but was already setting out teacups and a plate of cookies. Meanwhile, Thanasis scooted back and forth across the house with nervous energy, tidying up.

The couple were born in a nearby village, they told me. They married in their early 20s and raised five children on Thanasis’s pay as a lumberjack. Like that of almost all of Ikaria’s traditional folk, their daily routine unfolded much the way Leriadis had described it: Wake naturally, work in the garden, have a late lunch, take a nap. At sunset, they either visited neighbors or neighbors visited them. Their diet was also typical: a breakfast of goat’s milk, wine, sage tea or coffee, honey and bread. Lunch was almost always beans (lentils, garbanzos), potatoes, greens (fennel, dandelion or a spinachlike green called horta) and whatever seasonal vegetables their garden produced; dinner was bread and goat’s milk. At Christmas and Easter, they would slaughter the family pig and enjoy small portions of larded pork for the next several months.

During a tour of their property, Thanasis and Eirini introduced their pigs to me by name. Just after sunset, after we returned to their home to have some tea, another old couple walked in, carrying a glass amphora of homemade wine. The four nonagenarians cheek-kissed one another heartily and settled in around the table. They gossiped, drank wine and occasionally erupted into laughter.

No robot babysitters or mile-high skyscrapers required.

* No mention of Ernest Callenbach’s Ecotopia published in 1975?

** ASU is steeped in Department of Defense funding, and DARPA (the Defense Advanced Research Projects Agency) was present at a conference about the book entitled “Can We Imagine Our Way to a Better Future?” held in Washington D.C. I’m guessing the event did not take place in the more run-down parts of the city. Cui bono?

***Ironically, Icaria was used as the name of a utopian science fiction novel, Voyage to Icaria, and inspired an actual utopian community.

Ebola Outbreak: The Latest U.S. Government Lies. The Risk of Airborne Contagion?


By Prof. Jason Kissner

Source: Global Research

We begin with the Public Health Agency of Canada, which once (as recently as August 6) stated on its website that:

“In the laboratory, infection through small-particle aerosols has been demonstrated in primates, and airborne spread among humans is strongly suspected, although it has not yet been conclusively demonstrated (1613). The importance of this route of transmission is not clear. Poor hygienic conditions can aid the spread of the virus.”

No more; the “airborne spread among humans is strongly suspected” language has been cleansed:

“In laboratory settings, non-human primates exposed to aerosolized ebolavirus from pigs have become infected; however, airborne transmission has not been demonstrated between non-human primates. Viral shedding has been observed in nasopharyngeal secretions and rectal swabs of pigs following experimental inoculation.”

Are we to suppose that very recent and ground-breaking research was conducted that indicated there is no longer reason to “strongly suspect” that airborne Ebola contagion occurs? Surely, the research was done three weeks ago, and we only need to wait another couple of days until the study is released for public consumption. Feel better now?

If not, perhaps the 9/30 words of the Centers for Disease Control accompanying the Dallas Ebola case will provide some solace. Or, perhaps those words just contain another pack of U.S. Government lies. Let’s investigate.

Before addressing the CDC’s statement, we should articulate some pivotal Ebola outbreak facts we’re apparently not supposed to mention or even think about, since they’ve been buried by the Government/MSM complex. So, consider this from an earlier Global Research contribution by this author, drawn from a 2014 New England Journal of Medicine article:

“Phylogenetic analysis of the full-length sequences established a separate clade for the Guinean EBOV strain in sister relationship with other known EBOV strains. This suggests that the EBOV strain from Guinea has evolved in parallel with the strains from the Democratic Republic of Congo and Gabon from a recent ancestor and has not been introduced from the latter countries into Guinea. Potential reservoirs of EBOV, fruit bats of the species Hypsignathus monstrosus, Epomops franqueti, and Myonycteris torquata, are present in large parts of West Africa [18]. It is possible that EBOV has circulated undetected in this region for some time. The emergence of the virus in Guinea highlights the risk of EBOV outbreaks in the whole West African subregion…

The high degree of similarity among the 15 partial L gene sequences, along with the three full-length sequences and the epidemiologic links between the cases, suggest a single introduction of the virus into the human population. This introduction seems to have happened in early December 2013 or even before.”

The take-home message is that we now confront a brand spanking new genetic variant of Ebola. Furthermore, we still have no idea at all how the “single introduction of the virus in the human population” of West Africa occurred. And, the current Ebola outbreak appears to be orders of magnitude more contagious than previous outbreaks. It also presents with a fatality count that far exceeds all previous outbreaks combined. But it’s certainly not airborne, so who cares about nit-picking details such as these!

In spite of the above facts, we are supposed to believe that all questions regarding the current Ebola outbreak can be answered with exclusive reference to what occurred in previous Ebola episodes, whose genetic composition had been encountered before and whose initial outbreak sources were known.

Here are a couple of questions. When was the last time an Ebola outbreak coincided with instructions to U.S. funeral homes on how to “handle the remains of Ebola patients”? Not to worry, since Alysia English, Executive Director of the Georgia Funeral Homes Association, is quoted (click preceding link) as saying “If you were in the middle of a flood or gas leak, that’s not the time to figure out how to turn it off. You want to know all of that in advance. This is no different.” So it’s just about being prepared, you see. Of course, nothing resembling this sort of preparation has ever transpired alongside any other Ebola outbreak in world history, so what gives now?

“Oh, it’s because we now have that Ebola case in Dallas.” True, but this response suffers from two fatal defects. First, we’re not supposed to worry about one tiny case as long as it’s in America, right, since according to the CDC on 9/30:

“…there’s all the difference in the world between the U.S. and parts of Africa where Ebola is spreading. The United States has a strong health care system and public health professionals who will make sure this case does not threaten our communities,” said CDC Director Dr. Tom Frieden, M.D., M.P.H. “While it is not impossible that there could be additional cases associated with this patient in the coming weeks, I have no doubt that we will contain this.”

If the U.S.’ strong health care system (which is apparently far superior to hazmat suits) is so effective at containment, what explains the funeral home preparations again? If U.S. containment procedures are so superb and the virus is no more contagious than before, what difference does it make whether the case is in Dallas, Texas or Sierra Leone? To be sure, maybe the answers to these questions are simple, and it’s just about corrupt money and the like.

However, the corrupted money explanation isn’t very plausible (at least on its own) either, for the very simple, and extremely disturbing, reason that the “funeral home preparations” article was first published on 9/29 at 3:36 PM PST—a day before the Dallas case was confirmed positive. Of course, this makes the following language at the very head of the article all the more eerie:

“CBS46 News has confirmed the Centers for Disease Control has issued guidelines to U.S. funeral homes on how to handle the remains of Ebola patients. If the outbreak of the potentially deadly virus is in West Africa, why are funeral homes in America being given guidelines?”

If the rejoinder is that “well, people thought the Dallas case might turn out positive,” the reply must be that there were several other cases, in places like Sacramento and New York, that might have turned out positive but resulted in neither funeral home preparations nor a rash of CDC “Ebola Prevention” tips (wash those hands, since they’re running low on hazmat suits!).

Hopefully, you are in the mood for two more big CDC lies, because they really are quite important. From the 9/30 CDC statement: “People are not contagious after exposure unless they develop symptoms.” This is a lie for three basic reasons. First, the studies that inform the CDC’s professed certainty on this issue relied upon analyses of previous outbreaks of then-known Ebola variants. The current strain, as stated here early on, is novel, genetically as well as geographically. Second, the distinction between “incubation” and “visible symptoms” is a continuum, not discrete in nature; a few droplets might not be rain, but they’re not indicative of fully clear skies either, so the boundary drawn by the CDC is, like nearly everything else the U.S. government does, arbitrary. Third, as even rank amateurs at statistics know, previous outbreaks have consisted of too few cases to confidently rule out small but consequential probabilities of asymptomatic transmission, completely leaving aside the fact that we have a new genetic variant of Ebola to deal with. A back-of-the-envelope sketch of this third point appears below.
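To make the sample-size point concrete, here is a minimal sketch using the classical “rule of three” from statistics: if zero events are observed in n independent trials, an approximate 95% upper confidence bound on the true event probability is 3/n. The contact counts below are invented for illustration and are not drawn from any actual outbreak data.

```python
# Minimal sketch: how large a probability of asymptomatic transmission
# is still consistent with observing zero such transmissions?
# Rule of three: with 0 events in n trials, the ~95% upper confidence
# bound on the per-trial event probability is 3/n.

def upper_bound_95(n_trials: int) -> float:
    """Approximate 95% upper bound on an event probability
    after observing 0 events in n_trials independent trials."""
    return 3.0 / n_trials

# Hypothetical numbers of monitored contacts, chosen for illustration only.
for n in (50, 500, 5000):
    print(f"0 events in {n:>5} contacts -> true rate could still be ~{upper_bound_95(n):.4f}")
```

Even a few hundred closely monitored contacts with no observed asymptomatic transmission still leaves a per-contact probability of a fraction of a percent on the table, which, multiplied across thousands of cases, is exactly the kind of “small but consequential” possibility at issue.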

The last major CDC lie mentioned in this article is the claim, repeated ad nauseam, that “infrastructure shortcomings” and the like are wholly sufficient to explain the exponential increase in the number of cases presented by the current outbreak. We should believe that only when presented with well-designed multivariate contagion models that properly incorporate information about prior Ebola outbreaks and generate findings that socioeconomic differences between West Africa and other regions of Africa (such as the former Zaire) can alone fully explain the observed differences associated with the current outbreak. It seems to this author that we should strongly doubt that the current contagion can be fully explained without at some point invoking features of the novel genetic strain. A toy illustration of what such a model comparison involves follows.
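For readers unfamiliar with contagion models, here is a toy SIR (susceptible-infectious-recovered) simulation illustrating the kind of comparison being demanded: early growth is roughly exponential, and a modest difference in the transmission rate, whether caused by a more contagious strain or by weaker health infrastructure, produces dramatically different case counts. All parameter values are invented for illustration and are not fitted to any real Ebola data.

```python
# Toy discrete-time SIR model, for illustration only.
# beta: per-day transmission rate; gamma: per-day recovery/removal rate.
# Disentangling strain effects from infrastructure effects would require
# fitting models like this to real outbreak data, which this sketch does not do.

def sir_cumulative_cases(beta: float, gamma: float,
                         population: int, seed: int, days: int) -> float:
    s, i = float(population - seed), float(seed)
    for _ in range(days):
        new_infections = beta * s * i / population
        s -= new_infections
        i += new_infections - gamma * i
    return population - s  # cumulative cases = everyone no longer susceptible

population, seed, days, gamma = 1_000_000, 10, 120, 0.1
for beta in (0.15, 0.25):  # two hypothetical transmission rates
    total = sir_cumulative_cases(beta, gamma, population, seed, days)
    print(f"beta={beta} (R0={beta/gamma:.1f}): ~{total:,.0f} cases after {days} days")
```

The only point of the sketch is that small parameter differences compound exponentially, so attributing an entire difference in outbreak size to one factor requires an actual model fit, not mere assertion.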

Dr. Jason Kissner is Associate Professor of Criminology at California State University. Dr. Kissner’s research on gangs and self-control has appeared in academic journals. His current empirical research interests include active shootings. You can reach him at crimprof2010[at]hotmail.com   


Soul Resonance and Music


Many interesting insights on the nature and power of music appear in a recent article posted at Montalk.net. Tom’s writings on his site are steeped in physics, spirituality, and multidisciplinary research across a number of academic and esoteric fields of study. The piece excerpted below is no exception, beginning with an exploration of the various cultural, environmental, physiological and emotional factors forming one’s musical preferences, continuing on to the effects of music on soul, spirit, and psychology, and concluding with speculation on the origins, evolution and future of music:

Introduction

There are subjective and objective reasons why you might prefer one song over another. Subjective reasons include:

  • Tradition: because that is what you heard while growing up. Your preference then arises from habit and identification with your family and culture. You derive pleasure from safety, comfort, and familiarity. Folk and country music feature this prominently.
  • Identity: because the song is a token representation of some subculture you have invested your social identity into, whereby the music is more a fashion accessory or emblem displayed before others. You derive satisfaction from the reactions you get from others. Anything associated with a distinctive look such as rap, punk, goth, country, and metal can serve this function.
  • Sentiment: because you hear a song during a meaningful or emotional time in your life, and the two become linked together in your mind. The song will then trigger those same emotions when heard again in the future. Like a scent of perfume bringing back fond memories, the song delivers a pleasurable sentimental effect. Pop songs, especially ballads frequently played on the radio, appeal to this factor.

Alone, these factors have little to do with the intrinsic musicality of the song. They merely project subjective values upon what is heard.

True music is measured by the degree to which its melody, harmony, rhythm, and texture in and of themselves evoke an objective response in us. For example, a minor chord sounds sad without us ever needing to be conditioned to feel that. Infants can distinguish between harmonious and dissonant chords well before their enculturation. A beat can make us clap or tap our foot without having to be taught to do so, as seen in babies who bend their knees and bounce to the music instinctively. Similarly, an odd pattern of strange sounds can make us tilt our heads in curiosity.

Some objective responses stimulate the intellect, some the physical body, and some the emotional and spiritual aspects of our being. So in addition to the aforementioned subjective reasons for musical preference, there are also objective ones:

  • Intrigue: your intellect is aroused by the originality, quirkiness, or complexity of a song. You find amusement in being stirred from boredom, apathy, or jadedness by its novelty. Experimental electronica, noise, and math rock focus exclusively on this aspect.
  • Groove: the song’s beat and rhythm stimulate the motor and speech areas of your brain, provoking you to dance. You derive pleasure from the endorphins released through physical movement, from the social approval and camaraderie present when dancing with others, and it simply feels good being physically motivated and energized by the sonic equivalent of a stimulant drug.
  • Resonance: there is something within a song that stimulates something within you at the emotional, spiritual, archetypal level. It evokes a response according to how much we inwardly resonate with that song’s combination of melody, harmony, rhythm, and texture.

Songs typically represent a mixture of all the above. When a song combines several factors, it has greater impact and wider appeal:

  • A bit of emotional resonance goes a long way toward building associative conditioning, which then amplifies the apparent emotional intensity of the song and leads to a strong sentimental effect. This is the basis of sappy ballads played on radio stations throughout the 70s and 80s.
  • Groove enhances intellectually fascinating songs by adding some physical energy, making it both interesting and fun, with many examples to be found in electronic music.
  • Groove combined with tradition makes for a high dance factor, as can be heard in Eastern European folk dances, samba and salsa, Mexican polka, American hoedowns and country line dancing.
  • Identity, groove, intrigue, and the resonance of anger may be found in most forms of nu metal, djent, screamo, grindcore, etc.

Musical Preferences

We know that people differ in the degree to which they respond to a song. Some may not identify with the tradition being represented; some find its intellectual complexity confusing and irritating; some only desire groove and find little appeal in a slow emotional ballad; some do not have within their souls the aspects that a song is aiming to resonate; some never had a meaningful or emotional experience linked with a particular song that, for someone else, has much sentimental value.

So when different people respond differently to the same song, understand that in regard to the objective factors, the difference involves only the degree to which that factor is present in that person. A quirky and complex experimental piece might arouse much interest in one person, little interest in another, and strong disinterest in a third. When a song has groove, one person will dance uncontrollably, another will only tap his or her foot, and another with no sense of rhythm will fold his arms in boredom. When a song resonates the emotion of happiness, one person will have tears in her eyes, another will merely feel uplifted, and another might not care for feeling happy at the moment. It’s about varying degrees on the same scale.

On the other hand, the subjective factors have no such consistency:

  • One man hears a song during his first kiss, another just prior to the car accident that killed his wife. The same song by association will evoke a smile in the first and sadness in the latter.
  • The same rap song brings a sense of belonging and identity to one person and a sense of hatred or contempt against black culture in another.
  • Negative association can be so strong that it overrides the intrinsic resonance value of a song. One person likes metal because it resonates his inner sense of valor and strength, another hates it solely because her abusive ex-boyfriend was in a metal band.

Strong antipathy against certain music is usually due to a combination of lack of resonance, negative conditioned associations, clash against one’s tradition or subcultural affiliation, and dislike of the bodily responses induced by a song’s texture and rhythm (such as strong dance beats coming off as licentious to the prudish, or distorted guitars grating the ears of those who prefer comfort and gentleness).

So the question arises, what does musical preference say about a person? Here are some possibilities:

  • If you like a song solely because of tradition, identification, or sentimentalism, then that simply indicates the nature of the experiences and social influences you have been imprinted with. It says very little about your inner being. How can it, if resonance with the song’s intrinsic musicality played no part in why you listen to it or sing it?
  • If you like a song solely for its intellectual intrigue, then that merely indicates you haven’t really heard something like it before. It is something new, surprising, and thus amusing. If the song is complex and abstract, maybe it says you have an active intellect that enjoys abstract sensory stimulation. But it says nothing about your soul.
  • If you like songs solely for their groove, then you’re probably a kinesthetic person with good hand-eye coordination and a healthy motor-speech system in the brain. It speaks more to your physiological and neurological composition than anything.

These factors don’t provide much insight into your inner emotional, spiritual, archetypal composition. For that, we must look at the resonance factor, whereby something in music resonates something in you. In other words, pure communication from song to soul.

Soul Resonance

Our internal compositions differ; we don’t all have the same emotional resonance spectrum. A song can only resonate what is there to be resonated, and if a portion of one’s inner spectrum is absent, then the corresponding qualities of the song will not be noticed, let alone felt. Like two people with different types of color blindness, it’s possible for one person to see something in a song that the other cannot, and vice versa. This kind of difference is not due to a difference in subjective projection or association, but inner perception of what is objectively there.

So what we’re really talking about here is soul resonance characteristics, meaning the unique spectrum of emotions, themes of experience, and pathways to fulfillment that you most deeply respond to and yearn for. These can be glimpsed by asking yourself the following questions:

  • What are your deepest priorities?
  • What brings you the greatest fulfillment?
  • What motivates your existence?
  • What completes you as a being?

The answers may correspond to the music you resonate with most. Esoterically, the answers to these questions also correspond to the “story of your life.” The same soul resonance characteristics that are touched by music are also touched by your inner responses to life events. In fact, it is these resonance characteristics that synchronistically attract such events in the first place through quantum-metaphysical processes. Thus the theme of your life, the nature of your soul, and the musical qualities of the songs you resonate with all share correspondence.

Read the full article with audio examples at http://montalk.net/metaphys/265/soul-resonance-and-music

Podcast Roundup

9/7: On Expanding Minds, hosts Maja D’Aoust and Erik Davis have a conversation with Andy Sharp of English Heretic about death, horror films, Hiroshima, psychogeography, and his latest release, The Underworld Service.

 
http://s50.podbean.com/pb/fd840a4721e38d3f25dd4ec01834d2c6/541340f7/data2/blogs18/276613/uploads/ExpandingMind_090714.mp3

9/8: R.U. Sirius joins hosts Chris Dancy and Klint Finley to discuss technology, transhumanism, and the current social/political climate, among other topics.

https://soundcloud.com/itsmweekly/pending-mindful-cyborgs-episode-37
 
9/9: Peter Null interviews Andrew Kolin, professor of political science at Hilbert College in Hamburg, New York, and Kevin Carson, researcher at the Center for a Stateless Society, on the militarization of police, the centralization of power, war, and the military-industrial complex.


http://s53.podbean.com/pb/e788a26888199ef114360f06cc89f48c/541347f9/data1/blogs18/371244/uploads/ProgressiveCommentaryHour_090914.mp3

9/10: On the C-Realm, KMO and June Pulliam discuss and dissect the archetypes and cultural meaning of zombie apocalypse narratives.


http://c-realmpodcast.podOmatic.com/enclosure/2014-09-10T12_48_22-07_00.mp3

9/11: Christopher Knowles joins Aeon Byte Gnostic Radio to examine how Gnosticism connects to alternative cultures, politics and humanity’s existential crisis.


http://content.screencast.com/users/AeonByte/folders/AEON%20BYTE/media/7984ec1d-8363-4162-a034-0dabc54aef33/1.%20Gnosticism%20and%20Politics%20with%20Chris%20Knowles.mp3

9/12: On New World Next Week, James Corbett and James Evan Pilato report on 9/11 terror hysteria, Obama’s private CFR event with Sandy Berger (9/11 document thief) and the cryptocurrency/anti-surveillance potential of a new off-the-grid communications technology.

 
http://www.corbettreport.com/mp3/2014-09-11%20James%20Evan%20Pilato.mp3

DATAcide: The Total Annihilation of Life as We Know It

[Image: panopticon]

By Douglas Haddow

Source: Adbusters

“So tell me, why did you leave your last job?” he asks.

The first thing I remember about the internet was the noise. That screeching howl of static blips signifying that you were, at last, online. I first heard it in the summer of ’93. We were huddled around my friend’s brand new Macintosh, palms sweaty, one of us on lookout for his mom, the others transfixed as our WebCrawler search bore fruit. An image came chugging down, inch by inch. You could hear the modem wince as it loaded, and like a hammer banging out raw pixels from the darkness beyond the screen, a grainy, low-res jpeg came into view. It was a woman and a horse.

Since then, I’ve had a complicated relationship with the internet. We all have. The noise is gone now, and its reach has grown from a network of isolated weirdos into a silent and invisible membrane that connects everything we do and say.

“I needed a bigger challenge,” I say. This is a lie.

The brewpub we’re in has freshly painted white walls and a polished concrete floor, 20 ft ceilings and dangling lightbulbs. It could double as a minimalist porn set, or perhaps a rendition chamber. Concrete is easy to clean. The table we’re at is long and communal. Whenever someone’s smartphone vibrates we all feel it through the wood, and we’re feeling it every second minute — a look of misery slicing across my face when I realize it’s not mine.

“Tell me about your ideal process,” the guy sitting down the table from us says. My eyes strain sideways. He looks to be about thirty; we all do. Like a young Jeff Bezos, his skin is the color of fresh milk. He’s dressed like a Stasi agent trying to blend in at a disco. Textbook Zuckercore: a collared blue-green plaid shirt unbuttoned with a subdued grey-on-grey graphic tee, blue jeans and sneakers. Functional sneakers. Tech sneakers. This is a tech bar. Frequented by tech people who do tech things. The park down the street is now a tech park. That’s where the tech types gather to broadcast their whimsy and play inclusive non-sports like Quidditch, which, I’m told, is something actual people actually do. It’s a nerd paradise where the only problems that exist are the ones that you’re inspired to solve. And I want in on it, because I want to believe.

“I’m a big fan of social,” I blurt out as an aside. He replies with a calm and ministerial nod. Nobody says “social media” anymore, it’s just “social” now.

My atoms are sitting here drinking a beer, being interviewed for a position at a firm that specializes in online brand management systems. Which is a euphemism for a human centipede of marketers selling marketing to marketers for marketing. The firm is worth a billion dollars. You’ve never heard of it. It’s the type of place where they force you to play ping-pong if you come in looking depressed. Meet the new boss, same as the old boss, except this one is very concerned that you see him as a positive force in the universe.

I’m here, bringing the cold beer to my dry lips and bobbing my head in my best impersonation of someone who doesn’t feel ill when he hears the words “key metrics,” “familiarity,” “control groups” and “variant groups.” It’s the dawn of the new creative economy, and I can dig it. I’m here, but I’m also spread across the internet in a series of containers. I’m in Facebook, I’m in Instagram, I’m in Google, I’m in Twitter and a thousand other places I never knew existed. Depending how my body is disposed of, it will either become dirt or atmosphere. But the digital atoms will live forever, or at least until civilization is incinerated by whatever means we choose to off ourselves.

“What about this position interests you?” he asks.

When the TechCrunchers preach the gospel of disruption, it’s from an industrial perspective that sees life on Earth as a series of business models to be upended. Disrupt or die is the motto, but they never mention the disruptees — the travel agents, the cab drivers, the bellhops. The journalists. The meat in the box before the box is crushed by the anvil of innovation.

“People have ideas about things but it’s a bunch of things. Sign up flow for example, high level things, but sometimes I think — let’s table this for now and put together some idea maps. I feel so empowered because we’re aligned,” someone else says. I look around but can’t trace the source.

It’s hard to focus on his questions when all the conversations occurring parallel to ours combine in a cacophony of sameness, as if we’re all Tedtalking a mantra of ancient buzzwords: Engagement. Intuitive. Connection. User base. Revolutionary. It’s like coke talk gone sour, not words that are meant to say things, but stale semiotics that signify you belong. This is the new language of business. This is where Wall Street goes to find itself.

“I traded in my suit for khakis and sunglasses,” one of them says. But he’s wearing neither. “That’s the best decision you’ve ever made bro,” his colleague replies.

These are the most boring people on the planet. And it’s their world now, we’re just supplying the data for it. The game is simple: dump venture capital into a concept, get the eyeballs, take the data and profit. But the implications of this crude scheme are profound. Beyond all the hype, something weird is happening.

I can’t eat without instagramming my food. I can’t shit without playing Candy Crush. I can’t even remember who half the people are on my Facebook feed, but I’ll still mindlessly scroll through their tedious status updates and wince at their tacky wedding photos. Out of these aimless swipes, clicks and likes, a new world is being born. A world where everything we do, no matter how inane, is tracked, recorded, sorted and analyzed. Yahoo CEO Marissa Mayer has said the whole process is “like watching the planet develop a nervous system.” And through this system, every human action has become a potential source of profit for our data lords, a signal for them to identify and exploit.

“We are about to enter a world that is half digital and half physical, and without properly noticing, we’ve become half bits and half atoms. These bits are now an integral part of our identity, and we don’t own them,” says Hannes Grassegger.

Grassegger is a German economics journalist who was raised in front of his mom’s Macintosh, and later, on a Commodore 64 he got for his sixth birthday. He recently wrote “Das Kapital bin ich” (I am Capital), a book that has been criticized by the European left for being too capitalist, and by the right for being the communist manifesto of the digital era. In it he tries to answer a deceptively simple question: if our data is the oil of the 21st century, then why aren’t we all sheikhs?

“We’ve all been sharing. But the smart ones have been collecting — and they’ve packed us into their clouds,” he says. “Privacy. Transparency. Surveillance. Security gap. I don’t want to hear about it. These are sloppy downplayings of a radical new condition: We don’t own ourselves any more. We are digital serfs.”

Like Grassegger, and like everybody else, I was lured into this radical new condition with the feel-good promises of connection, friendship and self-expression. Apps, sites and services that allowed us to share what we loved, and do what we wanted. For Grassegger, these platforms were merely fresh lots ready to be ploughed, and in turn they kept the harvest: our feelings, thoughts, experiences and emotions, encoded in letters and numbers. Now they’re putting it all to work, exploiting these assets with algorithms and sentiment analysis, and our virtual souls are toiling even while we sleep.

His solution to this dilemma is pragmatic, siding with the lesser evil of establishing a personalized free data market, which would allow us to exploit our information before others do it for us. As he argues, “We must carry into the new space those rights and freedoms we eked out in the physical world centuries ago. The ownership over ourselves and the freedom to employ this property for our own benefit. Only this will help us leave behind our self-imposed digital immaturity.”

“KRRAAAAASHH!”

A waitress lets a pint glass slip from her hand and shatter on the floor, but no one bothers to look over; they’re too engaged. Then I notice something eerie about the vibe in this place. There’s no sneering, no sarcasm, and no self-deprecation. Everyone is just sort of floating along in an earnest tranquility. As if each anecdote about “that cool loft I found on Airbnb” contained some deep spiritual significance beyond my grasp.

My interrogator goes for a piss and I load up Facebook in the interim, hoping to find a shard of inspiration in my feed that will provide a topical talking point. Instead I find a listicle. A curiosity gap headline. An ad. A solicitation. Another ad. Another listicle. Oh dear, someone has lost their phone. And finally, an ad in the form of a listicle. Or is it a listicle in the form of an ad?

We were told to surf the web, but in the end, the web serf’d us. Yet there’s a worse fate than digital serfdom, as Snowden’s ongoing NSA revelations suggest. This isn’t simply about the commodification of all human kinesis, it’s the psychological colonialism that makes the commodification possible.

The nature of this bad trip was hinted at in June when we learned that Facebook manipulated the emotional states of nearly 700,000 of its users. Half of those chosen for the study were fed positivity, the others, despair. “The results show emotional contagion,” the Facebook scientists told us, meaning that they had discovered that alternating between positive and negative stimulus does indeed affect our behaviour. Or perhaps rediscovered. There’s a precedent for this. We’ve been here before.

Burrhus Frederic Skinner, known simply as B.F. to his BFFs, is best known as the psychologist with the painfully large forehead who tried to convince the world that free will was an illusion. But he wasn’t always so dire. He was once a young man with hopes and dreams who wrote poems and sonnets and wanted to become a stream-of-consciousness novelist like his idol, Marcel Proust. He failed miserably and it led him to conclude that he wasn’t capable of writing anything of interest because he had nothing to say. Frustrated and bitter, he resolved that literature was irrelevant and it should be destroyed, and that psychology was the true art form of the 20th century. So he went to Harvard and developed the concept of operant conditioning by putting a rat in a cage and manipulating its behaviour by alternating positive and negative stimulus. Now we’re the rats in the cage, only we don’t know where the cage ends and where it begins.

“What’s your five year vision for social?” he asks.

There’s a right way and a wrong way to answer this question. The wrong way is to be critical and cast scepticism on the internet’s role in our lives. For instance, you could draw a parallel between Facebook’s probing of emotional contagion and the Pentagon’s ongoing research into how to quash dissent and manage social unrest. Or you could mention how the Internet of Things will inevitably consolidate corporate power over our personal liberty unless we implement strict regulations on what part of ourselves can and cannot be quantified. But if you did that, you’d upset the prevailing good vibes and come off like a sickly paranoiac in desperate need of some likes.

The right way is to turn off, buy in and cash out. Reinforce the grand narrative and talk about how social is going to bring people together, not just online, but in the real world. How it will augment our interactions and make us more open. How in five years you’ll be able to meet your true love through an algorithm that correlates your iTunes activity to your medical history and how that algorithm will be worth a billion fucking dollars. And it’s through that magical cloud of squandered human potential that Skinner emerges once again and starts poking his finger into your brain.

After establishing himself as a household name, Skinner was finally able to live out his dream of writing a novel. That novel was Walden Two, a story about a utopian commune where people live a creative and harmonious life in accordance with the principles of radical behaviourism. In contrast to 1984 and Brave New World, it was meant to be a positive portrayal of a technologically-enabled utopian ideal. In it he writes, “The majority of people don’t want to plan. They want to be free of the responsibility of planning. What they ask for is merely some assurance that they will be decently provided for. The rest is a day-to-day enjoyment of life.”

In the late 60s, Walden Two directly inspired a series of attempts to create real-world versions of the fictional community it described. These were just a few of the thousands of communes being established across America at that time. Some thrived, but the majority fell apart within a couple of years. They failed for a number of reasons: latrines overflowed, the tofu supply ran out, the livestock starved to death and so forth. But what many of them had in common was a cascading systems failure of their foundational hypothesis — that social change could be achieved through self-transformation and that the problems of power could be solved simply by ignoring them. There was always a Machiavellian in the transformational mist, though, and a refusal to acknowledge outright how power creates invisible structures that undermine the potential for cooperative action ultimately led to their implosion. It’s in this stale pub, with its complimentary WiFi and overpriced organic popcorn, that those invisible power structures continue to thrive.

“There has to be incentive. There has to be. You can’t force people to use it,” a woman in the corner mutters. She’s among a cluster of people who for some reason are all carrying the same cheap, ugly backpack. Her hand gestures become more aggressive as the conversation progresses and she looks to be caught in a moment midway between panic and ecstasy. Her expression would make the perfect emoji for the inertia of our time. It looks sort of like this: (&’Z)

“Our notions of digital utopianism are deeply rooted in a communal wing of American counter-culture from the 1960s. That group of people have had an enormous impact on how we do technology. Many of the leading figures in technology come from that wing, Steve Jobs would be one,” says Fred Turner, a communications professor at Stanford University who researches and writes about how counterculture and technology interact.

“Their ideas of what a person is and what a community should be has suffused our idealized understanding of what a virtual community can be and what a digital citizen should be. That group believed that what you had to do to save the world was to build communities of consciousness — places where you would step outside mainstream America and turn away from politics and democracy, turn away from the state, and turn instead to people like yourself and to sharing your feelings, your ideas and your information, as a way of making a new world.”

There’s a fault line that runs underneath the recycling bins of America’s abandoned hippy communes all the way to my cracked iPhone 5 screen. And if there is one man who epitomizes the breadth of this fault, it’s Stewart Brand.

In 1968, Brand published the Whole Earth Catalog, an internet before the internet that provided a directory of products for sustainable, alternative and creative lifestyles, and helped connect those who pursued them. When the Whole Earth Catalog went out of business in 1971, Brand threw a “demise party” wherein the audience got to choose who would receive the magazine’s remaining twenty grand. They chose to give it to Fred Moore, an activist moonlighting as a dishwasher, who would go on to found the Homebrew Computer Club — the birthplace of Apple and the PC. In the 80s Brand launched The Whole Earth ’Lectronic Link, one of the world’s first virtual communities. Following its success, he started the Global Business Network — a think tank to shape the future of the world. They’ve worked on “navigating social uncertainty” with corporations like Shell Oil & AT&T, among others. In 2000, GBN was bought by Monitor Group, a consultancy firm that made headlines in 2011 by earning millions of dollars from the Libyan Government to manage and enhance the global profile of Muammar Gaddafi.

Brand’s most enduring legacy will likely come from coining the phrase “information wants to be free,” which serves as the business model for the Actually Existing Internet and the Big Data dream.

Looking around the brewpub, listening to the chatter, and staring into the bright blue eyes of my would-be employer, you can almost hear the words of Google CEO Eric Schmidt echo against the minimalist decor: “We know where you are. We know where you’ve been. We can more or less guess what you’re thinking about.”

In San Francisco, my fellow disruptees have taken to the streets and kicked off a proper bricks & bottle backlash against this sort of dictator-grade hubris that has come to define the Internet of Kings. Crude graffiti reading “DIE TECHIE SCUM” is scrawled on the sidewalk next to Googlebus blockades. TECH = DEATH signs are held up at protests. Tires are slashed, windows are smashed and #techhatecrimes is a hashtag that is being passed around Silicon Valley without a hint of irony.

Just down the street from where I’m sitting, a more passive form of protest has manifested in the form of a new café that promises an escape from the incessant blips and bleeps of the internet and its accoutrements. The tables there are also long and communal, but they’re wrapped inside an aluminum mesh designed to interrupt and restrain wireless signals and WiFi.

We are not going to escape this crisis by putting ourselves in a cage. There is no opt-out anymore. You can draw the blinds, deadlock your door, smash your smartphone, and only carry cash, but you’ll still get caught up in their all-seeing algorithmic gaze. They’ve datafied your car, your city and even your snail mail. This is not a conspiracy, it’s the status quo, and we’ve been too busy displacing our anxiety into their tidy little containers to realize what’s going on.

“Do you have any questions for me?” he finally asks, abruptly. My beer is empty, I’m thirsty for another, and the interview hasn’t gone well. I’ve failed to put on a brave face, and the only questions I have concern how much money I’m going to make. Will it be enough to pay for my escalating rent now that the datarazzi have moved into the neighborhood? Or will I have to drive an Uber in my spare time to make ends meet?

The internet is a failed utopia. And we’re all trapped inside of it. But I’m not willing to give up on it yet. It’s where I first discovered punk rock and anarchism. Where I learned about the I Ching and Albert Camus while downloading “Holiday in Cambodia” at 15kbps. It’s where I first perved out on the photos of a girl I would eventually fall in love with. It’s home to me, you and everybody we know.

No, the appropriate question to ask is: “What is the purpose of my life?”

I’ve seen the best minds of my generation sucked dry by the economics of the infinite scroll. Amidst the innovation fatigue inherent to a world with more phones than people, we’ve experienced a spectacular failure of the imagination and turned the internet, likely the only thing between us and a very dark future, into little more than a glorified counting machine.

Am I data, or am I human? The truth is somewhere in between. Next time you click I AGREE on some purposefully confusing terms and conditions form, pause for a moment to interrogate the power that lies behind the code. The dream of the internet may have proven difficult to maintain, but the solution is not to dream less, but to dream harder.

A Lie That Serves the Rich — the truth about the American economy

[Image: American economy]

By Paul Craig Roberts, John Titus, and Dave Kranzler

Source: PaulCraigRoberts.org

The labor force participation rate has declined from 66.5% in 2007 prior to the last downturn to 62.7% today. This decline in the participation rate is difficult to reconcile with the alleged economic recovery that began in June 2009 and supposedly continues today. Normally a recovery from recession results in a rise in the labor force participation rate.
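
For reference, the rate under discussion is the standard Bureau of Labor Statistics ratio. A quick definitional sketch (the notation here is ours; the 66.5% and 62.7% figures above are the authors’):

$$\text{LFPR} \;=\; \frac{\text{labor force}}{\text{civilian noninstitutional population, 16 and older}} \times 100 \;=\; \frac{\text{employed} + \text{unemployed and actively looking}}{\text{civilian noninstitutional population, 16 and older}} \times 100$$

Because discouraged workers who stop looking drop out of the numerator entirely, this rate can fall even while the headline unemployment rate improves.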

The Obama regime, economists, and the financial presstitutes have explained this decline in the participation rate as the result of retirements by the baby boomers, those 55 and older. In this five-to-six-minute video, John Titus shows that the government’s own employment data in fact show that baby boomers have been entering the workforce at record rates and are responsible for raising the labor force participation rate above where it would otherwise be. http://www.tubechop.com/watch/3544087

It is not retirees who are pushing down the participation rate, but those in the 16-19 age group, whose participation rate has fallen by 10.4%, those in the 20-24 age group, whose participation rate has fallen by 5.4%, and those in the 25-54 age group, whose participation rate is down 2.5%.
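
To see how these group-level moves net out, note that the aggregate participation rate is just a population-weighted average of the group rates. Below is a minimal Python sketch of that arithmetic; every share and rate in it is an illustrative placeholder, not an actual BLS figure. It shows how a rising 55+ rate can coexist with a falling overall rate when the larger youth and prime-age cohorts slide:

    # Aggregate participation rate as a population-weighted average of
    # group rates. All shares and rates are illustrative placeholders,
    # NOT actual BLS data.
    groups = {
        # age group: (population share, participation rate)
        "16-19": (0.07, 0.34),
        "20-24": (0.09, 0.65),
        "25-54": (0.50, 0.81),
        "55+":   (0.34, 0.40),
    }

    def aggregate_rate(g):
        # Weighted average: sum over groups of share * rate.
        return sum(share * rate for share, rate in g.values())

    print(f"before: {aggregate_rate(groups):.1%}")  # ~62.3%

    # The 55+ rate rises while the younger groups' rates fall:
    groups["55+"]   = (0.34, 0.43)
    groups["16-19"] = (0.07, 0.30)
    groups["25-54"] = (0.50, 0.79)
    print(f"after:  {aggregate_rate(groups):.1%}")  # ~62.1%, still lower

The weighting is the whole story here: the prime-age cohort is so large that even a small decline in its rate outweighs a substantial rise among older workers, which is exactly the pattern the authors describe.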

The offshoring of US manufacturing and tradable professional service jobs has resulted in an economy that can only create new jobs in lowly paid, increasingly part-time nontradable domestic service jobs, such as waitresses, bartenders, retail clerks, and ambulatory health care workers. These are not jobs that can support an independent existence. However, these jobs can supplement retirement incomes that have been hurt by many years of the Federal Reserve’s policy of zero or negative interest rates. Those who were counting on interest earnings on their savings to supplement their retirement and Social Security incomes have reentered the labor force in order to fill the gaps in their budgets created by the Fed’s policy. Unlike the young who lack savings and retirement incomes, the baby boomers’ economic lives are not totally dependent on the lowly-paid, part-time, no-benefits domestic service jobs.

Lies are told in order to make the system look acceptable so that the status quo can be continued. Offshoring America’s jobs benefits the wealthy. The lower labor costs raise corporate profits, and shareholders’ capital gains and performance bonuses of corporate executives rise with the profits. The wealthy are benefitting from the fact that the US economy no longer can create enough livable jobs to keep up with the growth in the working age population.

The clear hard fact is that the US economy is being run for the sole benefit of a few rich people.