After the Crash

Dispatches From a Long Recovery (Est. 10/2024)

Skynet Ascendant

By Cory Doctorow

Source: Locus Online

As I’ve written here before, science fiction is terrible at predicting the future, but it’s great at predicting the present. SF writers imagine all the futures they can, and these futures are processed by a huge, dynamic system consisting of editors, booksellers, and readers. The futures that attain popular and commercial success tell us what fears and aspirations for technology and society are bubbling in our collective imaginations.

When you read an era’s popular SF, you don’t learn much about the future, but you sure learn a lot about the past. Fright and hope are the inner and outer boundaries of our imagination, and the stories that appeal to either are the parameters of an era’s political reality.

Pay close attention to the impossibilities. When we find ourselves fascinated by faster-than-light travel, consciousness uploading, or the silly business from The Matrix of AIs using human beings as batteries, there’s something there that’s chiming with our lived experience of technology and social change.

Postwar SF featured mass-scale, state-level projects, a kind of science fictional New Deal. Americans and their imperial rivals built cities in space, hung skyhooks in orbit, even made Dyson Spheres that treated all the Solar System’s matter as the raw material for a new, human-optimized megaplanet/space-station that would harvest every photon put out by our sun and put it to work for the human race.

Meanwhile, the people buying these books were living in an era of rapid economic growth, and even more importantly, the fruits of that economic growth were distributed to the middle class as well as to society’s richest. This was thanks to nearly unprecedented policies that protected tenants at the expense of landlords, workers at the expense of employers, and buy­ers at the expense of sellers. How those policies came to be enacted is a question of great interest today, even as most of them have been sunsetted by successive governments across the developed world.

Thomas Piketty’s data-driven economics bestseller Capital in the Twenty-First Century argues that the vast capital destruction of the two World Wars (and the chaos of the interwar years) weakened the grip of the wealthy on the governments of the world’s developed states. The arguments in favor of workplace safety laws, taxes on capital gains, and other policies that undermined the wealthy and benefited the middle class were not new. What was new was the political possibility of these ideas.

As developed nations’ middle classes grew, so did their material wealth, political influence, and expectations that governments would build ambitious projects like interstate highways and massive civil engineering projects. These were politically popular – because lawmakers could use them to secure pork for their voters – and also lucrative for government contractors, making ‘‘Big Government’’ a rare point of agreement between the rich and middle-income earners.

(A note on poor people: Piketty’s data suggests that the share of the national wealth controlled by the bottom 50% has not changed much for several centuries – eras of prosperity are mostly about redistributing from the top 10-20% to the next 30-40%.)

Piketty hypothesizes that the returns on investment are usually greater than the rate of growth in an economy. The best way to get rich is to start with a bunch of money that you turn over to professional managers to invest for you – all things being equal, this will make you richer than you could get by inventing something everyone uses and loves. For example, Piketty contrasts Bill Gates’s fortunes as the founder of Microsoft, once the most profitable company in the world, with Gates’s fortunes as an investor after his retirement from the business. Gates-the-founder made a lot less by creating one of the most successful and profitable products in history than he did when he gave up making stuff and started owning stuff for a living.
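Piketty’s r > g claim can be sanity-checked with a toy compounding comparison. The rates and horizon below are illustrative assumptions, not Piketty’s historical estimates – the point is only that even a modest gap between r and g compounds into a large divergence:

```python
# Toy illustration of Piketty's r > g: if returns on capital (r)
# outpace economic growth (g), existing wealth compounds faster
# than the economy that produced it. Rates here are assumptions
# chosen for illustration, not Piketty's data.
r = 0.05   # assumed annual return on invested capital
g = 0.015  # assumed annual growth of the overall economy
years = 30

capital_factor = (1 + r) ** years   # how much a fortune multiplies
economy_factor = (1 + g) ** years   # how much the economy multiplies

print(f"capital grows {capital_factor:.2f}x, economy {economy_factor:.2f}x")
# capital grows 4.32x, economy 1.56x
```

Over a single generation, the fortune nearly triples relative to the economy around it – which is the mechanism behind the Gates-the-founder versus Gates-the-investor contrast above.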

By the early 1980s, the share of wealth controlled by the top decile tipped over to the point where they could make their political will felt again – again, Piketty supports this with data showing that nations elect seriously investor-friendly/worker-unfriendly governments when investors gain control over a critical percentage of the national wealth. Leaders like Reagan, Thatcher, Pinochet, and Mulroney enacted legislative reforms that reversed the post-war trend, dismantling the rules that had given skilled workers an edge over their employers – and the investors the employers served.

The greed-is-good era was also the cyberpunk era of literary globalized corporate dystopias. Even though Neuromancer and Mirrorshades predated the anti-WTO protests by a decade and a half, they painted similar pictures. Educated, skilled people – people who comprised the mass of SF buyers – became a semi-disposable underclass in a world where the hyperrich had literally ascended to the heavens, living in orbital luxury hotels and harvesting wealth from the bulk of humanity like whales straining krill.

Seen in this light, the vicious literary feuds between the cyberpunks and the old guard of space-colonizing stellar engineer writers can be seen as a struggle over our political imagination. If we crank the state’s dials all the way over to the right, favoring the industrialist ‘‘job creators’’ to the exclusion of others, will we find our way to the stars by way of trickle-down, or will the overclass graft their way into a decadent New Old Rome, where reality TV and hedge fund raids consume the attention and work we once devoted to exploring our solar system?

Today, wealth disparity consumes the popular imagination and political debates. The front-running science fictional impossibility of the unequal age is rampant artificial intelligence. There were a lot of SF movies produced in the mid-eighties, but few retain the currency of the Terminator and its humanity-annihilating AI, Skynet. Everyone seems to thrum when that chord is plucked – even the NSA named one of its illegal mass surveillance programs SKYNET.

It’s been nearly 15 years since the Matrix movies debuted, but the Red Pill/Blue Pill business still gets a lot of play, and young adults who were small children when Neo fought the AIs know exactly what we mean when we talk about the Matrix.

Stephen Hawking, Elon Musk, and other luminaries have issued panicked warnings about the coming age of humanity-hating computerized overlords. We dote on the party tricks of modern AIs, sending half-admiring/half-dreading laurels to the Watson team when it manages to win at Jeopardy or randomwalk its way into a new recipe.

The fear of AIs is way out of proportion to their performance. The Big Data-trawling systems that are supposed to find terrorists or figure out what ads to show you have been a consistent flop. Facebook’s new growth model is sending a lot of Web traffic to businesses whose Facebook followers are increasing, waiting for them to shift their major commercial strategies over to Facebook marketing, then turning off the traffic and demanding recurring payments to send it back – a far cry from using all the facts of your life to figure out that you’re about to buy a car before even you know it.

Google’s self-driving cars can only operate on roads that humans have mapped by hand, manually marking every piece of street-furniture. The NSA can’t point to a single terrorist plot that mass-surveillance has disrupted. Ad personalization sucks so hard you can hear it from orbit.

We don’t need artificial intelligences that think like us, after all. We have a lot of human cognition lying around, going spare – so much that we have to create listicles and other cognitive busy-work to absorb it. An AI that thinks like a human is a redundant vanity project – a thinking version of the ornithopter, a useless mechanical novelty that flies like a bird.

We need machines that don’t fly like birds. We need AI that thinks unlike humans. For example, we need AIs that can be vigilant for bomb-parts on airport X-rays. Humans literally can’t do this. If you spend all day looking for bomb-parts but finding water bottles, your brain will rewire your neurons to look for water bottles. You can’t get good at something you never do.

What does the fear of futuristic AI tell us about the parameters of our present-day fears and hopes?

I think it’s corporations.

We haven’t made Skynet, but we have made these autonomous, transhuman, transnational technologies whose bodies are distributed throughout our physical and economic reality. The Internet of Things version of the razorblade business model (sell cheap handles, use them to lock people into buying expensive blades) means that the products we buy treat us as adversaries, checking to see if we’re breaking the business logic of their makers and self-destructing if they sense tampering.

Corporations run on a form of code – financial regulation and accounting practices – and the modern version of this code literally prohibits corporations from treating human beings with empathy. The principle of fiduciary duty to investors means that where there is a chance to make an investor richer while making a worker or customer miserable, management is obliged to side with the investor, so long as the misery doesn’t backfire so much that it harms the investor’s quarterly return.

We humans are the inconvenient gut-flora of the corporation. They aren’t hostile to us. They aren’t sympathetic to us. Just as every human carries a hundred times more non-human cells in her gut than she has in the rest of her body, every corporation is made up of many separate living creatures that it relies upon for its survival, but which are fundamentally interchangeable and disposable for its purposes. Just as you view stray gut-flora that attacks you as a pathogen and fight it off with antibiotics, corporations attack their human adversaries with an impersonal viciousness that is all the more terrifying for its lack of any emotional heat.

The age of automation gave us stories like Chaplin’s Modern Times, and the age of multinational hedge-fund capitalism made The Matrix into an enduring parable. We’ve gone from being cogs to being a reproductive agar within which new corporations can breed. As Mitt Romney reminded us, ‘‘Corporations are people.’’

Marrying robots, killing with drones, and making empty selfies

by Edward Curtin

Source: Intrepid Report

Today everything has become a spectacle, including writing. My title probably caught your eye, as it was intended. But now I would like to tell you a personal story about a man whose brilliant work foreshadowed and dissected the issues of my title before it existed. In this he was prophetic, and it is why his work is so important. He always insisted that true artists were able to uncover society’s conflicts before they emerged consciously. Though a psychologist by profession, he was in this sense an artist as well.

His name, Rollo May, has disappeared from public discourse in this era of biological psychology and psychiatry. This great American thinker and writer was the man who introduced existential psychology to the United States. And though he died twenty-one years ago, his prescient voice begs to be heard in our current conditions.

From his first important book in 1950—The Meaning of Anxiety—he examined key underlying issues that have plagued this country ever since: the worship of technology as a death cult; the loss of a genuine sense of self; sex obsessions leading to lovelessness and impotence; and violence yoked to a lack of compassion.

In book after book, he reiterated one of his central themes: that full passionate life is only possible when one refuses to block off from consciousness the frightful emotions of anxiety, guilt, and despair. In this, his life’s work ran against the grain of the emerging zeitgeist of happy pills, mood stabilizers, and the happiness industry. “After despair,” he wrote, “the one thing left is possibility.” For possibility (Latin, posse, to be able) means power, and true power only comes to those who dare to be weak and freely embrace their personal destinies and the truth of their political and cultural conditions. I think it is not an exaggeration to say that we are presently living in an era of despair, and to embrace that reality is a hard but necessary pill to swallow. May is a wonderful guide.

While topical, in many ways his message is timeless as well. But I would like to tell you about some things I learned from him years ago that speak to our current condition. And it seems fitting that I should begin these thoughts on a day when a prominent, mainstream website has published an article arguing that humans should be able to marry robots and the day of those blissful conjugal ties is in our not too distant future. So I will proceed with those lovely words ringing in my mind: “I now pronounce you robot and wife.”

It was during the closing years of the Cold War when he and I sat down for a long conversation about his thought. Cold War rhetoric and nuclear saber rattling dominated the news and a strong anti-nuclear movement was astir. I had been deeply impressed with May’s paradoxical thinking ever since I had read his award-winning Love and Will in 1969, a year in which I had been forced out of a college teaching position for “heretical” thinking and opposition to the Vietnam war. In his work, which was not openly political, I nevertheless found a voice of deep wisdom and prophetic power. He seemed to be unearthing hidden springs of the madness sweeping the country, and in so doing also addressing the future, and, of course, me. I was feeling particularly vulnerable, yet paradoxically intensely strong, as I had recently declared myself a conscientious objector from war and the Marine Corps. It was a time like today when death and destruction were in the air, and, as Yeats puts it: “Things fall apart; the center cannot hold/Mere anarchy is loosed upon the world/The blood-dimmed tide is loosed, and everywhere/The ceremony of innocence is drowned/The best lack all conviction, while the worst/Are full of passionate intensity.”

The first thing I noticed about May the day we met was that he seemed painfully vulnerable, as though he had so opened himself to existence that the slightest breeze could blow him away. Yet when he began talking I sensed a fierceness, as well, as I recalled a favorite quote of his from Beethoven: “I will seize fate by the throat.”

So I asked him, “In reading your works one of the things that strikes me is the vitality you draw from an awareness of death. Most people would call this morbid and depressing, and yet it seems to bring you joy. I wonder how this began for you?”

“Well,” he answered without hesitation but in his ruminative way, “I’ve had some long bouts with killing illnesses. I had tuberculosis for five or six years. I had malaria fever when I was in Greece. And I’ve had several other bouts with death. If most people would call the consciousness of death depressive, I think they are the ones who have the—what I would call—masochistic or neurotic viewpoint. All through human history mortality has been faced directly, and out of it—and this is especially true for the ancient Greeks—people got the sense of the value of life from the fact that we are mortal. Now our age is afraid of death and we repress it and we think the only wise thing is to think about living, which strikes me as itself very sick. It’s because we’ve wedded ourselves to technology, and technology is really a study of death. You say ‘vitality.’ You can’t speak of technology as having vitality. Vitality is the human being’s contribution, and he ought to use technology to make his life richer. But we have become identified with it.”

Presto! Back to the present/future! As if on cue, a refutation of May’s dismissal of machines having life walks in my door. I see the mailman deliver our mail, so I get up and fetch it. An invitation has arrived for a public lecture at the college where I teach—a lecture by the futurist Ray Kurzweil, the man of “Singularity” fame, the prognosticator of the day he says is coming when artificial intelligence will surpass human intelligence and human biology will disappear into the machine. Ray has a plan to never die, so he takes 130 plus supplements a day to keep himself alive until he is able to upload his consciousness onto a hard drive and become one with the machine for a happy immortality as bits of information. Sounds like a great hereafter. And Ray has a backup plan in case the pills don’t do the trick and keep him going until he impregnates the machine; he’ll be fresh frozen at the Alcor Life Extension Foundation where he expects to be defrosted like a frozen burrito in no more than fifty years.

May said to me, “I’m very much against the quantitative views of human life. You could live exceptionally as Pascal did and die in your middle forties. As Kierkegaard did also. The length of life I don’t think is relevant. The idea that we are going to prolong life for two hundred years seems to me to be the most misplaced goal in the whole technological, crazy scheme.”

It looks like Rollo had a point: the worship of technology as a death cult. He could see it then, and today it is carrying us to our doom unless we change course. “More and more,” he wrote, “the question is being asked whether society as a whole is psychotic, and the pause after the question is a sign that the answer could be yes as well as no.” There was, he then felt, a fear of psychosis on a very broad scale, and at the heart of this fear is a loss of faith in the reality of the self, as well as a widespread feeling that one can never be sure anything is real. This sense of unreality has increased exponentially since then, and the issue of self-identity has become a hall of mirrors in our reality-media funhouse. “As in a Kafka novel, everything is waiting for us, but we ourselves do not appear.” But what does appear today, as then, but in a slightly different guise, and grows larger and larger as people’s faith in themselves grows smaller and smaller and their sense of impotence increases, is the possibility of nuclear warfare and world destruction—a new cold war started by the United States by encircling Russia and setting Ukraine ablaze. The ultimate technological death cult is, of course, nuclear weapons.

May made the connections. Like the great sociologist C. Wright Mills, he knew that our destinies are personal and social, and to deny one is to deny the other. By being existential he meant understanding the individual, not as an atomized self, but as a person-in-the-world. Mills called it the sociological imagination; May preferred the term paradoxical. But they were on the same page. One’s sense of self—self-identity—is rooted in social and historical conditions.

Starting with Man’s Search for Himself in the 1950s and continuing until his death in 1994, May repeatedly explored the reasons why there was an increasing loss of a genuine sense of self resulting in widespread identity confusion and a growing apathy linked to a lack of compassion. He clearly described the anxiety and loneliness that ate at so many people who “not only do not know what they want; they often do not have any clear idea of what they feel.” Feeling only empty and bored and lacking a real sense of self, they conform to hollow cultural values and mores while consuming the goods and services that a consumer culture offers to fill them up. Consuming, they are consumed. This powerless dependency, rooted in a lack of self-identity and the need to be liked, leads to painful anxiety, despair, and powerlessness resulting in acquiescence to social ills. This is today’s selfie/media culture in a nutshell, what Christopher Lasch once called the culture of narcissism.

I obviously couldn’t ask him when we talked, but I can imagine his response to today’s trends of people marrying robots, selfie photos, Facebook, avatars and second lives in cyberspace, the growth of pornography, sex with machines, the sexual saturation of culture, electronic warfare, drone killings, etc.—a bemused laugh and a comment suggesting the tragedy of it all. In Love and Will he wrote that “the contemporary paradoxes in sex and love have one thing in common, namely the banalization of sex and love. By anesthetizing feeling in order to perform better, by employing sex as a tool to prove prowess and identity, by using sensuality to hide sensitivity, we have emasculated sex and left it vapid and empty. The banalization of sex is well-aided and abetted by our mass communication. . . . They oversimplify love and sex, treating the topic like a combination of learning to play tennis and buying life insurance. In this process, we have robbed sex of its power by sidestepping eros (the creative life force); and we have ended by dehumanizing both.” He predicted that this technical approach to sex would lead to sex obsessions, lovelessness, and increased sexual impotence. And here we are—Viagra, big butts, enhanced this and enhanced that—all in the service of sexual satisfaction produced by the cult of technique and devoid of passion.

“Shooting” yourself with a phone camera, sex with a robot or a machine, and killing with drones—this is life today. We have become separated from our humanity by our machines. We worship our images and in so doing can’t grasp the death and destruction caused by our drones and foreign wars. Others don’t exist in this solipsistic culture. May saw it coming and explained why. He saw that violence was yoked to a lack of compassion and that this lack of compassion (to suffer with others) was connected to our flight from death and emotions we consider negative. He saw this form of thinking as an effort to control life that was self-defeating and could only lead to more violence.

“Paradoxical thinking,” he told me, “seems to me to be the only kind that gets to the root of human existence. I don’t think analytical thinking does. It leaves out too much. You remember Heraclitus. I think he’s quite right that we always think in terms of positive/negative. We think like electricity, thus both the negative and positive pole and the oscillation back and forth, and human thinking is a play with opposites.”

Since he has written so much about the breakdown of our traditional myths and symbols, I asked him if there was any one word or symbol that he thought encompassed the body of his work.

After a long pause, he said, “No, I think that’s impossible for any person who writes to say. I think you could say it much better than I could because we’re so much in it. All I know is that I think paradoxically.” And without pause or any word from me, he continued. “Well, if you wanted to push me, I would say that what I think is the basic, well, the basic symbol of my life, I would say that it is compassion. That’s what matters most to me. I grew up in a rather difficult family, quite difficult. I did not have a good childhood. I was quite lonely as a child. And I did suffer a good deal.”

Out of this childhood pain, he learned early to be a therapist for his family, and felt that these experiences gave him an acute sensitivity to others’ feelings. In his memoir Paulus, about his friend, Paul Tillich, the great Protestant theologian, he wrote words that could equally apply to himself: “Someone has to mediate, to make a connection through his own life between opposites.” For out of his wounds, May has created a powerful body of writings, and out of a torn self, a paradox of wholeness.

For us today, in the era of apathy, depression, and indifference to the suffering and deaths of “others” everywhere, May’s work begs to be resurrected. He urges us to care again, and to let our care and compassion lead us to act to stop the violence that we are taught to ignore. Don’t look away, I can hear him say, face fully all dimensions of the human experience, the negative and positive; remember that despair and joy are linked to the possibility of freedom; reject the cult of death that hides within technological obsessiveness; and remember that love brings the intimation of our mortality but also our greatest joys and passions.

And if he were still sitting across from me—and you—today, he’d probably also say with a grin, “Above all, don’t marry a robot.”

Edward Curtin is a sociologist and writer who teaches at Massachusetts College of Liberal Arts and has published widely.

The true value of money

Economics needs a revolution.

By David Orrell

Source: Adbusters

This sentiment has been expressed by people from the physicist turned hedge-fund manager Jean-Philippe Bouchaud (in a 2008 paper), to the Bank of England’s Andrew Haldane (in a 2014 foreword for Manchester’s student-run Post-Crash Economics), to activist groups such as Kick It Over. So what would such a revolution look like?

Perhaps the archetypal model for a scientific revolution is the quantum revolution that shocked the world at the turn of the last century. In the space of a few short years, almost everything that was known about the nature of matter was overturned. The Newtonian view of the world as a predictable machine crumbled with it.

Except, that is, in economics – which continues to base its models on quasi-Newtonian economic laws.

A peculiar feature of orthodox economics is that money is treated as an inert medium of exchange, with no special properties of its own. As a result, money is largely excluded from macroeconomic models, which is one reason the financial crisis of 2007/8 was not predicted (it involved money). In many respects, when viewed through the lens of quantum physics, money behaves a lot like matter – and acknowledging that behavior promises to do to economics what quanta did for physics.

The main insight of quantum physics is that matter is composed of entities which behave in some ways as waves and in other ways as particles. This novel insight countered the Newtonian view that billiard ball-like atoms behaved independently of each other. A beam of light, for example, is an electromagnetic wave but it is also a stream of particles known as photons. At a quantum level, matter is fundamentally dualistic: neither the particle nor the wave description is complete by itself.

The same can be said of money, which turns out to have quantum properties of its own. Money is strange stuff, when you think about it – but because it has been around for millennia we rarely do. Consider for example a U.S. dollar bill. On the one hand it represents a number – in this case the number one. On the other hand it is a physical thing which can be possessed, exchanged and above all valued (even lusted after, if there are enough of them). It therefore lives partly in the abstract world of numbers and mathematics and partly in the real world of things, people and value.

The same is true of any money object that we use for payment. Here “object” could refer either to a physical object – such as a coin – or a virtual object, such as 1.2107 bitcoin (BTC) sent from a phone. What makes such objects special is that they have a fixed, defined value in currency units.

While seeing money objects as things with a fixed monetary value might appear trivial, they turn out to have complex and contradictory properties that feed into the economy as a whole. In particular, they combine two aspects, abstract number and real-world value, which are as different as waves and particles.

For example, numbers are subject to mathematical laws – such as compound interest – and can grow without limit, while in the real world natural processes tend to be subject to bounds. In 1850, an American lawyer did the math and calculated that five English pennies invested at 5 percent compound interest since 0 AD would have accumulated to 32 billion spheres of pure gold, each equal in size to the Earth. This is a useful exercise for anyone who thinks that gross domestic product (GDP) can grow forever.
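The lawyer’s arithmetic is easy to check today. Taking the rate and dates exactly as the anecdote states them (5 percent per year, compounded annually from 0 AD to 1850), the growth factor alone is astronomical:

```python
# Sanity check of the 1850 anecdote: five pennies at 5% annual
# compound interest from 0 AD to 1850. Rate and dates are taken
# from the anecdote as quoted, not independently verified.
principal = 5          # pennies
rate = 0.05            # 5% per year
years = 1850

factor = (1 + rate) ** years   # pure compounding multiplier
total = principal * factor     # pennies owed after 1850 years

print(f"growth factor: {factor:.3e}")   # ~1.6e39
print(f"pennies owed:  {total:.3e}")
```

A multiplier on the order of 10^39 is the whole point: no physical stock of gold, land, or anything else can keep pace with an unbounded exponential, which is exactly the tension between abstract number and real-world value the passage describes.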

Numbers can be negative, as in debts, but (as the English chemist-turned-economist Frederick Soddy pointed out) there is no such thing as a negative number of objects. You might be underwater on your mortgage but you can’t own a negative house. Throughout history the frightening ability of debt to grow without bounds has been responsible for forcing people into economic slavery.

Numbers are hard and precise, like the particle aspect of matter. Real-world concepts such as value are diffuse and fuzzy, like the wave aspect of matter. By combining these two aspects in a single package, money objects are our contribution to the quantum universe.

The dualistic nature of money explains its frequently paradoxical behavior. In the early 2000s, cheap credit in the United States meant that even low-income people could afford their own homes. Some cashed in and sold their houses at the top of the market. For them the money was real – they could go to the bank and withdraw dollar bills. But when the credit crunch kicked in most of the new money disappeared into the ether, as if it had never existed. Money seemed to be both real and unreal at the same time – a sensation familiar to anyone who has studied quantum physics.

Just as quantum physics overturned Newtonian physics, so a reexamination of money promises to disrupt economics. The reason that critics are calling for fundamental change is that neoclassical economics has failed to provide answers to problems such as wealth inequality, financial crises and environmental degradation – which is unsurprising if it treats money as nothing more than an inert, Newtonian medium of exchange. The tendency of money to clump and accumulate with a small group of creditors, or for financial markets to be inherently unstable, or for GDP growth to be valued over the environment, becomes clearer when we acknowledge the vital, active role of money and the tension and discrepancy between numbers and the real world that drives it.

Of course, one should not underestimate the resistance of economists to adopting new ideas; however, the worldwide student movement calling for change is unlikely to go away. Economics is primed for a quantum revolution of its own.

— David Orrell is a mathematician and author. His latest book, Truth or Beauty: Science and The Quest for Order, explores the role of aesthetics in science. He is currently working on a book about money.

Co-Creation and the Greater Reality

By Rahkyt

Source: Sacred Space in Time

Life is a continuous process of creation and destruction. With every action and reaction, something new is birthed into being and something old undergoes the processes of destruction. From the thought processes we engage in during the course of the day, to the actions we take and reactions we make, we are creating and destroying our perception of reality. As a result, in each instance, we are truly remaking ourselves anew, in the only way that matters: the construction of our own, personal rendition of reality.

Memory serves the function of maintaining consistency. Without it, we become a new person in each instance. Memory ties us to the past and creates a stream of impressions which guide our present and future actions. We define ourselves by our memories and through them, we present a facade to the world comprised of impressions that may be factual or not, based upon our own emotional ties to events that have passed. The past itself, being dependent upon perception, is more illusory than real. Outside of mechanical means of perception, video and audio recordings, pictures, etcetera, our memories of the past are not at all reliable. Our emotional state of being at the moment of experience often determines how we perceive events, and the mind fills in gaps with re-created scenes, making memory often more fantastical than reality-based.

When we hold on to stories of the past and use them to define our present, we limit our options and outcomes. By telling ourselves and others that we are this or that because we have experienced this or that, we bind ourselves to the potentialities inherent in the probabilistic options arising from our past experience. Logical determinism is the result. We see no other options available because our synaptic patterns of reality-formation dictate outcomes bounded by the past, when in reality there are no such boundaries truly in place, only those we have placed upon ourselves.

We are thereby bound into preconditioned repetitions of past patterns from which there is seemingly no escape. Despair and depression result, limiting the outcomes of a life to a downward-spiraling pattern of recurrence. Fear acts as the conditioning factor, pulling up past scenes of loathing that trap thought in conditioned pathways of expression. Again and again we dance the same dance, although the tune itself has changed. The ability to cogitate at higher levels is a divine gift that allows us to shift those patterns and achieve a higher state of expression if we so choose. We can change, no matter the thickness of our synaptic connections, the density of our neural nets, or the pervasiveness of fear-based patterns of call and response.

Removing these boundaries and experiencing the full gamut of experience is the challenge of a lifetime. Becoming open to the infinite expression of creative union with the ultimate requires the death of the past and the embrace of the present, thereby opening up the possibilities of the future. It entails the conscious processing of future experience through the release of past experience, resulting in the embrace of the Now experience. Each moment recreates itself anew, requiring new output in order to take full advantage of new input. While reality does seemingly proceed in cycles, and we undergo certain types of experience again and again, it is really a spiral: each new experience, while appearing similar, is a new iteration at a higher level of occurrence. Therefore, the same reaction is not always the best action.

Determining the difference between action and reaction takes a depth of understanding that must be based upon recognition of the fundamental patterns inherent in our lived realities. It requires a thorough recapitulation of our life experiences, where every memory we can access has been ruminated upon and its lessons internalized and made accessible to conscious thought, and where our reactions become instead new actions based upon present input, which may differ from past input in important and fundamental ways.

Life is purposeful and there are no accidents. Each perception, each interaction is meaningful and is connected to external realities in fundamental ways. Together, we co-create the Greater Reality and, despite the hype, we are all integrally connected at the base level of existence, beneath conscious perception and experience. This web of continuous co-creation is beyond the conscious ability of most to perceive, and yet being able to perceive it is not necessary. Trust in God, in the willful and deliberate manifestation of reality, is required, alongside the desire to act in harmonious resonance with the dictates of the Multiverse. This can be done by simply existing in the flow of creation and destruction, maintaining a perceptive ambivalence toward societally determined conceptions of value while adhering to the internal sense of beingness within the continuous flow of change and evolution.

No matter how things seem, whether they are determined to be good or bad, all has purpose, all has reason. Accepting this truth of existence and working within one’s own flow of interaction at the personal and subjective level sends emanations of causality out into the greater creation, combining with the output of other souls to morph into stupendously complex manifestations that form the holographic context of the combined, material co-creation that we call the world.

 

The Internet Doesn’t Exist


By Jacob Silverman

Source: The Baffler

The Internet has been very busy. In just the last week, Caitlyn Jenner broke the Internet, but she also united it. The FCC made war on the Internet. The Internet shamed a couple. The Internet had a dark side, Nikki Finke was barred from the Internet, the Supreme Court made the Internet less safe for women, the Internet named a famous fetus, the Internet did stuff with a superhero movie, and the Internet changed. A girl also won the Internet, Jack White had a difficult history with the Internet, and the Internet “shafted” a Canadian journalist.

“The Internet” is the universal straw man, a hero or villain for every occasion. The Internet, the Internet, the Internet—this decentralized communications network has long been granted a proper noun and practically a degree of sentience. Yet few people talk about “the Telephone” as if it were some person or place, though perhaps they once did. This eagerness to grant the Internet some degree of autonomy—to make it into an actor, an entity—stems in part from its apparent abstraction. Where does all this information come from? As Ray Bradbury famously said, “To hell with you and to hell with the Internet. It’s distracting. It’s meaningless; it’s not real. It’s in the air somewhere.”

Bradbury wasn’t just slipping into kneejerk techno-fear. He was also guilty of the same fallacy that crops up again and again in digital journalism: the assumption that the Internet is some monolithic mass, a discrete population or interest group. “It’s distracting,” Bradbury said, without specifying what “it” was.

But in another, more important way, Bradbury was absolutely right: the Internet doesn’t exist.

A couple years ago, Rachel Law, a grad student at Parsons at the time, had this to say: “The ‘Internet’ does not exist. Instead, it is many overlapping filter bubbles which selectively curate us into data objects to be consumed and purchased by advertisers.” As she also said, a bit less academically, “Browsing is now determined by your consumer profile and what you see, hear and the feeds you receive are tailored from your friends’ lists, emails, online purchases, etc.”

What we call the Internet—and what web writers so lazily draw on for their work—is less a hive mind or a throng or a gathering place and more a personalized set of online maneuvers guided by algorithmic recommendations. When we look at our browser windows, we see our own particular interests, social networks, and purchasing histories scrambled up to stare back at us. But because we haven’t found a shared discourse to talk about this complex arrangement of competing influences and relationships, we reach for a term to contain it all. Enter “the Internet.”
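The "personalized set of online maneuvers guided by algorithmic recommendations" can be made concrete with a toy sketch. This is not any real platform's algorithm; the user profiles, stories, and tags below are invented for illustration. Two users drawing from the same pool of stories receive differently ranked feeds, depending on how each story's tags overlap with their own interest profile:

```python
# Toy illustration of filter-bubble personalization (hypothetical data):
# the same pool of stories, ranked differently per user profile.

def rank_feed(items, profile):
    """Order items by how many tags they share with the user's interests."""
    return sorted(items, key=lambda item: -len(item["tags"] & profile))

items = [
    {"title": "Celebrity breaks the Internet", "tags": {"celebrity", "viral"}},
    {"title": "FCC rules on net neutrality",   "tags": {"policy", "tech"}},
    {"title": "Superhero movie trailer drops", "tags": {"film", "viral"}},
]

alice = {"policy", "tech"}             # hypothetical interest profiles
bob = {"celebrity", "film", "viral"}

print([i["title"] for i in rank_feed(items, alice)])
print([i["title"] for i in rank_feed(items, bob)])
```

Both users believe they are looking at "the Internet," yet each sees a feed curated from their own history, which is the point of the passage above.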

The Internet is a linguistic trope but also an ideology and even a business plan. If your job is to create content out of (mostly) nothing, then you can always turn to something/someone that “the Internet” is mad or excited about. And you don’t have to worry about alienating readers because “the Internet” is so general, so vast and all-encompassing, that it always has room. This form of writing is widely adaptable. Now it’s common to see stories where “Facebook” or “Twitter” stands in for the Internet, offering approval or judgment on the latest viral schlock. Choose your (anec)data carefully, and Twitter can tell any story you want.

We fall back on “the Internet” because it gives us a rhetorical life raft to cling to amidst an overwhelming tide of information, or a piece of sardonic shorthand to utter with a wink and a grimace, much like “never read the comments.” It also reflects a strange irony about today’s culture: despite being highly distributed, and despite offering an outlet for every subculture, niche interest, and political quirk, what we think of as the Internet often does feel rather uniform and monolithic.

This impression is partly based in fact; the tech and media industries are currently undergoing a kind of recentralization, exemplified by the rise of massive platforms like Facebook and recent mega-deals, such as Verizon buying AOL or Charter Communications (who?) snapping up Time Warner Cable. Attention is increasingly being manipulated and auctioned off by a handful of big conglomerates. The relegation of Twitter to also-ran in the social media sweepstakes—the loser to Facebook in the rush to industry monopoly—also reflects this centralization. That a company with hundreds of millions of users can seem like a failure only shows how bad the market is at apportioning value. (But there I go falling into abstractions again—as if there is anything called “the market.”)

“The Internet” is easy, a convenient reference point and an essential concept for web journalists tasked with surfacing monetizable content from this great informational morass. Digital culture, or writing about “what people are talking about on the Internet,” is considered its own beat now. But in the same way that someone born in the 1980s might not think of himself as a millennial—an arbitrary distinction crafted by demographers and marketers—a user of an online service is not necessarily from, or part of, the Internet. Even some of the subcultures often held up as part of the Internet are mostly notional. Is “Black Twitter” a specific, homogenous entity, as it’s so often described in news coverage? Or is it more something that people do, a set of social relations acted out by varying groups of mostly black Twitter users?

The more we write about what takes place online as if it occurred in some other world, the more we fail to relate this communication system, and everything that happens through it, to the society around us. To understand the Internet, we have to destroy it as an idea.

Politics as therapy: they want us to be just sick enough not to fight back


By Michael Richmond

Source: Transformation

10 October is World Mental Health Day. I used to be outgoing, but a descent into crushing depression left me housebound. After Occupy, I started asking: how does our social environment shape our psychology?

I used to buy the Sun newspaper. Not just to fit in with mates at secondary school but right into my first year at university. I knew there was something to be ashamed of in this filthy habit, armed as I was with my oft-deployed excuse: “I only buy it for the crossword and the football transfers.”

This was true. I never read the news. In general, I lived a remarkably apolitical existence. This was some feat considering I have a Jewish communist great grandfather, socialist grandparents, a union lawyer dad and an older brother who went through his Che Guevara phase at around fifteen.

I dropped out of university in early 2007, five months before Northern Rock hit the skids. Who knows whether the student experience would have politicised me? Perhaps the process would have been helped along by the backdrop of the approaching financial crisis.

But something else politicised me instead: a crushing, rapid descent into depression, social wilderness and personal crisis.

I experienced anxiety and depression as a hostile takeover of my life and sense of self. I went from being outgoing and sociable to being unable to talk to people or leave the house. This was within the space of a few days. There was no discernible cause.

It was quickly clear that I couldn’t continue at university and so I moved back into my parents’ house, where I have lived ever since.

Several years of isolation, suicidal thoughts and internal struggle followed. I remained unable to escape the confines of my bullying psyche, let alone my house.

Unable to work or study, have friendships, or experience joy, reading became my true love, my source of meaning, my attempt to make sense of what had happened to me. I obsessively read classic literature, history, philosophy, political economy – I had felt a profound sense of loss at not being able to finish university. I became determined that I would instead educate myself.

But an impenetrable sense of terror and despair continued to accompany me through my every waking and sleeping hour. I began to work my way through an impressive list of psychotropic medications and psychotherapies and eventually attended an NHS psychiatric day hospital for six months.

A “service user” within the psychiatric system gains a unique insight into, and practical education in, state discipline, as well as the lengths taken to enforce normativity. Having grown up white, straight, male and middle class, I was privileged to rarely, if ever, be told that I had to be something other than what I was.

I seldom encountered gross injustice or violence, blatant discrimination, or the kind of treatment faced from the earliest ages if you happen to be a person of colour, don’t fit the gender binary, or don’t adhere to accepted ideals of sexual behaviour.

Apart from being a non-religious Jew and encountering minimal levels of playground anti-Semitism, this was the first time I found myself in a situation of social and political ostracism (as well as a self-ostracism that proved just as powerful). I discovered for myself that the experience of the personal deeply informs the political.

Leaving the psychiatric day hospital to instead attend the asylum of Occupy the London Stock Exchange at St Paul’s Cathedral was in many ways a descent into further madness. Many “occupiers” were well acquainted with psychiatric services and medications – as well as using drugs not sanctioned by the state, but often taken for similar reasons.

Chaotic, naïve, and ultimately politically problematic and ineffectual, the initial occupied space did nevertheless open up the possibility for social and political interaction that is elsewhere absent from society.

I felt that I was in crisis, but also that the crisis was much bigger than just me. Getting involved in political praxis seemed to be the best way to channel what I was experiencing.

There is a lot to be said for the practice of “politics as therapy.”

The personal account or “journey” format often proves insufficient when attempting to understand what we do and why we do it. An analysis of political subjectivity is crucial. Shifts in capitalist expansion, social environment and class composition, technological development and the onset of crises tend to precipitate political transformation on an individual and collective basis.

The advent of the printing press or the collapse of the automotive industry in mid-west America, for example, are not external factors to people’s lives or isolated moments in history. Indeed, any such upheaval is bound to lead to transformative changes in the lives and political ideation of those experiencing it.

Our social environment shapes our psychology. We must consider how the policy, ideology and debate that surrounds “mental health” or madness is framed.

The individualisation of suffering is key to the prevailing ideology and discourse surrounding mental illness. This will often focus on a supposed misfiring of brain chemicals, a “cure” to which can be found in the form of pharmaceuticals – often prescribed by your GP before any contact with mental health services.

Attention may also turn to an individual’s lack of positive attitude, but this problem can be “fixed” by a six-week course of cognitive behavioural therapy. So much human suffering is pathologised and medicated when it is either “natural” (i.e grief or the general variety of mental experience) or is directly or indirectly linked to social, political and economic factors that remain absent from debate, let alone actively contested on this terrain.

Psychologist and author Bruce E Levine suggests that much of today’s intervention under the auspices of “mental health” is all too political.

“What better way to maintain the status quo,” Levine asks, “than to view inattention, anger, anxiety, and depression as biochemical problems of those who are mentally ill rather than normal reactions to an increasingly authoritarian society?”

He also argues that many potential activists and “natural anti-authoritarians” are prevented from opposing power: “Some activists lament how few anti-authoritarians there appear to be in the US. One reason could be that many natural anti-authoritarians are now psychopathologised and medicated before they achieve political consciousness of society’s most oppressive authorities.”

The historical origins of madness within western culture and how it became increasingly medicalised should not be forgotten. Michel Foucault exposed how the origins of “confinement” of the “insane” in asylums and workhouses were an integral part of the violent replacement of the feudal commons way of life with capitalist work discipline during the 16th and 17th centuries.

This process is in keeping with continual “primitive accumulation” akin to and contemporary with the conquest of the “New World” and the persecution of heretics and witches. Their land and means of reproduction were stolen and appropriated, while authorities continually oppressed and attempted to proletarianise them.

Initially, the “Great Confinement” saw the imprisonment of the old, the unemployed, the “criminal”, the “insane.”

As Foucault explains: “Before having the medical meaning we give it, or that at least we like to suppose it has, confinement was required by something quite different from any concern with curing the sick. What made it necessary was an imperative of labour. Our philanthropy prefers to recognise the signs of a benevolence toward sickness where there is only a condemnation of idleness.”

The conflation of pejoratives like lazy, sick, unemployed, and idle is more than familiar in today’s discourse surrounding welfare benefits and the imperatives of labour. And it is not just the DWP and Atos who pressure people back into work; NHS psychiatric services also seem to believe that it is work that sets you free.

The capitalist class would like us to be just sick enough not to fight back, but not so sick that we cannot work. The challenge for us is to find ways of organising and helping each other so that we can find adequate levels of social reproduction, care and support to give us a platform to engage in the therapy of class struggle.

 

Saturday Matinee: The Fuck-It Point


Synopsis by Savage Revival:

A film about civilization, why we should bring it down and why most civilized people don’t.

[THE FUCK-IT POINT]
‘When you have had enough. When you decide to take matters into your own hands and don’t care what’s going to happen to you. When you know that from now on you will resist with whatever tactic you think is most effective.’

Everything You Think You Know About Addiction is Wrong: Smashing the Drug War Paradigm


By Matt Agorist

Source: The Free Thought Project

“Overwhelming evidence points to not just the failure of the drug control regime to attain its stated goals but also the horrific unintended consequences of punitive and prohibitionist laws and policies,” states a study, published by the Global Commission on Drug Policy (GCDP).

“A new and improved global drug control regime is needed that better protects the health and safety of individuals and communities around the world,” the report says. “Harsh measures grounded in repressive ideologies must be replaced by more humane and effective policies shaped by scientific evidence, public health principles and human rights standards.”

This sudden onset of logic by political bodies across the globe is likely due to a realization of the cruel and inhumane way governments deal with addiction. Arbitrary substances are deemed “illegal,” and anyone caught in possession of them is then kidnapped and locked in a cage, or even killed.

The fact that this barbaric and downright immoral practice has been going on for so long speaks to the sheer ignorance of the state and its dependence upon violence to solve society’s ills.

The good news is that the drug war’s days are numbered, especially now that the issue has reached the White House, which is taking action, even if only symbolically. Evidence of this is everywhere. States are defying the federal government and refusing to lock people in cages for marijuana. Colorado and Washington served as a catalyst in a seemingly exponential awakening to the government’s immoral war.

Following suit were Oregon, D.C., and Alaska. Medical marijuana initiatives are becoming a constant part of legislative debates nationwide. We’ve even seen bills that would not only completely legalize marijuana but deregulate it entirely, like corn.

As more and more states refuse to kidnap and cage marijuana users, the drug war will continue to implode.

Knowledge plays a key role in this battle against addiction tyranny, and investigative journalist Johann Hari has some vital information to share. In a recent TEDx talk, Hari smashes the paradigm of the war on drugs.

What really causes addiction — to everything from cocaine to smart-phones? And how can we overcome it? Johann Hari has seen our current methods fail firsthand, as he has watched loved ones struggle to manage their addictions. He started to wonder why we treat addicts the way we do — and if there might be a better way. As he shares in this deeply personal talk, his questions took him around the world, and unearthed some surprising and hopeful ways of thinking about an age-old problem.
