Art and Dreaming: Realizing our Power to Co-Create Reality

By Ruth Gordon

Source: Reality Sandwich

“True creativity doesn’t just make things; it feeds what feeds life. In modern culture where people are no longer initiated, the spirit goes unfed. To be seen, the uninitiated create insane things, some destructive to life, to feel visible and powerful. These creations are touted as the real world. They are actually forms of untutored grief signaling a longing for the true reality of village togetherness.”

Martín Prechtel, Secrets of the Talking Jaguar, p.232

These words, from a book detailing Martín Prechtel’s initiation as a Mayan Shaman, accurately sum up our modern world. In the humanitarian, ecological, and political crises we are facing, we are witnessing the effects of a severe spiritual hunger.

We in the Western world are a deeply wounded culture; our Indigenous traditions long destroyed, our common land stolen by the rich and powerful, we often now desperately seek comfort by any means possible – over-consumption of food, of social media, of drugs and alcohol, of our natural resources.

This way of being is known among North American Indigenous people by the name of “wetiko,” or the “disease of the white man.” In the traditional Algonquin myth, the “wetiko” is a rapacious spirit who lives in the dark forest and possesses people, filling them with an insane compulsion to consume and destroy. This spirit makes monsters out of humans, filling them with an insatiable drive to devour everything that crosses their path.

Today, we see wetiko everywhere – in our cruel systems of governance that refuse sanctuary to refugees fleeing conflict, while at the same time escalating those very conflicts, mostly for the single purpose of the highest possible short-term profit, in the disintegration of human community through separating and atomizing social structures and the corresponding upsurge of loneliness and despair, and in the continued addiction to economic growth despite clear and repeated warnings that this kind of globalized industry is killing our planet.

Wetiko functions like a virus – it’s highly contagious and most of us are infected with it to some degree. It’s at the root of the human conflicts that often derail attempts to create alternative ways of life. It’s not enough to simply wish for a better world, it’s not even enough to work hard at creating one. We need to be ready to transform our entire mode of perception, to boil down our ways of thinking and being and reconstruct ourselves from scratch, with consciousness of the wetiko-ized habits we often fall into.

In Dispelling Wetiko: Breaking the Curse of Evil, Paul Levy writes:

“The evil that is incarnating in our world simultaneously beckons and potentially actualizes an expansion of consciousness, all depending on our recognition of what is being revealed. It is as if hidden in the darkness is a spark of light that has descended into its depths, and when recognized in the darkness, this light returns to its source.”

(Levy, 2013, p. 145)

Levy’s idea, that hidden in the poison of wetiko lies its own antidote, offers a healing reference for how to approach what Prechtel calls “untutored grief”: the fecund raw material that, if not used to grow something new, becomes destructive. However, when we are educated, or “initiated” into ways of transforming our grief, of understanding what the darkness in us wants to bring to light, we often find we have stumbled upon a store of incredible potentiality – an almost boundless source of energy and power that we can refocus towards healing, if we choose to do so. Our collective shadows are potential treasure, showing us wounds that need healing, the deep behavioral structures that create conflict, and pushing us to grow beyond our self-limiting patterns. We find the light by going through the dark, not by avoiding it. We can only unfold our full potential for love, beauty, and creativity by recognizing the life-force that’s bound up in our trauma. It’s releasing that closed-off and separated aspect of ourselves that will make us whole.

There’s an interesting symbolic parallel in the human compulsion to dig, mine and extract precious metal. If we instead dug into the fertile ground of our consciousness and our imagination rather than into the physical Earth, would we then finally be able to create a sustainable form of the “treasure” we long for – the “true reality of village togetherness,” so overcoming our addiction to exploiting the Earth?

Consciousness and Creativity: We are the Universe Observing Itself

In The Quantum Revelation: A Radical Synthesis of Science and Spirituality, Paul Levy describes how the science of quantum mechanics, although yet to really inform our everyday mode of being, could be a gateway for us: enabling us to understand the dreamlike nature of the world, to reconnect with the divine and infinitely creative aspects of existence. The central insight of quantum mechanics is that quantum particles respond differently depending on whether we are observing them or not. They are waves when we do not observe them and become particles when we do. This implies that quantum matter somehow knows when it is being observed, and subsequently changes both its form and behavior. This points to an astounding idea: that the world we perceive not only perceives us, but also manifests itself depending on our very mode of perception. Or, to put it another way, that the world we encounter depends on how we dream it up. It seems as if there are infinite possibilities of reality. The one that is activated depends only on our capacity to envision it, on the expansiveness and daring of our imagination.

Levy goes even further, asserting that we are living in a world that consciously responds to our consciousness, that, in fact, has created us for the purpose of understanding itself:

“[T]hrough us, the universe questions itself and tries out various answers on itself in an effort parallel to our own to decipher its own being. In the process of observing and reflecting upon our universe we are actually changing the universe’s idea of itself.”

(Chapter 5, “Cosmogenesis,” 2018)

If Levy is right, we are part of a cosmos that is self-creating and self-understanding. It is as if, through consciousness, the universe is craning its neck around to look at itself. We are its eyes, and its senses.

If we want to escape the hold of wetiko, to transition to a way of life that serves all beings, we need to value the power of our own creativity, and to understand that we are always creating the reality we experience, whether we are aware of it or not. The more conscious we are of our creative power, the more we can use it to dream up a world we want to live in; to orchestrate our lives with the same skill and precision as a highly trained conductor.

For this, we need to build a network of communities (as in Tamera’s Healing Biotopes Plan) where we can study the raw matter of our cultural grief, where we can learn to compost it and use it to grow new life, where we can discover how to create the “village togetherness” we all long for. We need spaces where we can experiment with and test out our powers of dreaming, encountering, understanding and interacting with the dreamlike nature of reality. We need spaces where we can build the self-confidence and courage that a “life artist” needs. We need public forums where our “life-art” is seen and honored. And all this needs to happen in a large enough group of people for our actions to hold weight, gather momentum and give courage to others.

As Paul Levy writes:

“The universe is a collectively shared dream that is too seemingly dense and solidified for any one person’s change in perspective to transform, but when a critical mass of people get into alignment and consciously put together what I call our “sacred power of dreaming” (our innate power to dream the universe into materialization), we can, literally, change the (waking) dream we are having.”

(Levy, Chapter 5: “Self-Excited Circuit,” 2018)

This is why it is so vital to build communities of trust – we will not be able to change the reality we are currently experiencing alone. However, by cooperating with others we will find the power to co-create paradise on Earth: a reality in which war and violence will be completely unthinkable, where we honor and respect the Earth as the sacred life giver it is, where we are able to fully use the creative potential that lies coiled within each of us. The field-creating power of a group of people can both activate our imaginative potential and provide the vessel in which to create the life we long for.

Waking Up to the Dreamlike Nature of Reality

Paying attention to our powers of dreaming is a simple first step towards comprehending the dreamlike nature of reality, as even those of us who believe that we are “not artistic” still dream each and every night, effortlessly creating symbols and stories that resonate through and inform us, if we take the time to remember and listen to them.

In the Tzutujil culture that Prechtel describes, families gathered each morning to share their dreams, which they saw as being the other half of waking life – just as real, and just as important:

“To a shaman a dream is not a creation of the mind, psyche or soul. It is the remembered fragment of the experience of one’s natural spirit in the twin world, the dreamworld … Although the landscape of dreams may seem different than the landscape of the awake world, it is actually the balanced opposite, reversed version, where our souls live out our bodies’ lives reenacted as if in a complex kind of mirror. Like the two opposing wings of a butterfly, the dreamworld is one wing and the awake world is the other wing. The butterfly must have both wings connected at the Heart in order to fly and function. Neither wing – dreams or waking – contains all of life. Real life occurs as a result of the interaction between the two. The life is the butterfly’s heart, and both dreaming and awake life are necessary to keep the heart alive.”

(Secrets of the Talking Jaguar, pp. 169–170)

As Prechtel goes on to say, “dreams read life back to us like a storyteller” and as such, can be excellent and often uncanny guides in life. I’m sure all of us have had the experience of a dream that seems wiser than we are, a dream that gives an answer to a problem, or that seems to foretell future events.

I’ve experienced personally how dreams can come into creative play with waking life. I once had a powerful dream in which a man, with whom in my waking life I was on the brink of falling in love, guided me as I climbed down a building. He was agile and knew the structure well, as it was his parents’ house, and he helped me down, showing me where to put my hands and feet. After I had this dream, I felt a deep certainty that I could trust this man. I understood that his role in my life right now was to accompany and guide me so that I could move forward, leaving behind the old structures of thought and being that no longer served me (structures he knew well, that he’d also “climbed down from” before). In my waking life, I had very little basis for such a deep trust at that point. I’d known this man a few months. And yet the indication of this dream turned out to be true. It encouraged me to trust him as a guide, and in turn, this faith allowed him (perhaps even prompted him) to actually play out this role in my waking life.

Was this dream reality not only informing but actually creating waking life? I think so. By believing in the certainty this dream instilled in me, I was able to act with faith and courage, which then allowed trust and intimacy to develop in waking reality.

For me, this is an example of those twin butterfly wings of the dreamworld and the waking world meeting at the heart’s center. Both dreamworld and waking life kept my heart alive at that time, nourishing and feeding it. These dual realities prompted me to be an artist: to act on my desires and impulses, to paint the world as I wished it to be.

Consciously Shaping Reality

The consequence of accepting our own creative powers and the dreamlike logic of existence is that we can begin to consciously shape reality. This is a deep responsibility – not something we can take lightly.

Wetiko disrupts our natural experience of unity with all life. But in truth, we are inextricably interrelated with all other living beings, in the same way that a whirlpool is both identifiably different and part of the river it forms in. This knowledge comes with an immense duty to everything else that exists.

Our every thought, our every action, has an effect on the whole, unavoidably altering everything else in some way, however subtle. We do not need to become megalomaniacs about this – we are no more and no less important than any other human, plant or animal being. But we must understand, if we are to overcome wetiko’s hold on us, that all life, and all activity, constantly shifts the pattern of the whole.

Once we realize this, our everyday lives become imbued with a new sense of purpose and responsibility. Knowing that what we think, say and do alters the whole, guiding a new form of reality into being in each and every moment, means considering carefully how we want to exist in this world. It’s much easier to believe that we are powerless; then we can escape any sense of responsibility. Victimhood is much more comfortable than agency. But if we want to realize the role human beings can play in global transformation, we must be willing to step into agency. We must understand that our inherent creative powers are a divine gift. We’ve been given the capacity to make drastic alterations to the world – in the natural environment, in human society, perhaps even in outer space. Now we must choose whether we want to use these gifts in service of life or continue using them against it, and so push ourselves over the brink of the abyss.

Let’s choose to use the wetiko virus rampaging through our human system to actualize an expansion of consciousness, to shine a light deep into the roots of our “untutored grief,” and begin to dream into our potential as deeply creative beings with the ability to create the reality of togetherness that we all long for.

Cyberpunk is Now and No One Knows What to Do With It

By Pattern Theory

Source: Modern Mythology

Cyberpunk broke science fiction. Creeping in alongside the commercialization of the internet, it extrapolated the corruption and dysfunction of its present into a brutal and interconnected future that remained just a heartbeat away. Cyberpunk had an attitude that refused to be tamed, dressed in a style without comparison. Its resurgence shows that little has changed since its inception, and that’s left cyberpunk incapable of discussing our future.

Ghost in the Shell got the live-action treatment in 2017, a problematic remake of the 1995 adaptation. Some praised its art direction for increasing the visual fidelity of retrofuture anime cityscapes, but the general consensus was that the story failed to apply care and consideration towards human brains and synthetic bodies as Mamoru Oshii had more than two decades before. A few months later came Blade Runner 2049, a sequel to the cyberpunk classic. Critics and fans praised it for high production values, sincere artistic effort, and meticulous direction. Yet something had gone wrong. Director Denis Villeneuve couldn’t shake the feeling that he was making a period movie, not one about the future.

Enough has changed since the 1980s that cyberpunk needs reinvention. New aesthetics. An expanded vocabulary. Code 46 managed this years ago. It rejects a fetish for all things Japanese and embraces China’s economic dominance. Conversations begin in English and are soon peppered with Mandarin and Spanish. Life takes place at night to avoid dangerous, unfiltered sunlight. Corporations guide government decisions. Genetics determine freedom of movement and interaction. Climate refugees beg to leave their freeway pastures for the safety of cities.

Code 46 is cyberpunk as seen from 2003, a logical future that is now also outdated.

If Blade Runner established the look, Neuromancer defined cyberpunk’s voice. William Gibson’s debut novel was ahead of the curve by acknowledging the personal computer as a disruptive force when the Cold War was at its most threatening. “Lowlife and high tech” meant the Magnetic Dog Sisters headlining some creep joint across the street from a capsule hotel where console cowboys rip off zaibatsus with their Ono Sendai Cyberdeck. But Gibson’s view of the future would be incomplete without an absolute distrust of Reaganism:

“If I were to put together a truly essential thank-you list for the people who most made it possible for me to write my first six novels, I’d have to owe as much to Ronald Reagan as to Bill Gates or Lou Reed. Reagan’s presidency put the grit in my dystopia. His presidency was the fresh kitty litter I spread for utterly crucial traction on the icy driveway of uncharted futurity. His smile was the nightmare in my back pocket.” — William Gibson

“Fragments of a Hologram Rose” to Mona Lisa Overdrive is a decade of creative labor that was “tired of America-as-the-future, the world as a white monoculture.” The Sprawl is a cyberpunk trilogy where military superpowers failed and technology gave Japan leadership of the global village. Then Gibson wrote Virtual Light and readers witnessed extreme inequality shove the middle class into the gig economy as corporations schemed to profit off natural disasters with proprietary technology.

Gibson knew the sci-fi he didn’t care for would absorb cyberpunk and tame its “dissident influence”, so the genre could remain unchanged. “Punk” is the go-to suffix for emerging subgenres that want to appear subversive while posing a threat to nothing and no one. It’s how “hopepunk” becomes a thing. But to appreciate cyberpunk’s assimilation, look at how it’s presented sincerely.

CD Projekt Red (CDPR), known for the Witcher game series, has spent six years developing what’s arguably the most anticipated video game of the moment, Cyberpunk 2077. Like Gibson, Mike Pondsmith, creator of the original “pen-n-paper” RPG and collaborator on this adaptation of his work, has had his writing absorbed by mainstream sci-fi. CDPR could survive on that 31-year legacy, but they insist they’re taking their time with Cyberpunk 2077 to craft an experience with a distinct political identity that somehow allows players to remain apolitical. In a way this is reflective of CDPR’s reputation as a quality-driven business that’s pro-consumer, but has driven talent away by demanding they work excessive hours and promoting a hostile attitude towards unions. This crunch culture is a problem across the industry.

We’ll soon see how Cyberpunk 2077 has developed. What we can infer from its design choices, like giving protagonist V a high-collar jacket seen on the cover of the 2nd edition game book from 1990, is that Cyberpunk 2077 will be familiar. Altered Carbon and Ready Player One share this problem. Altered Carbon is so derivative of first-wave cyberpunk it’s easy to forget it’s based on a novel from 2002. Ready Player One at least has the courtesy to be shameless in its love of pop culture, proud to proclaim that nothing is more celebrated today than our participation in media franchises without ever considering how that might be a problem.

What’s being suggested, intentionally or not, is that contemporary reality has avoided the machinations of the powerful at a time when technology is wondrous, amusing, and prolific. If only we were so lucky.

238 cities spent more than a year lobbying Amazon, one of two $1 trillion corporations in existence, for the privilege of hosting its new office. In November it was announced that Amazon would expand to Crystal City, Virginia and Long Island City, Queens. Plenty of New Yorkers are incensed that the world’s largest online marketplace will get $3 billion in subsidies, tax breaks, and grants to further disrupt a housing market that takes more from them than any city should allow. Some Amazon employees were so excited to relocate they made down payments on their new homes before the decision went public, telling real estate developers to get this corner of New York ready for a few thousand transplants. But what of the people already there?

Long Island City is home to the Queensbridge Houses, the largest housing project in the US. Built in 1939, the development houses more than 6,000 people with an average income of $16,000. That’s far below the $54,000 for Queens residents overall. But neither group is anywhere near the average salary for the 25,000 employees Amazon will bring with them, which will exceed $150,000. How many of those positions will be filled by locals? How many will come from Queensbridge?

Over 800 languages are spoken in Queens, making it the most linguistically diverse place in the world. Those diverse speakers spend over 30% of their income on rent. They risk being priced out of their neighborhoods. Some will be forced out of the city. Has Governor Cuomo considered the threat this deal poses to people’s homes? Has Mayor de Blasio prepared for the inevitable drift to other boroughs once property values spike? Looking at Seattle and San Francisco, there’s no reason to expect local governments to be proactive. So New Yorkers have taken up the fight on their own.

Amazon boss Jeff Bezos toyed with these politicians. He floated the idea that any city could become the next Silicon Valley and they believed him. They begged for his recognition, handed over citizen data, and took part in the $100 billion ritual of subsidizing tech companies.

It was all for nothing. Crystal City is a 20-minute drive from Bezos’ house in Washington DC, where Amazon continues to increase its spending on lobbyists. That’ll seem like a long commute compared to the helicopter ride from Long Island City, the helipad for which is subsidized by the city, to Manhattan, the financial and advertising capital of the world, where Bezos owns four more houses.

The auction for Bezos’ favor was a farce. New York and Virginia give him regular access to people with decision-making power, invaluable data, and institutions that are sure to expand his empire. These cities were always the only serious options.

Amazon’s plans read like the start of a corporate republic, a cyberpunk trope inspired by company towns. Employers were landlords, retailers, and even moral authorities to workforces too in debt to quit. Many had law enforcement and militias to call on in addition to the private security companies they hired to break labor strikes, investigate attempts at unionization, and maintain a sense of order that resulted in massacres like Ludlow, Colorado.

Amazon is known for labor abuses: monitoring and tracking speed and efficiency in warehouses where workers skip bathroom breaks and employees have collapsed from heat exhaustion. They sell unregulated facial recognition services to police departments, knowing it misidentifies subjects because of inherent design bias. Companies with a history of privacy abuses have unfettered access to their security devices. They control about half of all e-commerce in the US and, as Gizmodo’s Kashmir Hill found out, it is impossible to live our lives without encountering Amazon Web Services.

It doesn’t take a creative mind to imagine similar exposition being attributed to corporate villains like Cayman Global or Tai Yong Medical.

Rewarding corporations for their bad behavior is just one way the world resembles a fictive dystopia. We also have to face rapid ecological and institutional decay that fractionally adjusts our confidence in stability, feeding a persistent situational anxiety. That should make for broader and bolder conversations about the future, and a few artists have managed to do that.

Keiichi Matsuda is the designer and director behind Hyper-Reality, a short film that portrays augmented reality as a fever dream that influences consumption, and shows how freeing and frightening it is to be cut off from that network. Matsuda’s short film got him an invitation to the World Economic Forum in Davos to “speak truth to power.” What Matsuda witnessed were executives and billionaires pledging responsibility with t-shirts and sustainability, while simultaneously destroying the environment, as an audience of their peers and the press nodded and applauded “this brazen hypocrisy.” So Matsuda took a stanchion to his own installation.

Independence means Matsuda gets to decide how to talk about technology and capitalism, and how to separate his art and business. It also means smaller audiences and fewer productions.

Sam Esmail used a more visible platform to “bring cyberpunk to TV” with Mr. Robot. Like Gibson’s Pattern Recognition, it’s cyberpunk retooled for the present — post-cyberpunk. Esmail never hesitates to place our villains in Mr. Robot. Enron is an influence on the logo design and tactics of Evil Corp. Google, Verizon, and Facebook are called out for their complicity with the federal government in exposing customer data. AT&T’s Long Lines building, an NSA listening post since the 1970s, plays the role of a corporate data hub that reaches across the country. Even filming locations serve as commentary.

An anti-capitalist slant runs through Mr. Robot, exposing the American dream as a lie and our concept of meritocracy as a tool to protect the oligarchy, presenting hackers, in direct contact with a world of self-isolation and exploitation, as those who dare to hope for a future shaped by people rather than commerce. And Esmail somehow manages this without interference from NBC.

Blade Runner will get more life as an anime. Cowboy Bebop is joining Battle Angel Alita in live action. Altered Carbon is in the process of slipping into a new sleeve. There’s no shortage of revivals, remakes, and rehashing of cyberpunk’s past on the way. They’ll get bigger audiences than a short film about submitting to algorithms. More sites will discuss their pros and cons than a mobile tie-in that name-drops Peter Kropotkin and Maria Nikiforova. But in being descriptive and prescriptive, moving to the future and looking for sure footing in the accelerated present, Matsuda’s and Esmail’s work reminds us that cyberpunk needs to be more than just repeating what’s already been said about yuppies, Billy Idol, and the Apple IIc.

We live at a time where 3D printing is so accessible refugees can obtain prostheses as part of basic aid. People forced to migrate because of an iceless Arctic will rely on that assistance. Or we could lower temperatures and slow climate change by spraying the atmosphere with sulfate, an option that might disrupt advertising in low orbit. Social credit systems are bringing oppressive governments together. Going cashless is altering our expectations of others. Young people earn so little they’re leveraging nude selfies to extend meager lines of credit. Productivity and constant notifications are enough to drive some into a locked room, away from anything with an internet connection. Deepfakes deny women privacy, compromise their identity, and obliterate any sense of safety in exchange for porn. Online communities are refining that same technology, making false video convincing, threatening our sense of reality. Researchers can keep our memories alive in chat bots distilled from social media, but the rich will outlive us all by transfusing bags of teenage blood purchased through PayPal.

In a world that increasingly feels like science fiction it’s important to remind ourselves that writing about the future is writing about the present. Artists worthy of an audience should be unable to look at the embarrassment of inspiration around them and refuse the chance to say something new.

What Are We Working For? The Economic System is a Labyrinthine Trap

By Edward Curtin

Source: Global Research

“One also knows from his letters that nothing appeared more sacred to Van Gogh than work.” – John Berger, “Vincent Van Gogh,” Portraits

Ever since I was a young boy, I have wondered why people do the kinds of work they do.  I sensed early on that the economic system was a labyrinthine trap devised to imprison people in work they hated but needed for survival.  It seemed like common sense to a child when you simply looked and listened to the adults around you.  Karl Marx wasn’t necessary for understanding the nature of alienated labor; hearing adults declaim “Thank God It’s Friday” spoke volumes.

In my Bronx working class neighborhood I saw people streaming to the subway in the mornings for their rides “into the city” and their forlorn trundles home in the evenings. It depressed me.  Yet I knew the goal was to “make it” and move away as one moved “up,” something that many did.  I wondered why, when some people had options, they rarely considered the moral nature of the jobs they pursued.  And why did they not also consider the cost in life (time) lost in their occupations?  Were money, status, and security the deciding factors in their choices?  Was living reserved for weekends and vacations?

I gradually realized that some people, by dint of family encouragement and schooling, had opportunities that others never received.  For the unlucky ones, work would remain a life of toil and woe in which the search for meaning in their jobs was often elusive.  Studs Terkel, in the introduction to his wonderful book of interviews, Working: People Talk About What They Do all Day and How They Feel About What They Do, puts it this way:

This book, being about work, is, by its very nature, about violence – to the spirit as well as to the body.  It is about ulcers as well as accidents, about shouting matches as well as fistfights, about nervous breakdowns as well as kicking the dog around.  It is, above all (or beneath all), about daily humiliations. To survive the day is triumph enough for the walking wounded among the great many of us.

Those words were confirmed for me when in the summer between high school and college I got a job through a relative’s auspices as a clerk for General Motors in Manhattan.  I dreaded taking it for the thought of being cooped up for the first time in an office building while a summer of my youth passed me by, but the money was too good to turn down (always the bait), and I wanted to save as much as possible for college spending money.  So I bought a summer suit and joined the long line of trudgers going to and fro, down and up and out of the underground, adjusting our eyes to the darkness and light.

It was a summer from hell. My boredom was so intense it felt like solitary confinement.  How, I kept wondering, can people do this?  Yet for me it was temporary; for the others it was a life sentence.  But if this were life, I thought, it was a living death.  All my co-workers looked forward to the mid-morning coffee wagon and lunch with a desperation so intense it was palpable.  And then, as the minutes ticked away to 5 P.M., the agitated twitching that preceded the mad rush to the elevators seemed to synchronize with the clock’s movements.  We’re out of here!

On my last day, I was eating my lunch on a park bench in Central Park when a bird shit on my suit jacket.  The stain was apt, for I felt I had spent my days defiling my true self, and so I resolved never to spend another day of my life working in an office building in a suit for a pernicious corporation, a resolution I have kept.

“An angel is not far from someone who is sad,” says Vincent Van Gogh in the new film, At Eternity’s Gate. For some reason, recently hearing these words in the darkened theater where I was almost alone brought me back to that summer and the sadness that hung around all the people I worked with.  I hoped Van Gogh was right and an angel visited them from time to time. Most of them had no options.

The painter Julian Schnabel’s moving picture (moving on many levels since the film shakes and moves with its hand-held camera work and draws you into the act of drawing and painting that was Van Gogh’s work) is a meditation on work.  It asks the questions: What is work?  What is work for?  What is life for?  Why paint? What does it mean to live?  Why do you do what you do?  Are you living or are you dead?  What are you seeking through your work?

For Vincent the answer was simple: reality.  But reality is not given to us and is far from simple; we must create it in acts that penetrate the screens of clichés that wall us off from it.  As John Berger writes,

One is taught to oppose the real to the imaginary, as though the first were always at hand and the second, distant, far away.  This opposition is false.  Events are always to hand.  But the coherence of these events – which is what one means by reality – is an imaginative construction.  Reality always lies beyond – and this is as true for materialists as for idealists. For Plato, for Marx.  Reality, however one interprets it, lies beyond a screen of clichés.

These screens serve to protect the interests of the ruling classes, who devise ways to keep regular people from seeing the reality of their condition.  Yet while working can be a trap, it can also be a means of escape. For Vincent working was the way.  For him work was not a noun but a verb. He drew and he painted as he does in this film to “make people feel what it is to feel alive.”  To be alive is to act, to paint, to write.  He tells his friend Gauguin that there’s a reason it’s called the “act of painting,” the “stroke of genius.”  For him painting is living and living is painting.

The actual paintings that he made are almost beside the point, as all creative artists know too well. It is the doing wherein living is found. The completed canvas, essay, or book is what is done.  They are nouns, still lifes, just as Van Gogh’s paintings have become commodities in the years since his death, dead things to be bought and sold by the rich in a culture of death where they can be hung in mausoleums isolated from the living. It is appropriate that the film ends with Vincent very still in his coffin as “viewers” pass him by and avidly now desire the paintings encircling the room, paintings they once rejected. The man has become a has-been and the funeral parlor the museum.

“Without painting I can’t live,” he says earlier.  He didn’t say without his paintings.

“God gave me the gift for painting,” he said.  “It’s the only gift he gave me.  I am a born painter.”  But his gift has begotten gifts that are still-births that do not circulate and live and breathe to encourage people to find work that will not, “by its very nature, [be] about violence,” as Terkel said. His works, like people, have become commodities, brands to be bought and sold in a world where the accumulation of wealth is accomplished by the infliction of pain, suffering, and death on untold numbers of victims, invisible victims that allow the wealthy to maintain their bad-faith innocence. This is often achieved in the veiled shadows of intermediaries such as stock brokers, tax consultants, and financial managers; in the liberal and conservative boardrooms of mega-corporations or law offices; and in the planning sessions of the world’s great museums. Like drone killings that distance the killers from their victims, this wealth accumulation allows the wealthy to pretend they are on the side of the angels.  It’s called success, and everyone is innocent as they sing, “Hi Ho, Hi Ho, it’s off to work we go.”

“It is not enough to tell me you worked hard to get your gold,” said Henry Thoreau, Van Gogh’s soul-mate. “So does the Devil work hard.”

A few years ago there was a major exhibit of Van Gogh’s nature paintings at the Clark Museum in Williamstown, Massachusetts – “Van Gogh and Nature” – that aptly symbolized Van Gogh in his coffin.  The paintings were exhibited encased in ornate gold frames. Van Gogh in gold. Just perfect.  I am reminded of a scene in At Eternity’s Gate where Vincent and Gauguin are talking about the need for a creative revolution – which we sure as hell need – and the two friends stand side by side with backs to the camera and piss into the wind.

But pseudo-innocence dies hard.  Not long ago I was sitting in a breakfast room in a bed-and-breakfast in Houston, Texas, sipping coffee and musing myself awake.  Two men came in and the three of us got to talking.  As people like to say, they were nice guys.  Very pleasant and talkative, in Houston on business. Normal Americans.  Stressed.  Both were about fifty years old with wives and children.

One sold drugs for one of the largest pharmaceutical companies, a firm known for its very popular anti-depressant drug and its aggressive sales pitches.  He travelled a triangular route from Corpus Christi to Austin to Houston and back again, hawking his wares.  He spoke about his work as being very lucrative and posing no ethical dilemmas.  There were so many depressed people in need of his company’s drugs, he said, as if the causes of their depression had nothing to do with inequality and the sorry state of the country as the rich rip off everyone else.  I thought of recommending a book to him – Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare by Peter Gøtzsche – but held my tongue, appreciative as I was of the small but tasteful fare we were being served and not wishing to cause my companions dyspepsia.  This guy seemed to be trying to convince me of the ethical nature of the way he panned gold, while I kept thinking of that quote attributed to Mark Twain: “Denial ain’t just a river in Egypt.”

The other guy, originally from a small town in Nebraska and now living in Baton Rouge, was a former medevac helicopter pilot who had served in the first Gulf War.  He worked in finance for an equally large oil company.  His attitude was a bit different, and he seemed sheepishly guilty about his work with this company as he told me how shocked he was the first time he saw so many oil, gas, and chemical plants lining the Mississippi River from Baton Rouge to New Orleans, and all the oil and chemicals being shipped down the river – so many toxins that they reminded him of the toxic black smoke rising from the bombed oil wells in Iraq.  Something about it all left him uneasy, but he too said he made a very good “living” and that his wife also worked for the oil company back home.

My childish thought recurred: when people have options, why do they not choose ethical work that makes the world more beautiful and just?  Why are money and so-called success always the goal?

Having seen At Eternity’s Gate, I now see what Van Gogh was trying to tell us and Julian Schnabel conveys through this moving picture.  I see why these two perfectly normal guys I was breaking bread with in Houston are unable to penetrate the screen that lies between them and reality.  They have never developed the imaginative tools to go beyond normal modes of perception and conception. Or perhaps they lack the faith to dare, to see the futility and violence in what they are working for and what their companies’ products are doing to the world.  They think of themselves as hard at work, travelling hither and yon, doing their calculations, “making their living,” and collecting their pay.  It’s their work that has a payoff in gold, but it’s not working in the sense that painting was for Vincent, a way beyond the screen.  They are mesmerized by the spectacle, as are so many Americans.  Their jobs are perfectly logical and allow them a feeling of calm and control.

But Vincent, responding to Gauguin, a former stock broker, when he urged him to paint slowly and methodically, said, “I need to be out of control. I don’t want to calm down.”  He knew that to be fully alive was to be vulnerable, to not hold back, to always be slipping away, and to be threatened with annihilation at any moment. When painting, he was intoxicated with a creative joy that belies the popular image of him as always depressed.  “I find joy in sorrow,” he said, echoing in a paradoxical way Albert Camus, who said, “I have always felt that I lived on the high seas, threatened, at the heart of a royal happiness.”   Both rebels, one in paint, the other in words: “I rebel: therefore we exist,” was how Camus put it, expressing the human solidarity that is fundamental to genuine work in our ephemeral world. Both nostalgic in the present for the future, creating freedom through vision and disclosing the way for others.

And although my breakfast companions felt safe in their calmness on this side of the screen, it was an illusion. The only really calm ones are corpses. And perhaps that’s why when you look around, as I did as a child, you see so many of the living dead carrying on as normal.

“I paint to stop thinking and feel I am a part of everything inside and outside me,” says Vincent, a self-described exile and pilgrim.

If we could make working a form of such painting, a path to human solidarity because it is a mode of rebelling, what a wonderful world it might be.

That, I believe, is what working is for.

Our age of horror

In this febrile cultural moment filled with fear of the Other, horror has achieved the status of true art

By M M Owen

Source: aeon

In Ray Bradbury’s horror short story, ‘The Next in Line’ (1955), a woman visits the catacombs in Guanajuato, Mexico. Mummified bodies line the walls. Lying awake the next night, haunted by her macabre tour, she finds that her heart ‘was a bellows forever blowing upon a little coal of fear … an ingrown light which her inner eyes stared upon with unwanting fascination’.

Our present era is one in which the heart of culture is blowing hard upon a coal of fear, and the fascination is everywhere. By popular consent, horror has been experiencing what critics feel obliged to label a ‘golden age’. In terms of ticket sales, 2017 was the biggest year in the history of horror cinema, and in 2018, Hereditary and A Quiet Place have been record-breaking successes. In both the United States and the United Kingdom, sales of horror literature are up year over year – an uptick that industry folk partly attribute to the wild popularity of Netflix’s Stranger Things (2016-). And the success isn’t merely commercial. Traditionally a rather maligned genre, these days horror is basking in the glow of critical respectability. As The New York Times remarked this June, horror ‘has never been more bankable and celebrated than it is right now’.

As any historian of the genre will tell you, horror has had previous golden ages. Perhaps ours is just a random quirk of popular taste. But perhaps not. Perhaps we are intoxicated by horror today because the genre is serving a function that others aren’t. Can’t. Horror’s roots run deep, but they twist themselves into forms very modern. The imagination’s conversion of fear into art offers a dark and piercing mirror.

My earliest horror memory is Stay Out of the Basement (1992), one of R L Stine’s Goosebumps series of young adult novels. In the story, a botanist accidentally creates a hybrid plant clone of himself. When the clone comes to life, it tries to steal its human original’s life. The botanist’s children unmask the imposter, and in a mess of green blood and plant mush, the clone is felled with an axe. The rescued father disposes of the rest of the mutating plant matter, and the family is all set to live happily ever after. But at the very end, the daughter is standing in the garden and feels a small plant nudging her ankle. The plant whispers to her: ‘Please – help me. I’m your father.’ Stay Out of the Basement is no masterpiece, but I was young, and it struck me cold.

Horror is what anthropologists call biocultural. It is about fears we carry because we are primates with a certain evolved biology: the corruption of the flesh, the loss of our offspring. It is also about fears unique to our sociocultural moment: the potential danger of genetically modifying plants. The first type of fear is universal; the second is more flexible and contextual. Their cold currents meet where all great art does its work, down among the bottomless caves on the seabed of consciousness. Lurking here, a vision of myself paralysed in the dirt, invisible to those I love.

Horror has always been with us. Prehistoric cave paintings are rife with the animal-human hybrids that remain a motif of horror to this day. Every folktale tradition on Earth contains tales of malevolent creatures, petrifying ghosts and graphic violence. The classics are frequently horrifying: in Homer’s Odyssey, when the Cyclops encounters Odysseus’ men, the monster eats them, ‘entrails, flesh and the marrowy bones alike’.

We have always told horror stories, and we always will. Because horror is an artistic expression of an ontological truth: we are creatures formed in no small part by the things to which we are averse. Fear is a base ingredient of consciousness, partaking of brain circuits that are so ancient humans share them with all vertebrate lifeforms. As the neuroscientist Antonio Damasio has described, the whole weird soup of human feeling emerged as a result of our beginning to process whether to ‘approach or avoid … certain places or things or creatures’. Our cognition absorbs reality as a vast spectrum of potential encounters, and horror alchemises the dark end into art.

Thus, evolutionary analyses of horror identify monsters as the genre’s defining feature. As the philosopher Stephen T Asma puts it, ‘during the formation of the human brain, the fear of being grabbed by sharp claws, dragged into a dark hole and eaten alive was not an abstraction’. For a quarter of a million years – the vast majority of Homo sapiens’ existence as a species – we lived outdoors, with giant hyenas, saber-toothed cats and other carnivores representing a real threat to life. That other ancient health risk, the biological pathogen, manifests itself in the tendency of monsters to be not only violent but also disgusting – feral, oozing blood and saliva, baring their infectious teeth. From the evolutionary perspective, horror’s vast monstrous menagerie echoes with Paleolithic peril.

Historically, horror’s willingness to play directly to our evolved physiology has seen it earn a low reputation. Western culture was built on a vision of ourselves as above the beasts, above the beastliness of acquiescing helplessly to the demands of the body. But horror can bypass all intellect, extract from us an embarrassingly animalistic response. The skittish physicality of the ‘jump scare’ is a manipulation of what biologists call the startle response, present in all mammals. And cruder horror always contains that other ghastly reminder of our physicality: gore. Gore disgusts us, and the way that gore can be darkly compelling to us disgusts us. Whenever horror is criticised, it is criticised for staging a dark carnival of physicality. Perhaps the only sort of media we moralise more than we do horror is that other mainliner of bodily response, pornography.

Horror’s historical ghettoisation has meant that weightier, smarter horror reliably gets labelled as something else. The finest films of our current golden age have been dubbed ‘elevated horror’ and ‘post-horror’. In literary circles, works of horror seen as sufficiently cerebral get relabelled ‘Gothic’. It’s certainly true that great horror is always about more than gore. But we should be careful not to gentrify the genre by cleansing it of everything but the philosophy.

There are always beings that want to bite us, scratch us, puncture our fragile flesh. There is the terrible old coercion of brute, muscular force, the lethal threat of contagion and infection. There is darkness, disorientation. And looming explicitly or symbolically in all horror is that vast shadow that the anthropologist Ernest Becker said ‘haunts the human animal like nothing else’: death.

‘And he that sat on the cloud thrust in his sickle on the Earth; and the Earth was reaped’. Witness the machinations of that famous slasher, God (Revelation 14:16). Horror encodes the story of our long primate journey, but these biological foundations support the towering edifice of culture. And for millennia, horror merged with our oldest cultural phenomena: religion and folklore. In fact, for most of its history, horror wasn’t really art, as we tend to understand that term today. It certainly wasn’t fiction. Prior to about 1750, in our pivot toward the Enlightenment, the best horror stories can all be found within theology and lore. In Europe, for generations Satan was every bit as petrifying as Pennywise, the murderous clown of Stephen King’s It (1986). Demonic forces were terrifyingly real; in the Bible, Jesus spends almost as much time performing exorcisms as he does healing people. There were widespread societal panics about the threat of werewolves and vampires, and tens of thousands of women were murdered as witches.

This isn’t to judge the credulity of bygone peoples. But the reason that horror – unlike, say, tragedy, or comedy, or the epic – didn’t exist as an artistic genre until relatively recently is that its deep history is fundamentally pre-scientific. Nothing in the annals of art is as scary as what you’ll find in bygone worldviews. Who needs make-believe scares when everyone you know is awaiting the day of judgment, at which point an angel will sweep a sickle across the Earth and make the blood run for hundreds of miles? It is no coincidence that the Gothic – horror’s regal antecedent – emerged precisely at the moment when lots of people began to believe that God really might be dead. Modern horror is in part the story of what happens when our threatened minds shed a theology. Once holy texts can no longer entirely encode the terrors of being, horror enters fully the arena of art.

However, the old ways cast a long shadow. In the pantheon of genres, horror remains an adolescent, and it has a sort of adolescent relationship with its past: half rebellion, half dependency. On the one hand, more than any other genre, horror loves to thematise the coldest sorts of atheism. ‘All my tales,’ said horror grandee H P Lovecraft, ‘are based on the fundamental premise that common human laws and interests and emotions have no validity or significance in the vast cosmos-at-large.’ In The Silence of the Lambs (1988), amused by what he sees as clear evidence for the absence of any benevolent deity, the charmingly evil Hannibal Lecter ‘collects church collapses, recreationally’.

On the other hand, horror is marked everywhere by the centuries it spent wedded to otherworldly belief systems. In 2018’s biggest horror movie, Hereditary, an obscure figure from demonology possesses a teenage boy and wreaks death upon his family. Much Japanese horror features yūrei, tormented and enraged spirits denied a smooth passage to the afterlife. Horror was a dark, mutant child of the Enlightenment, and yet it can’t shake its pre-scientific genes. Its penchant for lurid supernaturalism is a big reason why, when it fails, it can so easily seem puerile. The modern, skeptical mind whispers: This is just silly. Haven’t we outgrown all this? On Halloween – a celebration of horror’s pre-artistic forms – children are meant to have the most fun.

Why does horror have this double-edged relationship with its religious and spiritual heritage? Perhaps because, for all its modernity, the sheer scale of theological enquiry still reflects the genre’s ambition. As leading horror author Joe Hill told me, horror is what we turn to when we want to explore ‘the biggest and darkest questions’. And even demoted from dogma to metaphor, the old myths offer a fine way to channel those grand subjects of which horror is so fond: good versus evil, the tribulations of the soul, the end of days. Even though it requires our suspension of disbelief, the paranormal presents us with the very real prospect of brittle reason splintering against the mystery of reality.

‘There’s a sense of uncertainty and potential wrongness underlying most of human existence,’ the Canadian author Gemma Files told me. All of humankind’s great mythic narratives know this, and horror doesn’t let us forget it. At the core of the numinous impulse – that oceanic feeling in which horror was submerged for so many centuries – is the strange certainty that reality is unpredictable and inscrutable, that certain things will forever resist the reach of the human mind. Horror will always share in this sense. It may have fallen from heaven, but it still isn’t entirely of this Earth. Cormac McCarthy’s The Road (2006) centres on a father and son as they wander across a blasted, post-apocalyptic America. The horrors are everywhere: they discover, chained in a basement, ‘a man with his legs gone to the hip and the stumps of them blackened and burnt’. The man is being harvested by cannibals, piece by piece. It is a bleak, fallen world, where the memory of a time when trout swam in the streams shimmers with celestial grace. When the father meets an elderly man, he tells him: ‘There is no God and we are his prophets.’

And so what of today? Horror reverberates with fears Paleolithic and God-fearing, but it is also always reacting to its present moment. And it seems reasonable to perceive any swell in the production and popularity of horror – any dawning of a new golden age – as the expression of a culture that is afraid. ‘In anxious times,’ David Bruckner, director of The Ritual (2017) and other horror movies, told me, ‘people are more likely to turn to horror. If you have an uneasy night at the movie theatre, you are sort of answering the call of your times.’

‘I think we’re living in a nightmare, basically.’ So said horror legend Ramsey Campbell, when I asked him why he thinks horror is flourishing right now. This is one of those things that cooler heads will say is your mind deceiving you. By many objective measures, for many people, life today is better than ever. But horror has never been too worried about culture’s long-term trajectory; it is always fixated on how it could all go badly wrong, any minute now. Horror is steeped in worry; its narratives frequently open with the calm before a terrible storm. And every person connected with horror that I interviewed smelled doom on the breeze.

Horror has always made good use of our deep aversion to what Lovecraft called ‘the oldest and strongest kind of fear’: the unknown. This is one of the ways in which horror (like the folktale) can display a sort of archetypal conservatism. In general terms, the best way to survive a horror setting is to be supremely, boringly sensible: don’t talk to strangers, don’t stay the night in a foreign town, don’t go to the aid of anyone who looks sick, don’t go into that crumbling old building. If a very attractive stranger tries to seduce you, it is almost definitely a trap. Respect tradition, do not commit sacrilege, listen to the advice of elderly locals. At the heart of a lot of horror is a conservative craving for the predictable and the known. The unpleasant atonal dissonance you’ll hear in every horror score reflects, through the collapse of harmony, the disintegration of familiar and comforting patterns out there in the world.

Horror, then, thrives on discombobulation. And today, the discombobulation is everywhere. The causes of the anxiety are scattershot, and you already know them. There are those scientific breakthroughs of the sort that get Silicon Valley execs psyched, but which many others find deeply, opaquely perturbing. Take artificial intelligence, whose rise has seen more and more science fiction turn horrific: ‘One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa,’ says Nathan, a central character in Ex Machina (2015). And even if the robots don’t vanquish poor old Homo sapiens, other sorts of scientific experimentation might. One of the great horror trends of the 21st century has been the zombie, and in all of the best works of zombie fiction, the immediate cause of the outbreak is the same: biological experimentation gone horribly wrong. The zombie is the gnashing, lunging embodiment of that modern terror, the global pandemic. People might not fear Satan anymore, but they sure as hell fear Ebola.

Outside of the lab, there is that slower method of planetary destruction: climate change. ‘Horror,’ the author Jeff VanderMeer told me, ‘is the beauty of the natural world juxtaposed against the way we destroy those natural systems without understanding them.’ VanderMeer’s Southern Reach trilogy (2014), some of my favourite horror novels of all time, diluted my enjoyment of the UK’s recent heatwave and refused to let me forget that what I was basking in were the convulsions of an aching planet. A biosphere cast brutally off-balance forms the setting for M R Carey’s The Girl With All the Gifts (2014), where humanity has been devastated by a fungal infection. Where horror once worried about the weather gods, it now just worries about the weather. Climate change, meanwhile, is a major cause of mass migrations, potent fuel for what the leading critic Leslie Klinger described to me as horror’s historical trend of feeding off ‘the invasion of foreigners into previously stable populations’. At a base level, we are in-groupish creatures. I’ve spent time with rural, paganish communities who enjoy a singsong and are sexually unrepressed; none of them tried to burn me alive in a giant wicker man. But horror says: you never know. Stick to your own.

If all these fears sound selfish, parochial, insular – don’t get the wrong idea. Horror offers a map of the psyche and, like fear itself, is inherently apolitical. It can easily offset its archetypal conservatism with a radical sort of anarchism. Horror might thematise our fear of the unknown – but it also warns about clinging too stubbornly to the familiar. In a lot of horror, survival is predicated on a capacity to quickly adapt to brutal change. Horror has little time for the conservative sentimentality that swirls around ideas such as institutions and tradition, and even something like the nation-state is often revealed as a sort of frilly, doomed illusion. The protections of social hierarchy or private property are never of any use, and horror loves to punish characters who arrogantly believe that wealth will shield them. In horror, the consolations of the past melt in contact with the white-hot heart of present fear. Conservatism fails because it is revealed that at bottom there is nothing to conserve. As the author Michael Marshall Smith put it to me, great horror often declares: ‘It’s just you versus the monster. Always has been. Always will be.’

In this, lots of horror is intensely universalising. Frequently, a scenario comes down to a simple contest between humankind and something else. Splitting up is a suicidal move in horror; survival often follows an impulse toward communal effort. Similarly universalising is the way that, at extreme moments of threat or fear, a given character’s skin colour or gender or nationality will often be effaced. At horror’s pitch, we perceive a simple human, doing what we all do every day of our lives: struggling to live, to persist, to overcome. In The Babadook (2014), a widowed mother is stalked by an amorphous, black-hatted monster that embodies her grief at losing her husband. The monster – her terrible, life-sucking trauma – threatens to claim her son, and destroy what is left of her family. Late in the film, bloodied and exhausted, the mother faces down the Babadook, yelling: ‘If you touch my son again, I’ll fucking kill you.’ The monster is tamed. It is a show of furious bravery that could be any mother, anywhere; courageous love in the face of total disorder.

It’s easy to romanticise horror, but there are also unsexy, functional reasons why it’s having its moment right now. The streaming revolution has given creators a reliable and direct way to reach a dedicated, self-selecting audience. Trusting this audience, distributors like A24 and Blumhouse have put a great amount of both creative wherewithal and cold hard dollars into horror cinema. The genre has always been reliably profit-turning, but it has also always been prone to the lazy recycling of ideas and tropes. Today, even experimental horror can be profitable. In literature, meanwhile, a revival of interest in horror greats like Shirley Jackson, as well as a slew of Stephen King adaptations, has been a boost for the genre at large.

Yet on their own, these tantalising products would never suffice to make horror soar. Horror has been with us since the dawn of storytelling. It manifests the fears of the human animal, and even today echoes the slippery spiritual suspicion that reality isn’t what it seems. Our world is ripe for upending, and horror expresses that best. Horror can thrive today because ours is a strange and febrile cultural moment. It seems every civilisation has believed they are on the brink of cataclysmic change; such an idea has a weird narcissistic appeal. But today there is everywhere a deep feeling that the horses of disaster are about to plunge in the heavy clay. There is a sort of great loop being completed here: as horror has morphed from theology to art, the ruinous power has moved from the judgment of God to the hand of humans. The end of days in one programmer’s idle tinkering, in one laboratory’s overlooked quarantine protocols. Robert Louis Stevenson, author of The Strange Case of Dr Jekyll and Mr Hyde (1886), wrote, ‘Sooner or later, everyone sits down to a banquet of consequences.’ As a genre, horror is forever pulling up a chair, licking its lips at the feast to come.

Breathing this nervous air, current horror – like the theology that provided its former home – is animated by the full spectrum of human psychology. It is driven by our desire to stop all the clocks, shrink into a bubble of the familiar and the known, reject all things foreign. Equally, current horror is shot through with the bone-deep knowledge that if we can’t adapt, we will perish. Its narratives warn us not to cling to outdated consolations, to recognise that we all face the same monsters, in the end. The world has always been dark and full of terrors, and horror has always known it. The dark pleasure of enjoying horror is all about countenancing this awful truth from within a little bubble of safety. It is about the doppelgängered headspace of loathing the real thing but craving its imaginative facsimile. If the genre of horror has a master virtue, a single human quality that it consistently exalts, it is an old one: bravery. We are certain to need that, wherever we are headed.


With thanks to all the horror authors, editors, screenwriters, directors and critics who generously gave me their time to explore horror: Nick Antosca, Stephen T Asma, David Bruckner, Ramsey Campbell, Noël Carroll, Ellen Datlow, Gemma Files, Steven Gerrard, Joe Hill, Carole Johnstone, Leslie Klinger, John Langan, Lisa Morton, Andy Nyman, Jami O’Brien, Xavier Aldana Reyes, Priya Sharma, David J Skal, Michael Marshall Smith, Eugene Thacker, Paul Tremblay, and Jeff VanderMeer. 

Superpowers and Concrete Towers: Katsuhiro Otomo’s ‘Domu: A Child’s Dream’

By Patrick Haddad

Source: We Are the Mutants

In 1989, almost a year after it was released in Japan, the Western world was given its first cinematic taste of anime with the sci-fi epic Akira. Acclaimed writer Katsuhiro Otomo’s vision of a post-apocalyptic Neo-Tokyo, a sea of concrete edifices laid waste by war, was adapted from the manga of the same name, serialized in Japan from 1982 to 1990. Religious fanatics, biker gangs, and shadowy government figures all vie for control of children with superhuman powers, while the truth behind World War III teases just out of reach. But before Akira, Otomo penned and illustrated a shorter piece of raw, dystopian horror: Domu: A Child’s Dream. Set in a government housing complex where a series of inexplicable deaths are taking place, Domu (serialized from 1980 to 1981) is resolved through a conflict between an old man and a young girl, both of whom secretly possess extrasensory powers.

While Domu foreshadowed Akira in many ways, it is a much more intimate story with fewer characters and just one location, the Tsutsumi Housing Complex. The residents of Tsutsumi are a forgotten, surplus community. Dreams left unfulfilled, private sufferings gone unchecked, and the struggle for identity in the monotonous wash of concrete go some way to explain the rash of suicides plaguing the complex, yet the police are at a loss to explain exactly how so many of these deaths occurred. From the beginning, Otomo sets out to introduce the overwhelming, modernist structure as a character in and of itself. Full panel shots of the building in high detail and high contrast are found throughout, and are often employed as bookends to each chapter. Its circular layout insists upon dreary introspection for half of the residents it houses: there is no looking out to the city, to a potentially brighter future. Many prisons, schools, and hospitals also follow a similar template. The circular design—reminiscent of the Panopticon—allows for both greater visibility and fewer places to hide, and is often accompanied by a raised central observation point. Tsutsumi is monotonous in its aspect as well as its makeup: hard concrete, hard lines; no facade, no flair. Behind this impassive exterior lie grimy, cramped apartments hidden among a labyrinth of iron and concrete hallways. It isn’t much of a stretch to imagine Tsutsumi as the backdrop to one of Freddy Krueger’s nightmarish rampages, and Otomo almost certainly drew upon similar feelings of unease regarding the homogeneous modernization of postwar Japan as did Shinya Tsukamoto, who created the shocking cult horror film Tetsuo: The Iron Man (1989).

As the story progresses, it becomes clear—to the reader at least—that the mysterious deaths can be traced back to Old Cho, an apparently senile old man who is in fact using his telekinetic powers to cause fatal accidents or, as in the iconic scene featuring a depressed young man with a craft knife, to force residents to commit suicide. Old Cho frequently kills from the background, unseen by his victims, his twisted revelry seeming to come from the building itself. Old Cho is not the only resident with special gifts. He finds an adversary in the form of a young girl called Etsuko. They clash the very moment Etsuko and her family move into the tower block, when she uses her abilities to catch a plummeting baby that Cho, using his psychic powers, had snatched and dropped from the balcony.

From there, Cho’s and Etsuko’s confrontations escalate as the tally of innocent victims climbs. The final act of Domu marks a stark departure from earlier passages, as the static panels showing the impassive monolith and its cowed inhabitants are replaced by dynamic and violent scenes, splashed with blood and fueled by emotion. By the end of the book, it is hard to tell who really won or what a victory would even mean, but it is clear that most of the violence was down to random malice, or misguided fear and rage. It is ironic, then, that the reduction of human beings to pure function results in senseless, unproductive violence.

Many modernist, and particularly Brutalist, social housing structures were built after World War II in order to show that “A dwelling can be standardized to meet the needs of men whose lives are standardized,” according to urban design pioneer Le Corbusier. Projects such as Les Damiers in Paris, Robin Hood Gardens in London, Habitat 67 in Montreal, and the Unité d’habitation in Berlin are iconic examples of an architectural style that would dominate social housing well into the 1970s. However, the utopian vision set out by modernist architects—to create socially progressive and egalitarian housing—became twisted by time and by the reality of the project’s application. Affordable social housing turned into isolated ghettos, while the idea of social progress became just a gear, an empty promise, in the great Soviet machine. The modular, repetitive nature allowed for quick and cheap rebuilding, but, perhaps in part due to its success, it also aided in the dissolution of identity. Each building, each home, was just a copy of the last, with nothing substantial to distinguish each from each: an existence stripped of form, each building nothing but raw function. And so too its occupants.

In contrast to the urbanization implied by modernist housing, America saw large numbers of people, many of them returning veterans, flee the cities after World War II in favor of suburban living. The combination of assembly-line mass production and the G.I. Bill’s loan assistance saw entire communities of cookie-cutter homes spring up in a remarkably short space of time. Spreading out in a grid, rather than towering above, suburbia very often entailed a similar rationalization of living spaces. The mass-produced sprawl, built to the same specifications and filled with the same stylish appliances, fits nicely into Le Corbusier’s definition of homes as “machines for living in.”

Things were much grimmer in Japan, of course. The atomic bomb strikes that wiped two of its cities clean off the map, along with the death of an empire, caused a national crisis of identity. The country was then occupied by and rebuilt in the image of its conquerors: centuries of culture burned to the ground or consumed by industry and replaced with bloodless, uniform architecture. Tower blocks went up where pagodas once stood, no longer hewn from stone and wood but erected with concrete and rebar. British author Theodore Dalrymple describes modernist architecture as “inherently totalitarian… [it] delights to overwhelm and humiliate what went before it by its size and prepotency.” The seeds of modernism in Japan were planted before the country’s defeat in World War II, however. Le Corbusier worked with two prominent Japanese architects during the 1930s, Kunio Maekawa and Junzo Sakakura. During this time, a synergy was found between Le Corbusier’s visions for flexible, open-plan buildings filled with natural light and the traditional Japanese house, called Sukiya-zukuri. This synergy would be expanded upon in the later Metabolism movement, which sought to bring inspiration from organic, biological structures to modernist architecture during the 1960s.

In Domu, we find a manifestation of this dehumanizing monolith in the form of the Tsutsumi Housing Complex and its magpie avatar, Old Cho. For each of his victims, Cho claims a glittering prize, a personal token: a badge, a hat, a gun. Before his victims are forced over the precipice of their hopelessness, literally and figuratively, a piece of their identity is stolen and secreted away somewhere in the bowels of the building.

While it is unlikely that Otomo wrote Domu as an explicit attack on modernism in Japan, the influence of the displacement and anxiety it caused is clear in his work. In Domu, we see Otomo start to develop two of the principal themes that went on to make Akira a timeless classic: young people with exceptional, inherent power, and a dystopian vision of Neo-Tokyo as a failed totalitarianism, an endless landscape of monstrous and towering concrete. These themes went on to define an era of storytelling in manga and anime characterized by pervasively bleak visions of the future.

The loss of privacy and individualism caused by massive modernist social housing estates is also explored in J.G. Ballard’s High-Rise (1975), in which tenants of a self-contained tower block in London degenerate into primal tribes, warring over territory and resources while normal city life continues outside. In High-Rise, the dehumanizing power of Brutalism leads people to lose their “civilized” behavior and let base urges drive their lives, while in Domu we get a greater sense of the despondency that comes from being a lifeless industrial worker in a lifeless industrial landscape. It is as though living in this monochromatic, function-centric environment leaves us with only two potential identities: the animal, or the machine.

Regardless of whether you read Domu for its gripping and pioneering storytelling, for what it tells us about the role played by modernist architecture in postwar Japan, or purely for its wonderful aesthetic, it is a work that easily stands on its own two legs, despite often being overlooked as some sort of practice run for the Akira epic. Rather than the abstract, existentialist sprawl that is the latter, in Domu we have a more concise and personal tale, with a sense of looming oppression that bleeds from every page. The bare honesty found in some of Cho’s victims shines a light on the real lives lived quietly the world over, their deep fatigue resonating in profound echoes. The next time your morning commute takes you past some austere, concrete tower block, remember: somewhere inside may be a young girl who blows things up with her mind.

 


Alex Schlegel on Imagination, the Brain and ‘the Mental Workspace’

By Rob Hopkins

Source: Resilience

What happens in the brain when we’re being imaginative?  Neuroscientists are moving away from the idea of what’s called ‘localisationism’ (the idea that each capacity of the brain is linked to a particular ‘area’ of the brain) towards the idea that what’s more important is to identify the networks that fire in order to enable particular activities or insights.  Alex Schlegel is a cognitive neuroscientist, which he describes as being about “trying to understand how the structure and function of the brain creates the mind and the consciousness we experience and everything that makes us human, like imagination”.

He recently co-published fascinating research entitled “Network structure and dynamics of the mental workspace” which appeared in Proceedings of the National Academy of Sciences, which identified what the authors called “the mental workspace”, the network that fires in the brain when we are being imaginative.  I spoke to Alex via Skype, and started by asking him to explain what the mental workspace is [here is the podcast of our full conversation, an edited transcript appears below].

This is maybe just a product of the historical moment we’re in with cognitive neuroscience research, that most of neuroscience research, I think I would say even now, is still focused on finding where is the neural correlate of some function?  Where does language happen?  Where does vision happen?  Where does memory happen?  Those kinds of things.

It was very easy to ask those questions when fMRI came around, because we could stick someone in the scanner and have them do one task, and do a control test, and then do the real test, and see what part of the brain lights up in one case rather than the other.  Those very well controlled, reductionist kinds of paradigms produced these very clean blobs where something happens in one case versus the other.  I think that led a lot to the story of one place in the brain for every function and we just have to map out those places.

But in reality, the brain is a complex system.  It works in a real world which is a complex environment, and in any kind of real behaviour that we engage in, the entire brain is going to be involved in one way or the other.  Especially when you start to get into these more complex abilities that are very hard to reduce to this highly controlled A versus B kind of thing.

To really understand the behaviour itself, like imagination, it’s not that surprising that it’s going to be a complex, multi-network kind of phenomenon. I think why we were able to show that is maybe primarily because the techniques are advancing in the field and we’re starting to figure out how to look at these behaviours in a more realistic way. One of the big limitations of cognitive neuroscience research right now, because of fMRI, because of the techniques we’ve had, is that we tend to think of behaviour as activating, or not activating, the brain.

When we’re doing analyses of brain activity, we’re looking for areas that become more active than others. This is changing a lot in the last few years, but at least for the first fifteen, twenty years, that was one of the only ways we would look at brain activity. So it simplistically treats the brain as some other organ where it’s either buzzing or it’s not buzzing, and if it buzzes, language happens. But really the brain is a complex computational system.

It’s doing complex computations and information processing and that’s not something you’re really going to see if you’re just looking for, in a large area, increased versus decreased activity. When we start to be able to look at the brain more in terms of the information that it is processing, where we can see information and how we can see communication between different areas, then you can start to look at things like imagination, or the mental workspace, in a more complex light.

So how does that idea sit alongside the ideas firstly of the ‘Default Network’, which is often linked to creativity and imagination as well, and also to the idea that the hippocampus is the area that is essential to a healthy, functioning imagination?  Do those three ideas just fit seamlessly together, or are they heading off in different directions?

I can give you my opinion, that’s not very well founded in any kind of data, but this is something that we’ve talked about a lot in the lab.  I have a suspicion that, actually, we had been thinking about how to test for a while.  So the Default Mode Network was first seen as this network that would become more active in between tasks.  So when we’re doing an fMRI experiment what we’ll usually do is you’ll have some period where you’re doing the task, and then there’s a period where you’re just resting, so you can get the baseline brain activity when you’re not doing anything.  And this was a surprising result: actually, during rest periods, some areas of the brain become more active.  And, you know, “Oh wow, it’s a surprise, the person’s not just sitting there blankly doing nothing.”  The brain doesn’t just totally deactivate.  They’re doing other stuff during those blank periods where there’s no stimulus on the screen.

From my personal experience, what you do in those rest periods is you daydream.  Your mind wanders.  You think about what you’re going to do afterwards, or stuff that’s happened during the day.  There’s a lot of research since then to back that up.  It seems to be this kind of network that’s highly involved in daydreaming like behaviour, or social imagination, those kinds of things.

My opinion, or my suspicion, is that this is illustrating how our term ‘imagination’ really encompasses a lot of different things.  When you try to lump it under this one term, this one mega term, you’re going to be missing out on a lot of the complexity, or subtlety.  So what I suspect is going on is that there’s this more like daydreaming mode of control over your inner space, where you’re not really consciously, volitionally, directing yourself to have certain experiences.  There’s a default control network that’s more taking over the daydreaming.

When I daydream I’m not trying to think about anything, it’s just letting the thoughts come.  That’s maybe part of what imagination is, but a very important part of imagination is you trying to imagine things, trying to direct yourself, thinking, “Well, what is the relationship between these two things?” or “How can I build community?”  Or something like that.  In that case you’re taking active volitional control over these systems.  So that would be my suspicion of what’s going on.

How the results we found would differ from the default mode network is that in our study we would show people some stimulus and we would say, “Rotate this 90 degrees clockwise”, so they had this fairly difficult task that they had to do and it was effortful.  This more frontal parietal network probably took over then.  And you see that a lot in other studies.  This frontal parietal network, I think they sometimes call it an Executive Attention Network, directs things when you’re consciously trying to engage in some task: that takes over, and if you’re not doing anything, the default mode network takes over.

So they’re both different manifestations of the imagination?  Like an active and a more passive, less conscious version?  They’re two versions of the same thing, in a sense?

Yeah, I would think that.  It fits well with what I’ve seen.  There have been studies that show that they’re in some ways antagonistic or mutually inhibiting, the default mode network and this executive attention network…

It’s like oil and water, it’s one or the other?  Or Yin and Yang, as I’ve read in some papers?

Right, but a simple way of describing these that people often resort to is that the Executive Attention Network is designed for attention to the outside world, and the Default Mode Network is attention to the inner space.  Where I would disagree with that, or suggest that that’s not the case, is that I think a better way to classify it would be that executive attention is more of this volitionally driven attention, which is usually associated with attention to the outside world.  And default mode network is more – I don’t know how to describe it exactly, but it’s more of this daydreaming network.  But the point is that your executive volitional attention can be driven to the inner space just as much as it can be driven to the outside world.

Is the mental workspace network the same kind of network that would be firing in people when they’re thinking about the future and trying to be imaginative about how the future could be?

Yeah, I would think so.  I think an important difference, or an important additional part that you might start to see if you’re thinking about imagining the future, is that practically most of the time when you’re imagining the future, you’re thinking about people, and social groups, and how to navigate those kinds of dynamics.

So I would guess that then you would get added into the mix all the social processing networks that we have.  That’s actually another thing that we’re thinking about how to look at, is that practically a big chunk of human cognition is spent thinking about your relationship with other people, and how to navigate that.  There’s a good argument to be made that that kind of complex processing space was one of the main drivers of us becoming who we are.  Because social cognition is some of the most complex cognition we do, trying to imagine what somebody’s thinking by looking at their facial expression, or imagine how do I resolve a conflict between these two people who are fighting.  Things like that.

We do have very specialised regions and networks in the brain that have evolved to do that kind of processing.  So yeah, it’s a very interesting question.  That how would these other mental workspace areas, at least that we looked at, that had nothing to do with it, you know, it’s like, “Here’s this abstract shape.  What does it look like if you flip it horizontally here”, things like that.  How would they interact with these socially evolved areas?  It’s a very interesting question.

A lot of the research that I’ve been looking at is about how when people are in states of trauma, or when people grow up in states of fear, that the hippocampus visibly shrinks and that cells are burnt out in the hippocampus, and that people become less able to imagine the future.  People get stuck in the present, and it’s one of the indicators, particularly with post-traumatic stress, is that inability to look forward, and inability to imagine a future.  Do you have any knowledge of, or any speculation about, what happens to the mental workspace when people are in states of trauma or when people are in states of fear?

Definitely no data, only speculation.  As with anything real and interesting involving humans, it’s going to be incredibly complex.  So it would be very difficult, and may be impossible, to distil it down to simple understandable things that are happening in the brain, but what I would guess is that, in people that are in stressful situations, and experiencing trauma, you tend to focus – like you were hinting at – you tend to focus on the present.  What’s there immediately?  How do I survive this day?

You don’t tend to think much about planning for the future, or synthesising everything that’s happened to you in the past; you just react in the moment because you don’t know what the next moments are going to be like.  There’s no more cognitive load that you can deal with because of all the stress you have.  So I would guess that for one you’re not really synthesising or processing your experiences into something brought to bear on decisions in the future as much.

And you’re not exercising those muscles of planning far into the future.  So just like any other muscle in the body, if you don’t practice the skills, and you don’t use various parts of your brain, they’re going to atrophy.  They’re not going to develop in the way that they would if you did use them.  In that sense it seems perfectly understandable and not that surprising that these areas and networks that we found associated with these kinds of activities – projecting oneself into the future, or imagining things that don’t exist – in people who for whatever reason aren’t doing that kind of thing regularly in their lives, are not going to be as developed as they would be in people who are happy and healthy and imaginative.

The paper that Kyung Hee Kim published in 2010, ‘The Creativity Crisis’ suggested that we might be seeing a decline in our collective imagination.  Do you have any thoughts on why that might be, or what might be some of the processes at work here?

I could speculate a couple of things.  The first thing that pops to mind obviously is education.  How we think about the educational system, how we train children.  And I don’t know about 1990 in particular but definitely starting in 1999 when we became test-crazed, that would be a very obvious culprit.

One thing to think about with the Torrance test and pretty much all tests, these standardised tests of creativity that we use, is that one of the major components that determines the outcome on the test is this divergent thinking idea.  How many ideas can you come up with?  So this has, I think, fairly detrimentally become one of the working definitions we have in psychology research of creativity, is “how much?”  And not really focusing on quality so much, and just using how many ideas you can think of as a stand in for how creative someone is.

The Torrance Test is better because it does get into other dimensions as well, but still some of the major dimensions determining the score are fluency, when you’re doing these drawings, how many components are there in the drawing?  That kind of thing.  So for instance if there were educational trends starting in the 1990s and continuing to now that were leading people to try to converge rather than diverge – you know, “What’s the one right answer?” versus, “What are lots of possible answers?” – then that could definitely lead to these changes we’ve been seeing in the tests.

Even if that were the case though, is that really a problem? Obviously we want people to be able to think of lots of possibilities but if it’s just, for instance, people who have been brought up in an educational system where they’ve been taking standardised tests all the time, and they’re trying to figure out which of the four bubbles is the right one to fill in, then that could just be a habit they’ve developed that carries over to these tests.  I don’t know exactly.

Another idea that maybe would be related to this is we’re definitely much less idle than we were in the past.  I guess we lament all the time how overscheduled kids are.  They go from soccer practice to band practice to art class, to blah, blah, blah, blah, trying to fill up their resume for college or whatever.  So if somebody is just constantly buzzing, busy, not really just stopping and daydreaming, and throwing rocks in creeks or whatever, then that’s again, it’s a habit they’re not going to have developed and they’re not going to be able to use as well.

This idleness, or giving up control to the Default Mode Network maybe, if you will, letting those ideas come in, exploring possibilities, those are things that I think often come out of boredom. And if you’re never bored, you’re never really letting those processes happen.  So that would be another thing to think about.

So if somebody is less imaginative, is that because that when the mental workspace fires, it’s including less places, or that it’s joining them up less vigorously? I don’t have all the terminology.  It all fires, but it fires to less places?  Or it fires less strongly to all those different places?

I think it would be basically everything, to give you a terrible answer.  This is where we’re really getting at how imagination is a very, very complex process that we’re distilling to a single word, when it’s really thousands of parts coming together.

For instance, if you can imagine visual experiences more or less vividly, then that’s going to play a role.  Somebody who can have very vivid mental images of things is going to probably have an easier time recombining things than somebody who really struggles to form a visual image.  Or on the flip side, there’s a lot of circumstantial evidence that people tend to go to one end or another of being very visual people, and I consider myself on those…  When I think, I tend to think a lot in terms of visual representations.  So it’s very easy for me to do the kinds of tasks that I ask subjects to do, where you know, “Here’s this weird random shape, what would it look like if it was rotated 90 degrees?”

Some people have a really hard time doing that kind of stuff though.  They’re very smart people, but they’re just terrible at mentally manipulating images.  But if you have them think about other things, like more verbal, logical kinds of representations, they’re really good at that.  So even trying to talk about the mental workspace network as one static network of areas in the brain is probably not accurate, because different people will have different connections, or different parts of it will be more active than others.

When I’m trying to mentally imagine things, for some people like me, that might involve mental or visual images, and that’s the way I think about it, but for other people it might involve much more the language areas of the brain, exercising that language network in a more mental way.  And that might lead to strengths for some people versus others, and vice versa, depending on what kinds of tests you’re trying to do, or whether you’re a verbal person that’s being forced to try to do something visual, or vice versa.

So given that the networks involved are these complex information-processing systems, there’s any number of ways they can differ or fail, or become strengthened or atrophied.

One of the questions I’ve asked everybody that I’ve interviewed has been if you had been elected last year as the President on a platform of ‘Make America Imaginative Again’, if you had thought actually one of the most important things we need is to have young people have a society that really cherishes the imagination, an education system where people come out really fired up and passionate, what might be some of the things you would do in your first 100 days in office?

First 100 days?  Well I think the real solutions are things that are more like 20-year solutions.  So you can start in 100 days I guess but you definitely won’t solve it in 100 days.  For me it all comes down to how we choose to educate people.  I come at this all from the perspective of the US education system, so one thing is that we don’t really view teaching as a profession in the same way that we do medicine or law.

I would say we need the equivalent training and residencies and professional degrees for teachers that we would have with anything else that’s as important a profession as teaching is.  Obviously we shouldn’t be focused on tests in the way that we are.  If you teach tests, and you teach to the kind of competencies a child should achieve by fifth grade, you’re going to be ignoring all the things that are hard to measure, for one thing, like imagination, creativity, curiosity.  How do you evaluate whether a kid’s curious?  I don’t know.

One of the changes I would want to see is that we trust more that the outcomes we want will come, rather than needing to see them happen, because if you need to see a result, then you’ll only focus on the things that you can see.  And for a lot of what education really does, it’s very hard to measure it in any reliable way.  If your goal is to create a society of people that are civically engaged, curious, creative, compassionate, that’s all stuff where you just have to set up a system to do that, and trust that the society you create will be the outcome, basically.  That frees you to focus on those things, and not focus on maths skills, reading skills, that kind of thing.

So in the first 100 days, what do you do? I don’t know. One concrete thing you could do is try to reorganise the teacher training system to make it more professionally aligned.

Like they have in Finland, where teachers are basically trained to Masters level, and then there’s no testing of teachers in schools.  They are just empowered to teach, and they have the most play and the shortest school hours of any country in Europe, and they consistently achieve the best results and the brightest students.

Maybe that would be the first thing we could do, just copy Scandinavia.

The Cost of Resistance

 

(Museum of the Revolution, León, Nicaragua)

By Chris Hedges

Source: TruthDig

Resistance entails suffering. It requires self-sacrifice. It accepts that we may be destroyed. It is not rational. It is not about the pursuit of happiness. It is about the pursuit of freedom. Resistance accepts that even if we fail, there is an inner freedom that comes with defiance, and perhaps this is the only freedom, and true happiness, we will ever know. To resist evil is the highest achievement of human life. It is the supreme act of love. It is to carry the cross, as the theologian James Cone reminds us, and to be acutely aware that what we are carrying is also what we will die upon.

Most of those who resist—Sitting Bull, Emma Goldman, Malcolm X and Martin Luther King Jr.—are defeated, at least in the cold calculation of the powerful. The final, and perhaps most important quality of resistance, as Cone writes, is that it “inverts the world’s value system.” Hope rises up out of defeat. Those who resist stand, regardless of the cost, with the crucified. This is their magnificence and their power.

The seductive inducements to conformity—money, fame, prizes, generous grants, huge book contracts, hefty lecture fees, important academic and political positions and a public platform—are scorned by those who resist. The rebel does not define success the way the elites define success. Those who resist refuse to kneel before the idols of mass culture and the power elites. They are not trying to get rich. They do not want to be part of the inner circle of the powerful. They accept that when you stand with the oppressed you get treated like the oppressed.

The inversion of the world’s value system makes freedom possible. Those who resist are free not because they have attained many things or high positions, but because they have so few needs. They sever the shackles used to keep most people enslaved. And this is why the elites fear them. The elites can crush them physically, but they cannot buy them off.

The power elites attempt to discredit those who resist. They force them to struggle to make an income. They push them to the margins of society. They write them out of the official narrative. They deny them the symbols of status. They use the compliant liberal class to paint them as unreasonable and utopian.

Resistance is not, fundamentally, political. It is cultural. It is about finding meaning and expression in the transcendent and the incongruities of life. Music, poetry, theater and art sustain resistance by giving expression to the nobility of rebellion against the overwhelming forces, what the Romans called fortuna, which can never ultimately be overcome. Art celebrates the freedom and dignity of those who defy malignant evil. Victory is not inevitable, or at least not victory as defined by the powerful. Yet in every act of rebellion we are free. It was the raw honesty of the blues, spirituals and work chants that made it possible for African-Americans to endure.

Power is a poison. It does not matter who wields it. The rebel, for this reason, is an eternal heretic. He or she will never fit into any system. The rebel stands with the powerless. There will always be powerless people. There will always be injustice. The rebel will always be an outsider.

Resistance requires eternal vigilance. The moment the powerful are no longer frightened, the moment the glare of the people is diverted and movements let down their guard, the moment the ruling elites are able to use propaganda and censorship to hide their aims, the gains made by resisters roll backward. We have been steadily stripped of everything that organized working men and women—who rose up in defiance and were purged, demonized and killed by the capitalist elites—achieved with the New Deal. The victories of African-Americans, who paid with their bodies and blood in making possible the Great Society and ending legal segregation, also have been reversed.

The corporate state makes no pretense of addressing social inequality or white supremacy. It practices only the politics of vengeance. It uses coercion, fear, violence, police terror and mass incarceration as social control. Our cells of resistance have to be rebuilt from scratch.

The corporate state, however, is in trouble. It has no credibility. All the promises of the “free market,” globalization and trickle-down economics have been exposed as a lie, an empty ideology used to satiate greed. The elites have no counterargument to their anti-capitalist and anti-imperialist critics. The attempt to blame the electoral insurgencies in the United States’ two ruling political parties on Russian interference, rather than massive social inequality—the worst in the industrialized world—is a desperate ploy. The courtiers in the corporate press are working feverishly, day and night, to distract us from reality. The moment the elites are forced to acknowledge social inequality as the root of our discontent is the moment they are forced to acknowledge their role in orchestrating this inequality. This terrifies them.

The U.S. government, subservient to corporate power, has become a burlesque. The last vestiges of the rule of law are evaporating. The kleptocrats are pillaging and looting like barbarian hordes. Programs instituted to protect the common good—public education, welfare and environmental regulations—are being dismantled. The bloated military, sucking the marrow out of the nation, is unassailable. Poverty is a nightmare for half the population. Poor people of color are gunned down with impunity in the streets. Our prison system, the world’s largest, is filled with the destitute. And presiding over the chaos and the dysfunction is a political P.T. Barnum, a president who, while we are being fleeced, offers up one bizarre distraction after another, much like Barnum’s Feejee mermaid—the head and torso of a monkey sewn to the back half of a fish.

There is no shortage of artists, intellectuals and writers, from Martin Buber and George Orwell to James Baldwin, who warned us that this dystopian era was fast approaching. But in our Disneyfied world of intoxicating and endless images, cult of the self and willful illiteracy, we did not listen. We will pay for our negligence.

Søren Kierkegaard argued that it was the separation of intellect from emotion, from empathy, that doomed Western civilization. The “soul” has no role in a technocratic society. The communal has been shattered. The concept of the common good has been obliterated. Greed is celebrated. The individual is a god. The celluloid image is reality. The artistic and intellectual forces that make transcendence and the communal possible are belittled or ignored. The basest lusts are celebrated as forms of identity and self-expression. Progress is defined exclusively by technological and material advancement. This creates a collective despair and anxiety that feeds and is fed by glitter, noise and false promises of consumer-culture idols. The despair grows ever-worse, but we never acknowledge our existential dread. As Kierkegaard understood, “the specific character of despair is precisely this: it is unaware of being despair.”

Those who resist are relentlessly self-critical. They ask the hard questions that mass culture, which promises an unachievable eternal youth, fame and financial success, deflects us from asking. What does it mean to be born? What does it mean to live? What does it mean to die? How do we live a life of meaning? What is justice? What is truth? What is beauty? What does our past say about our present? How do we defy radical evil?

We are in the grip of what Kierkegaard called “sickness unto death”—the numbing of the soul by despair that leads to moral and physical debasement. Those who are ruled by rational abstractions and an aloof intellectualism, Kierkegaard argued, are as depraved as those who succumb to hedonism, cravings for power, violence and predatory sexuality. We achieve salvation when we accept the impediments of the body and the soul, the limitations of being human, yet despite these limitations seek to do good. This burning honesty, which means we always exist on the cusp of despair, leaves us, in Kierkegaard’s words, in “fear and trembling.” We struggle not to be brutes while acknowledging we can never be angels. We must act and then ask for forgiveness. We must be able to see our own face in the face of the oppressor.

The theologian Paul Tillich did not use the word “sin” to mean an act of immorality. He, like Kierkegaard, defined sin as estrangement. For Tillich, it was our deepest existential dilemma. Sin was our separation from the forces that give us ultimate meaning and purpose in life. This separation fosters the alienation, anxiety, meaninglessness and despair that are preyed upon by mass culture. As long as we fold ourselves inward, embrace a perverted hyper-individualism that is defined by selfishness and narcissism, we will never overcome this estrangement. We will be separated from ourselves, from others and from the sacred.

Resistance is not only about battling the forces of darkness. It is about becoming a whole and complete human being. It is about overcoming estrangement. It is about the capacity to love. It is about honoring the sacred. It is about dignity. It is about sacrifice. It is about courage. It is about being free. Resistance is the pinnacle of human existence.

Revolutionary Terror: Mark Steven’s ‘Splatter Capital’

By Michael Grasso

Source: We Are the Mutants

Splatter Capital: The Political Economy of Gore Films
By Mark Steven
Repeater Books, 2017

“Splatter confirms and redoubles our very worst fears. It reminds us of what capital is doing to all of us, all of the time—of how predators are consuming our life-substances; of how we are gravely vulnerable against the machinery of production and the matrices of exchange; and of how, as participants of an internecine conflict, our lives are always already precarious.”

—from the Introduction to Splatter Capital

Political readings or interpretations of horror films are nothing new. But in Mark Steven’s 2017 study, Splatter Capital, an explicit connection is made between the bloody gore of what Steven terms “splatter” horror films and the dehumanizing, mutilative forces of global capitalism. Moreover, Steven posits the artistic motivation behind splatter horror as an explicit repudiation of this system: “It is politically committed and its commitment tends toward the anti-capitalist left.” In splatter films, Steven tells us, the images of gory dismemberment do double duty. They both offer a clear metaphor for capitalism’s cruelty, and act as a cathartic revenge in which the bloody legacy of capitalist exploitation is often visited upon its perpetrators and profiteers among the bourgeoisie.

Some definitions are in order here, given that Steven’s schema of genres—“splatter,” “slasher,” “extreme horror”—draws distinctions that might not be apparent even to horror fans. Splatter horror, according to Steven, is all about the violence that can be visited upon the human body and all the abjection that follows. It is machinery tearing apart flesh, blood, and guts: the moment a human body becomes meat. It differs from the personalized and often sexualized “hunt” of the slasher flick. The protagonist in a slasher movie is an individual (often female) resisting violent death at the hands of another individual (often male). In victory against Jason, Freddy, or Michael Myers, this protagonist, in Steven’s words, “restores a social order, which is all too regularly white, middle-class, and suburban.” Splatter horror not only expands the horizons of mutilation and violence allowable in a horror film but systematizes it. The splatter enemy is an implacable, impersonal force, full of shock and awe; its grudge is not personal, but instead overwhelming, inescapable, and, most importantly, class-based.

The language of violence and horror has been with Marxist thought from the beginning. Steven gives us a good précis of Marx’s use of explicitly Gothic (along with bloody and cannibalistic) imagery throughout his works, as well as a splatter-tastic explanation of the exploitation behind surplus value, using an imaginary case study in the manufacturing of chainsaws and knives. The October Revolution in Russia is viewed as a reaction to the inhuman mechanized slaughter of the First World War; Eisenstein’s early filmic paeans to the necessity of revolution, such as Strike (1925), demonstrate, thanks to Eisenstein’s pioneering use of montage, capitalism’s role as butcher. Steven also discusses avowed leftist filmmakers from outside the Soviet Union such as Godard, Makavejev, and Pasolini—specifically their use of gore to embody the cruelty of the ruling classes.

As we enter the world of Hollywood film in Chapter Three, Steven examines splatter film as a specifically American reaction to the constant churning crisis of capitalism. In particular, he looks at the two peaks of gore-flecked horror—the mid ’60s through the early ’80s, and the post-Cold War “torture porn” trend of the early ’00s—as expressions of two very important economic and political shifts. The first splatter peak in the ’70s is seen as a clear reaction to the slow, inexorable spread of neoliberal, globalist, postindustrial economics and its impact on the American industrial worker. (The aftermath of this trend continues into the 1980s with the evaporation of industry and the establishment of a new information-and-finance-based economy.) The splatter/torture porn trend of the ’00s and beyond is a reaction to the crises of capitalism under a new world order of neocolonialist conflict: the War on Terror, the final disestablishment of the Western industrial base in favor of cheap labor in the developing world, and the new interconnected, networked world’s rulership by speculative capital in the form of the finance sector.

Steven cites too many splatter movies to cover in this review, but central to his thesis is the seminal 1974 Tobe Hooper film, The Texas Chain Saw Massacre. The death of local industry leads Leatherface and family to keep their slaughterhouse traditions alive by carving up and eating young people. These young people, Steven is quick to point out, are only there at all because they were unable to get gas for their car (thanks to the first of two 1970s oil crises). American decline is everywhere; betrayal by global economic forces is central to the trap laid by the cannibals. (Of course, the carnage of the Vietnam War can’t be overlooked here either, given the visual language of ambush, capture, and torture; Hooper himself has cited this in subsequent interviews.) Steven notes that the victims in The Texas Chain Saw Massacre are representative of a bourgeoisie who don’t know how the sausage is made. It’s important and vital, Steven says, that the cannibalistic side of splatter involves the bourgeoisie being forced to eat members of their own class. It’s Burroughs’s famous “naked lunch”: “the frozen moment when everyone sees what is at the end of every fork.”

As the neoliberal takeover of the world economy begins in earnest in the 1980s, as complex and largely ephemeral systems of mass media and finance take the place of the visceral, grinding monomania of industrial capitalism, splatter horror follows suit. Steven’s analysis of David Cronenberg’s Videodrome (1983) is especially sharp, examining the links between the body horror of the film and the Deleuzian body without organs. Max Renn’s body becomes an endlessly modular media node, able to accommodate video cassettes, to generate and fuse with phallic weapons (used to assassinate and destroy the media forces who’ve made him this way), to mesh and mold and mix with the hard plastic edges of media technology. By the end of the film, Renn is a weapon reprogrammed and re-trained on the very media-industrial complex that made him. More body horror: the cult classic Society (1989), whose shocking conclusion posits the ruling class as a cancerous monster, an amorphous leviathan straight out of a Gilded Age political cartoon, eating and fucking and vomiting, red in tooth and claw and pseudopod. Barriers between bodies break down; the system begins swallowing up all alternate possibilities.

With the Cold War finished, the era of post-9/11 eternal war, of Abu Ghraib and Guantanamo, gave rise to the popular new splatter sub-genre of “torture porn.” Steven identifies the context that distinguishes the genre: the indisputable, systematic, and worldwide victory of capitalism and the hypnotic Spectacle that accompanies it. In this era, there are no longer any alternatives. Everyone, rich and poor, is trapped in the system, and the system reintegrates torture into a worldwide video spectacle. This is embodied in both the global conspiracies of the wealthy in Roth’s Hostel series and in the Jigsaw Killer’s industrially-themed Rube Goldberg devices in the Saw franchise—devices of dismemberment explicitly linked to moral quandaries reminiscent of capitalism’s impossible everyday Hobson’s choices for the working class. The system will go on consuming you, whether you’re unlucky enough to be a splatter film’s victim, or “lucky” enough to wield the power to splatter (for example, Hostel: Part II’s reversal of fate on the ultra-wealthy hunters, or the Jigsaw Killer’s death from cancer in Saw III—ultimately due to… a lack of health insurance).

Possibly the most intriguing aspect of this already very good book is Steven’s interspersing of personal anecdotes about when and where he discovered some of his favorite horror and genre films. By placing his personal and psychological experience of splatter films front and center, and linking it to his personal growth and increasing political maturity, he demonstrates the personal impact of the political, and the necessity of personal epiphany, mediated by culture, to achieve political awareness. Splatter Capital is ultimately not a book for the already-convinced and committed leftist, the Marxian thinker already well-versed in theory. (Another of Splatter Capital’s very strong points is how Steven largely eschews jargon and obscurantism for an approachable tone and topic that laypeople can dive into easily.) It is for the fans of these films who’ve always wondered about the ineluctable appeal of visceral, shocking violence on screen, and perhaps why it all feels so strangely familiar.