Saturday Matinee: The Selfish Ledger

“The Selfish Ledger” (2016) is a leaked internal Google video by Nick Foster, the head of design at Google’s research-and-development division, X. It draws on theories of evolutionary biology to explain how the collective data history of all devices could be used by an AI “ledger,” much as genes shape the characteristics of future generations. As explained by Foster:

“User-centered design principles have dominated the world of computing for many decades, but what if we looked at things a little differently? What if the ledger could be given a volition or purpose rather than simply acting as a historical reference? What if we focused on creating a richer ledger by introducing more sources of information? What if we thought of ourselves not as the owners of this information, but as custodians, transient carriers, or caretakers?…By thinking of user data as multigenerational, it becomes possible for emerging users to benefit from the preceding generation’s behaviors and decisions.”

This database of human behavior can be mined for patterns, and “sequenced” like the human genome, making future behaviors and decisions easier to predict and direct. According to Google, the video was designed to be provocative and did not relate to any products in development. Watch it yourself and decide.

Disarming the Weapons of Mass Distraction

By Madeleine Bunting

Source: Rise Up Times

“Are you paying attention?” The phrase still resonates with a particular sharpness in my mind. It takes me straight back to my boarding school, aged thirteen, when my eyes would drift out the window to the woods beyond the classroom. The voice was that of the math teacher, the very dedicated but dull Miss Ploughman, whose furrowed grimace I can still picture.

We’re taught early that attention is a currency—we “pay” attention—and much of the discipline of the classroom is aimed at marshaling the attention of children, with very mixed results. We all have a history here, of how we did or did not learn to pay attention and all the praise or blame that came with that. It used to be that such patterns of childhood experience faded into irrelevance. As we reached adulthood, how we paid attention, and to what, was a personal matter and akin to breathing—as if it were automatic.

Today, though, as we grapple with a pervasive new digital culture, attention has become an issue of pressing social concern. Technology provides us with new tools to grab people’s attention. These innovations are dismantling traditional boundaries of private and public, home and office, work and leisure. Emails and tweets can reach us almost anywhere, anytime. There are no cracks left in which the mind can idle, rest, and recuperate. A taxi ad offers free wifi so that you can remain “productive” on a cab journey.

Even those spare moments of time in our day—waiting for a bus, standing in a queue at the supermarket—can now be “harvested,” says the writer Tim Wu in his book The Attention Merchants. In this quest to pursue “those slivers of our unharvested awareness,” digital technology has provided consumer capitalism with its most powerful tools yet. And our attention fuels it. As Matthew Crawford notes in The World Beyond Your Head, “when some people treat the minds of other people as a resource, this is not ‘creating wealth,’ it is transferring it.”

There’s a whiff of panic around the subject: the story that our attention spans are now shorter than a goldfish’s attracted millions of readers on the web; it’s still frequently cited, despite its questionable veracity. Rates of diagnosis of attention deficit hyperactivity disorder in children have soared, creating an $11 billion global market for pharmaceutical companies. Every glance of our eyes is now tracked for commercial gain as ever more ingenious ways are devised to capture our attention, if only momentarily. Our eyeballs are now described as capitalism’s most valuable real estate. Both our attention and its deficits are turned into lucrative markets.

There is also a domestic economy of attention; within every family, some get it and some give it. We’re all born needing the attention of others—our parents’, especially—and from the outset, our social skills are honed to attract the attention we need for our care. Attention is woven into all forms of human encounter from the most brief and transitory to the most intimate. It also becomes deeply political: who pays attention to whom?

Social psychologists have researched how the powerful tend to tune out the less powerful. One study with college students showed that even in five minutes of friendly chat, wealthier students showed fewer signs of engagement when in conversation with their less wealthy counterparts: less eye contact, fewer nods, and more checking the time, doodling, and fidgeting. Discrimination by race and gender, too, plays out through attention. Anyone who’s spent any time in an organization will be aware of how attention is at the heart of office politics. A suggestion is ignored in a meeting, but is then seized upon as a brilliant solution when repeated by another person.

What is political is also ethical. Matthew Crawford argues that this is the essential characteristic of urban living: a basic recognition of others.

And then there’s an even more fundamental dimension to the politics of attention. At a primary level, all interactions in public space require a very minimal form of attention, an awareness of the presence and movement of others. Without it, we would bump into each other, frequently.

I had a vivid demonstration of this point on a recent commute: I live in East London and regularly use the narrow canal paths for cycling. It was the canal rush hour—lots of walkers with dogs, families with children, joggers as well as cyclists heading home. We were all sharing the towpath with the usual mixture of give and take, slowing to allow passing, swerving around and between each other. Only this time, a woman was walking down the center of the path with her eyes glued to her phone, impervious to all around her. This went well beyond a moment of distraction. Everyone had to duck and weave to avoid her. She’d abandoned the unspoken contract that avoiding collision is a mutual obligation.

This scene is now a daily occurrence for many of us, in shopping centers, station concourses, or on busy streets. Attention is the essential lubricant of urban life, and without it, we’re denying our co-existence in that moment and place. The novelist and philosopher Iris Murdoch writes that the most basic requirement for being good is that a person “must know certain things about his surroundings, most obviously the existence of other people and their claims.”

Attention is what draws us out of ourselves to experience and engage in the world. The word is often accompanied by a verb—attention needs to be grabbed, captured, mobilized, attracted, or galvanized. Reflected in such language is an acknowledgement of how attention is the essential precursor to action. The founding father of psychology William James provided what is still one of the best working definitions:

It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others.

Attention is a limited resource and has to be allocated: to pay attention to one thing requires us to withdraw it from others. There are two well-known dimensions to attention, explains Willem Kuyken, a professor of psychology at Oxford. The first is “alerting”— an automatic form of attention, hardwired into our brains, that warns us of threats to our survival. Think of when you’re driving a car in a busy city: you’re aware of the movement of other cars, pedestrians, cyclists, and road signs, while advertising tries to grab any spare morsel of your attention. Notice how quickly you can swerve or brake when you spot a car suddenly emerging from a side street. There’s no time for a complicated cognitive process of decision making. This attention is beyond voluntary control.

The second form of attention is known as “executive”—the process by which our brain selects what to foreground and focus on, so that there can be other information in the background—such as music when you’re cooking—but one can still accomplish a complex task. Crucially, our capacity for executive attention is limited. Contrary to what some people claim, none of us can multitask complex activities effectively. The next time you write an email while talking on the phone, notice how many typing mistakes you make or how much you remember from the call. Executive attention can be trained, and needs to be for any complex activity. This was the point James made when he wrote: “there is no such thing as voluntary attention sustained for more than a few seconds at a time… what is called sustained voluntary attention is a repetition of successive efforts which bring back the topic to the mind.”

Attention is a complex interaction between memory and perception, in which we continually select what to notice, thus finding the material which correlates in some way with past experience. In this way, patterns develop in the mind. We are always making meaning from the overwhelming raw data. As James put it, “my experience is what I agree to attend to. Only those items which I notice shape my mind—without selective interest, experience is an utter chaos.”

And we are constantly engaged in organizing that chaos, as we interpret our experience. This is clear in the famous Gorilla Experiment, in which viewers were told to watch a video of two teams of students passing a ball between them. They had to count the number of passes made by the team in white shirts and ignore those of the team in black shirts. The experiment is deceptively complex because it involves three forms of attention: first, scanning the whole group; second, ignoring the black T-shirt team to keep focus on the white T-shirt team (a form of inhibiting attention); and third, remembering to count. In the middle of the experiment, someone in a gorilla suit ambles through the group. Afterward, when asked, half the viewers hadn’t spotted the gorilla and couldn’t even believe it had been there. We can be blind not only to the obvious, but to our blindness.

There is another point in this experiment which is less often emphasized. Ignoring something—such as the black T-shirt team in this experiment—requires a form of attention. It costs us attention to ignore something. Many of us live and work in environments that require us to ignore a huge amount of information—that flashing advert, a bouncing icon or pop-up.

In another famous psychology experiment, Walter Mischel’s Marshmallow Test, four-year-olds had a choice: eat one marshmallow immediately, or wait fifteen minutes and get two. While filmed, each child was put in a room alone in front of the plate with a marshmallow. They squirmed and fidgeted, poked the marshmallow and stared at the ceiling. A third of the children couldn’t resist the marshmallow and gobbled it up, a third nibbled cautiously, but the last third figured out how to distract themselves. They looked under the table, sang… did anything but look at the sweet. It’s a demonstration of the capacity to reallocate attention. In a follow-up study some years later, those who’d been able to wait for the second marshmallow had better life outcomes, such as academic achievement and health. One New Zealand study of 1,000 children found that this form of self-regulation was a more reliable predictor of future success and wellbeing than even a good IQ or comfortable economic status.

What, then, are the implications of how digital technologies are transforming our patterns of attention? In the current political anxiety about social mobility and inequality, more weight needs to be put on this most crucial and basic skill: sustaining attention.

*

I learned to concentrate as a child. Being a bookworm helped. I’d be completely absorbed in my reading as the noise of my busy family swirled around me. It was good training for working in newsrooms; when I started as a journalist, they were very noisy places with the clatter of keyboards, telephones ringing and fascinating conversations on every side. What has proved much harder to block out is email and text messages.

The digital tech companies know a lot about this widespread habit; many of them have built a business model around it. They’ve drawn on the work of the psychologist B.F. Skinner who identified back in the Thirties how, in animal behavior, an action can be encouraged with a positive consequence and discouraged by a negative one. In one experiment, he gave a pigeon a food pellet whenever it pecked at a button and the result, as predicted, was that the pigeon kept pecking. Subsequent research established that the most effective way to keep the pigeon pecking was “variable-ratio reinforcement.” Give the pigeon a food pellet sometimes, and you have it well and truly hooked.
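The pull of a variable-ratio schedule can be made concrete with a toy simulation (a sketch of the idea only, not anything described in the article): reward each “peck” with some fixed probability, so the payoff arrives after an unpredictable number of attempts.

```python
import random

def pecks_until_reward(p, rng):
    """One trial of a variable-ratio schedule: each peck pays off
    with probability p, so the reward arrives after an
    unpredictable number of pecks."""
    pecks = 1
    while rng.random() >= p:
        pecks += 1
    return pecks

rng = random.Random(42)
trials = [pecks_until_reward(0.25, rng) for _ in range(10_000)]

# The long-run average approaches 1/p (here, about 4 pecks per
# pellet), but any single trial is unpredictable. That uncertainty
# is what keeps the pigeon, and the phone-checker, coming back.
print(round(sum(trials) / len(trials), 1))
```

A fixed schedule (a pellet every fourth peck, say) has the same average payoff, but behavior under it extinguishes quickly once rewards stop; it is the unpredictability that sustains the habit.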

We’re just like the pigeon pecking at the button when we check our email or phone. It’s a humiliating thought. Variable reinforcement ensures that the customer will keep coming back. It’s the principle behind one of the most lucrative US industries: slot machines, which generate more profit than baseball, films, and theme parks combined. Gambling was once tightly restricted for its addictive potential, but most of us now have the attentional equivalent of a slot machine in our pocket, beside our plate at mealtimes, by our pillow at night, and even during a meal out, a play at the theater, a film, or a tennis match. Almost nothing is now experienced uninterrupted.

Anxiety about the exponential rise of our gadget addiction and how it is fragmenting our attention is sometimes dismissed as a Luddite reaction to a technological revolution. But that misses the point. The problem is not the technology per se, but the commercial imperatives that drive the new technologies and, unrestrained, colonize our attention by fundamentally changing our experience of time and space, saturating both in information.

In much public space, wherever your eye lands—from the back of the toilet door, to the handrail on the escalator, or the hotel key card—an ad is trying to grab your attention, and does so by triggering the oldest instincts of the human mind: fear, sex, and food. Public places become dominated by people trying to sell you something. In his tirade against this commercialization, Crawford cites advertisements on the backs of school report cards and on debit machines where you swipe your card. Before you enter your PIN, that gap of a few seconds is now used to show adverts. He describes silence and ad-free experience as “luxury goods” that only the wealthy can afford. Crawford has invented the concept of the “attentional commons,” free public spaces that allow us to choose where to place our attention. He draws the analogy with environmental goods that belong to all of us, such as clean air or clean water.

Some legal theorists are beginning to conceive of our own attention as a human right. One former Google employee warned that “there are a thousand people on the other side of the screen whose job it is to break down the self-regulation you have.” They use the insights into human behavior derived from social psychology—the need for approval, the need to reciprocate others’ gestures, the fear of missing out. Your attention ceases to be your own, pulled and pushed by algorithms. Attention is referred to as the real currency of the future.

*

In 2013, I embarked on a risky experiment in attention: I left my job. Over the previous two years, the trouble had crept up on me. I could no longer read beyond a few paragraphs. My eyes would glaze over and, even more disastrously for someone who had spent their career writing, I seemed unable to string together my thoughts, let alone write anything longer than a few sentences. When I try to explain the impact, I can only offer a metaphor: it felt like my imagination and use of language were vacuum packed, like a slab of meat coated in plastic. I had lost the ability to turn ideas around, see them from different perspectives. I could no longer draw connections between disparate ideas.

At the time, I was working in media strategy. It was a culture of back-to-back meetings from 8:30 AM to 6 PM, and there were plenty of advantages to be gained from continuing late into the evening if you had the stamina. Commitment was measured by emails with a pertinent weblink. Meetings were sometimes as brief as thirty minutes and frequently ran through lunch. Meanwhile, everyone was sneaking time to battle with the constant emails, eyes flickering to their phone screens in every conversation. The result was a kind of crazy fog, a mishmash of inconclusive discussions.

At first, it was exhilarating, like being on those crazy rides in a theme park. By the end, the effect was disastrous. I was almost continuously ill, battling migraines and unidentifiable viruses. When I finally made the drastic decision to leave, my income collapsed to a fraction of its previous level and my family’s lifestyle had to change accordingly. I had no idea what I was going to do; I had lost all faith in my ability to write. I told friends I would have to return the advance I’d received to write a book. I had to try to get back to the skills of reflection and focus that had once been ingrained in me.

The first step was to teach myself to read again. I sometimes went to a café, leaving my phone and computer behind. I had to slow down the racing incoherence of my mind so that it could settle on the text and its gradual development of an argument or narrative thread. The turning point in my recovery was a five-week research trip to the Scottish Outer Hebrides. On the journey north of Glasgow, my mobile phone lost its Internet connection. I had cut myself loose with only the occasional text or call to family back home. Somewhere on the long Atlantic beaches of these wild and dramatic islands, I rediscovered my ability to write.

I attribute that in part to a stunning exhibition I came across in the small harbor town of Lochboisdale, on the island of South Uist. Vija Celmins is an acclaimed Latvian-American artist whose work is famous for its astonishing patience. She can take a year or more to make a woodcut that portrays in minute detail the surface of the sea. A postcard of her work now sits above my desk, a reminder of the power of slow thinking.

Just as we’ve had a slow eating movement, we need a slow thinking campaign. Its manifesto could be the German poet Rainer Maria Rilke’s beautiful “Letters to a Young Poet”:

To let every impression and the germ of every feeling come to completion inside, in the dark, in the unsayable, the unconscious, in what is unattainable to one’s own intellect, and to wait with deep humility and patience for the hour when a new clarity is delivered.

Many great thinkers attest that they have their best insights in moments of relaxation, the proverbial brainwave in the bath. We actually need what we most fear: boredom.

When I left my job (and I was lucky that I could), friends and colleagues were bewildered. Why give up a good job? But I felt that here was an experiment worth trying. Crawford frames it well as “intellectual biodiversity.” At a time of crisis, we need people thinking in different ways. If we all jump to the tune of Facebook or Instagram and allow ourselves to be primed by Twitter, the danger is that we lose the “trained powers of concentration” that allow us, in Crawford’s words, “to recognize that independence of thought and feeling is a fragile thing, and requires certain conditions.”

I also took to heart the insights of the historian Timothy Snyder, who concluded from his studies of twentieth-century European totalitarianism that the way to fend off tyranny is to read books, make an effort to separate yourself from the Internet, and “be kind to our language… Think up your own way of speaking.” Dropping out and going offline enabled me to get back to reading, voraciously, and to writing; beyond that, it’s too early to announce the results of my experiment with attention. As Rilke said, “These things cannot be measured by time, a year has no meaning, and ten years are nothing.”

*

A recent column in The New Yorker cheekily suggests that all the fuss about the impact of digital technologies on our attention is nothing more than writers’ worrying about their own working habits. Is all this anxiety about our fragmenting minds a moral panic akin to those that swept Victorian Britain about sexual behavior? Patterns of attention are changing, but perhaps it doesn’t much matter?

My teenage children read much less than I did. One son used to play chess online with a friend, text on his phone, and do his homework all at the same time. I was horrified, but he got a place at Oxford. At his interview, he met a third-year history undergraduate who told him he hadn’t yet read any books in his time at university. But my kids are considerably more knowledgeable about a vast range of subjects than I was at their age. There’s a small voice suggesting that the forms of attention I was brought up with could be a thing of the past; the sustained concentration required to read a whole book will become an obscure niche hobby.

And yet, I’m haunted by a reflection: the magnificent illuminations of the eighth-century Book of Kells have intricate patterning that no one has ever been able to copy, such is the fineness of the tight spirals. Lines are a millimeter apart. They indicate a steadiness of hand and mind—a capability most of us have long since lost. Could we be trading depth of focus for breadth of reference? Some might argue that’s not a bad trade. But we would lose depth: the artist Paul Klee wrote that he would spend a day in silent contemplation of something before he painted it. Paul Cézanne was similarly known for his trancelike attention to his subject. Madame Cézanne recollected how her husband would gaze at the landscape and tell her, “The landscape thinks itself in me, and I am its consciousness.” The philosopher Maurice Merleau-Ponty describes a contemplative attention in which one steps outside of oneself and immerses oneself in the object of attention.

It’s not just artists who require such depth of attention. Nearly two decades ago, a doctor teaching medical students at Yale was frustrated at their inability to distinguish between types of skin lesions. Their gaze seemed restless and careless. He took his students to an art gallery and told them to look at a picture for fifteen minutes. Their observational skills improved, and the program is now used in dozens of US medical schools.

Some argue that losing the capacity for deep attention presages catastrophe. It is the building block of “intimacy, wisdom, and cultural progress,” argues Maggie Jackson in her book Distracted, in which she warns that “as our attentional skills are squandered, we are plunging into a culture of mistrust, skimming, and a dehumanizing merging between man and machine.” Significantly, her research began with a curiosity about why so many Americans were deeply dissatisfied with life. She argues that losing the capacity for deep attention makes it harder to make sense of experience and to find meaning—from which comes wonder and fulfillment. She fears a new “dark age” in which we forget what makes us truly happy.

Strikingly, the epicenter of this wave of anxiety over our attention is the US. All the authors I’ve cited are American. It’s been argued that this debate represents an existential crisis for America because it exposes the flawed nature of its greatest ideal, individual freedom. The commonly accepted notion is that to be free is to make choices, and no one can challenge that expression of autonomy. But if our choices are actually engineered by thousands of very clever, well-paid digital developers, are we free? The former Google employee Tristan Harris confessed in an article in 2016 that technology “gives people the illusion of free choice while architecting the menu so that [tech giants] win, no matter what you choose.”

Despite my children’s multitasking, I maintain that vital human capacities—depth of insight, emotional connection, and creativity—are at risk. I’m intrigued as to what the resistance might look like. There are stirrings of protest with the recent establishment of initiatives such as the Time Well Spent movement, founded by tech industry insiders who have become alarmed at the efforts invested in keeping people hooked. But collective action is elusive; the emphasis is repeatedly on the individual to develop the necessary self-regulation, but if that is precisely what is being eroded, we could be caught in a self-reinforcing loop.

One of the most interesting responses to our distraction epidemic is mindfulness. Its popularity is evidence that people are trying to find a way to protect and nourish their minds. Jon Kabat-Zinn, who pioneered the development of secular mindfulness, draws an analogy with jogging: just as keeping your body fit is now well understood, people will come to realize the importance of looking after their minds.

I’ve meditated regularly for twenty years, but curious as to how this is becoming mainstream, I went to an event in the heart of high-tech Shoreditch in London. In a hipster workspace with funky architecture, excellent coffee, and an impressive range of beards, a soft-spoken retired Oxford professor of psychology, Mark Williams, was talking about how multitasking has a switching cost in focus and concentration. Our unique human ability to remember the past and to think ahead brings a cost; we lose the present. To counter this, he advocated a daily practice of mindfulness: bringing attention back to the body—the physical sensations of the breath, the hands, the feet. Williams explained how fear and anxiety inhibit creativity. In time, the practice of mindfulness enables you to acknowledge fear calmly and even to investigate it with curiosity. You learn to place your attention in the moment, noticing details such as the sunlight or the taste of the coffee.

On a recent retreat, I was beside a river early one morning and a rower passed. I watched the boat slip by and enjoyed the beauty in a radically new way. The moment was sufficient; there was nothing I wanted to add or take away—no thought of how I wanted to do this every day, or how I wanted to learn to row, or how I wished I was in the boat. Nothing but the pleasure of witnessing it. The busy-ness of the mind had stilled. Mindfulness can be a remarkable bid to reclaim our attention and to claim real freedom, the freedom from our habitual reactivity that makes us easy prey for manipulation.

But I worry that the integrity of mindfulness is fragile, vulnerable both to commercialization by employers who see it as a form of mental performance enhancement and to consumer commodification, rather than contributing to the formation of ethical character. Mindfulness as a meditation practice originates in Buddhism, and without that tradition’s ethics, there is a high risk of it being hijacked and misrepresented.

Back in the Sixties, the countercultural psychologist Timothy Leary rebelled against the conformity of the new mass media age and called for, in Crawford’s words, an “attentional revolution.” Leary urged people to take control of the media they consumed as a crucial act of self-determination; pay attention to where you place your attention, he declared. The social critic Herbert Marcuse believed Leary was fighting the struggle for the ultimate form of freedom, which Marcuse defined as the ability “to live without anxiety.” These were radical prophets whose words have an uncanny resonance today. Distraction has become a commercial and political strategy, and it amounts to a form of emotional violence that cripples people, leaving them unable to gather their thoughts and overwhelmed by a sense of inadequacy. It’s a powerful form of oppression dressed up in the language of individual choice.

The stakes could hardly be higher, as William James knew a century ago: “The faculty of voluntarily bringing back a wandering attention, over and over again, is the very root of judgment, character, and will.” And what are we humans without these three?

A 2% Financial Wealth Tax Would Provide A $12,000 Annual Stipend To Every American Household

Careful analysis reveals a number of excellent arguments for the implementation of a Universal Basic Income.

By Paul Buchheit

Source: Nation of Change

It’s not hard to envision the benefits in work opportunities, stress reduction, child care, entrepreneurial activity, and artistic pursuits for American households with an extra $1,000 per month. It’s also very easy to justify a financial wealth tax, given that the dramatic stock market surge in recent years is largely due to an unprecedented degree of technological and financial productivity that derives from the work efforts and taxes of ALL Americans. A 2% annual tax on financial wealth is a small price to pay for the great fortunes bestowed on the most fortunate Americans.

The REASONS? Careful analysis reveals a number of excellent arguments for the implementation of a Universal Basic Income (UBI).

(1) Our Jobs are Disappearing

A 2013 Oxford study determined that nearly HALF of American jobs are at risk of being replaced by computers, AI, and robots. Society simply can’t keep up with technology. As for the skeptics who cite the Industrial Revolution and its job-enhancing aftermath (which actually took 60 years to develop), the McKinsey Global Institute says that society is being transformed at a pace “ten times faster and at 300 times the scale” of the radical changes of two hundred years ago.

(2) Half of America is Stressed Out or Sick

Half of Americans are in or near poverty, unable to meet emergency expenses, living from paycheck to paycheck, and getting physically and emotionally ill because of it. Numerous UBI experiments have led to increased well-being for their participants. A guaranteed income reduces the debilitating effects of inequality. As one recipient put it, “It takes me out of depression…I feel more sociable.”

(3) Children Need Our Help

This could be the best reason for monthly household stipends. Parents, especially mothers, are unable to work outside the home because of the all-important need to care for their children. Because we currently lack a UBI, more and more children are facing hunger and health problems and educational disadvantages.

(4) We Need More Entrepreneurs

A sudden influx of $12,000 per year for 126 million households will greatly stimulate the economy, potentially allowing millions of Americans to TAKE RISKS that could lead to new forms of innovation and productivity.

Perhaps most significantly, a guaranteed income could relieve some of the pressure on our newest generation of young adults, who are deep in debt, underemployed, increasingly unable to live on their own, and ill-positioned to take the entrepreneurial chances that are needed to spur innovative business growth. No other group of Americans could make more productive use of an immediate boost in income.

(5) We Need the Arts & Sciences

A recent Gallup poll found that nearly 70% of workers don’t feel ‘engaged’ (enthusiastic and committed) in their jobs. The work chosen by UBI recipients could unleash artistic talents and creative impulses that have been suppressed by personal financial concerns, leading, very possibly, to a repeat of the 1930s, when the Works Progress Administration hired thousands of artists and actors and musicians to help sustain the cultural needs of the nation.

Arguments against

The usual uninformed and condescending opposing argument is that UBI recipients will waste the money, spending it on alcohol and drugs and other ‘temptation’ goods. Not true. Studies from the World Bank and the Brooks World Poverty Institute found that money going to poor families is used primarily for essential needs, and that the recipients experience greater physical and mental well-being as a result of their increased incomes. Other arguments against the workability of the UBI are countered by the many successful experiments conducted in the present and recent past: Finland, Canada, the Netherlands, Kenya, India, Great Britain, Uganda, Namibia, and in the U.S. in Alaska and California.

How to pay for it

Largely because of the stock market, U.S. financial wealth has surged to $77 trillion, with the richest 10% owning over three-quarters of it. Just a 2 percent tax on total financial wealth would generate enough revenue to provide a $12,000 annual stipend to every American household (including those of the richest families).
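The arithmetic behind that claim is easy to verify. Here is a minimal sketch using only the article's own figures ($77 trillion in financial wealth, a 2 percent tax, $12,000 per household, and the 126 million households cited earlier in the piece):

```python
# Check the article's wealth-tax arithmetic using its own figures.
total_wealth = 77e12   # U.S. financial wealth: $77 trillion
tax_rate = 0.02        # proposed 2% annual tax on financial wealth
households = 126e6     # 126 million U.S. households (cited earlier in the article)
stipend = 12_000       # proposed annual stipend per household

revenue = total_wealth * tax_rate   # tax revenue raised per year
cost = households * stipend         # total cost of the stipends per year

print(f"revenue: ${revenue/1e12:.2f}T")   # $1.54T
print(f"cost:    ${cost/1e12:.3f}T")      # $1.512T
print(f"surplus: ${(revenue - cost)/1e9:.0f}B")
```

So a 2 percent tax on $77 trillion yields about $1.54 trillion, slightly more than the roughly $1.51 trillion the stipends would cost.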

It’s easy to justify a wealth tax. Over half of all basic research is paid for by our tax dollars. All the technology in our phones and computers started with government research and funding. Pharmaceutical companies wouldn’t exist without decades of support from the National Institutes of Health. Yet the tech and pharmaceutical companies claim patents on the products paid for and developed by the American people.

The collection of a wealth tax would not be simple, since only about half of U.S. financial wealth is held directly in equities and liquid assets (Table 5-2). But it’s doable. As Thomas Piketty notes, “A progressive tax on net wealth is better than a progressive tax on consumption because first, net wealth is better defined for very wealthy individuals…”

And certainly a financial industry that knows how to package worthless loans into A-rated mortgage-backed securities should be able to figure out how to tax the investment companies that manage the rest of our ever-increasing national wealth.


Smashing the Cult of Celebrity and the Disempowerment Game

By Dylan Charles

Source: Waking Times

At the dark heart of corporate consumer culture lie the social programs that mass-produce conformity, obedience, acquiescence and consent for the matrix.

The cult of celebrity is the royal monarch of these schemes, the ace in the hole for mass mind control and the disempowerment of the individual. This is the anointed paradigm of idol worship and idol sacrifice, a vampire’s feast on our individual and collective dreams. Who do you love? Who do you hate? Who do you want to be like?

Combine this paradigm with the technology of social media, and the individual is flung into oblivion, never fully understanding the importance and value of their own life, instead always comparing themselves to phony ideals and well-designed, well-funded marketing campaigns.

‘The camera has created a culture of celebrity; the computer is creating a culture of connectivity. As the two technologies converge – broadband tipping the Web from text to image; social-networking sites spreading the mesh of interconnection ever wider – the two cultures betray a common impulse. Celebrity and connectivity are both ways of becoming known. This is what the contemporary self wants. It wants to be recognized, wants to be connected: It wants to be visible. If not to the millions, on Survivor or Oprah, then hundreds, on Twitter or Facebook. This is the quality that validates us, this is how we become real to ourselves – by being seen by others. The great contemporary terror is anonymity.’ ~William Deresiewicz

Marketeers and propagandists are skilled at leveraging human psychology to exploit human nature. They utilize the study of the psyche to gain inroads into your behavior, and they employ this science as a tool for stoking insecurities and triggering urges.

They may be selling an idea, a lifestyle, a product, or a war, but the pitch is the same: a false idol rises from the wastelands of the American dream and is presented to the hordes as a well-packaged product. The celebrity’s life is a projection of a niche fantasy, a following is built up around this fantasy, and the cult followers are steered toward the point of purchase.

And that’s what a cult is: “a system of religious veneration and devotion directed toward a particular figure or object.”

This kind of externalized validation serves as a power transfer. Your personal power is extracted and foisted onto a manufactured image in the matrix, and without realizing it, you’ve forfeited your power to influence the direction of your own life.

“The fantasy of celebrity culture is not designed simply to entertain. It is designed to drain us emotionally, confuse us about our identity, make us blame ourselves for our predicament, condition us to chase illusions of fame and happiness, and keep us from fighting back.” – Chris Hedges

This is about usurping individuality in order to foster groupthink and hive consciousness. It’s also about creating a barrier between what you believe is possible for yourself and what chances you are willing to take in order to manifest a unique vision for your life.

You see, human beings are energetic creations, partly made of matter and partly made of spirit, but wholly malleable to the direction of the mind. We are affected by subtle energies, body language, electromagnetic energy, frequencies of light that we cannot see, sounds that we cannot hear, and a thousand other hidden cues. We are beings of energy, and much like a battery, we can give or receive energy.

But the mind is at the center of it all. Whatever the mind entertains, the being creates.

When the mind fixes on an external idol, this innate power to form ourselves is transferred outside of our own locus of control, and where the mind could be centered on creating and expanding the self, it is instead focused on the fantasy of achieving an impossible ideal.

As journalist Jon Rappoport notes:

“If perception and thought can be channeled, directed, reduced, and weakened, then it doesn’t matter what humans do to resist other types of control. They will always go down the wrong path. They will always operate within limited and bounded territory. They will always ignore their own authentic power.” ~Jon Rappoport

The end game here is to keep us from accepting ourselves as worthy and perfect divine beings, and to disconnect us from our own potential. This is deep stuff, reaching far beyond the push to convert us into greedy, materialistic consumers. In a metaphysical sense it is a transfer of energy, and where once we were strong and full of promise, we are now helpless and content to observe as the world flits by.

What’s most dangerous to any system of control is for the individual to know their own strength and to speak their own language, as Chris Hedges puts it.

“That’s why I don’t own a television… and I work as hard as I can to distance myself from popular culture so that I can speak in my own language, not the one they give me.” ~Chris Hedges

The Singular Pursuit of Comrade Bezos

By Malcolm Harris

Source: Medium

It was explicitly and deliberately a ratchet, designed to effect a one-way passage from scarcity to plenty by way of stepping up output each year, every year, year after year. Nothing else mattered: not profit, not the rate of industrial accidents, not the effect of the factories on the land or the air. The planned economy measured its success in terms of the amount of physical things it produced.

— Francis Spufford, Red Plenty

But isn’t a business’s goal to turn a profit? Not at Amazon, at least in the traditional sense. Jeff Bezos knows that operating cash flow gives the company the money it needs to invest in all the things that keep it ahead of its competitors, and recover from flops like the Fire Phone. Up and to the right.

— Recode, “Amazon’s Epic 20-Year Run as a Public Company, Explained in Five Charts”


From a financial point of view, Amazon doesn’t behave much like a successful 21st-century company. Amazon has not bought back its own stock since 2012. Amazon has never offered its shareholders a dividend. Unlike its peers Google, Apple, and Facebook, Amazon does not hoard cash. It has only recently started to record small, predictable profits. Instead, whenever it has resources, Amazon invests in capacity, which results in growth at a ridiculous clip. When the company found itself with $13.8 billion lying around, it bought a grocery chain for $13.7 billion. As the Recode story referenced above summarizes in one of the graphs: “It took Amazon 18 years as a public company to catch Walmart in market cap, but only two more years to double it.” More than a profit-seeking corporation, Amazon is behaving like a planned economy.

If there is one story Americans who grew up after the fall of the Berlin Wall know about planned economies, I’d wager it’s the one about Boris Yeltsin in a Texas supermarket.

In 1989, recently elected to the Supreme Soviet, Yeltsin came to America, in part to see Johnson Space Center in Houston. On an unscheduled jaunt, the Soviet delegation visited a local supermarket. Photos from the Houston Chronicle capture the day: Yeltsin, overcome by a display of Jell-O Pudding Pops; Yeltsin inspecting the onions; Yeltsin staring down a full display of shiny produce like a line of enemy soldiers. Planning could never master the countless variables that capitalism calculated using the tireless machine of self-interest. According to the story, the overflowing shelves filled Yeltsin with despair for the Soviet system, turned him into an economic reformer, and spelled the end for state socialism as a global force. We’re taught this lesson in public schools, along with Animal Farm: Planned economies do not work.

It’s almost 30 years later, but if Comrade Yeltsin had visited today’s most-advanced American grocery stores, he might not have felt so bad. Journalist Hayley Peterson summarized her findings in the title of her investigative piece, “‘Seeing Someone Cry at Work Is Becoming Normal’: Employees Say Whole Foods Is Using ‘Scorecards’ to Punish Them.” The scorecard in question measures compliance with the (Amazon subsidiary) Whole Foods OTS, or “on-the-shelf” inventory management. OTS is exhaustive, replacing a previously decentralized system with inch-by-inch centralized standards. Those standards include delivering food from trucks straight to the shelves, skipping the expense of stockrooms. This has resulted in produce displays that couldn’t bring down North Korea. Has Bezos stumbled into the problems with planning?

Although OTS was in play before Amazon purchased Whole Foods last August, stories about enforcement driving employees to tears fit with the Bezos ethos and reputation. Amazon is famous for pursuing growth and large-scale efficiencies, even when workers find the experiments torturous and when they don’t make a lot of sense to customers, either. If you receive a tiny item in a giant Amazon box, don’t worry. Your order is just one small piece in an efficiency jigsaw that’s too big and fast for any individual human to comprehend. If we view Amazon as a planned economy rather than just another market player, it all starts to make more sense: We’ll thank Jeff later, when the plan works. And indeed, with our dollars, we have.

In fact, to think of Amazon as a “market player” is a mischaracterization. The world’s biggest store doesn’t use suggested retail pricing; it sets its own. Book authors (to use a personal example) receive a distinctly lower royalty for Amazon sales because the site has the power to demand lower prices from publishers, who in turn pass on the tighter margins to writers. But for consumers, it works! Not only are books significantly cheaper on Amazon, the site also features a giant stock that can be shipped to you within two days, for free with Amazon Prime citizensh…er, membership. All 10 or so bookstores I frequented as a high school and college student have closed, yet our access to books has improved — at least as far as we seem to be able to measure. It’s hard to expect consumers to feel bad enough about that to change our behavior.


Although they attempt to grow in a single direction, planned economies always destroy as well as build. In the 1930s, the Soviet Union compelled the collectivization of kulaks, or prosperous peasants. Small farms were incorporated into a larger collective agricultural system. Depending on who you ask, dekulakization was literal genocide, comparable to the Holocaust, and/or it catapulted what had been a continent-sized expanse of peasants into a modern superpower. Amazon’s decimation of small businesses (bookstores in particular) is a similar sort of collectivization, purging small proprietors or driving them onto Amazon platforms. The process is decentralized and executed by the market rather than the state, but don’t get confused: Whether or not Bezos is banging on his desk, demanding the extermination of independent booksellers — though he probably is — these are top-down decisions to eliminate particular ways of life.

Now, with the purchase of Whole Foods, Bezos and Co. seem likely to apply the same pattern to food. Responding to reports that Amazon will begin offering free two-hour Whole Foods delivery for Prime customers, BuzzFeed’s Tom Gara tweeted, “Stuff like this suggests Amazon is going to remove every cent of profit from the grocery industry.” Free two-hour grocery delivery is ludicrously convenient, perhaps the most convenient thing Amazon has come up with yet. And why should we consumers pay for huge dividends to Kroger shareholders? Fuck ’em; if Bezos has the discipline to stick to the growth plan instead of stuffing shareholder pockets every quarter, then let him eat their lunch. Despite a business model based on eliminating competition, Amazon has avoided attention from antitrust authorities because prices are down. If consumers are better off, who cares if it’s a monopoly? American antitrust law doesn’t exist to protect kulaks, whether they’re selling books or groceries.

Amazon has succeeded in large part because of the company’s uncommon drive to invest in growth. And today, not only are other companies slow to spend, so are governments. Austerity politics and decades of privatization put Amazon in a place to take over state functions. If localities can’t or won’t invest in jobs, then Bezos can get them to forgo tax dollars (and dignity) to host HQ2. There’s no reason governments couldn’t offer on-demand cloud computing services as a public utility, but instead the feds pay Amazon Web Services to host their sites. And if the government outsources health care for its population to insurers who insist on making profits, well, stay tuned. There’s no near-term natural end to Amazon’s growth, and by next year the company’s annual revenue should surpass the GDP of Vietnam. I don’t see any reason why Amazon won’t start building its own cities in the near future.

America never had to find out whether capitalism could compete with the Soviets plus 21st-century technology. Regardless, the idea that market competition can better set prices than algorithms and planning is now passé. Our economists used to scoff at the Soviets’ market-distorting subsidies; now Uber subsidizes every ride. Compared to the capitalists who are making their money by stripping the copper wiring from the American economy, the Bezos plan is efficient. So, with the exception of small business owners and managers, why wouldn’t we want to turn an increasing amount of our life-world over to Amazon? I have little doubt the company could, from a consumer perspective, improve upon the current public-private mess that is Obamacare, for example. Between the patchwork quilt of public- and private-sector scammers that run America today and “up and to the right,” life in the Amazon with Lex Luthor doesn’t look so bad. At least he has a plan, unlike some people.

From the perspective of the average consumer, it’s hard to beat Amazon. The single-minded focus on efficiency and growth has worked, and delivery convenience is perhaps the one area of American life that has kept up with our past expectations for the future. However, we do not make the passage from cradle to grave as mere average consumers. Take a look at package delivery, for example: Amazon’s latest disruptive announcement is “Shipping with Amazon,” a challenge to the USPS, from which Amazon has been finagling preferential rates. As a government agency bound to serve everyone, the Postal Service has had to accept all sorts of inefficiencies, like free delivery for rural customers or subsidized media distribution to realize freedom of the press. Amazon, on the other hand, is a private company that doesn’t really have to do anything it doesn’t want to do. In aggregate, as average consumers, we should be cheering. Maybe we are. But as members of a national community, I hope we stop to ask if efficiency is all we want from our delivery infrastructure. Lowering costs as far as possible sounds good until you remember that one of those costs is labor. One of those costs is us.

Earlier this month, Amazon was awarded two patents for a wristband system that would track the movement of warehouse employees’ hands in real time. It’s easy to see how this is a gain in efficiency: If the company can optimize employee movements, everything can be done faster and cheaper. It’s also easy to see how, for those workers, this is a significant step down the path into a dystopian hellworld. Amazon is a notoriously brutal, draining place to work, even at the executive levels. The fear used to be that if Amazon could elbow out all its competitors with low prices, it would then jack them up, Martin Shkreli style. That’s not what happened. Instead, Amazon and other monopsonists have used their power to drive wages and the labor share of production down. If you follow the Bezos strategy all the way, it doesn’t end in fully automated luxury communism or even Wall-E. It ends in The Matrix, with workers swaddled in a pod of perfect convenience and perfect exploitation. Central planning in its capitalist form turns people into another cost to be reduced as low as possible.

Just because a plan is efficient doesn’t mean it’s good. Postal Service employees are unionized; they have higher wages, paths for advancement, job stability, negotiated grievance procedures, health benefits, vacation time, etc. Amazon delivery drivers are not and do not. That difference counts as efficiency when we measure by price, and that is, to my mind, a very good argument for not handing the world over to the king of efficiency. The question that remains is whether we have already been too far reduced, whether after being treated as consumers and costs, we might still have it in us to be more, because that’s what it will take to wrench society away from Bezos and from the people who have made him look like a reasonable alternative.

“An Enthusiastic Corporate Citizen”: David Cronenberg and the Dawn of Neoliberalism

(Editor’s note: In commemoration of director David Cronenberg’s 75th birthday we present this compelling and socially relevant analysis of his filmography.)

By Michael Grasso

Source: We Are the Mutants

The cinematic corpus of David Cronenberg is probably best known for its expertly uncanny use of body horror, but looming almost as large in the writer-director’s various universes is the presence of faceless, all-powerful organizations. Like his rough contemporary Thomas Pynchon and the conspiracies that litter Pynchon’s early works—V. (1963), The Crying of Lot 49 (1966), and Gravity’s Rainbow (1973)—Cronenberg’s shadowy organizations offer fodder for paranoid conspiracy. These conspiracies operate under the cloak of beneficent academic institutes and, in his later work, corporations. The transition from institutes to corporations occurred during Cronenberg’s late ’70s and early ’80s output, specifically the trio of films The Brood (1979), Scanners (1981), and Videodrome (1983).

It is no coincidence that, at this particular time, international finance and prevailing political winds helped put the corporation in society’s driver’s seat. In Adam Curtis’s recent documentary film HyperNormalisation (2016), he notes how the default of the city of New York in 1975 opened the door for private investment and the finance industry to get their hands on municipal governance on a large scale for the first time, and how this paved the way for the Thatcher-Reagan privatization wave in the ’80s. These last few “hinge” years of the 1970s offered the last chance for a real alternative to the coming neoliberal revolution. Soon, all alternatives for governance in the name of the public good were destroyed. Corporatism tightened its grip on the Western polity.

Cronenberg’s early eerie organizations—the “Canadian Academy of Erotic Enquiry” from Stereo (1969) and the panoply of gruesome academic and cosmetic conspiracies in his Crimes of the Future (1970)—eventually yielded to corporations like Scanners‘ ConSec and Videodrome‘s Spectacular Optical. In these early works, Cronenberg’s mysterious organizations are headed by visionary (mad) geniuses. In 1975’s Shivers, experiments by a lone mad scientist infect an entire apartment building with parasites, which awaken dark impulses in the building’s residents and spread themselves through sexual violence. But as the decade went on, Cronenberg slowly backed away from utilizing the character of a singular scientific genius harboring a twisted vision of the future. Now, organizations sought to pull the strings from the shadows. The key transitional work in this chronology is the sometimes-overlooked The Brood from 1979.

In the film, Oliver Reed plays esteemed psychologist Dr. Hal Raglan, who has developed a method of exorcising deep-seated psychological issues using a technique called “psychoplasmics.” In intense one-on-one sessions reminiscent of psychodrama, Raglan is able to physically remove trauma from the human body in the form of ulcers, rashes, and, we eventually discover, cancer. In the ultimate reveal, it’s shown that Raglan has helped traumatized patient Nola Carveth (Samantha Eggar) to birth violent, deformed homunculi who go out into the world, psychically connected to her, in order to resolve her childhood abandonment issues and abuse with bloody murder. Raglan’s foundation, the Somafree Institute of Psychoplasmics (its name simultaneously evocative of Aldous Huxley’s perfect drug soma, and reminiscent of fringe psychological research like Wilhelm Reich’s orgone theory) inhabits a modernist chalet far outside the city of Toronto. Non-resident patients have to be bussed in. Raglan’s public reputation is that of an eccentric, but effective, therapist. At several points in the film we see the covers of Raglan’s presumably best-selling The Shape of Rage. (Curiously, a decade later, in 1990, a documentary titled Child of Rage would be released covering the controversial use of “attachment therapy.”)

As depicted in the film, Somafree is not a corporation. But the thematic threads surrounding Raglan and his Institute are based on real-life trends in the 1970s. In its practices and in the person of Raglan, Somafree resembles psycho-intensive institutes like Esalen, self-improvement organizations like Lifespring, and personalities like Werner Erhard. Erhard’s est movement used primal abuse to ostensibly create psychological breakthroughs, helping the “patient” become more assertive, more powerful, less prone to obeying impulses caused by their early traumas. There is also the real-life analogue to the psychological method that Raglan employs: psychodrama. In the 1970s, new methods of conflict resolution pioneered in places like Esalen were beginning to seep into the mainstream of North American society. These methods soon spread into the corporate world as a purported means of defusing tensions at work and making an office more productive. The “encounter group” soon became a punchline, but the principles behind the Age of Aquarius’s more touchy-feely psychodynamic methods soon became part of the warp and weft of corporate culture in the ’80s and well beyond.

Nola’s estranged husband Frank interviews a former Raglan patient, Jan Hartog, in an attempt to discredit Somafree so Frank can regain custody of his daughter. This patient bears the scars of Raglan’s work on him: a lymphatic cancer sprouting from his neck (an eerie foreshadowing of the coming of another mysterious lymphatic disorder that would soon break out all over North America). Hartog plans to sue; not to achieve victory in a courtroom, but to destroy Raglan’s reputation. It doesn’t matter if they win, Hartog says, because “They’ll just remember the slogan. Psychoplasmics can cause cancer.” The 1970s was full of an increased awareness of the carcinogens that surrounded us in the late-industrial West—cigarettes, sweeteners, food dyes, and pesticides—thanks in large part to the nascent environmental and consumer rights movements, which turned the weapons of negative publicity against corporations.

By the time we get to Scanners in 1981, we are fully invested in a world of shadowy corporate overlords. A huge multinational security firm, ConSec, tries to shepherd psychics called “scanners,” ostensibly to help them control their powers, but also to utilize and exploit their paranormal abilities. Protagonist Cameron Vale (Stephen Lack) is apprehended off the streets, where, due to his psychic pain, he’s living as a derelict. We learn that scanners don’t “fit in” with society. When Vale is given the inhibitive drug ephemerol by ConSec’s head of scanner research, Dr. Paul Ruth (Patrick McGoohan), he is able to get himself together and is even given a new proto-yuppie wardrobe and mission by ConSec: eliminate rogue scanner Darryl Revok (Michael Ironside). But as Vale accepts his mission and new identity, he finds himself enlisted in ConSec’s private war against renegade scanners. When he runs into an emerging cell of scanners who are forming a powerful “group mind” in a New Age-like encounter session, assassins controlled by Revok murder most of the cell. “Everywhere you go, somebody dies,” one of the hive mind tells Vale, who is complicit with ConSec’s need to exert corporate control over scanners, including the use of violence as part of the corporate mission. Meanwhile, ConSec itself is riddled with moles working with Revok. Indeed, a chemical and pharmaceutical company called “Biocarbon Amalgamate,” founded by Dr. Ruth but now infiltrated by Revok, manufactures ephemerol in massive quantities. Scanners recontextualizes the Cold War espionage “wilderness of mirrors” in terms of corporate espionage for a new age of corporate domination. (It’s no coincidence that Cronenberg cast McGoohan, one of the Cold War’s most famous fictional spies, in the role of Dr. Ruth.)

ConSec’s corporate mission is revealed in a board meeting when the new head of security says, “We’re in the business of international security. We deal in weaponry and private armories.” This head of security also tells Dr. Ruth, “Let us leave the development of dolphins and freaks as weapons of espionage to others.” To the new breed of ConSec executive, fringe ’70s research is a thing of the past, despite its obvious power and relevance. The future is in fighting proxy wars, ensuring private security for the wealthy, and providing mercenary security forces. ConSec in this way is like many other private security firms that first emerged in the 1970s and ’80s. Begun as an outgrowth of post-colonial British military adventurism, the private military company soon became a way for ex-military officers to assure themselves a handsome post-service sinecure in a new era where hot wars were a thing of the past. “Brushfire wars” would continue to ensue, ensuring these companies an expanding portfolio, both in the waning years of the Cold War and in the 1990s and beyond. In fact, it’s interesting to note that many of the real-world military’s supposed psychic assets themselves got into private security after the U.S. Army shut down fringe science projects like Project STARGATE. Art imitates life imitates art.

Videodrome expands Cronenberg’s conspiratorial corporate, military, and espionage worldview into the rapidly exploding world of the media in the early ’80s. Leaps forward in technology, all of which are explicitly called out in Videodrome, litter the film’s visual landscape. Cable television, satellite transmissions (and the attendant hacking thereof), video cassette recorders, the rise of video pornography, virtual reality, postmodern media theory, and violence in entertainment all play essential roles in the film. Max Renn’s (James Woods) tiny Civic TV/Channel 83 (itself based on groundbreaking independent Toronto television station CityTV) is trying to survive as best it can in a world of massive international media players. Ever seeking the latest hit that will tap into the public’s unending hunger for sex and violence, his on-staff “satellite pirate” Harlan delivers the mysterious Videodrome transmission. Harlan is later revealed to be working with the Videodrome conspiracy, having intentionally exposed Max to the signal. In a memorable speech, Harlan nails Max’s amoral desire to sell sex and violence to his viewers: “This cesspool you call a television station, and your people who wallow around in it, and your viewers who watch you do it; you’re rotting us away from the inside.” When Renn is deep into his Videodrome-triggered hallucinations, he is offered corporate “help” much as Cameron Vale was. This time, his “savior” is Barry Convex, a representative of Spectacular Optical. In his video message to Max, he, like the ConSec executive before him, lays out Spectacular Optical’s corporate mission:

I’d like to invite you into the world of Spectacular Optical, an enthusiastic global corporate citizen. We make inexpensive glasses for the Third World… and missile guidance systems for NATO. We also make Videodrome, Max.

The final form of the military-industrial-entertainment complex is laid bare. Videodrome’s intent is to harden and make psychotic a North American television audience who’ve “become soft,” as Harlan puts it. Renn’s hallucinations are recorded, and he is literally “reprogrammed” to kill Civic TV’s board (thanks to the memorable hallucinatory image of Convex sticking a VHS tape into Renn’s gut). Renn is then reprogrammed to retaliate and assassinate Convex by the much more ’70s-cult Cathode Ray Mission of “media prophet” Brian O’Blivion, whose postmodern, expressly McLuhanesque view of television’s place in the world allowed Videodrome to come into existence in the first place: “I had a brain tumor and I had visions. I believe the visions caused the tumor and not the reverse… when they removed the tumor, it was called Videodrome.” It’s also worth noting that O’Blivion tells us that Videodrome made him its first victim; postmodern criticism of the medium of television is no match for its violent, cancerous growth.

The deregulation of media in the U.S. in the Reagan years is common knowledge; rules around children’s television were especially eviscerated, which allowed for an explosion in violent, warlike cartoons based on popular toy lines, training a new generation for a lifetime of endless war. Combined with the aforementioned explosion of video technology, the laissez-faire environment shepherded by Reagan’s FCC allowed a new breed of cable television magnates to get rich and created a television and media landscape with a relatively friction-free relationship to government. By the time the first Gulf War broke out in 1991, war provided the cable news networks with surefire ratings and cable news provided the propaganda platform for the war effort, a mutually beneficial (and Cronenberg-esque) symbiosis that’s continued to metastasize through multiple subsequent wars in the Middle East. The world of Videodrome, the one Harlan evokes where America will no longer be soft in a world full of tough hombres, has finally come to fruition thanks in part to all of our enmeshment in the video arena—the video drome.

After Videodrome—in The Fly (1986), Dead Ringers (1988), and Crash (1996)—Cronenberg focuses less on sinister organizations and more on monomaniacal researchers, doctors, and fetishists who pursue their individual idiosyncratic agendas through the director’s trademark twisting mindscapes (and bodyscapes). With the exception of eXistenZ (1999), Cronenberg’s meditation on computer technology and gaming released amidst the first dot-com bubble, and his Occupy-influenced adaptation of Don DeLillo’s 2003 novel Cosmopolis (2012), he has retreated from a more overt suspicion of corporations and shadowy conspiracies. His warning about these invisible masters pulling the strings of society came during the time period when something could have been done about corporate hegemony. But now, the conspiracy operates in the open. We are now all of us the dumb, trusting Cronenberg protagonist, lulled into a false sense of security by a series of “enthusiastic corporate citizens.” Long live the new flesh.

Annihilation: Alex Garland’s Bad Trip Through Dis-ease and Over-Reproduction

By Kim Nicolini

Source: CounterPunch

If you go see Alex Garland’s Annihilation (2018) – and I highly recommend you see this film in an actual movie theater with a big screen and big sound – you are in for a trip. Not a road trip. Not a good trip. But a bad trip. You may ask why I am urging you to see a film that will pull the ground out from under you, defy delivering a tidy narrative, refuse to answer your questions, and leave you in a state of discombobulated horror as if you just experienced a very bad 115-minute trip. There are a lot of reasons to join Garland’s journey into a shaky world where reproduction leads to destruction and where the further you go into the film the further you will find yourself separated from any known reality (just as the further the main characters delve into the ominous and alien Shimmer, the further they come unglued). At one point in the film, female scientist Dr. Ventress (Jennifer Jason Leigh) questions whether all the women who reside at the film’s center have lost their minds. After watching the film, you may very well ask yourself the same thing. But that is the power of the film. By provoking the audience to lose their minds, toss all rational thought to the wind, and deconstruct the most primal notions of stability, this sci-fi horror film unveils the fears that seep through collective humanity like a terminal illness and shows the unnatural and terrifying impact of human intervention in the natural world.

The movie is built on the basic sci-fi premise of a team of scientists sent on an expedition to explore an alien anomaly – in this case, the Shimmer. This mysterious form sprouted from an occurrence at a lighthouse and is rapidly devouring a national park and its surroundings, and it is hell-bent on eating up all humankind and the earth it occupies (emphasis on the term occupation). Annihilation is astoundingly beautiful while also being exceptionally terrifying. It will take you into an alluring yet unnerving world that reflects our own world through myriad lenses. The Shimmer takes the very substance of all life – DNA – and refracts it into a kaleidoscopic array of mutant variations. Most of them are terrifying, even when they are beautiful, and the realm of this film is one of absolute instability.

Like the characters in the film, we presently occupy an environment of fear, where every day we are confronted with new terrors and new monsters bombarding the airwaves and the internet, a world which is being ripped from the core, where the natural landscape is threatened with mutation by monster drills, where borders are pushed at us as if they are threats, and where females are both the source of growing power and the source of tremendous social anxiety. These and so many other things are delivered in Garland’s surreal portrait of four women on a scientific expedition into the unknown realm of the Shimmer, which is rapidly consuming the southern gulf coast and mutating or killing everyone who enters it.

The film is based on Jeff VanderMeer’s 2014 novel, and your first question may be how well Garland has adapted the book for the screen. Well, the book is the first thing he annihilates, so don’t attempt to compare. The material of the book inspired the film, but Garland acts not unlike the Shimmer. He has refracted the DNA of the book into its own species, something that none of us has ever seen before. In the film, the central scientist, Lena (Natalie Portman), discovers that all mutated plant species within the Shimmer are connected to one shared root system. VanderMeer’s book is like the movie’s root system, from which Garland has conceived his own lusciously nightmarish film species, growing a whole forest of ideas and visions that multiply in glorious weirdness.

Garland openly states that he engages in an anarchistic approach to filmmaking. He resists leadership and debunks the idea of the auteur and refuses to be one (though both films he directed – Ex Machina (2014) and Annihilation – bear striking similarities in aesthetics, production, and themes). An Alex Garland film is firmly and concretely an Alex Garland film. There is no way to mistake Garland’s use of glass and reflections (sliding doors as eerie otherworldly portals/prisons) or his cinematic obsession with reproduction (girl-bots and genetic engineering) for the films of anyone else. I commend Garland for his cooperative approach to filmmaking and for stepping back and letting people do what they are good at, trusting the experts he employs to do their job and refusing to interfere with their work. For example, when he partnered with Director of Photography Rob Hardy (also DP on Ex Machina), Garland didn’t dictate what lens or camera to use. He respects his DP as a collaborative artist within a team of collaborative artists, and he trusts that together they will produce uniquely beautiful and unsettling films. Likewise, Garland gives free rein to his actors to improvise, reinvent characters, and add their own unique dimensionality. His anarchistic approach to filmmaking shines through every surface of his films, and the surfaces in Annihilation indeed are magically shiny, slick with water, glistening with reflections, and refracted through glowing prisms.

Perhaps, Garland’s filmmaking anarchy also leads the audience to the sense that we are entering a world that never existed before because it only exists as a result of a distinct collaborative artistic process. It is a movie that can only result from a very specific mutation of elements. Just as the film relies on the image of cellular reproduction to create unique species that did not preexist, the cellular interaction of human creative DNA in Garland’s films creates a new species of movie, and for many, that is unsettling.

People are comfortable with what is familiar, and Annihilation is not like anything we have seen before, though we may recognize elements of its underlying DNA. The initial reference to the lighthouse as the locus for obliterating norms and a destination for the film’s team of women to reach echoes Virginia Woolf’s desperate plea for female autonomy, creative freedom, and liberation in her 1927 novel To the Lighthouse. The four female protagonists are headed to the source of the reproductive anomaly (representing a breach in the traditional female role as birther and caregiver), and they are mirroring an early work of feminist fiction through the lens of sci-fi horror (because reproduction and all its ramifications both intrigue and terrify men who want to understand and control something they can’t entirely understand and control). To reach the lighthouse, the women have to trek through Area X, a former national park which has now become a mutated kill zone that bears an eerie resemblance to the infamous Zone in Andrei Tarkovsky’s cinematic masterpiece Stalker (Сталкер, 1979). As in the Zone, Area X jumbles time, seems plagued by the aftermath of an environmental catastrophe, seeps water from every surface, glows with a haze of timeless loss, destabilizes all sense of location (compasses fail) and communication (technology signals drop), and unravels logic and reason. It also evokes the sense of some kind of radioactive disaster. To follow through on the film’s exploration of cancer as an act of self-destruction, radiation can cure (cancer) or kill (bombs). Finally, staying rooted in 1979, the film’s hazy dream/nightmarescape recalls the directorial style of Ridley Scott, and the Shimmer’s central root system – a seething, undulating network of organs that combined look like a horrifically alien birth canal – harkens back to H. R. Giger’s renditions of a monster-breeding alien reproduction system in Scott’s Alien (1979).
In other words, though Annihilation is its own cinematic species, it possesses the DNA of its cinematic and literary ancestors, which gives the audience a thread of familiarity even as we are being thrown into a psychedelic whirlwind of confusion and terror.

Both Annihilation and Ex Machina have very solid aesthetic and thematic grounding – the conjoining of the organic and the artificial which creates another dimension of being. Both films obsessively dissect, interrogate, and reconstruct ideas of reproduction and the murky, often shifting, line between reproduction and self-destruction.

Ex Machina explores the traditional horror film approach to reproduction by showing what happens when men try to take on the female role of reproducing through technological and/or scientific intervention. In this film, not only is the man the one reproducing, but he reproduces women as objects of male consumption – porno objects who can cook dinner, suck your dick, and kick up some dust on the dance floor. But in the end, man can’t outdo woman as the great reproducer. The girl-bots win, playing on man’s weak spots – all-consuming lust and ego – the man cancer that causes him to eat himself in an act of selfish self-destruction. The robo-girls beat both their inventor, who thinks his brains can buy him a pussy (on all fronts), and the nerdy tech geek who likes to believe he’s above fetishizing women when actually his attraction to a girl is ruled more by his hard-on than intellectual intrigue. The only ones these men are kidding are themselves. And they lose, and . . . they kind of get off on it, which flips us back into that loop that never seems to close.

Annihilation, on the other hand, puts women front and center. Female bodies invade a male genre – a troop of scientists and/or military guys sent on a mission to learn the secrets of and destroy a mysterious alien force – the Shimmer. We are not accustomed to seeing women in these roles, so the film annihilates traditional male-dominated sci-fi horror narratives. With another nod to Alien and a tribute to Sigourney Weaver’s Ripley, these women righteously bear automatic weapons to fend off the alien forces that threaten them. Remember how adept Ripley was at wielding a blowtorch? In one scene Portman’s Lena obliterates a gigantic mutated crocodile without batting an eye. She literally never blinks! Unlike its predecessor Ex Machina, which is fixated on male reproduction of female bodies, Annihilation focuses on a group of women who have somehow failed to reproduce. The central character Lena has destroyed her marriage and therefore snuffed her possible future as a mother. Anya (Gina Rodriguez) has infiltrated her body with drugs and booze instead of babies. Radek (Tessa Thompson) has actually “felt” life through self-destruction (cutting herself to the extent that her arms are mapped with scars) rather than giving life through reproduction. Finally, Ventress has no connections to anyone and projects herself as an ethereal trace of a rapidly vanishing body. Ventress is, it turns out, dying of cancer – the film’s stand-in metaphor for toxic reproduction, since cancer is the reproduction of cells to the point of biological annihilation.

The film opens with a close-up of cells multiplying under a microscope. We learn very quickly that they are cancerous cells from a female cervix – the gateway (or gatekeeper) to reproduction. From the film’s outset, reproduction is under attack (being annihilated). As we enter deeper into the Shimmer with the four women, we learn that cellular reproduction can be both beautiful and toxic. As Dr. Ventress states: “It is the source of all life, and of all death.” Therein lies the great conundrum, and the underlying horror of the movie (because this is a sci-fi horror film). By vividly exploring multiple angles of Reproduction Gone Wrong – from the Shimmer’s mutated plants and creatures to lethal cancer – Annihilation taps into some of the most prevalent collective social fears. Over-population (one of the greatest threats to the planet) is shown as both beautiful (“Look at all those gorgeous and strange flowers!”) and as claustrophobic and strangulating (“Look how that mutated corpse is sprouting from a tapestry of flowers!”). Fear of scientific intervention in human creation and the potential horrors of genetic engineering confront us full-body through abominable mutated creatures, some of which literally open their mouths and swallow us. A rampaging bear howls with the voice of a dead woman. A female scientist sprouts stems and leaves and morphs into a cross-species plant.

At its core, the film confronts one of the biggest social fears, one that has been planted so deeply in the collective unconscious that many people are unaware of it. Even at this point in the 21st century, when you would think people would “know better,” the large majority of the population – both male and female – relies on the traditional role of women as mother caregivers for a sense of stability. This film destabilizes patriarchal order by refusing to put its lead female characters in maternal roles and instead putting them in the traditional male shoes of scientists – and in Lena’s case, scientist-soldier.

Unlike the women’s bodies, the land in the Shimmer has no problem reproducing. It reproduces itself crazy. It reproduces itself to annihilation, one of the great conundrums of the film – that reproduction (as in cancer) leads to complete destruction. Still, the women push through the Shimmer as it refracts all DNA, reproducing mutant and sometimes terrifying life forms. Climbing through overgrown plants, encountering hybrid animals, and camping out in abandoned houses and military encampments, the women make their way through an iridescent beautifully toxic world. Shimmering wet rainbows resemble the iridescence of a biologically disastrous oil spill. Though terrified and with the very ground of their minds unraveling, the women keep pushing, even as their numbers dwindle, and they confront such images as a live autopsy and its resulting mutation; a psychotic rampaging monster bear; tree-humans/human-trees; alligator-shark hybrids; and myriad other grotesque surprises.

In the end, however, the most terrifying image is the one of reproduction and destruction when Lena confronts herself and births her mutated, alien replicant via a seething, pulsing psychedelic vagina. At once curiously alluring and beautifully horrific, the magnum opus of the film occurs in a scene that defies description but must be experienced on the big screen as the vagina swirls in fleshy prismatic colors, its form both bulging and opening. In the climactic act of self-reproduction and destruction, the screen/vagina opens into a bottomless black birth canal and swallows the audience. There are few things more terrifying than a psychedelic vagina the size of a theater screen opening its black hole to swallow you alive while giving birth to your mutated duplicate self.

One of the many reasons this film is so unsettling and delivers such an overwhelming sense of dread is that it refuses to offer any middle ground. Everything is turned on its head. Actions and environments are extreme. Women bear arms instead of children. Interior landscapes are eerily sterile, filled with plastic zippered rooms, stainless steel furniture, and windows reflecting windows reflecting more windows. Not one organic thing lives in the lab, except the women (and one dying man and a few men in hazmat suits). Outside, the landscape is abominably fertile. Creatures are like beautifully terrifying genetic experiments. The land is so pregnant, you could practically barf looking at it. It is both bulging with life and seething with decay. Seemingly lovely flowers evoke feminist fiber art run amok. Humans and nature blend not into a vision of utopian bliss, but into an unnerving psychedelic bad trip. While reproduction is supposed to be the act of life, in this world it is a death sentence where living things reproduce themselves to annihilation, echoing the metaphor of cancer – a disease in which the body actually consumes itself with its own cellular reproduction. The film itself is an act of reproduction, reproducing itself in movie theaters while audiences succumb to, absorb, and are mutated by its toxic beauty. This is the kind of movie you don’t easily forget. It will infiltrate your dreams. Next time you take a hike through a densely wooded forest, you may think twice before exploring that abandoned cabin.

The mismatch between humans and nature and its potential for disastrous consequences leads to some excellent moments of sci-fi horror (you will be terrified) while also questioning the nightmarish impact and consequences of human exploitation of the environment and natural world. Let’s close those national parks and drill! But remember, if you keep on drilling, you may give birth to a monster. Throughout the film, music is critical to the movie’s unsettling hallucinatory delivery. With a soundtrack composed by Portishead’s Geoff Barrow and his long-time collaborator Ben Salisbury, the music is as large and imposing a character as the mutant bear. Alternating between soft acoustic guitar from another era, full orchestral strings, assaultive horns, and bombastically creepy synths, the music doesn’t tell us how to feel; it immerses us in feeling. Complementing the film with orchestral moans and sonic decay, the music tips the scales of this movie toward outright Very Bad Trip. But it’s an entertaining trip!

Annihilation may be the most mind-boggling movie of the century. As it builds and breeds and breathes and opens its mouth and swallows us whole, the movie oozes questions and refuses answers. Because the story is told strictly from the single POV of the unreliable narrator Lena, we don’t know what to believe and not believe, what is happening, what is a demented hallucination, what is past, present, or future. In one scene, Anya screams over and over: “Lena is a liar! Lena is a liar!” And maybe she is. We never know; we can’t tell whether she is lying to us. When asked to recount what happened in the Shimmer, her most common reply is: “I don’t know.” She doesn’t know, and neither do we, just like in the world outside the Shimmer, where we are bombarded with “fake news,” false alarms, and paranoid manufactured distractions to prevent us from getting to answers.

At this point, you may be asking, “But what about Oscar Isaac and his character Kane?” He exists in ghost form, in memory, propped up by life support, leaking blood from mutated organs, or as a reconstituted alien being. In other words, he has been stripped of solidity. The central conjoining entities in the film are Lena and Kane, but Lena destroyed their marriage in an act of self-destruction. Lena, who introduces the cancer cells in the beginning of the film, is a cancer herself, and oddly the lone survivor, perhaps because she is the mutant cell that consumes everything in an act of self-destruction that ironically keeps her alive. I know – what a lot of confusing hogwash.

But the world is confusing hogwash! We live in a time of questions not answers, a time of abstract fear that permeates everything and saturates our very souls with instability. The earth is dying; the System is lying; our hearts and land are crying; and there are no fucking answers.

Launching a cast of women in traditional male roles and playing on the trope of cancer as the ultimate method of lethal reproduction, Annihilation blows a hole through just about everything known and turns it into an unknown. It annihilates preconceptions about conception; rational thought; traditional gender roles; cinematic genre; social expectations; definitions of species; fundamental biology and earth science; the possibility of a future; the application of human thought to unanswerable questions; and the idea of self itself. And the annihilation is both beautiful and horrific. The movie screen seems to actually breathe with mutated life as it sucks us into its tantalizing bad trip. And I loved every minute of it. Personally, I’d rather be on a bad trip that explores socio-political fears and anxiety through a hallucinatory cinematic lens than succumb to the excessively toxic reproduction and biased distortion of an unreal reality.

Saturday Matinee: Stare Into The Lights My Pretties

Source: https://stareintothelightsmypretties.jore.cc/

Logline

A film about screen culture and its implications. While the world burns, where are we?

Introduction

We live in a world of screens. The average adult spends the majority of their waking hours in front of some sort of screen or device. We’re enthralled, we’re addicted to these machines. How did we get here? Who benefits? What are the cumulative impacts on people, society and the environment? What may come next if this culture is left unchecked, to its end trajectory, and is that what we want?

Stare Into The Lights My Pretties investigates these questions with an urge to return to the real physical world, to form a critical view of technological escalation driven by rapacious and pervasive corporate interest. Covering themes of addiction, privacy, surveillance, information manipulation, behaviour modification and social control, the film lays the foundations as to why we may feel like we’re sleeprunning into some dystopian nightmare with the machines at the helm. Because we are, if we don’t seriously avert our eyes to stop this culture from destroying what is left of the real world.

Purpose

This independent film was made with no budget (adding to its authenticity) and no affiliations; it is not-for-profit and is released to the world for free for the purposes of critical discourse, education, and cultivating radical social and political change.