Disarming the Weapons of Mass Distraction

By Madeleine Bunting

Source: Rise Up Times

“Are you paying attention?” The phrase still resonates with a particular sharpness in my mind. It takes me straight back to my boarding school, aged thirteen, when my eyes would drift out the window to the woods beyond the classroom. The voice was that of the math teacher, the very dedicated but dull Miss Ploughman, whose furrowed grimace I can still picture.

We’re taught early that attention is a currency—we “pay” attention—and much of the discipline of the classroom is aimed at marshaling the attention of children, with very mixed results. We all have a history here, of how we did or did not learn to pay attention and all the praise or blame that came with that. It used to be that such patterns of childhood experience faded into irrelevance. As we reached adulthood, how we paid attention, and to what, was a personal matter and akin to breathing—as if it were automatic.

Today, though, as we grapple with a pervasive new digital culture, attention has become an issue of pressing social concern. Technology provides us with new tools to grab people’s attention. These innovations are dismantling traditional boundaries of private and public, home and office, work and leisure. Emails and tweets can reach us almost anywhere, anytime. There are no cracks left in which the mind can idle, rest, and recuperate. A taxi ad offers free wifi so that you can remain “productive” on a cab journey.

Even those spare moments of time in our day—waiting for a bus, standing in a queue at the supermarket—can now be “harvested,” says the writer Tim Wu in his book The Attention Merchants. In this quest to pursue “those slivers of our unharvested awareness,” digital technology has provided consumer capitalism with its most powerful tools yet. And our attention fuels it. As Matthew Crawford notes in The World Beyond Your Head, “when some people treat the minds of other people as a resource, this is not ‘creating wealth,’ it is transferring it.”

There’s a whiff of panic around the subject: the story that our attention spans are now shorter than a goldfish’s attracted millions of readers on the web; it’s still frequently cited, despite its questionable veracity. Rates of diagnosis of attention deficit hyperactivity disorder in children have soared, creating an $11 billion global market for pharmaceutical companies. Every glance of our eyes is now tracked for commercial gain as ever more ingenious ways are devised to capture our attention, if only momentarily. Our eyeballs are now described as capitalism’s most valuable real estate. Both our attention and its deficits are turned into lucrative markets.

There is also a domestic economy of attention; within every family, some get it and some give it. We’re all born needing the attention of others—our parents’, especially—and from the outset, our social skills are honed to attract the attention we need for our care. Attention is woven into all forms of human encounter from the most brief and transitory to the most intimate. It also becomes deeply political: who pays attention to whom?

Social psychologists have researched how the powerful tend to tune out the less powerful. One study with college students showed that even in five minutes of friendly chat, wealthier students showed fewer signs of engagement when in conversation with their less wealthy counterparts: less eye contact, fewer nods, and more checking the time, doodling, and fidgeting. Discrimination of race and gender, too, plays out through attention. Anyone who’s spent any time in an organization will be aware of how attention is at the heart of office politics. A suggestion is ignored in a meeting, but is then seized upon as a brilliant solution when repeated by another person.

What is political is also ethical. Matthew Crawford argues that this is the essential characteristic of urban living: a basic recognition of others.

And then there’s an even more fundamental dimension to the politics of attention. At a primary level, all interactions in public space require a very minimal form of attention, an awareness of the presence and movement of others. Without it, we would bump into each other, frequently.

I had a vivid demonstration of this point on a recent commute: I live in East London and regularly use the narrow canal paths for cycling. It was the canal rush hour—lots of walkers with dogs, families with children, joggers as well as cyclists heading home. We were all sharing the towpath with the usual mixture of give and take, slowing to allow passing, swerving around and between each other. Only this time, a woman was walking down the center of the path with her eyes glued to her phone, impervious to all around her. This went well beyond a moment of distraction. Everyone had to duck and weave to avoid her. She’d abandoned the unspoken contract that avoiding collision is a mutual obligation.

This scene is now a daily occurrence for many of us, in shopping centers, station concourses, or on busy streets. Attention is the essential lubricant of urban life, and without it, we’re denying our co-existence in that moment and place. The novelist and philosopher Iris Murdoch writes that the most basic requirement for being good is that a person “must know certain things about his surroundings, most obviously the existence of other people and their claims.”

Attention is what draws us out of ourselves to experience and engage in the world. The word is often accompanied by a verb—attention needs to be grabbed, captured, mobilized, attracted, or galvanized. Reflected in such language is an acknowledgement of how attention is the essential precursor to action. The founding father of psychology William James provided what is still one of the best working definitions:

It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others.

Attention is a limited resource and has to be allocated: to pay attention to one thing requires us to withdraw it from others. There are two well-known dimensions to attention, explains Willem Kuyken, a professor of psychology at Oxford. The first is “alerting”— an automatic form of attention, hardwired into our brains, that warns us of threats to our survival. Think of when you’re driving a car in a busy city: you’re aware of the movement of other cars, pedestrians, cyclists, and road signs, while advertising tries to grab any spare morsel of your attention. Notice how quickly you can swerve or brake when you spot a car suddenly emerging from a side street. There’s no time for a complicated cognitive process of decision making. This attention is beyond voluntary control.

The second form of attention is known as “executive”—the process by which our brain selects what to foreground and focus on, so that there can be other information in the background—such as music when you’re cooking—but one can still accomplish a complex task. Crucially, our capacity for executive attention is limited. Contrary to what some people claim, none of us can multitask complex activities effectively. The next time you write an email while talking on the phone, notice how many typing mistakes you make or how much you remember from the call. Executive attention can be trained, and needs to be for any complex activity. This was the point James made when he wrote: “there is no such thing as voluntary attention sustained for more than a few seconds at a time… what is called sustained voluntary attention is a repetition of successive efforts which bring back the topic to the mind.”

Attention is a complex interaction between memory and perception, in which we continually select what to notice, thus finding the material which correlates in some way with past experience. In this way, patterns develop in the mind. We are always making meaning from the overwhelming raw data. As James put it, “my experience is what I agree to attend to. Only those items which I notice shape my mind—without selective interest, experience is an utter chaos.”

And we are constantly engaged in organizing that chaos, as we interpret our experience. This is clear in the famous Gorilla Experiment in which viewers were told to watch a video of two teams of students passing a ball between them. They had to count the number of passes made by the team in white shirts and ignore those of the team in black shirts. The experiment is deceptively complex because it involves three forms of attention: first, scanning the whole group; second, ignoring the black T-shirt team to keep focus on the white T-shirt team (a form of inhibiting attention); and third, remembering to count. In the middle of the experiment, someone in a gorilla suit ambles through the group. Afterward, when asked, half the viewers hadn’t spotted the gorilla and couldn’t even believe it had been there. We can be blind not only to the obvious, but to our blindness.

There is another point in this experiment which is less often emphasized. Ignoring something—such as the black T-shirt team in this experiment—requires a form of attention. It costs us attention to ignore something. Many of us live and work in environments that require us to ignore a huge amount of information—that flashing advert, a bouncing icon or pop-up.

In another famous psychology experiment, Walter Mischel’s Marshmallow Test, four-year-olds were given a choice: eat one marshmallow immediately, or wait fifteen minutes and get two. Each child was filmed alone in a room in front of a plate with a single marshmallow. They squirmed and fidgeted, poked the marshmallow and stared at the ceiling. A third of the children couldn’t resist the marshmallow and gobbled it up, a third nibbled cautiously, but the last third figured out how to distract themselves. They looked under the table, sang… did anything but look at the sweet. It’s a demonstration of the capacity to reallocate attention. In a follow-up study some years later, those who’d been able to wait for the second marshmallow had better life outcomes, such as academic achievement and health. One New Zealand study of 1,000 children found that this form of self-regulation was a more reliable predictor of future success and wellbeing than even a good IQ or comfortable economic status.

What, then, are the implications of how digital technologies are transforming our patterns of attention? In the current political anxiety about social mobility and inequality, more weight needs to be put on this most crucial and basic skill: sustaining attention.

*

I learned to concentrate as a child. Being a bookworm helped. I’d be completely absorbed in my reading as the noise of my busy family swirled around me. It was good training for working in newsrooms; when I started as a journalist, they were very noisy places with the clatter of keyboards, telephones ringing and fascinating conversations on every side. What has proved much harder to block out is email and text messages.

The digital tech companies know a lot about this widespread habit; many of them have built a business model around it. They’ve drawn on the work of the psychologist B.F. Skinner who identified back in the Thirties how, in animal behavior, an action can be encouraged with a positive consequence and discouraged by a negative one. In one experiment, he gave a pigeon a food pellet whenever it pecked at a button and the result, as predicted, was that the pigeon kept pecking. Subsequent research established that the most effective way to keep the pigeon pecking was “variable-ratio reinforcement.” Give the pigeon a food pellet sometimes, and you have it well and truly hooked.

We’re just like the pigeon pecking at the button when we check our email or phone. It’s a humiliating thought. Variable reinforcement ensures that the customer will keep coming back. It’s the principle behind one of the most lucrative US industries: slot machines, which generate more profit than baseball, films, and theme parks combined. Gambling was once tightly restricted for its addictive potential, but most of us now have the attentional equivalent of a slot machine in our pocket, beside our plate at mealtimes, and by our pillow at night. Even during a meal out, a play at the theater, a film, or a tennis match. Almost nothing is now experienced uninterrupted.

Anxiety about the exponential rise of our gadget addiction and how it is fragmenting our attention is sometimes dismissed as a Luddite reaction to a technological revolution. But that misses the point. The problem is not the technology per se, but the commercial imperatives that drive the new technologies and, unrestrained, colonize our attention by fundamentally changing our experience of time and space, saturating both in information.

In much public space, wherever your eye lands—from the back of the toilet door, to the handrail on the escalator, or the hotel key card—an ad is trying to grab your attention, and does so by triggering the oldest instincts of the human mind: fear, sex, and food. Public places become dominated by people trying to sell you something. In his tirade against this commercialization, Crawford cites advertisements on the backs of school report cards and on debit machines where you swipe your card. Before you enter your PIN, that gap of a few seconds is now used to show adverts. He describes silence and ad-free experience as “luxury goods” that only the wealthy can afford. Crawford has invented the concept of the “attentional commons,” free public spaces that allow us to choose where to place our attention. He draws the analogy with environmental goods that belong to all of us, such as clean air or clean water.

Some legal theorists are beginning to conceive of our own attention as a human right. One former Google employee warned that “there are a thousand people on the other side of the screen whose job it is to break down the self-regulation you have.” They use the insights into human behavior derived from social psychology—the need for approval, the need to reciprocate others’ gestures, the fear of missing out. Your attention ceases to be your own, pulled and pushed by algorithms. Attention is referred to as the real currency of the future.

*

In 2013, I embarked on a risky experiment in attention: I left my job. Over the previous two years, something had crept up on me: I could no longer read beyond a few paragraphs. My eyes would glaze over and, even more disastrously for someone who had spent their career writing, I seemed unable to string together my thoughts, let alone write anything longer than a few sentences. When I try to explain the impact, I can only offer a metaphor: it felt like my imagination and use of language were vacuum-packed, like a slab of meat coated in plastic. I had lost the ability to turn ideas around, see them from different perspectives. I could no longer draw connections between disparate ideas.

At the time, I was working in media strategy. It was a culture of back-to-back meetings from 8:30 AM to 6 PM, and there were plenty of advantages to be gained from continuing late into the evening if you had the stamina. Commitment was measured by emails with a pertinent weblink. Meetings were sometimes as brief as thirty minutes and frequently ran through lunch. Meanwhile, everyone was sneaking time to battle with the constant emails, eyes flickering to their phone screens in every conversation. The result was a kind of crazy fog, a mishmash of inconclusive discussions.

At first, it was exhilarating, like being on those crazy rides in a theme park. By the end, the effect was disastrous. I was almost continuously ill, battling migraines and unidentifiable viruses. When I finally made the drastic decision to leave, my income collapsed to a fraction of its previous level and my family’s lifestyle had to change accordingly. I had no idea what I was going to do; I had lost all faith in my ability to write. I told friends I would have to return the advance I’d received to write a book. I had to try to get back to the skills of reflection and focus that had once been ingrained in me.

The first step was to teach myself to read again. I sometimes went to a café, leaving my phone and computer behind. I had to slow down the racing incoherence of my mind so that it could settle on the text and its gradual development of an argument or narrative thread. The turning point in my recovery was a five-week research trip to the Scottish Outer Hebrides. On the journey north of Glasgow, my mobile phone lost its Internet connection. I had cut myself loose, with only the occasional text or call to family back home. Somewhere on the long Atlantic beaches of these wild and dramatic islands, I rediscovered my ability to write.

I attribute that in part to a stunning exhibition I came across in the small harbor town of Lochboisdale, on the island of South Uist. Vija Celmins is an acclaimed Latvian-American artist whose work is famous for its astonishing patience. She can take a year or more to make a woodcut that portrays in minute detail the surface of the sea. A postcard of her work now sits above my desk, a reminder of the power of slow thinking.

Just as we’ve had a slow eating movement, we need a slow thinking campaign. Its manifesto could be the German-language poet Rainer Maria Rilke’s beautiful “Letters to a Young Poet”:

To let every impression and the germ of every feeling come to completion inside, in the dark, in the unsayable, the unconscious, in what is unattainable to one’s own intellect, and to wait with deep humility and patience for the hour when a new clarity is delivered.

Many great thinkers attest that they have their best insights in moments of relaxation, the proverbial brainwave in the bath. We actually need what we most fear: boredom.

When I left my job (and I was lucky that I could), friends and colleagues were bewildered. Why give up a good job? But I felt that here was an experiment worth trying. Crawford frames it well as “intellectual biodiversity.” At a time of crisis, we need people thinking in different ways. If we all jump to the tune of Facebook or Instagram and allow ourselves to be primed by Twitter, the danger is that we lose the “trained powers of concentration” that allow us, in Crawford’s words, “to recognize that independence of thought and feeling is a fragile thing, and requires certain conditions.”

I also took to heart the insights of the historian Timothy Snyder, who concluded from his studies of twentieth-century European totalitarianism that the way to fend off tyranny is to read books, make an effort to separate yourself from the Internet, and “be kind to our language… Think up your own way of speaking.” Dropping out and going offline enabled me to get back to reading, voraciously, and to writing; beyond that, it’s too early to announce the results of my experiment with attention. As Rilke said, “These things cannot be measured by time, a year has no meaning, and ten years are nothing.”

*

A recent column in The New Yorker cheekily suggests that all the fuss about the impact of digital technologies on our attention is nothing more than writers’ worrying about their own working habits. Is all this anxiety about our fragmenting minds a moral panic akin to those that swept Victorian Britain about sexual behavior? Patterns of attention are changing, but perhaps it doesn’t much matter?

My teenage children read much less than I did. One son used to play chess online with a friend, text on his phone, and do his homework all at the same time. I was horrified, but he got a place at Oxford. At his interview, he met a third-year history undergraduate who told him he hadn’t yet read any books in his time at university. But my kids are considerably more knowledgeable about a vast range of subjects than I was at their age. There’s a small voice suggesting that the forms of attention I was brought up with could be a thing of the past; the sustained concentration required to read a whole book will become an obscure niche hobby.

And yet, I’m haunted by a reflection: the magnificent illuminations of the eighth-century Book of Kells have intricate patterning that no one has ever been able to copy, such is the fineness of the tight spirals. Lines are a millimeter apart. They indicate a steadiness of hand and mind—a capability most of us have long since lost. Could we be trading the capacity for focus for a breadth of reference? Some might argue that’s not a bad trade. But we would lose depth: artist Paul Klee wrote that he would spend a day in silent contemplation of something before he painted it. Paul Cézanne was similarly known for his trance-like attention to his subject. Madame Cézanne recollected how her husband would gaze at the landscape; he told her, “The landscape thinks itself in me, and I am its consciousness.” The philosopher Maurice Merleau-Ponty describes a contemplative attention in which one steps outside of oneself and immerses oneself in the object of attention.

It’s not just artists who require such depth of attention. Nearly two decades ago, a doctor teaching medical students at Yale was frustrated at their inability to distinguish between types of skin lesions. Their gaze seemed restless and careless. He took his students to an art gallery and told them to look at a picture for fifteen minutes. The program is now used in dozens of US medical schools.

Some argue that losing the capacity for deep attention presages catastrophe. It is the building block of “intimacy, wisdom, and cultural progress,” argues Maggie Jackson in her book Distracted, in which she warns that “as our attentional skills are squandered, we are plunging into a culture of mistrust, skimming, and a dehumanizing merging between man and machine.” Significantly, her research began with a curiosity about why so many Americans were deeply dissatisfied with life. She argues that losing the capacity for deep attention makes it harder to make sense of experience and to find meaning—from which comes wonder and fulfillment. She fears a new “dark age” in which we forget what makes us truly happy.

Strikingly, the epicenter of this wave of anxiety over our attention is the US. All the authors I’ve cited are American. It’s been argued that this debate represents an existential crisis for America because it exposes the flawed nature of its greatest ideal, individual freedom. The commonly accepted notion is that to be free is to make choices, and no one can challenge that expression of autonomy. But if our choices are actually engineered by thousands of very clever, well-paid digital developers, are we free? The former Google employee Tristan Harris confessed in an article in 2016 that technology “gives people the illusion of free choice while architecting the menu so that [tech giants] win, no matter what you choose.”

Despite my children’s multitasking, I maintain that vital human capacities—depth of insight, emotional connection, and creativity—are at risk. I’m intrigued as to what the resistance might look like. There are stirrings of protest with the recent establishment of initiatives such as the Time Well Spent movement, founded by tech industry insiders who have become alarmed at the efforts invested in keeping people hooked. But collective action is elusive; the emphasis is repeatedly on the individual to develop the necessary self-regulation, but if that is precisely what is being eroded, we could be caught in a self-reinforcing loop.

One of the most interesting responses to our distraction epidemic is mindfulness. Its popularity is evidence that people are trying to find a way to protect and nourish their minds. Jon Kabat-Zinn, who pioneered the development of secular mindfulness, draws an analogy with jogging: just as keeping your body fit is now well understood, people will come to realize the importance of looking after their minds.

I’ve meditated regularly for twenty years, but curious as to how this is becoming mainstream, I went to an event in the heart of high-tech Shoreditch in London. In a hipster workspace with funky architecture, excellent coffee, and an impressive range of beards, a soft-spoken retired Oxford professor of psychology, Mark Williams, was talking about how multitasking has a switching cost in focus and concentration. Our unique human ability to remember the past and to think ahead brings a cost; we lose the present. To counter this, he advocated a daily practice of mindfulness: bringing attention back to the body—the physical sensations of the breath, the hands, the feet. Williams explained how fear and anxiety inhibit creativity. In time, the practice of mindfulness enables you to acknowledge fear calmly and even to investigate it with curiosity. You learn to place your attention in the moment, noticing details such as the sunlight or the taste of the coffee.

On a recent retreat, I was beside a river early one morning and a rower passed. I watched the boat slip by and enjoyed the beauty in a radically new way. The moment was sufficient; there was nothing I wanted to add or take away—no thought of how I wanted to do this every day, or how I wanted to learn to row, or how I wished I was in the boat. Nothing but the pleasure of witnessing it. The busy-ness of the mind had stilled. Mindfulness can be a remarkable bid to reclaim our attention and to claim real freedom, the freedom from our habitual reactivity that makes us easy prey for manipulation.

But I worry that the integrity of mindfulness is fragile, vulnerable both to commercialization by employers who see it as a form of mental performance enhancement and to consumer commodification, rather than contributing to the formation of ethical character. Mindfulness as a meditation practice originates in Buddhism, and without that tradition’s ethics, there is a high risk of it being hijacked and misrepresented.

Back in the Sixties, the countercultural psychologist Timothy Leary rebelled against the conformity of the new mass media age and called for, in Crawford’s words, an “attentional revolution.” Leary urged people to take control of the media they consumed as a crucial act of self-determination; pay attention to where you place your attention, he declared. The social critic Herbert Marcuse believed Leary was fighting the struggle for the ultimate form of freedom, which Marcuse defined as the ability “to live without anxiety.” These were radical prophets whose words have an uncanny resonance today. Distraction has become a commercial and political strategy, and it amounts to a form of emotional violence that cripples people, leaving them unable to gather their thoughts and overwhelmed by a sense of inadequacy. It’s a powerful form of oppression dressed up in the language of individual choice.

The stakes could hardly be higher, as William James knew a century ago: “The faculty of voluntarily bringing back a wandering attention, over and over again, is the very root of judgment, character, and will.” And what are we humans without these three?

Why America’s Major News-Media Must Change Their Thinking

By Eric Zuesse

Source: Strategic Culture Foundation

America’s ‘news’-media possess the mentality that characterizes a dictatorship, not a democracy. This will be documented in the empirical data that are linked to and discussed below. But, first, here is what those data will document, and what will make sense of them:

In a democracy, the public perceive their country to be improving, in accord with that nation’s values and priorities. Consequently, they trust their government, and especially they approve of the job-performance of their nation’s leader. In a dictatorship, they don’t. In a dictatorship, the government doesn’t really represent them, at all. It represents the rulers, typically a national oligarchy, an aristocracy of the richest 0.1% or even of only the richest 0.01%. No matter how much the government ‘represents’ the public in law (or “on paper”), it’s not representing them in reality; and, so, the public don’t trust their government, and the public’s job-rating of their national leader, the head-of-state, is poor, perhaps even more disapproval than approval. So, whereas in a democracy, the public widely approve of both the government and the head-of-state; in a dictatorship, they don’t.

In a dictatorship, the ‘news’-media hide reality from the public, in order to serve the government — not the public. But the quality of government that the regime delivers to its public cannot be hidden as the lies continually pile up, and as the promises remain unfulfilled, and as the public find that despite all of the rosy promises, things are no better than before, or are even becoming worse. Trust in such a government falls, no matter how much the government lies and its media hide the fact that it has been lying. Though a ‘democratic’ election might not retain in power the same leaders, it retains in power the same regime (be it the richest 0.1%, or the richest 0.01%, or The Party, or whatever the dictatorship happens to be). That’s because it’s a dictatorship: it represents the same elite of power-holding insiders, no matter what. It does not represent the public. That elite — whatever it is — is referred to as the “Deep State,” and the same Deep State can control more than one country, in which case there is an empire, which nominally is headed by the head-of-state of its leading country (this used to be called an “Emperor”), but which actually consists of an alliance between the aristocracies within all these countries; and, sometimes, the nominal leading country is actually being led, in its foreign policies, by wealthier aristocrats in the supposedly vassal nations. But no empire can be a democracy, because the residents in no country want to be governed by any foreign power: the public, in every land, want their nation to be free — they want democracy, no dictatorship at all, especially no dictatorship from abroad.

In order for the elite to change, a revolution is required, even if it’s only to a different elite, instead of to a democracy. So, if there is no revolution, then certainly it’s the same dictatorship as before. The elite has changed (and this happens at least as often as generations change), but the dictatorship has not. And in order to change from a dictatorship to a democracy, a revolution also is required, but it will have to be a revolution that totally removes from power the elite (and all their agents) who had been ruling. If this elite had been the nation’s billionaires and its centi-millionaires who had also been billionaire-class donors to political campaigns (such as has been proven to be the case in the United States), then those people, who until the revolution had been behind the scenes producing the bad government, need to be dispossessed of their assets, because their assets were being used as their weapons against the public, and those weapons need (if there is to be a democracy) to be transferred to the public as represented by the new and authentically democratic government. If instead the elite had been a party, then all of those individuals need to be banned from every sort of political activity in the future. But, in either case, there will need to be a new constitution, and a consequent new body of laws, because the old order (the dictatorship) no longer reigns — it’s no longer in force after a revolution. That’s what “revolution” means. It doesn’t necessarily mean “democratic,” but sometimes it does produce a democracy where there wasn’t one before. The idea that every revolution is democratic is ridiculous, though it’s often assumed in ‘news’-reports. In fact, coups (which the U.S. Government specializes in like no other) often are a revolution that replaces a democracy by a dictatorship (such as the U.S. Government did to Ukraine in 2014, for example, and most famously before that, did to Iran in 1953). (Any country that perpetrates a coup anywhere is a dictatorship over the residents there, just the same as is the case when any invasion and occupation of a country are perpetrated upon a country. The imposed stooges are stooges, just the same. No country that imposes coups and/or invasions/occupations upon any government that has not posed an existential threat against the residents of that perpetrating country, supports democracy; to the exact contrary, that country unjustifiably imposes dictatorships; it spreads its own dictatorship, which is of the imperialistic type, and any government that spreads its dictatorship is evil and needs to be replaced — revolution is certainly justified there.)

This is how to identify which countries are democracies, and which ones are not: In a democracy, the public are served by the government, and thus are experiencing improvement in their lives and consequently approve of the job-performance of their head-of-state, and they trust the government. But in a dictatorship, none of these things is true.

In 2014, a Japanese international marketing-research firm polled citizens in each of ten countries asking whether they approve or disapprove of the job-performance of their nation’s head-of-state, and Harvard then provided an English-translated version online for a few years, then eliminated that translation from its website; but, fortunately, the translation had been web-archived and so is permanent here (with no information however regarding methodology or sampling); and it shows the following percentages who approved of the job-performance of their President or other head-of-state in each of the given countries, at that time:

China (Xi)          90%

Russia (Putin)      87%

India (Modi)        86%

South Africa (Zuma) 70%

Germany (Merkel)    67%

Brazil (Rousseff)   63%

U.S. (Obama)        62%

Japan (Abe)         60%

UK (Cameron)        55%

France (Hollande)   48%

In January 2018, the global PR firm Edelman came out with the latest in their annual series of scientifically polled surveys in more than two dozen countries throughout the world, tapping into, actually, some of the major criteria within each nation indicating whether or not the given nation is more toward the dictatorship model, or more toward the democracy model. The 2018 Edelman Trust Barometer survey showed that “Trust in Government” (scored and ranked on page 39) was 44% in Russia, and only 33% in the United States. Trust in Government was the highest in China: 84%. The U.S. and Russia are the nuclear super-powers; and the U.S. and China are the two economic super-powers; so, these are the world’s three leading powers; and, on that single measure of whether or not a country is democratic, China is the global leader (#1 of 28), Russia is in the middle (#13 of 28), and the U.S. ranks at the bottom of the three, and near the bottom of the entire lot (#21 of 28). (#28 of 28 is South Africa, which, thus — clearly in retrospect — had a failed revolution when it transitioned out of its apartheid dictatorship. That’s just a fact, which cannot reasonably be denied, given this extreme finding. Though the nation’s leader, Zuma, was, according to the 2014 Japanese study, widely approved by South Africans, his Government was overwhelmingly distrusted. This distrust indicates that the public don’t believe that the head-of-state actually represents the Government. If the head-of-state doesn’t represent the Government, the country cannot possibly be a democracy: the leader might represent the people, but the Government doesn’t.)

When the government is trusted but the head-of-state is not, or vice-versa, there cannot be a functioning democracy. In other words: if either the head-of-state, or the Government, is widely distrusted, there’s a dictatorship at that time, and the only real question regarding it, is: What type of dictatorship is this?

These figures — the numbers reported here — contradict the ordinary propaganda; and, so, Edelman’s trust-barometer on each nation’s ‘news’-media (which are scored and ranked on page 40) might also be considered, because the natural question now is whether unreliable news-media might have caused this counter-intuitive (in Western countries) rank-order. However, a major reason why this media-trust question is of only dubious relevance to whether or not the given nation is a democracy is that to assume it is relevant presumes that trust in the government can be that easily manipulated — it actually can’t. Media and PR can’t do that; they can’t achieve it. That belief is a widespread misconception: trust in government results not from the media but from a government’s having fulfilled its promises, and from the public’s experiencing and seeing all around themselves that they clearly have been fulfilled; and lying ‘news’-media can’t cover up that reality, which is constantly and directly being experienced by the public.

However, even if trust in the ‘news’-media isn’t the gauge of democracy that trust in the government is, here are those Edelman findings regarding the media, for whatever they’re worth on the question of democracy-versus-dictatorship: Trust in Media is the highest, #1, in China, 71%; and is 42% in #15 U.S.; and is 35% in #20 Russia. (A July 2017 Marist poll, however, found that only 30% of Americans trust the media. That’s a stunning 12 percentage points lower than the Edelman survey found.) In other words: Chinese people experience that what they encounter in their news-media is borne out in retrospect as having been true, but only half of that percentage of Russians experience this; and the U.S. scores nearer to Russia than to China on this matter. (Interestingly, Turkey, which scores #7 on trust-in-government, scores #28 on trust-in-media. Evidently, Turks find that their government delivers well on its promises, but that their ‘news’-media often deceive them. A contrast this extreme within the Edelman findings is unique. Turkey is a special case, regarding this.)

I have elsewhere reported regarding other key findings in that 2018 Edelman study.

According to all of these empirical findings, the United States is clearly not more of a democracy than it is a dictatorship. This particular finding from these studies has already been overwhelmingly (and even more so) confirmed in the world’s only in-depth empirical scientific study of whether or not a given country is or is not a “democracy”: This study (the classic Gilens and Page study) found, incontrovertibly, that the U.S. is a dictatorship — specifically an aristocracy, otherwise commonly called an “oligarchy,” and that it’s specifically a dictatorship by the richest, against the public.

Consequently, whenever the U.S. Government argues that it intends to “spread democracy” (such as it claims in regards to Syria, and to Ukraine), it is most-flagrantly lying — and any ‘news’-medium that reports such a claim without documenting (such as by linking to this article) its clear and already-proven falsehood (which is more fully documented here than has yet been done anywhere, since the Gilens and Page study is here being further proven by these international data), is no real ‘news’-medium at all, but is, instead, a propaganda-vehicle for the U.S. Government, a propaganda-arm of a dictatorship — a nation that has been overwhelmingly proven to be a dictatorship, not a democracy.

MySpace Tom beat Facebook in the long run

Wouldn’t you rather be a rich nobody than whatever Mark Zuckerberg is?

By Jeremy Gordon

Source: The Outline

My MySpace profile was abandoned when, at the ripe age of 18, I decided it was just a little too juvenile — the glittering GIFs affixed to every page, the garish customized designs, the pressure of maintaining your top 8. By 2006, Facebook offered a cleaner social experience; by 2009, Twitter offered a more casual one. MySpace was a complete relic by this point, even though only a few years had passed since its launch.

Back in 2005, though, long before MySpace burned out, its founder, Tom Anderson — whose grinning face greeted every new user as their first “friend” — sold the site for $580 million to Rupert Murdoch’s News Corporation. While his site was becoming a punchline during the rise of Facebook, Twitter, Instagram, and the other social media networks we now use everyday, Anderson disappeared entirely from the tech scene. Now, he travels the world, documenting his visits to exotic locations.

Contrast that with what’s currently happening to Facebook’s Mark Zuckerberg, who’s on day two of being grilled by a Senate committee for Facebook’s role in haphazardly collecting all of our personal data, and possibly swinging the 2016 presidential election toward Donald Trump. What was supposed to be a basic networking tool has now become one of the chief mediators of how people interact with each other and the world around them, and how information is absorbed and disseminated on the internet. It’s now apparent that Facebook and Zuckerberg didn’t really consider any of this when aggressively pursuing growth, and now we’re all screwed as we try to untangle the consequences.

MySpace Tom? His most recent Instagram post from seven days ago is a giveaway for a stay at an Iceland hotel. He doesn’t have to issue any terse statements about his company’s commitment to fostering a healthy society; he doesn’t have to sit on a booster seat for seven hours and take dipshit questions from a procession of Senate ghouls. He isn’t worth as much money as Zuckerberg, of course, but unless you’re an oil baron, $580 million is enough to tide you over for the length of your lifetime, and your children’s lifetime, and your children’s children’s lifetime, and so on. (Even after taxes!) And yes, yes, being that rich is good for nobody, but without getting into an argument about the perils of capitalism, we can agree that personally speaking, Anderson is having a much better go of things.

It puts MySpace’s failure to evolve in a new light, as perhaps the healthy thing is for a platform to die and for everyone to move on. Its aesthetic and form, back when everyone had emo bangs and listened to Hawthorne Heights, couldn’t change without altering the meaning of the site altogether, and by that time, everyone was gone. Had Facebook not gotten too good at inserting itself between human users, there’s no way it would’ve run into its current problems on such a wide scale. The suspicious CEO is not the one who cashes out; it’s the one who sticks around and creates a behemoth.

Zuckerberg could have sold off his share and avoided becoming literally one of the most disliked people in the present moment. I never thought we’d declare MySpace the winner over Facebook, but then again, I never thought a lot of things about the moment we’re in.

 

NYT: Don’t Be Progressive, Be a ‘Liberal’

By Jim Naureckas

Source: FAIR

A New York Times op-ed by political scientist (and former Bob Kerrey aide) Greg Weiner (7/13/18) may well be the New York Times–iest op-ed ever.

Its ostensible subject is why Democrats should call themselves “liberals” and not “progressives.” But in making that case, it hits most of the main points of the New York Times‘ ideology—one that has guided the paper since the late 19th century.

First and foremost, it’s a defense of the status quo. “The basic premise of liberal politics,” Weiner writes, “is the capacity of government to do good, especially in ameliorating economic ills.” But not too much good, mind you:  “A liberal can believe that government can do more good or less,” he stresses. Weiner draws a contrast with progressives: “Where liberalism seeks to ameliorate economic ills, progressivism’s goal is to eradicate them.”

So Lyndon Johnson’s Great Society is cited negatively as an example of “a progressive effort to remake society by eradicating poverty’s causes”—in the process supporting  “community action” and  financing the “political activism”—presented without explanation as a self-evident evil.  The explanation, presumably, is that the poor should remain passive as they remain poor, gratefully accepting the handouts that “alleviate” their plight, as “cutting checks,” as Weiner puts it, is “something government does competently.”

Coupled with this anxiety about “eradicating poverty’s causes” is the confident assurance that the truth is always somewhere in the middle. “Unlike liberalism, progressivism is intrinsically opposed to conservation,” Weiner warns:

Nothing structurally impedes compromise between conservatives, who hold that the accumulated wisdom of tradition is a better guide than the hypercharged rationality of the present, and liberals, because both philosophies exist on a spectrum.

Conservatives make better partners for liberals than progressives, because “one can debate how much to conserve.” But you can’t debate how much to progress, apparently: “Progressivism is inherently hostile to moderation because progress is an unmitigated good.”

In other words: Equality and justice, sure, but let’s not rush into things, is the “liberal’s” advice. He endorses “policies [that] develop gradually and command wide consensus—at least under normal circumstances.” (Progressives have an unnerving desire to “depress the accelerator.”)

Something that doesn’t change is the right wing of the left’s attraction to redbaiting. Weiner praises “the Cold War liberal who stood for social amelioration and against Soviet Communism,” a figure who “was often maligned by progressives.” Without coming out and accusing progressives of Stalinism, he describes progressives’ response to critics as “a passive-aggressive form of re-education,” one that “supersedes the rights of its opponents.” The example he gives of this is the “progressive indifference to the rights of those who oppose progressive policies in areas like sexual liberation”—an odd arena to cite, since the main “rights” that opponents of “sexual liberation” have demanded in recent years are the “right” of small businesses to discriminate against gay customers and the “right” to check the chromosome status of people who use public restrooms.

 


You can send a message to the New York Times at letters@nytimes.com (Twitter: @NYTimes). Please remember that respectful communication is the most effective.

The New York Times takes on the social media “hordes”

By Andre Damon

Source: WSWS.org

Since late 2016, the New York Times, working together with the US intelligence agencies and the Democratic Party, has been engaged in a campaign to promote internet censorship in the guise of targeting “fake news” and “Russian propaganda.”

In waging this campaign, the Times’ motives are both political and pecuniary. Speaking for a ruling elite that sees the growth of social opposition on all sides and expects far worse, the Times has promoted censorship to remove opportunities for the working class to organize outside the framework of official politics.

In addition, the Times, for the most part a clearinghouse for staid and predigested state propaganda, is seeking to carve back market share it has lost to online publications that carry out genuine investigative journalism and oppose the lies peddled by the US government and media.

In recent months, this campaign has entered a new and malignant phase. Increasingly dropping the pretext of “Russian meddling,” the Times is now directly attacking its main target: the fact that the internet, and in particular social media, helps empower the population to access oppositional sources of news and have their voices heard in public.

Among the Times’ latest broadsides against freedom of expression is an article by its “State of the Art” columnist Farhad Manjoo headlined “For Two Months, I Got My News From Print Newspapers. Here’s What I Learned.” The piece, supposedly written as a first-hand account of a journalist turning off social media and only reading the news from print newspapers, is—in an unusually literal sense—a piece of lying propaganda from beginning to end.

As the Columbia Journalism Review pointed out, during the period in which he supposedly stopped using social media, Manjoo managed to post on Twitter virtually every day. “Manjoo remained a daily, active Twitter user throughout the two months he claims to have gone cold turkey, tweeting many hundreds of times, perhaps more than 1,000,” the Review noted.

Manjoo’s blatant falsifying of his own social media use is hardly the most sinister aspect of his piece. However, it expresses something essential about the Times’ notion of “reporting”: its writers feel they can say anything and get away with it, so long as their claims conform to the dictates of the establishment and the intelligence agencies whose interests determine what is and what is not reported in the US media.

The columnist’s dishonesty about his own activities provides much needed context for his article as a whole, which is little more than a long-form denunciation of a reading public that feels compelled to obtain its news from sources not massaged by the CIA-vetted hacks at the New York Times. In the process, Manjoo gives his unqualified blessings to the pronouncements of his own publication and castigates anyone who would question them as a member of an ignorant “herd,” whose opinions ought to be suppressed.

During his pretended sojourn into the desert of print media, Manjoo said he learned to value having the news spoon-fed to him by “professionals,” without having to worry about whether what he was reading was true or false.

As he puts it, “It takes professionals time to figure out what happened, and how it fits into context… This was the surprise blessing of the newspaper. I was getting news a day old, but in the delay between when the news happened and when it showed up on my front door, hundreds of experienced professionals had done the hard work for me.”

He continues, “Now I was left with the simple, disconnected and ritualistic experience of reading the news, mostly free from the cognitive load of wondering whether the thing I was reading was possibly a blatant lie.”

Here, we assume, the reader is supposed to heave a sigh of relief. How soothing not to have to think for oneself! The author’s surrender of his critical faculties supposedly did wonders for his health and general well-being. Not only did he become “less anxious,” but he had the time to “take up pottery” and “became a more attentive husband and father”! Wonderful! And so much more wonderful if he hadn’t actually made up the story about his abstinence from social media.

Manjoo’s condemnation of critical thinking aside, the real core of the piece is a scathing denunciation of the public, which he describes as a “herd” and a “crowd,” and which, moreover, is empowered to express its rotten opinions by the sinister power of social media.

“Avoid social [media],” he declares. “Technology allows us to burrow into echo chambers, exacerbating misinformation and polarization and softening up society for propaganda.”

The statements posted by the “online hordes” are not “quite news, and more like a never-ending stream of commentary, one that does more to distort your understanding of the world than illuminate it,” Manjoo adds. “On social networks … People don’t just post stories—they post their takes on stories, often quoting key parts of a story to underscore how it proves them right.”

People are posting “their takes on stories!” The horror! Instead of just consuming the news as worked over by the Times, complete with big lies (“weapons of mass destruction”) and small ones (its technology columnist giving up Twitter for two months), social media allows users to critically examine the stories they read. In other words, the internet allows the public to bypass the monopoly of “professional” falsifiers and “gatekeepers” like Manjoo, Judith Miller, Thomas Friedman and the like.

The author’s only hope is that “the government” and “Facebook” will soon “fix” this problem. The clear implication is that once social media is “fixed,” the “herd,” “crowd,” and “hordes” will no longer be allowed to pollute cyberspace by questioning the pronouncements of the New York Times. Manjoo’s self-righteous pontifications, worthy of Polonius (if Polonius were also a liar), would be comical if they were not so ominous. Faced with a growing wave of social struggles, the ruling elite is preparing censorship on a massive scale. Having succeeded in dramatically reducing traffic to left-wing web sites, the technology giants and intelligence agencies are proceeding to the next phase: censoring all expressions of social opposition, in particular by the working class, on social media.

Smashing the Cult of Celebrity and the Disempowerment Game

By Dylan Charles

Source: Waking Times

At the dark heart of corporate consumer culture lie the social programs that mass-produce conformity, obedience, acquiescence and consent for the matrix.

The cult of celebrity is the royal monarch of these schemes, the ace in the hole for mass mind control and the disempowerment of the individual. This is the anointed paradigm of idol worship and idol sacrifice, a vampire’s feast on our individual and collective dreams. Who do you love? Who do you hate? Who do you want to be like?

Combine this paradigm with the technology of social media, and the individual is flung into oblivion, never fully understanding the importance and value of their own life, instead always comparing themselves to phony ideals and well-designed, well-funded marketing campaigns.

‘The camera has created a culture of celebrity; the computer is creating a culture of connectivity. As the two technologies converge – broadband tipping the Web from text to image; social-networking sites spreading the mesh of interconnection ever wider – the two cultures betray a common impulse. Celebrity and connectivity are both ways of becoming known. This is what the contemporary self wants. It wants to be recognized, wants to be connected: It wants to be visible. If not to the millions, on Survivor or Oprah, then hundreds, on Twitter or Facebook. This is the quality that validates us, this is how we become real to ourselves – by being seen by others. The great contemporary terror is anonymity.’ ~William Deresiewicz

Marketeers and propagandists are skilled at leveraging human psychology to exploit human nature. They utilize the study of the psyche to gain inroads into your behavior, and they employ this science as a tool for stoking insecurities and triggering urges.

They may be selling an idea, a lifestyle, a product, or a war, but the pitch is the same: a false idol rises from the wastelands of the American dream and is presented to the hordes as a well-packaged product. The celebrity’s life is a projection of a niche fantasy, a following is built up around this fantasy, and the cult followers are steered toward whatever point of purchase.

And that’s what a cult is: “a system of religious veneration and devotion directed toward a particular figure or object.”

This kind of externalized validation serves as a power transfer. Your personal power is extracted and foisted onto a manufactured image in the matrix, and without realizing it, you’ve forfeited your power to influence the direction of your own life.

“The Fantasy of celebrity culture is not designed simply to entertain. It is designed to drain us emotionally, confuse us about our identity, make us blame ourselves for our predicament, condition us to chase illusions of fame and happiness, and keep us from fighting back.” – Chris Hedges

This is about usurping individuality in order to foster groupthink and hive consciousness. It’s also about creating a barrier between what you believe is possible for yourself and what chances you are willing to take in order to manifest a unique vision for your life.

You see, human beings are energetic creations, partly made of matter and partly made of spirit, but wholly malleable to the direction of the mind. We are affected by subtle energies, body language, electromagnetic energy, frequencies of light that we cannot see, sounds that we cannot hear, and a thousand other hidden cues. We are beings of energy, and much like a battery, we can give or receive energy.

But the mind is at the center of it all. Whatever the mind entertains, the being creates.

When the mind fixes on an external idol, this innate power to form ourselves is transferred outside of our own locus of control, and where the mind could be centered on creating and expanding the self, it is instead focused on the fantasy of achieving an impossible ideal.

As journalist Jon Rappoport notes:

“If perception and thought can be channeled, directed, reduced, and weakened, then it doesn’t matter what humans do to resist other types of control. They will always go down the wrong path. They will always operate within limited and bounded territory. They will always ignore their own authentic power.”

The end game here is to keep us from accepting ourselves as worthy and perfect divine beings, and to disconnect us from our own potential. This is deep stuff, reaching far beyond the push to convert us into greedy, materialistic consumers. In a metaphysical sense it is a transfer of energy, and where once we were strong and full of promise, we are now helpless and content to observe as the world flits by.

What’s most dangerous to any system of control is for the individual to know their own strength and to speak their own language, as Chris Hedges puts it.

“That’s why I don’t own a television… and I work as hard as I can to distance myself from popular culture so that I can speak in my own language, not the one they give me.” ~Chris Hedges

“An Enthusiastic Corporate Citizen”: David Cronenberg and the Dawn of Neoliberalism

(Editor’s note: In commemoration of director David Cronenberg’s 75th birthday, we present this compelling and socially relevant analysis of his filmography.)

By Michael Grasso

Source: We Are the Mutants

The cinematic corpus of David Cronenberg is probably best known for its expertly uncanny use of body horror, but looming almost as large in the writer-director’s various universes is the presence of faceless, all-powerful organizations. Like his rough contemporary Thomas Pynchon and the conspiracies that litter Pynchon’s early works—V. (1963), The Crying of Lot 49 (1966), and Gravity’s Rainbow (1973)—Cronenberg’s shadowy organizations offer fodder for paranoid conspiracy. These conspiracies operate under the cloak of beneficent academic institutes and, in his later work, corporations. The transition from institutes to corporations occurred during Cronenberg’s late ’70s and early ’80s output, specifically the trio of films The Brood (1979), Scanners (1981), and Videodrome (1983).

It is no coincidence that, at this particular time, international finance and prevailing political winds helped put the corporation in society’s driver’s seat. In his recent documentary HyperNormalisation (2016), Adam Curtis notes how the default of the city of New York in 1975 opened the door for private investment and the finance industry to get their hands on municipal governance on a large scale for the first time, and how this in turn paved the way for the Thatcher-Reagan privatization wave in the ’80s. These last few “hinge” years of the 1970s offered the last chance for a real alternative to the coming neoliberal revolution. Soon, all alternatives for governance in the name of the public good were destroyed. Corporatism tightened its grip on the Western polity.

Cronenberg’s early eerie organizations—the “Canadian Academy of Erotic Enquiry” from Stereo (1969) and the panoply of gruesome academic and cosmetic conspiracies in his Crimes of the Future (1970)—eventually yielded to corporations like Scanners’ ConSec and Videodrome’s Spectacular Optical. In these early works, Cronenberg’s mysterious organizations are headed by visionary (mad) geniuses. In 1975’s Shivers, experiments by a lone mad scientist infect an entire apartment building with parasites, which awaken dark impulses in the building’s residents and spread themselves through sexual violence. But as the decade went on, Cronenberg slowly backed away from utilizing the character of a singular scientific genius harboring a twisted vision of the future. Now, organizations sought to pull the strings from the shadows. The key transitional work in this chronology is the sometimes-overlooked The Brood from 1979.

In the film, Oliver Reed plays esteemed psychologist Dr. Hal Raglan, who has developed a method of exorcising deep-seated psychological issues using a technique called “psychoplasmics.” In intense one-on-one sessions reminiscent of psychodrama, Raglan is able to physically remove trauma from the human body in the form of ulcers, rashes, and, we eventually discover, cancer. In the ultimate reveal, it’s shown that Raglan has helped traumatized patient Nola Carveth (Samantha Eggar) to birth violent, deformed homunculi who go out into the world, psychically connected to her, in order to resolve her childhood abandonment issues and abuse with bloody murder. Raglan’s foundation, the Somafree Institute of Psychoplasmics (its name simultaneously evocative of Aldous Huxley’s perfect drug soma and reminiscent of fringe psychological research like Wilhelm Reich’s orgone theory), inhabits a modernist chalet far outside the city of Toronto. Non-resident patients have to be bussed in. Raglan’s public reputation is that of an eccentric, but effective, therapist. At several points in the film we see the covers of Raglan’s presumably best-selling The Shape of Rage. (Curiously, a decade later, in 1990, a documentary titled Child of Rage would be released covering the controversial use of “attachment therapy.”)

As depicted in the film, Somafree is not a corporation. But the thematic threads surrounding Raglan and his Institute are based on real-life trends in the 1970s. In its practices and in the person of Raglan, Somafree resembles psycho-intensive institutes like Esalen, self-improvement organizations like Lifespring, and personalities like Werner Erhard. Erhard’s est movement used primal abuse to ostensibly create psychological breakthroughs, helping the “patient” become more assertive, more powerful, less prone to obeying impulses caused by their early traumas. There is also the real-life analogue to the psychological method that Raglan employs: psychodrama. In the 1970s, new methods of conflict resolution pioneered in places like Esalen were beginning to seep into the mainstream of North American society. These methods soon spread into the corporate world as a purported means of defusing tensions at work and making an office more productive. The “encounter group” soon became a punchline, but the principles behind the Age of Aquarius’s more touchy-feely psychodynamic methods soon became part of the warp and weft of corporate culture in the ’80s and well beyond.

Nola’s estranged husband Frank interviews a former Raglan patient, Jan Hartog, in an attempt to discredit Somafree so Frank can regain custody of his daughter. This patient bears the scars of Raglan’s work on him: a lymphatic cancer sprouting from his neck (an eerie foreshadowing of another mysterious lymphatic disorder that would soon break out all over North America). Hartog plans to sue, not to achieve victory in a courtroom, but to destroy Raglan’s reputation. It doesn’t matter if they win, Hartog says, because “They’ll just remember the slogan. Psychoplasmics can cause cancer.” The 1970s saw increased awareness of the carcinogens that surrounded us in the late-industrial West—cigarettes, sweeteners, food dyes, and pesticides—thanks in large part to the nascent environmental and consumer rights movements, which faced off against corporations using the weapons of negative publicity.

By the time we get to Scanners in 1981, we are fully invested in a world of shadowy corporate overlords. A huge multinational security firm, ConSec, tries to shepherd psychics called “scanners,” ostensibly to help them control their powers, but also to utilize and exploit their paranormal abilities. Protagonist Cameron Vale (Stephen Lack) is apprehended off the streets, where, due to his psychic pain, he’s living as a derelict. We learn that scanners don’t “fit in” with society. When Vale is given the inhibitive drug ephemerol by ConSec’s head of scanner research, Dr. Paul Ruth (Patrick McGoohan), he is able to get himself together and is even given a new proto-yuppie wardrobe and mission by ConSec: eliminate rogue scanner Darryl Revok (Michael Ironside). But as Vale accepts his mission and new identity, he finds himself enlisted in ConSec’s private war against renegade scanners. When he runs into an emerging cell of scanners who are forming a powerful “group mind” in a New Age-like encounter session, assassins controlled by Revok murder most of the cell. “Everywhere you go, somebody dies,” one of the hive mind tells Vale, who is complicit in ConSec’s need to exert corporate control over scanners, including the use of violence as part of the corporate mission. Meanwhile, ConSec itself is riddled with moles working with Revok. Indeed, a chemical and pharmaceutical company called “Biocarbon Amalgamate,” founded by Dr. Ruth but now infiltrated by Revok, manufactures ephemerol in massive quantities. Scanners recontextualizes the Cold War espionage “wilderness of mirrors” in terms of corporate espionage for a new age of corporate domination. (It’s no coincidence that Cronenberg cast McGoohan, one of the Cold War’s most famous fictional spies, in the role of Dr. Ruth.)

ConSec’s corporate mission is revealed in a board meeting when the new head of security says, “We’re in the business of international security. We deal in weaponry and private armories.” This head of security also tells Dr. Ruth, “Let us leave the development of dolphins and freaks as weapons of espionage to others.” To the new breed of ConSec executive, fringe ’70s research is a thing of the past, despite its obvious power and relevance. The future is in fighting proxy wars, ensuring private security for the wealthy, and providing mercenary security forces. ConSec in this way is like many other private security firms that first emerged in the 1970s and ’80s. Begun as an outgrowth of post-colonial British military adventurism, the private military company soon became a way for ex-military officers to assure themselves a handsome post-service sinecure in a new era where hot wars were a thing of the past. “Brushfire wars” would continue to ensue, ensuring these companies an expanding portfolio, both in the waning years of the Cold War and in the 1990s and beyond. In fact, it’s interesting to note that many of the real-world military’s supposed psychic assets themselves got into private security after the U.S. Army shut down fringe science projects like Project STARGATE. Art imitates life imitates art.

Videodrome expands Cronenberg’s conspiratorial corporate, military, and espionage worldview into the rapidly exploding world of the media in the early ’80s. Leaps forward in technology, all of which are explicitly called out in Videodrome, litter the film’s visual landscape. Cable television, satellite transmissions (and the attendant hacking thereof), video cassette recorders, the rise of video pornography, virtual reality, postmodern media theory, and violence in entertainment all play essential roles in the film. Max Renn’s (James Woods) tiny Civic TV/Channel 83 (itself based on groundbreaking independent Toronto television station CityTV) is trying to survive as best it can in a world of massive international media players. Ever seeking the latest hit that will tap into the public’s unending hunger for sex and violence, his on-staff “satellite pirate” Harlan delivers the mysterious Videodrome transmission. Harlan is later revealed to be working with the Videodrome conspiracy, having intentionally exposed Max to the signal. In a memorable speech, Harlan nails Max’s amoral desire to sell sex and violence to his viewers: “This cesspool you call a television station, and your people who wallow around in it, and your viewers who watch you do it; you’re rotting us away from the inside.” When Renn is deep into his Videodrome-triggered hallucinations, he is offered corporate “help” much as Cameron Vale was. This time, his “savior” is Barry Convex, a representative of Spectacular Optical. In his video message to Max, he, like the ConSec executive before him, lays out Spectacular Optical’s corporate mission:

I’d like to invite you into the world of Spectacular Optical, an enthusiastic global corporate citizen. We make inexpensive glasses for the Third World… and missile guidance systems for NATO. We also make Videodrome, Max.

The final form of the military-industrial-entertainment complex is laid bare. Videodrome’s intent is to harden and make psychotic a North American television audience who’ve “become soft,” as Harlan puts it. Renn’s hallucinations are recorded, and he is literally “reprogrammed” to kill Civic TV’s board (thanks to the memorable hallucinatory image of Convex sticking a VHS tape into Renn’s gut). Renn is then reprogrammed to retaliate and assassinate Convex by the much more ’70s-cult Cathode Ray Mission of “media prophet” Brian O’Blivion, whose postmodern, expressly McLuhanesque view of television’s place in the world allowed Videodrome to come into existence in the first place: “I had a brain tumor and I had visions. I believe the visions caused the tumor and not the reverse… when they removed the tumor, it was called Videodrome.” It’s also worth noting that O’Blivion tells us that Videodrome made him its first victim; postmodern criticism of the medium of television is no match for its violent, cancerous growth.

The deregulation of media in the U.S. in the Reagan years is common knowledge; rules around children’s television were especially eviscerated, which allowed for an explosion in violent, warlike cartoons based on popular toy lines, training a new generation for a lifetime of endless war. Combined with the aforementioned explosion of video technology, the laissez-faire environment shepherded by Reagan’s FCC allowed a new breed of cable television magnates to get rich and created a television and media landscape with a relatively friction-free relationship to government. By the time the first Gulf War broke out in 1991, war provided the cable news networks with surefire ratings and cable news provided the propaganda platform for the war effort, a mutually beneficial (and Cronenberg-esque) symbiosis that’s continued to metastasize through multiple subsequent wars in the Middle East. The world of Videodrome, the one Harlan evokes where America will no longer be soft in a world full of tough hombres, has finally come to fruition thanks in part to all of our enmeshment in the video arena—the video drome.

After Videodrome—in The Fly (1986), Dead Ringers (1988), and Crash (1996)—Cronenberg focuses less on sinister organizations and more on monomaniacal researchers, doctors, and fetishists who pursue their individual idiosyncratic agendas through the director’s trademark twisting mindscapes (and bodyscapes). With the exception of eXistenZ (1999), Cronenberg’s meditation on computer technology and gaming released amidst the first dot-com bubble, and his Occupy-influenced adaptation of Don DeLillo’s 2003 novel Cosmopolis (2012), he has retreated from a more overt suspicion of corporations and shadowy conspiracies. His warning about these invisible masters pulling the strings of society came during the time period when something could have been done about corporate hegemony. But now, the conspiracy operates in the open. We are now all of us the dumb, trusting Cronenberg protagonist, lulled into a false sense of security by a series of “enthusiastic corporate citizens.” Long live the new flesh.

According to the Father of Propaganda, an Invisible Government Controls Our Minds with a Thought Prison

By Sigmund Fraud

Source: Waking Times

“Who are the men who, without our realizing it, give us our ideas, tell us whom to admire and whom to despise, what to believe about the ownership of public utilities, about the tariff, about the price of rubber, about the Dawes Plan, about immigration; who tell us how our houses should be designed, what furniture we should put in them, what menus we should serve on our table, what kind of shirts we must wear, what sports we should indulge in, what plays we should see, what charities we should support, what pictures we should admire, what slang we should affect, what jokes we should laugh at?” ~ Edward Bernays [b. 11/22/1891, d. 3/9/1995], Propaganda

Authored by Edward Bernays and published in 1928, the book Propaganda still holds its position as the gold standard for influencing and manipulating public behavior. Drawing on his expertise in psychology while using the language of manipulation, Bernays pioneered social engineering via mass media, and his work lives on in the distorted, statist, consumer world we have today.

But who are the ones behind the curtain telling us what to think by directing our attention onto the things which serve their interests?

Interestingly, chapter III of Propaganda is titled ‘The New Propagandists,’ and is devoted to explaining why the controls for mass manipulation are so closely guarded by a relatively tiny elite who sit in the shadows, out of the public eye, choosing what we are to see and to think, even controlling the politicians we elect to represent us.

If we set out to make a list of the men and women who, because of their position in public life, might fairly be called the molders of public opinion, we could quickly arrive at an extended list of persons mentioned in “Who’s Who…”

Such a list would comprise several thousand persons. But it is well known that many of these leaders are themselves led, sometimes by persons whose names are known to few.

Such persons typify in the public mind the type of ruler associated with the phrase invisible government.

This is an invisible government of corporate titans and behind-the-scenes influencers whose mark on culture cannot be overstated today. Bernays continues:

The invisible government tends to be concentrated in the hands of the few because of the expense of manipulating the social machinery which controls the opinions and habits of the masses.

The public relations counsel, then, is the agent who, working with modern media communication and the group formation of society, brings an idea to the consciousness of the public. But he is a great deal more than that. He is concerned with courses of action, doctrines, systems and opinions, and the securing of public support for them.

Ultimately, the goal of this type of mass-produced, pop-culture propaganda is to weaken the individual’s ability to think critically, thereby creating an environment where many people look to one another for approval, always second-guessing their own faculties. When this happens, the strength of the collective group begins to take form and multiply, and ideas can be implanted into the popular culture, taking root in the form of widespread conformist behavior.

Thinking critically means making reasoned judgments that are logical and well thought out. It is a way of thinking in which one doesn’t simply accept all arguments and conclusions to which one is exposed without questioning the arguments and conclusions. It requires curiosity, skepticism and humility. People who use critical thinking are the ones who say things such as, “How do you know that?” “Is this conclusion based on evidence or gut feelings?” and “Are there alternative possibilities when given new pieces of information?” [Source]

Final Thoughts

The takeaway here is that not much has changed in 100 years of corporate/statist American culture, other than the technical capacity to scale it ever upward. Our lives are still heavily influenced by the likes of the invisible government described by Bernays. There is one advantage we do have now, however: technology has given us greater access to the truth, and we are now free to split from the matrix psychologically by understanding what it is and how it influences our lives. If we choose to do so, that is, if we choose to take the red pill.

In order to understand your life and your mission here on earth in the short time you have, it is imperative to learn to see the thought prison that has been built around you, and to actively circumvent it. Free-thinking is being stamped out by the propagandists, but our human tendency is to crave freedom, and with the aid of truth, we are more powerful than the control matrix and the invisible government.