Identity Theft and the Body’s Disappearance

By Robert Bohm

Source: The Hampton Institute

“What sphinx of cement and aluminum bashed open their skulls and ate up their brains and imagination?”

– Allen Ginsberg, from his poem “Howl”

Identity theft, at least the most familiar type, is possible because today the individual exists not merely as flesh and blood, but as flesh and blood spliced with bank account numbers, user names, passwords, credit card chips, etc. These added parts aren’t secondary to the individual’s overall identity, they’re central to it. Sometimes they’re all there is of it, as in many banking and purchasing transactions. In such instances, the data we’ve supplied to the relevant institutions doesn’t merely represent us, it is us. Our bodies alone can’t complete transactions without the account numbers, user names, passwords, credit card numbers, and ID cards which have become our identity’s essence. Without them, in many ways, we don’t exist.

In a worst-case scenario, if someone gets hold of this private data, they can become us by possessing the data that is us. Once this happens, who or what we are is no longer a question. We don’t exist, except in the form of a stolen dataset now under someone else’s control.

In such a case, an unknown proxy has eliminated us and become who we once were.

Although problematic, the above form of identity theft is relatively minor. A worse form is one we all know about, yet chronically underestimate because we think of ourselves as too canny to be conned. Nonetheless, this other form of identity theft frames and limits everything we do. In the process, it fleeces us of the fullness of our identities and subjects our lives to a type of remote control. This remote control consists of the combined influence on us, from childhood onward, of society’s major institutions and dominant activities, which seed us with a variety of parameters for how to acceptably navigate society and its particular challenges.

This process is usually called “socialization.” However, it’s better seen as a sorting procedure in which society sifts us through a citizenship sieve in order to eliminate supposed defects, thereby guaranteeing that, despite each of us possessing unique characteristics, we share an underlying uniformity. Ultimately, this process is a kind of identity eugenics which strives to purify the population by eliminating or weakening troublesome qualities – e.g., an overly questioning attitude, chronic boundary-testing, a confrontational stance toward authority, a fierce protectiveness toward whatever space the body inhabits, etc. Such traits are frowned upon because they’re seen by the status quo as a likely threat to society’s stability.

Such indoctrination is much subtler yet, in many ways, more pervasive than outright propaganda. Its theater of operations is everywhere, taking place on many fronts. Public and private education, advertising, mass culture, government institutions, the prevailing ideas of how to correct socioeconomic wrongs (this is a “good” form of protest, this a “bad” one), the methods by which various slangs are robbed of their transgressive nature through absorption into the mainstream, the social production of substitute behaviors for nonconformity and rebellion – each of these phenomena and others play a role in generating the so-called “acceptable citizen,” a trimmed-down version (i.e., one of reduced potential) of her or his original personality.

Make no mistake about it, this trimming of the personality is a form of identity theft. It is, in fact, the ultimate form. Take as an example the African slave in the U.S.: abducted from her or his homeland, forbidden from learning to read or write, denied legal standing in the courts, given no say over whether offspring would be sold to another owner or remain with her or him. The slave was robbed of her or his most essential identity: status as a human being.

In his book The Souls of Black Folk, W.E.B. Du Bois described this theft in terms of how slavery reduces the slave to a person with “no true self-consciousness” – that is, with no stable knowledge of self, no clear sense of who she or he is in terms of culture, preceding generations, rituals for bringing to fruition one’s potential to create one’s own fate. As Du Bois correctly argued, this left the slave, and afterwards the freed Black, with a “longing to attain self-conscious manhood,” to know who she or he was, to see oneself through one’s own eyes and not through the eyes of one’s denigrators – e.g., white supremacists, confederate diehards, “good” people who nonetheless regarded Blacks as “lesser,” etc. Du Bois understood that from such people’s perspectives, Blacks possessed only one identity: the identity of being owned, of possessing no value other than what an owner could extract from them. Without an owner to extract this value, the slave was either identity-less or possessed an identity so slimmed and emaciated as to be a nothing.

The point here isn’t that today socialization enslaves the population in the same way as U.S. slavery once enslaved Blacks, but rather that identity theft is, psychologically and culturally speaking, a key aspect of disempowering people and has been for centuries. Today, because of mass culture and new technologies, the methods of accomplishing it are far more sophisticated than during other eras.

How disempowerment/identity theft occurs in contemporary society is inseparable from capitalism’s current state of development. We long ago passed the moment (after the introduction of assembly line production in the early 20th century) when modern advertising started its trek toward becoming one of the most powerful socialization forces in the U.S. As such, it not only convinces consumers to purchase individual products but, even more importantly, sells us on the idea that buying in general and all the time, no matter what we purchase, is proof of our value as persons.

To accomplish this end, modern advertising was molded by its creators into a type of PSYOP designed for destabilizing individuals’ adherence to old saws like “a penny saved is a penny earned” and “without frugality none can be rich, and with it very few would be poor.” Once this happened, the United States’ days of puritan buying restraint were over. However, modern advertising was never solely about undermining personal fiscal restraint. It was also about manipulating feelings of personal failure – e.g., dissatisfaction with lifestyle and income, a sense of being trapped, fear of being physically unappealing, etc. – and turning them not into motives for self-scrutiny or social critiques, but into a spur for commodity obsession. This wasn’t simply about owning the product or products, but an obsessive hope that buying one or more commodities would trigger relief from momentary or long-term anxiety and frustration related to one’s life-woes: job, marriage, lack of money, illness, etc.

Helen Woodward, a leading advertising copywriter of the early decades of the 20th century, described how this was done in her book Through Many Windows, published in 1926. One example she used focused on women as consumers:

The restless desire for a change in fashions is a healthy outlet. It is normal to want something different, something new, even if many women spend too much time and too much money that way. Change is the most beneficent medicine in the world to most people. And to those who cannot change their whole lives or occupations, even a new line in a dress is often a relief. The woman who is tired of her husband or her home or a job feels some lifting of the weight of life from seeing a straight line change into a bouffant, or a gray pass into a beige. Most people do not have the courage or understanding to make deeper changes.

Woodward’s statement reveals not only the advertising industry’s PSYOP characteristic of manipulating people’s frustrations in order to lure them into making purchases, but also the industry’s view of the people to whom it speaks through its ads. As indicated by Woodward’s words, this view is one of condescension, of viewing most consumers as unable to bring about real socioeconomic change because they lack the abilities – “the courage or understanding” – necessary to do so. Consequently, their main purpose in life, it is implied, is to exist as a consumer mass constantly gorging on capitalism’s products in order to keep the system running smoothly. In doing this, Woodward writes, buyers find in the act of making purchases “a healthy outlet” for troubled emotions spawned in other parts of their lives.

Such advertising philosophies in the early 20th century opened a door for the industry, one that would never again be closed. Through that door (or window), one could glimpse the future: a world with an ever greater supply of commodities to sell and an advertising industry ready to make sure people bought them. To guarantee this, advertisers set about creating additional techniques for reshaping public consciousness into one persuaded that owning as many of those commodities as possible was an existential exercise of defining who an individual was.

In his book The Consumer Society, philosopher Jean Baudrillard deals with precisely this process. He writes that such a society is driven by:

the contradiction between a virtually unlimited productivity and the need to dispose of the product. It becomes vital for the system at this stage to control not only the mechanism of production, but also consumer demand.

“To control … consumer demand.” This is the key phrase here. Capitalist forces not only wanted to own and control the means of production in factories; they also wanted to control consumers in such a way that they had no choice but to buy, then buy more. In other words, capitalism was in quest of a strategy engineered to make us synch our minds to a capitalism operating in overdrive (“virtually unlimited” production).

The way this occurs, Baudrillard argues, is by capitalism transforming (through advertising) the process of buying an individual product from merely being a response to a “this looks good” or “that would be useful around the house” attitude into something more in line with what psychologists call “ego integration” – that part of human development in which an individual’s various personality characteristics (viewpoints, goals, physical desires, etc.) are organized into a balanced whole. What advertising basically did for capitalism, then, was develop a reconfigured ego integration process in which the personality is reorganized to view its stability as dependent on its life as a consumer.

Advertisers pulled this off because the commodity, in an age of commodity profusion, isn’t simply a commodity but is also an indicator or sign referring to a particular set of values or behavior, i.e. a particular type of person. It is this which is purchased: the meaning, or constellation of meanings, which the commodity indicates.

In this way, the commodity, once bought, becomes a signal to others that “I, the owner, am this type of person.” Buy an Old Hickory J143 baseball bat and those in the know grasp that you’re headed for the pros. Sling on some Pandora bling and all the guys’ eyes are on you as you hip-swing into the Groove Lounge. Even the NY Times is hip to what’s up. If you want to be a true Antifa activist, the newspaper informed its readers on Nov. 29, 2017, this is the attire you must wear:

Black work or military boots, pants, balaclavas or ski masks, gloves and jackets, North Face brand or otherwise. Gas masks, goggles and shields may be added as accessories, but the basics have stayed the same since the look’s inception.

After you dress up, it’s not even necessary to attend a protest and fight fascists to be full-blown Antifa. You’re a walking billboard (or signification) proclaiming your values everywhere. Dress the part and you are the part.

Let’s return to Baudrillard, though. In The System of Objects, another of his books, he writes about how the issue of signification, and the method by which individuals purchase particular commodities in order to refine their identity for public consumption, becomes the universal mass experience:

To become an object of consumption, an object must first become a sign. That is to say: it must become external, in a sense, to a relationship that it now merely signifies … Only in this context can it be ‘personalized’, can it become part of a series, and so on; only thus can it be consumed, never in its materiality, but in its difference.

This “difference” is what the product signifies. That is, the product isn’t just a product anymore. It isn’t only its function. It has transitioned into an indicator of a unique personality trait, or of being a member of a certain lifestyle grouping or social class, or of subscribing to a particular political persuasion, Republican, anarchist, whatever. In this way, choosing the commodities to purchase is essential to one’s self-construction, one’s effort to make sure the world knows exactly who one is.

The individual produced by this citizen-forming process is a reduced one, her/his full personality pared down by cutting away the unnecessary weight of potentials and inclinations perceived as “not a good fit” for a citizen at this stage of capitalism. Such a citizen, however, isn’t an automaton. She or he makes choices, indulges her or his unique appetites, even periodically rebels against bureaucratic inefficiency or a social inequity perceived to be particularly stupid or unfair. Yet after a few days or few months of this activity, this momentary rebel fades back into the woodwork, satisfied by their sincere but token challenge to the mainstream. The woodwork into which they fade is, of course, their home or another favorite location (a lover’s apartment, a bar, a ski resort cabin, a pool hall, etc.).

From this point on, or at least for the foreseeable future, such a person isn’t inclined to look at the world with a sharp political eye, except possibly within the confines of their private life. In this way, they turn whatever criticism of the mainstream they may have into a petty gripe, voiced with no intention of joining with others to fight for any specific change(s) regarding the political, socioeconomic or cultural phenomenon against which the complaint has been lodged. Instead, all the complainer wants is congratulations from her or his listener(s) about how passionate, on-target, and right the complaint was.

This is the sieve process, identity eugenics, in action. Far more subtle and elastic than previous methods of social control, it narrows what we believe to be our options and successfully maneuvers us into a world where advertising shapes us more than schools do. In this mode, it teaches us that life’s choices aren’t so much about justice or morality, but more about what choosing between commodities is like: which is more useful to me in my private life, which one better defines me as a person, which one makes me look cooler, chicer, brainier, hunkier, more activist to those I know.

It is in this context that a young, new, “acceptable” citizen enters society as a walking irony. Raised to be a cog in a machine in a time of capitalistic excess, the individual arrives on the scene as a player of no consequence in a game in which she or he has been deluded that they’re the game’s star. But far from being a star, this person, weakened beyond repair by the surrender of too much potential, is so without ability that she or he has no impact whatsoever on the game. Consequently, this individual is, for all practical purposes, an absence. The ultimate invisible person, a nothing in the midst of players who don’t take note of this absence at all. And why should they? The full-of-potential individual who eventually morphed into this absence is long gone, remembered by no one, except as a fading image of what once was.

This process of reducing a potentially creative person into a virtual non-presence is a form of ideological anorexia. Once afflicted, an individual refuses nourishment until they’re nothing but skin and bones. However, the “weight” they’ve lost doesn’t consist of actual pounds. Instead, it involves a loss of the psychological heftiness and mental bulk necessary to be a full human being.

One can’t lose more weight than that.

Human life as we once knew it is gone, replaced by the ritual of endless purchasing. This is existence in what used to be called “the belly of the beast.” Our role in life has become to nourish capitalism by being at its disposal, by giving of ourselves. Such giving frequently entails self-mutilation: the debt, credit card and otherwise, that bludgeons to death the dreams of many individuals and families.

This quasi-religious self-sacrifice replicates, in another form, the Dark Ages practice of fanatical monks and other flagellants who lashed themselves with whips made from copper wires, ripping their flesh and bleeding until they descended into a state of religious hysteria. The more we give of ourselves in this way, the thinner and more weightless we become. Meanwhile, the god whom Allen Ginsberg called Moloch grows more obese day after day, its belly filled with:

Robot apartments! invisible suburbs! skeleton treasuries! blind capitals! demonic industries! spectral nations! invincible madhouses! granite cocks! monstrous bombs!…

Dreams! adorations! illuminations! religions! the whole boatload of sensitive bullshit!

What capitalism wants from us, of course, isn’t merely self-sacrifice, it’s surrender. Hunger for life is viewed negatively by the status quo because it nourishes the self, making it stronger and more alert and, therefore, better prepared to assert itself. The fact that such an empowered self is more there (possesses more of a presence) than its undersized counterpart makes the healthier self unacceptable to the powers that be. This is because there-ness is no longer an option in our national life. Only non-there-ness is. If you’re not a political anorexic, you’re on the wrong side.

Wherever we look, we see it. Invisibility, or at least as much of it as possible, is the individual’s goal. It’s the new real. Fashion reveals this as well as anything. It does so by disseminating an ideal of beauty that fetishizes the body’s anorexic wilting away. Not the body’s presence but its fade to disappearance is the source of its allure. The ultimate fashion model hovers fragilely on the brink of absence in order not to distract from the only thing which counts in capitalism: the commodity to be sold – e.g., the boutique bomber jacket, the shirt, the pantsuit, the earrings, the shawl, the stilettos, the iPhone, the Ferrari, and, possibly most of all, the political passivity intrinsic to spending our lives acquiring things in order to prove to others and to ourselves that we’ve discovered in these things something more useful than Socrates’ goal of knowing thyself or Emma Goldman’s warning, “The most unpardonable sin in society is independence of thought.”

What is true on the fashion runway is also true in politics. Just as the best model is one thin enough to fade into non-presence, so our democracy, supposedly ruled “by and for the people,” has thinned down so much that “the people” can’t even be seen (except as stage props), let alone get their hands on democracy except in token ways. No matter how often we the people are praised rhetorically by politicians, we aren’t allowed as a group to get in the way of the capitalist system’s freedom to do whatever it wants in order to sustain commodity worship and guarantee capital’s right to permanent rule. If the military-industrial complex needs another war in order to pump out more profits, then so be it. We have no say in the matter. The identity theft built into society’s structure makes sure of this. It’s stripped us of our “weight” – our creativity, our willingness to take political risks, our capacity to choose action over posturing. After this forced weight loss, what’s left of us is a mess. Too philosophically and psychologically anemic to successfully challenge our leaders’ decisions, we, for all practical purposes, disappear.

As a reward for our passivity, we’re permitted a certain range of freedom – as long as “a certain range” is defined as “varieties of buying” and doesn’t include behavior that might result in the population’s attainment of greater political power.

So, it continues, the only good citizen is the absent citizen. Which is to say, a citizen who has dieted him or herself into a state of political anorexia – i.e., that level of mental weightlessness necessary for guaranteeing a person’s permanent self-exclusion from the machinery of power.

***

Our flesh no longer exists in the way it once did. A new evolutionary stage has arrived.

In this new stage, the flesh isn’t merely what it seems to be: flesh, pure and simple. Instead, it’s a hybrid. It’s what exists after the mind oversees its passage through the sieve of mass culture.

After this passage, what the flesh is now are the poses it adopts from studying movies, rappers, punk rockers, fashionistas of all kinds, reality TV stars, football hunks, whomever. It’s also what it wears, skinny jeans or loose-fitting chinos, short skirt or spandex, Hawaiian shirt or muscle tank top, pierced bellybutton, dope hiking boots, burgundy eyeliner. Here we come, marching, strolling, demon-eyed, innocent as Johnny Appleseed. Everybody’s snapping pics with their phones, selfies and shots of others (friends, strangers, the maimed, the hilarious, the so-called idiotic). The flesh’s pictures are everywhere. In movie ads, cosmetic ads, suppository ads, Viagra ads. This is the wave of the already-here but still-coming future. The actual flesh’s replacement by televised, printed, digitalized and Photoshopped images of it produces the ultimate self-bifurcation.

Increasingly cut off from any unmediated life of its own, the flesh now exists mostly as a natural resource for those (including ourselves) who need it for a project: to photograph it, dress it up, pose it in a certain way, put it on a diet, commodify/objectify it in any style ranging from traditional commodification to the latest avant-garde objectification.

All these stylings/makeovers, although advertised as a form of liberation for the flesh (a “freeing” of your flesh so you can be what you want to be), are in fact not that. Instead, they are part of the process of distancing ourselves from the flesh by always doing something to it rather than simply being it.

When we are it, we feel what the flesh feels, the pain, the joy, the satisfaction, the terror, the disgust, the hints of hope, a sense of irreparable loss, whatever.

When we objectify it, it is a mannequin, emotionless, a thing that uses up a certain amount of space. As such we can do what we want with it: decorate it, pull it apart, vent our frustrations on it, starve it, practice surgical cuts on it, put it to whatever use we like. It isn’t a person. It is separate from our personhood and we own it.

In fact we own all the world’s flesh.

We live, after all, in the American Empire, and the Empire owns everything. As the Empire’s citizens, we own everything it owns. Except for one thing: ourselves.

***

The flesh is both here and not here. Increasingly, it is more an object that we do things to – e.g., bulk it up, change its hair color, mass-kill it from a hotel window on the 32nd floor, view it in a porno flick – than a presence in its own right (i.e., self-contained, a force to be reckoned with). In this sense, it is a growing absence, each day losing more of its self-determination and becoming more a thing lost than something that exists fully, on its own, in the here and now. Given this, the proper attitude to have toward the flesh is one of nostalgia.

Of course, the flesh hasn’t really disappeared. What has disappeared is what it once was, a meat-and-bones reality, a site of pleasure and injury. Now, however, it’s not so valuable in itself as it is in its role as a starting-off point for endless makeovers.

These makeover options are arrayed before the consumer everywhere: online, in big box stores, in niche markets and so on. Today, it is in these places, not at birth, that the flesh starts its trek toward maturation. It does this by offering itself up as a sacrifice to be used, as they see fit, by the fashion industry, the gym industry, the addiction-cure industry, the diet industry, the pharmaceutical industry, the education industry, etc. Each body in the nation reaches its fullest potential only when it becomes a testing site to be used by these industries as they explore more and better ways to establish themselves as indispensable to capitalism’s endless reproduction.

In the end, the flesh, the target of all this competition for its attention, has less of a life on its own than it does as the object of advertisers’ opinions about what can be done to improve it or to reconstruct it. Only to the extent that the flesh can transcend or reconstitute itself can it be said to be truly alive.

This last fact – about aliveness – represents the culmination of a process. This process pertains to the visualization and digitalization of everything and the consequent disappearance of everything behind a wall of signification.

A televised or computerized image, discussion, commentary, conjecture, etc., becomes the thing it meditates on, depicts or interprets. This happens by virtue of the fact that the thing itself (the real flesh behind the televised or computerized image, discussion, commentary, conjecture, etc.) has disappeared into the discussion or into the image of it presented on the computer or TV screen.

In the same way, an anorexic model (her/his flesh and blood presence) disappears into the fashions she or he displays for the public.

In each instance the thing (the flesh) now no longer exists except in other people’s meditations on it; it has become those other people’s meditations. The ultimate anorexic, it (the thing) has lost so much weight it’s no longer physically there except as an idea in someone else’s mind or in a series of binary codings inside computers.

This is the final victory of absence over there-ness, of the anorexic ideal over the idea of being fully human (i.e., “bulging with existence,” “fat with life”). The self has been successfully starved to the point of such a radical thinness that it can no longer stand up to a blade of grass, let alone make itself felt by the powers that be.

No Need To Wait – Dystopia Is Almost Upon Us

Source: TruePublica

Microsoft’s CEO has warned the technology industry against creating a dystopian future, the likes of which have been predicted by authors including George Orwell and Aldous Huxley. Satya Nadella kicked off the company’s 2017 Build conference with a keynote that was as unexpected as it was powerful. He told the developers in attendance that they have a huge responsibility, and that the choices they make could have enormous implications.

They won’t listen, of course. The collection of big data, along with its management, selling and distribution and the systems architecture to control it, is now worth double global military defence expenditure. In fact, this year the big data industry overtook the world’s most valuable traded commodity – oil.

The truth is that the tech giants have already captured us all. We are already living in the beginnings of a truly dystopian world.

Leaving aside the endemic surveillance society our government has chosen on our behalf with no debate, political or otherwise, we already have proof of the now and where it is leading. With fingerprint scanning, facial recognition, and various virtual wallets to pay for deliveries, some would say your identity is as good as stolen. If it isn’t, it soon will be. That’s because the hacking industry, already worth a mind-blowing $1 trillion annually, is expected to reach $2.1 trillion in just 14 months’ time.

The reality of not being able to take public transportation, hire a car, or buy a book or a coffee without providing full personal identification is almost upon us. Britain even had an intention to be completely cashless by 2025 – postponed only by the impact of Brexit.

Alexa, the Amazon home assistant, listens to everything said in the house. It is known to record conversations. Recently, police in Arkansas, USA demanded that Amazon turn over information collected from a murder suspect’s Echo – the speaker that controls Alexa – because they already knew what information could be extracted from it.

32M is the first company in the US that provides a human chip, allowing employees “to make purchases in their break-room micro market, open doors, login to computers, use the copy machine.” 32M also confirmed what the chip could really do – telling employees to “use it as your passport, public transit and all purchasing opportunities.”

Various apps now locate people you may know, and your own location can be shared with others without your knowledge. We’ve known for years that governments and private corporations have access to this data, whether you like it or not.

Other countries are providing even scarier technologies. Hypebeast Magazine reports that Aadhaar is a 12-digit identity number issued to all Indian residents based on their biometric and demographic data. “This data must be linked to their bank account or else they’ll face the risk of losing access to their account. Folks have until the end of the year to do this, with phone numbers soon to be connected through the 12 digits by February. Failure to do so will deactivate the service.” The technology has the ability to refuse access to state-supplied services such as healthcare.

Our article “Insurance Industry Leads The Way in Social Credit Systems” also highlights what the fusion of technology and data is likely to end up doing to us. An astonishing 96 per cent of insurers think that ecosystems or applications made by autonomous organisations are having a major impact on the insurance industry. Social credit mechanisms are being developed, some already implemented, that will determine our future behaviour – affecting us all, both individually and negatively.

The Chinese government plans to launch its Social Credit System in 2020. Already being piloted on 12 million of its citizens, the aim is to judge the trustworthiness – or otherwise – of its 1.3 billion residents. Something as innocuous as a person’s shopping habits becomes a measure of character. But the system not only investigates behaviour – it shapes it. It “nudges” citizens away from purchases and behaviours the government does not like. A person’s friends are considered as well, and individual credit scores fall depending on those friends’ trustworthiness. It’s not possible to imagine how far this will go in the end.

However, to get us all there, to that situation, we need to be distracted from what is going on in the background. Some are already concerned.


Distraction – detaching us from truth and reality

The Guardian wrote an interesting piece recently which highlighted some of the concerns of those with expert insider knowledge of the tech industry. For instance, Justin Rosenstein, the former Google and Facebook engineer who helped build the ‘like’ button, is concerned. He believes there is a case for state regulation of smartphone technology because it is “psychologically manipulative advertising”, saying the moral impetus is comparable to taking action against fossil fuel or tobacco companies.

“If we only care about profit maximisation,” he says, “we will go rapidly into dystopia.” Rosenstein also makes the observation that after Brexit and the election of Trump, digital forces have completely upended the political system and, left unchecked, could render democracy as we know it obsolete.

Carole Cadwalladr’s recent exposé in the Observer/Guardian proved beyond doubt that democracy has already departed. Here we learn about a shadowy global operation involving big data and billionaires who influenced the result of the EU referendum. Britain’s future place in the world has been altered by technology.

Nir Eyal, 39, the author of Hooked: How to Build Habit-Forming Products, writes: “The technologies we use have turned into compulsions, if not full-fledged addictions.” Eyal continues: “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”.

Eyal feels the threat and protects his own family by cutting off the internet completely at a set time every day. “The idea is to remember that we are not powerless,” he said. “We are in control.”

The truth is that we are no longer in control, and have not been at least since the Snowden revelations of 2013 showed that our governments were lying to us.

Tristan Harris, a 33-year-old former Google employee turned vocal critic of the tech industry, agrees about the lack of control. “All of us are jacked into this system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.” Harris insists that billions of people have little choice over whether they use these now ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.

Harris is a tech whistleblower. He is lifting the lid on the vast powers accumulated by technology companies and the ways they are abusing the influence they have at their fingertips – literally.

“A handful of people, working at a handful of technology companies, through their choices will steer what a billion people are thinking today.”

The techniques these companies use, such as social reciprocity and autoplay, are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year revealed that the company can identify when teenagers feel “worthless” or “insecure”. Harris adds that this is “a perfect model of what buttons you can push in a particular person”.

Chris Marcellino, 33, a former Apple engineer now in the final stages of retraining to be a neurosurgeon, notes that these types of technologies can affect the same neurological pathways as gambling and drug use. “These are the same circuits that make people seek out food, comfort, heat, sex,” he says.

Roger McNamee, a venture capitalist who benefited from hugely profitable investments in Google and Facebook, has grown disenchanted with both of the tech giants. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.”

James Williams, an ex-Google strategist who built the metrics system for the company’s global search advertising business, says Google now has the “largest, most standardised and most centralised form of attentional control in human history”. “Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.

Williams also takes the view that if the attention economy erodes our ability to remember, to reason, to make decisions for ourselves – faculties that are essential to self-governance – what hope is there for democracy itself?

“The dynamics of the attention economy are structurally set up to undermine the human will,” he says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point at which democracy no longer functions?

“Will we be able to recognise it, if and when it happens?” Williams says. “And if we can’t, then how do we know it hasn’t happened already?”

 

The dystopian arrival

Within ten years, some speculate, many of us will be wearing smart eye lenses. Coupled with social media, these will let us identify strangers and work out, say, that a particular individual in a bar has low friend compatibility and that the data suggests a conversation is unlikely to be fruitful. This barely scratches the surface of the information overload en route right now.

It is not at all foolish to imagine that, in that same bar, a patron is shouting at the bartender, who refuses to serve him another drink because the glass he was holding measured his blood-alcohol level through the sweat on his fingers. He will have to wait at least 45 minutes before he is permitted to order another scotch. You might even think that is a good idea – it isn’t.

Google’s Quantum Artificial Intelligence Lab already works with organisations associated with NASA. Google’s boss sits on a Pentagon advisory board, with links plugged directly into the surveillance architecture of the NSA in the USA and GCHQ in Britain. This world, where artificial intelligence makes its mark, will, as Williams noted earlier, deliberately undermine the ability to think for yourself.

In the scenario of the eye lenses, you might even have the ability to command your eyewear to shut down. But when you do, you are suddenly confronted with an un-Googled world. It appears drab and colourless in comparison. The people before you are bland, washed out and unattractive. The art, plants, wall paint, lighting and decorations have all been shaped by your own preferences, and without the distortion field your wearable eyewear provided, the world appears as a grey, lifeless template.

You find it difficult to last without the assistance of your self-imposed augmented life, and with nervous laughter you switch it back on. The world viewed through the prism of your computer eyewear has become your default setting. You know you have free will, but you no longer feel you need it. As Marcellino notes, the same neurological pathways that drive gambling and drug use now drive how you choose to see the world.

This type of technology will be available, and these types of scenarios will become real, sooner than you think.

Our governments, allied with the tech giants, are coercing us into a place of withering obedience through 360-degree state surveillance. New technology, somehow seen as the road to liberty, contentment and prosperity, is really our future being shaped by a system that will destroy our civil liberties, crush our human rights and eventually ensnare and trap us all. This much is already being attempted in China and Japan with social credit mechanisms and pre-crime technology – a truly frightening prospect. Without debate or our knowledge, these technologies are already in use here in western democracies.

 

“One Long Discomfort”: The Legacy and Future of David Lindsay’s ‘A Voyage to Arcturus’

By Ben Schwartz

Source: We Are the Mutants

Ballantine “Adult Fantasy” edition, 1973, with cover art by Bob Pepper

David Lindsay’s masterpiece A Voyage to Arcturus was first published in London in 1920 by Methuen & Co. It came dressed in a simple red cloth cover; no dust jacket, just the title and author’s name debossed into the front. This first printing sold fewer than 600 copies, and so Arcturus didn’t come to the US until Macmillan brought it out in 1964. In 1968, Ballantine picked it up after the massive success of the publisher’s Lord of the Rings paperbacks, and, for the first time ever, the cover featured bespoke art, painted by Bob Pepper. The printing predated Ballantine’s influential Adult Fantasy series, edited by Lin Carter, but was eventually given honorary membership, with later printings carrying the unicorn stamp and benefiting from the cachet the series possessed.

With the late-1960s Lord of the Rings phenomenon leading the charge, speculative fiction, and Arcturus with it, rode into the public consciousness on about as high a tide as it has ever had. Lindsay’s biographer Bernard Sellin notes that Ballantine’s edition “[had]… overtaken all the accumulated efforts of forty years” in terms of circulating Lindsay’s first novel. But he’s quick to point out that Lindsay’s audience is still limited, and that “The average, sensual reader is in serious danger of being disappointed in Lindsay.” Sellin wrote this in 1981 and, in a weird choice of words, envisioned a “‘superior race’ of readers, anxious to go beyond the plot” of Arcturus and grasp what it’s really about. Today, in 2018, Lindsay’s potential audience, superior or otherwise, struggles against a vanishing text.

In the UK, Gollancz brought out an Arcturus reissue in the ’40s (the “novel… is regarded by some of those who have read it as a work of genius,” the cover read), which was subsequently routed into their “Rare Works of Imaginative Fiction” reissues in the early ’60s. Today, the label keeps it alive in its “Fantasy Masterworks” series as an affordable paperback. A high quality limited edition from Savoy Books was the high point of its publication history, but that small batch is fifteen years gone now.

In the states, the novel languishes in Print on Demand Hell. Most readily available copies are ill-starred editions from nebulous outfits bearing names like CreateSpace and Wilder Publications, featuring non sequitur cover images that look like refugees from a Windows ME screensaver folder: a field of wheat, a macro of autumn leaves, an anonymous, slightly-out-of-focus Roman ruin. Even outside of PoD territory there are some seriously janky efforts, leprous with typos: the first printing of Arcturus from Bison Press misspelled the word “Commemorative” on its own cover, and newer printings still contain fistfuls of errors.

And this is a book that counts Clive Barker, Alan Moore, Michael Moorcock, and Jeff VanderMeer among its admirers. C.S. Lewis called it the “real father” of his Space Trilogy. Pathological anti-genre lit critic Harold Bloom’s sole piece of published fiction—ever—is a pseudo-sequel to Arcturus called The Flight to Lucifer. Colin Wilson, who became a literary sensation with the publication of The Outsider in 1956, put Arcturus in his curriculum while teaching and wrote multiple essays about Lindsay. These and other enthusiasts have tended the flame over the years, keeping the book visible to the small cadre of readers that are likely to respond to it. But will Arcturus ever grow beyond that niche audience?

It may be helpful to explain what readers find when they pick up the novel. On a superficial level, A Voyage to Arcturus is a spacefaring adventure of a strong, competent hero, same as you’d find in any number of time-yellowed pulp paperbacks. After a few strange chapters spent on earth, our hero, Maskull, and his two companions, Nightspore and Krag, journey to Tormance, a planet orbiting Arcturus, which in the book is a binary star with two suns, Branchspell and Alppain. Maskull wakes alone in a fantastical desert on Tormance, and quickly becomes embroiled in this new world. There are rocket ships, tentacle arms, dreamlike landscapes—Tormance is prodigious when it comes to landscapes: like Ifdawn Marest, a place of crags and mountains that are constantly sinking and shooting up in fatal, vertiginous thousand-foot shifts; or Matterplay, a valley so replete with life energy that new beings literally pop into existence, fully formed; or the Sinking Sea, whose water varies in density from place to place and which Maskull navigates by riding a giant, semi-living treelike creature. The evocative names of places and people have a distinctly Amazing Stories vibe: Disscourn, Panawe, Corpang, the Lusion Plain.

Maskull sets out ostensibly looking for Nightspore and Krag. But as he proceeds, it becomes clear that his purpose on Tormance is tied to that of a being called Surtur, who draws Maskull northward with a slow, insistent drumbeat that only he can hear. Every chapter sees Maskull enter a new region of Tormance, each with its own particular landscape and specific philosophical culture—a sort of Gulliver’s Travels recast as a troubling, darkly symbolic dream. Ifdawn Marest lives violently, crudely, simply—its residents engage in contests of mind control to dominate, torture, and kill one another. The land of Sant houses vain ascetics who have renounced all the physical pleasures of the world. In Matterplay, Maskull encounters the last of the phaen, an ancient race composed not of men or women but a third, primordial gender. Names of other supreme beings are revealed: some mention Muspel, but many talk of Crystalman, possibly another god, or maybe just another name for Surtur—the Tormancians’ accounts vary. But when people die on Tormance, their faces twist into a nauseating smile known as Crystalman’s grin. The precise cosmology always remains just out of focus, however, and this refusal to resolve comes to drive Maskull forward more than the thought of finding his companions. And through this driving impetus, Maskull finds each place, each philosophy, exposed as limited, false, incomplete. This falseness usually results in an explosion of ugly violence, and Maskull, often as not, is perpetrating it.

And so the book proceeds, like some dark, cosmic picaresque, until Maskull reaches Surtur’s Ocean, the northernmost ocean of Tormance. He reunites with Krag, who seems to be expecting him. Krag takes the physically failing Maskull on a raft out to sea, on a journey to Muspel, which Maskull learns is the name of the “true world,” the world outside the corruption of illusory things. As they sail along, Maskull, exhausted and spent, dies, which somehow releases Nightspore back into being. Then Krag lets Nightspore off at a lone edifice in the sea. As he ascends through it, Nightspore stops at a succession of windows that show him the nature of reality: there is Muspel, Surtur’s world, the impartial, pure, true world that most are prevented from seeing by the illusory world of Crystalman, who is not an aspect of Surtur but an embodiment of deceit and distraction. Violence, art, love, talk, work, play—all of these are tools Crystalman uses to ensnare the spark of Muspel contained in each living thing, preventing that life from returning to the world it came from. All the inhabitants of Tormance and their multifarious philosophies were blinded to this truth by Crystalman—and that’s why, when they died, their faces contorted into Crystalman’s Grin, the signature of his triumph over their souls.

Arcturus ends with the resurrected/transmogrified/newborn Nightspore descending the tower and meeting up with Krag again, who reveals that he is Surtur, and that his name on earth is Pain. Nightspore steps back onto the raft and the two sail away into the darkness, presumably to continue their struggle against Crystalman, on earth or elsewhere. It’s a powerful, striking, triumphless ending—a metaphysical cliffhanger that opens up long avenues of thought.

Anybody reading with their internal aerial up and receiving would have noticed something going on with Arcturus before the final chapters, but those chapters are only the biggest among many clues that make it clear the novel is more than a weightless adventure yarn. Maskull is an off-putting protagonist. He’s animated less by personality and more by some psychic decree outside of his control (authorial or otherwise). He’s got the wrong proportions for a standard hero: Lindsay describes him as “a kind of giant, but of broader and more robust physique than most giants,” with a full beard, short bristling hair, and features that are “thick and heavy, coarsely modeled, like those of a wooden carving”—and yet with eyes sparkling with “intelligence and audacity.” He’s impulsive, driven, and violent—and key to the dark energy that propels Arcturus away from genre pulp into deeper, thornier territory.

Much early speculative fiction created vistas of longing: it showed better worlds, nobler peoples, purer ways of living. The Lord of the Rings set the standard in this regard but it was hardly alone, and not the first. The Worm Ouroboros, Lud-in-the-Mist, Time and the Gods are others—all committed to beauty and magic and bravery as antidotes to our own world. They didn’t deny their correlation to accepted reality, but they actively opposed aspects of that reality by showing us better versions. Arcturus, rather than look outward over the hills of faerie, turns inward, drills down until it exposes its fundamental vision of existence, and that vision is a searing one. Its aspect is fire, and whereas most speculative fiction is aspirational, Arcturus is agonized; reality is, like the unearthly wound Maskull receives from Krag, “one long discomfort,” a galaxy of damnation:

Millions of grotesque, vulgar, ridiculous, sweetened individuals – once Spirit – were calling out from their degradation and agony for salvation from Muspel…

Arcturus the planet isn’t meant to be “real” like Minas Tirith or Lud-in-the-Mist or Witchland are meant to be real. Instead of creating another world, Lindsay showed us our own; refracted through the alien metaphors of Tormance, yes, but nevertheless recognizable. As anthropologist Loren Eiseley notes in his introduction to the Ballantine edition, Arcturus is really “a long earth journey.” There’s a dystopia in Lindsay’s novel, though the dystopia is not political or societal, but metaphysical. It’s not a nightmare city, but a nightmare world; not a corrupt government, but a corrupt soul. Maskull’s vicious, driving nature allows him to open that final door for readers.

Naturally, this dark, anguished, philosophical heart impacted Arcturus’ initial sales. In 1920, science fiction seemed impossibly far from literary “respectability.” There was a strong undercurrent of literary speculative fiction at the time, but it wasn’t universally popular and certainly not accepted by the establishment. Arcturus came blazing fully-formed into the world, subverting tropes that had barely been established. And you can imagine potential readers either avoiding Arcturus because of those tropes, or dropping it because it didn’t thoroughly conform to nascent genre conventions. Arcturus did itself no commercial favors by tapping SF in the name of art. It made itself a black sheep among black sheep.

Sellin ends his ’81 overview of Lindsay’s life and work as all essays on Arcturus and Lindsay end: with hope for a wider readership in the future. But I predict Arcturus will continue to be preserved by a small but vocal readership—no more. I think it has already assumed the strange, somewhat sour mantle of an “influential” classic, one whose most visible legacy will always be the way it presaged so much that came after. Once you read Arcturus, you’re always finding chunks of it here and there, like burning fragments of an exploded spaceship smoldering in a field. Its Mariana Trench pessimism turns up in Harlan Ellison and, with a paranoiac twist, in Philip K. Dick. Its deep exploration of reality through violence and sexuality brings to mind A Clockwork Orange and Dhalgren, and Maskull’s surrender into a metaphysical system vaster than himself hits on core conceits in much of Pynchon. And most obviously, science fiction as metaphor for our own world, our own souls, was a shocking and (to some) ugly experiment in Arcturus—but today it’s as common as grass.

I think the novel’s admirers want recognition for Arcturus because Lindsay’s life is always painted as one of frustration, where recognition for his accomplishments was continually withheld. And that’s true. But he also created a masterwork, and it seems weird to quibble with immortality, no matter how it comes. Even today, Lindsay’s first novel stands out in any literary landscape, casting a long shadow: an architecture phased in from a parallel dimension both alien and familiar.

Disarming the Weapons of Mass Distraction

By Madeleine Bunting

Source: Rise Up Times

“Are you paying attention?” The phrase still resonates with a particular sharpness in my mind. It takes me straight back to my boarding school, aged thirteen, when my eyes would drift out the window to the woods beyond the classroom. The voice was that of the math teacher, the very dedicated but dull Miss Ploughman, whose furrowed grimace I can still picture.

We’re taught early that attention is a currency—we “pay” attention—and much of the discipline of the classroom is aimed at marshaling the attention of children, with very mixed results. We all have a history here, of how we did or did not learn to pay attention and all the praise or blame that came with that. It used to be that such patterns of childhood experience faded into irrelevance. As we reached adulthood, how we paid attention, and to what, was a personal matter and akin to breathing—as if it were automatic.

Today, though, as we grapple with a pervasive new digital culture, attention has become an issue of pressing social concern. Technology provides us with new tools to grab people’s attention. These innovations are dismantling traditional boundaries of private and public, home and office, work and leisure. Emails and tweets can reach us almost anywhere, anytime. There are no cracks left in which the mind can idle, rest, and recuperate. A taxi ad offers free wifi so that you can remain “productive” on a cab journey.

Even those spare moments of time in our day—waiting for a bus, standing in a queue at the supermarket—can now be “harvested,” says the writer Tim Wu in his book The Attention Merchants. In this quest to pursue “those slivers of our unharvested awareness,” digital technology has provided consumer capitalism with its most powerful tools yet. And our attention fuels it. As Matthew Crawford notes in The World Beyond Your Head, “when some people treat the minds of other people as a resource, this is not ‘creating wealth,’ it is transferring it.”

There’s a whiff of panic around the subject: the story that our attention spans are now shorter than a goldfish’s attracted millions of readers on the web; it’s still frequently cited, despite its questionable veracity. Rates of diagnosis of attention deficit hyperactivity disorder in children have soared, creating an $11 billion global market for pharmaceutical companies. Every glance of our eyes is now tracked for commercial gain as ever more ingenious ways are devised to capture our attention, if only momentarily. Our eyeballs are now described as capitalism’s most valuable real estate. Both our attention and its deficits are turned into lucrative markets.

There is also a domestic economy of attention; within every family, some get it and some give it. We’re all born needing the attention of others—our parents’, especially—and from the outset, our social skills are honed to attract the attention we need for our care. Attention is woven into all forms of human encounter from the most brief and transitory to the most intimate. It also becomes deeply political: who pays attention to whom?

Social psychologists have researched how the powerful tend to tune out the less powerful. One study with college students showed that even in five minutes of friendly chat, wealthier students showed fewer signs of engagement when in conversation with their less wealthy counterparts: less eye contact, fewer nods, and more checking the time, doodling, and fidgeting. Discrimination of race and gender, too, plays out through attention. Anyone who’s spent any time in an organization will be aware of how attention is at the heart of office politics. A suggestion is ignored in a meeting, but is then seized upon as a brilliant solution when repeated by another person.

What is political is also ethical. Matthew Crawford argues that this is the essential characteristic of urban living: a basic recognition of others.

And then there’s an even more fundamental dimension to the politics of attention. At a primary level, all interactions in public space require a very minimal form of attention, an awareness of the presence and movement of others. Without it, we would bump into each other, frequently.

I had a vivid demonstration of this point on a recent commute: I live in East London and regularly use the narrow canal paths for cycling. It was the canal rush hour—lots of walkers with dogs, families with children, joggers as well as cyclists heading home. We were all sharing the towpath with the usual mixture of give and take, slowing to allow passing, swerving around and between each other. Only this time, a woman was walking down the center of the path with her eyes glued to her phone, impervious to all around her. This went well beyond a moment of distraction. Everyone had to duck and weave to avoid her. She’d abandoned the unspoken contract that avoiding collision is a mutual obligation.

This scene is now a daily occurrence for many of us, in shopping centers, station concourses, or on busy streets. Attention is the essential lubricant of urban life, and without it, we’re denying our co-existence in that moment and place. The novelist and philosopher Iris Murdoch writes that the most basic requirement for being good is that a person “must know certain things about his surroundings, most obviously the existence of other people and their claims.”

Attention is what draws us out of ourselves to experience and engage in the world. The word is often accompanied by a verb—attention needs to be grabbed, captured, mobilized, attracted, or galvanized. Reflected in such language is an acknowledgement of how attention is the essential precursor to action. The founding father of psychology William James provided what is still one of the best working definitions:

It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others.

Attention is a limited resource and has to be allocated: to pay attention to one thing requires us to withdraw it from others. There are two well-known dimensions to attention, explains Willem Kuyken, a professor of psychology at Oxford. The first is “alerting”— an automatic form of attention, hardwired into our brains, that warns us of threats to our survival. Think of when you’re driving a car in a busy city: you’re aware of the movement of other cars, pedestrians, cyclists, and road signs, while advertising tries to grab any spare morsel of your attention. Notice how quickly you can swerve or brake when you spot a car suddenly emerging from a side street. There’s no time for a complicated cognitive process of decision making. This attention is beyond voluntary control.

The second form of attention is known as “executive”—the process by which our brain selects what to foreground and focus on, so that there can be other information in the background—such as music when you’re cooking—but one can still accomplish a complex task. Crucially, our capacity for executive attention is limited. Contrary to what some people claim, none of us can multitask complex activities effectively. The next time you write an email while talking on the phone, notice how many typing mistakes you make or how much you remember from the call. Executive attention can be trained, and needs to be for any complex activity. This was the point James made when he wrote: “there is no such thing as voluntary attention sustained for more than a few seconds at a time… what is called sustained voluntary attention is a repetition of successive efforts which bring back the topic to the mind.”

Attention is a complex interaction between memory and perception, in which we continually select what to notice, thus finding the material which correlates in some way with past experience. In this way, patterns develop in the mind. We are always making meaning from the overwhelming raw data. As James put it, “my experience is what I agree to attend to. Only those items which I notice shape my mind—without selective interest, experience is an utter chaos.”

And we are constantly engaged in organizing that chaos, as we interpret our experience. This is clear in the famous Gorilla Experiment in which viewers were told to watch a video of two teams of students passing a ball between them. They had to count the number of passes made by the team in white shirts and ignore those of the team in black shirts. The experiment is deceptively complex because it involves three forms of attention: first, scanning the whole group; second, ignoring the black T-shirt team to keep focus on the white T-shirt team (a form of inhibiting attention); and third, remembering to count. In the middle of the experiment, someone in a gorilla suit ambles through the group. When asked afterward, half the viewers hadn’t spotted the gorilla and couldn’t believe it had been there. We can be blind not only to the obvious, but to our blindness.

There is another point in this experiment which is less often emphasized. Ignoring something—such as the black T-shirt team in this experiment—requires a form of attention. It costs us attention to ignore something. Many of us live and work in environments that require us to ignore a huge amount of information—that flashing advert, a bouncing icon or pop-up.

In another famous psychology experiment, Walter Mischel’s Marshmallow Test, four-year-olds were given a choice: eat one marshmallow immediately, or wait fifteen minutes and get two. While filmed, each child was put in a room alone in front of the plate with a marshmallow. They squirmed and fidgeted, poked the marshmallow and stared at the ceiling. A third of the children couldn’t resist the marshmallow and gobbled it up, a third nibbled cautiously, but the last third figured out how to distract themselves. They looked under the table, sang… did anything but look at the sweet. It’s a demonstration of the capacity to reallocate attention. In a follow-up study some years later, those who’d been able to wait for the second marshmallow had better life outcomes, such as academic achievement and health. One New Zealand study of 1,000 children found that this form of self-regulation was a more reliable predictor of future success and wellbeing than even a good IQ or comfortable economic status.

What, then, are the implications of how digital technologies are transforming our patterns of attention? In the current political anxiety about social mobility and inequality, more weight needs to be put on this most crucial and basic skill: sustaining attention.

*

I learned to concentrate as a child. Being a bookworm helped. I’d be completely absorbed in my reading as the noise of my busy family swirled around me. It was good training for working in newsrooms; when I started as a journalist, they were very noisy places with the clatter of keyboards, telephones ringing and fascinating conversations on every side. What has proved much harder to block out is email and text messages.

The digital tech companies know a lot about this widespread habit; many of them have built a business model around it. They’ve drawn on the work of the psychologist B.F. Skinner, who identified back in the Thirties how, in animal behavior, an action can be encouraged with a positive consequence and discouraged by a negative one. In one experiment, he gave a pigeon a food pellet whenever it pecked at a button and the result, as predicted, was that the pigeon kept pecking. Subsequent research established that the most effective way to keep the pigeon pecking was “variable-ratio reinforcement”: give the pigeon a food pellet only sometimes, and unpredictably, and you have it well and truly hooked.
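The logic of a reinforcement schedule is simple enough to sketch in a few lines of code. The toy simulation below is purely illustrative (the function names, the 10,000-peck run, and the one-in-five ratio are my own assumptions, not details from the article): it contrasts a fixed-ratio schedule, where every fifth peck pays out predictably, with a variable-ratio schedule, where each peck pays out with probability 1/5. The average payout rate is the same, but under the variable schedule every individual peck is a gamble, which is the property that keeps the pigeon, and the phone user, coming back.

```python
import random

def count_rewards(num_pecks, schedule, seed=1):
    """Run `num_pecks` pecks through a reinforcement schedule and
    count how many of them earn a food pellet."""
    random.seed(seed)  # fixed seed so the toy run is repeatable
    return sum(1 for peck in range(1, num_pecks + 1) if schedule(peck))

def fixed_ratio_5(peck):
    # Fixed-ratio 5: every fifth peck is rewarded; fully predictable.
    return peck % 5 == 0

def variable_ratio_5(peck):
    # Variable-ratio 5: each peck rewarded with probability 1/5, so the
    # long-run payout rate matches, but no single peck is predictable.
    return random.random() < 0.2

pecks = 10_000
print("fixed-ratio rewards:   ", count_rewards(pecks, fixed_ratio_5))
print("variable-ratio rewards:", count_rewards(pecks, variable_ratio_5))
```

Both schedules hand out roughly 2,000 pellets over 10,000 pecks; the behavioural difference lies entirely in the unpredictability of the variable schedule, not in the amount of reward.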

We’re just like the pigeon pecking at its button when we check our email or phone. It’s a humiliating thought. Variable reinforcement ensures that the customer keeps coming back. It’s the principle behind one of the most lucrative US industries: slot machines, which generate more profit than baseball, films, and theme parks combined. Gambling was once tightly restricted because of its addictive potential, but most of us now carry the attentional equivalent of a slot machine: in our pocket, beside our plate at mealtimes, by our pillow at night, even during a meal out, a play at the theater, a film, or a tennis match. Almost nothing is now experienced uninterrupted.

Anxiety about the exponential rise of our gadget addiction and how it is fragmenting our attention is sometimes dismissed as a Luddite reaction to a technological revolution. But that misses the point. The problem is not the technology per se, but the commercial imperatives that drive the new technologies and, unrestrained, colonize our attention by fundamentally changing our experience of time and space, saturating both in information.

In much public space, wherever your eye lands—from the back of the toilet door, to the handrail on the escalator, to the hotel key card—an ad is trying to grab your attention, and it does so by triggering the oldest instincts of the human mind: fear, sex, and food. Public places become dominated by people trying to sell you something. In his tirade against this commercialization, the philosopher Matthew Crawford cites advertisements on the backs of school report cards and on debit machines where you swipe your card: the gap of a few seconds before you enter your PIN is now used to show adverts. He describes silence and ad-free experience as “luxury goods” that only the wealthy can afford. Crawford has coined the concept of the “attentional commons”: free public spaces that allow us to choose where to place our attention. He draws an analogy with environmental goods that belong to all of us, such as clean air or clean water.

Some legal theorists are beginning to conceive of our attention as a human right. One former Google employee warned that “there are a thousand people on the other side of the screen whose job it is to break down the self-regulation you have.” They use insights into human behavior derived from social psychology: the need for approval, the need to reciprocate others’ gestures, the fear of missing out. Your attention ceases to be your own, pulled and pushed by algorithms. Attention, on this reckoning, is the real currency of the future.

*

In 2013, I embarked on a risky experiment in attention: I left my job. Over the previous two years, an inability to concentrate had crept up on me. I could no longer read beyond a few paragraphs. My eyes would glaze over and, even more disastrously for someone who had spent their career writing, I seemed unable to string together my thoughts, let alone write anything longer than a few sentences. When I try to explain the impact, I can only offer a metaphor: it felt as if my imagination and use of language were vacuum-packed, like a slab of meat coated in plastic. I had lost the ability to turn ideas around, to see them from different perspectives. I could no longer draw connections between disparate ideas.

At the time, I was working in media strategy. It was a culture of back-to-back meetings from 8:30 AM to 6 PM, and there were plenty of advantages to be gained from continuing late into the evening if you had the stamina. Commitment was measured by emails with a pertinent weblink. Meetings were sometimes as brief as thirty minutes and frequently ran through lunch. Meanwhile, everyone was sneaking time to battle with the constant emails, eyes flickering to their phone screens in every conversation. The result was a kind of crazy fog, a mishmash of inconclusive discussions.

At first, it was exhilarating, like being on those crazy rides in a theme park. By the end, the effect was disastrous. I was almost continuously ill, battling migraines and unidentifiable viruses. When I finally made the drastic decision to leave, my income collapsed to a fraction of its previous level and my family’s lifestyle had to change accordingly. I had no idea what I was going to do; I had lost all faith in my ability to write. I told friends I would have to return the advance I’d received to write a book. I had to try to get back to the skills of reflection and focus that had once been ingrained in me.

The first step was to teach myself to read again. I sometimes went to a café, leaving my phone and computer behind. I had to slow down the racing incoherence of my mind so that it could settle on a text and its gradual development of an argument or narrative thread. The turning point in my recovery was a five-week research trip to the Scottish Outer Hebrides. On the journey north of Glasgow, my mobile phone lost its Internet connection; I had cut myself loose, with only the occasional text or call to family back home. Somewhere on the long Atlantic beaches of those wild and dramatic islands, I rediscovered my ability to write.

I attribute that in part to a stunning exhibition I came across in the small harbor town of Lochboisdale, on the island of South Uist. Vija Celmins is an acclaimed Latvian-American artist whose work is famous for its astonishing patience. She can take a year or more to make a woodcut that portrays in minute detail the surface of the sea. A postcard of her work now sits above my desk, a reminder of the power of slow thinking.

Just as we’ve had a slow food movement, we need a slow thinking campaign. Its manifesto could be taken from the German poet Rainer Maria Rilke’s beautiful Letters to a Young Poet:

To let every impression and the germ of every feeling come to completion inside, in the dark, in the unsayable, the unconscious, in what is unattainable to one’s own intellect, and to wait with deep humility and patience for the hour when a new clarity is delivered.

Many great thinkers attest that they have their best insights in moments of relaxation, the proverbial brainwave in the bath. We actually need what we most fear: boredom.

When I left my job (and I was lucky that I could), friends and colleagues were bewildered. Why give up a good job? But I felt that here was an experiment worth trying. Crawford frames it well as “intellectual biodiversity.” At a time of crisis, we need people thinking in different ways. If we all jump to the tune of Facebook or Instagram and allow ourselves to be primed by Twitter, the danger is that we lose the “trained powers of concentration” that allow us, in Crawford’s words, “to recognize that independence of thought and feeling is a fragile thing, and requires certain conditions.”

I also took to heart the insights of the historian Timothy Snyder, who concluded from his studies of twentieth-century European totalitarianism that the way to fend off tyranny is to read books, make an effort to separate yourself from the Internet, and “be kind to our language… Think up your own way of speaking.” Dropping out and going offline enabled me to get back to reading, voraciously, and to writing; beyond that, it’s too early to announce the results of my experiment with attention. As Rilke said, “These things cannot be measured by time, a year has no meaning, and ten years are nothing.”

*

A recent column in The New Yorker cheekily suggests that all the fuss about the impact of digital technologies on our attention is nothing more than writers worrying about their own working habits. Is all this anxiety about our fragmenting minds a moral panic, akin to those that swept Victorian Britain over sexual behavior? Patterns of attention are changing, but perhaps that doesn’t much matter.

My teenage children read much less than I did. One son used to play chess online with a friend, text on his phone, and do his homework all at the same time. I was horrified, but he got a place at Oxford. At his interview, he met a third-year history undergraduate who told him he hadn’t yet read any books in his time at university. But my kids are considerably more knowledgeable about a vast range of subjects than I was at their age. There’s a small voice suggesting that the forms of attention I was brought up with could be a thing of the past; the sustained concentration required to read a whole book will become an obscure niche hobby.

And yet, I’m haunted by a reflection: the magnificent illuminations of the eighth-century Book of Kells have intricate patterning that no one has ever been able to copy, such is the fineness of the tight spirals. Lines are a millimeter apart. They indicate a steadiness of hand and mind—a capability most of us have long since lost. Could we be trading capacities for focus for breadth of reference? Some might argue that’s not a bad trade. But we would lose depth: the artist Paul Klee wrote that he would spend a day in silent contemplation of something before he painted it. Paul Cézanne was similarly known for his trance-like attention to his subject. Madame Cézanne recalled how her husband would gaze at the landscape; he told her, “The landscape thinks itself in me, and I am its consciousness.” The philosopher Maurice Merleau-Ponty described a contemplative attention in which one steps outside of oneself and immerses oneself in the object of attention.

It’s not just artists who require such depth of attention. Nearly two decades ago, a doctor teaching medical students at Yale was frustrated at their inability to distinguish between types of skin lesions. Their gaze seemed restless and careless. He took his students to an art gallery and told them to look at a picture for fifteen minutes. The program is now used in dozens of US medical schools.

Some argue that losing the capacity for deep attention presages catastrophe. It is the building block of “intimacy, wisdom, and cultural progress,” argues Maggie Jackson in her book Distracted, in which she warns that “as our attentional skills are squandered, we are plunging into a culture of mistrust, skimming, and a dehumanizing merging between man and machine.” Significantly, her research began with a curiosity about why so many Americans were deeply dissatisfied with life. She argues that losing the capacity for deep attention makes it harder to make sense of experience and to find meaning—from which comes wonder and fulfillment. She fears a new “dark age” in which we forget what makes us truly happy.

Strikingly, the epicenter of this wave of anxiety over our attention is the US. All the authors I’ve cited are American. It’s been argued that this debate represents an existential crisis for America because it exposes the flawed nature of its greatest ideal, individual freedom. The commonly accepted notion is that to be free is to make choices, and no one can challenge that expression of autonomy. But if our choices are actually engineered by thousands of very clever, well-paid digital developers, are we free? The former Google employee Tristan Harris confessed in an article in 2016 that technology “gives people the illusion of free choice while architecting the menu so that [tech giants] win, no matter what you choose.”

Despite my children’s multitasking, I maintain that vital human capacities—depth of insight, emotional connection, and creativity—are at risk. I’m intrigued as to what the resistance might look like. There are stirrings of protest with the recent establishment of initiatives such as the Time Well Spent movement, founded by tech industry insiders who have become alarmed at the efforts invested in keeping people hooked. But collective action is elusive; the emphasis is repeatedly on the individual to develop the necessary self-regulation, but if that is precisely what is being eroded, we could be caught in a self-reinforcing loop.

One of the most interesting responses to our distraction epidemic is mindfulness. Its popularity is evidence that people are trying to find a way to protect and nourish their minds. Jon Kabat-Zinn, who pioneered the development of secular mindfulness, draws an analogy with jogging: just as keeping your body fit is now well understood, people will come to realize the importance of looking after their minds.

I’ve meditated regularly for twenty years, but, curious as to how the practice is becoming mainstream, I went to an event in the heart of high-tech Shoreditch in London. In a hipster workspace with funky architecture, excellent coffee, and an impressive range of beards, a soft-spoken retired Oxford professor of psychology, Mark Williams, was talking about how multitasking carries a switching cost in focus and concentration. Our unique human ability to remember the past and to think ahead brings a cost: we lose the present. To counter this, he advocated a daily practice of mindfulness: bringing attention back to the body—the physical sensations of the breath, the hands, the feet. Williams explained how fear and anxiety inhibit creativity. In time, the practice of mindfulness enables you to acknowledge fear calmly and even to investigate it with curiosity. You learn to place your attention in the moment, noticing details such as the sunlight or the taste of the coffee.

On a recent retreat, I was beside a river early one morning and a rower passed. I watched the boat slip by and enjoyed the beauty in a radically new way. The moment was sufficient; there was nothing I wanted to add or take away—no thought of how I wanted to do this every day, or how I wanted to learn to row, or how I wished I was in the boat. Nothing but the pleasure of witnessing it. The busy-ness of the mind had stilled. Mindfulness can be a remarkable bid to reclaim our attention and to claim real freedom, the freedom from our habitual reactivity that makes us easy prey for manipulation.

But I worry that the integrity of mindfulness is fragile, vulnerable both to commercialization by employers who see it as a form of mental performance enhancement and to consumer commodification, rather than contributing to the formation of ethical character. Mindfulness as a meditation practice originates in Buddhism, and without that tradition’s ethics, there is a high risk of it being hijacked and misrepresented.

Back in the Sixties, the countercultural psychologist Timothy Leary rebelled against the conformity of the new mass media age and called for, in Crawford’s words, an “attentional revolution.” Leary urged people to take control of the media they consumed as a crucial act of self-determination; pay attention to where you place your attention, he declared. The social critic Herbert Marcuse believed Leary was fighting the struggle for the ultimate form of freedom, which Marcuse defined as the ability “to live without anxiety.” These were radical prophets whose words have an uncanny resonance today. Distraction has become a commercial and political strategy, and it amounts to a form of emotional violence that cripples people, leaving them unable to gather their thoughts and overwhelmed by a sense of inadequacy. It’s a powerful form of oppression dressed up in the language of individual choice.

The stakes could hardly be higher, as William James knew a century ago: “The faculty of voluntarily bringing back a wandering attention, over and over again, is the very root of judgment, character, and will.” And what are we humans without these three?

Challenges for Resolving Complex Conflicts

By Robert J. Burrowes

While conflict theories and resolution processes advanced dramatically during the second half of the 20th century, particularly thanks to the important work of several key scholars such as Professor Johan Galtung – see ‘Conflict Transformation by Peaceful Means (the Transcend Method)’ – significant gaps remain in the conflict literature on how to deal with particular conflict configurations. Notably, these include the following four.

First, existing conflict theory does not adequately explain, emphasize and teach how to respond in those circumstances in which parties cannot be brought to the table to deeply consider a conflict and the measures necessary to resolve it. This particularly applies in cases where one or more parties is violently defending (often using a combination of direct and structural violence) substantial interrelated (material and non-material) interests. The conflict between China and Tibet over the Chinese-occupied Tibetan plateau, the many conflicts between western corporations and indigenous peoples over exploitation of the natural environment, and the conflict between the global elite and ‘ordinary’ people over resource allocation in the global economy are obvious examples of a vast number of conflicts in this category. As one of the rare conflict theorists who addresses this question, Galtung notes that structural violence ‘is not only evil, it is obstinate and must be fought’, and his preferred strategy is nonviolent revolution. See The True Worlds: A Transnational Perspective p. 140. But how?

Second, existing conflict theory does not explain how to respond in those circumstances in which one or more parties to the conflict are insane. The conflict between Israel and Palestine over Israeli-occupied Palestine classically illustrates this problem, particularly notable in the insanity of Israeli Prime Minister Benjamin Netanyahu, Defense Minister Avigdor Lieberman and Justice Minister Ayelet Shaked. But it is also readily illustrated by the insanity of the current political/military leadership in the USA and the insanity of the political, military and Buddhist leaders in Myanmar engaged in a genocidal assault on the Rohingya. For a brief discussion of the meaning and cause of this insanity see ‘The Global Elite is Insane Revisited’.

As an aside, there is little point deluding ourselves that insanity is not a problem or even ‘diplomatically’ not mentioning the insanity (if this is indeed the case) of certain parties in particular conflicts. The truth enables us to fully understand a conflict so that we can develop and implement a strategy to deal with all aspects of that truth. Any conflict strategy that fails to accurately identify and address all key aspects of the conflict, including the insanity of any of the parties, will virtually certainly fail.

Third, and more fundamentally, existing conflict theory does not take adequate account of the critical role that several unconscious emotions play in driving conflict in virtually all contexts, often preventing its resolution. This particularly applies in the case of (but is not limited to) suppressed terror, self-hatred and anger which are often unconsciously projected as fear of, hatred for and anger at an opponent or even an innocent third-party (essentially because this individual/group feels ‘safe’ to the person who is projecting). See ‘The Psychology of Projection in Conflict’.

While any significant ongoing conflict would illustrate this point adequately, the incredibly complex and interrelated conflicts being conducted in the Middle East, the prevalent Islamophobia in some western countries, and the conflicts over governance and exploitation of resources in the Democratic Republic of Congo are superlative examples. Ignoring suppressed (and projected) emotions can stymie conflict resolution in any context, interpersonally and geopolitically, and it does so frequently.

Fourth, existing conflict theory pays little attention to the extinction-causing conflict being ongoingly generated by human over-consumption within the finite planetary biosphere (currently resulting in 200 species extinctions daily), a conflict sometimes inadequately framed as one caused by capitalism’s drive for unending economic growth in a finite environment.

So what can we do?

Well, to begin, in all four categories of cases mentioned above, I would use Gandhian nonviolent strategy to compel violent opponents to participate in a conflict transformation process such as Galtung’s. Why nonviolent and why Gandhian? Nonviolent because our intention is to process the conflict to achieve a higher level of need satisfaction for all parties and violence against any or all participants is inconsistent with that intention. But Gandhian nonviolence because only Gandhi’s version of nonviolence has this conflict intention built into it. See ‘Conception of Nonviolence’.

‘But isn’t this nonviolent strategy simply coercion by another name?’ you might ask. Well, according to the Norwegian philosopher, Professor Arne Naess, it is not. In his view, if a change of will follows the scrutiny of norms in the context of new information while one is ‘in a state of full mental and bodily powers’, this is an act of personal freedom under optimal conditions. Naess highlights this point with the following example: Suppose that one person carries another against their will into the streets where there is a riot and, as a result of what they see, the carried person changes some of their attitudes and opinions. Was the change coerced? According to Naess, while the person was coerced into seeing something that caused the change, the change itself was not coerced. The distinction is important, Naess argues, because satyagraha (Gandhian nonviolent struggle) is incompatible with changes of attitudes or opinions that are coerced. See Gandhi and Group Conflict: An Exploration of Satyagraha pp. 91-92.

To elaborate this point: Unlike other conceptions of nonviolence, Gandhi’s nonviolence is based on certain premises, including the importance of the truth, the sanctity and unity of all life, and the unity of means and end, so his strategy is always conducted within the framework of his desired political, social, economic and ecological vision for society as a whole and not limited to the purpose of any immediate campaign. It is for this reason that Gandhi’s approach to strategy is so important. He is always taking into account the ultimate end of all nonviolent struggle – a just, peaceful and ecologically sustainable society of self-realized human beings – not just the outcome of this campaign. He wants each campaign to contribute to the ultimate aim, not undermine vital elements of the long-term and overarching struggle to create a world without violence.

Consequently, given his conception of nonviolence, Gandhi’s intention is to reach a conflict outcome that recognizes the sanctity and unity of all life which, obviously, includes the lives (but also the physical and emotional well-being) of his opponents. His nonviolent strategy is designed to compel participation in a conflict process but not to impose his preferred outcome unilaterally. See Nonviolent Campaign Strategy and Nonviolent Defense/Liberation Strategy.

This can apply in the geopolitical context or in relation to ordinary individuals ‘merely’ participating in the violence of overconsumption. Using nonviolent strategy to campaign on the climate catastrophe or other environmental issues can include mobilizing individuals and communities to emulate Gandhi’s asceticism in a modest way by participating in the fifteen-year strategy outlined in The Flame Tree Project to Save Life on Earth, which his example inspired.

But even if we can use nonviolent strategy effectively to get the conflicting parties together, the reality is that suppressed and projected emotions – particularly fear, self-hatred and anger as mentioned above – or even outright insanity on the part of one or more parties may still make efforts to effectively transform the conflict impossible. So for conflict resolution to occur, we need individuals who are willing and able to participate with at least minimal goodwill in designing a superior conflict outcome beneficial to everyone concerned.

Hence, I would do one more thing in connection with this process. Prior to, and then also in parallel with, the ‘formal’ conflict process, I would provide opportunities for all individuals engaged in the process (or otherwise critical to it because of their ‘background’ role, perhaps as a leader not personally present at the formal conflict process) to explore in a private setting with a skilled ‘nisteler’ (who is outside the conflict process), the unconscious emotions that are driving their particular approach to the conflict. See ‘Nisteling: The Art of Deep Listening’. The purpose of this nisteling is to allow each participant in the conflict process to bring a higher level of self-awareness to it. See ‘Human Intelligence or Human Awareness?’

I am not going to pretend that this would necessarily be possible, quick, easy or even work in every context. Insane individuals are obviously the last to know they have a psychological problem and the least likely to participate in a process designed to uncover and remove the roots of their insanity. However, those who are trapped in a dysfunctional psychological state short of insanity may be willing to avail themselves of the opportunity. In time, the value of this aspect of the conflict resolution process should become apparent, particularly because delusions and projections are exposed by the person themself (as an outcome of the expertise of the person nisteling).

Obviously, I am emphasizing the psychological aspects of the conflict process because my own considerable experience as a nonviolent activist together with my research convinces me that understanding violence requires an understanding of the psychology that drives it. If you are interested, you can read about the psychology of violence, including the 23 psychological characteristics in the emotional profile of archetype perpetrators of violence, in the documents Why Violence? and Fearless Psychology and Fearful Psychology: Principles and Practice.

Ideally, I would like to see the concept of nistelers operating prior to, and then parallel with, focused attention on the conflict itself normalized as an inherent part of the conflict resolution process. Clearly, we need teams of people equipped to perform this service, a challenge in itself in the short-term.

If, however, conflicting parties cannot be convinced to participate in this process with reasonable goodwill, we can always revert to using nonviolent strategy to compel them to do so. And, if all attempts to conduct a reasonable conflict process fail (particularly in a circumstance in which insanity is the cause of this failure), to impose a nonviolent solution which nevertheless takes account of the insane party’s legitimate needs. (Yes, on just that one detail, I diverge from Gandhi.)

Having stated that, however, I acknowledge that only a rare individual has the capacity to think, plan and act strategically in tackling a violent conflict nonviolently, so considerable education in nonviolent strategy will be necessary and is a priority.

Given what is at stake, however – a superior strategy for tackling and resolving violent geopolitical conflicts including those (such as the threat of nuclear war, the climate catastrophe and decimation of the biosphere) that threaten human extinction – any resources devoted to improving our capacity to deliver this outcome would be well spent.

Provided, of course, that reducing (and ultimately eliminating) violence and resolving conflict is your aim.

In addition to the above, I would do something else more generally (that is, outside the conflict process).

Given that dysfunctional parenting is ultimately responsible for the behaviour of those individuals who generate and perpetuate violent conflicts, I would encourage all parents to consider making ‘My Promise to Children’ so that we start to produce a higher proportion of functional individuals who know how to powerfully resolve conflicts in their lives without resort to violence. If any parent feels unable to make this promise, then they have the option of tackling this problem at its source by ‘Putting Feelings First’.

If we do not dramatically and quickly improve our individual and collective capacity to resolve conflicts nonviolently, including when we are dealing with individuals who are insane, then one day relatively soon we will share the fate of those 200 species of life we drove to extinction today.


Biodata: Robert J. Burrowes has a lifetime commitment to understanding and ending human violence. He has done extensive research since 1966 in an effort to understand why human beings are violent and has been a nonviolent activist since 1981. He is the author of Why Violence? His email address is flametree@riseup.net and his website is here.

Robert J. Burrowes
P.O. Box 68
Daylesford, Victoria 3460
Australia

Email: flametree@riseup.net

Websites:
Nonviolence Charter
Flame Tree Project to Save Life on Earth
‘Why Violence?’
Feelings First
Nonviolent Campaign Strategy
Nonviolent Defense/Liberation Strategy
Anita: Songs of Nonviolence
Robert Burrowes
Global Nonviolence Network

Zig Zag Zen: An Interview with Author Allan Badiner

The Intersection of Psychedelic Spirituality and Buddhist Practice

By Jennifer Bleyer

Source: MAPS

Buddhism and psychedelic use have been linked since at least the 1950s, when influential thinkers and writers such as Allen Ginsberg, Jack Kerouac, and Alan Watts experimented with both as avenues toward understanding the mind. The tacitly acknowledged connection took a leap forward in 2002 with the publication of Zig Zag Zen: Buddhism and Psychedelics, a collection of essays, interviews, and articles edited by Allan Badiner, which examines the two realms and their similarities and differences. A new edition of Zig Zag Zen was published in 2015. Badiner, a contributing editor of Tricycle, is a longtime supporter of MAPS.—Jennifer Bleyer


How did you become involved in Buddhism?

I’d never had any interest or belief in any religion, but when I was in my early 30s, I spent a year traveling in India, and right before returning home I took some advice to enroll in a Buddhist meditation retreat in Sri Lanka. I hated it. My bones ached, the only food was stewed greens, the venue was overrun with bugs, and the bed was a blanket over wood boards. Suddenly, when the ten-day retreat was almost over, I felt free of any pain and almost ecstatic—and not just because I was leaving. The bugs were my relatives. I slept like a baby. I was overtaken by a subtle but persistent wave of ecstasy, and felt a diminished sense of separation from others.

And how were you exposed to psychedelics?

After that trip I returned to California, and the meditative glow eventually faded. But I had been “bitten” by the Buddhist bug. I took classes with a senior Buddhist monk, studied Pali, the original language of the Buddha, and earned a master’s degree at the College of Buddhist Studies, a small Theravada university in Los Angeles—all because I was trying to understand how to return to the blissful state I had experienced on the retreat. I was writing a column called “Mind and Spirit” for the LA Weekly while working on my Buddhist studies, and had a plan to interview Terence McKenna. He accused me of being an “armchair Buddhist” and challenged me to try sacred plants, such as psilocybin mushrooms. We became friends, and I visited him at his home in Hawaii, where he treated me to yagé, or ayahuasca, the so-called vine of the soul. Sometime later, I experienced MDMA and got to know Alexander Shulgin. I regularly enjoyed Friday night dinners at the Shulgin home, where the Bay Area psychedelic community gathered. So, indeed, I became formally and viscerally connected to both Buddhism and psychedelics.

How do you describe the relationship between Buddhism and psychedelics?

Both share an interest in the primacy of mind and present moment awareness, and while they are very different in character, the 1950s Beat Generation and the 1960s cultural revolution were both heavily influenced by Eastern wisdom traditions, including Buddhism, as well as LSD, psilocybin, and peyote. I think their relationship manifests in the human pursuit of evolution. Many people seek the compassionate wisdom of the Buddhist philosophy, also known as the Dharma, as well as the psychic reset and transformational power that certain plant substances offer. A kind of practical magic results when the “Zig” zags into Zen—when a time-tested philosophy and ethical system meets plant-assisted changes in consciousness.

Your book identified the complementary natures of Buddhism and psychedelics as facilitators of the “liberation of the mind.” How has it been received in the Buddhist community?

It was fascinating to me that, with only one exception, every American Buddhist teacher I interviewed had personally experienced psychedelics prior to getting into Buddhism. One of the most revered and respected teachers, Jack Kornfield, went so far as to say that were it not for LSD, he would never have been able to grasp the Dharma. It should be noted that Zig Zag Zen also presents the thinking of teachers who are clearly not fans of psychedelics. I was secretly hoping for Zig Zag Zen to ruffle a lot of Buddhist feathers, envisioning that the controversy would drive sales. When the book was released, the Buddhist community in general was like, “Buddhism, psychedelics, ok…so?” The anti-Zig Zag Zen rallies and Buddhist book boycotts I was imagining never materialized. (laughs)

You said in a recent interview with Tricycle that “psychedelic use is an issue for many contemporary Buddhists.” Why is that?

Anyone who becomes seriously focused on spiritual development has to at least consider the issue of psychedelic use. Everyone knows someone who, after taking acid or magic mushrooms, or attending a peyote sweat lodge, experienced themselves as changed forever for the better. In the mid-20th century, famous Buddhist writers like Alan Watts and Ram Dass popularized psychedelics, even as Buddhist centers were filled with young people who had experienced psychedelics and were eager to find more practical and gentle routes to the same “destination.” Added to the question of psychedelic use is the realization that we live in a critical time ecologically. The sixth great extinction is underway. Who would have thought that we would live to see the extinction of elephants, or tigers, or orangutans? Coastal cities are experiencing unremitting flooding, and the end of the ice caps is a planetary inevitability. Recognizing the interbeing of people and planet is the fundamental awakening of our time. Buddhists are enlightened by the extent of their compassion—for themselves, for other people, for all living beings, and for the planet itself.

Isn’t there a Buddhist rule about not using intoxicants?

Buddhist precepts are not hard rules or commandments but guiding principles meant to facilitate progress on the path. Buddhists refrain from killing, taking what is not given, sexual misconduct, and incorrect speech. According to Robert Thurman, the chair of Buddhist studies at Columbia University, the Buddhist fifth precept, which is interpreted by some as prohibiting all substance use, specifically refers to grain alcohol, which was a problem even in the Buddha’s day as it’s likely to lead to carelessness, and to the user violating the other four precepts. Obviously, one can misuse many substances to the point of intoxication, but it is not correct to say generally that psychedelics are intoxicants.

How did you get involved with MAPS?

I met Rick Doblin through mutual friends when he was a college student, and he had just attended a psychedelic conference at Esalen Institute, my neighbor in Big Sur. He was very excited about the emergence of a psychedelic culture consisting of scientists, physicians, caregivers, and psychiatrists. He had personally experienced the healing power of psychedelics and told me that he was dedicating his life to making these materials legal and respected for their helpful effects. I promptly told him he was hallucinating, so to speak, and that he should get a job, maybe in academia. These were the Reagan years. Attitudes about cannabis and psychedelics in the ’80s were harshly negative. It is a profound testament to the power of psychedelics and of Rick’s formidable persistence that now the government is approving clinical testing of psychedelic drugs for medical use. Rick’s dream is coming true. The physical, mental, and emotional healing possible with psychedelics makes this pursuit a moral imperative. Rick does not consider himself a Buddhist, but he is definitely in the business of relieving suffering—and that is the primary goal of Buddhism.

Why are you so passionate about MAPS’ work?

Well, having entered on the ground floor, I’ve watched MAPS grow from the moment Rick spoke about his vision in my driveway, to its emergence as an amazing organization making epic and sorely needed change. It is poised to open the first non-profit pharmaceutical company, turning psychedelics and cannabis into prescription medicines; training therapists to practice psychedelic-assisted therapy; building a network of clinics; and educating the public about the risks and benefits of these substances. As the psychologist Ralph Metzner, a contributor to Zig Zag Zen, points out: “Two of the most beneficent potential areas for application of psychedelic technologies are in the treatment of addictions and in the psycho-spiritual preparation for the final transition.” I feel totally aligned with this vision.

Ultimately, what do you hope to achieve through your support of MAPS?

I hope to play a role in helping MAPS raise the funds required for the Phase 3 trials of MDMA as a treatment for PTSD—the critical step to becoming a prescription medicine. Like everyone, I see so many people suffering around me. I have confidence that psychedelics can be a serious medicine, as well as a powerful tool for personal self-development. The Anthropocene—the age of human-driven change to the Earth’s natural systems—has ushered in a new urgency for shamanic and psychedelic tools. The clock is ticking. We need all the help available to foment an evolution in our relationships with our neighbors, neighboring nations, and the planet.

8 Signs of a Mind Infected by Political Malware

By Jordan Bates

Source: High Existence

Your mind is similar to a computer.

Your brain is the hardware, your worldview the software.

The operating system you’re running is heavily influenced by your culture, upbringing, education, and many other factors.

Arguably, a well-functioning mind is a mind that can update its operating system.

As new information comes in, a healthy mind will revise its previous conclusions about the world to account for the new data.

The smartest people in the world do this: They’re constantly reading, tinkering, experimenting, and in the process updating their understanding of the world.

After all, the more accurate your models are, the better decisions you’ll make, and the more success you’ll have.

This holds true in virtually every area of life. As the renowned economist John Maynard Keynes put it:

“When my information changes, I alter my conclusions. What do you do, sir?”

Dogma as Malware

Armed with this understanding, we can see that an unhealthy mind is a mind that does not or cannot update itself.

Instead of expanding and revising its models to reflect new information, it will warp and misshape the data to force-fit its existing models.

This problem is captured nicely by a favorite folk saying of the brilliant billionaire investor Charlie Munger:

“To the man with only a hammer, every problem looks like a nail.”

What causes a mind to misfire in this way?

In a word, dogma: absolute belief of any kind.

When the mind is convinced that something is incontrovertibly true, it ceases to update its views on that area of reality.

Any dogmatic ideology, then, can be seen as a kind of malware, or virus, attempting to infiltrate our mental computers.

Dogmatic ideologies—religious, political, or otherwise—are essentially trying to convince your mind to freeze into a certain shape and remain that way for the rest of your life.

As previously discussed, to allow one’s mind to freeze is generally disastrous, as a mind incapable of updating itself will tend to adapt very poorly to a complex world.

Unfortunately, certainty feels comfortable to us. It makes us feel like we’re in control, like we’ve got it all figured out. As a result, many minds are frozen by dogmatic malware.

This is an unfortunate state of affairs, as we humans can’t really afford to be non-adaptive at this point in history. We’re facing dire challenges, and we need our collective intelligence and decision-making to be as sharp as possible.

8 Symptoms of Political Malware

One way to avoid getting mind-pwnd by dogmatic malware is to learn to recognize the warning signs.

If you can notice other people’s malfunctioning operating systems, you’re much more likely to be able to debug your own.

To hopefully help you do this, I’m going to outline eight telltale symptoms of a brain that’s been compromised by dogmatic political malware.

Political malware is far from the only form of dogma-malware lurking in the world today, but it’s sufficiently common that it should be a useful case to focus on and learn to recognize. And, naturally, many of these points can be extended to other domains.

Here are eight common symptoms of a brain-computer infected by political malware:

1. Inability to explain the arguments or evidence that led to current conclusions.

High-functioning minds don’t just believe things because they feel good or because someone told them to. They require evidence and well-reasoned arguments to support their positions.

If a person is unable to explain the evidence and/or arguments that convinced them of a particular political conclusion, it’s highly likely that they hold that belief simply because their political tribe does.

2. Never says, “I don’t have an opinion on this because I haven’t done enough research and thinking on it.”

Dogmatic, non-adaptive minds tend to have an opinion on everything. Even if they haven’t thought about a given issue for themselves, they just default to whatever opinion is popular with their tribe.

Healthy minds, by contrast, are extremely humble. They realize the world is ridiculously complex and that it’s actually impossible to have an informed opinion on everything. They are honest about what they don’t know, and they realize they should be cautious about forming opinions because humans are so good at deluding themselves and jumping to premature conclusions.

As the genius physicist Richard Feynman put it:

“The first principle is that you must not fool yourself — and you are the easiest person to fool.”

3. Treats affiliation like a badge of honor.

Whatever they happen to be—Republican or Democrat, radical or centrist, libertarian or fascist, conservative or liberal—you know it. Because they advertise it.

They’re proud to be a member of their particular team. But when a person is really proud to be part of something that requires them to hold certain beliefs, what are the chances that they’re going to be able to update those beliefs as they encounter new information? Slim to none. Sharp minds value truth over team and tend not to have strong political affiliations.

4. Views don’t change over time.

Ask a dogmatic person their thoughts on a certain political issue, then ask them again in five years. You’ll almost surely get the same answer. No added nuance, no “Well, I thought about this more and my take is a little bit different now.” Just the same old scripts, repeated ad nauseam.

5. Quickly becomes hostile in political conversations.

The thing about joining a political tribe and thus making your politics a really deep, important part of your identity is that it becomes extremely difficult to have a calm conversation about ideas. 

When you challenge a dogmatic political mind, you’re not just challenging their ideas. You’re challenging their tribe, their identity: the cornerstone of their sense of security in this universe. Naturally, this often doesn’t go over so well.

Healthy minds, by contrast, are interested in the truth, or the best solution, rather than preserving their sense of tribal pride. Therefore they can entertain multiple positions on a single issue without having their feathers ruffled. For them, ideas are just ideas, and they want to find as many good ideas as possible, let them do battle, and determine which are the best.

6. Absolute faith in the correctness of their own views.

There’s a reason Jordan Greenhall uses the terms “Blue Church” and “Red Religion” to describe the two major political monoliths vying for power in the West.

He’s not the first person to notice that for many people, politics has become a form of religion. With the secularization of the West in recent history, it’s not a surprise that people’s religious drives have been diverted into another dogmatic domain.

Adaptive minds, by contrast, expect to be wrong. The idea that they’ve somehow reached the Final Truth of reality seems ludicrous.

“You should take the approach that you’re wrong. Your goal is to be less wrong.”

― Elon Musk

7. Displays an “If you disagree with me, you must be my enemy” mentality.

For highly dogmatic minds, any disagreement is interpreted as an act of war. If you disagree with them, or even offer an alternate possibility, you must not be on their team, and if you’re not on their team, you must be on an opposing team—an enemy.

This black-and-white thinking is made all the worse when a country has just two major political parties, as in the case of the United States. In a well-functioning two-party system, the two parties should at least be able to cooperate, compromise, and realize everyone is ultimately seeking to improve the country, despite disagreeing about how best to do that. Unfortunately, in the profoundly divisive and polarized US political climate of 2018, bipartisan cooperation and understanding have become impossible for many people. This is a grim omen of things to come.

Adaptive minds realize that disagreement is healthy, and that talking through disagreements presents an opportunity to learn and refine one’s views. They furthermore understand that black-and-white thinking fails to account for the complexity of the world. They see that it is unwise to rigidly categorize someone as an enemy or as a member of a certain tribe based on a couple of their positions, considering there are potentially infinite positions one could take on any given issue.

8. All viewpoints are identical to those of a single political camp.

If you can guess a person’s positions on climate change, social welfare, immigration, and gun control, based on their position on some unrelated issue like abortion, you can be fairly certain that they’ve inherited tribal dogmas, rather than forming their own conclusions.

The appeal of subscribing to a dogmatic ideology is that there is an answer for everything. You just repeat the views that are popular with your tribe, and you never have to go to the trouble of analyzing individual issues for yourself.

Active minds, by contrast, hold complex, nuanced, unpredictable views, because they analyze each issue independently. They seek out the best arguments and evidence supporting different positions on the issue, and they form their own conclusions. Or often they’re agnostic on certain issues, because they’ve confronted the true complexity and don’t feel confident enough to favor one compelling view over another.

Conclusion: Activate Your Mind

A healthy mind is a mind that updates itself based on new arguments and evidence.

Cultivating this form of mental health will serve you well in all areas of life. It’s also arguably something that we need more people to do, if we hope to continue to flourish as a species and help other earthly species to flourish.

Humanity currently finds itself in the midst of unprecedented global changes. In such complex and unpredictable times, we surely need to be adaptable and open to good ideas, wherever they may come from. We are gaining the technological power of gods, but without the wisdom and care of gods to accompany this power, we are likely to wield it in disastrous ways.

Gaining the wisdom and care of gods begins with each of us: with our individual decisions to activate our minds—to actively pursue greater knowledge, wisdom, and understanding.

Hopefully this post has offered you some mind-activating inspiration and direction. The need for individuals to take their education and cognitive empowerment into their own hands extends far beyond politics. The degree to which we are collectively successful in this endeavor may well determine whether we create a utopia or an apocalypse in the coming decades and centuries.

All of this is to say that your mind matters. Take good care of it. Best of luck.

“An Enthusiastic Corporate Citizen”: David Cronenberg and the Dawn of Neoliberalism

(Editor’s note: In commemoration of director David Cronenberg’s 75th birthday we present this compelling and socially relevant analysis of his filmography.)

By Michael Grasso

Source: We Are the Mutants

The cinematic corpus of David Cronenberg is probably best known for its expertly uncanny use of body horror, but looming almost as large in the writer-director’s various universes is the presence of faceless, all-powerful organizations. Like his rough contemporary Thomas Pynchon and the conspiracies that litter Pynchon’s early works—V. (1963), The Crying of Lot 49 (1966), and Gravity’s Rainbow (1973)—Cronenberg’s shadowy organizations offer fodder for paranoid conspiracy. These conspiracies operate under the cloak of beneficent academic institutes and, in his later work, corporations. The transition from institutes to corporations occurred during Cronenberg’s late ’70s and early ’80s output, specifically the trio of films The Brood (1979), Scanners (1981), and Videodrome (1983).

It is no coincidence that, at this particular time, international finance and prevailing political winds helped put the corporation in society’s driver’s seat. In Adam Curtis’s recent documentary film HyperNormalisation (2016), he notes how the default of the city of New York in 1975 opened the door for private investment and the finance industry to get their hands on municipal governance on a large scale for the first time, and how this creaked open the door for the Thatcher-Reagan privatization wave in the ’80s. These last few “hinge” years of the 1970s offered the last chance for a real alternative to the coming neoliberal revolution. Soon, all alternatives for governance in the name of the public good were destroyed. Corporatism tightened its grip on the Western polity.

Cronenberg’s early eerie organizations—the “Canadian Academy of Erotic Enquiry” from Stereo (1969) and the panoply of gruesome academic and cosmetic conspiracies in his Crimes of the Future (1970)—eventually yielded to corporations like Scanners’ ConSec and Videodrome’s Spectacular Optical. In these early works, Cronenberg’s mysterious organizations are headed by visionary (mad) geniuses. In 1975’s Shivers, experiments by a lone mad scientist infect an entire apartment building with parasites, which awaken dark impulses in the building’s residents and spread themselves through sexual violence. But as the decade went on, Cronenberg slowly backed away from utilizing the character of a singular scientific genius harboring a twisted vision of the future. Now, organizations sought to pull the strings from the shadows. The key transitional work in this chronology is the sometimes-overlooked The Brood from 1979.

In the film, Oliver Reed plays esteemed psychologist Dr. Hal Raglan, who has developed a method of exorcising deep-seated psychological issues using a technique called “psychoplasmics.” In intense one-on-one sessions reminiscent of psychodrama, Raglan is able to physically remove trauma from the human body in the form of ulcers, rashes, and, we eventually discover, cancer. In the ultimate reveal, it’s shown that Raglan has helped traumatized patient Nola Carveth (Samantha Eggar) to birth violent, deformed homunculi who go out into the world, psychically connected to her, in order to resolve her childhood abandonment issues and abuse with bloody murder. Raglan’s foundation, the Somafree Institute of Psychoplasmics (its name simultaneously evocative of Aldous Huxley’s perfect drug soma, and reminiscent of fringe psychological research like Wilhelm Reich’s orgone theory) inhabits a modernist chalet far outside the city of Toronto. Non-resident patients have to be bussed in. Raglan’s public reputation is that of an eccentric, but effective, therapist. At several points in the film we see the covers of Raglan’s presumably best-selling The Shape of Rage. (Curiously, a decade later, in 1990, a documentary titled Child of Rage would be released covering the controversial use of “attachment therapy.”)

As depicted in the film, Somafree is not a corporation. But the thematic threads surrounding Raglan and his Institute are based on real-life trends in the 1970s. In its practices and in the person of Raglan, Somafree resembles psycho-intensive institutes like Esalen, self-improvement organizations like Lifespring, and personalities like Werner Erhard. Erhard’s est movement used primal abuse to ostensibly create psychological breakthroughs, helping the “patient” become more assertive, more powerful, less prone to obeying impulses caused by their early traumas. There is also the real-life analogue to the psychological method that Raglan employs: psychodrama. In the 1970s, new methods of conflict resolution pioneered in places like Esalen were beginning to seep into the mainstream of North American society. These methods soon spread into the corporate world as a purported means of defusing tensions at work and making an office more productive. The “encounter group” soon became a punchline, but the principles behind the Age of Aquarius’s more touchy-feely psychodynamic methods soon became part of the warp and weft of corporate culture in the ’80s and well beyond.

Nola’s estranged husband Frank interviews a former Raglan patient, Jan Hartog, in an attempt to discredit Somafree so Frank can regain custody of his daughter. This patient bears the scars of Raglan’s work on him: a lymphatic cancer sprouting from his neck (an eerie foreshadowing of the coming of another mysterious lymphatic disorder that would soon break out all over North America). Hartog plans to sue, not to achieve victory in a courtroom, but to destroy Raglan’s reputation. It doesn’t matter if they win, Hartog says, because “They’ll just remember the slogan. Psychoplasmics can cause cancer.” The 1970s saw an increased awareness of the carcinogens that surrounded us in the late-industrial West—cigarettes, sweeteners, food dyes, and pesticides—thanks in large part to the nascent environmental and consumer rights movements, which faced off against corporations using the weapons of negative publicity.

By the time we get to Scanners in 1981, we are fully invested in a world of shadowy corporate overlords. A huge multinational security firm, ConSec, tries to shepherd psychics called “scanners,” ostensibly to help them control their powers, but also to utilize and exploit their paranormal abilities. Protagonist Cameron Vale (Stephen Lack) is apprehended off the streets, where, due to his psychic pain, he’s living as a derelict. We learn that scanners don’t “fit in” with society. When Vale is given the inhibitive drug ephemerol by ConSec’s head of scanner research, Dr. Paul Ruth (Patrick McGoohan), he is able to get himself together and is even given a new proto-yuppie wardrobe and mission by ConSec: eliminate rogue scanner Darryl Revok (Michael Ironside). But as Vale accepts his mission and new identity, he finds himself enlisted in ConSec’s private war against renegade scanners. When he runs into an emerging cell of scanners who are forming a powerful “group mind” in a New Age-like encounter session, assassins controlled by Revok murder most of the cell. “Everywhere you go, somebody dies,” one of the hive mind tells Vale, who is complicit with ConSec’s need to exert corporate control over scanners, including the use of violence as part of the corporate mission. Meanwhile, ConSec itself is riddled with moles working with Revok. Indeed, a chemical and pharmaceutical company called “Biocarbon Amalgamate,” founded by Dr. Ruth but now infiltrated by Revok, manufactures ephemerol in massive quantities. Scanners recontextualizes the Cold War espionage “wilderness of mirrors” in terms of corporate espionage for a new age of corporate domination. (It’s no coincidence that Cronenberg cast McGoohan, one of the Cold War’s most famous fictional spies, in the role of Dr. Ruth.)

ConSec’s corporate mission is revealed in a board meeting when the new head of security says, “We’re in the business of international security. We deal in weaponry and private armories.” This head of security also tells Dr. Ruth, “Let us leave the development of dolphins and freaks as weapons of espionage to others.” To the new breed of ConSec executive, fringe ’70s research is a thing of the past, despite its obvious power and relevance. The future is in fighting proxy wars, ensuring private security for the wealthy, and providing mercenary security forces. ConSec in this way is like many other private security firms that first emerged in the 1970s and ’80s. Begun as an outgrowth of post-colonial British military adventurism, the private military company soon became a way for ex-military officers to assure themselves a handsome post-service sinecure in a new era where hot wars were a thing of the past. “Brushfire wars” would continue to ensue, ensuring these companies an expanding portfolio, both in the waning years of the Cold War and in the 1990s and beyond. In fact, it’s interesting to note that many of the real-world military’s supposed psychic assets themselves got into private security after the U.S. Army shut down fringe science projects like Project STARGATE. Art imitates life imitates art.

Videodrome expands Cronenberg’s conspiratorial corporate, military, and espionage worldview into the rapidly exploding world of the media in the early ’80s. Leaps forward in technology, all of which are explicitly called out in Videodrome, litter the film’s visual landscape. Cable television, satellite transmissions (and the attendant hacking thereof), video cassette recorders, the rise of video pornography, virtual reality, postmodern media theory, and violence in entertainment all play essential roles in the film. Max Renn’s (James Woods) tiny Civic TV/Channel 83 (itself based on groundbreaking independent Toronto television station CityTV) is trying to survive as best it can in a world of massive international media players. Ever seeking the latest hit that will tap into the public’s unending hunger for sex and violence, his on-staff “satellite pirate” Harlan delivers the mysterious Videodrome transmission. Harlan is later revealed to be working with the Videodrome conspiracy, having intentionally exposed Max to the signal. In a memorable speech, Harlan nails Max’s amoral desire to sell sex and violence to his viewers: “This cesspool you call a television station, and your people who wallow around in it, and your viewers who watch you do it; you’re rotting us away from the inside.” When Renn is deep into his Videodrome-triggered hallucinations, he is offered corporate “help” much as Cameron Vale was. This time, his “savior” is Barry Convex, a representative of Spectacular Optical. In his video message to Max, he, like the ConSec executive before him, lays out Spectacular Optical’s corporate mission:

I’d like to invite you into the world of Spectacular Optical, an enthusiastic global corporate citizen. We make inexpensive glasses for the Third World… and missile guidance systems for NATO. We also make Videodrome, Max.

The final form of the military-industrial-entertainment complex is laid bare. Videodrome’s intent is to harden and make psychotic a North American television audience who’ve “become soft,” as Harlan puts it. Renn’s hallucinations are recorded, and he is literally “reprogrammed” to kill Civic TV’s board (thanks to the memorable hallucinatory image of Convex sticking a VHS tape into Renn’s gut). Renn is then reprogrammed to retaliate and assassinate Convex by the much more ’70s-cult Cathode Ray Mission of “media prophet” Brian O’Blivion, whose postmodern, expressly McLuhanesque view of television’s place in the world allowed Videodrome to come into existence in the first place: “I had a brain tumor and I had visions. I believe the visions caused the tumor and not the reverse… when they removed the tumor, it was called Videodrome.” It’s also worth noting that O’Blivion tells us that Videodrome made him its first victim; postmodern criticism of the medium of television is no match for its violent, cancerous growth.

The deregulation of media in the U.S. in the Reagan years is common knowledge; rules around children’s television were especially eviscerated, which allowed for an explosion in violent, warlike cartoons based on popular toy lines, training a new generation for a lifetime of endless war. Combined with the aforementioned explosion of video technology, the laissez-faire environment shepherded by Reagan’s FCC allowed a new breed of cable television magnates to get rich and created a television and media landscape with a relatively friction-free relationship to government. By the time the first Gulf War broke out in 1991, war provided the cable news networks with surefire ratings and cable news provided the propaganda platform for the war effort, a mutually beneficial (and Cronenberg-esque) symbiosis that’s continued to metastasize through multiple subsequent wars in the Middle East. The world of Videodrome, the one Harlan evokes where America will no longer be soft in a world full of tough hombres, has finally come to fruition thanks in part to all of our enmeshment in the video arena—the video drome.

After Videodrome—in The Fly (1986), Dead Ringers (1988), and Crash (1996)—Cronenberg focuses less on sinister organizations and more on monomaniacal researchers, doctors, and fetishists who pursue their individual idiosyncratic agendas through the director’s trademark twisting mindscapes (and bodyscapes). With the exception of eXistenZ (1999), Cronenberg’s meditation on computer technology and gaming released amidst the first dot-com bubble, and his Occupy-influenced adaptation of Don DeLillo’s 2003 novel Cosmopolis (2012), he has retreated from a more overt suspicion of corporations and shadowy conspiracies. His warning about these invisible masters pulling the strings of society came during the time period when something could have been done about corporate hegemony. But now, the conspiracy operates in the open. We are now all of us the dumb, trusting Cronenberg protagonist, lulled into a false sense of security by a series of “enthusiastic corporate citizens.” Long live the new flesh.