What Do You (Think You) Really Know?

On the need for maintaining a curious spirit.

By Tom Bunzel

Source: The Pulse

“When in doubt observe and ask questions.  When certain, observe at length and ask many more questions.”  — George Patton

Once again, I thought I would follow up on Joe Martino’s recent discussion of the huge gaps (and worse) in many people’s understanding.  This is such an important point, first made by Socrates: you don’t know what you don’t know.

Joe mentioned the Dunning-Kruger effect, in which people with relatively little knowledge overestimate what they in fact know.  Joe’s piece used the example of brand influencers, who tend to become very sure of their pronouncements because, of course, they have a vested interest in them.

This speaks to the conditioning that takes place in corporate environments, similar to how an individual accumulates a self or Ego.

One thing that happens with people who are sure of themselves is that they frequently break off mentally and become separate from any sense of Wholeness (to defend their position).  This crystallizes the ego, which, in many insecure people, hardens with each challenge – something we can see in our current politics.

It’s also obvious that whatever we think we know is generally another thought.  The exception is if it is a bodily sensation or emotion which is purely felt, without interpretation.

Unfortunately, most people are also heavily invested in feeling good – or avoiding any discomfort – and the discipline of sensing emotions in the body and allowing them to be experienced without judgement is quite foreign.  I say it’s unfortunate because in my recent experience, it is the one way to begin to heal trauma stored in the body.

And speaking of the body, this is where we can surely go even a step further.

We Don’t Know What We Don’t Know that We Don’t Know

Things get even more complicated when you focus on the real mystery.  The vastness of the gaps is staggering.

An example of this is qualia – defined as “the internal and subjective component of sense perceptions, arising from stimulation of the senses by phenomena.”

What this really comes down to is our Experience, which is somehow created out of our Awareness of whatever is happening.  But qualia for humans are more subtle and indefinable – scientists have been unable to explain, for example, why one wine tastes dry and another fruity.

How does the subjective part – the judgment – come into (our?) awareness?

Neuroscience can now identify and label the various components of the experience in terms of physics and biochemistry, but it cannot explain how “we” experience things like awe, gratitude and so on.

And for the sense of “someone” experiencing any phenomena they can only give it another label:  Consciousness.  

When trying to define “ourselves” or the experience of being, our language has proven so inadequate (partly due to the subject/object grammatical bias of English and most modern languages) that, to the extent any speculations or theories are wrong, we probably don’t even have the capability to comprehend what makes them so incomplete.

Attempts to pin the experience of the self down scientifically have fallen short, as I described in “AI and the Hard Problem of Consciousness”.

Where is the Ground of Our Experience?

This conundrum exists because science deals with facts and certainty and our experience actually seems to arise in a space other than that which science can adequately define.

But still, we are deeply conditioned to believe that our experiences, and our thoughts about what has happened, may happen or is happening, comprise a separate self or identity, one that is often surmised to exist in the brain.

But recent neurological advances have failed to locate any physical or even biochemical basis for a separate self.

If we return once again to the body, we can also see that all of our senses, and even thoughts, can be reduced to biochemical reactions.  We can now view them as information passing from the receptor to the brain, gut or heart.

Since this information is based on very specific individual parameters, the results are almost by definition finite and fallible.  What we experience is by no means a universally known phenomenon. We generally experience what we’ve been conditioned to experience, which separates us from other humans, and for that matter from all other life forms.

Birds can see better than we can; whales, dolphins and bats live on sound; and most animals can hear things humans can’t.  If you have a cat you know that its world is nothing like yours.

And now, even though humans have invented incredible instruments that augment our senses and offer a glimpse across vast apparent distances from our planet out into the cosmos, we are confronted more and more with phenomena that we cannot explain.

(I wrote “apparent distances” because our view of “outer” space is always, inevitably a subjective experience that seems to create an image within “us” – presumably within our brains from a signal through the optic nerve.  I often suspect that even the way we perceive “outer space” is a function of our limited sensory and intellectual capacities).

The irony is that it is Science that now points most effectively to what we don’t even know that we don’t know, and yet it is the same science that is the cause of so much human hubris and delusion.

Before Quantum Mechanics was discovered and Einstein’s theories verified, we didn’t know that we had no clue about matter and energy.  Because a lot in the quantum world doesn’t make “sense,” there are presumably more vast areas where we can’t even comprehend our own ignorance.

I dealt with some of these issues when I wrote about Robert Lanza’s theory of Biocentrism.  Science seems to be dragged kicking and screaming into a new paradigm where the self we believe in doesn’t really exist – and what WE ARE (not what we have or do) is aligned intimately with Nature, or our environment, or whatever label one might want to use in what is really an infinitely sacred Mystery.

The Limitations of Current Scientific Labels

Another example would be biology, where for centuries all life was thought of as either animal or plant.  Then microbes were found, and eventually viral agents, and the line between life and the “objective” world blurred.

Returning our attention to qualia, and consciousness, again these personal “events” are defined as subjective experiences.

Science can’t really explain subjectivity.  As previously noted, that may well be because explanations involve a subject and an object (as conditioned by our language and grammar) but what if everything is just an arising in Consciousness (subjectivity)?

If we take the concept of “Wholeness” literally – there cannot in fact be anyone outside of the whole to have a subjective experience.

This is the essence of “Nonduality” – a modern popular philosophical movement.

Our interpreted experience is composed of thoughts, words and feelings.  But where do these occur?

The body and the brain are the short answers, the ones those subject to the Dunning-Kruger effect would grab hold of, but upon deeper examination “your” experience of your hands, for example, takes place visually and tactilely; you can both see your hands and feel them.  But how is that happening, and by whom?

And what makes them “yours”?  What is it that differentiates “you” from everything else you see or feel?

If you think about it, a separate “identity” was not originally within awareness when your body was first born.  It started when someone told you ‘your’ name.

And ever since, the narrative of a separate person, made up mainly of thoughts and memories, has accumulated more knowledge based on that one erroneous assumption of separation, culminating in an illusory experience of “you.”

How do we make the bulk of humanity aware of this delusion?  I wonder if it is not central to the issue of what may now become “Disclosure,” where our psychological world seems poised to explode in ways we cannot know that we do not know.  And as the ’70s comedy troupe the Firesign Theatre once said: “Everything you know is wrong.”

I would think that under the circumstances the most appropriate position to take in many instances is what Eckhart Tolle recommends – deep acceptance of not knowing.

Moreover, in the face of such overwhelming evidence of our ignorance, we might be better advised to ask very deep questions and, as Joe Martino has also mentioned, allow silent sensing to bring us a response (perhaps not even an answer) that could bring up physical emotions or sensations, but no actual conclusion.

I tried to use that technique in the book I wrote recently, “Conversations with Nobody” – written with AI, about AI, and giving a taste of AI.

Because the format of the book was an apparent “conversation” with a nonhuman intelligence the questions I posed (or prompts) were actually the only creative element in the book – and were designed to either take the AI’s response and follow up with more depth, or pose a question that had some nuance and would make the reader think about an issue like the one in this article.

Of course the potential promise of AI is to provide impersonal and presumably more factual information than a mere human; but so far that promise is unfulfilled.

The AI generally gave answers perfectly in line with the most obvious human biases – not surprising in that its “answers” were simply guesses as to the next appropriate word in the response, based on its programming as a “language” model.  No actual human thought was involved in the response.

But the openness of the question may evoke an appropriate feeling in one who considers it silently.  It may even take one beyond one’s mind.  Questions to ponder and go beyond the conditioned limitations of Dunning-Kruger:

What do we really know? 

Who (or what) are we?

What is our relationship to reality – what was here before we got here and thought about it?

‘Us vs. Them’ Thinking Is Hardwired—But Consciousness Can Overpower It

By Joe Martino

Source: The Pulse

Are we hardwired to categorize each other into ‘Us vs Them?’ Simply put, yes. But this hardwiring is remarkably easy to break. It’s time we equip ourselves with the knowledge and tools to do so.

Why? Well, anything from powerful governments to mainstream and alternative media is consistently trying to divide us by focusing aggressively on our differences. When we know how these mechanisms work and focus our consciousness in an effective way, these attempts to divide fail.

First a quick story.

When we first launched our membership, I hired marketing teams to help. They would always tell me, “In the sales messaging we need to create an ‘Us vs Them’ dynamic in order to get people to buy.” I resisted.

I felt like that wasn’t necessary and also contrary to our mission of unifying people to create a better world. But, the marketers were right. When we would A/B test sales messaging without ‘Us vs Them’ compared to sales messaging with it, the ‘Us vs Them’ messaging always won.

I have to admit this made me a bit sad. Was I thinking too idealistically about human beings? Maybe. But I always asked, ‘Why doesn’t Us vs Them messaging work on me?’ Before you suggest I’m naive and think “of course the messaging works on you,” read to the end of this piece. You’ll see how simple it is for this messaging not to work on anyone, should they choose.

Neuroscience suggests that this Us vs Them biological trait developed thousands of years ago. As you might imagine, it had much to do with survival in much different times than we live in today.

This research essentially states that human beings are primed to make very basic and categorical Us vs Them judgments. In fact, your brain is processing these Us vs Them differences in a twentieth of a second. These noticed differences are the classically promoted artificial constructs: ethnicity, gender, skin color, age, socioeconomic class, and even something like sports team preference.

(I say artificial because social constructs tell us we should focus on these as differences. We don’t actually have to see them as differences as we are ALL one species. Other animals do not engage their Us vs Them biology within their species – only we do – and it’s a taught behaviour.)

All of this is enhanced by a multi-purpose hormone in our brain called oxytocin.

Oxytocin has been shown to be a wonderful hormone for connecting and bonding with others that we know in our communities, ‘Us’. It’s often referred to as the love hormone or connection hormone. It essentially heightens meaningful connections with our in-group.

But on the flip side, studies have shown that when it comes to people who we see as strangers (Them), oxytocin creates a decrease in cooperation and increases envy when people are struggling. Further, it pushes one to gloat when they are ‘winning.’ In fact, there are many ways in which oxytocin is linked to dividing ourselves from ‘Them’ and keeping it that way.

In short, oxytocin enhances this Us vs Them divide. But, while this sounds like we may be doomed to division and judgment of others forever, what has also been shown is that these almost instantaneous judgements we make can be manipulated incredibly easily. And that’s even before bringing in a little conscious awareness and presence.

Studies have shown that when you expose people to others, the ‘Us’ wiring often turns on when those others look similar, and the ‘Them’ wiring turns on when they look different. This revealed an implicit judgment based on ethnicity.

Yet when those same people were exposed to people who looked the same and different from them but were now wearing baseball hats with team logos, things changed. Suddenly the participants’ brains indicated that Us and Them had nothing to do with ethnicity anymore; it was about which team they supported.

A tiny change created an instantaneous manipulation of who is Us vs Them. I wrote about a similar phenomenon in a recent piece called Racism Plummeted When People Were Told Aliens Exist. This showed that when humans were told aliens existed, the Us became humans, and the Them became aliens. Suddenly people were nicer to each other and more accepting of differences.

The point? It doesn’t take much for us to see each other as one; we just have to focus our consciousness a bit.

Consciousness Is Everything

There is much to be gained in our well-being and in society when we become more self-aware, reflective and present. It moves unconscious and often automatic behaviour into a lens where we can ask: why do we do this?

Further, an increase in the above mentioned traits can lead to something as simple as stopping ourselves before we do something destructive out of anger. We can then pause for a short period, reflect, and respond.

The more we move from automatic, unconscious and reactive ways of living to present, reflective and conscious ways of living, the more harmony surrounds our lives.

What I’m saying may sound similar to the new-agey, online-influencer judgment that the ‘conscious’ are more than the ‘unconscious.’ That is not at all what I’m saying here, and I will expand in a moment.

First, I trust it’s clear that we can be manipulated into different Us vs Them dichotomies very easily. Studies exploring implicit bias also show that when we are exposed to different people regularly, we become less judgemental towards their differences.

Even still, we can build self-awareness around how WE play into these divides in order to defuse them.

For example, during COVID, government leaders like Justin Trudeau sowed division in Canada by casting unvaccinated Canadians as ‘less than.’ He did this by calling them names and characterizing them in ways that would cause the general population to dislike them.

“They [the unvaccinated] don’t believe in science/progress and are very often misogynistic and racist. It’s a very small group of people, but that doesn’t shy away from the fact that they take up some space.”  — Justin Trudeau

He combined this rhetoric with un-contextualized and false data around vaccination status in Canada to strengthen his propaganda.

Note, I’m not focusing on Trudeau here because I’m “a Poilievre fan” or anything. In fact, I don’t feel our current structure of politics is healthy or serving us as people. I’m simply illustrating the ways in which our society has become ‘lost’ in division. Getting out of it requires us to take a close examination of what is happening so we can choose a different path.

In contrast, some unvaccinated people around the world began to refer to themselves as ‘pure bloods’ because they were not vaccinated. This led to memes, hateful rhetoric, and actions like avoiding and separating themselves from their vaccinated friends and families. Instead of attempting to understand why some became vaccinated, it was often simply seen as ‘the vaccinated are sheep.’

Once again we see division, in/out groups, and greater than/less than beliefs.

In both cases we are seeing the destruction and damage that can occur when we allow our flimsy biological traits around Us vs Them to be fed and upheld by faulty information and propaganda.

What is also important to note is that our ‘authoritative’ institutions of government and mainstream media will often be guilty of this destructive behaviour which deeply normalizes it amongst the masses.

That said, we are not doomed. Consciousness is everything. By that I mean, the quality of our consciousness. The ways in which we slow down, reflect, and become more present in everyday life become an antidote to propaganda feeding Us vs Them.

This is why we have the section at the top of our articles to “Pause and set your pulse.” To cultivate more presence and self awareness, check out a couple of the pieces of content I list at the bottom of this piece.

Let’s go deeper. Perhaps the reason I have felt Us vs Them messaging does not work on me stems from what happens when we have experiences that lead us to feel we are all one and interconnected with everything. I have had these experiences many times in my life – through meditation, breath work, and even randomly.

When you have these transcendental experiences – thrust into what can best be described as ‘sensing the nature of our reality’ – you do not see yourself as all that separate from everything else. Sure, you are YOU, and you know you are an individual, but you also have a knowing of the connection and sameness you share with everything.

What happens if we nurture and hold that knowing in our consciousness and allow it to form our worldview? Does our sense of Us vs Them get triggered as easily? I don’t think so. But that’s my experience.

This direct experience has led me to believe that a shift in our consciousness can be one of the most powerful tools in changing the way we see each other and ultimately re-designing our society – because it has to happen.

Albert Einstein had some thoughts on this as well:

“A human being is part of a whole, called by us the ‘Universe’ —a part limited in time and space. He experiences himself, his thoughts, and feelings, as something separated from the rest—a kind of optical delusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest us. Our task must be to free ourselves from this prison by widening our circles of compassion to embrace all living creatures and the whole of nature in its beauty.”  — Albert Einstein

A task indeed, but one that is very possible with the right practice and focus.

Education Is Key Too

Why are things the way they are? We have to explore content that seeks to answer these questions. Instead of using social media for fast memes, 15-second videos, and short, out-of-context video clips, deepen your knowledge. Consume less, but spend MORE time on each thing you consume.

Buying into this fast and distracting culture of information consumption is only furthering the problems we have. Social media algorithms are rigged to build an echo chamber around you and to give you ‘truth’ that confirms your existing biases. This is not real education.

As for the biological trait itself, why can’t we focus our attention on our human similarities instead of our differences? We can. But culturally we have learned not to, because somewhere along the line it’s what we chose to obsess over.

Perhaps because powerful people glean more power by dividing masses. Perhaps because our current economic systems thrive off division, competition, and having various classes. Either way, we’re making this all up as we go. It doesn’t have to be like this.

For those who might think “this is our biology. We will always do best in small groups of the same type of people,” this doesn’t appear to be true. Studies have shown that diversity trumps ability. Diversity significantly enhances the level of innovation in organizations around the world.

Groups of only ‘like-minds’ become echo chambers, and our efficiency and innovation suffer. Yet another reason we must learn to love, respect and communicate with one another.

(Note: here I’m pointing to uncovering our potential as humans and having that be our driver, not productivity and innovation tied to making more money and having a ‘bullish market economy.’)

The Takeaway

The key is that consciousness can overcome a flimsy biological trait that produces Us vs Them thinking within us. We can engage in experiences and education that connect us to our similarities and adopt a consciousness of interconnection and a shared stewardship of our planet and people.

Everything from ‘woke ideology’ constantly obsessing over our differences, to political news commentary always inviting people to ‘dislike the other side’ is contributing to a worse direction for humanity. Don’t just sit there and blame it though, simply choose to engage differently. Improve how you use social media. Consume less content but go deeper.

Consciousness is the power here. Why can’t we extend our worldview of who we are into oneness and a deeper connectedness? Why must we fight to shut off our wonder and hold to petty differences?

The Varieties of Psychonautic Experience: Erik Davis’s ‘High Weirdness’

Art by Arik Roper

By Michael Grasso

Source: We Are the Mutants

High Weirdness: Drugs, Esoterica, and Visionary Experience in the Seventies
By Erik Davis
Strange Attractor Press/MIT Press, 2019

Two months ago, I devoured Erik Davis’s magisterial 2019 book High Weirdness: Drugs, Esoterica, and Visionary Experience in the Seventies the same weekend I got it, despite its 400-plus pages of sometimes dense, specialist prose. And for the past two months I have tried, in fits and starts, to gather together my thoughts on it—failing every single time. Sometimes it’s been for having far too much to say about the astonishing level of detail and philosophical depth contained within. Sometimes it’s been because the book’s presentation of the visionary mysticism of three Americans in the 1970s—ethnobotanist and psychonaut Terence McKenna, parapolitical trickster Robert Anton Wilson, and paranoid storyteller-mystic Philip K. Dick—has hit far too close to home for me personally, living in the late 2010s in a similarly agitated political (and mystical) state. In short, High Weirdness has seemed to me, sitting on my bookshelf, desk, or in my backpack, like some cursed magical grimoire out of Weird fiction—a Necronomicon or The King in Yellow, perhaps—and I became obsessed with its spiraling exploration of the unfathomable universe above and the depthless soul below. It has proven itself incapable of summary in any linear, rationalist way.

So let’s dispense with rationalism for the time being. In the spirit of High Weirdness, this review will try to weave an impressionistic, magical spell exploring the commonalities Davis unveils between the respective life’s work and esoteric, drug-aided explorations of McKenna, Wilson, and Dick: explorations that were an attempt to construct meaning out of a world that to these three men, in the aftermath of the cultural revelations and revolutions of the 1960s that challenged the supposed wisdom and goodness of American hegemony, suddenly offered nothing but nihilism, paranoia, and despair. These three men were all, in their own unique ways, magicians, shamans, and spiritualists who used the tools at their disposal—esoteric traditions from both East and West; the common detritus of 20th century Weird pop culture; technocratic research into the human mind, body, and soul; and, of course, psychedelic drugs—to forge some kind of new and desperately-needed mystical tradition in the midst of the dark triumph of the Western world’s rationalism.

A longtime aficionado of Weird America, Davis writes in the introduction to High Weirdness about his own early encounters with Philip K. Dick’s science fiction, the Church of the SubGenius, and other underground strains of the American esoteric in the aftermath of the ’60s and ’70s. As someone who came late in life to a postgraduate degree program (High Weirdness was Davis’s doctoral dissertation for Rice University’s Religion program, as part of a curriculum focus on Gnosticism, Esotericism, and Mysticism), I find it incredibly easy to identify with Davis’s desire to tug at the edges of his longtime association with and love for the Weird in a scholarly context. This book’s scholarly origins do not make High Weirdness unapproachable to the layperson, however. While Davis does delve deeply into philosophical and spiritual theorists and the context of American mysticism throughout the book, he provides succinct and germane summaries of this long history, translating the work of thinkers ranging from the early 20th century psychologist and student of religious and mystical experience William James to contemporary theorists such as Peter Sloterdijk and Mark Fisher. Davis’s introduction draws forth in great detail the long tradition of admitting the ineffable, the scientifically-inexplicable, into the creation of subjective, individual mystical experiences.

Primary among Davis’s foundational investigations, binding together all three men profiled in the book, is a full and thorough accounting of the question, “Why did these myriad mystical experiences all occur in the first half of the 1970s?” It’s a fairly common historical interpretation to look at the Nixon years in America as a hangover from the cultural revolution of the late 1960s, a retrenchment of Nixon’s “silent majority” of middle- and working-class whites vs. the perceived chaos of a militant student movement and identity-based politics among racial and sexual minorities. Davis admits that the general mystical seeking that went on in the early ’70s is a reaction to this revanchism. And while he quotes Robert Anton Wilson’s seeming affirmation of this idea—“The early 70s were the days when the survivors of the Sixties went a bit nuts”—his interest in the three individuals at the center of his study allows him to delve deeper, offering a more profound explanation of the politics and metaphysics of the era. In the immediate aftermath of the assassinations, the political and social chaos, and the election of Nixon in 1968, there was an increased tendency among the younger generation to seek alternatives to mass consumption culture, to engage in what leftist philosopher Herbert Marcuse would term “the Great Refusal.” All three of the figures Davis focuses on in this book, at some level or another, decided to opt out of what their upbringings and conformist America had planned for them, to various levels of harm to their livelihoods and physical and mental health. This refusal was part of an awareness of what a suburban middle-class life had excised from human experience: a sense of meaning-making, of a more profound spirituality detached from the streams of traditional mainline American religious life.

To find something new, the three men at the center of High Weirdness were forced to become bricoleurs—cobbling together a “bootstrap witchery,” in Davis’s words—from real-world occult traditions (both Eastern and Western); from the world of Cold War technocratic experimentation with cybernetics, neuroscience, psychedelics, and out-and-out parapsychology; and from midcentury American pop culture, including science fiction, fantasy, comic books, and pulp fiction. Davis intriguingly cites Dick’s invention of the term “kipple” in his 1968 novel Do Androids Dream of Electric Sheep? as a key concept in understanding how this detritus can be patched together and brought new life. Given Dick’s overall prescience in predicting our 21st century world of social atomization and disrepair, this seems a conceptual echo worth internalizing a half-century later. If the late 1960s represented a mini-cataclysm that showed a glimpse of what a world without the “Black Iron Prison” might look like, those who graduated to the 1970s—the ones who “went a bit nuts”—needed to figure out how to survive by utilizing the bits and scraps left behind after the sweeping turbulence blew through. In many ways, McKenna, Wilson, and Dick are all post-apocalyptic scavengers.

All three men used drugs extensively, although not necessarily as anthropotechnics specifically designed to achieve enlightenment (Davis notes that Dick in particular had preexisting psychological conditions that, in conjunction with his prodigious use of amphetamines in the 1960s, were likely one explanation for his profound and sudden breaks with consensus reality in the ’70s). But we should also recognize (as Davis does) that McKenna, Wilson, and Dick were also, in many ways, enormously privileged. As well-educated scions of white America, born between the Great Depression and the immediate aftermath of World War II, they had the luxury to experiment with spirituality, psychedelic drugs, and technology to various degrees while holding themselves consciously separate from the mainstream institutions that would eventually co-opt and recuperate many of these strains of spirituality and individual seeking into the larger Spectacle. As Davis cannily notes, “Perhaps no one can let themselves unravel into temporary madness like straight white men.” But these origins also help explain the expressly technocratic bent of many of their hopes (McKenna) and fears (Wilson and Dick). Like their close confederate in Weirdness, Thomas Pynchon (who spent his early adulthood working for defense contractor Boeing, an experience which allowed him a keener avenue to his literary critiques of 20th century America), all three men were adjacent to larger power structures that alternately thrilled and repelled them, and which also helped form their specific esoteric worldviews.

It would be a fool’s errand to try to summarize the seven central chapters of the book, which present in great detail Terence (and brother Dennis) McKenna’s mushroom-fueled experiences contacting a higher intelligence in La Chorrera, Colombia in 1971, Robert Anton Wilson’s LSD-and-sex-magick-induced contact with aliens from the star Sirius in 1973 and ’74 as detailed in his 1977 book Cosmic Trigger: The Final Secret of the Illuminati, and Philip K. Dick’s famous series of mystical transmissions and revelations in February and March of 1974, which influenced not only his fiction output for the final eight years of his life but also his colossal “Exegesis,” which sought to interpret these mystical revelations in a Christian and Gnostic context. Davis’s book is out there and I can only encourage you to buy a copy, read these chapters, and revel in their thrilling detail, exhilarating madness, and occasional absurdity. Time and time again, Davis, like a great composer of music, returns to his greater themes: the environment that created these men gave them the tools and technics to blaze a new trail out of the psychological morass of Cold War American culture. At the very least, I can present some individual anecdotes from each of the three men’s mystical experiences, as described by Davis, that should throw some illumination on how they explored their own psyches and the universe using drugs, preexisting religious/esoteric ritual, and the pop cultural clutter that had helped shape them.

Davis presents a chapter focusing on each man’s life leading up to his respective spiritual experiences, followed by a chapter (in the case of Philip K. Dick, two) on his mystical experience and his reactions to it. For Terence McKenna and his brother Dennis, their research into organic psychedelics such as the DMT-containing yagé (first popularized in the West in the Cold War period by William S. Burroughs), alternately known as oo-koo-hé or ayahuasca, led them to South America to find the source of these natural, indigenous entheogens. But at La Chorrera in Colombia they instead met the plentiful and formidable fungus Psilocybe cubensis. In their experiments with the mushroom, Terence and Dennis tuned into perceived resonances with long-dormant synchronicities within their family histories, their childhood love of science fiction, and with the larger universe. Eventually, Dennis, on a more than week-long trip on both mushrooms and ayahuasca, needed to be evacuated from the jungle, but not before he had acted as a “receiver” for cryptic hyper-verbal transmissions, the hallucinogens inside him a “vegetable television” tuned into an unseen frequency—a profound shamanic state that Terence encouraged. The language of technology, of cybernetics, of science is never far from the McKenna brothers’ paradigm of spirituality; the two boys who had spent their childhoods reading publications like Analog and Fate, who had spent their young adulthoods studying botany and science while deep in the works of Marshall McLuhan (arguably a fellow psychedelic mystic who, like the McKennas and Wilson, was steeped in a Catholic cultural tradition), used the language they knew to explain their outré experiences.

Wilson spent his 20s as an editor for Playboy magazine’s letters page and had thus been exposed to the screaming gamut of American political paranoia (while contributing to it in his own inimitable prankster style). He had used this parapolitical wilderness of mirrors, along with his interest in philosophical and magickal orientations such as libertarianism, Discordianism, and Crowleyian Thelema as fuel for both the Illuminatus! trilogy of books written with Robert Shea (published in 1975), and his more than year-long psychedelic-mystical experience in 1973 and 1974, during which he claimed to act as a receiver on an “interstellar ESP channel,” obtaining transmissions from the star Sirius. His experiences as detailed in Cosmic Trigger involve remaining in a prolonged shamanic state (what Wilson called the “Chapel Perilous,” a term redolent with the same sort of medievalism as the McKenna brothers’ belief that they would manifest the Philosopher’s Stone at La Chorrera), providing Wilson with a constant understanding of the universe’s playfully unnerving tendency towards coincidence and synchronicity. Needless to say, the experiences of one Dr. John C. Lilly, who was also around this precise time tuned into ostensible gnostic communications from a spiritual supercomputer, mesh effortlessly with Wilson’s (and Dick’s) experiences thematically; Wilson even used audiotapes of Lilly’s lectures on cognitive meta-programming to kick off his mystical trances. Ironically, it was UFO researcher and keen observer of California’s 1970s paranormal scene Jacques Vallée who helped to extract Wilson out of the Chapel Perilous—by retriggering his more mundane political paranoia, saying that UFOs and other similar phenomena were instruments of global control. In Davis’s memorable words, “Wilson did not escape the Chapel through psychiatric disenchantment but through an even weirder possibility.”

Philip K. Dick, who was a famous science fiction author at the dawn of the ’70s, had already been through his own drug-induced paranoias, political scrapes, and active Christian mystical seeking. Unlike McKenna and Wilson, Dick was a Protestant who had stayed in close contact with his spiritual side throughout adulthood. In his interpretation of his mystical 2-3-74 experience, Dick uses the language and epistemology of Gnostic mystical traditions two millennia old. Davis also notes that Dick used the plots of his own most overtly political and spiritual ’60s output to help him understand and interpret his transcendent experiences. Before he ever heard voices or received flashes of information from a pink laser beam or envisioned flashes of the Roman Empire overlapping with 1970s Orange County California, Dick’s 1960s novels, specifically The Three Stigmata of Palmer Eldritch (1965) and Ubik (1969), had explored the very nature of reality and admitted the possibility of a Gnostic universe run by unknowable, cruel demiurges. Even in these hostile universes, however, there exists a messenger of hope and mercy who seeks to destroy the illusion of existence and bring relief. These existing pieces of cultural and religious “kipple,” along with the parasocial aspects of Christian belief that were abroad in California at the time, such as the Jesus People movement (the source of the Ichthys fish sign that triggered the 2-3-74 experience), gave Dick the equipment he needed to make sense of the communications he received and the consoling realization that he was not alone, that he was instead part of an underground spiritual movement that acted as a modern-day emanation of the early Christian church.

After learning about these three figures’ shockingly similar experiences with drug-induced contact with beyond, the inevitable question emerges: what were all these messages, these transmissions from beyond, trying to convey? One common aspect of all three experiences is how cryptic they are (and how difficult and time-consuming it was for each of these men to interpret just what the messages were saying). It’s also a little sobering to discover through Davis’s accounts how personal all three experiences were, whether it’s Terence and Dennis’s private fraternal language during the La Chorrera experiment, or mysterious phone calls placed back in time to their mother in childhood, or a lost silver key that Dennis was able to, stage-magician-like, conjure just as they were discussing it, or the message Philip K. Dick received to take his son Christopher to the doctor for an inguinal hernia that could have proven fatal. But alongside these personal epiphanies, there is also always an undeniable larger social and political context, especially as both Wilson and Dick saw their journeys in 1973 and 1974 as a way to confront and deal with the intense paranoia around Watergate and the fall of Richard Nixon (in his chapter setting the scene of the ’70s, Davis calls Watergate “a mytho-poetic perversion of governance”). In every case, the message from beyond requires interpretation, meaning-making, and, in Davis’s terminology, “constructivism.” The reams of words spoken and written by all three men analyzing their respective mystical experiences are an essential part of the experience. And these personal revelations all are attempts by the three men to make sense of the chaos of both their personal lives and their existence in an oppressive 20th century technocratic society: to inject some sense of mystery into daily existence, even if it took the quasi-familiar and, yes, somewhat comforting form of transmissions from a mushroom television network or interstellar artificial intelligence.

Over the past nine months I’ve spent much of my own life completing (and recovering from the process of completing) a Master’s degree. My own academic work, focusing on nostalgia’s uses in binding together individuals and communities with their museums, tapped into my earliest memories of museum visits in the late 1970s, when free education was seemingly everywhere (and actually free), when it was democratic and diverse, when it was an essential component of a rapidly-disappearing belief in social cohesion. In a lot of ways, my work at We Are the Mutants over the past three years is the incantation of a spell meant to conjure something new and hopeful from the “kipple” of a childhood suffused in disposable pop culture, the paranormal and “bootstrap witchery,” and science-as-progress propaganda. At the same time, over the past three years the world has been at the constant, media-enabled beck and call of a figure ten times more Weird and apocalyptic and socially malignant than any of Philip K. Dick’s various Gnostic emanations of Richard Nixon.

Philip K. Dick believed he was living through a recapitulation of the Roman Empire, that time was meaningless when viewed from the perspective of an omniscient entity like VALIS. In the correspondences and synchronicities I have witnessed over the past few months—in the collapse of political order and the revelation of profound, endemic corruption behind the scenes of the ruling class—this sense of recurring history has sent me down a similar set of ecstatic and paranoid corridors as McKenna, Wilson, and Dick. The effort to find meaning in a world that once held some inherent structure in childhood but has become, in adulthood, a hollow facade—a metaphysical Potemkin village—is profoundly unmooring. But meaning is there, even if we need technics such as psychedelic drugs, cybernetics (Davis’s final chapter summarizing how the three men’s mystical explorations fed into the internet as we know it today is absolutely fascinating), and parapolitical activity to interpret it. On this, the 50th anniversary of the summer of 1969, commonly accepted as the moment the Sixties ended, with echoes of moon landings and Manson killings reverberating throughout the cultural theater, is it any wonder that the appeal of broken psychonauts trying to pick up the pieces of a shattered world would appeal to lost souls in 2019? High Weirdness as a mystical tome remains physically and psychically close to me now, and probably will for the remainder of my life; and if the topics detailed in this review intrigue you the way they do me, it will remain close to you as well.

A belief in meritocracy is not only false: it’s bad for you

By Clifton Mark

Source: Aeon

‘We are true to our creed when a little girl born into the bleakest poverty knows that she has the same chance to succeed as anybody else …’ Barack Obama, inaugural address, 2013

‘We must create a level playing field for American companies and workers.’ Donald Trump, inaugural address, 2017

Meritocracy has become a leading social ideal. Politicians across the ideological spectrum continually return to the theme that the rewards of life – money, power, jobs, university admission – should be distributed according to skill and effort. The most common metaphor is the ‘even playing field’ upon which players can rise to the position that fits their merit. Conceptually and morally, meritocracy is presented as the opposite of systems such as hereditary aristocracy, in which one’s social position is determined by the lottery of birth. Under meritocracy, wealth and advantage are merit’s rightful compensation, not the fortuitous windfall of external events.

Most people don’t just think the world should be run meritocratically; they think it is meritocratic. In the UK, 84 per cent of respondents to the 2009 British Social Attitudes survey stated that hard work is either ‘essential’ or ‘very important’ when it comes to getting ahead, and in 2016 the Brookings Institution found that 69 per cent of Americans believe that people are rewarded for intelligence and skill. Respondents in both countries believe that external factors, such as luck and coming from a wealthy family, are much less important. While these ideas are most pronounced in these two countries, they are popular across the globe.

Although widely held, the belief that merit rather than luck determines success or failure in the world is demonstrably false. This is not least because merit itself is, in large part, the result of luck. Talent and the capacity for determined effort, sometimes called ‘grit’, depend a great deal on one’s genetic endowments and upbringing.

This is to say nothing of the fortuitous circumstances that figure into every success story. In his book Success and Luck (2016), the US economist Robert Frank recounts the long-shots and coincidences that led to Bill Gates’s stellar rise as Microsoft’s founder, as well as to Frank’s own success as an academic. Luck intervenes by granting people merit, and again by furnishing circumstances in which merit can translate into success. This is not to deny the industry and talent of successful people. However, it does demonstrate that the link between merit and outcome is tenuous and indirect at best.

According to Frank, this is especially true where the success in question is great, and where the context in which it is achieved is competitive. There are certainly programmers nearly as skilful as Gates who nonetheless failed to become the richest person on Earth. In competitive contexts, many have merit, but few succeed. What separates the two is luck.

The belief is not just false: a growing body of research in psychology and neuroscience suggests that believing in meritocracy makes people more selfish, less self-critical and even more prone to acting in discriminatory ways. Meritocracy is not only wrong; it’s bad.

The ‘ultimatum game’ is an experiment, common in psychological labs, in which one player (the proposer) is given a sum of money and told to propose a division between himself and another player (the responder), who may accept the offer or reject it. If the responder rejects the offer, neither player gets anything. The experiment has been replicated thousands of times, and usually the proposer offers a relatively even split. If the amount to be shared is $100, most offers fall between $40 and $50.
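
For readers who want the game’s structure spelled out, here is a minimal sketch of its payoff logic in Python; the $100 pot, the specific offers, and the 30 per cent rejection threshold are illustrative assumptions of mine, not parameters drawn from the studies discussed here.

```python
# Minimal sketch of the ultimatum game's payoff rule described above.
# All numbers (the $100 pot, the 30% rejection threshold) are
# illustrative assumptions, not parameters from any cited study.

def ultimatum_round(pot, offer, responder_accepts):
    """Return (proposer_payoff, responder_payoff) for a single round."""
    if not 0 <= offer <= pot:
        raise ValueError("offer must lie between 0 and the pot")
    if responder_accepts(offer, pot):
        return pot - offer, offer   # the proposed split stands
    return 0, 0                     # rejection: neither player gets anything


# Hypothetical responder who rejects offers below 30% of the pot,
# loosely mimicking the fairness norms reported in lab replications.
def reject_low(offer, pot):
    return offer >= 0.3 * pot


print(ultimatum_round(100, 45, reject_low))  # (55, 45): a near-even offer is accepted
print(ultimatum_round(100, 10, reject_low))  # (0, 0): a low-ball offer is rejected
```

The point the sketch makes concrete is that rejection is costly for both players, which is why the proposer’s typical near-even offer is striking.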

One variation on this game shows that believing one is more skilled leads to more selfish behaviour. In research at Beijing Normal University, participants played a fake game of skill before making offers in the ultimatum game. Players who were (falsely) led to believe they had ‘won’ claimed more for themselves than those who did not play the skill game. Other studies confirm this finding. The economists Aldo Rustichini at the University of Minnesota and Alexander Vostroknutov at Maastricht University in the Netherlands found that subjects who first engaged in a game of skill were much less likely to support the redistribution of prizes than those who engaged in games of chance. Just having the idea of skill in mind makes people more tolerant of unequal outcomes. While this was found to be true of all participants, the effect was much more pronounced among the ‘winners’.

By contrast, research on gratitude indicates that remembering the role of luck increases generosity. Frank cites a study in which simply asking subjects to recall the external factors (luck, help from others) that had contributed to their successes in life made them much more likely to give to charity than those who were asked to remember the internal factors (effort, skill).

Perhaps more disturbing, simply holding meritocracy as a value seems to promote discriminatory behaviour. The management scholar Emilio Castilla at the Massachusetts Institute of Technology and the sociologist Stephen Benard at Indiana University studied attempts to implement meritocratic practices, such as performance-based compensation in private companies. They found that, in companies that explicitly held meritocracy as a core value, managers assigned greater rewards to male employees over female employees with identical performance evaluations. This preference disappeared where meritocracy was not explicitly adopted as a value.

This is surprising because impartiality is the core of meritocracy’s moral appeal. The ‘even playing field’ is intended to avoid unfair inequalities based on gender, race and the like. Yet Castilla and Benard found that, ironically, attempts to implement meritocracy lead to just the kinds of inequalities that it aims to eliminate. They suggest that this ‘paradox of meritocracy’ occurs because explicitly adopting meritocracy as a value convinces subjects of their own moral bona fides. Satisfied that they are just, they become less inclined to examine their own behaviour for signs of prejudice.

Meritocracy is a false and not very salutary belief. As with any ideology, part of its draw is that it justifies the status quo, explaining why people belong where they happen to be in the social order. It is a well-established psychological principle that people prefer to believe that the world is just.

However, in addition to legitimation, meritocracy also offers flattery. Where success is determined by merit, each win can be viewed as a reflection of one’s own virtue and worth. Meritocracy is the most self-congratulatory of distribution principles. Its ideological alchemy transmutes property into praise, material inequality into personal superiority. It licenses the rich and powerful to view themselves as productive geniuses. While this effect is most spectacular among the elite, nearly any accomplishment can be viewed through meritocratic eyes. Graduating from high school, artistic success or simply having money can all be seen as evidence of talent and effort. By the same token, worldly failures become signs of personal defects, providing a reason why those at the bottom of the social hierarchy deserve to remain there.

This is why debates over the extent to which particular individuals are ‘self-made’ and over the effects of various forms of ‘privilege’ can get so hot-tempered. These arguments are not just about who gets to have what; they are about how much ‘credit’ people can take for what they have, about what their successes allow them to believe about their inner qualities. That is why, under the assumption of meritocracy, the very notion that personal success is the result of ‘luck’ can be insulting. To acknowledge the influence of external factors seems to downplay or deny the existence of individual merit.

Despite the moral assurance and personal flattery that meritocracy offers to the successful, it ought to be abandoned both as a belief about how the world works and as a general social ideal. It’s false, and believing in it encourages selfishness, discrimination and indifference to the plight of the unfortunate.

Billionaires Want Poor Children’s Brains to Work Better

By Gerald Coles

Source: CounterPunch

Why are many poor children not learning and succeeding in school? For billionaire Bill Gates, who funded the start-up of the failed Common Core Curriculum Standards, and has been bankrolling the failing charter schools movement, and Facebook’s Mark Zuckerberg, it’s time to look for another answer, this one at the neurological level. Poor children’s malfunctioning brains, particularly their brains’ “executive functioning” – that is, the brain’s working memory, cognitive flexibility, and inhibitory control – must be the reason why their academic performance isn’t better.

Proposing to fund research on the issue, the billionaires reason that not only can executive malfunctioning cause substantial classroom learning problems and school failure, it also can adversely affect socio-economic status, physical health, drug problems, and criminal convictions in adulthood. Consequently, if teachers of poor students know how to improve executive function, their students will do well academically and reap future “real-world benefits.” For Gates, who is always looking for “the next big thing,” this can be it in education.

Most people looking at this reasoning would likely think, “If executive functioning is poorer in poor children, why not eliminate the apparent cause of the deficiency, i.e., poverty?” Not so for the billionaires. For them, the “adverse life situations” of poor students are the can’t-be-changed-givens. Neither can instructional conditions that cost more money provide an answer. For example, considerable research on small class size teaching has demonstrated its substantially positive academic benefits, especially for poor children, from grammar school through high school and college. Gates claims to know about this instructional reform, but money-minded as he is, he insists these findings amount to nothing more than a “belief” whose worst impact has been to drive “school budget increases for more than 50 years.”

Cash – rather, the lack of it – that’s the issue: “You can’t fund reforms without money and there is no more money,” he insists. Of course, nowhere in Gates’ rebuke of excessive school spending does he mention corporate tax dodging of state income taxes, which robs schools of billions of dollars. Microsoft, for example, in which Gates continues to play a prominent role as “founder and technology advisor” on the company’s Board of Directors, would provide almost $29.6 billion in taxes that could fund schools if its billions stashed offshore were repatriated.

In a detailed example of Microsoft’s calculated tax scheming and dodging that would provide material for a good classroom geography lesson, Seattle Times reporter Matt Day outlined one of the transcontinental routes taken by a dollar spent on a Microsoft product in Seattle. Immediately after the purchase, the dollar takes a short trip to Microsoft’s company headquarters in nearby Redmond, Washington, after which it moves to a Microsoft sales subsidiary in Nevada. Following a brief rest, the dollar breathlessly zigzags from one offshore tax haven to another, finally arriving in sunny Bermuda where it joins $108 billion of Microsoft’s other dollars. Zuckerberg’s Facebook has similarly kept its earnings away from U.S. school budgets.

By blaming poor children’s school learning failure on their brains, the billionaires are continuing a long pseudoscientific charade extending back to 19th century “craniology,” which used head shape and size to explain the intellectual inferiority of “lesser” groups, such as southern Europeans and blacks. When craniology finally was debunked in the early 20th century, psychologists devised the IQ test, which sustained the mental classification business. Purportedly a more scientific instrument, it was heavily used not only to continue craniology’s identification of intellectually inferior ethnic and racial groups, but also to “explain” the educational underachievement of black and poor-white students.

After decades of use, IQ tests were substantially debunked from the 1960s onward, but new, more neurologically complex, so-called brain-based explanations emerged for differing educational outcomes. These explanations conceived of the overall brain as normal, but contended that brain glitches impeded school learning and success. Thus entered “learning disabilities,” “dyslexia,” and “attention deficit hyperactivity disorder (ADHD)” as major neuropsychological concepts to (1) explain school failure, particularly for poor children, although the labels also extended to many middle-class students; and (2) serve as “scientific” justification for scripted, narrow pedagogy in which teachers seemingly reigned in the classroom but, in fact, were themselves controlled by the prefabricated curricula.

In the forefront of this pedagogy was the No Child Left Behind legislation (NCLB), with its lock-step instruction, created under George W. Bush and continued by Barack Obama. Under this supposedly “scientifically-based” law, federal funds supported research on “brain-based” teaching that would be in tune with the mental make-up of poor children, thereby serving as a substitute for policy that would address poverty’s influence on educational outcomes. My review of the initial evidence supposedly justifying the launch of this diversionary pedagogy revealed it had no empirical support. However, for the students this instruction targeted, a decade had to pass before national test results confirmed its failure.

The history of “scientific brain-based” pedagogy for poor children has invariably been a dodge from addressing obvious social-class influences. In its newest iteration – improving poor children’s executive functioning – billionaires Gates and Zuckerberg will gladly put some cash into promoting a new neurological fix for poor children, thereby helping (and hoping) to divert the thinking of education policy-makers, teachers and parents. Never mind that over three years ago, a review of research on executive functioning and academic achievement failed to find “compelling evidence that a causal association between the two exists.” What’s critical for these billionaires and the class they represent is that the nation continues to concoct policy that does not deplete the wealth of the rich and helps explain away continued poverty. Just because research on improving executive functioning in poor children has not been found to be a solution for their educational underachievement doesn’t mean it can’t be!

Now that’s slick executive functioning!

 

Gerald Coles is an educational psychologist who has written extensively on the psychology, policy and politics of education. He is the author of Miseducating for the Global Economy: How Corporate Power Damages Education and Subverts Students’ Futures (Monthly Review Press).

Disarming the Weapons of Mass Distraction

By Madeleine Bunting

Source: Rise Up Times

“Are you paying attention?” The phrase still resonates with a particular sharpness in my mind. It takes me straight back to my boarding school, aged thirteen, when my eyes would drift out the window to the woods beyond the classroom. The voice was that of the math teacher, the very dedicated but dull Miss Ploughman, whose furrowed grimace I can still picture.

We’re taught early that attention is a currency—we “pay” attention—and much of the discipline of the classroom is aimed at marshaling the attention of children, with very mixed results. We all have a history here, of how we did or did not learn to pay attention and all the praise or blame that came with that. It used to be that such patterns of childhood experience faded into irrelevance. As we reached adulthood, how we paid attention, and to what, was a personal matter and akin to breathing—as if it were automatic.

Today, though, as we grapple with a pervasive new digital culture, attention has become an issue of pressing social concern. Technology provides us with new tools to grab people’s attention. These innovations are dismantling traditional boundaries of private and public, home and office, work and leisure. Emails and tweets can reach us almost anywhere, anytime. There are no cracks left in which the mind can idle, rest, and recuperate. A taxi ad offers free wifi so that you can remain “productive” on a cab journey.

Even those spare moments of time in our day—waiting for a bus, standing in a queue at the supermarket—can now be “harvested,” says the writer Tim Wu in his book The Attention Merchants. In this quest to pursue “those slivers of our unharvested awareness,” digital technology has provided consumer capitalism with its most powerful tools yet. And our attention fuels it. As Matthew Crawford notes in The World Beyond Your Head, “when some people treat the minds of other people as a resource, this is not ‘creating wealth,’ it is transferring it.”

There’s a whiff of panic around the subject: the story that our attention spans are now shorter than a goldfish’s attracted millions of readers on the web; it’s still frequently cited, despite its questionable veracity. Rates of diagnosis of attention deficit hyperactivity disorder in children have soared, creating an $11 billion global market for pharmaceutical companies. Every glance of our eyes is now tracked for commercial gain as ever more ingenious ways are devised to capture our attention, if only momentarily. Our eyeballs are now described as capitalism’s most valuable real estate. Both our attention and its deficits are turned into lucrative markets.

There is also a domestic economy of attention; within every family, some get it and some give it. We’re all born needing the attention of others—our parents’, especially—and from the outset, our social skills are honed to attract the attention we need for our care. Attention is woven into all forms of human encounter from the most brief and transitory to the most intimate. It also becomes deeply political: who pays attention to whom?

Social psychologists have researched how the powerful tend to tune out the less powerful. One study with college students showed that even in five minutes of friendly chat, wealthier students showed fewer signs of engagement when in conversation with their less wealthy counterparts: less eye contact, fewer nods, and more checking the time, doodling, and fidgeting. Discrimination of race and gender, too, plays out through attention. Anyone who’s spent any time in an organization will be aware of how attention is at the heart of office politics. A suggestion is ignored in a meeting, but is then seized upon as a brilliant solution when repeated by another person.

What is political is also ethical. Matthew Crawford argues that this is the essential characteristic of urban living: a basic recognition of others.

And then there’s an even more fundamental dimension to the politics of attention. At a primary level, all interactions in public space require a very minimal form of attention, an awareness of the presence and movement of others. Without it, we would bump into each other, frequently.

I had a vivid demonstration of this point on a recent commute: I live in East London and regularly use the narrow canal paths for cycling. It was the canal rush hour—lots of walkers with dogs, families with children, joggers as well as cyclists heading home. We were all sharing the towpath with the usual mixture of give and take, slowing to allow passing, swerving around and between each other. Only this time, a woman was walking down the center of the path with her eyes glued to her phone, impervious to all around her. This went well beyond a moment of distraction. Everyone had to duck and weave to avoid her. She’d abandoned the unspoken contract that avoiding collision is a mutual obligation.

This scene is now a daily occurrence for many of us, in shopping centers, station concourses, or on busy streets. Attention is the essential lubricant of urban life, and without it, we’re denying our co-existence in that moment and place. The novelist and philosopher Iris Murdoch writes that the most basic requirement for being good is that a person “must know certain things about his surroundings, most obviously the existence of other people and their claims.”

Attention is what draws us out of ourselves to experience and engage in the world. The word is often accompanied by a verb—attention needs to be grabbed, captured, mobilized, attracted, or galvanized. Reflected in such language is an acknowledgement of how attention is the essential precursor to action. The founding father of psychology William James provided what is still one of the best working definitions:

It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others.

Attention is a limited resource and has to be allocated: to pay attention to one thing requires us to withdraw it from others. There are two well-known dimensions to attention, explains Willem Kuyken, a professor of psychology at Oxford. The first is “alerting”— an automatic form of attention, hardwired into our brains, that warns us of threats to our survival. Think of when you’re driving a car in a busy city: you’re aware of the movement of other cars, pedestrians, cyclists, and road signs, while advertising tries to grab any spare morsel of your attention. Notice how quickly you can swerve or brake when you spot a car suddenly emerging from a side street. There’s no time for a complicated cognitive process of decision making. This attention is beyond voluntary control.

The second form of attention is known as “executive”—the process by which our brain selects what to foreground and focus on, so that there can be other information in the background—such as music when you’re cooking—but one can still accomplish a complex task. Crucially, our capacity for executive attention is limited. Contrary to what some people claim, none of us can multitask complex activities effectively. The next time you write an email while talking on the phone, notice how many typing mistakes you make or how much you remember from the call. Executive attention can be trained, and needs to be for any complex activity. This was the point James made when he wrote: “there is no such thing as voluntary attention sustained for more than a few seconds at a time… what is called sustained voluntary attention is a repetition of successive efforts which bring back the topic to the mind.”

Attention is a complex interaction between memory and perception, in which we continually select what to notice, thus finding the material which correlates in some way with past experience. In this way, patterns develop in the mind. We are always making meaning from the overwhelming raw data. As James put it, “my experience is what I agree to attend to. Only those items which I notice shape my mind—without selective interest, experience is an utter chaos.”

And we are constantly engaged in organizing that chaos, as we interpret our experience. This is clear in the famous Gorilla Experiment, in which viewers were told to watch a video of two teams of students passing a ball between them. They had to count the number of passes made by the team in white shirts and ignore those of the team in black shirts. The experiment is deceptively complex because it involves three forms of attention: first, scanning the whole group; second, ignoring the black T-shirt team to keep focus on the white T-shirt team (a form of inhibiting attention); and third, remembering to count. In the middle of the experiment, someone in a gorilla suit ambles through the group. Afterward, when asked, half the viewers hadn’t spotted the gorilla and couldn’t even believe it had been there. We can be blind not only to the obvious, but to our blindness.

There is another point in this experiment which is less often emphasized. Ignoring something—such as the black T-shirt team in this experiment—requires a form of attention. It costs us attention to ignore something. Many of us live and work in environments that require us to ignore a huge amount of information—that flashing advert, a bouncing icon or pop-up.

In another famous psychology experiment, Walter Mischel’s Marshmallow Test, four-year-olds had a choice of eating a marshmallow immediately or two in fifteen minutes. While filmed, each child was put in a room alone in front of the plate with a marshmallow. They squirmed and fidgeted, poked the marshmallow and stared at the ceiling. A third of the children couldn’t resist the marshmallow and gobbled it up, a third nibbled cautiously, but the last third figured out how to distract themselves. They looked under the table, sang… did anything but look at the sweet. It’s a demonstration of the capacity to reallocate attention. In a follow-up study some years later, those who’d been able to wait for the second marshmallow had better life outcomes, such as academic achievement and health. One New Zealand study of 1,000 children found that this form of self-regulation was a more reliable predictor of future success and wellbeing than even a good IQ or comfortable economic status.

What, then, are the implications of how digital technologies are transforming our patterns of attention? In the current political anxiety about social mobility and inequality, more weight needs to be put on this most crucial and basic skill: sustaining attention.

*

I learned to concentrate as a child. Being a bookworm helped. I’d be completely absorbed in my reading as the noise of my busy family swirled around me. It was good training for working in newsrooms; when I started as a journalist, they were very noisy places with the clatter of keyboards, telephones ringing and fascinating conversations on every side. What has proved much harder to block out is email and text messages.

The digital tech companies know a lot about this widespread habit; many of them have built a business model around it. They’ve drawn on the work of the psychologist B.F. Skinner who identified back in the Thirties how, in animal behavior, an action can be encouraged with a positive consequence and discouraged by a negative one. In one experiment, he gave a pigeon a food pellet whenever it pecked at a button and the result, as predicted, was that the pigeon kept pecking. Subsequent research established that the most effective way to keep the pigeon pecking was “variable-ratio reinforcement.” Give the pigeon a food pellet sometimes, and you have it well and truly hooked.
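To make the pull of that unpredictability concrete, here is a minimal sketch in Python (the parameters and reward rules are invented for illustration; this is not a reconstruction of Skinner’s actual apparatus). It compares a fixed-ratio schedule, where every fifth peck pays off, with a variable-ratio schedule, where each peck pays off with a small probability, so the reward arrives after an unpredictable number of pecks.

```python
import random

def fixed_ratio_gaps(n_rewards=10, ratio=5):
    """Pecks between rewards on a fixed-ratio schedule: always the same."""
    return [ratio] * n_rewards

def variable_ratio_gaps(n_rewards=10, mean_ratio=5):
    """Pecks between rewards when each peck pays off with probability
    1/mean_ratio: each gap is unpredictable, averaging mean_ratio pecks."""
    gaps = []
    for _ in range(n_rewards):
        pecks = 1
        while random.random() >= 1.0 / mean_ratio:
            pecks += 1
        gaps.append(pecks)
    return gaps

random.seed(1)
print("fixed-ratio gaps:   ", fixed_ratio_gaps())     # always [5, 5, 5, ...]
print("variable-ratio gaps:", variable_ratio_gaps())  # e.g. unpredictable gaps
```

Run it and the fixed schedule produces identical gaps every time, while the variable schedule scatters its rewards unpredictably: the same average payout, but only the second keeps the pigeon (or the phone-checker) guessing.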

We’re just like the pigeon pecking at the button when we check our email or phone. It’s a humiliating thought. Variable reinforcement ensures that the customer will keep coming back. It’s the principle behind one of the most lucrative US industries: slot machines, which generate more profit than baseball, films, and theme parks combined. Gambling was once tightly restricted for its addictive potential, but most of us now have the attentional equivalent of a slot machine in our pocket, beside our plate at mealtimes, and by our pillow at night. Even during a meal out, a play at the theater, a film, or a tennis match. Almost nothing is now experienced uninterrupted.

Anxiety about the exponential rise of our gadget addiction and how it is fragmenting our attention is sometimes dismissed as a Luddite reaction to a technological revolution. But that misses the point. The problem is not the technology per se, but the commercial imperatives that drive the new technologies and, unrestrained, colonize our attention by fundamentally changing our experience of time and space, saturating both in information.

In much public space, wherever your eye lands—from the back of the toilet door, to the handrail on the escalator, or the hotel key card—an ad is trying to grab your attention, and does so by triggering the oldest instincts of the human mind: fear, sex, and food. Public places become dominated by people trying to sell you something. In his tirade against this commercialization, Crawford cites advertisements on the backs of school report cards and on debit machines where you swipe your card. Before you enter your PIN, that gap of a few seconds is now used to show adverts. He describes silence and ad-free experience as “luxury goods” that only the wealthy can afford. Crawford has invented the concept of the “attentional commons,” free public spaces that allow us to choose where to place our attention. He draws the analogy with environmental goods that belong to all of us, such as clean air or clean water.

Some legal theorists are beginning to conceive of our own attention as a human right. One former Google employee warned that “there are a thousand people on the other side of the screen whose job it is to break down the self-regulation you have.” They use the insights into human behavior derived from social psychology—the need for approval, the need to reciprocate others’ gestures, the fear of missing out. Your attention ceases to be your own, pulled and pushed by algorithms. Attention is referred to as the real currency of the future.

*

In 2013, I embarked on a risky experiment in attention: I left my job. Over the previous two years, the problem had crept up on me: I could no longer read beyond a few paragraphs. My eyes would glaze over and, even more disastrously for someone who had spent their career writing, I seemed unable to string together my thoughts, let alone write anything longer than a few sentences. When I try to explain the impact, I can only offer a metaphor: it felt like my imagination and use of language were vacuum packed, like a slab of meat coated in plastic. I had lost the ability to turn ideas around, see them from different perspectives. I could no longer draw connections between disparate ideas.

At the time, I was working in media strategy. It was a culture of back-to-back meetings from 8:30 AM to 6 PM, and there were plenty of advantages to be gained from continuing late into the evening if you had the stamina. Commitment was measured by emails with a pertinent weblink. Meetings were sometimes as brief as thirty minutes and frequently ran through lunch. Meanwhile, everyone was sneaking time to battle with the constant emails, eyes flickering to their phone screens in every conversation. The result was a kind of crazy fog, a mishmash of inconclusive discussions.

At first, it was exhilarating, like being on those crazy rides in a theme park. By the end, the effect was disastrous. I was almost continuously ill, battling migraines and unidentifiable viruses. When I finally made the drastic decision to leave, my income collapsed to a fraction of its previous level and my family’s lifestyle had to change accordingly. I had no idea what I was going to do; I had lost all faith in my ability to write. I told friends I would have to return the advance I’d received to write a book. I had to try to get back to the skills of reflection and focus that had once been ingrained in me.

The first step was to teach myself to read again. I sometimes went to a café, leaving my phone and computer behind. I had to slow down the racing incoherence of my mind so that it could settle on the text and its gradual development of an argument or narrative thread. The turning point in my recovery was a five-week research trip to the Scottish Outer Hebrides. On the journey north of Glasgow, my mobile phone lost its Internet connection. I had cut myself loose, with only the occasional text or call to family back home. Somewhere on the long Atlantic beaches of these wild and dramatic islands, I rediscovered my ability to write.

I attribute that in part to a stunning exhibition I came across in the small harbor town of Lochboisdale, on the island of South Uist. Vija Celmins is an acclaimed Latvian-American artist whose work is famous for its astonishing patience. She can take a year or more to make a woodcut that portrays in minute detail the surface of the sea. A postcard of her work now sits above my desk, a reminder of the power of slow thinking.

Just as we’ve had a slow eating movement, we need a slow thinking campaign. Its manifesto could be the German-language poet Rainer Maria Rilke’s beautiful “Letters to a Young Poet”:

To let every impression and the germ of every feeling come to completion inside, in the dark, in the unsayable, the unconscious, in what is unattainable to one’s own intellect, and to wait with deep humility and patience for the hour when a new clarity is delivered.

Many great thinkers attest that they have their best insights in moments of relaxation, the proverbial brainwave in the bath. We actually need what we most fear: boredom.

When I left my job (and I was lucky that I could), friends and colleagues were bewildered. Why give up a good job? But I felt that here was an experiment worth trying. Crawford frames it well as “intellectual biodiversity.” At a time of crisis, we need people thinking in different ways. If we all jump to the tune of Facebook or Instagram and allow ourselves to be primed by Twitter, the danger is that we lose the “trained powers of concentration” that allow us, in Crawford’s words, “to recognize that independence of thought and feeling is a fragile thing, and requires certain conditions.”

I also took to heart the insights of the historian Timothy Snyder, who concluded from his studies of twentieth-century European totalitarianism that the way to fend off tyranny is to read books, make an effort to separate yourself from the Internet, and “be kind to our language… Think up your own way of speaking.” Dropping out and going offline enabled me to get back to reading, voraciously, and to writing; beyond that, it’s too early to announce the results of my experiment with attention. As Rilke said, “These things cannot be measured by time, a year has no meaning, and ten years are nothing.”

*

A recent column in The New Yorker cheekily suggests that all the fuss about the impact of digital technologies on our attention is nothing more than writers’ worrying about their own working habits. Is all this anxiety about our fragmenting minds a moral panic akin to those that swept Victorian Britain about sexual behavior? Patterns of attention are changing, but perhaps it doesn’t much matter?

My teenage children read much less than I did. One son used to play chess online with a friend, text on his phone, and do his homework all at the same time. I was horrified, but he got a place at Oxford. At his interview, he met a third-year history undergraduate who told him he hadn’t yet read any books in his time at university. But my kids are considerably more knowledgeable about a vast range of subjects than I was at their age. There’s a small voice suggesting that the forms of attention I was brought up with could be a thing of the past; the sustained concentration required to read a whole book will become an obscure niche hobby.

And yet, I’m haunted by a reflection: the magnificent illuminations of the eighth-century Book of Kells have intricate patterning that no one has ever been able to copy, such is the fineness of the tight spirals. Lines are a millimeter apart. They indicate a steadiness of hand and mind, a capability most of us have long since lost. Could we be trading away our capacity for focus in exchange for breadth of reference? Some might argue that’s not a bad trade. But we would lose depth: the artist Paul Klee wrote that he would spend a day in silent contemplation of something before he painted it. Paul Cézanne was similarly known for his trance-like attention to his subject. Madame Cézanne recollected how her husband would gaze at the landscape and tell her, “The landscape thinks itself in me, and I am its consciousness.” The philosopher Maurice Merleau-Ponty describes a contemplative attention in which one steps outside of oneself and immerses oneself in the object of attention.

It’s not just artists who require such depth of attention. Nearly two decades ago, a doctor teaching medical students at Yale was frustrated at their inability to distinguish between types of skin lesions. Their gaze seemed restless and careless. He took his students to an art gallery and told them to look at a picture for fifteen minutes. The program is now used in dozens of US medical schools.

Some argue that losing the capacity for deep attention presages catastrophe. It is the building block of “intimacy, wisdom, and cultural progress,” argues Maggie Jackson in her book Distracted, in which she warns that “as our attentional skills are squandered, we are plunging into a culture of mistrust, skimming, and a dehumanizing merging between man and machine.” Significantly, her research began with a curiosity about why so many Americans were deeply dissatisfied with life. She argues that losing the capacity for deep attention makes it harder to make sense of experience and to find meaning—from which comes wonder and fulfillment. She fears a new “dark age” in which we forget what makes us truly happy.

Strikingly, the epicenter of this wave of anxiety over our attention is the US. All the authors I’ve cited are American. It’s been argued that this debate represents an existential crisis for America because it exposes the flawed nature of its greatest ideal, individual freedom. The commonly accepted notion is that to be free is to make choices, and no one can challenge that expression of autonomy. But if our choices are actually engineered by thousands of very clever, well-paid digital developers, are we free? The former Google employee Tristan Harris confessed in an article in 2016 that technology “gives people the illusion of free choice while architecting the menu so that [tech giants] win, no matter what you choose.”

Despite my children’s multitasking, I maintain that vital human capacities—depth of insight, emotional connection, and creativity—are at risk. I’m intrigued as to what the resistance might look like. There are stirrings of protest with the recent establishment of initiatives such as the Time Well Spent movement, founded by tech industry insiders who have become alarmed at the efforts invested in keeping people hooked. But collective action is elusive; the emphasis is repeatedly on the individual to develop the necessary self-regulation, but if that is precisely what is being eroded, we could be caught in a self-reinforcing loop.

One of the most interesting responses to our distraction epidemic is mindfulness. Its popularity is evidence that people are trying to find a way to protect and nourish their minds. Jon Kabat-Zinn, who pioneered the development of secular mindfulness, draws an analogy with jogging: just as keeping your body fit is now well understood, people will come to realize the importance of looking after their minds.

I’ve meditated regularly for twenty years, but curious as to how this is becoming mainstream, I went to an event in the heart of high-tech Shoreditch in London. In a hipster workspace with funky architecture, excellent coffee, and an impressive range of beards, a soft-spoken retired Oxford professor of psychology, Mark Williams, was talking about how multitasking has a switching cost in focus and concentration. Our unique human ability to remember the past and to think ahead brings a cost; we lose the present. To counter this, he advocated a daily practice of mindfulness: bringing attention back to the body, the physical sensations of the breath, the hands, the feet. Williams explained how fear and anxiety inhibit creativity. In time, the practice of mindfulness enables you to acknowledge fear calmly and even to investigate it with curiosity. You learn to place your attention in the moment, noticing details such as the sunlight or the taste of the coffee.

On a recent retreat, I was beside a river early one morning and a rower passed. I watched the boat slip by and enjoyed the beauty in a radically new way. The moment was sufficient; there was nothing I wanted to add or take away—no thought of how I wanted to do this every day, or how I wanted to learn to row, or how I wished I was in the boat. Nothing but the pleasure of witnessing it. The busy-ness of the mind had stilled. Mindfulness can be a remarkable bid to reclaim our attention and to claim real freedom, the freedom from our habitual reactivity that makes us easy prey for manipulation.

But I worry that the integrity of mindfulness is fragile, vulnerable both to commercialization by employers who see it as a form of mental performance enhancement and to consumer commodification, rather than contributing to the formation of ethical character. Mindfulness as a meditation practice originates in Buddhism, and without that tradition’s ethics, there is a high risk of it being hijacked and misrepresented.

Back in the Sixties, the countercultural psychologist Timothy Leary rebelled against the conformity of the new mass media age and called for, in Crawford’s words, an “attentional revolution.” Leary urged people to take control of the media they consumed as a crucial act of self-determination; pay attention to where you place your attention, he declared. The social critic Herbert Marcuse believed Leary was fighting the struggle for the ultimate form of freedom, which Marcuse defined as the ability “to live without anxiety.” These were radical prophets whose words have an uncanny resonance today. Distraction has become a commercial and political strategy, and it amounts to a form of emotional violence that cripples people, leaving them unable to gather their thoughts and overwhelmed by a sense of inadequacy. It’s a powerful form of oppression dressed up in the language of individual choice.

The stakes could hardly be higher, as William James knew a century ago: “The faculty of voluntarily bringing back a wandering attention, over and over again, is the very root of judgment, character, and will.” And what are we humans without these three?

Our Bigoted Brains

Photo credit: Art Killing Apathy

By Eleanor Goldfield

Source: Popular Resistance

If you’ve ever moved beyond small talk and vapid pleasantries in conversation then you’ve likely dealt with the infuriating occurrence of trying to convince someone of a fact they just don’t want to accept. Beyond just avoiding the information, they almost seem hardwired to reject your proof in a phenomenon that I like to call “fact fear.” I noticed a sharp rise in fact fear during the 2016 elections and levels continue to hover at disturbing heights today. So, what gives? Are we really in a new era of idiocy or are we just seeing our particularly vapid and anti-intellectual culture ping off the most base and stubborn aspects of the human psyche? Both, I think.

Daniel Patrick Moynihan famously said, “You are entitled to your own opinions but not your own facts.” But in the post-truth, anti-intellectual, “I read it in a blog post so it must be true” era, our opinions and beliefs run the ever-growing risk of being founded on complete bullshit. Filter bubbles (which we covered in Episode 132) and digital spheres protect us from ideas and facts outside our own personal shit heaps and only serve up the information we want to see – which is not always factually accurate. Our culture of quick “news” and a growing lack of intellectual curiosity drive us further into an echo chamber of our own ideas – facts and information be damned. In turn, a false but far from flimsy crust solidifies around our minds – and the soft light of truth and knowledge can’t get in.

Our brains are wired to find comfort behind that crust. In what’s known as the backfire effect, our minds reject new information that clashes with our belief systems and opinions. As David McRaney, author of the books You Are Not So Smart and You Are Now Less Dumb, explains it: “Once something is added to your collection of beliefs, you protect it from harm. You do this instinctively and unconsciously when confronted with attitude-inconsistent information. Just as confirmation bias shields you when you actively seek information, the backfire effect defends you when the information seeks you, when it blindsides you. Coming or going, you stick to your beliefs instead of questioning them. When someone tries to correct you, tries to dilute your misconceptions, it backfires and strengthens those misconceptions instead. Over time, the backfire effect makes you less skeptical of those things that allow you to continue seeing your beliefs and attitudes as true and proper.” In other words, that crust acts as a shield against facts that threaten your beliefs. And you’ve probably seen this in action online. Someone posts something like “climate change isn’t real,” so you facepalm and proceed to post ample proof that they’re wrong and that in fact, climate change is very real. The problem is that with each fact you post, the crust hardens – for both you AND the other person. Of course, it doesn’t help that you can find just as much if not more bullshit online than you can actual truth, but much of it is actually tied to the inherent laziness of our brains. “The more difficult it becomes to process a series of statements, the less credit you give them overall… In experiments where two facts were placed side by side, subjects tended to rate statements as more likely to be true when those statements were presented in simple, legible type than when printed in a weird font with a difficult-to-read color pattern. Similarly, a barrage of counterarguments taking up a full page seems to be less persuasive to a naysayer than a single, simple, powerful statement.”

So does this mean that we should just stop pointing out when someone is wrong? That we should let all kinds of ridiculous notions from lizard people to “homeless people are dangerous” slide? Of course not. Rather, it means that if we want to educate and engage, if we want people to wake the fuck up then we have to consider how the human mind works – and how it doesn’t. Furthermore, our goal shouldn’t be to win an argument as if to suggest that as soon as we win the argument, justice is at hand. Indeed, if your goal is to simply be right, the religious zeal with which you defend your ideas will only turn you into the very monster you are trying to slay. Being right, in other words, is not the point. Progress is not a church. We are not looking for converts. Our aim should always be to engage and empower; to share our knowledge and embrace the knowledge of others. Seek and speak truth – then act on it. And do not think that because you found a nugget of information you hold moral superiority over those who don’t know it. Our minds are just as susceptible to that crust as anyone else’s. We are not special or better. Indeed, we must constantly question our opinions, compare them with facts and new information. By that I mean actual facts, not “alt facts” sourced from a single Google search or indeed even something from a “trusted” publication.

In an interview last March with Robert Scheer of Truthdig, author Joel Whitney discusses the 60-year history of fake news and how it was used in the Cold War era to sow distrust and hatred of all things “commie” and Russian. Major and well-trusted media outlets were part of the propaganda ring, and Whitney notes that the fearful political atmosphere at the time led to “secrecy being used to preside over and rule over the free press — which we’re supposed to be the champions of.” “They drank the Kool-Aid and thought they were saving freedom,” Scheer agrees. The discussion underscores the need for analysis of Cold War-era media as a way to avoid propagandized journalism today. Scheer says, “I look at the current situation, where we don’t even have a good communist enemy, so we’re inventing Russia as a reborn communist power enemy.” “I call it superpolitics,” Whitney concludes, “where essentially there’s something that’s so evil and so frightening that we have to change how our democratic institutions work.” The latest red scare is but one example of how easy it is to mold minds when you use something like the hatred of Trump as a trigger. Blend some tried and true anti-Russia sentiment in there and you’ve got yourself a brand spanking new enemy – one that allows you to further mold those democratic institutions just as seamlessly as the minds you’ve now crusted and convinced. Russia aside, the same propaganda games go for mass media collusion on everything from fracking to the military industrial complex. For example, leaked email messages show that a writer from the LA Times colluded with the CIA, not only getting the CIA’s OK on forthcoming stories but actually offering to write stories for them that would put a positive spin on such issues as drone warfare, saying it would be “reassuring to the public” and a “good opportunity” for the CIA. Various other email messages show that the same was true for a list of other media outlets.

All this to say, if I may borrow a phrase: no investigation, no right to speak. Do your own research before you claim to know something. Incidentally, this will also help steel you against the rumor mill AND infiltrators. Whether it’s taking someone to task on a rumor or engaging someone in discourse, be vigilant. Be vigilant in your drive, your actions, and be primarily vigilant in your own thinking. Release the flimsy beliefs that would just as soon sink you as keep you afloat. Arm yourself with knowledge and reach out to build and to engage, not to win a Facebook tiff. Consider the goal of engagement and empowerment rather than just being right. Consider the reaction you would have if someone came at you with a barrage of links followed by “read a fucking book, asshole!” You’d more than likely write them off as an unhinged asshole even if their information is solid. Try asking a question, or perhaps, as Ben Franklin suggested, ask them for a favor, something small that seeds trust, pinging off the human psychology that seeks appreciation and the feel of community. Push past political theory and get down to human connection – to start with. For instance, anarchists in several communities will often engage with would-be white supremacists via the common ground of distrust and disgust in the system. They’ll sit and talk, discuss the pitfalls of a system that’s left them behind, and from there grow to a discourse on the roots of their discontent, i.e. not black people. In other words, knock on the door rather than trying to break down the wall. Because that wrought iron shit crust is stronger than steel.

Finally, keep in mind that this won’t always work and nor should it. We have to accept that in our grandiose imaginings of the revolution, many people will either be against us or sitting at home praying we all just shut up. But again, if you’re not looking for converts, if you’re engaging to empower, you’ll not only find more people willing to talk to you, you’ll also more than likely learn something in the process. You might even pick something up that’ll put a dent in your own mental crust.

Alex Schlegel on Imagination, the Brain and ‘the Mental Workspace’

By Rob Hopkins

Source: Resilience

What happens in the brain when we’re being imaginative?  Neuroscientists are moving away from the idea of what’s called ‘localisationism’ (the idea that each capacity of the brain is linked to a particular ‘area’ of the brain) towards the idea that what’s more important is to identify the networks that fire in order to enable particular activities or insights.  Alex Schlegel is a cognitive neuroscientist, which he describes as being about “trying to understand how the structure and function of the brain creates the mind and the consciousness we experience and everything that makes us human, like imagination”.

He recently co-published fascinating research entitled “Network structure and dynamics of the mental workspace”, which appeared in Proceedings of the National Academy of Sciences and identified what the authors called “the mental workspace”, the network that fires in the brain when we are being imaginative.  I spoke to Alex via Skype, and started by asking him to explain what the mental workspace is [here is the podcast of our full conversation; an edited transcript appears below].

This is maybe just a product of the historical moment we’re in with cognitive neuroscience research: most neuroscience research, I would say even now, is still focused on finding where the neural correlate of some function is.  Where does language happen?  Where does vision happen?  Where does memory happen?  Those kinds of things.

It was very easy to ask those questions when fMRI came around, because we could stick someone in the scanner and have them do one task, and do a control test, and then do the real test, and see what part of the brain lights up in one case rather than the other.  Those very well controlled, reductionist kinds of paradigms produce these very clean blobs where something happens in one case versus the other.  I think that led a lot to the story of one place in the brain for every function, and we just have to map out those places.

But in reality, the brain is a complex system.  It works in a real world which is a complex environment, and in any kind of real behaviour that we engage in, the entire brain is going to be involved in one way or the other.  Especially when you start to get into these more complex abilities that are very hard to reduce to this highly controlled A versus B kind of thing.

To really understand the behaviour itself, like imagination, it’s not that surprising that it’s going to be a complex, multi-network kind of phenomenon. I think the reason we were able to show that is maybe primarily because the techniques are advancing in the field and we’re starting to figure out how to look at these behaviours in a more realistic way. One of the big limitations of cognitive neuroscience research right now, because of fMRI, because of the techniques we’ve had, is that we tend to think of behaviour as activating, or not activating, the brain.

When we’re doing analyses of brain activity, we’re looking for areas that become more active than others. This is changing a lot in the last few years, but at least for the first fifteen, twenty years, that was one of the only ways we would look at brain activity. So we simplistically think of the brain like some other organ: it’s either buzzing or it’s not buzzing, and if it buzzes, then language happens. But really the brain is a complex computational system.

It’s doing complex computations and information processing, and that’s not something you’re really going to see if you’re just looking, across a large area, for increased versus decreased activity. When we start to be able to look at the brain more in terms of the information it is processing, where we can see information, and how we can see communication between different areas, then you can start to look at things like imagination, or the mental workspace, in a more complex light.
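As a rough illustration of the distinction Schlegel is drawing, here is a toy sketch in Python on synthetic data (all numbers and region names are invented for this example; it is not an analysis from his study). The first measure asks the classic activation question, whether a region responds more in a task condition than in a control condition; the second asks an information-style question, whether two regions carry correlated signal across trials regardless of their overall activation level.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 50

# Hypothetical single-region responses: one value per trial and condition.
area1_task    = rng.normal(1.0, 1.0, n_trials)  # "real" task trials
area1_control = rng.normal(0.2, 1.0, n_trials)  # control trials

# Classic activation question: does area 1 light up more during the task?
activation_contrast = area1_task.mean() - area1_control.mean()
print(f"activation contrast (task - control): {activation_contrast:.2f}")

# Information/communication-style question: do two other areas co-vary
# across trials, even if neither is more "active" overall?
shared = rng.normal(0.0, 1.0, n_trials)                # hypothetical shared signal
area2  = 0.8 * shared + rng.normal(0.0, 0.5, n_trials)
area3  = 0.8 * shared + rng.normal(0.0, 0.5, n_trials)
print(f"between-area correlation: {np.corrcoef(area2, area3)[0, 1]:.2f}")
```

The point of the contrast is that the second measure can be high even when neither region shows any overall increase in activity, which is roughly why looking only for “buzzing” areas misses much of what the brain is doing.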

So how does that idea sit alongside the ideas firstly of the ‘Default Network’, which is often linked to creativity and imagination as well, and also to the idea that the hippocampus is the area that is essential to a healthy, functioning imagination?  Do those three ideas just fit seamlessly together, or are they heading off in different directions?

I can give you my opinion, which is not very well founded in any kind of data, but this is something that we’ve talked about a lot in the lab, and a suspicion we had actually been thinking about how to test for a while.  So the Default Mode Network was first seen as this network that would become more active in between tasks.  When we’re doing an fMRI experiment, what we’ll usually do is have some period where you’re doing the task, and then a period where you’re just resting, so you can get the baseline brain activity when you’re not doing anything.  And the surprising result is that during rest periods some areas of the brain actually become more active.  And, you know, “Oh wow, it’s a surprise, the person’s not just sitting there blankly doing nothing.”  The brain doesn’t just totally deactivate.  They’re doing other stuff during those blank periods where there’s no stimulus on the screen.

From my personal experience, what you do in those rest periods is you daydream.  Your mind wanders.  You think about what you’re going to do afterwards, or stuff that’s happened during the day.  There’s a lot of research since then to back that up.  It seems to be this kind of network that’s highly involved in daydreaming like behaviour, or social imagination, those kinds of things.

My opinion, or my suspicion, is that this illustrates how our term ‘imagination’ really encompasses a lot of different things.  When you try to lump it under this one term, this one mega term, you’re going to be missing out on a lot of the complexity, or subtlety.  So what I suspect is going on is that there’s this more daydreaming-like mode of control over your inner space, where you’re not really consciously, volitionally, directing yourself to have certain experiences.  There’s a default control network that takes over more during daydreaming.

When I daydream I’m not trying to think about anything, it’s just letting the thoughts come.  That’s maybe part of what imagination is, but a very important part of imagination is you trying to imagine things, trying to direct yourself, thinking, “Well, what is the relationship between these two things?” or “How can I build community?” or something like that.  In that case you’re taking active volitional control over these systems.  So that would be my suspicion of what’s going on.

Where the results we found differ from the default mode network is that in our study we would show people some stimulus (see below) and we would say, “Rotate this 90 degrees clockwise”, so they had this fairly difficult task that they had to do, and it was effortful.  This more frontal parietal network probably took over then.  And you see that a lot in other studies.  Our frontal parietal network, which I think they sometimes call an Executive Attention Network, directs things when you’re consciously trying to engage in some task; that takes over, and if you’re not doing anything, the default mode network takes over.

So they’re both different manifestations of the imagination?  Like an active and a more passive, less conscious version?  They’re two versions of the same thing, in a sense?

Yeah, I would think that.  It fits well with what I’ve seen.  There have been studies that show that they’re in some ways antagonistic or mutually inhibiting, the default mode network and this executive attention network…

It’s like oil and water, it’s one or the other?  Or Yin and Yang, as I’ve read in some papers?

Right, but a simple way of describing these that people often resort to is that the Executive Attention Network is designed for attention to the outside world, and the Default Mode Network is attention to the inner space.  Where I would disagree with that, or suggest that that’s not the case, is that I think a better way to classify it would be that executive attention is more of this volitionally driven attention, which is usually associated with attention to the outside world.  And default mode network is more – I don’t know how to describe it exactly, but it’s more of this daydreaming network.  But the point is that your executive volitional attention can be driven to the inner space just as much as it can be driven to the outside world.

Is the mental workspace network the same kind of network that would be firing in people when they’re thinking about the future and trying to be imaginative about how the future could be?

Yeah, I would think so.  I think an important difference, or an important additional part that you might start to see if you’re thinking about imagining the future, is that practically most of the time when you’re imagining the future, you’re thinking about people, and social groups, and how to navigate those kinds of dynamics.

So I would guess that then you would get all the social processing networks that we have added into the mix.  That’s actually another thing we’re thinking about how to look at: practically, a big chunk of human cognition is spent thinking about your relationship with other people, and how to navigate that.  There’s a good argument to be made that that kind of complex processing space was one of the main drivers of us becoming who we are, because social cognition is some of the most complex cognition we do: trying to imagine what somebody’s thinking by looking at their facial expression, or imagining how to resolve a conflict between two people who are fighting.  Things like that.

We do have very specialised regions and networks in the brain that have evolved to do that kind of processing.  So yeah, it’s a very interesting question: how would these mental workspace areas, at least the ones we looked at, which had nothing to do with social processing (it was all, “Here’s this abstract shape.  What does it look like if you flip it horizontally?”, things like that), interact with these socially evolved areas?

A lot of the research that I’ve been looking at is about how when people are in states of trauma, or when people grow up in states of fear, that the hippocampus visibly shrinks and that cells are burnt out in the hippocampus, and that people become less able to imagine the future.  People get stuck in the present, and it’s one of the indicators, particularly with post-traumatic stress, is that inability to look forward, and inability to imagine a future.  Do you have any knowledge of, or any speculation about, what happens to the mental workspace when people are in states of trauma or when people are in states of fear?

Definitely no data, only speculation.  As with anything real and interesting involving humans, it’s going to be incredibly complex.  So it would be very difficult, and maybe impossible, to distil it down to simple, understandable things that are happening in the brain.  But what I would guess is that people who are in stressful situations and experiencing trauma tend to focus, like you were hinting at, on the present.  What’s there immediately?  How do I survive this day?

You don’t tend to think much about planning for the future.  Instead of synthesising everything that’s happened to you in the past, you just react in the moment, because you don’t know what the next moments are going to be like.  There’s no more cognitive load you can deal with, because of all the stress you have.  So I would guess that, for one, you’re not really synthesising or processing your experiences into something that can be brought to bear on decisions in the future as much.

And you’re not exercising those muscles of planning far into the future.  So just like any other muscle in the body, if you don’t practice the skills, and you don’t use various parts of your brain, they’re going to atrophy.  They’re not going to develop in the way that they would if you did use them.  In that sense it seems perfectly understandable, and not that surprising, that in people who for whatever reason aren’t doing that kind of thing regularly in their lives, these areas and networks that we found associated with those kinds of activities (projecting oneself into the future, or imagining things that don’t exist) are not going to be as developed as they would be in people who are happy and healthy and imaginative.

The paper that Kyung Hee Kim published in 2010, ‘The Creativity Crisis’, suggested that we might be seeing a decline in our collective imagination.  Do you have any thoughts on why that might be, or what might be some of the processes at work here?

I could speculate about a couple of things.  The first thing that pops to mind obviously is education: how we think about the educational system, how we train children.  And I don’t know about 1990 in particular, but definitely starting in 1999, when we became test-crazed, that would be a very obvious culprit.

One thing to think about with the Torrance Test, and pretty much all of these standardised tests of creativity that we use, is that one of the major components that determines the outcome on the test is this divergent thinking idea: how many ideas can you come up with?  This has, I think, fairly detrimentally become one of the working definitions of creativity in psychology research: “how much?”  It doesn’t really focus on quality so much; it just uses how many ideas you can think of as a stand-in for how creative someone is.

The Torrance Test is better because it does get into other dimensions as well, but still, one of the major dimensions determining the score is fluency: when you’re doing these drawings, how many components are there in the drawing?  That kind of thing.  So for instance, if there were educational trends starting in the 1990s and continuing to now that were leading people to try to converge rather than diverge – you know, “What’s the one right answer?” versus “What are lots of possible answers?” – then that could definitely lead to these changes we’ve been seeing in the tests.
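Purely to show how little a count-based fluency score captures, here is a toy sketch in Python (the scoring rule and the responses are invented for illustration; this is not the actual Torrance scoring manual). It simply counts distinct ideas, so a long list of shallow answers outscores a single considered one.

```python
def fluency_score(responses):
    """Count distinct, non-empty ideas (a caricature of a fluency measure)."""
    return len({r.strip().lower() for r in responses if r.strip()})

# Hypothetical answers to "what could you use a brick for?"
many_shallow   = ["paperweight", "doorstop", "bookend", "hammer", "weapon"]
one_considered = ["grind it into pigment and paint a mural with it"]

print(fluency_score(many_shallow))    # 5
print(fluency_score(one_considered))  # 1
```

That asymmetry is the “how much?” problem in miniature: the measure rewards divergence by volume and is blind to depth or quality.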

Even if that were the case though, is that really a problem? Obviously we want people to be able to think of lots of possibilities but if it’s just, for instance, people who have been brought up in an educational system where they’ve been taking standardised tests all the time, and they’re trying to figure out which of the four bubbles is the right one to fill in, then that could just be a habit they’ve developed that carries over to these tests.  I don’t know exactly.

Another idea that maybe would be related to this is we’re definitely much less idle than we were in the past.  I guess we lament all the time how overscheduled kids are.  They go from soccer practice to band practice to art class, to blah, blah, blah, blah, trying to fill up their resume for college or whatever.  So if somebody is just constantly buzzing, busy, not really just stopping and daydreaming, and throwing rocks in creeks or whatever, then that’s again, it’s a habit they’re not going to have developed and they’re not going to be able to use as well.

This idleness, or giving up control to the Default Mode Network maybe, if you will, letting those ideas come in, exploring possibilities, those are things that I think often come out of boredom. And if you’re never bored, you’re never really letting those processes happen.  So that would be another thing to think about.

So if somebody is less imaginative, is that because when the mental workspace fires, it’s including fewer places, or joining them up less vigorously?  I don’t have all the terminology.  It all fires, but it fires to fewer places?  Or it fires less strongly to all those different places?

I think it would be basically everything, to give you a terrible answer.  This is where we’re really getting at how imagination is a very, very complex process that we’re distilling into a single word, when it’s really thousands of parts coming together.

For instance, if you can imagine visual experiences more or less vividly, then that’s going to play a role.  Somebody who can have very vivid mental images of things is probably going to have an easier time recombining things than somebody who really struggles to form a visual image.  Or on the flip side, there’s a lot of circumstantial evidence that people tend to go to one end or the other of being very visual people, and I consider myself one of those…  When I think, I tend to think a lot in terms of visual representations.  So it’s very easy for me to do the kinds of tasks that I ask subjects to do, where, you know, “Here’s this weird random shape, what would it look like if it was rotated 90 degrees?”

Some people have a really hard time doing that kind of stuff though.  They’re very smart people, but they’re just terrible at mentally manipulating images.  But if you have them think about other things, like more verbal, logical kinds of representations, they’re really good at that.  So even trying to talk about the mental workspace network as one static network of areas in the brain is probably not accurate, because different people will have different connections, or different parts of it will be more active than others.

When I’m trying to mentally imagine things, for some people like me, that might involve mental or visual images, and that’s the way I think about it, but for other people it might involve much more the language areas of the brain, exercising that language network in a more mental way.  And that might lead to strengths for some people versus others, and vice versa, depending on what kinds of tests you’re trying to do, or whether you’re a verbal person that’s being forced to try to do something visual, or vice versa.

So given that the networks involved are these complex information processing systems, there’s any number of ways they can differ or fail, or become strengthened or atrophied.

One of the questions I’ve asked everybody I’ve interviewed has been: if you had been elected last year as President on a platform of ‘Make America Imaginative Again’, if you had decided that one of the most important things we need is for young people to have a society that really cherishes the imagination, and an education system where people come out really fired up and passionate, what might be some of the things you would do in your first 100 days in office?

First 100 days?  Well, I think the real solutions are things that are more like 20-year solutions.  So you can start in the first 100 days, I guess, but you definitely won’t solve it in 100 days.  For me it all comes down to how we choose to educate people.  I come at this from the perspective of the US education system, so one thing is that we don’t really treat teaching as a profession in the same way that we do being a medical doctor or a lawyer.

I would say we need the equivalent training and residencies and professional degrees for teachers that we would have for anything else that’s as important a profession as teaching is.  Obviously we shouldn’t be focused on tests in the way that we are.  If you teach to tests, and you teach to the kind of competencies a child should achieve by fifth grade, you’re going to be ignoring all the things that are hard to measure, for one thing, like imagination, creativity, curiosity.  How do you evaluate whether a kid’s curious?  I don’t know.

One of the changes I would want to see is that we trust more that the outcomes we want will come, rather than needing to see them happen, because if you need to see a result, then you’ll only focus on the things that you can see.  And for a lot of what education really does, it’s very hard to measure in any reliable way.  If your goal is to create a society of people who are civically engaged, curious, creative and compassionate, that’s all stuff where you just have to set up a system to foster it, and hope that the outcome will be the society you want to create, basically.  That frees you to focus on those things, and not just on maths skills, reading skills, that kind of thing.

So in the first 100 days, what do you do? I don’t know. One concrete thing you could do is try to reorganise the teacher training system to make it more professionally aligned.

Like they have in Finland, where teachers are basically trained to Masters level, and then there’s no testing of teachers in schools.  They are then just empowered to teach, and they have the most play and the shortest school hours of any country in Europe, and they consistently get the best results and the brightest students.

Maybe that would be the first thing we could do, just copy Scandinavia.