The Forgotten Past Will Always be Repeated

By Robert Fantina

Source: CounterPunch

“Those who cannot remember the past are condemned to repeat it.” So said George Santayana.

It is difficult today to look at many, many situations in the world and not see some of the worst events in history being repeated, especially as they relate to racism and genocide. Has the world forgotten Nazi atrocities against Jews, Gypsies, intellectuals and others? Do we not remember the horrific U.S. bombing of two Japanese cities? Are Churchill’s colonial atrocities no longer worth considering?

We will review some of the most egregious events unfolding in front of us today.

Israel: This racist, apartheid state, which, for over 70 years, has brutally oppressed the Palestinian people, this year declared itself the nation-state of the Jewish people, and only the Jewish people. Yet 25% of people who live within the borders of the Zionist entity are not Jewish. And despite marginalizing fully one-quarter of its residents, and relegating them to second-class citizen status, government spokespeople have the nerve to proclaim Israel a democracy.

And what international outrage does this bring? Despite flagrant and constant violations of international law, not to mention common human decency, most of the countries of the world either ignore it all, or issue gentle rebukes, at best. And the world’s media seldom reports on it.

Myanmar: For several years now, the repression and expulsion of the Rohingya people has been ongoing government policy. A United Nations study of August 2018 found evidence of widespread violations of human rights. As of that date, nearly three-quarters of a million Rohingya people had been forced to flee their homeland due to the brutal persecution they have experienced. The U.N. study stated that military abuses of the Rohingyas “undoubtedly amount to the gravest crimes under international law.” Yet, like Israel’s brutal oppression of the Palestinians, this gets little press in North America.

India: Earlier this year, the government of India revoked Article 370 of the constitution, which had allowed occupied Kashmir some limited autonomy. Since that time, tens of thousands of Indian troops have stormed into Kashmir, communications with the outside world have been cut, and news media personnel are forbidden from entering. The death toll from this recent repression is unknown. Government officials have publicly announced plans to colonize Kashmir using the same model of land confiscation and illegal settlement construction that Israel has used for decades against Palestine. Again, one listens in vain for international opposition to these crimes against humanity.

Additionally, India has now passed a law easing the path to citizenship for many refugees, but making immigration by Muslims more difficult. This has caused widespread protest demonstrations throughout India, which are being met with police violence.

Saudi Arabia: That nation’s brutal onslaught against the people of Yemen continues, with the Yemeni death toll in the thousands and constantly rising. A bill passed by the U.S. Congress to prevent the U.S. from selling more weaponry to Saudi Arabia (the largest buyer of weapons from the U.S.) for use in Yemen was vetoed by President Donald Trump, thus enabling the continued slaughter of the Yemenis.

United States: Undergirding all this is the United States. That nation provides huge amounts of financial support to apartheid Israel, as well as protecting it from the legitimate consequences of its actions on the international stage. Government spokespeople talk of ‘monitoring’ the situation in Myanmar, and have been mainly silent about India.

But the U.S. remains busy: its decades-long sanctions against Cuba remain. Following some easing of restrictions during the administration of President Barack Obama, Trump has re-imposed them all. His sanctions against Iran, in violation of the Joint Comprehensive Plan of Action (JCPOA) and international law, are causing great hardship for the people there. The U.S. supports the overthrow of the governments of Venezuela, issuing cruel sanctions against that country, and of Syria, going so far as to actually arm, equip and train anti-government terrorist groups in that country, where, due largely to U.S. interference, hundreds of thousands of people have died, and hundreds of thousands more have been injured and left homeless. U.S. brutality toward the Yemenis has been mentioned above. The U.S. was also involved in the recent overthrow of the Bolivian government.

Yet as recently as this week, U.S. Secretary of State Mike Pompeo proclaimed that the U.S. supports the right of people everywhere to self-determination. He said this when announcing additional sanctions against Iran, the irony of it certainly being lost on him and Trump. Self-determination is all well and good in the eyes of the U.S. government, as long as certain conditions are met: the form of government the people select must not be socialist; it must be a government that will do the U.S.’s bidding; it must not be a government that thinks its country’s natural resources are its own; it must not question or oppose Israel in any way, and it must not put its people before U.S. corporate profits. It must, in all ways and at all times, be willing to do its part to satisfy the U.S.’s geo-political goals.

As we look at the results of forgetting, or not learning from, history, we should ask an important question: When similar behaviors were perpetrated by German, Italian and English leaders in the past, what was the result? Additional questions arise: Did Germany succeed in annexing its neighbors? Were Italy and England successful in achieving their political goals? No, the result of such behaviors was a war whose devastation exceeded anything that could have been imagined. And it brought the United States into its role as a super-power, which has been detrimental for much of the world.

The U.S. has never been interested in human rights or international law. Those in power in that country have only coveted riches, and have been, and are, willing to obtain them in any way possible, regardless of how much blood must flow. And that blood can be of innocent men, women and children in a far-off country, or of its own citizens whom it sends to war. It will attempt, however futilely, to order the world to its liking. And as people protest, all possible efforts will be made to crush them.

One thing the U.S. and many other nations have failed to learn is that people will only be able to tolerate the thwarting of their legitimate aspirations for a limited amount of time: they will not allow it to go on forever. Working to genuinely assist them will make more friends and allies, and ultimately be far more beneficial for the world than opposing them and oppressing them ever will.

Until the U.S. and other powerful nations learn this vital lesson, the suffering they cause will be unending.

The Varieties of Psychonautic Experience: Erik Davis’s ‘High Weirdness’

Art by Arik Roper

By Michael Grasso

Source: We Are the Mutants

High Weirdness: Drugs, Esoterica, and Visionary Experience in the Seventies
By Erik Davis
Strange Attractor Press/MIT Press, 2019

Two months ago, I devoured Erik Davis’s magisterial 2019 book High Weirdness: Drugs, Esoterica, and Visionary Experience in the Seventies the same weekend I got it, despite its 400-plus pages of sometimes dense, specialist prose. And for the past two months I have tried, in fits and starts, to gather together my thoughts on it—failing every single time. Sometimes it’s been for having far too much to say about the astonishing level of detail and philosophical depth contained within. Sometimes it’s been because the book’s presentation of the visionary mysticism of three Americans in the 1970s—ethnobotanist and psychonaut Terence McKenna, parapolitical trickster Robert Anton Wilson, and paranoid storyteller-mystic Philip K. Dick—has hit far too close to home for me personally, living in the late 2010s in a similarly agitated political (and mystical) state. In short, High Weirdness has seemed to me, sitting on my bookshelf, desk, or in my backpack, like some cursed magical grimoire out of Weird fiction—a Necronomicon or The King in Yellow, perhaps—and I became obsessed with its spiraling exploration of the unfathomable universe above and the depthless soul below. It has proven itself incapable of summary in any linear, rationalist way.

So let’s dispense with rationalism for the time being. In the spirit of High Weirdness, this review will try to weave an impressionistic, magical spell exploring the commonalities Davis unveils between the respective life’s work and esoteric, drug-aided explorations of McKenna, Wilson, and Dick: explorations that were an attempt to construct meaning out of a world that to these three men, in the aftermath of the cultural revelations and revolutions of the 1960s that challenged the supposed wisdom and goodness of American hegemony, suddenly offered nothing but nihilism, paranoia, and despair. These three men were all, in their own unique ways, magicians, shamans, and spiritualists who used the tools at their disposal—esoteric traditions from both East and West; the common detritus of 20th century Weird pop culture; technocratic research into the human mind, body, and soul; and, of course, psychedelic drugs—to forge some kind of new and desperately-needed mystical tradition in the midst of the dark triumph of the Western world’s rationalism.

A longtime aficionado of Weird America, Davis writes in the introduction to High Weirdness about his own early encounters with Philip K. Dick’s science fiction, the Church of the SubGenius, and other underground strains of the American esoteric in the aftermath of the ’60s and ’70s. As someone who came late in life to a postgraduate degree program (High Weirdness was Davis’s doctoral dissertation for Rice University’s Religion program, as part of a curriculum focus on Gnosticism, Esotericism, and Mysticism), I find it incredibly easy to identify with Davis’s desire to tug at the edges of his longtime association with and love for the Weird in a scholarly context. This book’s scholarly origins do not make High Weirdness unapproachable to the layperson, however. While Davis does delve deeply into philosophical and spiritual theorists and the context of American mysticism throughout the book, he provides succinct and germane summaries of this long history, translating the work of thinkers ranging from William James, the early 20th century psychologist and student of religious and mystical experience, to contemporary theorists such as Peter Sloterdijk and Mark Fisher. Davis’s introduction draws forth in great detail the long tradition of admitting the ineffable, the scientifically-inexplicable, into the creation of subjective, individual mystical experiences.

Primary among Davis’s foundational investigations, binding together all three men profiled in the book, is a full and thorough accounting of the question, “Why did these myriad mystical experiences all occur in the first half of the 1970s?” It’s a fairly common historical interpretation to look at the Nixon years in America as a hangover from the cultural revolution of the late 1960s, a retrenchment of Nixon’s “silent majority” of middle- and working-class whites vs. the perceived chaos of a militant student movement and identity-based politics among racial and sexual minorities. Davis admits that the general mystical seeking that went on in the early ’70s is a reaction to this revanchism. And while he quotes Robert Anton Wilson’s seeming affirmation of this idea—“The early 70s were the days when the survivors of the Sixties went a bit nuts”—his interest in the three individuals at the center of his study allows him to delve deeper, offering a more profound explanation of the politics and metaphysics of the era. In the immediate aftermath of the assassinations, the political and social chaos, and the election of Nixon in 1968, there was an increased tendency among the younger generation to seek alternatives to mass consumption culture, to engage in what leftist philosopher Herbert Marcuse would term “the Great Refusal.” All three of the figures Davis focuses on in this book, at some level or another, decided to opt out of what their upbringings and conformist America had planned for them, to various levels of harm to their livelihoods and physical and mental health. This refusal was part of an awareness of what a suburban middle-class life had excised from human experience: a sense of meaning-making, of a more profound spirituality detached from the streams of traditional mainline American religious life.

To find something new, the three men at the center of High Weirdness were forced to become bricoleurs—cobbling together a “bootstrap witchery,” in Davis’s words—from real-world occult traditions (both Eastern and Western); from the world of Cold War technocratic experimentation with cybernetics, neuroscience, psychedelics, and out-and-out parapsychology; and from midcentury American pop culture, including science fiction, fantasy, comic books, and pulp fiction. Davis intriguingly cites Dick’s invention of the term “kipple” in his 1968 novel Do Androids Dream of Electric Sheep? as a key concept in understanding how this detritus can be patched together and brought new life. Given Dick’s overall prescience in predicting our 21st century world of social atomization and disrepair, this seems a conceptual echo worth internalizing a half-century later. If the late 1960s represented a mini-cataclysm that showed a glimpse of what a world without the “Black Iron Prison” might look like, those who graduated to the 1970s—the ones who “went a bit nuts”—needed to figure out how to survive by utilizing the bits and scraps left behind after the sweeping turbulence blew through. In many ways, McKenna, Wilson, and Dick are all post-apocalyptic scavengers.

All three men used drugs extensively, although not necessarily as anthropotechnics specifically designed to achieve enlightenment (Davis notes that Dick in particular had preexisting psychological conditions that, in conjunction with his prodigious use of amphetamines in the 1960s, were likely one explanation for his profound and sudden breaks with consensus reality in the ’70s). But we should also recognize (as Davis does) that McKenna, Wilson, and Dick were also, in many ways, enormously privileged. As well-educated scions of white America, born between the Great Depression and the immediate aftermath of World War II, they had the luxury to experiment with spirituality, psychedelic drugs, and technology to various degrees while holding themselves consciously separate from the mainstream institutions that would eventually co-opt and recuperate many of these strains of spirituality and individual seeking into the larger Spectacle. As Davis cannily notes, “Perhaps no one can let themselves unravel into temporary madness like straight white men.” But these origins also help explain the expressly technocratic bent of many of their hopes (McKenna) and fears (Wilson and Dick). Like their close confederate in Weirdness, Thomas Pynchon (who spent his early adulthood working for defense contractor Boeing, an experience which allowed him a keener avenue to his literary critiques of 20th century America), all three men were adjacent to larger power structures that alternately thrilled and repelled them, and which also helped form their specific esoteric worldviews.

It would be a fool’s errand to try to summarize the seven central chapters of the book, which present in great detail Terence (and brother Dennis) McKenna’s mushroom-fueled experiences contacting a higher intelligence in La Chorrera, Colombia in 1971, Robert Anton Wilson’s LSD-and-sex-magick-induced contact with aliens from the star Sirius in 1973 and ’74 as detailed in his 1977 book Cosmic Trigger: The Final Secret of the Illuminati, and Philip K. Dick’s famous series of mystical transmissions and revelations in February and March of 1974, which influenced not only his fiction output for the final eight years of his life but also his colossal “Exegesis,” which sought to interpret these mystical revelations in a Christian and Gnostic context. Davis’s book is out there and I can only encourage you to buy a copy, read these chapters, and revel in their thrilling detail, exhilarating madness, and occasional absurdity. Time and time again, Davis, like a great composer of music, returns to his greater themes: the environment that created these men gave them the tools and technics to blaze a new trail out of the psychological morass of Cold War American culture. At the very least, I can present some individual anecdotes from each of the three men’s mystical experiences, as described by Davis, that should throw some illumination on how they explored their own psyches and the universe using drugs, preexisting religious/esoteric ritual, and the pop cultural clutter that had helped shape them.

Davis presents a chapter focusing on each man’s life leading up to his respective spiritual experiences, followed by a chapter (in the case of Philip K. Dick, two) on his mystical experience and his reactions to it. For Terence McKenna and his brother Dennis, their research into organic psychedelics such as the DMT-containing yagé (first popularized in the West in the Cold War period by William S. Burroughs), alternately known as oo-koo-hé or ayahuasca, led them to South America to find the source of these natural, indigenous entheogens. But at La Chorrera in Colombia they instead met the plentiful and formidable fungus Psilocybe cubensis. In their experiments with the mushroom, Terence and Dennis tuned into perceived resonances with long-dormant synchronicities within their family histories, their childhood love of science fiction, and with the larger universe. Eventually, Dennis, on a more than week-long trip on both mushrooms and ayahuasca, needed to be evacuated from the jungle, but not before he had acted as a “receiver” for cryptic hyper-verbal transmissions, the hallucinogens inside him a “vegetable television” tuned into an unseen frequency—a profound shamanic state that Terence encouraged. The language of technology, of cybernetics, of science is never far from the McKenna brothers’ paradigm of spirituality; the two boys who had spent their childhoods reading publications like Analog and Fate, who had spent their young adulthoods studying botany and science while deep in the works of Marshall McLuhan (arguably a fellow psychedelic mystic who, like the McKennas and Wilson, was steeped in a Catholic cultural tradition), used the language they knew to explain their outré experiences.

Wilson spent his 30s as an editor for Playboy magazine’s letters page and had thus been exposed to the screaming gamut of American political paranoia (while contributing to it in his own inimitable prankster style). He had used this parapolitical wilderness of mirrors, along with his interest in philosophical and magickal orientations such as libertarianism, Discordianism, and Crowleyian Thelema, as fuel for both the Illuminatus! trilogy of books written with Robert Shea (published in 1975), and his more than year-long psychedelic-mystical experience in 1973 and 1974, during which he claimed to act as a receiver on an “interstellar ESP channel,” obtaining transmissions from the star Sirius. His experiences as detailed in Cosmic Trigger involve remaining in a prolonged shamanic state (what Wilson called the “Chapel Perilous,” a term redolent with the same sort of medievalism as the McKenna brothers’ belief that they would manifest the Philosopher’s Stone at La Chorrera), providing Wilson with a constant understanding of the universe’s playfully unnerving tendency towards coincidence and synchronicity. Needless to say, the experiences of one Dr. John C. Lilly, who around this precise time was also tuned into ostensible gnostic communications from a spiritual supercomputer, mesh effortlessly with Wilson’s (and Dick’s) experiences thematically; Wilson even used audiotapes of Lilly’s lectures on cognitive meta-programming to kick off his mystical trances. Ironically, it was UFO researcher and keen observer of California’s 1970s paranormal scene Jacques Vallée who helped to extract Wilson out of the Chapel Perilous—by retriggering his more mundane political paranoia, saying that UFOs and other similar phenomena were instruments of global control. In Davis’s memorable words, “Wilson did not escape the Chapel through psychiatric disenchantment but through an even weirder possibility.”

Philip K. Dick, who was a famous science fiction author at the dawn of the ’70s, had already been through his own drug-induced paranoias, political scrapes, and active Christian mystical seeking. Unlike McKenna and Wilson, Dick was a Protestant who had stayed in close contact with his spiritual side throughout adulthood. In his interpretation of his mystical 2-3-74 experience, Dick uses the language and epistemology of Gnostic mystical traditions two millennia old. Davis also notes that Dick used the plots of his own most overtly political and spiritual ’60s output to help him understand and interpret his transcendent experiences. Before he ever heard voices or received flashes of information from a pink laser beam or envisioned flashes of the Roman Empire overlapping with 1970s Orange County, California, Dick’s 1960s novels, specifically The Three Stigmata of Palmer Eldritch (1965) and Ubik (1969), had explored the very nature of reality and admitted the possibility of a Gnostic universe run by unknowable, cruel demiurges. Even in these hostile universes, however, there exists a messenger of hope and mercy who seeks to destroy the illusion of existence and bring relief. These existing pieces of cultural and religious “kipple,” along with the parasocial aspects of Christian belief that were abroad in California at the time, such as the Jesus People movement (the source of the Ichthys fish sign that triggered the 2-3-74 experience), gave Dick the equipment he needed to make sense of the communications he received and the consoling realization that he was not alone, that he was instead part of an underground spiritual movement that acted as a modern-day emanation of the early Christian church.

After one learns about these three figures’ shockingly similar experiences of drug-induced contact with the beyond, the inevitable question emerges: what were all these messages, these transmissions from beyond, trying to convey? One common aspect of all three experiences is how cryptic they are (and how difficult and time-consuming it was for each of these men to interpret just what the messages were saying). It’s also a little sobering to discover through Davis’s accounts how personal all three experiences were, whether it’s Terence and Dennis’s private fraternal language during the La Chorrera experiment, or mysterious phone calls placed back in time to their mother in childhood, or a lost silver key that Dennis was able to, stage-magician-like, conjure just as they were discussing it, or the message Philip K. Dick received to take his son Christopher to the doctor for an inguinal hernia that could have proven fatal. But alongside these personal epiphanies, there is also always an undeniable larger social and political context, especially as both Wilson and Dick saw their journeys in 1973 and 1974 as a way to confront and deal with the intense paranoia around Watergate and the fall of Richard Nixon (in his chapter setting the scene of the ’70s, Davis calls Watergate “a mytho-poetic perversion of governance”). In every case, the message from beyond requires interpretation, meaning-making, and, in Davis’s terminology, “constructivism.” The reams of words spoken and written by all three men analyzing their respective mystical experiences are an essential part of the experience. And these personal revelations are all attempts by the three men to make sense of the chaos of both their personal lives and their existence in an oppressive 20th century technocratic society: to inject some sense of mystery into daily existence, even if it took the quasi-familiar and, yes, somewhat comforting form of transmissions from a mushroom television network or interstellar artificial intelligence.

Over the past nine months I’ve spent much of my own life completing (and recovering from the process of completing) a Master’s degree. My own academic work, focusing on nostalgia’s uses in binding together individuals and communities with their museums, tapped into my earliest memories of museum visits in the late 1970s, when free education was seemingly everywhere (and actually free), when it was democratic and diverse, when it was an essential component of a rapidly-disappearing belief in social cohesion. In a lot of ways, my work at We Are the Mutants over the past three years is the incantation of a spell meant to conjure something new and hopeful from the “kipple” of a childhood suffused in disposable pop culture, the paranormal and “bootstrap witchery,” and science-as-progress propaganda. At the same time, over the past three years the world has been at the constant, media-enabled beck and call of a figure ten times more Weird and apocalyptic and socially malignant than any of Philip K. Dick’s various Gnostic emanations of Richard Nixon.

Philip K. Dick believed he was living through a recapitulation of the Roman Empire, that time was meaningless when viewed from the perspective of an omniscient entity like VALIS. In the correspondences and synchronicities I have witnessed over the past few months—in the collapse of political order and the revelation of profound, endemic corruption behind the scenes of the ruling class—this sense of recurring history has sent me down a set of ecstatic and paranoid corridors similar to those McKenna, Wilson, and Dick walked. The effort to find meaning in a world that once held some inherent structure in childhood but has become, in adulthood, a hollow facade—a metaphysical Potemkin village—is profoundly unmooring. But meaning is there, even if we need technics such as psychedelic drugs, cybernetics (Davis’s final chapter summarizing how the three men’s mystical explorations fed into the internet as we know it today is absolutely fascinating), and parapolitical activity to interpret it. On this, the 50th anniversary of the summer of 1969, commonly accepted as the moment the Sixties ended, with echoes of moon landings and Manson killings reverberating throughout the cultural theater, is it any wonder that broken psychonauts trying to pick up the pieces of a shattered world would appeal to lost souls in 2019? High Weirdness as a mystical tome remains physically and psychically close to me now, and probably will for the remainder of my life; and if the topics detailed in this review intrigue you the way they do me, it will remain close to you as well.

There Is No Normal

By James Howard Kunstler

Source: Kunstler.com

The wheel of time rolls forward, never retracing its path, but because it is a wheel, and we are riding in it, a persistent illusion persuades us that the landscape is recognizably the same, and that our doings within the regular turning of the seasons seem comfortably normal. There is no normal.

There is for us, at this moment in history, an especially harsh turning (so Strauss and Howe would say) as our journey takes the exit ramp out of the high energy era into the next reality of a long emergency. The human hive-mind senses that something is different, but at the same moment we’re unable to imagine changing all our exquisitely tuned arrangements — especially the thinking class in charge of all that, self-enchanted with pixeled fantasies. The dissonance over this is driving America crazy.

The wheel hit a deep pothole in 2008 turning onto the off-ramp and has been wobbling badly ever since. 2008 was a warning that going through the motions isn’t enough to sustain a sense of purpose, either nationally or for individuals trying to keep their lives together ever more desperately. The cultural memory of the confident years, when we seemed to know what we were doing, and where we were going, dogs us and mocks us.

The young adults feel all that most acutely. The pain prompts them to want to deconstruct that memory. “No, it didn’t happen that way,” they are saying. All those stories about the founding of this society — of those Great Men with their powdered hairdos writing the national charter, and the remarkable experience of the past 200-odd years — are wrong! There was nothing wonderful about it. The whole thing was a swindle!

They are feeling the wheel’s turning most painfully, since they know they will see many more turnings in the years ahead, and the direction of the wheel is vectoring downward for them. The bottom line is less of everything, not more. That is a new ethos here in America and it’s hardly comforting: Less income, fewer comforts, more literal hardships, fewer consolations for the universal difficulty of being alive. No wonder they are angry.

It’s this simple. We landed in the New World five hundred years ago. It was full of good things that human beings had barely begun to exploit, laid out like a banquet. There was plenty of good virgin soil for growing food, the best timber in the world, clean rivers and great lakes, ores full of iron, gold, and silver, and down deep a bonanza of coal and oil to drive the wheel through very flush times. The past century was particularly supercharged, the oil years.

Imagine living through the very start of all that, the blinding, fantastic newness of modernity! Look back at the stories and images around Teddy Roosevelt and his times, and the confidence of that era just astonishes you. An emergent cavalcade of wonders: electricity, telephones, railroads, subways, skyscrapers! And in a few more years movies, cars, airplanes, radio. Even the backstage wonders of the day were astonishments: household plumbing for all, running hot water, municipal water and sewer systems, refrigeration, tractors! It’s hard to conceive how much these developments changed the human experience of daily life.

Even the traumas of the 20th century’s world wars did not crush that sense of amazing progress, at least not in North America, spared the wars’ mighty wreckage. The post-war confidence of American society achieved a level of in-your-face laughable hubris — see the USA in your Chevrolet! — until John Kennedy was shot down, and after that the delirious moonshot euphoria steadily gave way to corrosive skepticism, anxiety, acrimony, and enmity. My generation, booming into adulthood, naively thought they could fix all that with Earth Day, tofu, and computers, and keep the great wheel rolling down into an even more glorious cybernetic nirvana.

Fakeout. That’s not where the wheel is going. We borrowed all we possibly could from the future to pretend that the system was still working, and now the future is at the door like a repo man come to take away both the car and the house. The financial scene is an excellent analog to our collective psychology. Its workings depend on the simple faith that its workings work. So, it is easy to imagine what happens when that faith wavers.

We’re on the verge of a lot of things coming apart: supply lines, revenue streams, international agreements, political assumptions, promises to do this and that. We have no idea how to keep it together on the downside. We don’t even want to think about it. The best we can do for the moment is pretend that the downside doesn’t exist. And meanwhile, fight both for social justice and to make America great again, two seemingly noble ideas, both exercises in futility. The wheel is still turning and the change of season is soon upon us. What will you do?

The Gulag of the Mind

By Charles Hugh Smith

Source: Of Two Minds

There are no physical barriers in the Gulag of the Mind–we imprison ourselves, and love our servitude. Indeed, we fear the world outside our internalized gulag, because we’ve absorbed the narrative that the gulag is secure and permanent.

We’ve also absorbed the understanding that escape will be punished. Dissent will quickly be suppressed or vilified, and the dissenter socially and economically marginalized.

In a peculiarly human pathology, we now believe the exact opposite of reality: our abuser is our savior; we’re getting wealthier when in fact we’re getting poorer; the government will always save us, even though the government is the problem, not the solution; and we’re entitled to all sorts of good things even as the entire system clings to a veneer of normalcy that is increasingly difficult to maintain.

We dare not realize the crises we’re about to face are novel, and the thinking of the past is worse than useless, as doing more of what’s failed is about to bear real consequences that cannot be papered over.

Michael Grant described this clinging to the past in his excellent account The Fall of the Roman Empire:

There was no room at all, in these ways of thinking, for the novel, apocalyptic situation which had now arisen, a situation which needed solutions as radical as itself. [The status quo] attitude is a complacent acceptance of things as they are, without a single new idea.

This acceptance was accompanied by greatly excessive optimism about the present and future. Even when the end was only sixty years away, and the Empire was already crumbling fast, Rutilius continued to address the spirit of Rome with the same supreme assurance.

This blind adherence to the ideas of the past ranks high among the principal causes of the downfall of Rome. If you were sufficiently lulled by these traditional fictions, there was no call to take any practical first-aid measures at all.

The Gulag of the Mind is constructed of both traditional fictions–that all the looming crises can be solved by repeating what worked in the past 50 years–and the new ones of virtue signaling–that publicly signaling our virtuous convictions is magically equivalent to actually solving problems, as if our problems are all nothing but a scarcity of virtuous convictions rather than real-world crises that will require immense fortitude and sacrifice to weather, much less resolve.

The Gulag of the Mind depends on technology–or more precisely, on a magical thinking faith that technology will always effortlessly save us: some new form of magic will manifest at the moment of need and we won’t have to change anything in our lifestyle or our corrupt power structure.

In the Gulag of the Mind, a perversion of justice passes for real justice. There are two sets of laws and two levels of enforcement: the wealthy and powerful escape justice while commoners are given life-crushing prison sentences for Drug Gulag offenses, and their vehicles and belongings are confiscated because they are too poor to pay the state’s onerous penalties and fees.

Befuddled and blind, we wander toward the cliff without even seeing it, focusing on our little screens of entertainment and self-absorption. The bottom of the cliff beckons, and filled with the magical sense of security bestowed by the Gulag of the Mind, we imagine we can walk on air and escape unhurt.

Were the Atomic Bombings of Hiroshima and Nagasaki a War Crime and a Crime Against Humanity?

This August 6, 1945 file photo shows the destruction from the explosion of an atomic bomb in Hiroshima, Japan. (AP Photo/File)

By Rossen Vassilev Jr.

Source: Global Research

74 years ago, on August 6, 1945, the first atomic bomb was dropped on Hiroshima.

Was President Harry Truman “a murderer,” as the renowned British analytic philosopher Gertrude Elizabeth Anscombe once charged? Were the atomic bombings of Hiroshima and Nagasaki indeed a war crime and a crime against humanity, as she and other academic luminaries have publicly claimed? A Distinguished Professor of Philosophy and Ethics at Oxford and Cambridge, who was one of the 20th century’s most gifted philosophers and recognizably the greatest woman philosopher in history, Dr. Anscombe openly called President Truman a “war criminal” for his decision to have the Japanese cities of Hiroshima and Nagasaki leveled by atomic bombs in August 1945 (Rachels & Rachels 127). According to another academic critic, the late American historian Howard Zinn, at least 140,000 Japanese civilians were “turned into powder and ash” in Hiroshima. Over 70,000 civilians were incinerated in Nagasaki, and another 130,000 residents of the two cities died of radiation sickness in the next five years (Zinn 23).

The two most often cited reasons for President Truman’s controversial decision were to shorten the war and to save the lives of “between 250,000 and 500,000” American soldiers who might have died in battle had the U.S. military been forced to invade the home islands of Imperial Japan. Truman reportedly claimed that

“I could not bear this thought and it led to the decision to use the atomic bomb” (Dallek 26).

But Dr. Gertrude Anscombe and her husband, Dr. Peter Geach, Professor of Philosophical Logic and Ethics, were the 20th century’s foremost philosophical champions of the doctrine that moral rules are absolute, and she did not buy this morally callous argument:

“Come now: if you had to choose between boiling one baby and letting some frightful disaster befall a thousand people—or a million people, if a thousand is not enough—what would you do? For men to choose to kill the innocent as a means to their ends is always murder” (Rachels & Rachels 128-129).

In 1956, Professor Anscombe and other prominent faculty members of Oxford University openly protested the decision of university administrators to grant Truman an honorary degree in gratitude for America’s wartime help. She even wrote a pamphlet, explaining that the former U.S. President was “a murderer” and “a war criminal” (Rachels & Rachels 128).

In the eyes of many contemporaries of Elizabeth Anscombe, the atomic bombings of Hiroshima and Nagasaki violated famous philosophical-ethical norms such as the “Sanctity of Human Life,” the “Wrongfulness of Killing,” and also that “it is wrong to use people as means to other people’s ends.” Former President Herbert Hoover was another early critic, openly declaring that

“The use of the atom bomb, with its indiscriminate killing of women and children, revolts me” (Alperovitz The Decision 635).

Even President Truman’s own Chief of Staff, the five-star Admiral William D. Leahy (the most senior U.S. military officer during the war) made no secret of his strong disapprobation of the atomic bombings:

“It is my opinion that the use of this barbarous weapon at Hiroshima and Nagasaki was of no material assistance in our war against Japan. The Japanese were already defeated and ready to surrender because of the effective sea blockade and the successful bombing with conventional weapons…. My own feeling is that in being the first to use it, we had adopted an ethical standard common to the barbarians of the Dark Ages…. I was not taught to make war in that fashion, and wars cannot be won by destroying women and children” (Claypool 86-87, emphasis added).

The apologists for President Truman, on the other hand, seem to be using the quasi-Utilitarian “Benefits Argument” to justify the barbaric use of a devastating weapon of mass destruction, which killed hundreds of thousands of innocent civilians in the two targeted Japanese cities even though (contrary to Truman’s many public pronouncements at that time) there had been no military troops, no heavy weaponry, and no major war-related industries in either city. Because nearly the entire adult male population of both Hiroshima and Nagasaki had been drafted by the Japanese military, it was mostly women, children, and old men who fell victim to fiery death from the sky. The excuse that Truman himself repeatedly offered was:

“The dropping of the bombs stopped the war, saved millions of lives” (Alperovitz Atomic Diplomacy 10).

He even boasted that he had “slept like a baby” the night after signing the final order to use the atomic bombs against Japan (Rachels & Rachels 127). But what Truman was saying in self-justification was far from being the truth—let alone the whole truth.

Unleashing a nuclear Frankenstein

At the urging of a fellow nuclear physicist, the anti-Nazi Hungarian émigré Leo Szilard, Albert Einstein wrote a letter to President Franklin D. Roosevelt on August 2, 1939, recommending that the U.S. government start work on a powerful atomic device as a defensive deterrent to Nazi Germany’s possible acquisition and use of nuclear weaponry (Ham 103-104). But when the top-secret Manhattan Project finally got off the ground in early 1942, the U.S. military obviously had other, much more offensive plans regarding the future targets of America’s A-bombs. While at least 67 other Japanese cities, including the capital Tokyo, were reduced to rubble by daily conventional firebombing, including the use of napalm and other incendiaries, Hiroshima and Nagasaki had been deliberately spared for the sole purpose of testing the destructiveness of the new atomic device (Claypool 11).

An even more important reason for employing the bomb was to scare Stalin, who had turned quickly from “Old Uncle Joe” at the time of the FDR presidency into “the Red Menace” in the eyes of Truman and his top advisers. Truman had abandoned FDR’s policy of cooperation with Moscow, replacing it with a new policy of hostile confrontation with Stalin, in which America’s newly-acquired monopoly over nuclear armaments would be exploited as an aggressive tool of Washington’s anti-Soviet diplomacy (Truman’s so-called “atomic diplomacy”). Fully two months before Hiroshima and Nagasaki, the same Leo Szilard had met privately with Truman’s Secretary of State, James F. Byrnes, and had tried unsuccessfully to persuade him that the nuclear weapon should not be used to destroy helpless civilian targets such as Japan’s cities. According to Dr. Szilard,

“Mr. Byrnes did not argue that it was necessary to use the bomb against the cities of Japan in order to win the war…. Mr. Byrnes’s view [was] that our possessing and demonstrating the bomb would make Russia more manageable in Europe” (Alperovitz Atomic Diplomacy 1, 290).

The Truman Administration had, in fact, postponed the Potsdam meeting of the Big Three until July 17, 1945—one day after the successful Trinity test of the first A-bomb at the Alamogordo testing range in New Mexico—to give Truman extra diplomatic leverage in negotiating with Stalin (Alperovitz Atomic Diplomacy 6). In Truman’s own words, the atom bomb “would keep the Russians straight” and “put us in a position to dictate our own terms at the end of the war” (Alperovitz Atomic Diplomacy 54, 63).

At this point, the Truman Administration was no longer interested in having Moscow’s Red Army liberate Northern China (Manchuria) from Japanese military occupation (as FDR, Churchill, and Stalin had jointly agreed at the Yalta Conference in February 1945)—let alone invade or capture Imperial Japan itself. Quite to the contrary. Publicly deploring the “political-diplomatic rather than military motives” behind Truman’s decision to nuke Japan, Albert Einstein complained that “a great majority of scientists were opposed to the sudden employment of the atom bomb. I suspect that the affair was precipitated by a desire to end the war in the Pacific by any means before Russia’s participation” (Alperovitz The Decision 444). Winston Churchill privately told his Foreign Secretary, Anthony Eden, at the Potsdam Conference that

“It is quite clear that the United States do not at the present time desire Russian participation in the war against Japan” (Claypool 78).

Not even Tokyo’s last-minute desperate offer (made during and after the Potsdam Conference) to surrender, provided the Allies promised not to prosecute Japan’s god-like emperor or remove him from office, could prevent this deadly decision, even though Truman “had indicated a willingness to maintain the emperor on the throne” (Dallek 25).

Therefore, sparing the lives of American GIs was hardly one of Truman’s more convincing arguments. In early 1945, FDR and Army General Dwight Eisenhower, Supreme Commander of the Allied Forces in Europe, had together decided to leave the capture of Berlin to Soviet Marshal Georgi Zhukov‘s battle-hardened troops in order to avoid heavy American casualties. After officially declaring war on Tokyo on August 8, 1945, and having destroyed the Japanese military forces in Manchuria, Stalin’s Red Army prepared to invade and occupy Japan’s home islands—which certainly would have saved the lives of thousands of U.S. servicemen about whom Truman seemed so vocally concerned. But following Nazi Germany’s unconditional surrender in May 1945, Truman had come to share Winston Churchill’s famous revisionist assessment that “We have slain the wrong swine.”

It is not even clear whether Tokyo finally surrendered on August 14 due to the two U.S. nuclear attacks carried out on August 6 and August 9, respectively (after which there were practically no more Japanese cities left to destroy nor any more U.S. A-bombs to drop)—or because of the threat of Soviet invasion and occupation after Moscow had entered the war against the Empire of Japan. Just days before the Soviet declaration of war, the Japanese ambassador to Moscow had cabled Foreign Minister Shigenori Togo in Tokyo that Moscow’s entry into the war would spell a total disaster for Japan:

“If Russia…should suddenly decide to take advantage of our weakness and intervene against us with force of arms, we would be in a completely hopeless situation. It is clear as day that the Imperial Army in Manchukuo [Manchuria] would be completely unable to oppose the Red Army which has just won a great victory and is superior to us on all points” (Barnes).

To nuke or not to nuke

General Eisenhower was later quoted as stating his conviction that it had not been “necessary” militarily to use the bomb to force Japanese surrender:

“Japan was, at that very moment, seeking some way to surrender with a minimum loss of ‘face’…it wasn’t necessary to hit them with that awful thing” (Alperovitz Atomic Diplomacy 14).

In private, Eisenhower repeated his objections to his direct boss, Truman’s Secretary of War Henry L. Stimson:

“I had been conscious of a feeling of depression and so I voiced to him my strong misgivings, first on the basis of my belief that Japan was already defeated and that dropping the bomb was completely unnecessary, and secondly because I thought that our country should avoid shocking world opinion by the use of a weapon whose employment was, I thought, no longer mandatory as a measure to save American lives” (Alperovitz Atomic Diplomacy 14).

Admiral William F. Halsey, commander of the U.S. Third Fleet (which conducted the bulk of naval operations against the Japanese in the Pacific during the entire war), agreed that there was “no military need” to employ the new weapon, which was used only because the Truman Administration had a “toy and they wanted to try it out…. The first atomic bomb was an unnecessary experiment…. It was a mistake to ever drop it” (Alperovitz The Decision 445). Indeed, it was quite “certain” at the time that a totally devastated Japan, which was on the verge of internal collapse, would have surrendered within weeks, if not days, without the atomic bombings of Hiroshima and Nagasaki or even without the Soviet declaration of war against Tokyo. As the official U.S. Strategic Bombing Survey concluded at the end of the war, “certainly prior to 31 December 1945, and in all probability prior to 1 November 1945, Japan would have surrendered even if the atomic bombs had not been dropped, even if Russia had not entered the war, and even if no invasion had been planned or contemplated” (Alperovitz Atomic Diplomacy 10-11).

Major General Curtis E. LeMay, commander of the U.S. Twenty-first Bomber Command, which had conducted the massive conventional bombing campaign against wartime Japan and dropped the atomic bombs on Hiroshima and Nagasaki, stated publicly: “I felt there was no need to use them [atomic weapons]. We were doing the job with incendiaries. We were hurting Japan badly…. We went ahead and dropped the bombs because President Truman told me to do it…. All the atomic bomb did was, in all probability, save a few days” (Alperovitz The Decision 340).

The fateful decision to drop the two atomic bombs code-named “Little Boy” and “Fat Man” on Japan may have been made a little bit more morally acceptable for Truman by the daily carpet bombing of German and Japanese cities throughout the war, including the firebombings of Hamburg, Dresden, and Tokyo, which had nearly wiped out their civilian populations. The declared goal of these relentless city-busting air raids was to destroy the morale and the will to fight of the German and Japanese people and thus shorten the war. But many years after the war Dr. Howard Zinn (himself a B-17 co-pilot and bombardier who had flown dozens of bombing missions against Nazi Germany) sadly mused: “No one seemed conscious of the irony—that one of the reasons for the general indignation against the fascist powers was their history of indiscriminate bombing of civilian populations” (Zinn 37). But, in fact, Secretary of War Henry Stimson, Admiral William Leahy, and Army General Douglas MacArthur were no less disturbed by what they saw as the barbarity of the “terror” air campaign, with Stimson privately fearing that the U.S. would “get the reputation for outdoing Hitler in atrocities” (Ham 63).

Clearly, Japan was defeated and preparing to surrender before the bomb was used; the bomb’s main, if not only, purpose was to intimidate the Soviet Union. But there had been several viable alternatives, some of which were discussed prior to the atomic bombings. The Under Secretary of the Navy, Ralph Bard, had become convinced that “the Japanese war was really won” and was so disturbed by the prospect of using atom bombs against defenseless civilians that he secured a meeting with President Truman, at which he unsuccessfully pressed his case “for warning the Japanese of the nature of the new weapon” (Alperovitz Atomic Diplomacy 19). Admiral Lewis L. Strauss, Special Assistant to the Secretary of the Navy, who replaced Bard after the latter’s angry resignation, also believed that “the war was very nearly over. The Japanese were nearly ready to capitulate.” That is why Admiral Strauss insisted that the atom bomb should be demonstrated in a way that would not kill large numbers of civilians, proposing that “…a satisfactory place for such a demonstration would be a large forest of cryptomeria trees not far from Tokyo” (Alperovitz Atomic Diplomacy 19). General George C. Marshall, U.S. Army Chief of Staff, was equally opposed to the bomb being used on civilian areas, arguing instead that

“…these weapons might be used against straight military objectives such as a large naval installation and then if no complete result was derived from the effect of that…we ought to designate a number of large manufacturing areas from which people would be warned to leave—telling the Japanese that we intend to destroy such centers…. Every effort should be made to keep our record of warning clear…. We must offset by such warning methods the opprobrium which might follow from an ill-considered employment of such force” (Alperovitz Atomic Diplomacy 20).

General Marshall also insisted that instead of surprising the Russians with the first use of the atom bomb, Moscow should be invited to send observers to the Alamogordo nuclear test. Many of the scientists working for the Manhattan Project likewise urged that a demonstration be arranged first, including a possible nuclear explosion at sea in close proximity to Japan’s coast, so that the bomb’s destructive power would be made clear to the Japanese before it was used against them. But, like the U.S. military’s dissenting views, the nuclear scientists’ opposition was never considered seriously by the Truman Administration (Alperovitz Atomic Diplomacy 20-21).

Conclusion

As a result of Truman’s immoral decision to use nuclear explosives against the “Japs” (a derogatory name for the Japanese commonly used in public in wartime America, including by President Truman himself), well over 200,000 civilians were instantly cremated and many thousands died later of radiation sickness. J. Robert Oppenheimer, scientific director of the Manhattan Project and “father” of the U.S. atom bomb, declared that Truman’s decision was “a grievous error,” because now “we have blood on our hands” (Claypool 17). Howard Zinn agreed with Dr. Oppenheimer’s judgment, remarking that “much of the argument defending the atomic bombings has been based on a mood of retaliation, as if the children of Hiroshima had bombed Pearl Harbor…. Did American children deserve to die because of the U.S. massacre of Vietnamese children at My Lai?” (Zinn 59).

The controversial General Curtis LeMay, who had opposed the two atomic blasts, later confided to former Secretary of Defense Robert McNamara (who had worked for LeMay during the war, helping select Japanese targets for the American firebombing raids): “If we’d lost the war, we’d all have been prosecuted as war criminals” (Schanberg). Given the unjustifiable and unnecessary use of such an inhumane and indiscriminate weapon of mass destruction as the atomic bombs dropped on Hiroshima and Nagasaki, Professor Elizabeth Anscombe called President Truman a murderer and a war criminal. Until the day she died, Dr. Anscombe believed that Truman should have been put on trial for having committed some of the worst war crimes and crimes against humanity during WWII.


Sources

Alperovitz, Gar. Atomic Diplomacy: Hiroshima and Potsdam. The Use of the Atomic Bomb and the American Confrontation with Soviet Power. London and Boulder, CO: Pluto Press, 1994. Print.

---. The Decision to Use the Atomic Bomb. New York: Vintage Books, 1996. Print.

Barnes, Michael. “The Decision to Use the Atomic Bomb: Arguments Against.” Web. 14 Apr. 2019.

Claypool, Jane. Hiroshima and Nagasaki. New York and London: Franklin Watts, 1984. Print.

Dallek, Robert. Harry S. Truman. New York: Times Books, 2008. Print.

Ham, Paul. Hiroshima Nagasaki: The Real Story of the Atomic Bombings and Their Aftermath. New York: St. Martin’s Press, 2011. Print.

Rachels, James, and Stuart Rachels. The Elements of Moral Philosophy (8th edition). McGraw-Hill Education, 2015. Print.

Schanberg, Sydney. “Soul on Ice.” The American Prospect, October 27, 2003. Web. 14 Apr. 2019.

Zinn, Howard. The Bomb. San Francisco, CA: City Lights Books, 2010. Print.

Jacques Ellul: A Prophet for Our Tech-Saturated Times

Read his works to understand how we’ve been caught in technology’s nightmarish hold.

By Andrew Nikiforuk

Source: The Tyee

By now you have probably read about the so-called “tech backlash.”

Facebook and other social media have undermined what’s left of the illusion of democracy, while smartphones damage young brains and erode the nature of discourse in the family.

Meanwhile computers and other gadgets have diminished our attention spans along with our ever-failing connection to reality.

The Foundation for Responsible Robotics recently created a small stir by asking if “sexual intimacy with robots could lead to greater social isolation.”

What could possibly go wrong?

The average teenager now works about two hours of every day — for free — providing Facebook and other social media companies with all the data they need to engineer young people’s behaviour for bigger Internet profits.

Without shame, technical wonks now talk of building artificial scientists to resolve climate change, poverty and, yes, even fake news.

The media backlash against Silicon Valley and its peevish moguls, however, typically ends with nothing more radical than an earnest call for regulation or a break-up of Internet monopolies such as Facebook and Google.

The problem, however, is much graver, and it is telling that most of the backlash stories invariably omit any mention of technology’s greatest critic, Jacques Ellul.

The ascent of technology

Ellul, the Karl Marx of the 20th century, predicted the chaotic tyranny many of us now pretend is the good and determined life in technological society.

He wrote of technique, by which he meant more than just technology, machines and digital gadgets: rather, “the totality of methods rationally arrived at and having absolute efficiency” in the economic, social and political affairs of civilization.

For Ellul, technique, an ensemble of machine-based means, included administrative systems, medical tools, propaganda (just another communication technique) and genetic engineering.

The list is endless because technique, or what most of us would just call technology, has become the artificial blood of modern civilization.

“Technique has taken substance,” wrote Ellul, and “it has become a reality in itself. It is no longer merely a means and an intermediary. It is an object in itself, an independent reality with which we must reckon.”

Just as Marx deftly outlined how capitalism threw up new social classes, political institutions and economic powers in the 19th century, Ellul charted the ascent of technology and its impact on politics, society and economics in the 20th.

My copy of Ellul’s The Technological Society has yellowed with age, but it remains one of the most important books I own. Why?

Because it explains the nightmarish hold technology has on every aspect of life, and also remains a guide to the perplexing determinism that technology imposes on life.

Until the 18th century, technical progress occurred slowly and with restraint. But with the Industrial Revolution it morphed into something overwhelming, due in part to population growth, cheap energy sources and capitalism itself.

Since then it has engulfed Western civilization and become the globe’s greatest colonizing force.

“Technique encompasses the totality of present-day society,” wrote Ellul. “Man is caught like a fly in a bottle. His attempts at culture, freedom, and creative endeavour have become mere entries in technique’s filing cabinet.”

Ellul, a brilliant historian, wrote like a physician caught in the middle of a plague or a physicist exposed to radioactivity. He parsed the dynamics of technology with a cold lucidity.

Yet you’ve probably never heard of the French legal scholar and sociologist despite all the recent media about the corrosive influence of Silicon Valley.

His relative obscurity has many roots. He didn’t hail from Paris, but from rural Bordeaux. He didn’t come from French blue blood; he was a “métèque.”

He didn’t travel much, criticized politics of every stripe and was a radical Christian.

But in 1954, just a year before American scientists started working on artificial intelligence, Ellul wrote his monumental book, The Technological Society.

The dense and discursive work lays out in 500 pages how technique became for civilization what British colonialism was for parts of 19th-century Africa: a force of total domination.

In the book Ellul explains in bold and uncompromising terms how the logic of technological innovation conquered every aspect of human culture.

Ellul didn’t regard technology as inherently evil; he just recognized that it was a self-augmenting force that engineered the world on its terms.

Machines, whether mechanical or digital, aren’t interested in truth, beauty or justice. Their goal is to make the world a more efficient place for more machines.

Their proliferation, combined with our growing dependence on their services, has inevitably led to an erosion of human freedom and to unintended consequences in every sphere of life.

Ellul was one of the first to note that you couldn’t distinguish between bad and good effects of technology. There were just effects and all technologies were disruptive.

In other words, it doesn’t matter if a drone is delivering a bomb or book or merely spying on the neighbourhood, because technique operates outside of human morality: “Technique tolerates no judgment from without and accepts no limitations.”

Facebook’s mantra “move fast and break things” epitomizes the technological mindset.

But some former Facebook executives such as Chamath Palihapitiya belatedly realized they have engineered a force beyond their control. (“The short-term dopamine-driven feedback loops that we have created are destroying how society works,” Palihapitiya has said.)

That, argued Ellul, is what technology does. It disrupts and then disrupts again with unforeseen consequences, requiring still more techniques to solve the problems created by the latest innovations.

As Ellul noted back in 1954, “History shows that every technical application from its beginnings presents certain unforeseeable secondary effects which are more disastrous than the lack of the technique would have been.”

Ellul also defined the key characteristics of technology.

For starters, the world of technique imposes a rational and mechanical order on all things. It embraces artificiality and seeks to replace all natural systems with engineered ones.

In a technological society a dam performs better than a running river, a car takes the place of pedestrians — and may even kill them — and a fish farm offers more “efficiencies” than a natural wild salmon migration.

There is more. Technique automatically reduces actions to the “one best way.” Technical progress is also self-augmenting: it is irreversible and builds with a geometric progression.

(Just count the number of gadgets telling you what to do or where to go or even what music to play.)

Technology is indivisible and universal because everywhere it goes it shows the same deterministic face with the same consequences. And it is autonomous.

By autonomous, Ellul meant that technology had become a determining force that “elicits and conditions social, political and economic change.”

The role of propaganda

The French critic was the first to note that technologies build upon each other and therefore centralize power and control.

New techniques for teaching, selling things or organizing political parties also required propaganda.

Here again Ellul saw the future.

He argued that propaganda had to become as natural as breathing air in a technological society, because it was essential that people adapt to its disruptions.

“The passions it provokes — which exist in everybody — are amplified. The suppression of the critical faculty — man’s growing incapacity to distinguish truth from falsehood, the individual from the collectivity, action from talk, reality from statistics, and so on — is one of the most evident results of the technical power of propaganda.”

Faking the news may have been a common practice on Soviet radio during Ellul’s day, but it is now a global phenomenon leading us towards what Ellul called “a sham universe.”

We now know that algorithms control every aspect of digital life and have subjected almost every aspect of human behaviour to greater control by techniques, whether employed by the state or the marketplace.

But in 1954 Ellul saw the beast emerging in infant form.

Technology, he wrote, can’t put up with human values and “must necessarily don mathematical vestments. Everything in human life that does not lend itself to mathematical treatment must be excluded… Who is too blind to see that a profound mutation is being advocated here?”

He, too, warned about the promise of leisure provided by the mechanization and automatization of work.

“Instead of being a vacuum representing a break with society,” our leisure time will be “literally stuffed with technical mechanisms of compensation and integration.”

Good citizens today leave their screens at work only to be guided by robots in their cars that tell them the most efficient route to drive home.

At home another battery of screens awaits to deliver entertainments and distractions, including apps that might deliver a pizza to the door.

Stalin and Mao would be impressed — or perhaps disappointed — that so much social control could be exercised with such sophistication and so little bloodletting.

Ellul wasn’t just worried about the impact of a single gadget such as the television or the phone but “the phenomenon of technical convergence.”

He feared the impact of systems or complexes of techniques on human society and warned the result could only be “an operational totalitarianism.”

“Convergence,” he wrote, “is a completely spontaneous phenomenon, representing a normal stage in the evolution of technique.”

Social media, a web of behavioural and psychological systems, is just the latest example of convergence.

Here psychological techniques, surveillance techniques and propaganda have all merged to give the Russians and many other groups a golden opportunity to intervene in the political lives of 126 million North Americans.

Social media has achieved something novel, according to former Facebook engineer Sam Lessin.

For the first time ever a political candidate or party can “effectively talk to each individual voter privately in their own home and tell them exactly what they want to hear… in a way that can’t be tracked or audited.”

In China the authorities have gone one step further. Using the Internet the government can now track the movements of every citizen and rank their political trustworthiness based on their history of purchases and associations. It is, of course, a fantastic “counterterrorism” tool.

The Silicon Valley moguls and the digerati promised something less totalitarian. They swore that social media would help citizens fight bad governments and would connect all of us.

Facebook, vowed the pathologically adolescent Mark Zuckerberg, would help the Internet become “a force for peace in the world.”

But technology obeys its own rules and prefers “the psychology of tyranny.”

The digerati also promised that digital technologies would usher in a new era of decentralization and undo what mechanical technologies have already done: centralize everything into big companies, big boxes and big government.

Technology assuredly fragments human communities, but in the world of technique centralization remains the norm.

“The idea of effecting decentralization while maintaining technical progress is purely utopian,” wrote Ellul.

Towards ‘hypernormalization’

It is worth noting that the word “normal” didn’t come into currency until the 1940s along with technological society.

In many respects global society resembles the Soviet Union just prior to its collapse when “hypernormalization” ruled the day.

A recent documentary defined what hypernormalization did for Russia: it “became a society where everyone knew that what their leaders said was not real, because they could see with their own eyes that the economy was falling apart. But everybody had to play along and pretend that it was real because no one could imagine any alternative.”

In many respects technology has hypernormalized a technological society in which citizens exercise less and less control over their lives every day and can’t imagine anything different.

Throughout his life Ellul maintained that he was “neither by nature, nor doctrinally, a pessimist, nor have I pessimistic prejudices. I am concerned only with knowing whether things are so or not.”

He called a spade a spade, and did not sugarcoat his observations.

If you are growing more anxious about our hypernormalized existence and are wondering why you own a phone that tracks your every movement, then read The Technological Society.

Ellul believed that the first act of freedom a citizen can exercise is to recognize the necessity of understanding technique and its colonizing powers.

Resistance, which is never futile, can only begin by becoming aware of and bearing witness to the totalitarian nature of technological society.

Ellul believed that Christians had a special duty to condemn the worship of technology, which has become society’s new religion.

To Ellul, resistance meant teaching people how to be conscious amphibians, keeping one foot in traditional human societies while purposefully choosing which technologies to bring into their communities.

Only citizens who remain connected to traditional human societies can see, hear and understand the disquiet of the smartphone blitzkrieg or the Internet circus.

Children raised by screens and vaccinated only by technology will not have the capacity to resist, let alone understand, this world any more than someone born in space could appreciate what it means to walk in a forest.

Ellul warned that if each of us abdicates our human responsibilities and leads a trivial existence in a technological society, then we will betray freedom.

And what is freedom but the ability to overcome and transcend the dictates of necessity?

In 1954, Ellul appealed to all sleepers to awake.

Read him. He remains the most revolutionary, prophetic and dangerous voice of this or any century.

Retconning History

By CH

Source: The Hipcrime Vocab

“He who controls the past, controls the future; and he who controls the present, controls the past.” –George Orwell

“The mistake of judging the men of other periods by the morality of our own day has its parallel in the mistake of supposing that every wheel and bolt in the modern social machine had its counterpart in more rudimentary societies…” –H.S. Maine

“The past is a foreign country; they do things differently there.” –L.P. Hartley

I’ve often referred to the “Flintstonization of history”—a concept I borrowed from the book Sex at Dawn. It’s the tendency to project our present-day circumstances onto the past, assuming that people basically thought and acted much as we do. But when we do that, we bring our “modern” sensibilities and worldview along with us. And those have been decisively shaped by the time and culture in which we live.

Today I’d like to introduce a related concept–the retconning of history.

Looking back, that’s been the theme of a lot of my writing over the past year. I’ve looked at a lot of history which challenges and overturns the conventional narrative that our present-day circumstances and social organization are basically the same as past societies, except with better technology and a few more creature comforts (i.e. the past, but with cell phones). Or that they are the way things have always been, and that there are no alternatives.

Now, most of you probably know what retconning is. It is short for the phrase “retroactive continuity”. In order to make a narrative coherent, the authors “rewrite” (or simply ignore) what has occurred in previous episodes or iterations of a long-running franchise in order to maintain continuity with the ongoing “new” narrative arc and characters. The phrase originated with comic books, and is typically used in reference to films, television shows, books, video games, etc.

From there, the word has passed into common parlance. Normally, retcon is still used in the context of a work of fiction. However, I’ve seen the word spread beyond just talking about movies and TV shows to the world in general. When people say retcon now, they are usually referring to an attempt to “rewrite” past events by deliberately distorting them or altering the record after the fact. That is, “[people] tell themselves a different story about what happened in prior events in order to maintain consistency with their current circumstances.” That story may include a blatant distortion of facts and a general disregard for reality. Much of this is derived from our current political situation. A politician may suddenly reverse their position, and then declare that what came before didn’t happen (“fake news”), or simply ignore it altogether if it doesn’t fit with the narrative “spin” of the political parties.

At its heart, it is an attempt to “erase” or “rewrite” the past for the sake of present circumstances. As one of its earliest descriptions had it: “retroactive continuity ultimately means that history flows fundamentally from the future into the past.”

What’s any of this got to do with history? It strikes me that much of what we learn about history are attempts to “retcon” the past.

What do I mean by this? It seems that history often adopts a “modern” point of view to explain past events. In this narrative, we were always heading to exactly where we are: globalized free-market corporate monopoly capitalism. This is done to depict our present circumstances not as deliberately engineered, contingent on historical circumstances, or the result of political choices, but rather as something “natural,” just an expression of unchanging human nature. With this retconning, we are unable to think of different ways of organizing things, because those ways—even in the very recent past—have been retconned out of history. Even things in recent living memory—such as not going into debt for an education, or being able to afford a single-family house on 25 percent of your income—are retconned to make it so that they never happened.

Here are just a few of the major retcons I have discovered over the past year or so:

1. Economists tend to depict all of human history as heading towards “free and open” markets, if only government would just “get out of the way” and drop all restrictions and regulations on merchant princes and wealthy oligarchs. That is, globalized corporate free trade is “natural” (as is currency), and collective governance is “artificial” and unnecessary. Our “natural instinct” is to “truck, barter and exchange,” declared Adam Smith. John Locke argued that the reason governments came to exist was to protect and secure private property, and that they should do little else besides this.

Of course, all of this is false. For example, economists Santhi Hejeebu and Deirdre McCloskey (of “bourgeois virtues” fame) attempted just such a retconning of history in trying to refute Karl Polanyi’s book The Great Transformation. As political economist Mark Blyth countered, citing the works of Polanyi and Albert Hirschman:

“While gain-seeking has indeed existed throughout history…the historical oddity was that gain-seeking became equated with market transactions only relatively recently. This was a qualitative and not a quantitative change; otherwise Incas, Mayans, Romans, and contemporary Britons were/are all living in societies that were more or less similar in their economic structure, despite the differences in, for example, the presence of slaves.”

“Painting the history of all hitherto existing societies as the history of capitalism in vitro probably obscures more economic history than it illuminates…capitalism did not simply evolve, it was argued for. It was propagandized by Scottish enlightenment intellectuals, English liberals, and French physiocrats long “before its triumph”. And it was as much a project of governance; limiting the state; constructing the commodified individual; building a singular notion of economically based self-interest, as much as it was one of creating wealth…”

“Capitalism was created, it did not just ‘happen’, and labeling all hitherto existing societies as ‘almost capitalism’ hardly erases the distinctions between historical periods and economic systems. The fact that ‘we’ today accept Smith far more readily than ‘we’ accept Polanyi speaks directly to the power of ideas rather than the discovery of facts…”

The great transformation in understanding Polanyi: Reply to Hejeebu and McCloskey (Critical Review)

As Polanyi himself summed it up: “Laissez-faire was planned, planning was not”. From The Great Transformation:

Indeed, on the evidence available it would be rash to assert that local markets ever developed from individual acts of barter.

Obscure as the beginnings of local markets are, this much can be asserted: that from the start this institution was surrounded by a number of safeguards designed to protect the prevailing economic organization of society from interference on the part of market practices. The peace of the market was secured at the price of rituals and ceremonies which restricted its scope while ensuring its ability to function within the given narrow limits. The most significant result of markets—the birth of towns and urban civilization—was, in effect, the outcome of a paradoxical development. Towns, insofar as they sprang from markets, were not only the protectors of those markets, but also the means of preventing them from expanding into the countryside and thus encroaching on the prevailing economic organization of society…

Such a permanent severance of local trade and long-distance trade within the organization of the town must come as another shock to the evolutionist, with whom things always seem so easily to grow into one another. And yet this peculiar fact forms the key to the social history of urban life in Western Europe…Internal trade in Western Europe was actually created by the intervention of the state.

Right up to the time of the Commercial Revolution what may appear to us as national trade was not national, but municipal…The trade map of Europe in this period should rightly show only towns, and leave blank the countryside—it might as well have not existed as far as organized trade was concerned. So-called nations were merely political units, and very loose ones at that, consisting economically of innumerable smaller and bigger self sufficing households and insignificant local markets in the villages. Trade was limited to organized townships which carried it on either locally, as neighborhood trade, or as long-distance trade—the two were strictly separated, and neither was allowed to infiltrate into the countryside indiscriminately…neither long-distance trade nor local trade was the parent of the internal trade of modern times—thus apparently leaving no alternative but to turn for an explanation to the deus ex machina of state intervention…

This retconning has been particularly egregious in the case of the debunked “Austrian school” of economics, which was expressly created to overturn history and rewrite it for the benefit of capitalists and the wealthy. Michael Hudson, an economist who probably knows more about ancient economic organization than anyone since Polanyi, writes:

…Karl Polanyi[‘s] doctrine was designed to rescue economics from [the Austrian] school, which makes up a fake history of how economics and civilization originated.

One of the first Austrian’s [sic] was Carl Menger in the 1870s. His “individualistic” theory about the origins of money – without any role played by temples, palaces or other public institutions – still governs Austrian economics. Just as Margaret Thatcher said, “There’s no such thing as society,” the Austrians developed a picture of the economy without any positive role for government. It was as if money were created by producers and merchants bartering their output. This is a travesty of history.

All ancient money was issued by temples or public mints so as to guarantee standards of purity and weight. You can read Biblical and Babylonian denunciations of merchants using false weights and measures to see why money had to be public. The major trading areas were agora spaces in front of temples, which kept the official weights and measures. And much exchange was between the community’s families and the public institutions.

Most important, money was brought into being not for trade (which was conducted mainly on credit), but for paying debts. And most debts were owed to the temples and palaces for public services or tribute. But to the Austrians, the idea was that anything the government does to protect labor, consumers and society from rentiers and grabbers is deadweight overhead.

Above all, they opposed governments creating their own money, e.g. as the United States did with its greenbacks in the Civil War. They wanted to privatize money creation in the hands of commercial banks, so that they could receive interest on their privilege of credit creation and also to determine the allocation of resources.

Rewriting Economic Thought (Michael Hudson)

So we see that in this case there is a very specific political agenda behind the retconning of history. It is pushed in economics textbooks and expressly designed to promote a libertarian point of view. Much of the retconning of history serves a political agenda that benefits a select group of people.

Trying to analyze all premodern economies as though they were just proto-capitalist leads to all sorts of errors, as Branko Milanovic points out in a recent post:

“The equilibrium (normal) price in a feudal economy, or in a guild system where capital is not allowed to move between the branches will be different from equilibrium prices in a capitalist economy with the free movement of capital. To many economists this is still not obvious. They use today’s capitalist categories for the Roman Empire where wage labor was (to quote Moses Finley) ‘spasmodic, casual and marginal’.”

Marx for me (and hopefully for others too) (globalinequality)

2. The individual has always been the basic unit of social organization. People have always thought of themselves primarily as citizens of territorial nation-states (British, German, French, Canadian, etc.) with well-defined borders. The neolocal monogamous nuclear family is the only natural and logical form of human social organization.

None of these statements are true, of course. Such arrangements are very contingent upon time and place and culture, and often very recent. For most of human history, the nation-state did not exist. There is nothing “natural” about it–it was created from above by oligarchic elites, just like the One Big Market. They are artificial creations.

And while families are, indeed, “natural,” the form they take varies widely. Most families were extended, and consisted of many generations living either on the same land or under the same roof, together with agnatic relations. Who was or was not considered a part of the family had to do with kinship structures, typically encoded into the language and culture.

Extended kinship networks were the primordial form of human social organization (as Lewis Henry Morgan discovered). Religion, too, played a significant role, especially ancestor worship, collective rituals, and food-sharing meals and feasts (even bonobos do it).

This was the conclusion reached by Henry Sumner Maine by studying ancient legal structures and comparing them to surviving village communities in India, Java, North America, and elsewhere. He writes, “We have the strongest reason for thinking that property once belonged not to individuals nor even to isolated families, but to larger societies composed on the patriarchal model.” Concerning private property, he concludes,

“…[P]rivate property, in the shape in which we know it, was chiefly formed by the gradual disentanglement of the separate rights of individuals from the blended rights of a community. Our studies…seemed to show us the Family expanding into the Agnatic group of kinsmen, then the Agnatic group dissolving into separate households; lastly the household supplanted by the individual; and it is now suggested that each step in the change corresponds to an analogous alteration in the nature of Ownership.”

“…if it be true that far the most important passage in the history of Private Property is its gradual elimination from the co-ownership of kinsmen, then the great point of inquiry…what were the motives which originally prompted men to hold together in the family union? To such a question, Jurisprudence, unassisted by other sciences, is not competent to give a reply. The fact can only be noted.” (p. 159)

This is why Marxists argued that “primitive communism” was the original form of property ownership, i.e. socialism. Historically, this is correct. The problem is that it was predicated upon extended kinship networks, not large industrial nation-states composed of strangers. That is, primitive communism does not scale, which is why market economies came to supplant it over time.

Regarding the “lone individual” posited by Classical Liberals as the primordial atomic unit of society, this, too, is ahistorical. Like the primitive barter economy, it is something anthropology has failed to turn up anywhere it has looked:

It is here that archaic law renders us one of the greatest of its services, and fills up a gap which otherwise could have only been bridged by conjecture. It is full, in all its provinces, of the clearest indications that society in primitive times was not what it is assumed to be at present, a collection of *individuals*. In fact, and in the view of the men who composed it, it was an *aggregation of families*. The contrast may be most forcibly expressed by saying that the *unit* of an ancient society was the Family, or a modern society the individual. We must be prepared to find in ancient law all the consequences of this difference.

[Archaic Law] is so framed as to be adjusted to a system of small independent corporations. It is therefore scanty, because it is supplemented by the despotic commands of the heads of households. It is ceremonious, because the transactions to which it pays regard resemble international concerns much more than the quick play of intercourse between individuals.

Above all…it takes a view of *life* wholly unlike any which appears in developed jurisprudence. Corporations never die, and accordingly primitive law considers the entities with which it deals, i.e. the patriarchal or family groups, as perpetual and inextinguishable…

Ancient Law, pp. 134-135

Surveying continental Europe and much of the colonial world, the Belgian scholar Émile de Laveleye came to the same conclusion:

Originally the clan, or village, is the collective body owning the soil; later on, it is the family, which has all the characteristics of a perpetual corporation. The father of the family is merely the administrator of the patrimony: when he dies, he is replaced by another administrator. There is no place for the testament, nor even for individual succession…Such was also the law everywhere where these communities have existed; and, probably, every nation has passed through the system.

The point of all this, of course, is not to advocate a rewind to the past. Rather, it is to show us that social forms change over time, and that what may be adaptive in one context (say, Fordism) will not work in another (say, an information economy). Laveleye points this out himself:

“…the object of this book is not to advocate a return to the primitive agrarian community; but to establish historically the natural right of property as proclaimed by philosophers, as well as to show that ownership has assumed very various forms, and is consequently susceptible of progressive reform.”

3. Everyone before the Industrial Revolution was miserable, sick, and hungry all the time, irrespective of time and place. Life was, as Hobbes argued, “nasty, brutish and short” throughout prehistory, right up until the last hundred years or so. We’ve doubled the human lifespan—a thirty-year-old man was considered “old” just a few generations ago.

I’ve written so much disproving this idea that it’s not worth reiterating here. But here is yet another item that shows us that life in the past was not as horrible as it is commonly depicted by the evangelists of the Progress Gospel:

Medieval peasant food was frigging delicious (BoingBoing)

This Reddit Ask Historians question: Was there ever a civilization that had proper nutrition prior to modern society? begs the question. Its very formulation assumes that everyone was malnourished—a product of such retconning. Here are some good answers:

According to my history professor at Dalhousie University, Cynthia Neville (one of the top scholars in early medieval Scottish history), the Scots in medieval times had an incredibly healthy diet compared to many other parts of Europe at the time.

Wheat doesn’t grow well so far north, but hardier grains like oats and barley do quite well, and provide much better staple foodstock, along with many native vegetable varieties. Also, because cows weren’t as viable (except for the wealthiest lowland nobles), they lived on sheep’s milk and goat milk, which are much easier on the human digestive system. Much of their proteins came from seafood, which, as we know today, are loaded with omega fatty acids and essential vitamins.

There was a bit more to it, but that’s about all I can recall off the top of my head from her classes. This is one of the reasons why the Scots had a reputation for being taller and stronger, because their diets and hardy lifestyles kept them fit and healthy.

And:

When the Romans invaded Gaul, they noticed the Gauls were more than a foot taller, on average, than the Romans. This was due to better nutrition. Many prehistoric peoples had great nutrition. They were defeated by “civilized” peoples who had the advantages of greater numbers and organization. The same was true of the Indians of Massachusetts, when the Pilgrims arrived.

Not all prehistoric peoples had good nutrition, and not all proliferate societies had bad nutrition. The Norse (Vikings) were dairy farmers and fishermen, and had excellent nutrition, like the Scots, in medieval times.

4. People need “jobs” in order to feel valuable, or else they will go crazy. That is, we need to find a willing buyer for our labor, or we will feel like a useless burden on society. Furthermore, working forty hours a week is something we’ve just always done since forever. We would all be bored otherwise.

Of course, “jobs” are a very recent invention. Most people in the past did not have formalized “jobs”—wage labor was actually seen as a kind of slavery for much of ancient history. Yet today we’re told that jobs are an absolute necessity for feeling “meaningful” and for having any kind of social outlet.

Moreover, even when wages were paid, it was for a specific task and a specific duration (say, bringing in the harvest), not for selling precisely 40 hours a week of your time to the highest bidder. Modern jobs are more of a babysitting operation than anything else. Of course people in earlier times had occupations and professions—farmers, craftsmen, warriors, artisans, clerks, priests, and so on. One of the biggest challenges capitalism faced was overcoming the previous work/leisure patterns and “disciplining” workers. Ryan Cooper sums up just how novel these “eternal” notions are:

The idea that work is a bedrock of society, that absolutely everyone who is not too old, too young, or disabled must have a job, was not handed down on tablets from Mount Sinai. It is the result of a historical development, one which may not continue forever. On the contrary, based on current trends, it is already breaking down.

The history of nearly universal labor participation is only about a century and a half old. Back in the early days of capitalism, demand for labor was so strong that all the ancient arrangements of society and family were shredded to accommodate it. Marx’s Capital famously described how women and very young children were press-ganged into the textile mills and coal mines, how the nighttime was colonized for additional shifts, and how capitalists fought to extend the working day to the very limits of human endurance (and often beyond).

The resulting misery, abuse, and wretchedness were so staggering, and the resulting class conflicts so intense, that various hard-won reforms were instituted: the eight-hour day, the weekend, the abolition of child labor, and so forth.

But this process of drawing more people into the labor force peaked in the late 1990s, when women finally finished joining the labor force (after having been forced out to make room for returning veterans after World War II). The valorization of work as the source of all that is good in life is to a great degree the result of the need to legitimate capital’s voracious demand for labor.

America is running out of jobs. It’s time for a universal basic income (The Week)

And here’s investigative journalist Yasha Levine recounting a part of capitalism’s history that has been retconned out of existence, citing the underappreciated work of economist Michael Perelman:

One thing that the historical record makes obviously clear is that Adam Smith and his laissez-faire buddies were a bunch of closet-case statists, who needed brutal government policies to whip the English peasantry into a good capitalistic workforce willing to accept wage slavery.

Francis Hutcheson, from whom Adam Smith learned all about the virtue of natural liberty, wrote: “it is the one great design of civil laws to strengthen by political sanctions the several laws of nature. … The populace needs to be taught, and engaged by laws, into the best methods of managing their own affairs and exercising mechanic art.”

Yep, despite what you might have learned, the transition to a capitalistic society did not happen naturally or smoothly. See, English peasants didn’t want to give up their rural communal lifestyle, leave their land and go work for below-subsistence wages in shitty, dangerous factories being set up by a new, rich class of landowning capitalists. And for good reason, too. Using Adam Smith’s own estimates of factory wages being paid at the time in Scotland, a factory-peasant would have to toil for more than three days to buy a pair of commercially produced shoes. Or they could make their own traditional brogues using their own leather in a matter of hours, and spend the rest of the time getting wasted on ale. It’s really not much of a choice, is it?

But in order for capitalism to work, capitalists needed a pool of cheap, surplus labor. So what to do? Call in the National Guard!

Faced with a peasantry that didn’t feel like playing the role of slave, philosophers, economists, politicians, moralists and leading business figures began advocating for government action. Over time, they enacted a series of laws and measures designed to push peasants out of the old and into the new by destroying their traditional means of self-support.

“The brutal acts associated with the process of stripping the majority of the people of the means of producing for themselves might seem far removed from the laissez-faire reputation of classical political economy,” writes Perelman. “In reality, the dispossession of the majority of small-scale producers and the construction of laissez-faire are closely connected, so much so that Marx, or at least his translators, labeled this expropriation of the masses as “primitive accumulation.”

Yasha Levine: Recovered Economic History – “Everyone But an Idiot Knows That The Lower Classes Must Be Kept Poor, or They Will Never Be Industrious” (Naked Capitalism)

Indeed, average non-agricultural workers had much more autonomy and leisure time in the past, according to Perelman:

A medieval peasant had plenty of things to worry about, but the year-round control of daily life was not one of them. Perelman points out that in pre-capitalist societies, people toiled relatively few hours over the course of a year compared to what Americans work now. They labored like dogs during the harvest, but there was ample free time during the off-seasons. Holidays were abundant – as many as 200 per year. It was Karl Marx, in his Theory of Alienation, who saw that modern industrial production under capitalist conditions would rob workers of control of their lives as they lost control of their work. Unlike the blacksmith or the shoemaker who owned his shop, decided on his own working conditions, shaped his product, and had a say in how his goods were bartered or sold, the modern worker would have little autonomy. His relationships with the people at work would become impersonal and hollow.

Clearly, the technological wonders of our capitalist system have not released human beings from the burden of work. They have brought us more work. They have not brought most of us more freedom, but less.

Fifty Shades of Capitalism: Pain and Bondage in the American Workplace (Naked Capitalism)

Yet now we’re told that we need “jobs” to have any sort of meaning? Really?? WTF??? The vast majority of human existence has occurred outside of formalized wage work, as anthropologist James Suzman points out. Yet society will fall apart if we don’t submit ourselves to worker ‘discipline’ and scientific management? I don’t buy it. Whom does this narrative benefit, anyway?

See also this post from Reddit: What did an average day look like in medieval Europe? And this: Myths about the Medieval Times? Lots of good debunking in that last one.

In addition, laborers who recalled the previous autonomous lifeways–as late as the eighteenth century–were much more resistant to the constraints and insults of corporate capitalism. Now that the past has been retconned, we no longer even remember those past ways of being. Why is there no longer any resistance to the crushing of workers? Why do we not resist, but instead celebrate, the fortunes of today’s robber barons, unlike our forefathers? American resistance to our ruling elites has vanished. A lot of it has to do with the retconning of history, as this review of Steve Fraser’s excellent book The Age of Acquiescence makes clear:

The fight against slavery had loosened the tongues of capitalism’s critics, forging a radical critique of the market’s capacity for barbarism. With bonded labor now illegal, the target pivoted to factory “wage slavery.” This comparison sounds strange to contemporary ears, but as Fraser reminds us, for European peasants and artisans, as well as American homesteaders, the idea of selling one’s labor for money was profoundly alien.

This is key to Fraser’s thesis. What fueled the resistance to the first Gilded Age, he argues, was the fact that many Americans had a recent memory of a different kind of economic system, whether in America or back in Europe. Many at the forefront of the resistance were actively fighting to protect a way of life, whether it was the family farm that was being lost to predatory creditors or small-scale artisanal businesses being wiped out by industrial capitalism. Having known something different from their grim present, they were capable of imagining — and fighting for — a radically better future.

It is this imaginative capacity that is missing from our second Gilded Age, a theme to which Fraser returns again and again in the latter half of the book. The latest inequality chasm has opened up at a time when there is no popular memory — in the United States, at least — of another kind of economic system. Whereas the activists and agitators of the first Gilded Age straddled two worlds, we find ourselves fully within capitalism’s matrix. So while we can demand slight improvements to our current conditions, we have a great deal of trouble believing in something else entirely.

A similar point is made in this review of the book in the London Review of Books:

Resistance to capitalism, it appeared, could look back as well as forwards; it was rooted not only in utopian visions of the future but also in concrete experience of the present and past, in older ways of being in the world, depending on family, craft, community, faith – all of which were threatened with dissolution (as Marx and Engels said) in ‘the icy waters of egotistical calculation’. Radical critiques of capitalism might well arise from conservative commitment to pre-capitalist ways of life, or memories of that life.

This wasn’t only an American pattern. E.P. Thompson, in The Making of the English Working Class (1963), rescued the Luddites and other artisans from ‘the enormous condescension of posterity’ by showing that their apparently reactionary attachments to custom and tradition created the leading edge of working-class consciousness. Soon American historians were making similar discoveries.

The Thompsonian history of the working class revealed a common pattern on both sides of the Atlantic: as workers became less grounded in traditional ways, their critique of capitalism tended to soften.

The Long Con (The London Review of Books)

5. New technology and innovation increase leisure time. The Industrial Revolution was accomplished purely by technological advances, with no dislocation or bloodshed, and it made everyone better off with no government intervention whatsoever.

If there’s one consistent trend in technology, it’s this – new technology increases the amount of work! Greater leisure has only ever been delivered by worker insurrection and deliberate organization, not by the “invisible hand” of the Market. Furthermore, entire generations were sacrificed and written out of the historical narrative to make the Industrial Revolution seem like a harmless win-win. As this commenter on Slashdot writes:

“Luddites weren’t just angry conservatives (literal, not political) trying to maintain some mythical “way of life”; it was a movement started due to massive unemployment brought on by innovation in the textile industry. It became a generic insult because we’re so far removed from their (very real) suffering.”

“There was [sic] close to 80 years of unemployment following the industrial revolution that is seldom talked about (if you took history in high school or college you got maybe a paragraph at best). This is because textbook historians like to keep an upbeat tone and because school boards are often staffed by economically conservative [people] (political now) who don’t want anyone speaking ill of capitalism. Go find a book called “A People’s History of the United States” if you want a sense for how screwed up American history actually is.”

https://hardware.slashdot.org/story/19/01/04/180226/robots-are-taking-some-jobs-but-not-all-world-bank

Or, just read this post: The US Government Has Always Been a Tool of Greedy Corporations (Vice)

6. Ancient people were uniformly ruled over by evil despots (i.e. ‘Oriental Despotism’). The “West” was all about freedom, justice, and democracy compared to the yoke of despotism the rest of the world lived under in primitive places such as Asia, Africa and the Americas.

As we’ve seen, Classical civilization–from the ancient Greeks to the Romans–was the most slave-driven economy in history to that point (only to be surpassed in the ‘Western’ colonial Americas). While that slavery decayed due to the dissolution of the Roman Empire, subsequent serfdom could hardly be considered freedom. By contrast, not all “primitive” societies were anywhere near as despotic as Western Europe and Imperial China were. That was a retconning of history to depict Western European civilization as “enlightened” in opposition to the ignorant “heathens.” For example, here is an excerpt from the book The Story of Manual Labor:

At no time in the history of ancient Mexico do we find that heartless oppression of the poor by the rich, that lack of humanity toward the wage-worker, that blackens the annals of so many European peoples. Luxury existed in the court of the Montezumas, it is true, but to support that luxury the poorer classes were not plunged into poverty and degradation. They were a simple people, and their needs were small and easily satisfied. Living in a tropical climate, upon a soil that repaid a thousandfold the slightest effort of the farmer; surrounded by forests full of game and rivers teeming with edible fish, the Mexican lived a life of comfort that to the Saxon churl or French bourgeoise of the same day would have seemed idyllic.

The Story of Manual Labor (Archive.org)

There are countless other examples, from long car commutes, to 20+ years of formalized schooling and expensive post-graduate degrees required for a job (or any formalized education at all), but I think you get the point.

As Chris Hedges poignantly writes in his latest book, America: The Farewell Tour:

If we do not know our history and our culture, if we accept the history and culture manufactured for us by the elites, we will never free ourselves from the forces of oppression. The recovery of memory and culture in the 1960s by radical movements terrified the elites. It gave people an understanding of their own power and agency. It articulated and celebrated the struggles of working men and women and the oppressed rather than the mythical beneficence of the powerful. It exposed the exploitation and mendacity of the ruling class. And that is why corporatists spent billions to crush and marginalize these movements and their histories in schools, culture, the press, and in our systems of entertainment.

“Not only does the people have no precise consciousness of its own historical identity,” Gramsci lamented under fascism, “it is not even conscious of the historical identity or the exact limits of its adversary.”

If we do not know our history we have no point of comparison. We cannot name the forces that control us or see the long continuity of capitalist oppression and resistance… (p. 17)

Anyway, here’s to a happy (or at least, tolerable) 2019, and I hope you all stick around and continue reading and commenting. Thanks!

And So This Is Christmas

By David Swanson

Source: Let’s Try Democracy

Christmas Day. Very late on this day and into the morning of the 26th in 1776, George Washington led a surprise night crossing of the Delaware River and bloody pre-dawn attack on unarmed hung-over-from-Christmas troops still in their underwear — a founding act of violence for the new nation to proudly remember as the progenitor of either the crimes of its “special” forces all over the globe or of peace on earth, I can never recall which.

A more useful memory is certainly that of the 1914 Christmas truce, which was actually more than one truce that year and in the subsequent years of the Great War. This is a true story of people not just managing to speak to each other but actually becoming friends, and not with mere opponents in a disagreement but with people who a moment before, and for a long time running, had been trying to murder them. It’s a story of war enemies figuring out that the actual enemy is not any people but war itself. And they did it on Christmas. Maybe we can do something good on Christmas too.

Better still is the memory of Jesus, whether fictional character or real man, stripped of all the magical powers, but understood as an innovative and courageous nonviolent activist, or — if you prefer — engager in active nonviolence. In the account Terrence Rynne gives in Choosing Peace: The Catholic Church Returns to Gospel Nonviolence, edited by Marie Dennis, Jesus lived in a violent time and place, the people around him full of rage at the Roman occupiers and their proxy tyrants. (Maybe I should just say a violent time, as the place is still violent and full of rage at distant occupiers and their proxy tyrants.) Jesus predicted catastrophe if the violence was escalated, and he was not heeded, and he was proved right. But his unheeded advice has often been put to use and can still be put to use.

If an occupying soldier smacks you, you can calmly and courageously and lovingly look him in the eye and offer your other cheek. If he takes your coat, you can expose his cruelty by offering your shirt too. If he forces you to carry his pack for a mile, you can make him see your humanity by offering to carry it a second mile. Those of us in the heart of the empire can block distant operations and ask that only the person who has never done anything wrong fire the first missile. We can confront fictional humanitarian justifications with the principled clarity of Jesus who preferred to be killed rather than condone the use of violence against the Romans. We can reject the morality that places politeness above saving the world from climate chaos or nuclear war, with the righteous loving anger of a nonviolent activist overturning the tables of the oligarchs.

A still better memory, though we don’t quite have it and need to create it, is the memory of the cultures displaced by the war-mad Western culture that adopted Christianity and turned it into a powerful argument for both war and sloppy thinking through just war theory. I’m reading right now a book called Kayanerenkó:wa: The Great Law of Peace by Kayanesenh Paul Williams, about the law of peace created among the Mohawk, Oneida, Onondaga, Cayuga, and Seneca, who became collectively the Haudenosaunee. There is as much to be learned from cultures that did not invent imperialism as from those that invented both it and ways to resist it.

There is the possibility of something being done for peace. President Donald Trump claims he will remove all U.S. troops from Syria. There is very little correlation between what he says and what he does, but if he does this, we need to be prepared to thank him, to celebrate such an action, and to insist that bombing cease as well, and that actual humanitarian aid and unarmed peace workers replace the military force, and that weapons sales and gifts to the region be ended. Trump needs to be shocked by the support for this act from unexpected places.

The U.S. Congress is making a lot of noise about possibly for the very first time using the 1973 War Powers Resolution to end a war, the war on Yemen. We need to make sure this happens and then celebrate it, while insisting that the law be complied with, that all loopholes in it be closed, and that weapons sales and gifts to the region be ended. Similar talk is beginning about also ending the war on Afghanistan. Our responsibility there is the same.

If any of these wars can be ended, we need to rapidly build on that success to end more wars, and more wars, and the funding of the preparations for wars. The United Nations says that $11 billion per year could end the lack of clean drinking water globally. The United States is building a single ugly boat for $13 billion that has no defensive purpose but is likely to start wars. It’s time to change course.

Three years ago, the Pope said this to the U.S. Congress, which stood and cheered for it:

“Being at the service of dialogue and peace also means being truly determined to minimize and, in the long term, to end the many armed conflicts throughout our world. Here we have to ask ourselves: Why are deadly weapons being sold to those who plan to inflict untold suffering on individuals and society? Sadly, the answer, as we all know, is simply for money: money that is drenched in blood, often innocent blood. In the face of this shameful and culpable silence, it is our duty to confront the problem and to stop the arms trade.”

If anyone else had said that — I mean anyone else at all — he or she would have been denounced and mocked by members of Congress and by the corporate communications system in the United States. The Pope was cheered instead (and then ignored rather than heeded) not because he has established himself as a moral leader. It’s not that he hasn’t, so much as that we just don’t have those in U.S. corporate media; it isn’t done. The Pope was cheered and tolerated because he is understood to be speaking as a moral leader and is widely believed to be associated in some way with magical powers.

This makes the project that so many great Catholic peace activists are engaged in of trying to move the Catholic Church toward a complete rejection of war very valuable. It also makes it valuable for us all to use Christmas as an occasion to urge the consideration of morality in the question of war or peace, and to demand an end to weapons dealing and death dealing, base building and occupying, killing and maiming and destroying, and threatening fire and fury, in every corner of the world.