As a fan of Eckhart Tolle I’ve always liked his description of Consciousness (or ‘Being’ which seems his preference) as “No Thing.”
This separates “Being” from the world of form, and puts it into the area of what Tesla called “nonmaterial reality.”
I’ve generally thought of this reality as an Infinite Mind (again, as opposed to “God”) to strip out the anthropomorphic bias that seems to permeate organized religion. Political Christianity and some other groups seem to relish an angry and vengeful God to keep the parishioners paying. But when you step away from beliefs that are easily debunked, you are still left with a fact.
We seem to be thinking.
Of course, it was Descartes who famously equated thought with Being, which has led to all sorts of issues that Eckhart Tolle describes well in his work. When we identify with only our thoughts, we have narrowed our focus and reduced reality to labels.
But the reality of thought persists. What is it?
Is Thought Electricity in the Brain?
Neuroscientists seem to have identified the presence of thoughts in the brain through various instruments that can pick up electrical signals in regions of the brain and across the synapses between neurons.
But so far, I don’t believe they can “download” these signals and decode them.
When we observe our thoughts, we can see that they seem to be composed of “words.” In fact, I’ve had the experience of thinking in languages other than English (my native language is German), and of course the thoughts come as words – sometimes in cogent sentences, sometimes just a single word.
So, I was musing, what about ancient humans? Did they need to form a sentence in their brains to warn them that a lion might be in the bushes?
If you’ve ever experienced trauma, you know the answer – our limbic system activates, putting us in “fight or flight” well before any thought ever happens.
I would suggest that a primal, lower frequency of Mind operates in our limbic system, before thought and language.
So, when did we start thinking in “words”?
According to my AI friend,
“scholars believe it [language] originated at least 100,000 years ago during the Middle Stone Age. The development of language is linked to the increased complexity of human culture and cognition.”
Maybe a tribe of hunter-gatherers developed a sound for “lion” and it became a warning cry. Then perhaps “big” lion or “many lions”.
We know that our ancients memorialized beasts in petroglyphs of various kinds to communicate, but the next big breakthrough came when words, sentences and thus concepts could be preserved.
Writing Was the Big Game Changer
AI tells us that
“Writing systems were invented independently by different civilizations thousands of years ago as a means of recording information. The earliest writing emerged around 3,500-3,000 BCE in Mesopotamia and Egypt. Chinese writing developed around 1,200 BCE.”
So now I will do what they do on Ancient Aliens, which is take a speculative leap based on the foregoing.
It intrigues me that the cultures that seemed to “create” writing all have a version of the Prometheus myth – crediting the “Gods” with giving them the gift of higher knowledge.
To connect this to the beginning of writing seems to make sense, as we have precisely these myths in Mesopotamia (the Anunnaki) and Egypt.
And it seems clear that with the onset of the written word (and mathematical notations) great leaps in human progress came almost in quantum intervals. We got the printing press and eventually our modern technology.
We might speculate that it is likely that Mind has been with us forever, but that thought evolved and expanded dramatically with the beginning of writing – and that writing could easily be seen as a gift that transformed human civilization.
There May Have Been Consequences for Teaching Humanity
It is also very plausible that any entity that conveyed such a gift to humanity may well have angered other entities that wanted to keep humans in check.
Cuneiform tablets from the Sumerians describe how one “God”, Enki, created humans in the image of the Anunnaki and gave them knowledge – but most of those humans were wiped out by his rival Enlil in the great flood. We now have evidence in the geological record that such a flood happened about 12,000 years ago.
But just this little thought experiment can vastly expand our sense of our place in the cosmos, while providing a much-needed dose of humility.
What if we did not simply “evolve” with natural selection but received assistance in an area we are now beginning to understand – genetics? This would indicate a profound connection to the cosmos in a way that is disregarded by our current society.
It is also worth noting, as my AI explains,
“There is evidence that around 250,000-300,000 years ago there were some key genetic changes in early humans that contributed to increased brain size and advanced cognitive abilities compared to other primates.”
Where these came from or how they came about is still a mystery.
And now that it seems apparent that visitation by “entities” from the sky is likely not fiction but reality, it may help to broaden our understanding of Nature and how we got here.
My AI friend makes another statement which I think is exactly backwards:
“Some key developments that enabled writing include the evolution of symbolic thought, the invention of systems of counting, and the emergence of urban civilization needing record-keeping.”
Clearly, it was first language, and then writing and math, that led to this evolution of our brains – not the other way around. Our original brains would have needed to expand to accommodate our first language, which took us beyond the limbic system to labeling. Writing, in turn, led us to sharing ideas and thinking “symbolically” – using groups of letters as words, and then sentences, to convey increasingly complex concepts.
My own experience with neuroplasticity confirms that new uses for the brain expand its capacity, creating new pathways and neural networks. People who keep learning seem less susceptible to dementia.
Opening to the possibility that our evolution was “jump-started” by extraterrestrials changes the narrative from chance and natural selection to a more profound connection to the universe, in areas that our current science has mostly yet to penetrate – nonmaterial reality.
A Clue that Space Is Not Empty
But technology in particular seems to point us in the right direction. It was the offspring of the printing press – the computer – that eventually led us to a huge breakthrough in our awareness of the nonmaterial: the realization that seemingly empty space may be much, much more.
When we developed WiFi, suddenly the information encoded in words, thoughts and sentences could travel through space. So who knows what other information, or Mind stuff, has been around us all along?
Because Mind is everywhere and at the heart of Nature.
Most people think that George Orwell was writing about, and against, totalitarianism – especially when they encounter him through the prism of his great dystopian novel, Nineteen Eighty-Four.
This view of Orwell is not wrong, but it can miss something. For Orwell was concerned above all about the particular threat posed by totalitarianism to words and language. He was concerned about the threat it posed to our ability to think and speak freely and truthfully. About the threat it posed to our freedom.
He saw, clearly and vividly, that to lose control of words is to lose control of meaning. That is what frightened him about the totalitarianism of Nazi Germany and Stalinist Russia – these regimes wanted to control the very linguistic substance of thought itself.
And that is why Orwell continues to speak to us so powerfully today. Because words, language and meaning are under threat once more.
Totalitarianism in Orwell’s time
The totalitarian regimes of Nazi Germany and Stalin’s Soviet Union represented something new and frightening for Orwell. Authoritarian dictatorships, in which power was wielded unaccountably and arbitrarily, had existed before, of course. But what made the totalitarian regimes of the 20th century different was the extent to which they demanded every individual’s complete subservience to the state. They sought to abolish the very basis of individual freedom and autonomy. They wanted to use dictatorial powers to socially engineer the human soul itself, changing and shaping how people think and behave.
Totalitarian regimes set about breaking up clubs, trade unions and other voluntary associations. They were effectively dismantling those areas of social and political life in which people were able to freely and spontaneously associate. The spaces, that is, in which local and national culture develops free of the state and officialdom. These cultural spaces were always tremendously important to Orwell. As he put it in his 1941 essay, ‘England Your England’: ‘All the culture that is most truly native centres round things which even when they are communal are not official – the pub, the football match, the back garden, the fireside and the “nice cup of tea”.’
Totalitarianism may have reached its horrifying zenith in Nazi Germany and Stalin’s USSR. But Orwell was worried about its effect in the West, too. He was concerned about the Sovietisation of Europe through the increasingly prominent and powerful Stalinist Communist Parties. He was also worried about what he saw as Britain’s leftwing ‘Europeanised intelligentsia’, which, like the Communist Parties of Western Europe, seemed to worship state power, particularly in the supranational form of the USSR. And he was concerned above all about the emergence of the totalitarian mindset, and the attempt to re-engineer the deep structures of mind and feeling that lie at the heart of autonomy and liberty.
Orwell could see this mindset flourishing among Britain’s intellectual elite, from the eugenics and top-down socialism of Fabians, like Sidney and Beatrice Webb and HG Wells, to the broader technocratic impulses of the intelligentsia in general. They wanted to remake people ‘for their own good’, or for the benefit of the race or state power. They therefore saw it as desirable to force people to conform to certain prescribed behaviours and attitudes. This threatened the everyday freedom of people who wanted, as Orwell put it, ‘the liberty to have a home of your own, to do what you like in your spare time, to choose your own amusements instead of having them chosen for you from above’.
In the aftermath of the Second World War, this new intellectual elite started to gain ascendancy. It was effectively a clerisy – a cultural and ruling elite defined by its academic achievements. It had been forged through higher education and academia rather than through traditional forms of privilege and wealth, such as public schools.
Orwell was naturally predisposed against this emergent clerisy. He may have attended Eton, but that’s where Orwell’s education stopped. He was not part of the clerisy’s world. He was not an academic writer, nor did he position himself as such. On the contrary, he saw himself as a popular writer, addressing a broad, non-university-educated audience.
Moreover, Orwell’s antipathy towards this new elite type was long-standing. He had bristled against the rigidity and pomposity of imperial officialdom as a minor colonial police official in Burma between 1922 and 1927. And he had always battled against the top-down socialist great and good, and much of academia, too, who were often very much hand in glove with the Stalinised left.
The hostility was mutual. Indeed, it accounts for the disdain that many academics and their fellow travellers continue to display towards Orwell today.
The importance of words
Nowadays we are all too familiar with this university-educated ruling caste, and its desire to control words and meaning. Just think, for example, of the way in which our cultural and educational elites have turned ‘fascism’ from a historically specific phenomenon into a pejorative that has lost all meaning, to be used to describe anything from Brexit to Boris Johnson’s Tory government – a process Orwell saw beginning with the Stalinist practice of calling Spanish democratic revolutionaries ‘Trotsky-fascists’ (which he documented in Homage to Catalonia (1938)).
Or think of the way in which our cultural and educational elites have transformed the very meanings of the words ‘man’ and ‘woman’, divesting them of any connection to biological reality. Orwell would not have been surprised by this development. In Nineteen Eighty-Four, he shows how the totalitarian state and its intellectuals will try to suppress real facts, and even natural laws, if they diverge from their worldview. Through exerting power over ideas, they seek to shape reality. ‘Power is in tearing human minds to pieces and putting them together in new shapes of your own choosing’, says O’Brien, the sinister party intellectual. ‘We control matter because we control the mind. Reality is inside the skull… You must get rid of these 19th-century ideas about the laws of nature.’
In Nineteen Eighty-Four, the totalitarian regime tries to subject history to similar manipulation. As anti-hero Winston Smith tells his lover, Julia:
‘Every record has been destroyed or falsified, every book has been rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And that process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.’
As Orwell wrote elsewhere, ‘the historian believes that the past cannot be altered and that a correct knowledge of history is valuable as a matter of course. From the totalitarian point of view history is something to be created rather than learned.’
This totalitarian approach to history is dominant today, from the New York Times’ 1619 Project to statue-toppling. History is something to be erased or conjured up or reshaped as a moral lesson for today. It is used to demonstrate the rectitude of the contemporary establishment.
But it is language that is central to Orwell’s analysis of this form of intellectual manipulation and thought-control. Take ‘Ingsoc’, the philosophy that the regime follows and enforces through the linguistic system of Newspeak. Newspeak is more than mere censorship. It is an attempt to make certain ideas – freedom, autonomy and so on – actually unthinkable or impossible. It is an attempt to eliminate the very possibility of dissent (or ‘thoughtcrime’).
As Syme, who is working on a Newspeak dictionary, tells Winston Smith:
‘The whole aim… is to narrow the range of thought. In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it. Every year fewer and fewer words, and the range of consciousness always a little smaller… Has it ever occurred to you, Winston, that by the year 2050, at the very latest, not a single human being will be alive who could understand such a conversation as we are having now?’
The parallels between Orwell’s nightmarish vision of totalitarianism and the totalitarian mindset of today, in which language is policed and controlled, should not be overstated. In the dystopia of Nineteen Eighty-Four, the project of eliminating freedom and dissent, as in Nazi Germany or Stalinist Russia, was backed up by a brutal, murderous secret police. There is little of that in our societies today – people are not forcibly silenced or disappeared.
However, they are cancelled, pushed out of their jobs, and sometimes even arrested by the police for what amounts to thoughtcrime. And many more people simply self-censor out of fear of saying the ‘wrong’ thing. Orwell’s concern that words could be erased or their meaning altered, and thought controlled, is not being realised in an openly dictatorial manner. No, it’s being achieved through a creeping cultural and intellectual conformism.
The intellectual turn against freedom
But then that was always Orwell’s worry – that intellectuals giving up on freedom would allow a Big Brother Britain to flourish. As he saw it in The Prevention of Literature (1946), the biggest danger to freedom of speech and thought came not from the threat of dictatorship (which was receding by then) but from intellectuals giving up on freedom, or worse, seeing it as an obstacle to the realisation of their worldview.
Interestingly, his concerns about an intellectual betrayal of freedom were reinforced by a 1944 meeting of the anti-censorship organisation English PEN. Attending an event to mark the 300th anniversary of Areopagitica, Milton’s famous 1644 polemic making the case for the ‘Liberty of Unlicenc’d Printing’, Orwell noted that many of the left-wing intellectuals present were unwilling to criticise Soviet Russia or wartime censorship. Indeed, they had become profoundly indifferent or hostile to the question of political liberty and press freedom.
‘In England, the immediate enemies of truthfulness, and hence of freedom of thought, are the press lords, the film magnates, and the bureaucrats’, Orwell wrote, ‘but that on a long view the weakening of the desire for liberty among the intellectuals themselves is the most serious symptom of all’.
Orwell was concerned by the increasing popularity among influential left-wing intellectuals of ‘the much more tenable and dangerous proposition that freedom is undesirable and that intellectual honesty is a form of anti-social selfishness’. The exercise of freedom of speech and thought, the willingness to speak truth to power, was even then becoming seen as something to be frowned upon, a selfish, even elitist act.
An individual speaking freely and honestly, wrote Orwell, is ‘accused of either wanting to shut himself up in an ivory tower, or of making an exhibitionist display of his own personality, or of resisting the inevitable current of history in an attempt to cling to unjustified privilege’.
These are insights which have stood the test of time. Just think of the imprecations against those who challenge the consensus. They are dismissed as ‘contrarians’ and accused of selfishly upsetting people.
And worst of all, think of the way free speech is damned as the right of the privileged. This is possibly one of the greatest lies of our age. Free speech does not support privilege. We all have the capacity to speak, write, think and argue. We might not, as individuals or small groups, have the platforms of a press baron or the BBC. But it is only through our freedom to speak freely that we can challenge those with greater power.
Orwell’s legacy
Orwell is everywhere today. He is taught in schools and his ideas and phrases are part of our common culture. But his value and importance to us lies in his defence of freedom, especially the freedom to speak and write.
His outstanding 1946 essay, ‘Politics and the English Language’, can actually be read as a freedom manual. It is a guide on how to use words and language to fight back.
Of course, it is attacked today as an expression of privilege and of bigotry. Author and commentator Will Self cited ‘Politics and the English Language’ in a 2014 BBC Radio 4 show as proof that Orwell was an ‘authoritarian elitist’. He said: ‘Reading Orwell at his most lucid you can have the distinct impression he’s saying these things, in precisely this way, because he knows that you – and you alone – are exactly the sort of person who’s sufficiently intelligent to comprehend the very essence of what he’s trying to communicate. It’s this the mediocrity-loving English masses respond to – the talented dog-whistler calling them to chow down on a big bowl of conformity.’
Lionel Trilling, another writer and thinker, made a similar point to Self, but in a far more insightful, enlightening way. ‘[Orwell] liberates us’, he wrote in 1952:
‘He tells us that we can understand our political and social life merely by looking around us, he frees us from the need for the inside dope. He implies that our job is not to be intellectual, certainly not to be intellectual in this fashion or that, but merely to be intelligent according to our lights – he restores the old sense of the democracy of the mind, releasing us from the belief that the mind can work only in a technical, professional way and that it must work competitively. He has the effect of making us believe that we may become full members of the society of thinking men. That is why he is a figure for us.’
Orwell should be a figure for us, too – in our battle to restore the democracy of the mind and resist the totalitarian mindset of today. But this will require having the courage of our convictions and our words, as he so often did himself. As he put it in The Prevention of Literature, ‘To write in plain vigorous language one has to think fearlessly’. That Orwell did precisely that was a testament to his belief in the public just as much as his belief in himself. He sets an example and a challenge to us all.
In fits of what might well be termed masochism, some of us now and then tune in to the legacy media. When doing so, one is likely to hear western-aligned politicians hold forth ad nauseam about the linguistically vogue rules-based order. Now and then, the word “international” is also inserted: the rules-based international order.
But what exactly is this rules-based order?
The way the wording rules-based order is bandied about makes it sound as if it has worldwide acceptance and has been around for a long time. Yet it comes across as a word-of-the-moment, both idealistic and disingenuous. Didn’t people just use to say international law, or refer to the International Court of Justice, the Nuremberg principles, the UN Security Council, or the newer institution — the International Criminal Court? Moreover, the word rules is contentious. Some will skirt the rules, perhaps chortling the aphorism that rules are meant to be broken. Rules can be unjust, and shouldn’t unjust rules be broken, or better yet, disposed of? Wouldn’t a preferable wording refer to justice? And yes, granted, justice can be upset by miscarriages. Or how about a morality-based order?
Nonetheless, it seems this wording of a rules-based order has jumped to the fore. And the word order makes it sound a lot like there is a ranking involved. Since China and Russia are advocating multipolarity, it has become clearer that the rules-based order, which is commonspeak among US and US-aligned politicians, is pointing at unipolarity, wherein the US rules a unipolar, US-dominated world.
An Australian thinktank, the Lowy Institute, has pointed to a need “to work towards a definition” for a rules-based order. It asks, “… what does America think the rules-based order is for?”
Among the reasons cited are “… to entrench and even sanctify an American-led international system,” or “that the rules-based order is a fig leaf, a polite fiction that masks the harsh realities of power,” and that “… the rules-based order can protect US interests as its power wanes relative to China…”
The Hill wrote, “The much-vaunted liberal international order – recently re-branded as the rules-based international order or RBIO – is disintegrating before our very eyes.” As to what would replace the disintegrated order, The Hill posited, “The new order, reflecting a more multipolar and multicivilizational distribution of power, will not be built by Washington for Washington.”
The Asia Times acknowledged that it has been a “West-led rules-based order” and argued that a “collective change is needed to keep the peace.”
It is a given that the rules-based order is an American linguistic instrument designed to preserve it as a global hegemon. To rule is America’s self-admitted intention. It has variously declared itself to be the leader of the free world, the beacon on the hill, exceptional, the indispensable nation (in making this latter distinction, a logical corollary is drawn that there must be dispensable nations — or in the ineloquent parlance of former president Donald Trump: “shithole” nations).
Thus, the US has placed itself at the apex of the international order. It seeks ultimate control through full-spectrum dominance. It situates its military throughout the world; it surrounds countries toward which it is inimically disposed – for example, China and Russia – with bases and weapons. It refuses to reject the first use of nuclear weapons. It does not reject the use of landmines. It still has a chemical-weapons inventory, and it allegedly carries out bioweapons research, as alluded to by Russia, which says it uncovered several clandestine biowarfare labs in Ukraine. This news flummoxed Fox News’ Tucker Carlson. Dominance is not about following rules; it is about imposing rules. That is the nature of dominating. Ergo, the US rejects the jurisdiction of the International Criminal Court and went so far as to sanction the ICC and declare ICC officials persona non grata when its interests were threatened.

Having placed itself at the forefront, the US empire needs to keep its aligned nations in line.
It is a truism that actions speak louder than words. But an examination of Trudeau’s words, compared with his actions, reveals a contradiction when it comes to Canada and the rule of law.
It seems Canada is just a lackey for the leader of the so-called free world.
One of the freedoms the US abuses is the freedom not to sign or ratify treaties. Even the right-wing thinktank, the Council on Foreign Relations, lamented, “In lists of state parties to globally significant treaties, the United States is often notably absent. Ratification hesitancy is a chronic impairment to international U.S. credibility and influence.”
The CFR added, “In fact, the United States has one of the worst records of any country in ratifying human rights and environmental treaties.”
It is a matter of record that the US places itself above the law. As stated, the US does not recognize the ICC; as a permanent member of the UN Security Council, the US has serially abused its veto power to protect the racist, scofflaw nation of Israel; it ignored a World Court ruling that found the US guilty of de facto terrorism for mining the waters around Nicaragua.
The historical record reveals that the US, and its Anglo-European-Japanese-South Korean acolytes, are guilty of numerous violations of international law (i.e., the rules-based international order).
When it comes to the US, the contraventions of the rules-based order are myriad. To mention a few:
Currently, the US is occupying Syria and stealing the oil of the Syrian people;
The US funded the Maidan coup that overthrew the elected president of Ukraine, leading to today’s special military operation devastating a Ukraine that continues to fight a US-NATO proxy war;
Then, there is the undeniable fact that the US exists because of a genocide wreaked by its colonizers, which has been perpetuated ever since.
Even the accommodations that the US imposed on the peoples it dispossessed are ignored, revealed by a slew of broken treaties.3
The history of US actions (as opposed to its words) and its complicit tributaries needs to be kept firmly in mind when the legacy media unquestioningly reports the pablum about adhering to a rules-based order.
Read Bob Joseph, 21 Things You May Not Know About the Indian Act: Helping Canadians Make Reconciliation with Indigenous Peoples a Reality, 2018.
Vine Deloria, Jr., Behind the Trail of Broken Treaties: An Indian Declaration of Independence, 1985. This governmental infidelity to treaties is also true in the Canadian context.
Psychological and linguistic manipulation are, for those in power, proven tools for building, consolidating and maintaining dominance — a reality keenly depicted in George Orwell’s never-more-relevant novel, “1984.”
As phrased by master propagandist Edward Bernays, an approximate contemporary of Orwell’s, the mind of the people “is made up for it by the group leaders in whom it believes and by those persons who understand the manipulation of public opinion.”
Recent events surrounding COVID vaccines have shown that medicine and public health — with the help of a complicit media — are particularly skilled at “pull[ing] the wires which control the public mind.”
The clever bag of linguistic tricks deployed by the medical cartel includes seeding evocative terms such as “vaccine hesitancy” and “lockdowns” (which is prison terminology) into popular and scientific discourse, forging slippery new definitions of words with formerly fixed meanings (such as “pandemic,” “herd immunity” and “vaccine”), and circling failed products back around by giving them the positive spin of “boosters.”
Ominously, medicine’s and public health’s verbal assaults encourage shaming of, or violence against, those who ask questions, while upholding the disingenuous pretense that vaccine mandates are compatible with freedom.
In this hostile upside-down universe, even the vaccine-injured are tarnished as “anti-vaxxers” or liars rather than acknowledged as ex-vaxxers who took risks that turned out to be life-changing.
‘Much like other stressors’
One of the more insulting recent examples of linguistic weaponization involves a dubious psychiatric cover term, “functional neurological disorder” (FND), that is suddenly being trumpeted as an explanation for the tsunami of adverse events — especially severe neurological reactions — being reported all over the world in the aftermath of COVID vaccination.
Psychiatrists conveniently define FND — which they also refer to as a “psychogenic” (originating in the mind) or “conversion” disorder — as “real” nervous system symptoms that “cause significant distress or problems functioning” but are “incompatible with” or “can’t be explained by” recognized neurological diseases or other medical conditions.
Lest members of the public derive a “simplistic impression of potential links between the [COVID] vaccine and major neurological symptoms,” neurologists pushing the FND story have hastened to reassure people that the “close development of functional motor symptoms after the vaccine does not implicate the vaccine as the cause of those symptoms.”
One of these individuals is National Institutes of Health-funded neurologist Alberto Espay, who implausibly adds that COVID vaccination (which entails injection with high-risk substances and technologies) is just “a stressor or precipitant, much like any other stressor … such as a motor vehicle accident or sleep deprivation.”
Officials and the media are audaciously trotting out the FND narrative on both sides of the pond, as evidenced by a recent Daily Mail headline that read, “Videos of people ‘struggling to walk’ after getting their COVID vaccine are NOT result of jab itself but a condition triggered by stress or trauma.”
Helping with the spin, a member of the UK’s Joint Committee on Vaccination and Immunization straight-facedly attributed this “stress” to coercion, stating: “If people begin to feel they are being kind of forced against their will to do something, then in a sense that’s quite a damaging thing to do because it gives people the impression vaccination is something being imposed on them.”
Hammering home the point that “there is nothing to see here,” King’s College London physician Matthew Butler solemnly (and without evidence) agrees that FND – though “serious and debilitating” – “does not implicate any vaccine constituents and should not hamper ongoing vaccination efforts.”
Butler is the lead author of a May 2020 paper proposing FND patients’ “abnormal body-focussed attention” be treated with psychedelics such as LSD and psilocybin — never mind that psychedelics themselves, admit Butler and co-authors, “sometimes produce abnormal physical and motor effects,” including seizures.
An all-too-familiar game
To past victims of vaccine injury, the “it’s all in your mind” sleight-of-hand being summoned to dismiss COVID vaccine injuries is all too familiar.
Consider autism, which psychiatrists blamed, in its earliest days, on emotionally distant “refrigerator moms.”
In more recent decades, families affected by autism have experienced the double whammy of regulatory indifference to likely culprits (including not just neurotoxic vaccines but other probable environmental triggers) alongside brazen denial of autism’s escalating prevalence.
Young people injured by human papillomavirus (HPV) vaccines tell similar stories of “denial and dismissal of reported harms and deaths.” Researchers who in 2017 reviewed the serious adverse events reported during two of the largest HPV vaccine clinical trials noted that “Practically, none of the serious adverse events occurring in any arm of both studies were judged [by the manufacturers] to have been vaccine-related.”
In the face of severe symptoms such as heart-attack-like chest pain, numbness and swelling of extremities, hair loss, whole-body aches and extreme fatigue, boys and girls injured by HPV vaccines have been repeatedly subjected to medical gaslighting — told they are “crazy” and just need to “slow down.”
In one incident in Australia, after “26 girls presented to the school’s sick bay with symptoms including dizziness, syncope [fainting] and neurological complaints” within two hours of receiving HPV vaccines at school, pharma-funded researchers had the chutzpah to dismiss the safety signal and characterize the episode as a “mass psychogenic event” — which they defined as “the collective occurrence of a constellation of symptoms suggestive of organic illness but without an identified cause in a group of people with shared beliefs about the cause.”
Recognize, question and reclaim
The medical-public health-pharma cartel, the “small cabal of wealthy countries, corporations and individuals” that support it, and their media mouthpieces are supremely confident in their ability to manage public perceptions through words and narratives, whether for the purpose of “mystifying” the public about key events, securing buy-in for oppressive policies or sowing discord to divide and conquer. (As journalists Caitlin Johnstone and Glenn Greenwald also remind us, many media personalities are intelligence agency veterans or assets, and the “sole owner of the Washington Post is a CIA contractor.”)
Thus, it pays to be attentive to how health authorities use language, for “the more you know about language, the more immune you become to its effects.”
Beyond noticing the manipulation, we must also stop ceding the linguistic terrain to our would-be manipulators — for example, by eschewing weaponized vocabulary such as the pejorative term “vaccine hesitancy.”
Catholic journalist Jane Stannus points out that the term “vaccine hesitant” portrays those who decline COVID (or other) vaccines as “‘trapped by irrational fears’ in a state of inaction or ignorantly opposed to science,” with the strong suggestion “that such backward and weak-minded persons are worthy of contempt, especially compared with the enlightened, confident people who signed up for the vaccine immediately.”
The unfortunate corollary of such language is the “witch hunt on the unvaccinated” that we are already witnessing, “an act of violence against the fabric of society,” says Stannus, that is “a greater evil … than the shared suffering of disease.”
We can and urgently need to see through these shenanigans and reclaim our humanity.
Fast-moving current events are proving that those who declined COVID injections are the wise ones, with science vindicating them in just about every way.
Whether we consider the many suspected dangers of products unleashed on the public less than a year ago, or the injuries and deaths occurring on a never-before-seen scale (including in teens who had their lives ahead of them), or the clear superiority of natural immunity, or the fact that the injections don’t even do the one thing the clinical trials alleged they could do (i.e., keep more severe illness at bay), it is clear that citizens who would rather think for themselves than swallow prefabricated lies are the ones who are going to come out ahead.
The question How should we live? is one that many ask in a crisis, jolted out of normal patterns of life. But that question is not always a simple request for a straightforward answer, as if we could somehow read off the ‘correct’ answer from the world.
This sort of question can be like a pain that requires a response that soothes as much as it resolves. It is not obvious that academic philosophy can address such a question adequately. As the Australian philosopher Raimond Gaita has suggested, such a question emerges from deep within us all, from our humanity, and, as such, we share a common calling in coming to an answer. Academia often misses the point here, ignoring the depth, and responding as if problems about the meaning of life were logical puzzles, to be dissolved or dismissed as not real problems, or solved in a single way for all time. True, at various times philosophers such as Gilbert Ryle and more recently Mikel Burley have called for a revision of academia’s approach towards these sorts of questions, for a ‘thickened’ or expanded conception. But, while improving our awareness of their complexity and diversity, such approaches still fail to address the depth that their human origin provides.
The presence of a humanness, or a depth, to these sorts of questions comes not just from the context in which they’re asked, but also from their origin, their speaker. They are real questions for real people, and shouldn’t be dismissed with a logical flourish or treated like an interesting topic for a seminar. I would laugh if I heard a computer ask How should we live? after beating it at chess, but I would cry to hear a mother ask her husband, on the death of their son, How should we live? Although the same words have been uttered, these questions have a different form: the mother’s question contains a qualitative depth, a humanness that isn’t there in the computer’s question. We must acknowledge this if we want to find an answer to the specific question she asked with such poignancy.
The computer is a thing that cannot meaningfully ask those sorts of questions; in contrast, it’s offensive to call a person a ‘thing’. Only a human can ask that sort of question within this sort of context. We would hear the mother’s words and say that they contain a depth that’s revealing something perhaps previously hidden about herself; the computer’s question isn’t even said to be shallow. It seems to have nothing of that sort to reveal about itself whatsoever, like a parrot repeating the words it has been taught without the complexity of the human context that gives them their usual meaning. This isn’t to say that computers won’t one day be intelligent, ‘conscious’ or ‘sentient’, or that human language is ‘private’; it’s closer to the Wittgensteinian remark that: ‘If a lion could talk, we could not understand him.’
This means that the form that a language takes reflects the complex social context of the life of the speaker, and the degree to which I share a similar form of life with the speaker is the same degree to which I can meaningfully understand the utterance. The ‘life’ of the computer, we suppose, is either one-dimensional due to it lacking depth or, even if it has depth, it would be incommunicable through human language, because, simply put, we and they differ so much. The humanness that provides the depth to our language is simply inaccessible to silicon chips and copper wires, and vice versa.
This depth to the human condition is part of what we mean when we speak of our humanity, spirit or soul, and anyone who wishes to question or explore this aspect of the human condition must do so in a form of language that can access and replicate its depth. We call those sorts of languages spiritual. But this way of speaking shouldn’t be taken literally. It doesn’t mean that spirits, souls and God exist, or that we must believe in their literal existence in order to use this sort of language.
Questions about the meaning of life and others of a similar kind are often misconstrued by those too ready to think of them as straightforward requests for an objective true answer.
Consider, for example, what atheists mean by ‘soul’ when they refute the cognitive proposition that asserts the literal existence of souls, in comparison with what I mean when I describe slavery as soul-destroying. If atheists were to argue that slavery cannot be soul-destroying because souls don’t exist, then I would say that there’s a meaning here that’s lost on them by being overly literal. If the statement ‘Slavery is soul-destroying’ is forced into a purely cognitive form, then not only does it misrepresent what I mean to say, it actively prevents me from ever saying it. I want to express something that represents the depth of the sort of experience I’m having: this isn’t a matter of making an implied statement about whether or not souls exist – it’s not affected by the literal existence or non-existence of souls. This sort of meaning to spiritual language is found in a different dimension from the one where cognitivists look, irrespective of their atheism, and this is achieved through our capacity to embed a dimension of depth in the form of our language through the non-cognitive process of expressing, describing and evoking our sense of humanity within one another.
When considering how to answer the question How should we live?, we should first reflect on how it is being asked – is it a cognitive question looking for a literal matter-of-fact answer, or is it also in part a non-cognitive spiritual remark in answer to a particular human, and particularly human, situation? This question, so often asked by us in times of crisis and despair, or love and joy, expresses and indeed defines our sense of humanity.
Many of Buddhism’s core tenets significantly overlap with findings from modern neurology and neuroscience. So how did Buddhism come close to getting the brain right?
Over the last few decades many Buddhists and quite a few neuroscientists have examined Buddhism and neuroscience, with both groups reporting overlap. I’m sorry to say I have been privately dismissive. One hears this sort of thing all the time, from any religion, and I was sure in this case it would break down upon closer scrutiny. When a scientific discovery seems to support any religious teaching, you can expect members of that religion to become strict empiricists, telling themselves and the world that their belief is grounded in reality. They are always less happy to accept scientific data they feel contradicts their preconceived beliefs. No surprise here; no human likes to be wrong.
But science isn’t supposed to care about preconceived notions. Science, at least good science, tells us about the world as it is, not as some wish it to be. Sometimes what science finds is consistent with a particular religion’s wishes. But usually not.
Despite my doubts, neurology and neuroscience do not appear to profoundly contradict Buddhist thought. Neuroscience tells us the thing we take as our unified mind is an illusion, that our mind is not unified and can barely be said to “exist” at all. Our feeling of unity and control is a post-hoc confabulation and is easily fractured into separate parts. As revealed by scientific inquiry, what we call a mind (or a self, or a soul) is actually something that changes so much and is so uncertain that our pre-scientific language struggles to find meaning.
Buddhists say pretty much the same thing. They believe in an impermanent and illusory self made of shifting parts. They’ve even come up with language to address the tension between perception and belief. Their word for self is anatta, which is usually translated as ‘non self.’ One might try to refer to the self, but the word cleverly reminds one’s self that there is no such thing.
When considering a Buddhist contemplating his soul, one is immediately struck by a disconnect between religious teaching and perception. While meditating in the temple, the self is an illusion. But when the Buddhist goes shopping he feels like we all do: unified, in control, and unchanged from moment to moment. The way things feel becomes suspect. And that’s pretty close to what neurologists deal with every day, like the case of Mr. Logosh.
Mr. Logosh was 37 years old when he suffered a stroke. It was a month after knee surgery and we never found a real reason other than trivially high cholesterol and smoking. Sometimes medicine is like that: bad things happen, seemingly without sufficient reasons. In the ER I found him aphasic, able to understand perfectly but unable to get a single word out, and with no movement of the right face, arm, and leg. We gave him the only treatment available for stroke, tissue plasminogen activator, but there was no improvement. He went to the ICU unchanged. A follow-up CT scan showed that the dead brain tissue had filled up with blood. As the body digested the dead brain tissue, later scans showed a large hole in the left hemisphere.
Although I despaired, I comforted myself by looking at the overlying cortex. Here the damage was minimal and many neurons still survived. Still, I mostly despaired. It is a tragedy for an 80-year-old to spend life’s remainder as an aphasic hemiplegic. The tragedy grows when a young man looks towards decades of mute immobility. But you can never tell with early brain injuries to the young. I was yoked to optimism. After all, I’d treated him.
The next day Mr. Logosh woke up and started talking. Not much at first, just ‘yes’ and ‘no.’ Then ‘water,’ ‘thanks,’ ‘sure,’ and ‘me.’ We eventually sent him to rehab, barely able to speak, still able to understand.
One year later he came back to the office with an odd request. He was applying to become a driver and needed my clearance, which was a formality. He walked with only a slight limp, his right foot a bit unsure of itself. His voice had a slight hitch, as though he were choosing his words carefully.
When we consider our language, it seems unified and indivisible. We hear a word, attach meaning to it, and use other words to reply. It’s effortless. It seems part of the same unified language sphere. How easily we are tricked! Mr. Logosh shows us that unity of language is an illusion. The seeming unity of language is really the work of different parts of the brain, which shift and change over time, and which fracture into receptive and expressive parts.
Consider how easily Buddhism accepts what happened to Mr. Logosh. Anatta is not a unified, unchanging self. It is more like a concert of constantly changing emotions, perceptions, and thoughts. Our minds are fragmented and impermanent. A change occurred in the band, so it follows that one expects a change in the music.
Both Buddhism and neuroscience converge on a similar point of view: The way it feels isn’t how it is. There is no permanent, constant soul in the background. Even our language about ourselves is to be distrusted (requiring the tortured negation of anatta). In the broadest strokes then, neuroscience and Buddhism agree.
How did Buddhism get so much right? I speak here as an outsider, but it seems to me that Buddhism started with a bit of empiricism. Perhaps the founders of Buddhism were pre-scientific, but they did use empirical data. They noted the natural world: the sun sets, the wind blows into a field, one insect eats another. There is constant change, shifting parts, and impermanence. They called this impermanence anicca, and it forms a central dogma of Buddhism.
This seems appropriate as far as the natural world is concerned. Buddhists don’t apply this notion to mathematical truths or moral certainties, but sometimes, cleverly, apply it to their own dogmas. Buddhism has had millennia to work out seeming contradictions, and it is only someone who was not indoctrinated who finds any of it strange. (Or at least any stranger than, say, believing God literally breathed a soul into the first human.)
Early on, Buddhism grasped the nature of worldly change and divided parts, and then applied it to the human mind. The key step was overcoming egocentrism and recognizing the connection between the world and humans. We are part of the natural world; its processes apply themselves equally to rocks, trees, insects, and humans. Perhaps building on its heritage, early Buddhism simply did not allow room for human exceptionalism.
I should note that I refuse to accept they simply got this much right by accident; that seems too improbable. Why would accident bring them to such a counterintuitive belief? Truth from subjective religious rapture is also highly suspect. Firstly, those who enter religious raptures tend to see what they already know. Secondly, if the self is an illusion, then aren’t subjective insights from meditation illusory as well?
I don’t mean to dismiss or gloss over the areas where Buddhism and neuroscience diverge. Some Buddhist dogmas deviate from what we know about the brain. Buddhism posits an immaterial thing that survives the brain’s death and is reincarnated. After a person’s death, the consciousness reincarnates. If you buy into the idea of a constantly changing immaterial soul, this isn’t as tricky and insane as it seems to the non-indoctrinated. During life, consciousness changes as mental states replace one another, so each moment can be considered a reincarnation from the moment before. The waves lap, the sand shifts. If you’re good, they might one day lap upon a nicer beach, a higher plane of existence. If you’re not, well, someone’s waves need to supply the baseline awareness of insects, worms, and other creepy-crawlies.
The problem is that there’s no evidence for an immaterial thing that gets reincarnated after death. In fact, there’s even evidence against it. Reincarnation would require an entity (even the vague, impermanent one called anatta) to exist independently of brain function. But brain function has been so closely tied to every mental function (every bit of consciousness, perception, emotion, everything self and non-self about you) that there appears to be no remainder. Reincarnation is not a trivial part of most forms of Buddhism. For example, the Dalai Lama’s followers chose him because they believe him to be the living reincarnation of a long line of respected teachers.
Why have the dominant Western religious traditions gotten their permanent, independent souls so wrong? Taking note of change was not limited to Buddhism. The same sort of thinking pops up in Western thought as well. The pre-Socratic Heraclitus said, “Nothing endures but change.” But that observation didn’t really go anywhere. It wasn’t adopted by monotheistic religions or held up as a central natural truth. Instead, pure Platonic ideals won out, perhaps because they seemed more divine.
Western thought is hardly monolithic or simple, but monotheistic religions made a simple misstep when they didn’t apply naturalism to themselves and their notions of their souls. Time and again, their prominent scholars and philosophers rendered the human soul exceptional and otherworldly, falsely elevating our species above and beyond nature. We see the effects today. When Judeo-Christian belief conflicts with science, it nearly always concerns science removing humans from a putative pedestal, a central place in creation. Yet science has shown us that we reside on the fringes of our galaxy, which itself doesn’t seem to hold a particularly precious location in the universe. Our species came from common ape-like ancestors, many of which in all likelihood possessed brains capable of experiencing and manifesting some of our most precious “human” sentiments and traits. Our own brains produce the thing we call a mind, which is not a soul. Human exceptionalism increasingly seems a vain fantasy. In its modest rejection of that vanity, Buddhism exhibits less error and less original sin, this one of pride.
How well will any religion apply the lessons of neuroscience to the soul? Mr. Logosh, like every person whose brain lesion changes their mind, challenges the Western religions. An immaterial soul cannot easily account for even a stroke associated with aphasia. Will monotheistic religions change their idea of the soul to accommodate data? Will they even try? It is doubtful. Their rigid human exceptionalism is cemented firmly into dogma.
Will Buddhists allow neuroscience to render their idea of reincarnation obsolete? This is akin to asking if the Dalai Lama and his followers will decide he’s only the symbolic reincarnation of past teachers. This is also doubtful, but Buddhism’s first steps at least made it possible. Unrelated to neuroscience and neurology, in 1969 the Dalai Lama said his “office was an institution created to benefit others. It is possible that it will soon have outlived its usefulness.” Impermanence and shifting parts entail constant change, so perhaps it is no surprise that he’s lately said he may choose the next office holder before his death.
Buddhism’s success was to apply the world’s impermanence to humans and their souls. The results have carried this religion from antiquity into modernity, an impressive distance. With no fear of impermanent beliefs or constant change, how far will they go?
By Luther Blissett and J. F. Sebastian of Arkesoul
A few years ago, Neal Stephenson wrote a widely shared article called “Innovation Starvation” for the World Policy Institute. He began the piece lamenting our inability to fulfill the hopes and dreams of mid-20th century mainstream American society. Looking back at the majority of sci-fi visions of the era, it’s clear many thought we’d be living in a utopian golden age and exploring other planets by now. In reality, the speed of technological innovation has seemingly declined compared to the first half of the 20th century, which saw the creation of cars, airplanes, electronic computers, etc. Stephenson also mentions the Deepwater Horizon oil spill and the Fukushima disaster as examples of how we’ve collectively lost our ability to “execute on the big stuff”.
Stephenson’s explanation for this predicament is twofold: outdated bureaucratic structures which discourage risk-taking and innovation, and the failure of cultural creatives to provide “big visions” which dispute the notion that we have all the technology we’ll ever need. While there’s much to be said about archaic, inefficient (and corrupt) bureaucracies, there’s also a compelling argument to be made about the cultural importance of storytelling and art and how best to utilize them. One of the solutions offered by Stephenson, in this regard, is Project Hieroglyph, which he describes as “an effort to produce an anthology of new SF that will be in some ways a conscious throwback to the practical techno-optimism of the Golden Age.”
While Project Hieroglyph may be a noble endeavor, one could argue that it’s based on a flawed premise. The role of science fiction has never been just about supplying grand visions for a better future, but about making sense of the present. There seems to be an assumption that the optimistic Golden Age had a causal relationship with a perceived technological golden age, when it may have simply been a reflection of it, just as dystopian sci-fi reflects and strongly resonates with the world today. Stephenson may be correct in his view that much SF today is written in a “generally darker, more skeptical and ambiguous tone”, but this more nuanced perspective does not necessarily signify the belief that “we have all the technology we’ll ever need”. Rather, it reflects decades of collective experience and knowledge of the unforeseen and cumulative effects of technologies. Nor does such fiction focus only on the destructive effects of technology, however large a component of the narrative they may be; destruction simply makes for better drama, and the subtext is often intended as critique rather than celebration. For example, the archetypal hacker protagonists of technocratic cyberpunk dystopias employ technology for more positive ends (though some question whether good SF, as in speculative fiction, needs to involve new technology at all).
A particularly positive function for dystopian sci-fi is its use as rhetorical shorthand. It’s increasingly common in public discourse on major issues of the day to invoke dystopian references. Disastrous social effects of peak oil or post-collapse are often characterized as Mad Max scenarios. Various negative aspects of genetic modification and pharmaceutical development conjure Brave New World. Anxiety over out-of-control AI and the resultant devaluing of human life brings to mind films as varied as Blade Runner, The Matrix and The Terminator. The expanding police/surveillance state is reminiscent of 1984 and numerous classics which have followed in its footsteps, including V for Vendetta and Brazil. General fears of duplicitous, psychopathic power elites and social manipulation have elevated They Live from relatively obscure b-movie to cult classic. The entry of the term “zombie apocalypse” into the popular lexicon may in part stem from fear (and uncomfortable recognition) of images of viral social disintegration and martial law-enforced containment efforts depicted throughout various media. The burgeoning omnipotence of multinational corporations and hackers in Mr. Robot may have been the stuff of cyberpunk dystopias such as Neuromancer and Max Headroom 30 years ago, yet it still has much to contribute to the public discourse as contemporary drama. Such visions may not prevent (or have not prevented) the scenarios they warn us of, but they have provided a vocabulary and framework for understanding such problems, and who’s to say how much worse it could be had such cautionary memes never existed?
The prophetic nature of storytelling, inasmuch as it derives from the minds of authors, artists and commentators who coexist with the tensions and contexts particular to their epochs, resonates with the oughts, ifs, and whats inherent to our daily lives. The cautionary element of narrative is a natural product of the human mind, and of the premium we place on sharing our mental reserves with the world. To creatively delve into and concoct problems and solutions from experience is an axiom analogous to the categorical imperative, purely and in abstract terms of what rationality involves. Yet oftentimes we find material that dwells on cultural malaise, on all things pathological in our society, such as censorship, conformity, bureaucracy, authoritarianism, militarism, and capital marketing; things which underpin issues that, if left untouched, can engulf the real brilliance of our spirit.
Stephenson fails to see this point. SF, like any form of intelligent culture, denounces and opposes systems of oppression, and even shows us the how, when, and why: the frameworks, the makings of apparent utopias into dystopias. Dystopian storytelling can serve the efforts of downtrodden creators with utopian ideals as effectively as utopian stories can reframe a societal trajectory led by beneficiaries of real-world dystopia (though it may be experienced as utopia by a privileged few). SF does not only conjure visions of better futures. It lends us vocabularies and syntaxes to understand, and to impede, the fallenness of a confused and ever more isolated humanity. These are languages that pervade our interiorities and allow the exterior to change.
At the core, SF is prophecy through reasoned extrapolation and artistic intuition. This is what SF stands for when properly aligned with the subjectivities of the oppressed, and not with the voices of oppression: true testaments of a space and a time; visions of the future that take care not to repeat the mistakes of the past; and tools for our personal and collective flourishing.
What a farce. Telling everyone “everything is under control” is perhaps the most dulling and disempowering phrase ever uttered.
My fascination with language just keeps expanding. I love looking at expressions, words, colloquialisms, so-called “sayings” and the like with fresh awakened eyes. I can’t help it. The supposed, accepted and unconscious meanings of these imposed “expressions” are what direct our minds and turn our attention.
So much of what we’ve been handed down is contorted, manipulated and eventually nestled in the collective mindset to twist our hearts away from simple truth.
How many times has this expression been used to bring seeming comfort to someone; “Don’t worry, everything’s under control.” Really? What control? Who’s controlling what? And why?
So often this is used to imply some powerful force is behind everything directing what’s going on. Remind you of anything? Yes – religion, hierarchy, and social, political and economic so-called “controllers”. How debilitating can you get when it comes right down to it, playing on people’s insecurity and lack of conscious awareness?
“It’s under control” is comforting to people? It’s the picture of personal disempowerment!
External Control? Or Creative Freedom!
Sure, there is a wonderful creative Source we are all intrinsically part of, but it’s not “controlling” anything in a living, expanding Universe with beings of all sorts possessing free will and self-determination in an alive, multidimensional environment. If anything, this Creative impetus is tearing down control systems that attempt to foist themselves on its process, put there either by conscious intent or the manifestation of hardening mindsets in the social fabric.
Earth processes attest to this, as well as our spectacular expanding and ever changing Universe. Nature itself is alive with new sprouts of life in the animal, plant and mineral worlds, never mind other realms of existence.
Look at earthquakes and volcanoes, or the sun and astral influences. As much as some try to analyze or predict major events that affect the earth, that still brings no control, only a measure of preparedness on rare occasion. And that type of insecurity is good for us. It’s humbling and keeps us in check.
This process is what the awakening is all about!
While some earth changes may be exacerbated by human activity, the big stuff is way out of our control. Thankfully. That’s what makes life life, and also why the insane would-be captivators of humanity and its planet work so feverishly in their mad pursuits to try to control natural processes. Control is their yardstick, and without it they have no temporal security or power, which to them equates to some weird form of normalcy or equilibrium.
How upside down can you get.
Be it geoengineering our climate, genetically modifying the natural progression of life forms, or attempting to install artificial intelligence to run their soulless programs, these maniacs are desperate for control in every shape and form. Why? They cannot meld or harmonize with what’s natural since their psychopathic, demonic intentions have nothing to gain, all while the rest of us thrive on being part of the fantastic empowering natural processes of Creation Itself.
Therein lies the rub. Quite apparently we’re in a world of conflicts of interest. That’s our current playing field, if you will. Or won’t. It’s just the way it is.
Let Go Into Conscious Anarchy
I like the anarchy approach. Anarchy is an example of another twisted word. It doesn’t mean putting a society into deliberate chaotic destruction; it means living without hierarchical control mechanisms. Something most groomed humans are scared spitless of, thanks to generations of social programming.
You mean to tell me if we didn’t have so-called “government” that everything would fall apart? Baloney. People are resourceful and essentially responsible, at least in their inner nature, where it hasn’t been perverted by all of this programming to the contrary. That very dependence on external control systems is what the hierarchy is literally banking on, which is why we see the repeated memes of fear over scarcity and personal security.
There’s plenty for everybody. All we need to do is work together locally and share with other communities in a range of sizes and distances. Real commerce in loving cooperation, not the regulated systems that have been foisted upon us. We’ve been weaned from personal responsibility into a system of statist dependence, like someone who’s stopped using their muscles and is dependent on some mass-produced contraption for their mobility when there’s nothing wrong with them at all.
They just need to exercise their innate capabilities.
Just because people have become spiritually atrophied in large numbers doesn’t mean that’s the way it’s supposed to be. Exactly like waking up, it’s time to arise and use the magnificent body of capabilities we’ve all been given, and let go of these false crutches and systems of hierarchy and walk into life and live!
Epilogue
That false assurance that everything’s under control by external forces, as if we individually have virtually no control of our own, has got to go. Being comforting in times of stress and turmoil is one thing, but propping people up with some external dependence reinforcement is fundamentally wrong.
It’s time to be conscious – in our words, our thoughts and our actions.
It’s really not that difficult. The main thing you’ll confront is ignorant, unenlightened opposition from those who’ve grown deeply accustomed to this dependency programming, as if it’s some form of respect for the “great ones” who rule them.
It’s very deep and will take time to overcome for most. But remaining in that conscious space, no matter what ridicule or obstacles assail you, is the very solution we each are longing for.
It begins with each of us. Standing our ground and then moving forward in conscious, loving action.
Do it. Bravely. The time for humanity to arise is now.