First Do No Harm

By Emmy Bee

Source: Dissident Voice

Let me preface what I am about to say by stating that I have the utmost esteem for mainstream medicine’s skill in emergency situations: the do-or-die surgeries and the dispensing of powerful life-saving drugs in that setting are second to none, and its mastery of reconstructive surgery in cases of deformity and its advances in prosthetics are nothing less than spectacular. These are what make mainstream medicine great.

I would also like to add that I am not an expert of any kind. I hold no degrees or certifications, nor do I represent, belong to, or work for any party, organization or corporation. I speak for myself, a sixty-two-year-old woman, and from my experiences with, and extensive research into, a topic I find fascinating, intriguing and bothersome: mainstream medicine and how the belief in its infallibility harms us in so many ways.

The pompous certainty of mainstream medicine’s powerful proponents — be they multi-billion-dollar pharmaceutical companies, medical associations, disease-specific charities, government agencies, the Madison Avenue firms selling the diseases and the pills, or the TV, magazine and news media parroting their cash cow’s every claim — has most people believing, hook, line and sinker, in the impeccable record of mainstream medicine. No questions asked.

Here, I would like to throw out some alarming statistics — ones that can easily be found in publications ranging from Forbes to JAMA to CounterPunch.

Adverse reactions to “correctly” prescribed drugs are estimated to be the fifth leading cause of death in the U.S.1 Over-the-counter (OTC) cold medications are among the top twenty substances causing death in children.2 Used according to directions, NSAIDs (non-steroidal anti-inflammatory drugs) are responsible for more than 20,000 deaths every year.3 There are over 400,000 deaths each year from drug and medical errors, and tens of thousands more from unnecessary procedures.4 Add those together and mainstream medicine is the third leading cause of death in the U.S.

So why is it that most people trust, without question, the omnipotence of mainstream medicine in the same way religious zealots believe in their chosen religion or atheists in theirs? When well over 200,000 people die in the U.S. each year from prescription drug use alone — not abuse, but use — and when we spend more per capita than any other nation on earth, yet our health indices and life expectancy sit near the bottom among developed nations,5 why is there no sense of outrage (except over price gouging!), or at the very least a sense that something is not right, that something is terribly wrong?

Yet, as has happened many times, should a doctor, scientist, researcher or curious layperson question conventional medical creed, the heretic is quickly beaten down with jeers of derision, swiftly “discredited” and shunned by the medical community. The media then parrot what they are told, and soon everyone is asking, “How dare they question science? Haven’t they heard of collateral damage? Every war (and we are constantly reminded of the war we are fighting against disease) has collateral damage.” Yet when a few people die from dirty spinach or the improper use of some herbal product, or a handful of people (some even vaccinated) catch the measles (and live to tell about it!), panic overruns the media.

Does anyone remember, or know of, the ad campaigns telling us that “nine out of ten doctors smoke Camel cigarettes” or that DDT pesticide spray is “good for you”? We may laugh now, but what about the more recent debacles such as HRT (hormone replacement therapy), Vioxx, swine flu vaccines and GMOs — all of which received the seal of approval from industry scientists and government agencies, and all of which were pushed by Madison Avenue, just like the cigarettes given to my father for heart disease and the DDT sprayed on everything in sight, including children?

The number of TV commercials for drugs, medical clinics, hospitals and doctor-related reality shows is mind-blowing. It is a constant barrage of “a pill for every ill” and “don’t forget to ask your doctor about it”: people with vapid eyes move in slow motion through white rooms or a meadow filmed through gauze, while a soft, soothing voice tells you of the pill’s benefits; then the same voice, just as soft but at breakneck speed, spews a partial list of possible side effects and unwanted symptoms, some of which sound, and some of which are (death, for instance), worse than the “disease” itself.

And interspersed between the ad for an over-the-counter (OTC) medication that not long ago was “by prescription only” and another for the new six-story, billion-dollar specialty clinic are yet more commercials inviting us to join in what has become a celebration of [fill in the blank] disease. There’s a “walk” or a “run”, even a paddle, for this disease and a different colored “ribbon” for that disease. It is almost as if having a disease has become the new “in” thing: fashionable, admirable, even heroic. Are we being groomed to embrace our diseases, while at the same time being told to give, give, give to find a “cure”? According to Dr. Robert Sharpe, author of The Cruel Deception, a book about animal testing in medical research, “. . . in our culture treating disease is enormously profitable, preventing it is not.”

We have been told we are living longer, but the sad fact is that the trend has reversed: for the first time in decades, life expectancy in the U.S. has dropped.6 Even more alarming, along with adults, the number of children with chronic diseases has risen sharply. Think about it. How many of us make it past seventy (hell, even sixty!) without some major medical catastrophe (or two) requiring surgery, special apparatuses to help us do what used to come naturally, or a prescription for no fewer than three or four drugs? And how many “new” (iatrogenic) diseases do we then acquire from taking those drugs or undergoing those procedures, diseases that require even more drugs and more procedures?

And just what is conventional medicine’s track record for curing disease — any disease? Not palliation or suppression or masking (all of which suppress and weaken the immune system), but curing? Forty years ago I knew one woman with breast cancer; today I know dozens, all of whom underwent torturous procedures, surgeries and drugging, and yes, some of them died. And why is it that when people die after making use of conventional medicine — surgery, chemotherapy, drugs — there are no cries of foul against their choice of healthcare? Instead they are hailed as heroes who fought a courageous battle. But when someone dies after trying an alternative medicine, the cries against their choice are nothing less than vitriolic, as if no one ever dies using mainstream medicine — when in actuality many thousands die each year from mishaps alone, never mind the many hundreds of thousands who die from the diseases that remain rampant: heart disease, cancer, diabetes and the rest.7

Despite unprecedented technical and scientific advances, mainstream medicine’s only answer to disease is to destroy: with toxic substances, ingested or injected; with life-threatening procedures; and with the removal of diseased (and oftentimes healthy) body parts. Kill germs, fight cancer, destroy cells, kick (name a disease)’s ass; crush, terminate, rub out, blast; never build up, heal, cure. Are we, as a society, even capable of imagining alternatives to mainstream medicine? I once told an MD I knew that a friend’s kidney stone had passed with relative ease after drinking an herbal tea prescribed by an acupuncturist. “If there was something out there that could do that,” he told me, “we would know about it.” Not with that attitude!

When contemplating all that led up to the economic debacle of 2008, I would venture that most people are now leery (if they weren’t already!) of any advice from the banking industry and Wall Street concerning, say, home loans. The same wariness prevails when listening to the oil or coal industries’ take on environmental issues, the weapons makers’ spin on whether to go to war or sell arms, or the pesticide-producing conglomerates on the safety of their products. The conflict of interest in each case should be obvious. And when the very ones who profit are the ones limiting the field of allowable research, selectively choosing among research papers to discredit alternative theories or boost their own, and controlling the message, we are seeing conflict of interest on a massive scale.

And what of the research done by the pharmaceutical companies telling us that a certain drug, procedure or vaccine is safe and effective? Does it make you comfortable to know that President Obama’s pick for FDA (Food and Drug Administration) Commissioner, Robert Califf, had received research funds from twenty-five drug companies while director of Duke University’s clinical research department, where a major research-fraud scandal erupted under his watch?8 Or that Julie Gerberding, former head of the CDC (Centers for Disease Control and Prevention), concealed and then destroyed evidence of a link between the MMR vaccine and autism in African-American boys9 — yet Congress refuses to subpoena her and the CDC whistleblower, the media never mention it, and this same Julie Gerberding left the CDC to become president of Merck’s vaccine division and then executive VP of Merck, the sole manufacturer of the MMR vaccine? These are just two examples of many, and they speak not only of a colossal conflict of interest but of a dangerous threat to true scientific discovery affecting millions of lives.

So why is it that pharmaceutical companies — which, by the way, have more lobbyists than there are members of the House and Senate combined, and which have a woeful track record when it comes to conflicts of interest in medical research, drug research and research into the viability of alternative medicine — are given a pass, a green light, a pat on the back, and are even vehemently defended and vociferously cheered on? What marketing magic do they spin that makes people overlook their complicity in fraudulent research, their over-the-top demonizing of opposing viewpoints and, above all, their abhorrent safety record?

Why can’t we question the effectiveness, the safety or the necessity of some vaccines without being rudely shouted down? I wonder if those who shout the “Shut up! They are safe!” mantra have ever taken the time to study the long history of infectious disease and of vaccine use. Do they know that there are no long-term studies on the effects of vaccines? That vaccinated people are not necessarily protected from the diseases they are vaccinated against? That the pharmaceutical companies and the government agencies refuse to conduct a vaccinated-versus-unvaccinated population study of overall health indices? That vaccines, unlike other drugs, are not tested against a placebo but against another vaccine? Or that childhood infectious diseases had been on a downward trend for many years (measles deaths had declined by almost 100 percent!) well before vaccines were introduced, as had many other infectious diseases, running their course and improving as our sanitary conditions and our treatment of illness improved? So why not let them continue to decline until they naturally disappear? Why introduce crude disease substances and a mixture of lethal chemicals (whose long-term effects no one knows or bothers to test) into our bodies in an attempt to eradicate diseases that seemed to be doing a fine job of disappearing naturally?

Could there be a connection between the plethora of “new” or increasing diseases and the crude drugs (including vaccines) we have been putting into our bodies for decades now? If we stop to think about it, does it make sense to inject ourselves with hazardous material we know nothing about in order to prevent diseases like measles, mumps and the flu that are now so simple to treat?

But we are told, ad nauseam, to “shut up and just get your shots,” because “all your questions have already been answered!” Yet when you look behind the scenes of medical research and find the pharmaceutical companies paying the bills, writing the reports and working closely with government agencies, research colleges, medical journals and the media to get their message out, it should raise a red flag.

What is the great harm brought about by this absolutism of mainstream medicine’s proponents? There are many harms, but two stand out. One is that freedom of choice in one’s healthcare decisions can and will be taken away; it has begun already and is picking up momentum. I do not use conventional medicine except in some emergency situations, but that doesn’t mean I wouldn’t fight for the right of others to use it exclusively if they believe it to be their best or only option. Being comfortable with one’s healthcare choice is, I firmly believe, of the utmost importance. Yet if it were up to many people, I would not be allowed to choose the kind of healthcare I want for my family and me.

And secondly, that same vitriolic certainty and insular thinking are truly harmful to the very essence of scientific inquiry. Great discoveries can be ignored simply because of a refusal to look beyond what we are told is scientifically acceptable today, the realm of inquiry having been limited by the greed of those in power and their manipulation of the masses through fear.

  1. To Err Is Human: Building a Safer Health System, Institute of Medicine, Committee on Quality of Health Care in America, 2000.
  2. 2009 Annual Report of the American Association of Poison Control Centers’ National Poison Data System (27th report).
  3. E. Goldman, “Healing the NSAID Nation,” 2012.
  4. Leah Binder, “Stunning News on Preventable Deaths in Hospitals,” September 23, 2013; see also Gary Null, PhD; Carolyn Dean, MD, ND; Martin Feldman, MD; Debora Rasio, MD; and Dorothy Smith, PhD, “Death by Medicine,” Integral Options Cafe, January 12, 2010.
  5. Numbeo, Health Care Index for Country, 2016.
  6. “Life Expectancy in the U.S. Drops for First Time in Decades, Report Finds,” Health News from NPR, December 8, 2016.
  7. The Marshall Protocol Knowledge Base, Autoimmunity Research Foundation.
  8. Martha Rosenberg, “Obama’s Latest FDA Nominee: No Hidden Big Pharma Links, They Are All in Plain Sight,” CounterPunch, November 19, 2015.
  9. Sharyl Attkisson, “CDC Scientist: ‘We Scheduled Meeting to Destroy Vaccine Autism Study Documents’,” March 23, 2016.

Alex Schlegel on Imagination, the Brain and ‘the Mental Workspace’

By Rob Hopkins

Source: Resilience

What happens in the brain when we’re being imaginative?  Neuroscientists are moving away from the idea of what’s called ‘localisationism’ (the idea that each capacity of the brain is linked to a particular ‘area’ of the brain) towards the idea that what’s more important is to identify the networks that fire in order to enable particular activities or insights.  Alex Schlegel is a cognitive neuroscientist, which he describes as being about “trying to understand how the structure and function of the brain creates the mind and the consciousness we experience and everything that makes us human, like imagination”.

He recently co-published fascinating research entitled “Network structure and dynamics of the mental workspace” which appeared in Proceedings of the National Academy of Sciences, which identified what the authors called “the mental workspace”, the network that fires in the brain when we are being imaginative.  I spoke to Alex via Skype, and started by asking him to explain what the mental workspace is [here is the podcast of our full conversation, an edited transcript appears below].

This is maybe just a product of the historical moment we’re in with cognitive neuroscience research: most neuroscience research, I would say even now, is still focused on finding the neural correlate of some function. Where does language happen? Where does vision happen? Where does memory happen? Those kinds of things.

It was very easy to ask those questions when fMRI came around, because we could stick someone in the scanner, have them do a control task and then the real task, and see what part of the brain lights up in one case rather than the other. Those very well-controlled, reductionist paradigms produce these very clean blobs where something happens in one case versus the other. I think that contributed a lot to the story that there is one place in the brain for every function and we just have to map out those places.

But in reality, the brain is a complex system.  It works in a real world which is a complex environment, and in any kind of real behaviour that we engage in, the entire brain is going to be involved in one way or the other.  Especially when you start to get into these more complex abilities that are very hard to reduce to this highly controlled A versus B kind of thing.

To really understand a behaviour like imagination, it’s not that surprising that it’s going to be a complex, multi-network kind of phenomenon. I think we were able to show that primarily because the techniques in the field are advancing and we’re starting to figure out how to look at these behaviours in a more realistic way. One of the big limitations of cognitive neuroscience research right now, because of fMRI and the techniques we’ve had, is that we tend to think of behaviour as activating, or not activating, the brain.

When we’re doing analyses of brain activity, we’re looking for areas that become more active than others. This has been changing a lot in the last few years, but at least for the first fifteen or twenty years, that was one of the only ways we would look at brain activity. So, simplistically, it treats the brain like some other organ: it’s either buzzing or it’s not, and if it buzzes, language happens. But really the brain is a complex computational system.

It’s doing complex computations and information processing, and that’s not something you’re really going to see if you’re just looking for increased versus decreased activity in a large area. When we start to be able to look at the brain more in terms of the information it is processing, where we can see information and how we can see communication between different areas, then you can start to look at things like imagination, or the mental workspace, in a more complex light.

So how does that idea sit alongside the ideas firstly of the ‘Default Network’, which is often linked to creativity and imagination as well, and also to the idea that the hippocampus is the area that is essential to a healthy, functioning imagination?  Do those three ideas just fit seamlessly together, or are they heading off in different directions?

I can give you my opinion, which isn’t very well founded in any kind of data, but this is something that we’ve talked about a lot in the lab, and a suspicion that we had actually been thinking about how to test for a while. So the Default Mode Network was first seen as a network that becomes more active in between tasks. When we’re doing an fMRI experiment, what we’ll usually do is have some period where you’re doing the task and then a period where you’re just resting, so you can get baseline brain activity when you’re not doing anything. And the surprising result was that during rest periods, some areas of the brain actually become more active. And, you know, “Oh wow, the person’s not just sitting there blankly doing nothing.” The brain doesn’t just totally deactivate. They’re doing other stuff during those blank periods where there’s no stimulus on the screen.

From my personal experience, what you do in those rest periods is daydream. Your mind wanders. You think about what you’re going to do afterwards, or stuff that’s happened during the day. There’s been a lot of research since then to back that up. It seems to be a network that’s highly involved in daydreaming-like behaviour, social imagination, those kinds of things.

My opinion, or my suspicion, is that this illustrates how our term ‘imagination’ really encompasses a lot of different things. When you try to lump it all under this one mega-term, you’re going to miss out on a lot of the complexity, or subtlety. So what I suspect is going on is that there’s this more daydream-like mode of control over your inner space, where you’re not really consciously, volitionally directing yourself to have certain experiences; a default control network is more in charge during daydreaming.

When I daydream I’m not trying to think about anything; it’s just letting the thoughts come. That’s maybe part of what imagination is, but a very important part of imagination is you trying to imagine things, trying to direct yourself, thinking, “Well, what is the relationship between these two things?” or “How can I build community?” In that case you’re taking active volitional control over these systems. So that would be my suspicion of what’s going on.

How the results we found differ from the default mode network is that in our study we would show people some stimulus and say, “Rotate this 90 degrees clockwise”, so they had this fairly difficult task to do, and it was effortful. This more frontoparietal network probably took over then. And you see that a lot in other studies. The frontoparietal network, which I think they sometimes call an Executive Attention Network, directs things when you’re consciously trying to engage in some task; it takes over, and if you’re not doing anything, the default mode network takes over.

So they’re both different manifestations of the imagination?  Like an active and a more passive, less conscious version?  They’re two versions of the same thing, in a sense?

Yeah, I would think that.  It fits well with what I’ve seen.  There have been studies that show that they’re in some ways antagonistic or mutually inhibiting, the default mode network and this executive attention network…

It’s like oil and water, it’s one or the other? Or yin and yang, as I’ve read in some papers?

Right, but a simple way of describing these that people often resort to is that the Executive Attention Network is designed for attention to the outside world, and the Default Mode Network is attention to the inner space.  Where I would disagree with that, or suggest that that’s not the case, is that I think a better way to classify it would be that executive attention is more of this volitionally driven attention, which is usually associated with attention to the outside world.  And default mode network is more – I don’t know how to describe it exactly, but it’s more of this daydreaming network.  But the point is that your executive volitional attention can be driven to the inner space just as much as it can be driven to the outside world.

Is the mental workspace network the same kind of network that would be firing in people when they’re thinking about the future and trying to be imaginative about how the future could be?

Yeah, I would think so.  I think an important difference, or an important additional part that you might start to see if you’re thinking about imagining the future, is that practically most of the time when you’re imagining the future, you’re thinking about people, and social groups, and how to navigate those kinds of dynamics.

So I would guess that then you would get added into the mix all the social processing networks that we have.  That’s actually another thing that we’re thinking about how to look at, is that practically a big chunk of human cognition is spent thinking about your relationship with other people, and how to navigate that.  There’s a good argument to be made that that kind of complex processing space was one of the main drivers of us becoming who we are.  Because social cognition is some of the most complex cognition we do, trying to imagine what somebody’s thinking by looking at their facial expression, or imagine how do I resolve a conflict between these two people who are fighting.  Things like that.

We do have very specialised regions and networks in the brain that have evolved to do that kind of processing. So yeah, it’s a very interesting question: how would these mental workspace areas interact with those socially evolved areas? At least the ones we looked at had nothing to do with social processing; it was all, “Here’s this abstract shape. What does it look like if you flip it horizontally?”, things like that.

A lot of the research that I’ve been looking at is about how, when people are in states of trauma, or grow up in states of fear, the hippocampus visibly shrinks, cells in the hippocampus are burnt out, and people become less able to imagine the future. People get stuck in the present, and one of the indicators, particularly with post-traumatic stress, is that inability to look forward, that inability to imagine a future. Do you have any knowledge of, or any speculation about, what happens to the mental workspace when people are in states of trauma or fear?

Definitely no data, only speculation. As with anything real and interesting involving humans, it’s going to be incredibly complex, so it would be very difficult, maybe impossible, to distil it down to simple, understandable things that are happening in the brain. But what I would guess is that people who are in stressful situations, experiencing trauma, tend to focus, as you were hinting at, on the present. What’s there immediately? How do I survive this day?

You don’t tend to think much about planning for the future or synthesising everything that’s happened to you in the past; you just react in the moment, because you don’t know what the next moments are going to be like. There’s no more cognitive load you can deal with, given all the stress you have. So I would guess that, for one, you’re not really synthesising or processing your experiences into something that can be brought to bear on decisions in the future.

And you’re not exercising those muscles of planning far into the future. Just like any other muscle in the body, if you don’t practise the skills and don’t use various parts of your brain, they’re going to atrophy. They’re not going to develop the way they would if you did use them. In that sense it seems perfectly understandable, and not that surprising, that the areas and networks we found associated with projecting oneself into the future, or imagining things that don’t exist, won’t be as developed in people who, for whatever reason, aren’t doing that kind of thing regularly in their lives as they would be in people who are happy and healthy and imaginative.

The paper that Kyung Hee Kim published in 2010, ‘The Creativity Crisis’, suggested that we might be seeing a decline in our collective imagination. Do you have any thoughts on why that might be, or on what some of the processes at work here might be?

I could speculate a couple of things.  The first thing that pops to mind obviously is education.  How we think about the educational system, how we train children.  And I don’t know about 1990 in particular but definitely starting in 1999 when we became test-crazed, that would be a very obvious culprit.

One thing to think about with the Torrance Test, and pretty much all of these standardised tests of creativity we use, is that one of the major components determining the outcome is this divergent-thinking idea: how many ideas can you come up with? This has, I think, fairly detrimentally become one of the working definitions of creativity in psychology research: “how much?”, not really focusing on quality, just using how many ideas you can think of as a stand-in for how creative someone is.

The Torrance Test is better because it does get into other dimensions as well, but still, some of the major dimensions determining the score are about fluency: when you’re doing these drawings, how many components are there in the drawing? That kind of thing. So, for instance, if there were educational trends starting in the 1990s and continuing to now that were leading people to converge rather than diverge (“What’s the one right answer?” versus “What are lots of possible answers?”), then that could definitely lead to the changes we’ve been seeing in the tests.

Even if that were the case, though, is that really a problem? Obviously we want people to be able to think of lots of possibilities, but if it’s just that people brought up in an educational system where they’ve been taking standardised tests all the time, trying to figure out which of the four bubbles is the right one to fill in, then that could just be a habit they’ve developed that carries over to these tests. I don’t know exactly.

Another idea, maybe related to this, is that we’re definitely much less idle than we were in the past. We lament all the time how overscheduled kids are. They go from soccer practice to band practice to art class, to blah, blah, blah, trying to fill up their résumés for college or whatever. So if somebody is just constantly buzzing, busy, never just stopping and daydreaming and throwing rocks in creeks or whatever, then again, that’s a habit they’re not going to have developed and not going to be able to use as well.

This idleness, or giving up control to the Default Mode Network maybe, if you will, letting those ideas come in, exploring possibilities, those are things that I think often come out of boredom. And if you’re never bored, you’re never really letting those processes happen.  So that would be another thing to think about.

So if somebody is less imaginative, is that because, when the mental workspace fires, it includes fewer places, or joins them up less vigorously? I don’t have all the terminology. Does it all fire, but to fewer places? Or does it fire less strongly to all those different places?

I think it would be basically everything, to give you a terrible answer. This is where we’re really getting at how imagination is a very, very complex process that we distil into a single word, when really there are thousands of parts coming together.

For instance, if you can imagine visual experiences more or less vividly, that’s going to play a role. Somebody who can form very vivid mental images of things is probably going to have an easier time recombining them than somebody who really struggles to form a visual image. On the flip side, there’s a lot of circumstantial evidence that people tend to go to one end or the other of being very visual people, and I consider myself on those…  When I think, I tend to think a lot in terms of visual representations. So it’s very easy for me to do the kinds of tasks that I ask subjects to do: you know, “Here’s this weird random shape, what would it look like if it was rotated 90 degrees?”

Some people have a really hard time doing that kind of stuff, though. They’re very smart people, but they’re just terrible at mentally manipulating images. But if you have them think about other things, like more verbal, logical representations, they’re really good at that. So even trying to talk about the mental workspace network as one static network of areas in the brain is probably not accurate, because different people will have different connections, or different parts of it will be more active than others.

When I’m trying to mentally imagine things, for some people like me, that might involve mental or visual images, and that’s the way I think about it, but for other people it might involve much more the language areas of the brain, exercising that language network in a more mental way.  And that might lead to strengths for some people versus others, and vice versa, depending on what kinds of tests you’re trying to do, or whether you’re a verbal person that’s being forced to try to do something visual, or vice versa.

So given that the networks involved are complex information processing systems, there’s any number of ways they can differ or fail, or become strengthened or atrophied.

One of the questions I’ve asked everybody that I’ve interviewed has been if you had been elected last year as the President on a platform of ‘Make America Imaginative Again’, if you had thought actually one of the most important things we need is to have young people have a society that really cherishes the imagination, an education system where people come out really fired up and passionate, what might be some of the things you would do in your first 100 days in office?

First 100 days? Well, I think the real solutions are more like 20-year solutions. So you can start in 100 days, I guess, but you definitely won’t solve it in 100 days. For me it all comes down to how we choose to educate people. I come at this from the perspective of the US education system, so one thing is that we don’t really treat teaching as a profession in the same way that we do medicine or law.

I would say we need the equivalent training, residencies and professional degrees for teachers that we would have for any other profession as important as teaching. Obviously we shouldn’t be focused on tests in the way that we are. If you teach to tests, and you teach to the kinds of competencies a child should achieve by fifth grade, you’re going to be ignoring all the things that are hard to measure, like imagination, creativity and curiosity. How do you evaluate whether a kid’s curious? I don’t know.

One of the changes I would want to see is that we trust more that the outcomes we want will come, rather than needing to see them happen, because if you need to see a result, then you’ll only focus on the things that you can see. And for a lot of what education really does, it’s very hard to measure in any reliable way. If your goal is to create a society of people who are civically engaged, curious, creative and compassionate, that’s all stuff where you just have to set up a system to do it, and hope that the society you create will be the outcome you wanted. That frees you to focus on those things, and not on maths skills, reading skills, that kind of thing.

So in the first 100 days, what do you do? I don’t know. One concrete thing you could do is try to reorganise the teacher training system to make it more professionally aligned.

Like they have in Finland, where teachers are trained to Masters level, and there’s then no testing of teachers in schools. They are simply empowered to teach, and they have the most play and the shortest school hours of any country in Europe, yet they consistently achieve the best results and the brightest students.

Maybe that would be the first thing we could do, just copy Scandinavia.

The Stomach-churning Violence of Monsanto, Bayer and the Agrochemical Oligopoly

By Colin Todhunter

Source: RINF

As humans, we have evolved with the natural environment over millennia. We have learned what to eat and what not to eat, what to grow and how to grow it and our diets have developed accordingly. We have hunted, gathered, planted and harvested. Our overall survival as a species has been based on gradual, emerging relationships with the seasons, insects, soil, animals, trees and seeds. And out of these relationships, we have seen the development of communities whose rituals and bonds have a deep connection with food production and the natural environment.

However, over the last couple of generations, agriculture and food production have changed more than they had over previous millennia. These changes have involved massive social upheaval as communities and traditions have been uprooted, and have entailed modifying what we eat, how we grow our food and what we apply to it. All of this has been driven by geopolitical concerns and powerful commercial interests with their proprietary chemicals and patented seeds. Neoliberal globalisation is accelerating these changes as farmers are encouraged to produce for global supply chains dominated by transnational agribusiness.

Certain crops are now genetically engineered, the range of crops we grow has become less diverse, synthetic biocides have been poured on crops and soil and our bodies have been subjected to a chemical bombardment. We have arrived at a point where we have lost touch with our deep-rooted microbiological and social connection with nature and have developed an arrogance that has placed ‘man’ above the environment and all other species. As a result, we have paid an enormous price in terms of social, environmental and health-related devastation.

Despite the promise and potential of science, it has too often in modern society become a tool of vested interests, an ideology wrapped in the vestiges of authority and the ‘superstition’ that its corporate-appointed priesthood should not be challenged nor questioned. Instead of liberating humankind, it has now too often become a tool of deception in the hands of companies like Monsanto, Bayer and Syngenta which make up the oligopoly that controls what is an increasingly globalised system of modern food and agriculture.

These corporations have successfully instituted the notion that the mass application of biocides, monocropping and industrial agriculture are necessary and desirable. They are not. However, these companies have used their science and propaganda to project certainty in order to hide the fact that they have no real idea what their products and practices are doing to human health or the environment (and in cases when they do know, they do their best to cover it up or hide behind the notion of ‘commercial confidentiality‘).

Based on their limited, tainted studies and co-opted version of science, they say with certainty that, for example, genetically engineered food and glyphosate are ‘safe’. And when inconvenient truths do emerge, they will mobilise their massive lobbying resources to evade regulations, they will seek to hide the dangers of their products or they will set out to destroy scientists whose findings challenge their commercial bottom line.

Soil microbiologists are still trying to fully comprehend soil microbes and how they function as an integrated network in relation to plants. The agrochemical sector has little idea of how their biocides have affected soils. It merely churns out public relations spin that their inputs are harmless for soil, plants and human health. Such claims are not based on proper, in-depth, long-term studies. They are based on a don’t look, don’t find approach or a manipulation of standards and procedures that ensure their products make it on to the commercial market and stay there. The devastating impacts on soil are increasingly clear to see.

And what are these biocides doing to us as humans? Numerous studies have linked the increase in pesticide use with spiralling rates of ill health. Kat Carroll of the National Health Federation is concerned about the impacts on human gut bacteria, which play a big role in how our organs function and in our neurological health. The gut microbiome can contain up to six pounds of bacteria and is what Carroll calls ‘human soil’. She says that with their agrochemicals and food additives, powerful companies are attacking this ‘soil’ and with it the sanctity of the human body.

And her concerns seem valid. Many important neurotransmitters are located in the gut. Aside from affecting the functioning of major organs, these transmitters affect our moods and thinking. Feed gut bacteria a cocktail of biocides and is it any surprise that many diseases are increasing?

For instance, findings published in the journal ‘Translational Psychiatry’ provide strong evidence that gut bacteria can have a direct physical impact on the brain. Alterations in the composition of the gut microbiome have been implicated in a wide range of neurological and psychiatric conditions, including autism, chronic pain, depression, and Parkinson’s Disease.

Environmental campaigner Dr Rosemary Mason has written extensively on the impacts of agrochemicals (especially glyphosate) on humans, not least during child and adolescent development. In her numerous documents and papers, she cites a plethora of data and studies that link the use of agrochemicals with various diseases and ailments. She has also noted the impact of these chemicals on the human gut microbiome.

Writing in The Guardian, Mo Costandi discusses the importance of gut bacteria and their balance. In adolescence the brain undergoes a protracted period of heightened neural plasticity, during which large numbers of synapses are eliminated in the prefrontal cortex and a wave of ‘myelination’ sweeps across this part of the brain. These processes refine the circuitry in the prefrontal cortex and increase its connectivity to other brain regions. Myelination is also critical for normal, everyday functioning of the brain. Myelin increases a nerve fibre’s conduction velocity by up to a hundred times, and so when it breaks down, the consequences can be devastating.

Other recent work shows that gut microbes control the maturation and function of microglia, the immune cells that eliminate unwanted synapses in the brain; age-related changes to gut microbe composition might regulate myelination and synaptic pruning in adolescence and could, therefore, contribute to cognitive development. Upset those changes and, as Mason argues, there will be serious implications for children and adolescents. Mason places glyphosate at the core of the ailments and disorders currently affecting young people in Wales and the UK in general.

Yet we are still being subjected to an unregulated cocktail of agrochemicals which end up interacting with each other in the gut. Regulatory agencies and governments appear to work hand in glove with the agrochemical sector.

Carol Van Strum has released documents indicating collusion between the manufacturers of dangerous chemicals and regulatory bodies. Evaggelos Vallianatos has highlighted the massive fraud surrounding the regulation of biocides and the wide scale corruption at laboratories that were supposed to test these chemicals for safety. Many of these substances were not subjected to what was deemed proper testing in the first place yet they remain on the market. Shiv Chopra has also highlighted how various dangerous products were allowed on the commercial market and into the food chain due to collusion between these companies and public officials.

Powerful transnational corporations are using humanity as their collective guinea pig. But those who question them or their corporate science are automatically labelled anti-science and accused of committing crimes against humanity because they are preventing their products from being commercialised ‘to help the poor or hungry’. Such attacks on critics by company mouthpieces who masquerade as public officials, independent scientists or independent journalists are mere spin. They are, moreover, based on the sheer hypocrisy that these companies (owned and controlled by elite interests) have humanity’s and the environment’s best interests at heart.

Many of these companies have historically profited from violence. Unfortunately, that character persists. They directly profit on the back of militarism, whether as a result of the US-backed ‘regime change’ in Ukraine or the US invasion of Iraq. They also believe they can cajole (poison) nature by means of chemicals and bully governments and attack critics, while rolling out propaganda campaigns for public consumption.

Whether it involves neocolonialism and the destruction of indigenous practices and cultures under the guise of ‘development’, the impoverishment of farmers in India, the twisting and writing of national and international laws, the destruction of rural communities, the globalisation of bad food and illness, the deleterious impacts on health and soil, the hollowing out of public institutions and the range of human rights abuses we saw documented during The Monsanto Tribunal, what we are witnessing is structural violence in many forms.

Pesticides are in fact “a global human rights concern” and are in no way vital to ensuring food security. Ultimately, what we see is ignorance, arrogance and corruption masquerading as certainty and science.

“… when we wound the planet grievously by excavating its treasures – the gold, mineral and oil, destroy its ability to breathe by converting forests into urban wastelands, poison its waters with toxic wastes and exterminate other living organisms – we are in fact doing all this to our own bodies… all other species are to be enslaved or driven to extinction if need be in the interests of human ‘progress’… we are part of the same web of life – where every difference we construct artificially between ‘them’ and ‘us’ adds only one more brick to the tombstone of humankind itself.” – from ‘Microbes of the World Unite!’ by Satya Sagar

The Pentagon’s New Wonder Weapons for World Dominion

Or Buck Rogers in the 21st Century

By Alfred McCoy

Source: The Unz Review

[This piece has been adapted and expanded from Alfred W. McCoy’s new book, In the Shadows of the American Century: The Rise and Decline of U.S. Global Power.]

Not quite a century ago, on January 7, 1929, newspaper readers across America were captivated by a brand-new comic strip, Buck Rogers in the 25th Century. It offered the country its first images of space-age death rays, atomic explosions, and inter-planetary travel.

“I was twenty years old,” World War I veteran Anthony “Buck” Rogers told readers in the very first strip, “surveying the lower levels of an abandoned mine near Pittsburgh… when suddenly… gas knocked me out. But I didn’t die. The peculiar gas… preserved me in suspended animation. Finally, another shifting of strata admitted fresh air and I revived.”

Staggering out of that mine, he finds himself in the 25th century surrounded by flying warriors shooting ray guns at each other. A Mongol spaceship overhead promptly spots him on its “television view plate” and fires its “disintegrator ray” at him. He’s saved from certain death by a flying woman warrior named Wilma who explains to him how this all came to be.

“Many years ago,” she says, “the Mongol Reds from the Gobi Desert conquered Asia from their great airships held aloft by gravity Repellor Rays. They destroyed Europe, then turned toward peace-loving America.” As their disintegrator beams boiled the oceans, annihilated the U.S. Navy, and demolished Washington, D.C. in just three hours, “government ceased to exist, and mobs, reduced to savagery, fought their way out of the cities to scatter and hide in the country. It was the death of a nation.” While the Mongols rebuilt 15 cities as centers of “super scientific magnificence” under their evil emperor, Americans led “hunted lives in the forests” until their “undying flame of freedom” led them to recapture “lost science” and “once more strike for freedom.”

After a year of such cartoons filled with the worst of early-twentieth-century Asian stereotypes, just as Wilma is clinging to the airship of the Mongol Viceroy as it speeds across the Pacific, a mysterious metallic orb appears high in the sky and fires death rays, sending the Mongol ship “hissing into the sea.” With her anti-gravity “inertron” belt, the intrepid Wilma dives safely into the waves only to have a giant metal arm shoot out from the mysterious orb and pull her on board to reveal — “Horrors! What strange beings!” — Martians!

With that strip, Buck Rogers in the 25th Century moved from Earth-bound combat against racialized Asians into space wars against monsters from other planets that, over the next 70 years, would take the strip into comic books, radio broadcasts, feature films, television serials, video games, and the country’s collective consciousness. It would offer defining visions of space warfare for generations of Americans.

Back in the 21st Century

Now imagine us back in the 21st century. It’s 2030 and an American “triple canopy” of pervasive surveillance systems and armed drones already fills the heavens from the lower stratosphere to the exo-atmosphere. It can deliver its weaponry anywhere on the planet with staggering speed, knock out enemy satellite communications at a moment’s notice, or follow individuals biometrically for great distances. It’s a wonder of the modern age. Along with the country’s advanced cyberwar capacity, it’s also the most sophisticated military information system ever created and an insurance policy for global dominion deep into the twenty-first century.

That is, in fact, the future as the Pentagon imagines it and it’s actually under development, even though most Americans know little or nothing about it. They are still operating in another age, as was Mitt Romney during the 2012 presidential debates when he complained that “our Navy is smaller now than at any time since 1917.”

With words of withering mockery, President Obama shot back: “Well, Governor, we also have fewer horses and bayonets, because the nature of our military’s changed… the question is not a game of Battleship, where we’re counting ships. It’s what are our capabilities.” Obama then offered just a hint of what those capabilities might be: “We need to be thinking about cyber security. We need to be talking about space.”

Indeed, working in secrecy, the Obama administration was presiding over a revolution in defense planning, moving the nation far beyond bayonets and battleships to cyberwarfare and the future full-scale weaponization of space. From stratosphere to exosphere, the Pentagon is now producing an armada of fantastical new aerospace weapons worthy of Buck Rogers.

In 2009, building on advances in digital surveillance under the Bush administration, Obama launched the U.S. Cyber Command. Its headquarters were set up inside the National Security Agency (NSA) at Fort Meade, Maryland, and a cyberwar center staffed by 7,000 Air Force employees was established at Lackland Air Base in Texas. Two years later, the Pentagon moved beyond conventional combat on air, land, or sea to declare cyberspace both an offensive and defensive “operational domain.” In August, despite his wide-ranging attempt to purge the government of anything connected to Barack Obama’s “legacy,” President Trump implemented his predecessor’s long-delayed plan to separate that cyber command from the NSA in a bid to “strengthen our cyberspace operations.”

And what is all this technology being prepared for? In study after study, the intelligence community, the Pentagon, and related think tanks have been unanimous in identifying the main threat to future U.S. global hegemony as a rival power with an expanding economy, a strengthening military, and global ambitions: China, the home of those denizens of the Gobi Desert who would, in that old Buck Rogers fable, destroy Washington four centuries from now. Given that America’s economic preeminence is fading fast, breakthroughs in “information warfare” might indeed prove Washington’s best bet for extending its global hegemony further into this century — but don’t count on it, given the history of techno-weaponry in past wars.

Techno-Triumph in Vietnam

Ever since the Pentagon with its 17 miles of corridors was completed in 1943, that massive bureaucratic maze has presided over a creative fusion of science and industry that President Dwight Eisenhower would dub “the military-industrial complex” in his farewell address to the nation in 1961. “We can no longer risk emergency improvisation of national defense,” he told the American people. “We have been compelled to create a permanent armaments industry of vast proportions” sustained by a “technological revolution” that is “complex and costly.” As part of his own contribution to that complex, Eisenhower had overseen the creation of both the National Aeronautics and Space Administration, or NASA, and a “high-risk, high-gain” research unit called the Advanced Research Projects Agency, or ARPA, that later added the word “Defense” to its name and became DARPA.

For 70 years, this close alliance between the Pentagon and major defense contractors has produced an unbroken succession of “wonder weapons” that at least theoretically gave it a critical edge in all major military domains. Even when defeated or fought to a draw, as in Vietnam, Iraq, and Afghanistan, the Pentagon’s research matrix has demonstrated a recurring resilience that could turn disaster into further technological advance.

The Vietnam War, for example, was a thoroughgoing tactical failure, yet it would also prove a technological triumph for the military-industrial complex. Although most Americans remember only the Army’s soul-destroying ground combat in the villages of South Vietnam, the Air Force fought the biggest air war in military history there and, while it too failed dismally and destructively, it turned out to be a crucial testing ground for a revolution in robotic weaponry.

To stop truck convoys that the North Vietnamese were sending through southern Laos into South Vietnam, the Pentagon’s techno-wizards combined a network of sensors, computers, and aircraft in a coordinated electronic bombing campaign that, from 1968 to 1973, dropped more than a million tons of munitions — equal to the total tonnage for the whole Korean War — in that limited area. At a cost of $800 million a year, Operation Igloo White laced that narrow mountain corridor with 20,000 acoustic, seismic, and thermal sensors that sent signals to four EC-121 communications aircraft circling ceaselessly overhead.

At a U.S. air base just across the Mekong River in Thailand, Task Force Alpha deployed two powerful IBM 360/65 mainframe computers, equipped with history’s first visual display monitors, to translate all those sensor signals into “an illuminated line of light” and so launch jet fighters over the Ho Chi Minh Trail where computers discharged laser-guided bombs automatically. Bristling with antennae and filled with the latest computers, its massive concrete bunker seemed, at the time, a futuristic marvel to a visiting Pentagon official who spoke rapturously about “being swept up in the beauty and majesty of the Task Force Alpha temple.”

However, after more than 100,000 North Vietnamese troops with tanks, trucks, and artillery somehow moved through that sensor field undetected for a massive offensive in 1972, the Air Force had to admit that its $6 billion “electronic battlefield” was an unqualified failure. Yet that same bombing campaign would prove to be the first crude step toward a future electronic battlefield for unmanned robotic warfare.

In the pressure cooker of history’s largest air war, the Air Force also transformed an old weapon, the “Firebee” target drone, into a new technology that would rise to significance three decades later. By 1972, the Air Force could send an “SC/TV” drone, equipped with a camera in its nose, up to 2,400 miles across communist China or North Vietnam while controlling it via a low-resolution television image. The Air Force also made aviation history by test firing the first missile from one of those drones.

The air war in Vietnam was also an impetus for the development of the Pentagon’s global telecommunications satellite system, another important first. After the Initial Defense Satellite Communications System launched seven orbital satellites in 1966, ground terminals in Vietnam started transmitting high-resolution aerial surveillance photos to Washington — something NASA called a “revolutionary development.” Those images proved so useful that the Pentagon quickly launched an additional 21 satellites and soon had the first system that could communicate from anywhere on the globe. Today, according to an Air Force website, the third phase of that system provides secure command, control, and communications for “the Army’s ground mobile forces, the Air Force’s airborne terminals, Navy ships at sea, the White House Communications Agency, the State Department, and special users” like the CIA and NSA.

At great cost, the Vietnam War marked a watershed in Washington’s global information architecture. Turning defeat into innovation, the Air Force had developed the key components — satellite communications, remote sensing, computer-triggered bombing, and unmanned aircraft — that would merge 40 years later into a new system of robotic warfare.

The War on Terror

Facing another set of defeats in Afghanistan and Iraq, the twenty-first-century Pentagon again accelerated the development of new military technologies. After six years of failing counterinsurgency campaigns in both countries, the Pentagon discovered the power of biometric identification and electronic surveillance to help pacify sprawling urban areas. And when President Obama later conducted his troop “surge” in Afghanistan, that country became a frontier for testing and perfecting drone warfare.

Launched as an experimental aircraft in 1994, the Predator drone was deployed in the Balkans that very year for photo-reconnaissance. In 2000, it was adapted for real-time surveillance under the CIA’s Operation Afghan Eyes. It would be armed with the tank-killing Hellfire missile for the agency’s first lethal strike in Kandahar, Afghanistan, in October 2001. Seven years later, the Air Force introduced the larger MQ-9 “Reaper” drone with a flying range of 1,150 miles when fully loaded with Hellfire missiles and GBU-30 bombs, allowing it to strike targets almost anywhere in Europe, Africa, or Asia. To fulfill its expanding mission as Washington’s global assassin, the Air Force plans to have 346 Reapers in service by 2021, including 80 for the CIA.

Between 2004 and 2010, total flying time for all unmanned aerial vehicles rose sharply from just 71 hours to 250,000 hours. By 2011, there were already 7,000 drones in a growing U.S. armada of unmanned aircraft. So central had they become to its military power that the Pentagon was planning to spend $40 billion to expand their numbers by 35% over the following decade. To service all this growth, the Air Force was training 350 drone pilots, more than all its bomber and fighter pilots combined.

Miniature or monstrous, hand-held or runway-launched, drones were becoming so commonplace and so critical for so many military missions that they emerged from the war on terror as one of America’s wonder weapons for preserving its global power. Yet the striking innovations in drone warfare are, in the long run, likely to be overshadowed by stunning aerospace advances in the stratosphere and exosphere.

The Pentagon’s Triple Canopy

As in Vietnam, despite bitter reverses on the ground in Iraq and Afghanistan, Washington’s recent wars have been catalysts for the fusion of aerospace, cyberspace, and artificial intelligence into a new military regime of robotic warfare.

To effect this technological transformation, starting in 2009 the Pentagon planned to spend $55 billion annually to develop robotics for a data-dense interface of space, cyberspace, and terrestrial battle space. Through an annual allocation for new technologies reaching $18 billion in 2016, the Pentagon had, according to the New York Times, “put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power,” exemplified by future drones that will be capable of identifying and eliminating enemy targets without recourse to human overseers. By 2025, the United States will likely deploy advanced aerospace and cyberwarfare to envelop the planet in a robotic matrix theoretically capable of blinding entire armies or atomizing an individual insurgent.

During 15 years of nearly limitless military budgets for the war on terror, DARPA has spent billions of dollars trying to develop new weapons systems worthy of Buck Rogers that usually die on the drawing board or end in spectacular crashes. Through this astronomically costly process of trial and error, Pentagon planners seem to have come to the slow realization that established systems, particularly drones and satellites, could in combination create an effective aerospace architecture.

Within a decade, the Pentagon apparently hopes to patrol the entire planet ceaselessly via a triple-canopy aerospace shield that would reach from sky to space and be secured by an armada of drones with lethal missiles and Argus-eyed sensors, monitored through an electronic matrix and controlled by robotic systems. It’s even possible to take a tour of the super-secret realm where future space wars will be fought, if the Pentagon’s dreams become reality, by exploring both DARPA’s websites and those of its various defense contractors.

Drones in the Lower Stratosphere

At the bottom tier of this emerging aerospace shield in the lower stratosphere (about 30,000 to 60,000 feet high), the Pentagon is working with defense contractors to develop high-altitude drones that will replace manned aircraft. To supersede the manned U-2 surveillance aircraft, for instance, the Pentagon has been preparing a projected armada of 99 Global Hawk drones at a mind-boggling cost of $223 million each, seven times the price of the current Reaper model. Its extended 116-foot wingspan (bigger than that of a Boeing 737) is geared to operating at 60,000 feet. Each Global Hawk is equipped with high-resolution cameras, advanced electronic sensors, and efficient engines for a continuous 32-hour flight, which means that it can potentially survey up to 40,000 square miles of the planet’s surface daily. With its enormous bandwidth needed to bounce a torrent of audio-visual data between satellites and ground stations, however, the Global Hawk, like other long-distance drones in America’s armada, may prove vulnerable to a hostile hack attack in some future conflict.

The sophistication, and limitations, of this developing aerospace technology were exposed in December 2011 when an advanced RQ-170 Sentinel drone suddenly landed in Iran, whose officials then released photos of its dart-shaped, 65-foot wingspan meant for flights up to 50,000 feet. Under a highly classified “black” contract, Lockheed Martin had built 20 of these espionage drones at a cost of about $200 million with radar-evading stealth and advanced optics that were meant to provide “surveillance support to forward-deployed combat forces.”

So what was this super-secret drone doing in hostile Iran? By simply jamming its GPS navigation system, whose signals are notoriously susceptible to hacking, Iranian engineers took control of the drone and landed it at a local base of theirs with the same elevation as its home field in neighboring Afghanistan. Although Washington first denied the capture, the event sent shock waves down the Pentagon’s endless corridors.

In the aftermath of this debacle, the Defense Department worked with one of its top contractors, Northrop Grumman, to accelerate development of its super-stealth RQ-180 drone with an enormous 130-foot wingspan, an extended range of 1,200 miles, and 24 hours of flying time. Its record cost, $300 million a plane, could be thought of as inaugurating a new era of lavishly expensive war-fighting drones.

Simultaneously, the Navy’s dart-shaped X-47B surveillance and strike drone has proven capable both of in-flight refueling and of carrying up to 4,000 pounds of bombs or missiles. Three years after it passed its most crucial test by a joy-stick landing on the deck of an aircraft carrier, the USS George H.W. Bush in July 2013, the Navy announced that this experimental drone would enter service sometime after 2020 as the “MQ-25 Stingray” aircraft.

Dominating the Upper Stratosphere

To dominate the higher altitudes of the upper stratosphere (about 70,000 to 160,000 feet), the Pentagon has pushed its contractors to the technological edge, spending billions of dollars on experimentation with fanciful, futuristic aircraft.

For more than 20 years, DARPA pursued the dream of a globe-girding armada of solar-powered drones that could fly ceaselessly at 90,000 feet and would serve as the equivalent of low-flying satellites, that is, as platforms for surveillance intercepts or signals transmission. With an arching 250-foot wingspan covered with ultra-light solar panels, the “Helios” drone achieved a world-record altitude of 98,000 feet in 2001 before breaking up in a spectacular crash two years later. Nonetheless, DARPA launched the ambitious “Vulture” project in 2008 to build solar-powered aircraft with huge wingspans of 300 to 500 feet, capable of ceaseless flight at 90,000 feet for five years at a time. After DARPA abandoned the project as impractical in 2012, Google and Facebook took over the technology with the goal of building future platforms for their customers’ Internet connections.

Since 2003, both DARPA and the Air Force have struggled to shatter the barrier for suborbital speeds by developing the dart-shaped Falcon Hypersonic Cruise Vehicle. Flying at an altitude of 100,000 feet, it was expected to “deliver 12,000 pounds of payload at a distance of 9,000 nautical miles from the continental United States in less than two hours.” Although the first test launches in 2010 and 2011 crashed in midflight, they did briefly reach an amazing 13,000 miles per hour, 22 times the speed of sound.

As often happens, failure produced progress. In the wake of the Falcon’s crashes, DARPA has applied its hypersonics to develop a missile capable of penetrating China’s air-defenses at an altitude of 70,000 feet and a speed of Mach 5 (about 3,300 miles per hour).

Simultaneously, Lockheed’s secret “Skunk Works” experimental unit is using the hypersonic technology to develop the SR-72 unmanned surveillance aircraft as a successor to its SR-71 Blackbird, the world’s fastest manned aircraft. When operational by 2030, the SR-72 is supposed to fly at about 4,500 mph, double the speed of its manned predecessor, with an extreme stealth fuselage making it undetectable as it crosses any continent in an hour at 80,000 feet scooping up electronic intelligence.

Space Wars in the Exosphere

In the exosphere, 200 miles above Earth, the age of space warfare dawned in April 2010 when the Defense Department launched the robotic X-37B spacecraft, just 29 feet long, into orbit for a seven-month mission. By removing pilots and their costly life-support systems, the Air Force’s secretive Rapid Capabilities Office had created a miniaturized, militarized space drone with thrusters to elude missile attacks and a cargo bay for possible air-to-air missiles. By the time the second X-37B prototype landed in June 2012, its flawless 15-month flight had established the viability of “robotically controlled reusable spacecraft.”

In the exosphere where these space drones will someday roam, orbital satellites will be the prime targets in any future world war. The vulnerability of U.S. satellite systems became obvious in 2007 when China used a ground-to-air missile to shoot down one of its own satellites in orbit 500 miles above the Earth. A year later, the Pentagon accomplished the same feat, firing an SM-3 missile from a Navy cruiser to score a direct hit on a U.S. satellite 150 miles high.

Unsuccessful in developing an advanced F-6 satellite, despite spending over $200 million in an attempt to split the module into more resilient microwave-linked components, the Pentagon has opted instead to upgrade its more conventional single-module satellites, such as the Navy’s five interconnected Mobile User Objective Systems (MUOS) satellites. These were launched between 2013 and 2016 into geostationary orbits for communications with aircraft, ships, and motorized infantry.

Reflecting its role as a player in the preparation for future and futuristic wars, the Joint Functional Component Command for Space, established in 2006, operates the Space Surveillance Network. To prevent a high-altitude attack on America, this worldwide system of radar and telescopes in 29 remote locations like Ascension Island and Kwajalein Atoll makes about 400,000 observations daily, monitoring every object in the skies.

The Future of Wonder Weapons

By the mid-2020s, if the military’s dreams are realized, the Pentagon’s triple-canopy shield should be able to atomize a single “terrorist” with a missile strike or, with equal ease, blind an entire army by knocking out all of its ground communications, avionics, and naval navigation. It’s a system that, were it to work as imagined, just might allow the United States a diplomatic veto of global lethality, an equalizer for any further loss of international influence.

But as in Vietnam, where aerospace wonders could not prevent a searing defeat, history offers some harsh lessons when it comes to technology trumping insurgencies, no less the fusion of forces (diplomatic, economic, and military) whose sum is geopolitical power. After all, the Third Reich failed to win World War II even though it had amazingly advanced “wonder weapons,” including the devastating V-2 missile, the unstoppable Me-262 jet fighter, and the ship-killing Hs-293 guided missile.

Washington’s dogged reliance on and faith in military technology to maintain its hegemony will certainly guarantee endless combat operations with uncertain outcomes in the forever war against terrorists along the ragged edge of Asia and Africa and incessant future low-level aggression in space and cyberspace. Someday, it may even lead to armed conflict with rivals China and Russia.

Whether the Pentagon’s robotic weapon systems will offer the U.S. an extended lease on global hegemony or prove a fantasy plucked from the frames of a Buck Rogers comic book, only the future can tell. Whether, in that moment to come, America will play the role of the indomitable Buck Rogers or the Martians he eventually defeated is another question worth asking. One thing is likely, however: that future is coming far more quickly and possibly far more painfully than any of us might imagine.

A New Concept of Consciousness

By Ervin Laszlo

Source: Reality Sandwich

The following is excerpted from The Intelligence of the Cosmos by Ervin Laszlo, published by Inner Traditions.

What about mind? If the world is vibration, are mind and consciousness also forms of vibration? Or, on the contrary, are all vibrations, the observed world itself, a manifestation of mind?

Although it is true that when all is said and done all we know is our consciousness, it is also true that we do not know our own consciousness, not to mention the consciousness of anyone else. We do not know what consciousness really is or how it is related to the brain. Since our consciousness is the basis of our identity, we do not know who we really are. Are we a body that generates the stream of sensations we call consciousness, or are we a consciousness associated with a body that displays it? Do we have consciousness, or are we consciousness? Consciousness could be a kind of illusion, a set of sensations produced by the workings of our brain. But it could also be that our body is a vehicle, a transmitter of a consciousness that is the basic reality of the world. The world could be material, and mind could be an illusion. Or the world could be consciousness, and the materiality of the world could be the illusion.

Both of these possibilities have been explored in the history of philosophy, and today we are a step closer than before to understanding which of them is true. There are important insights emerging at the expanding frontiers where physical science joins consciousness research.

On the basis of a growing series of observations and experiments, a new consensus is emerging. It is that “my” consciousness is not just my consciousness, meaning the consciousness produced by my brain, any more than a program transmitted over the air would be a program produced by my TV set. Just as a program broadcast over the air continues to exist when my TV set is turned off, my consciousness continues to exist when my brain is turned off.

Consciousness is a real element in the real world. The brain and body do not produce it; they display it. And it does not cease when life in the body does. Consciousness is a reflection, a projection, a manifestation of the intelligence that “in-forms” the world.

Mystics and shamans have known that this is true for millennia, and artists and spiritual people know it to this day. Its rediscovery at the leading edge of science augurs a profound shift in our view of the world. It overcomes the answer the now outdated materialist science gives to the question regarding the nature of mind: the answer according to which consciousness is an epiphenomenon, a product or by-product of the workings of the brain. In that case, the brain would be like an electricity-generating turbine. The turbine is material, while the current it generates is not (or not strictly) material. In the same way, the brain could be material, even if the consciousness it generates proves to be something that is not quite material.

On first sight, this makes good sense. On a second look, however, the materialist concept encounters major problems. First, a conceptual problem. How could a material brain give rise to a truly immaterial stream of sensations? How could anything that is material produce anything immaterial? In modern consciousness research this is known as the “hard problem.” It has no reasonable answer. As researchers point out, we do not have the slightest idea how “matter” could produce “mind.” One is a measurable entity with properties such as hardness, extension, force, and the like, and the other is an ineffable series of sensations with no definite location in space and an ephemeral presence in time.

Fortunately, the hard problem does not need to be solved: it is not a real problem. There is another possibility: mind is a real element in the real world and is not produced by the brain; it is manifested and displayed by the brain.

 

Mind beyond Brain: Evidence for a New Concept of Consciousness

If mind is a real element in the real world only manifested rather than produced by the brain, it can also exist without the brain. There is evidence that mind does exist on occasion beyond the brain: surprisingly, conscious experience seems possible in the absence of a functioning brain. There are cases—the near-death experience (NDE) is the paradigm case—where consciousness persists when brain function is impaired, or even halted.

Thousands of observations and experiments show that people whose brain stopped working but then regained normal functioning can experience consciousness during the time they are without a functioning brain. This cannot be accounted for on the premises of the production theory: if there is no working brain, there cannot be consciousness. Yet there are cases of consciousness appearing beyond the living and working brain, and some of these cases are not easy to dismiss as mere imagination.

A striking NDE was recounted by a young woman named Pamela. Hers is just one among scores of NDEs;* it is cited here to illustrate that such experiences exist and can be documented.

*For a more extensive sampling see Ervin Laszlo with Anthony Peake, The Immortal Mind (Rochester, Vt.: Inner Traditions, 2014).

Pamela died on May 29, 2010, at the age of fifty-three. But nineteen years earlier she had been, for hours, effectively dead on the operating table. Her near-demise was induced by a surgical team attempting to remove an aneurysm in her brain stem.

After the operation, when her brain and body returned to normal functioning, Pamela described in detail what had taken place in the operating theater. She recalled among other things the music that was playing (“Hotel California” by the Eagles). She described a whole series of conversations among the medical team. She reported having watched the opening of her skull by the surgeon from a position above him and described in detail the “Midas Rex” bone-cutting device and the distinct sound it made.

About ninety minutes into the operation, she saw her body from the outside and felt herself being pulled out of it and into a tunnel of light. And she heard the bone saw activate, even though there were specially designed speakers in each of her ears that shut out all external sounds. The speakers themselves were broadcasting audible clicks in order to confirm that there was no activity in her brain stem. Moreover, she had been given a general anesthetic that should have assured that she was fully unconscious. Pamela should not have been able either to see or to hear anything.

It appears that consciousness is not, or not entirely, tied to the living brain. In addition to NDEs, there are cases in which consciousness is detached from the brain in regard to its location. In these cases consciousness originates above the eyes and the head, or near the ceiling, or above the roof. These are the out-of-body experiences: OBEs.

There are OBEs in which congenitally blind people have visual awareness. They describe their surroundings in considerable detail and with remarkable accuracy. What they experience is not restored eyesight, for they are aware of things that are shielded from their eyes or are beyond the range of normal eyesight. Consciousness researcher Kenneth Ring called these experiences “transcendental awareness.”

Visual awareness in the blind joins a growing repertory of experiences collected and researched by Stanislav Grof: “transcendental experiences.” As Grof found, these beyond-the-brain and beyond-here-and-now experiences are widespread—more widespread than anyone would have suspected even a few years ago.

There are also reports of ADEs, after-death experiences. Thousands of psychic mediums claim to have channeled the conscious experience of deceased people, and some of these reports are not easy to dismiss as mere imagination. One of the most robust of these reports has come from Bertrand Russell, the renowned English philosopher. Lord Russell was a skeptic, an outspoken debunker of esoteric phenomena, including the survival of the mind or soul beyond the body. He once wrote, “I believe that when I die I shall rot, and nothing of my ego will survive.” Yet after he died he conveyed the following message to the medium Rosemary Brown.

You may not believe that it is I, Bertrand Arthur William Russell, who am saying these things, and perhaps there is no conclusive proof that I can offer through this somewhat restrictive medium. Those with an ear to hear may catch the echo of my voice in my phrases, the tenor of my tongue in my tautology; those who do not wish to hear will no doubt conjure up a whole table of tricks to disprove my retrospective rhetoric.

. . . After breathing my last breath in my mortal body, I found myself in some sort of extension of existence that held no parallel as far as I could estimate, in the material dimension I had recently experienced. I observed that I was occupying a body predominantly bearing similarities to the physical one I had vacated forever; but this new body in which I now resided seemed virtually weightless and very volatile, and able to move in any direction with the minimum of effort. I began to think I was dreaming and would awaken all too soon in that old world, of which I had become somewhat weary to find myself imprisoned once more in that ageing form which encased a brain that had waxed weary also and did not always want to think when I wanted to think. . . .

Several times in my life [Lord Russell continued] I had thought I was about to die; several times I had resigned myself with the best will that I could muster to ceasing to be. The idea of B.R. no longer inhabiting the world did not trouble me unduly. Befitting, I thought, to give the chap (myself) a decent burial and let him be. Now here I was, still the same I, with the capacities to think and observe sharpened to an incredible degree. I felt earth-life suddenly seemed very unreal almost as it had never happened. It took me quite a long while to understand that feeling until I realized at last that matter is certainly illusory although it does exist in actuality; the material world seemed now nothing more than a seething, changing, restless sea of indeterminable density and volume.

This report “from beyond” would appear hardly credible were it not supported by other ADEs. One of the most striking and difficult to dismiss of these is the case of a deceased chess grand master who played a game with a living grand master.*

*For details see Laszlo with Peake, The Immortal Mind.

Wolfgang Eisenbeiss, an amateur chess player, engaged the medium Robert Rollans to transmit the moves of a game to be played with Viktor Korchnoi, the world’s third-ranking grand master. His opponent was to be a player whom Rollans was to find in his trance state. Eisenbeiss gave Rollans a list of deceased grand masters and asked him to contact them and ask who would be willing to play. Rollans entered his state of trance and did so. On June 15, 1985, the former grand master Geza Maroczy responded and said that he was available. Maroczy was the third-ranking grand master in the year 1900. He was born in 1870 and died in 1951 at the age of eighty-one. Rollans reported that Maroczy responded to his invitation as follows.

I will be at your disposal in this peculiar game of chess for two reasons. First, because I also want to do something to aid mankind living on Earth to become convinced that death does not end everything, but instead the mind is separated from the physical body and comes up to us in a new world, where individual life continues to manifest itself in a new unknown dimension. Second, being a Hungarian patriot, I want to guide the eyes of the world into the direction of my beloved Hungary.

Korchnoi and Maroczy began a game that was frequently interrupted due to Korchnoi’s poor health and numerous travels. It lasted seven years and eight months. Speaking through Robert Rollans, Maroczy gave his moves in the standard form: for example, “5. A3 – Bxc3+”; Korchnoi gave his own moves to Rollans in the same form, but by ordinary communication. Every move was analyzed and recorded. It turned out that the game was played at the grand-master level and that it exhibited the style for which Maroczy was famous. It ended on February 11, 1993, when at move forty-eight Maroczy resigned. Subsequent analysis showed that it was a wise decision: five moves later Korchnoi would have achieved checkmate.

In this case the medium Rollans channeled information he did not possess in his ordinary state of consciousness. And this information was so expert and precise that it is extremely unlikely that any person Rollans could have contacted would have possessed it.

There are also firsthand testimonies of consciousness without a functioning brain. The well-known Harvard neurosurgeon Eben Alexander, who had been just as insistently skeptical about consciousness beyond the brain as Lord Russell, gave a detailed account of his conscious experience during the seven days he spent in deep coma. In that condition, he had previously maintained, conscious experience is completely excluded. Yet his experience—which he described in detail in several articles and three bestselling books—was so clear and convincing that it changed his mind. Consciousness, he now claims, can exist beyond the brain.

The above-cited cases illustrate that there is remarkable, and on occasion remarkably robust, evidence that consciousness is not confined to the living brain. Although this evidence is widespread, it is not widely known. There are still people, including scientists, who refuse to take cognizance of it. This is not surprising, given that the evidence is anomalous for the dominant world concept. Those who strongly disbelieve that such phenomena exist not only refuse to consider evidence to the contrary; they often fail even to perceive it.

Nonetheless, the view that consciousness is a fundamental element in the world is gaining recognition. The Manifesto of the Summit on Post-Materialist Science, Spirituality and Society (Tucson, Arizona, 2015) declared: “Mind represents an aspect of reality as primordial as the physical world. Mind is fundamental in the universe, i.e., it cannot be derived from matter and reduced to anything more basic.”

What Science Isn’t

The Cult of Lay-Positivism

By Equanimous Rex

Source: Modern Mythology

Science. Does the word bring images of space ships and high-tech equipment doing miraculous things? Wonder drugs and new solutions to old problems? Good, because this article isn’t a declaration that science is evil, or dead. Keep that shining gaze on the things that amaze you, and buckle up.

Never before has it been so easy as in the modern era to find something to fill a person’s God-Hole with. What’s a God-Hole, you ask? Put simply, it’s a metaphor for the part of our psyche where religious surety and faithful fanaticism would have been reserved for Yahweh and his earthly cohorts, as was the case for generations and generations of our ancestors. These days you can’t walk three steps without crushing some cult or dealing with apologists for yet another embryonic subculture, and one of the most widespread and pervasive modern cults is that of the materialist-positivists.

You might not be familiar with the names, but you’re definitely familiar with the faces, the words, and the general attitudes of the MPs. They tend to identify themselves by their atheism, though their atheism is the least descriptive part of their belief system. MPs have been in the spotlight for so long that nowadays people erroneously assume that if you are an atheist, you must fall in line with the attitudes of this group, despite the fact that an atheist could be a Buddhist, a LaVeyan Satanist, or a religious naturalist, or could believe in the supernatural, ghosts, psychic powers, or what have you (since none of these things are theoi, or gods). MPs, on the other hand, are intrinsically opposed to the study or contemplation of seemingly paranormal or preternatural phenomena.

Likewise, they have taken hold of the term “skeptic” and become its face in mainstream discourse. These days, all you have to do to be thought of as a skeptic is, first, tell everyone quite loudly that you are one, and second, engage in specified, surgical doubt of only the belief systems and ideologies you are already antagonistic toward, while neglecting to apply the same doubt to anything you tacitly presume to be true. Engage in rampant polemics against your opponents, frequently craft apologetics for your own beliefs, and use your inquisitiveness and doubt like a blade with which to carve the proverbial flesh of those you despise ideologically. This is far from philosophical skepticism as originally intended, but as long as you tell people often enough that you are a skeptic, you must be one.

The combination of a certain brand of materialism (a philosophical monism in which all that exists is deemed material) and positivism (the belief that everything that exists can be verified scientifically, and that anything that cannot be so verified does not exist) forms a unique cocktail: an anti-belief founded on a sense of superiority over all other beliefs. “You have beliefs. We have facts.” It is a potentially useful worldview, which many people use as a metric with which to quantify worthiness, but usefulness is the same as truth only to the strictest of pragmatists.

Most proponents of MP eschew philosophy as navel-gazing aphorisms and platitudes, seeing the field as the decrepit grandfather of science. Because they are mostly unaware of philosophy, owing to this very aversion, they don’t usually know that their beliefs fall squarely within philosophy, nor that there are, to this day, debates about the validity of their philosophical presuppositions.

Again, being critical of the philosophies of positivism and materialism is not being anti-science, though such a claim is inevitable should you question the sacrosanct nature of anything tangentially related or adjacent to science. Karl Popper, the man who came up with the concept of falsifiability, was a major opponent of positivism, for example. They would probably say he was anti-science as well, though that is entirely inaccurate.

This isn’t a condemnation of (most) scientists. I’ve had the pleasure of meeting a few over the years, and I’ve always found them quite humble in the face of the mysteries of the universe. They were the least likely to make outrageous claims or swerve outside their proverbial lanes. The problem lies mostly with what I’ve come to think of as the “positivist laity”.

The “lay-person” is a concept drawn largely from religion; it refers to someone who is not part of the clergy, who is not ordained or educated in the ‘inner mysteries’ of the religious order. The laity are deferential to the priests and clerics and put great faith in them, but do not themselves have the same information, education, or knowledge.

The materialist-positivist laity seems to consist of people who have no formal or informal education in the scientific method(s) or in the fields of science. They often come from a post-Christian background (at least in the United States) and are angry that they once believed the literalism their parents or geographical region shoved down their throats. They watch a few debunking videos, or ones in which a self-identified atheist points out the inconsistencies of the Christian cosmological mythos. They start to notice that the explanations and descriptions of the world that scientists and science educators give are more functional, and they trust them to be provable, even though the concept of ‘proof’ is mathematical, not scientific.

Either way, for whatever reason, they come to replace Yahweh and his priests with their conception of Science and its own educated, inner-circle experts. Once again, this is not a critique of science or atheism, nor even really a critique of materialist-positivism. We must focus on the issue at hand: large swathes of lay-positivists are turning the concept of science into a cargo-cult religion, using it to fill their empty God-Hole and clutching their conception of a cohesive and explanatory world-story.

It was the job of priests for thousands upon thousands of years to give the simple folk a world-story; without such a story, anxieties rise and existential doubt creeps in. The crafting and dissemination of a world-story has since passed from priests alone to other specialists: philosophers, academics, and, of course, scientists.

Science isn’t a religion. Science isn’t a good many things. For example, science isn’t technology, which would probably shock quite a few lay-positivists. Humanity has used and invented technology for as far back as we can find evidence of humans at all. We created aqueducts, agricultural technology, wartime technology, shipbuilding and navigational technology, calendars and timekeeping technology, architectural technology, psychological techniques, and so on and so forth, well before science was a twinkle in the eye of natural philosophy. People with no education in science, and without any formal training, still invent new technology to this day. So, next time someone points to a piece of technology and tries to conflate it with science, keep this in mind. One cannot simply, anachronistically and retroactively, claim for ‘science’ everything that works or is useful, though this does not prevent some from attempting exactly that.

Science is also not ethics. Nor can science tell you what to do with the information you glean from the universe through it. I’m sure people will disagree, but I’m also sure most of them are not scientists and/or do not have a clear understanding of the scientific method(s). Not every moment of clear thinking and rationality is science; not every free thought is evidence of science at work. By trying to make it appear as though everything that makes sense and works is science, lay-positivists have designated, before the fact, everything that is not science as nonsensical and nonexistent.

Intuition and introspection are cast aside by positivism because they are not scientific, and I agree wholeheartedly that intuition can be flawed, rife with bias and misconception. But this is a ‘don’t throw the baby out with the bathwater’ scenario. Don’t forget that most of us get up every day and manage to navigate this world without using the slightest bit of the scientific method. We have, for instance, inductive, deductive, and abductive reasoning, none of which is confined to scientific methods. And there are people remembered as great scientists in bygone eras who never used an ounce of empiricism to arrive at their great contributions. (Galileo, for example, used rational but non-empirical means to infer a heliocentric model.)

Science isn’t a cult, a religion, or anything of the sort. But lay-positivism stands to become just that: people seeking to fill a God-Hole, to give their lives a sense of meaning, and to provide a cohesive world-story so that they do not feel they exist in a state of uncertainty and chaos. Nietzsche, Freud, Feuerbach, and many others recognized this fact: the need for gods is not so easily replaced as the gods themselves are. If scientific findings are used with an ideological agenda to offer fragile humanity a security blanket against the cold, unpredictable unknown, misrepresented and misunderstood by people who have never even bothered to Google “scientific method” and who are merely disenchanted with their old Church, it may well be that the word “science” will be appropriated, as the terms “skepticism” and “atheism” have been, to refer to specified, pigeon-holed belief systems, made sacred and subject to no criticism.

“Meet the old laity, same as the new laity.”

Carl Sagan’s Thoughts On The War on Drugs

Since today marks the birthday of the late Carl Sagan, we can remember a lesser-known aspect of his greatness by reading (or re-reading) the following 1969 essay, which he wrote under the pseudonym “Mr. X” and which was published in 1971 in Lester Grinspoon’s Marihuana Reconsidered:

It all began about ten years ago. I had reached a considerably more relaxed period in my life – a time when I had come to feel that there was more to living than science, a time of awakening of my social consciousness and amiability, a time when I was open to new experiences. I had become friendly with a group of people who occasionally smoked cannabis, irregularly, but with evident pleasure. Initially I was unwilling to partake, but the apparent euphoria that cannabis produced and the fact that there was no physiological addiction to the plant eventually persuaded me to try. My initial experiences were entirely disappointing; there was no effect at all, and I began to entertain a variety of hypotheses about cannabis being a placebo which worked by expectation and hyperventilation rather than by chemistry. After about five or six unsuccessful attempts, however, it happened. I was lying on my back in a friend’s living room idly examining the pattern of shadows on the ceiling cast by a potted plant (not cannabis!). I suddenly realized that I was examining an intricately detailed miniature Volkswagen, distinctly outlined by the shadows. I was very skeptical at this perception, and tried to find inconsistencies between Volkswagens and what I viewed on the ceiling. But it was all there, down to hubcaps, license plate, chrome, and even the small handle used for opening the trunk. When I closed my eyes, I was stunned to find that there was a movie going on the inside of my eyelids. Flash . . . a simple country scene with red farmhouse, a blue sky, white clouds, yellow path meandering over green hills to the horizon. . . Flash . . . same scene, orange house, brown sky, red clouds, yellow path, violet fields . . . Flash . . . Flash . . . Flash. The flashes came about once a heartbeat. Each flash brought the same simple scene into view, but each time with a different set of colors . . . exquisitely deep hues, and astonishingly harmonious in their juxtaposition. 
Since then I have smoked occasionally and enjoyed it thoroughly. It amplifies torpid sensibilities and produces what to me are even more interesting effects, as I will explain shortly.

I can remember another early visual experience with cannabis, in which I viewed a candle flame and discovered in the heart of the flame, standing with magnificent indifference, the black-hatted and -cloaked Spanish gentleman who appears on the label of the Sandeman sherry bottle. Looking at fires when high, by the way, especially through one of those prism kaleidoscopes which image their surroundings, is an extraordinarily moving and beautiful experience.

I want to explain that at no time did I think these things ‘really’ were out there. I knew there was no Volkswagen on the ceiling and there were no Sandeman salamanders in the flame. I don’t feel any contradiction in these experiences. There’s a part of me making, creating the perceptions which in everyday life would be bizarre; there’s another part of me which is a kind of observer. About half of the pleasure comes from the observer-part appreciating the work of the creator-part. I smile, or sometimes even laugh out loud at the pictures on the insides of my eyelids. In this sense, I suppose cannabis is psychotomimetic, but I find none of the panic or terror that accompanies some psychoses. Possibly this is because I know it’s my own trip, and that I can come down rapidly any time I want to.

While my early perceptions were all visual, and curiously lacking in images of human beings, both of these items have changed over the intervening years. I find that today a single joint is enough to get me high. I test whether I’m high by closing my eyes and looking for the flashes. They come long before there are any alterations in my visual or other perceptions. I would guess this is a signal-to-noise problem, the visual noise level being very low with my eyes closed. Another interesting information-theoretical aspect is the prevalence – at least in my flashed images – of cartoons: just the outlines of figures, caricatures, not photographs. I think this is simply a matter of information compression; it would be impossible to grasp the total content of an image with the information content of an ordinary photograph, say 10^8 bits, in the fraction of a second which a flash occupies. And the flash experience is designed, if I may use that word, for instant appreciation. The artist and viewer are one. This is not to say that the images are not marvelously detailed and complex. I recently had an image in which two people were talking, and the words they were saying would form and disappear in yellow above their heads, at about a sentence per heartbeat. In this way it was possible to follow the conversation. At the same time an occasional word would appear in red letters among the yellows above their heads, perfectly in context with the conversation; but if one remembered these red words, they would enunciate a quite different set of statements, penetratingly critical of the conversation. The entire image set which I’ve outlined here, with I would say at least 100 yellow words and something like 10 red words, occurred in something under a minute.

The cannabis experience has greatly improved my appreciation for art, a subject which I had never much appreciated before. The understanding of the intent of the artist which I can achieve when high sometimes carries over to when I’m down. This is one of many human frontiers which cannabis has helped me traverse. There also have been some art-related insights – I don’t know whether they are true or false, but they were fun to formulate. For example, I have spent some time high looking at the work of the Belgian surrealist Yves Tanguey. Some years later, I emerged from a long swim in the Caribbean and sank exhausted onto a beach formed from the erosion of a nearby coral reef. In idly examining the arcuate pastel-colored coral fragments which made up the beach, I saw before me a vast Tanguey painting. Perhaps Tanguey visited such a beach in his childhood.

A very similar improvement in my appreciation of music has occurred with cannabis. For the first time I have been able to hear the separate parts of a three-part harmony and the richness of the counterpoint. I have since discovered that professional musicians can quite easily keep many separate parts going simultaneously in their heads, but this was the first time for me. Again, the learning experience when high has at least to some extent carried over when I’m down. The enjoyment of food is amplified; tastes and aromas emerge that for some reason we ordinarily seem to be too busy to notice. I am able to give my full attention to the sensation. A potato will have a texture, a body, and taste like that of other potatoes, but much more so. Cannabis also enhances the enjoyment of sex – on the one hand it gives an exquisite sensitivity, but on the other hand it postpones orgasm: in part by distracting me with the profusion of images passing before my eyes. The actual duration of orgasm seems to lengthen greatly, but this may be the usual experience of time expansion which comes with cannabis smoking.

I do not consider myself a religious person in the usual sense, but there is a religious aspect to some highs. The heightened sensitivity in all areas gives me a feeling of communion with my surroundings, both animate and inanimate. Sometimes a kind of existential perception of the absurd comes over me and I see with awful certainty the hypocrisies and posturing of myself and my fellow men. And at other times, there is a different sense of the absurd, a playful and whimsical awareness. Both of these senses of the absurd can be communicated, and some of the most rewarding highs I’ve had have been in sharing talk and perceptions and humor. Cannabis brings us an awareness that we spend a lifetime being trained to overlook and forget and put out of our minds. A sense of what the world is really like can be maddening; cannabis has brought me some feelings for what it is like to be crazy, and how we use that word ‘crazy’ to avoid thinking about things that are too painful for us. In the Soviet Union political dissidents are routinely placed in insane asylums. The same kind of thing, a little more subtle perhaps, occurs here: ‘did you hear what Lenny Bruce said yesterday? He must be crazy.’ When high on cannabis I discovered that there’s somebody inside in those people we call mad.

When I’m high I can penetrate into the past, recall childhood memories, friends, relatives, playthings, streets, smells, sounds, and tastes from a vanished era. I can reconstruct the actual occurrences in childhood events only half understood at the time. Many but not all my cannabis trips have somewhere in them a symbolism significant to me which I won’t attempt to describe here, a kind of mandala embossed on the high. Free-associating to this mandala, both visually and as plays on words, has produced a very rich array of insights.

There is a myth about such highs: the user has an illusion of great insight, but it does not survive scrutiny in the morning. I am convinced that this is an error, and that the devastating insights achieved when high are real insights; the main problem is putting these insights in a form acceptable to the quite different self that we are when we’re down the next day. Some of the hardest work I’ve ever done has been to put such insights down on tape or in writing. The problem is that ten even more interesting ideas or images have to be lost in the effort of recording one. It is easy to understand why someone might think it’s a waste of effort going to all that trouble to set the thought down, a kind of intrusion of the Protestant Ethic. But since I live almost all my life down I’ve made the effort – successfully, I think. Incidentally, I find that reasonably good insights can be remembered the next day, but only if some effort has been made to set them down another way. If I write the insight down or tell it to someone, then I can remember it with no assistance the following morning; but if I merely say to myself that I must make an effort to remember, I never do.

I find that most of the insights I achieve when high are into social issues, an area of creative scholarship very different from the one I am generally known for. I can remember one occasion, taking a shower with my wife while high, in which I had an idea on the origins and invalidities of racism in terms of gaussian distribution curves. It was a point obvious in a way, but rarely talked about. I drew the curves in soap on the shower wall, and went to write the idea down. One idea led to another, and at the end of about an hour of extremely hard work I found I had written eleven short essays on a wide range of social, political, philosophical, and human biological topics. Because of problems of space, I can’t go into the details of these essays, but from all external signs, such as public reactions and expert commentary, they seem to contain valid insights. I have used them in university commencement addresses, public lectures, and in my books.

But let me try to at least give the flavor of such an insight and its accompaniments. One night, high on cannabis, I was delving into my childhood, a little self-analysis, and making what seemed to me to be very good progress. I then paused and thought how extraordinary it was that Sigmund Freud, with no assistance from drugs, had been able to achieve his own remarkable self-analysis. But then it hit me like a thunderclap that this was wrong, that Freud had spent the decade before his self-analysis as an experimenter with and a proselytizer for cocaine; and it seemed to me very apparent that the genuine psychological insights that Freud brought to the world were at least in part derived from his drug experience. I have no idea whether this is in fact true, or whether the historians of Freud would agree with this interpretation, or even if such an idea has been published in the past, but it is an interesting hypothesis and one which passes first scrutiny in the world of the downs.

I can remember the night that I suddenly realized what it was like to be crazy, or nights when my feelings and perceptions were of a religious nature. I had a very accurate sense that these feelings and perceptions, written down casually, would not stand the usual critical scrutiny that is my stock in trade as a scientist. If I find in the morning a message from myself the night before informing me that there is a world around us which we barely sense, or that we can become one with the universe, or even that certain politicians are desperately frightened men, I may tend to disbelieve; but when I’m high I know about this disbelief. And so I have a tape in which I exhort myself to take such remarks seriously. I say ‘Listen closely, you sonofabitch of the morning! This stuff is real!’ I try to show that my mind is working clearly; I recall the name of a high school acquaintance I have not thought of in thirty years; I describe the color, typography, and format of a book in another room and these memories do pass critical scrutiny in the morning. I am convinced that there are genuine and valid levels of perception available with cannabis (and probably with other drugs) which are, through the defects of our society and our educational system, unavailable to us without such drugs. Such a remark applies not only to self-awareness and to intellectual pursuits, but also to perceptions of real people, a vastly enhanced sensitivity to facial expression, intonations, and choice of words which sometimes yields a rapport so close it’s as if two people are reading each other’s minds.

Cannabis enables nonmusicians to know a little about what it is like to be a musician, and nonartists to grasp the joys of art. But I am neither an artist nor a musician. What about my own scientific work? While I find a curious disinclination to think of my professional concerns when high – the attractive intellectual adventures always seem to be in every other area – I have made a conscious effort to think of a few particularly difficult current problems in my field when high. It works, at least to a degree. I find I can bring to bear, for example, a range of relevant experimental facts which appear to be mutually inconsistent. So far, so good. At least the recall works. Then in trying to conceive of a way of reconciling the disparate facts, I was able to come up with a very bizarre possibility, one that I’m sure I would never have thought of down. I’ve written a paper which mentions this idea in passing. I think it’s very unlikely to be true, but it has consequences which are experimentally testable, which is the hallmark of an acceptable theory.

I have mentioned that in the cannabis experience there is a part of your mind that remains a dispassionate observer, who is able to take you down in a hurry if need be. I have on a few occasions been forced to drive in heavy traffic when high. I’ve negotiated it with no difficulty at all, though I did have some thoughts about the marvelous cherry-red color of traffic lights. I find that after the drive I’m not high at all. There are no flashes on the insides of my eyelids. If you’re high and your child is calling, you can respond about as capably as you usually do. I don’t advocate driving when high on cannabis, but I can tell you from personal experience that it certainly can be done. My high is always reflective, peaceable, intellectually exciting, and sociable, unlike most alcohol highs, and there is never a hangover. Through the years I find that slightly smaller amounts of cannabis suffice to produce the same degree of high, and in one movie theater recently I found I could get high just by inhaling the cannabis smoke which permeated the theater.

There is a very nice self-titrating aspect to cannabis. Each puff is a very small dose; the time lag between inhaling a puff and sensing its effect is small; and there is no desire for more after the high is there. I think the ratio, R, of the time to sense the dose taken to the time required to take an excessive dose is an important quantity. R is very large for LSD (which I’ve never taken) and reasonably short for cannabis. Small values of R should be one measure of the safety of psychedelic drugs. When cannabis is legalized, I hope to see this ratio as one of the parameters printed on the pack. I hope that time isn’t too distant; the illegality of cannabis is outrageous, an impediment to full utilization of a drug which helps produce the serenity and insight, sensitivity and fellowship so desperately needed in this increasingly mad and dangerous world.

Are We Living Haunted Lives?

By Kingsley L. Dennis

Source: Waking Times

‘Fear has many eyes
And can see things underground’
~Cervantes, Don Quixote

The world as we know it has gone from being flat to round; from being the center of the universe to the center of the solar system; from being animistic and supernatural to raw in tooth and claw; from being particle-atomic to wavy-quantum. And now we are disappearing into the digital domains of virtual-augmented spaces and false information, bombarded with the spectacle and the image. And somewhere in the midst of all this is the human soul, still largely wrapped and unopened. If there’s a crime here then it is that we’ve allowed ourselves to become haunted – to live haunted lives that lack significance and meaning.

The ‘objects’ or values that we have attempted to live by, or that we pursue – such as power, truth, understanding, dreams, work, love, and the rest – have all seemingly vanished into some warped, elusive reality where the presence of these things no longer tangibly exists. However, the doubt, uncertainty, and pain of their absence – or ‘fake presence’ – are indeed real enough to affect us deeply. We seek the already disappeared and stalk their substitutes.

We are now close to the stage where we end up just acting out our fantasies upon the phantasmal theatre of our lives and thinking it is reality. This theatre, or screen, of fantasies and the fantastical is like the cave wall in Plato’s allegory where the flickering shadows that move across are taken to be the real. In an updating of Plato’s famous allegory we no longer have shadows projected upon the cave wall; they are now projected upon the green screens that form the back-drop for computer-generated imagery (CGI) that adorn our movies, television programs, and video games.

Whole societies, notably in the technologically-advanced western world, are arranging for our lives to be enacted amidst a scenery backdrop of events and issues artificially projected for us as CGI onto a fake canvas. Within this encroaching visual world, full of misinformation that influences our worldview, we are made to believe in a different kind of reality. It is a reality that is uncertain and insecure, and that requires us to hold deep obedience to our state institutions to protect us. And within this projection of reality, meanings are provided for us as ready-made meals. In other words, full of too much salt, saturated fats, and laziness.

These socially manufactured meanings are provided as substitutes to fill the genuine lack. Of the choices offered we often take our pick, as consumers in a marketplace. It may be career, wealth, fame, achievement, or a combination of these and more. Yet the manufactured consent in our sense of meaning, no matter how thoroughly pursued and potentially obtained, is still not genuine. And like the ready-made meal, it soon leaves us with a continued hunger. The illusion of meaning is a vital illusion, yet it still remains an illusion. We may say then that the world we have come to know is a great spectacle of illusion and play; of movement, distraction, simulation, and excess. Yet rather than critically confronting the illusions and distractions we are cleverly persuaded to indulge in them.

The world we share now is also shared with our collective doubts, fears, anger, and frustrations. And these new emotions upon the global stage are blurring our picture of the world and its future. Whilst there are many of us who are excited and genuinely inspired by this increased complexity and diversity, there has been a cultural backlash, in the western nations especially, to cover this up with a sheen of simplicity through generic news, bland reporting, and excruciatingly trivial entertainment. This clash of the complex with the simple is creating an odd reality where things just don’t feel right anymore.

We are participants on a ride through the flippant and the flimsy, the significant and the necessary, as we are expected to find our foothold – our human soul – in a world seemingly on the verge of insanity. In such a world, Disneyland may seem to some as the greatest of sanctuaries; whilst to the rest of us it stands as a superficial sign of our times.

Most of us do not have the capacity to verify the truth claims of the mainstream media and yet we are more than willing to accept the veracity of their claims. We suspend our own disbelief by trusting in others, especially when it comes to authority and experts. In other words, we have been conditioned to respect the positions of authority and ‘the expert,’ often without critical thought.

And this is the context which frames the telling of his-story and also our-stories. The singer-songwriter Lou Reed once sang, ‘Don’t believe anything you hear and only half of what you see.’ Documentary film-maker Adam Curtis discusses this phenomenon of how the mainstream media projects a simplified, fake reality in his film HyperNormalisation (2016). The term hypernormalization was taken from an account of life in the Soviet Union during the twenty years before it collapsed. In this account everyone knew the system was failing, but they couldn’t envision any alternative, and so everyone was resigned to maintaining the pretence of a functioning society. Over time this delusion was accepted as real, an effect termed hypernormalization. In other words, when the fake is finally accepted as the real then we are living in a hypernormalized state. Does this sound familiar?

The question is – does Reality ever take place?

Our bodies of authority, our mainstream media channels, and our centers of learning – that is, a majority of our significant institutions – have turned, or are in the process of turning, into advertising gimmicks. They peddle publicity and propaganda as endless programs stuck on a loop. They serve to produce the appearance of reality; yet they fail to represent a sense of reality. And this fundamental difference has produced a feeling of living haunted lives. We wander as ghosts in liminal zones, hungry for meaning.

In this sense of loss we no longer seem to know, or distinguish, between oppositions. Almost all of our value systems are based on relative terms – good, bad, my history, your history, etc. Often, the values we take to be ‘our values’ were inculcated in us depending upon which culture we happened to be born into. It is true there are some values more universally shared – such as thou shalt not kill – but the majority of them are culturally relative. Take, for instance, sex before marriage – good or bad? Same-sex partnerships? Freedom of religious speech? Eating pork? Eating rats? Democrats or Republicans? Labour or Conservative? Which is good and which is bad? In the case of political parties it is neither – they are false oppositions. More than that, they are also distractions. When you’re arguing (sorry, debating) over political parties you are not observing the system behind them that created this false lack of choice in the first place. False oppositions plague our haunted hinterland. We don’t see this if we are the aimless ghosts, or the walking dead. It’s not pleasant – it’s eerie. And we are in eerie times.

Modernity in its current form is haunted by a sense of loss; of not knowing where it is heading. There are a great many aspects of our age that are in disruption and dislocation. All forms of stability are in question; old and incumbent patterns and models are in dispute; and too many people are experiencing moods of despair and anguish. It is as if our human civilization has come loose from its moorings and is now adrift upon the waters of uncertainty, insignificance, and the loss of meaning.

And so it seems that our civilization is careering dangerously close to some kind of blind spot where we no longer can tell what is true or false anymore. Truth is replaced by a fake substitute and the false becomes a parody of the truth. They are the haunted spaces where the mist drifts by. It’s like a Zen joke. It’s the same as a voice whispering in the darkness saying there is no such thing as a voice whispering in the darkness. They couldn’t have written a better riddle if they had tried.

So what went wrong? Where did it all go? What is it, in fact?

A profound sense of unease has crept into many of us, and also into our social systems, our cultures, our art, our news, and into the very collective soul of humanity. It is an eeriness; an uncertain disquiet – almost an unsettling foreboding. Something has come loose, and we’re not sure what it is. Further still, most of us are fairly certain that those institutions supposedly ‘looking after our best interests’, or running the show – whether they be governmental bodies, financial elites, or shadowy organizations and cabals – are not really in control, nor sure of things themselves. It feels as if something is amiss, and we just can’t quite put our finger on it. Welcome to our haunted modernity.

We have disarray over a consensus concerning climate chaos, stock market panics and economic crashes, offshore tax evasions and leaked documents, political scandals, pandemic threats and contested vaccines, state and terrorist violence, congenital anxiety and existential fear – a whole cauldron of terror, dread, disquiet, nervousness, angst, and what-the-hell-is-going-on collective confusion is bubbling both under and over in many of our societies.

We have been infiltrated with a virus and it is infecting not only our bodies but our very minds. It’s a pure mind-bending virus and it’s playing havoc with our insecurities and indulging our sense of lostness. The Spanish have a phrase for this state – “de perdidos al río” – and it roughly translates as from lost to the river. It may not make complete literal sense in English but that’s just it; you can get the sense of it and its vagueness is exactly where we are – from lost to the river!

In such ‘haunted lives’ we can easily become accustomed to metaphysical anguish as just another pain. It is like a pulled muscle or a sprained ankle; something unpleasant and yet we continue to move around with it. In the end we learn how to direct and project this metaphysical anguish onto other things – we choose intoxicating entertainment, sports, and other cultural pastimes and diversions. Angst just becomes a factor that appears to come as a default setting with our species. There is the danger that we become accepting of the ghostly flimsiness to life, which ends up being hypernormalized so that the sense of absence of something real becomes the new reality. There is the dangerous potential here for a state of indifference to emerge and seep into our cultures, which then becomes an ennui-creep into the world until…oh, well, what does it matter anyway?

During these years of disarray and turbulence it is essential that we create meaning for ourselves, otherwise the ‘distant algorithmic’ universe that runs the life around us will create a deep sense of alienation. In a world of scrambled code and big data, transcendence will seem another chimera not within grasp or even real. Or, at worst, the very notion of transcendence will seem the delirium of unstable minds – for those people not able to ‘get real’ with the world of Now.

In this instance, transcendence will appear as a form of spiritual autism. And yet the notion of going beyond ourselves, of developing our capacities for higher perception, is the saving grace inherent within our human species. We are incomplete, and this haunts us, and yet it should also give us meaning and a higher aim in life in knowing that there is further to go. In knowing that there are tools within us for creating, shaping, and cultivating these finer faculties. In being haunted we are also being reminded of what is lacking and this urge should compel us to find a solution within ourselves. We are in fact being ‘haunted into remembrance.’

However, for many of us a haunted modernity offers us a conditioned life where there is little or no space for transcendence. In such social and cultural hauntings there are no navigable locations. We have stepped into an unsouling from the wilderness. We are compelled to walk on.