Five Questions We’re Asking About the Ebola Scare


By Aaron Dykes and Melissa Melton

Source: Truthstream Media

Now that the Ebola situation has hit the 24/7 mainstream media zoo, serious questions are being raised as to why now.

After all, people were dying of Ebola in the hundreds in West Africa before this week. Aid workers and doctors were getting infected before. These things are not new, but the sudden media focus raises lots of questions.

To start…

Why are they shipping Ebola-infected patients onto American soil for the first time?

As many have pointed out, this move seems particularly…ill-advised. The U.S. Centers for Disease Control and Prevention cautioned people not to fly to the affected areas, but our State Department goes out of its way to put together heavily publicized special containment tents inside planes to fly two Americans here, while the media in lockstep makes a huge play-by-play deal of it?

It isn’t exactly level 4 containment all the way, either, as Underground Medic‘s Lizzie Bennett pointed out yesterday: one of the patients arrived at the hospital, got out of the ambulance, and simply walked on in.

Why is Obama amending executive orders about quarantining people infected with Ebola when he already had that power?

The president just amended executive order 13295, a G.W. Bush-era order that allows “apprehension, detention, or conditional release of individuals to prevent the introduction, transmission, or spread of suspected communicable diseases.”

Section 1, subsection (b) has now been replaced with the following:

“(b)  Severe acute respiratory syndromes, which are diseases that are associated with fever and signs and symptoms of pneumonia or other respiratory illness, are capable of being transmitted from person to person, and that either are causing, or have the potential to cause, a pandemic, or, upon infection, are highly likely to cause mortality or serious morbidity if not properly controlled.  This subsection does not apply to influenza.”

Sure sounds like Ebola, doesn’t it?

But in reality, those quarantine powers were already in place. It even says so on this CDC fact sheet and map of U.S. quarantine stations that the agency released in August 2013.


Ebola definitely falls under the category of “viral hemorrhagic fevers.”

So why make a big deal of amending an executive order when the power to detain people who have, or are suspected of having, Ebola already exists?

The CDC also just released a brand-new, timely webpage, “Infection Prevention and Control Recommendations for Hospitalized Patients with Known or Suspected Ebola Hemorrhagic Fever in U.S. Hospitals,” as if it had completely slipped the agency’s mind to tell medical professionals how to deal with Ebola before now. Come on. The page does, however, mention guidance for exposure to “contaminated air,” which is odd considering the CDC director has gone out of his way to say that there’s no way an Ebola outbreak could ever happen in the U.S.

What exactly have Ft. Detrick biowarfare researchers been doing in the Ebola hot zone in West Africa all this time?

Independent investigative reporter Jon Rappaport asked this very question the day before yesterday, and it’s a good one. He had several other questions, and they are all worth repeating:

What exactly have they been doing?

Exactly what diagnostic tests have they been performing on citizens of Sierra Leone?

Why do we have reports that the government of Sierra Leone has recently told Tulane researchers to stop this testing?

Have Tulane researchers and their associates attempted any experimental treatments (e.g., injecting monoclonal antibodies) using citizens of the region? If so, what adverse events have occurred?

The research program, occurring in Sierra Leone, the Republic of Guinea, and Liberia—said to be the epicenter of the 2014 Ebola outbreak—has the announced purpose, among others, of detecting the future use of fever-viruses as bioweapons.

Is this purely defensive research? Or as we have seen in the past, is this research being covertly used to develop offensive bioweapons?

The same day, Navy Times published an article describing how U.S. biowarfare scientists have been highly interested in Ebola as a potential bioweapon since at least the late 1970s: “mainly because Ebola and its fellow viruses have high mortality rates…and its stable nature in aerosol make it attractive as a potential biological weapon.”

But the article goes on to say that scientists from the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) have been working on a vaccine since then, a purely defensive measure. Of course, they can’t come out and say they’re working on offensive weapons. The Biological Weapons Convention went into effect in 1975, supposedly putting an end to the government’s biological weapons program.

Why does the U.S. government own a patent on a novel strain of Ebola that those same Ft. Detrick researchers quietly admitted in a CDC journal article last month may actually be the cause of the current Sierra Leone outbreak, not Ebola Zaire as widely reported?

This one gets tricky.

There are five known types of Ebola virus, and the newest strain is named Bundibugyo, or Ebobun for short. The U.S. government actually holds a patent on this strain — US 20120251502 A1, for “Human Ebola Virus Species and Compositions and Methods Thereof,” related to the Bundibugyo version of the virus.

Last month, the same Ft. Detrick researchers who have been over in the Ebola hot zone published an article in the CDC’s journal Emerging Infectious Diseases discussing the human testing that has been going on over there. Near the bottom of the article, they quietly admit, “Ebolavirus infections in Sierra Leone might be the result of Bundibugyo virus or an ebolavirus genetic variant and not EBOV.”

The kicker?

The Ebobun version of Ebola, which has been found to be “genetically distinct” (it differs by more than 30% at the genome level from all other known ebolavirus species), apparently has a much lower death rate than the Zaire version the media keeps talking about.

Not that Ebola in any form isn’t dangerous. It’s deadly, period. But Ebobun had a 36% mortality rate at the initial outbreak in 2007, versus 70-90% on average for Zaire.

Additionally, because it is so genetically distinct, researchers have suggested that a vaccine or treatment developed for Ebola without taking the Ebobun strain into account might not work against it.

Regardless, the one thing the mainstream media seems interested in driving home on repeat these days is that this outbreak is the Zaire strain, which has a 90% mortality rate and no cure. Well…even that isn’t entirely true…

A NOVA presentation from 1995 clearly shows survivors and discusses how a nurse named Nicole was given blood transfusions from an infected patient who survived, to build up antibodies. A review sums it up:

After one week, Nicole began to recover. Spurred by this result, the Zairian doctors transfused an additional eight patients. Seven of the eight patients survived, but the Western doctors remain unconvinced. Because the experiment was completely uncontrolled, they argue that we will never know that the transfusion saved the lives of those patients.

That was nearly 20 years ago. Current news stories even discuss how the doctor who was flown here infected with Ebola was given a unit of blood from a 14-year-old who survived the disease. The female patient flown in was also reportedly given an experimental serum that no one seems to elaborate much on.

On top of that, articles from 2008 show a vaccine was highly effective in monkeys and even used experimentally in a human patient with success. Where did those vaccines go? Why aren’t they widely available six years later?

And finally, as with any crisis, who stands to gain from this, and what is it they are ultimately after?

Just asking.

One company, Tekmira, which has been running Phase I clinical trials of an Ebola drug in otherwise healthy adult volunteers, has seen its stock skyrocket over the last two weeks, even though its human trials have now been halted due to safety concerns.

Tekmira apparently has a $140 million contract with none other than USAMRIID to work on this drug, along with a multi-million-dollar contract with biotech giant Monsanto for the same technology. The drug was granted FDA fast track status back in March. As the company’s site says, however, the drug is apparently for the Zaire strain of the virus.

So has Tekmira taken the Ebobun strain into account?

In addition, Reuters is now reporting that Ebola vaccines have been fast-tracked as well, with human experiments starting as early as next month. Wow, that was fast. Will those vaccines take Ebobun into account?

The last time a vaccine was fast-tracked in such a manner, it was for the purposefully overblown swine flu “pandemic” — a created “campaign of panic” basically designed to sell vaccines and grant more emergency powers.

As Aaron Dykes reported in 2010:

Wolfgang Wodarg, head of health at the Council of Europe, claims that the threshold for alert was deliberately lowered at the WHO, allowing a “pandemic” to be declared despite the mildness of the ‘swine flu.’ That designation would force a demand for the vaccine, which was subsequently purchased by governments or health facilities and pushed on the public through a full-scale fear campaign in the media…

Wodarg is focusing on the motives for profit, as well as the ties between the World Health Organization (WHO), the pharmaceutical-industrial complex and research scientists, a nexus which Canada Free Press points out is eerily similar to the Climategate revelations that CRU research scientists fudged data to “hide the decline” in proxy temperatures in order to support global warming claims.

Wodarg made several disconcerting statements to the media, including:

“Never before the search for traces of a virus was carried out so broadly and intensively, besides, many cases of death that happen to coincide with seropositive H1N1 lab-findings were simply attributed to “swine-flu” and used to foster fear.”

“A group of people in the WHO is associated very closely with the pharmaceutical industry.”

“The great campaign of panic we have seen provided a golden opportunity for representatives from labs who knew they would hit the jackpot in the case of a pandemic being declared.”

In fact, that’s what CBS investigative reporter Sharyl Attkisson was set to expose, but her bosses refused to air her story. The mainstream media completely shut her down. Fear sells. The truth, by contrast, doesn’t.

“With the CDC keeping the true Swine Flu stats secret, it meant that many in the public took and gave their children an experimental vaccine that may not have been necessary,” Attkisson said. Read this piece on her 2009 interview with Jon Rappaport for more on how the CDC stopped counting cases of swine flu altogether and hyped the public into a panic that ultimately led to millions of people receiving potentially dangerous, fast-tracked vaccinations.

That’s right. Countries the world over reported many deaths and disabilities suffered in the wake of the fast-tracked H1N1 vaccine, a vaccine people scrambled to get after the hysteria over swine flu was overhyped everywhere, from government agencies to the mainstream media.

But hey, a lot of people in the military-medical-media industrial complex made a lot of money.

Much worse than mere greed, though, is the possibility of martial law and a forced mass vaccination scenario — a scenario where the military is “forced” to step in to contain “bio-threats” (regardless of whether or not those threats are real or made up). For more on that, see DARPA’s “Blue Angel” project.

Talking about something scary isn’t automatically scaremongering — but if the powers that shouldn’t be are scaremongering, we should talk about it.

(Meanwhile, headlines about ‘Ebola fear going viral’ are already screaming at people to be afraid…very very afraid.)

 

The Real Reasons America Used Nuclear Weapons Against Japan


Today marks the anniversary of the atomic bombing of Hiroshima in 1945, an event responsible for the deaths of up to 166,000 people, most of them innocent civilians. This was followed three days later by the use of another atomic bomb (using plutonium instead of uranium) on Nagasaki, claiming at least 80,000 more, mostly civilian, lives. As the U.S. continues to kill innocent people with long-distance weapons such as drones, it’s more important than ever to question official claims about what they are doing in the name of “freedom” and “democracy” and the real reasons why. Historians such as Howard Zinn have long known that official government justifications for the bombings were lies. The fundamental reasons were economic and geopolitical, and much supporting documentation has been helpfully compiled in the following post.

The REAL Reason America Used Nuclear Weapons Against Japan (It Was Not To End the War Or Save Lives)

Source: Washington’s Blog

Atomic Weapons Were Not Needed to End the War or Save Lives

Like all Americans, I was taught that the U.S. dropped nuclear bombs on Hiroshima and Nagasaki in order to end WWII and save both American and Japanese lives.

But most of the top American military officials at the time said otherwise.

The U.S. Strategic Bombing Survey group, assigned by President Truman to study the air attacks on Japan, produced a report in July of 1946 that concluded (pg. 52-56):

Based on a detailed investigation of all the facts and supported by the testimony of the surviving Japanese leaders involved, it is the Survey’s opinion that certainly prior to 31 December 1945 and in all probability prior to 1 November 1945, Japan would have surrendered even if the atomic bombs had not been dropped, even if Russia had not entered the war, and even if no invasion had been planned or contemplated.

General (and later president) Dwight Eisenhower – then Supreme Commander of all Allied Forces, and the officer who created most of America’s WWII military plans for Europe and Japan – said:

The Japanese were ready to surrender and it wasn’t necessary to hit them with that awful thing.

Newsweek, 11/11/63, Ike on Ike

Eisenhower also noted (pg. 380):

In [July] 1945… Secretary of War Stimson, visiting my headquarters in Germany, informed me that our government was preparing to drop an atomic bomb on Japan. I was one of those who felt that there were a number of cogent reasons to question the wisdom of such an act. …the Secretary, upon giving me the news of the successful bomb test in New Mexico, and of the plan for using it, asked for my reaction, apparently expecting a vigorous assent.

During his recitation of the relevant facts, I had been conscious of a feeling of depression and so I voiced to him my grave misgivings, first on the basis of my belief that Japan was already defeated and that dropping the bomb was completely unnecessary, and secondly because I thought that our country should avoid shocking world opinion by the use of a weapon whose employment was, I thought, no longer mandatory as a measure to save American lives. It was my belief that Japan was, at that very moment, seeking some way to surrender with a minimum loss of ‘face’. The Secretary was deeply perturbed by my attitude….

Admiral William Leahy – the highest ranking member of the U.S. military from 1942 until retiring in 1949, who was the first de facto Chairman of the Joint Chiefs of Staff, and who was at the center of all major American military decisions in World War II – wrote (pg. 441):

It is my opinion that the use of this barbarous weapon at Hiroshima and Nagasaki was of no material assistance in our war against Japan. The Japanese were already defeated and ready to surrender because of the effective sea blockade and the successful bombing with conventional weapons.

The lethal possibilities of atomic warfare in the future are frightening. My own feeling was that in being the first to use it, we had adopted an ethical standard common to the barbarians of the Dark Ages. I was not taught to make war in that fashion, and wars cannot be won by destroying women and children.

General Douglas MacArthur agreed (pg. 65, 70-71):

MacArthur’s views about the decision to drop the atomic bomb on Hiroshima and Nagasaki were starkly different from what the general public supposed …. When I asked General MacArthur about the decision to drop the bomb, I was surprised to learn he had not even been consulted. What, I asked, would his advice have been? He replied that he saw no military justification for the dropping of the bomb. The war might have ended weeks earlier, he said, if the United States had agreed, as it later did anyway, to the retention of the institution of the emperor.

Moreover (pg. 512):

The Potsdam declaration in July, demand[ed] that Japan surrender unconditionally or face ‘prompt and utter destruction.’ MacArthur was appalled. He knew that the Japanese would never renounce their emperor, and that without him an orderly transition to peace would be impossible anyhow, because his people would never submit to Allied occupation unless he ordered it. Ironically, when the surrender did come, it was conditional, and the condition was a continuation of the imperial reign. Had the General’s advice been followed, the resort to atomic weapons at Hiroshima and Nagasaki might have been unnecessary.

Similarly, Assistant Secretary of War John McCloy noted (pg. 500):

I have always felt that if, in our ultimatum to the Japanese government issued from Potsdam [in July 1945], we had referred to the retention of the emperor as a constitutional monarch and had made some reference to the reasonable accessibility of raw materials to the future Japanese government, it would have been accepted. Indeed, I believe that even in the form it was delivered, there was some disposition on the part of the Japanese to give it favorable consideration. When the war was over I arrived at this conclusion after talking with a number of Japanese officials who had been closely associated with the decision of the then Japanese government, to reject the ultimatum, as it was presented. I believe we missed the opportunity of effecting a Japanese surrender, completely satisfactory to us, without the necessity of dropping the bombs.

Under Secretary of the Navy Ralph Bard said:

I think that the Japanese were ready for peace, and they already had approached the Russians and, I think, the Swiss. And that suggestion of [giving] a warning [of the atomic bomb] was a face-saving proposition for them, and one that they could have readily accepted.

***

In my opinion, the Japanese war was really won before we ever used the atom bomb. Thus, it wouldn’t have been necessary for us to disclose our nuclear position and stimulate the Russians to develop the same thing much more rapidly than they would have if we had not dropped the bomb.

War Was Really Won Before We Used A-Bomb, U.S. News and World Report, 8/15/60, pg. 73-75.

He also noted (pg. 144-145, 324):

It definitely seemed to me that the Japanese were becoming weaker and weaker. They were surrounded by the Navy. They couldn’t get any imports and they couldn’t export anything. Naturally, as time went on and the war developed in our favor it was quite logical to hope and expect that with the proper kind of a warning the Japanese would then be in a position to make peace, which would have made it unnecessary for us to drop the bomb and have had to bring Russia in.

Alfred McCormack – Director of Military Intelligence for the Pacific Theater of War, who was probably in as good a position as anyone to judge the situation – believed that the Japanese surrender could have been obtained in a few weeks by blockade alone:

The Japanese had no longer enough food in stock, and their fuel reserves were practically exhausted. We had begun a secret process of mining all their harbors, which was steadily isolating them from the rest of the world. If we had brought this project to its logical conclusion, the destruction of Japan’s cities with incendiary and other bombs would have been quite unnecessary.

General Curtis LeMay, the tough cigar-smoking Army Air Force “hawk,” stated publicly shortly after the nuclear bombs were dropped on Japan:

The war would have been over in two weeks. . . . The atomic bomb had nothing to do with the end of the war at all.

The Vice Chairman of the U.S. Bombing Survey, Paul Nitze, wrote (pg. 36-37, 44-45):

[I] concluded that even without the atomic bomb, Japan was likely to surrender in a matter of months. My own view was that Japan would capitulate by November 1945.

***

Even without the attacks on Hiroshima and Nagasaki, it seemed highly unlikely, given what we found to have been the mood of the Japanese government, that a U.S. invasion of the islands [scheduled for November 1, 1945] would have been necessary.

Deputy Director of the Office of Naval Intelligence Ellis Zacharias wrote:

Just when the Japanese were ready to capitulate, we went ahead and introduced to the world the most devastating weapon it had ever seen and, in effect, gave the go-ahead to Russia to swarm over Eastern Asia.

Washington decided that Japan had been given its chance and now it was time to use the A-bomb.

I submit that it was the wrong decision. It was wrong on strategic grounds. And it was wrong on humanitarian grounds.

Ellis Zacharias, How We Bungled the Japanese Surrender, Look, 6/6/50, pg. 19-21.

Brigadier General Carter Clarke – the military intelligence officer in charge of preparing summaries of intercepted Japanese cables for President Truman and his advisors – said (pg. 359):

When we didn’t need to do it, and we knew we didn’t need to do it, and they knew that we knew we didn’t need to do it, we used them as an experiment for two atomic bombs.

Many other high-level military officers concurred. For example:

The commander in chief of the U.S. Fleet and Chief of Naval Operations, Ernest J. King, stated that the naval blockade and prior bombing of Japan in March of 1945 had rendered the Japanese helpless and that the use of the atomic bomb was both unnecessary and immoral. Also, a report on a press conference given by Fleet Admiral Chester W. Nimitz on September 22, 1945, noted that “The Admiral took the opportunity of adding his voice to those insisting that Japan had been defeated before the atomic bombing and Russia’s entry into the war.” In a subsequent speech at the Washington Monument on October 5, 1945, Admiral Nimitz stated “The Japanese had, in fact, already sued for peace before the atomic age was announced to the world with the destruction of Hiroshima and before the Russian entry into the war.” It was learned also that on or about July 20, 1945, General Eisenhower had urged Truman, in a personal visit, not to use the atomic bomb. Eisenhower’s assessment was “It wasn’t necessary to hit them with that awful thing . . . to use the atomic bomb, to kill and terrorize civilians, without even attempting [negotiations], was a double crime.” Eisenhower also stated that it wasn’t necessary for Truman to “succumb” to [the tiny handful of people putting pressure on the president to drop atom bombs on Japan].

British officers were of the same mind. For example, General Sir Hastings Ismay, Chief of Staff to the British Minister of Defence, said to Prime Minister Churchill that “when Russia came into the war against Japan, the Japanese would probably wish to get out on almost any terms short of the dethronement of the Emperor.”

On hearing that the atomic test was successful, Ismay’s private reaction was one of “revulsion.”

Why Were Bombs Dropped on Populated Cities Without Military Value?

Even military officers who favored use of nuclear weapons mainly favored using them on unpopulated areas or Japanese military targets … not cities.

For example, Special Assistant to the Secretary of the Navy Lewis Strauss proposed to Secretary of the Navy James Forrestal that a non-lethal demonstration of  atomic weapons would be enough to convince the Japanese to surrender … and the Navy Secretary agreed (pg. 145, 325):

I proposed to Secretary Forrestal that the weapon should be demonstrated before it was used. Primarily it was because it was clear to a number of people, myself among them, that the war was very nearly over. The Japanese were nearly ready to capitulate… My proposal to the Secretary was that the weapon should be demonstrated over some area accessible to Japanese observers and where its effects would be dramatic. I remember suggesting that a satisfactory place for such a demonstration would be a large forest of cryptomeria trees not far from Tokyo. The cryptomeria tree is the Japanese version of our redwood… I anticipated that a bomb detonated at a suitable height above such a forest… would lay the trees out in windrows from the center of the explosion in all directions as though they were matchsticks, and, of course, set them afire in the center. It seemed to me that a demonstration of this sort would prove to the Japanese that we could destroy any of their cities at will… Secretary Forrestal agreed wholeheartedly with the recommendation

It seemed to me that such a weapon was not necessary to bring the war to a successful conclusion, that once used it would find its way into the armaments of the world…

General George Marshall agreed:

Contemporary documents show that Marshall felt “these weapons might first be used against straight military objectives such as a large naval installation and then if no complete result was derived from the effect of that, he thought we ought to designate a number of large manufacturing areas from which the people would be warned to leave–telling the Japanese that we intend to destroy such centers….”

As the document concerning Marshall’s views suggests, the question of whether the use of the atomic bomb was justified turns … on whether the bombs had to be used against a largely civilian target rather than a strictly military target—which, in fact, was the explicit choice since although there were Japanese troops in the cities, neither Hiroshima nor Nagasaki was deemed militarily vital by U.S. planners. (This is one of the reasons neither had been heavily bombed up to this point in the war.) Moreover, targeting [at Hiroshima and Nagasaki] was aimed explicitly at non-military facilities surrounded by workers’ homes.

Historians Agree that the Bomb Wasn’t Needed

Historians agree that nuclear weapons did not need to be used to stop the war or save lives.

As historian Doug Long notes:

U.S. Nuclear Regulatory Commission historian J. Samuel Walker has studied the history of research on the decision to use nuclear weapons on Japan. In his conclusion he writes, “The consensus among scholars is that the bomb was not needed to avoid an invasion of Japan and to end the war within a relatively short time. It is clear that alternatives to the bomb existed and that Truman and his advisors knew it.” (J. Samuel Walker, The Decision to Use the Bomb: A Historiographical Update, Diplomatic History, Winter 1990, pg. 110).

Politicians Agreed

Many high-level politicians agreed.  For example, Herbert Hoover said (pg. 142):

The Japanese were prepared to negotiate all the way from February 1945…up to and before the time the atomic bombs were dropped; …if such leads had been followed up, there would have been no occasion to drop the [atomic] bombs.

Under Secretary of State Joseph Grew noted (pg. 29-32):

In the light of available evidence I myself and others felt that if such a categorical statement about the [retention of the] dynasty had been issued in May, 1945, the surrender-minded elements in the [Japanese] Government might well have been afforded by such a statement a valid reason and the necessary strength to come to an early clearcut decision.

If surrender could have been brought about in May, 1945, or even in June or July, before the entrance of Soviet Russia into the [Pacific] war and the use of the atomic bomb, the world would have been the gainer.

Why Then Were Atom Bombs Dropped on Japan?

If dropping nuclear bombs was unnecessary to end the war or to save lives, why was the decision to drop them made? Especially over the objections of so many top military and political figures?

One theory is that scientists like to play with their toys:

On September 9, 1945, Admiral William F. Halsey, commander of the Third Fleet, was publicly quoted extensively as stating that the atomic bomb was used because the scientists had a “toy and they wanted to try it out . . . .” He further stated, “The first atomic bomb was an unnecessary experiment . . . . It was a mistake to ever drop it.”

However, most of the Manhattan Project scientists who developed the atom bomb were opposed to using it on Japan.

Albert Einstein – an important catalyst for the development of the atom bomb (but not directly connected with the Manhattan Project) – said differently:

“A great majority of scientists were opposed to the sudden employment of the atom bomb.” In Einstein’s judgment, the dropping of the bomb was a political-diplomatic decision rather than a military or scientific decision.

Indeed, some of the Manhattan Project scientists wrote directly to the secretary of war in 1945 to try to dissuade him from dropping the bomb:

We believe that these considerations make the use of nuclear bombs for an early, unannounced attack against Japan inadvisable. If the United States would be the first to release this new means of indiscriminate destruction upon mankind, she would sacrifice public support throughout the world, precipitate the race of armaments, and prejudice the possibility of reaching an international agreement on the future control of such weapons.

Political and Social Problems, Manhattan Engineer District Records, Harrison-Bundy files, folder # 76, National Archives (also contained in: Martin Sherwin, A World Destroyed, 1987 edition, pg. 323-333).

The scientists questioned whether destroying Japanese cities with atomic bombs would bring about surrender when destroying Japanese cities with conventional bombs had not done so, and – like some of the military officers quoted above – recommended a demonstration of the atomic bomb for Japan in an unpopulated area.

The Real Explanation?

History.com notes:

In the years since the two atomic bombs were dropped on Japan, a number of historians have suggested that the weapons had a two-pronged objective …. It has been suggested that the second objective was to demonstrate the new weapon of mass destruction to the Soviet Union. By August 1945, relations between the Soviet Union and the United States had deteriorated badly. The Potsdam Conference between U.S. President Harry S. Truman, Russian leader Joseph Stalin, and Winston Churchill (before being replaced by Clement Attlee) ended just four days before the bombing of Hiroshima. The meeting was marked by recriminations and suspicion between the Americans and Soviets. Russian armies were occupying most of Eastern Europe. Truman and many of his advisers hoped that the U.S. atomic monopoly might offer diplomatic leverage with the Soviets. In this fashion, the dropping of the atomic bomb on Japan can be seen as the first shot of the Cold War.

New Scientist reported in 2005:

The US decision to drop atomic bombs on Hiroshima and Nagasaki in 1945 was meant to kick-start the Cold War rather than end the Second World War, according to two nuclear historians who say they have new evidence backing the controversial theory.

Causing a fission reaction in several kilograms of uranium and plutonium and killing over 200,000 people 60 years ago was done more to impress the Soviet Union than to cow Japan, they say. And the US President who took the decision, Harry Truman, was culpable, they add.

“He knew he was beginning the process of annihilation of the species,” says Peter Kuznick, director of the Nuclear Studies Institute at American University in Washington DC, US. “It was not just a war crime; it was a crime against humanity.”

***

[The conventional explanation of using the bombs to end the war and save lives] is disputed by Kuznick and Mark Selden, a historian from Cornell University in Ithaca, New York, US.

***

New studies of the US, Japanese and Soviet diplomatic archives suggest that Truman’s main motive was to limit Soviet expansion in Asia, Kuznick claims. Japan surrendered because the Soviet Union began an invasion a few days after the Hiroshima bombing, not because of the atomic bombs themselves, he says.

According to an account by Walter Brown, assistant to then-US secretary of state James Byrnes, Truman agreed at a meeting three days before the bomb was dropped on Hiroshima that Japan was “looking for peace”. Truman was told by his army generals, Douglas Macarthur and Dwight Eisenhower, and his naval chief of staff, William Leahy, that there was no military need to use the bomb.

“Impressing Russia was more important than ending the war in Japan,” says Selden.

John Pilger points out:

The US secretary of war, Henry Stimson, told President Truman he was “fearful” that the US air force would have Japan so “bombed out” that the new weapon would not be able “to show its strength”. He later admitted that “no effort was made, and none was seriously considered, to achieve surrender merely in order not to have to use the bomb”. His foreign policy colleagues were eager “to browbeat the Russians with the bomb held rather ostentatiously on our hip”. General Leslie Groves, director of the Manhattan Project that made the bomb, testified: “There was never any illusion on my part that Russia was our enemy, and that the project was conducted on that basis.” The day after Hiroshima was obliterated, President Truman voiced his satisfaction with the “overwhelming success” of “the experiment”.

We’ll give the last word to University of Maryland professor of political economy – and former Legislative Director in the U.S. House of Representatives and the U.S. Senate, and Special Assistant in the Department of State – Gar Alperovitz:

Though most Americans are unaware of the fact, increasing numbers of historians now recognize the United States did not need to use the atomic bomb to end the war against Japan in 1945. Moreover, this essential judgment was expressed by the vast majority of top American military leaders in all three services in the years after the war ended: Army, Navy and Army Air Force. Nor was this the judgment of “liberals,” as is sometimes thought today. In fact, leading conservatives were far more outspoken in challenging the decision as unjustified and immoral than American liberals in the years following World War II.

***

Instead [of allowing other options to end the war, such as letting the Soviets attack Japan with ground forces], the United States rushed to use two atomic bombs at almost exactly the time that an August 8 Soviet attack had originally been scheduled: Hiroshima on August 6 and Nagasaki on August 9. The timing itself has obviously raised questions among many historians. The available evidence, though not conclusive, strongly suggests that the atomic bombs may well have been used in part because American leaders “preferred”—as Pulitzer Prize–winning historian Martin Sherwin has put it—to end the war with the bombs rather than the Soviet attack. Impressing the Soviets during the early diplomatic sparring that ultimately became the Cold War also appears likely to have been a significant factor.

***

The most illuminating perspective, however, comes from top World War II American military leaders. The conventional wisdom that the atomic bomb saved a million lives is so widespread that … most Americans haven’t paused to ponder something rather striking to anyone seriously concerned with the issue: Not only did most top U.S. military leaders think the bombings were unnecessary and unjustified, many were morally offended by what they regarded as the unnecessary destruction of Japanese cities and what were essentially noncombat populations. Moreover, they spoke about it quite openly and publicly.

***

Shortly before his death General George C. Marshall quietly defended the decision, but for the most part he is on record as repeatedly saying that it was not a military decision, but rather a political one.

Why We Can’t Wage War on Drugs


The war on drugs was always a war against an idea. But ideas have a shelf-life, too, and this one has lost its potency

By Mike Jay

Source: Aeon Magazine

When the US President Richard Nixon announced his ‘war on drugs’ in 1971, there was no need to define the enemy. He meant, as everybody knew, the type of stuff you couldn’t buy in a drugstore. Drugs were trafficked exclusively on ‘the street’, within a subculture that was immediately identifiable (and never going to vote for Nixon anyway). His declaration of war was for the benefit of the majority of voters who saw these drugs, and the people who used them, as a threat to their way of life. If any further clarification was needed, the drugs Nixon had in his sights were the kind that were illegal.

Today, such certainties seem quaint and distant. This May, the UN office on drugs and crime announced that at least 348 ‘legal highs’ are being traded on the global market, a number that dwarfs the total of illegal drugs. This loosely defined cohort of substances is no longer being passed surreptitiously among an underground network of ‘drug users’ but sold to anybody on the internet, at street markets and petrol stations. It is hardly a surprise these days when someone from any stratum of society – police chiefs, corporate executives, royalty – turns out to be a drug user. The war on drugs has conspicuously failed on its own terms: it has not reduced the prevalence of drugs in society, or the harms they cause, or the criminal economy they feed. But it has also, at a deeper level, become incoherent. What is a drug these days?

Consider, for example, the category of stimulants, into which the majority of ‘legal highs’ are bundled. In Nixon’s day there was, on the popular radar at least, only ‘speed’: amphetamine, manufactured by biker gangs for hippies and junkies. This unambiguously criminal trade still thrives, mostly in the more potent form of methamphetamine: the world knows its face from the US TV series Breaking Bad, though it is at least as prevalent these days in Prague, Bangkok or Cape Town. But there are now many stimulants whose provenance is far more ambiguous.

Pharmaceuticals such as modafinil and Adderall have become the stay-awake drugs of choice for students, shiftworkers and the jet-lagged: they can be bought without prescription via the internet, host to a vast and vigorously expanding grey zone between medical and illicit supply. Traditional stimulant plants such as khat or coca leaf remain legal and socially normalised in their places of origin, though they are banned as ‘drugs’ elsewhere. La hoja de coca no es droga! (the coca leaf is not a drug) has become the slogan behind which Andean coca-growers rally, as the UN attempts to eradicate their crops in an effort to block the global supply of cocaine. Meanwhile, caffeine has become the indispensable stimulant of modern life, freely available in concentrated forms such as double espressos and energy shots, and indeed sold legally at 100 per cent purity on the internet, with deadly consequences. ‘Legal’ and ‘illegal’ are no longer adequate terms for making sense of this hyperactive global market.

The unfortunate term ‘legal highs’ reflects this confusion. It has become a cliché to note its imprecision: most of the substances it designates are not strictly legal to sell, while at the same time it never seems to include the obvious candidates – alcohol, caffeine and nicotine. The phrase hasn’t quite outgrown its apologetic inverted commas, yet viable alternatives are thin on the ground: ‘novel psychoactive substance’ (NPS), the clunky circumlocution that is preferred in drug-policy circles, is unlikely to enter common parlance. ‘Legal highs’, for all its inaccuracies, points to a zone beyond the linguistic reach of the war on drugs, that fervid state of mind in which any separation between ‘drugs’ and ‘illegal’ seems like a contradiction in terms. Then again, if that conceptual link breaks down, what does become of the old idea of drugs? When the whiff of criminality finally disperses, what are we left with?

I said ‘old idea’, but the word ‘drug’, at least in the sense that has been familiar throughout our lifetimes, turns out to be a recent coinage, peculiar to the 20th century. The word itself is, of course, centuries old: as a general term for any medication or chemical remedy, it dates back to the 14th century. But its more specific sense – as in ‘drug addict’, ‘drug control’ or ‘drug culture’ – can be dated quite precisely to the years around 1900. And on examination, it proves to be a curious hybrid, bridging two quite separate meanings.

The first is psychoactivity. A ‘drug’ is a substance that acts on the mind, changing the way we think or feel. But this descriptive meaning also carries a strong suggestion of judgment, less easily defined but unmistakably negative. ‘Drug’, in this sense, is a label to be avoided. Thus, according to the industries that produce and promote them, alcohol and tobacco are not drugs; cannabis advocates insist it is not a drug but a herb; and LSD enthusiasts say that it is not a drug but a sacrament. Indigenous users of coca, betel nut or ayahuasca are appalled at the suggestion these substances might be drugs. A cup of tea is psychoactive, but we would only call it a drug if we wished to make a point. An indeterminate white powder bought off the internet, on the other hand, might be legal, but it is undoubtedly still a drug.

Before the 20th century, it would have been difficult to express this idea. Many of today’s ‘drugs’, such as cannabis, cocaine and morphine, were sold in any high-street pharmacy. ‘Heroin’, for instance, emerged in 1898 as Bayer Pharmaceuticals’ new brand of over-the-counter cough medicine. Did the authorities simply turn a blind eye to the dangers that these substances posed? They did not: opium was classified as a poison because of its overdose risk, and cannabis was known to cause mental disturbance in some users. Yet these properties did not confer any exceptional status. And why should they? Even today, there are still plenty of prescription medicines that are toxic, habit-forming or that have deliriant side-effects. What made the drug-drugs special? In the 20th century, they came to be defined by their illegality, but of course they could not have been created by it. Only once certain hostile perceptions about drugs were in place could it make sense to ban them. What caused the perceptions?

We might start with the temperance movement. In the 19th century, alcohol was being recognised as a causal factor in all sorts of social ills, and so temperance campaigns promoted sobriety as the path to personal health, moral virtue and social respectability. Progressive social reformers joined forces with doctors and religious authorities to condemn the habitual intoxication of previous generations. Other intoxicating drugs might not have presented such a widespread problem, but they all got swept up in the same mixture of medical, moral and social opprobrium.

Global trade, meanwhile, made imported drugs such as opium and cocaine cheap and abundant; industry refined them into newly potent forms, which an energetic and largely unregulated business sector advertised and distributed to a booming consumer market. At the same time, the hypodermic syringe was transforming medical practice. It allowed doctors – and, increasingly, the general public – to inject large quantities of pure and potentially dangerous opiates such as morphine. This brought a breakthrough in pain relief, but also new risks such as abscesses and blood poisoning and, for some patients, compulsive and self-destructive overuse.

By the late 19th century, consumer groups were campaigning against the heavy doses of opiates and cocaine concealed in patent medicines, and doctors were diagnosing addiction as a medical pathology with serious social consequences. The first uses of ‘drug’ in its modern sense date from this era: in its earliest occurrences, it stood as an abbreviation of phrases such as ‘addictive drug’ or ‘dangerous drug’. Doctors advised governments and the public that injections of powerful narcotics should be confined to professionals. Use without medical supervision was classified as ‘abuse’.

Largely couched in medical terms as it was, the whole notion of ‘drugs’ carried moral and cultural implications from the start. Within the temperance debate, intoxication was an evil in itself and abstinence a corresponding virtue. Also, a good many of the substances that caused concern in the West were associated with immigrant communities: opium in the Chinese districts of San Francisco or London’s docklands, cocaine among the black communities of the southern US. In the racially charged debates of the day, these substances were presented as the ‘degenerate habits’ of ‘inferior races’, a ‘plague’ or ‘contagion’ that might infect the wider population. Such ideas might no longer be explicit, but the drug concept certainly carries a murky sense of the foreign and alien even now. That’s why it rarely applies to the psychoactive substances that we see as part of normal life, whether caffeine in the west, coca in the Andes, or ayahuasca in the Amazon.

During the first years of the 20th century, opium, morphine and cocaine became less socially acceptable, rather as tobacco has in our era. Their use was now viewed through the prism of medical harm, and their users correspondingly started to seem feckless or morally weak. The drugs themselves became, in a sense, ‘legal highs’: not technically prohibited but retreating into the shadows, available only under the counter or from those in the know. And then, once their sale was formally banned in the years around the Great War, ‘drugs’ became a term with legal weight: a specified list of substances that were not merely medically dangerous or culturally foreign, but confined to the criminal classes.

The banning of drugs occasioned strikingly little public debate, certainly compared with the prohibition of alcohol in the US. Then again, the ‘drug problem’ was pretty marginal at that point, and confined to subcultures (ethnic, bohemian, criminal) without a public voice. The only organised resistance to this new language of condemnation came from the pharmaceutical industry, concerned that its legitimate trade was being tarnished by unfortunate associations. What’s now the American Pharmacists Association, pressured by its major corporate sponsors such as Johnson & Johnson, complained about the casual use of terms such as ‘drug evil’, ‘drug fiend’ and ‘drug habit’, and lobbied newspapers to specify the drugs in question as ‘narcotics’ or ‘opiates’.

But ‘drugs’ was too vague and too useful to replace with more precise terms. It conveyed not simply particular chemicals, but a moral position on the use of them by certain people and for certain purposes. This position was eventually enshrined in the legal frameworks that emerged to prohibit them. The 1961 UN Single Convention on Narcotic Drugs, the founding document of the international drug laws, is unique among UN conventions in using the word ‘evil’ to describe the problem it seeks to address.

Legislators celebrated the 1961 Convention as the culmination of a 50-year battle to prohibit drugs, a battle that had begun at the Hague Opium Conference of 1911. Yet with hindsight, 1961 was the moment at which the consensus around the evils of drugs began to fracture. An adventurous postwar generation, the first to be raised as truly global consumers, was awakening to the realisation that alcohol was not the world’s only intoxicant. An international underground was beginning to spread news of hashish-smoking in Morocco and LSD synthesised in Swiss laboratories, as well as Benzedrine pills that propelled truck drivers through the night, and hallucinogenic mushrooms in Mexican mountain villages. For many, the resounding denunciations of drugs as dangerous, foreign and criminal no longer rang quite true. Within a booming youth culture, controlled substances were becoming the talismans of a new morality, an entire view of life that valorised pleasure, experiment and self-discovery.

In a sense, Nixon’s war on drugs was lost before it was even announced. It could have succeeded only by uniting an already polarised society in the belief that drugs were a genuine threat to civilisation, and that there was a genuine possibility of returning to a world without them. These propositions grew ever harder to sell over the intervening decades, as drug use became increasingly normal, while the vast sums of money spent trying to control it not only failed to reduce it, but actually created a global criminal market on a scale that Nixon could never have imagined.

The problem is not just one of unintended consequences. As the war on drugs has dragged on, the medical, moral and cultural certainties that interlocked so tightly to create the very concept of ‘drugs’ have been drifting out of focus. In medical terms, the category rested on a clear distinction between sanctioned ‘use’ and criminal ‘abuse’. Yet today’s consumers are in practice free to make this distinction themselves. The arrival of online pharmacies means we can all take our chances with the prescription drugs of our choice: generic, pirated, off-label, out of date or semi-legitimately dispensed by doctors and pharmacists on the other side of the world. As a result, the line between pharmaceutical and illicit drugs is blurring. Recent studies in the US have found opiate users moving from prescription drugs such as OxyContin and Vicodin to street heroin and back again, depending on price and availability. As new ‘legal highs’ with opiate-like effects come on-stream, any such line may eventually become impossible to draw.

Within the pharmaceutical industry as a whole, other pressures and trends are conspiring to soften the distinction between recreation and medicine, ‘feeling good’ and ‘feeling better’. Smart drugs and nootropics promise to make us feel ‘better than well’; the broadening of psychiatric diagnoses to encompass conditions such as low self-esteem and social anxiety opens the door to new ‘feel-good’ drugs designed to enhance confidence and happiness. Pop-science catchphrases such as ‘serotonin-booster’ might apply equally to antidepressants or to MDMA. At the cutting edge of brain research, neural network studies are pointing the way towards implants for deep-brain stimulation or brain-embedded fibre-optic cables: a brave new world in which moods and perceptions might be controlled electronically and drugs, good or bad, would be redundant.

At the same time, the cultural landscape in which ‘drugs’ were defined is receding from view. Nixon launched his war on drugs in a country where even cannabis was a profoundly alien substance to almost everybody over the age of 30; today, most Westerners below retirement age recognise drugs, for better or worse, as part of the culture in which they grew up. We have long been comfortable global consumers, seeking out the novel and exotic in everything from food to travel, music to spirituality; our appetite for intoxicants participates in this pursuit of novel sensations, and is explicitly linked to it by corporate advertising that uses the visual lexicon of mind-expanding drugs to sell us everything from energy drinks to smartphones. ‘Drugs’, in its original sense, drew on a reflexive distaste for the culturally alien. This distaste has itself become alien to the inhabitants of the 21st century.

As drugs have swirled into this kaleidoscope of lifestyle and consumer choices, the identity of the ‘drug user’ has slipped out of view. A unitary class of ‘drugs’ depended for its coherence on an identifiable class of users, clearly recognised as deviant. But drug use has long ceased to function as a reliable indicator of class, ethnicity, age, political views or any criminality beyond itself. Plenty of drug users self-identify with confidence these days and, if conspicuous drug ‘scenes’ are easily located, the majority of drug use nevertheless takes place outside them. Buying and selling, the point of greatest visibility and risk for the user, has been rendered virtual: the shady street deals of the past can now be conducted online via PayPal or bitcoin, the incriminating package delivered through the letterbox in an innocuous Jiffy bag.

Though its medical and cultural underpinnings might be shifting, the category of drugs is still firmly defined by the law. At their margins, the drug laws could be starting to reflect the reality of what we might call a post-drug world, but it seems unlikely that they will drive the process. When the drug laws were first passed a century ago, they reflected a cultural shift that had already taken place; we can expect them to be dismantled only after the landscape of a post-drug world is plain for all to see. But even now, it is not hard to discern in outline. Alcohol prohibition, when it eventually collapsed, was superseded by a patchwork of regulatory controls – licensing, insurance, tax – that either existed already or were devised on the basis of pragmatic policy goals.

We can envisage a similar patchwork for a day – however close or distant – when drugs are removed from the ambit of criminal law. In so far as any drug presents medical risks, it requires regulation to minimise them, and a well-established spectrum, from labelling to licensing to prescription, already exists for this purpose. In so far as they constitute a luxury market, we might expect them to be taxed. As with alcohol, in some jurisdictions they might remain illegal by broad popular consent. The prohibition of drugs, including alcohol, was an emergency measure that overrode the logic of pragmatism. The alternative is not another leap in the dark, but a return to the routine regulatory calculus.

But what lies beyond the idea of ‘drugs’ itself? The simple answer is that there is nothing to replace. Behind the term lies a disparate group of chemicals whose varied effects – stimulant, narcotic, psychedelic, euphoriant – offer a more accurate language of description. Value-laden terms, both positive and negative, would doubtless emerge to complement them. A post-drug world would require not a new language but the recovery of an older one. The category of ‘drugs’ was an attempt, characteristic of its historical moment, to separate out good chemicals from bad ones. But as we have known since antiquity, good and evil, virtue and vice are not inherent in a plant or a molecule. Pedanius Dioscorides, the great classical authority on medicine, maintained that no substance is intrinsically good: it all depends on the dose at which it is administered, the use to which it is put, and the intentions behind that use. The Greek term pharmakon could mean both a medicine and a poison: there was no such thing as a harmless remedy, since anything with the power to heal also had the power to harm. All drugs, psychoactive or otherwise, are a technology, a prosthetic that extends our physical and mental reach. Like so many of the other technologies that are transforming our world, their benefits and dangers must ultimately be understood as extensions of ourselves.

James Tracy Answers Questions About Conspiracy Theories


By Jaime Ortega

Source: The Daily Journalist

James Tracy teaches courses at Florida Atlantic University examining the relationship between commercial and alternative news media and socio-political issues and events.

1. There is a certain danger in the way conspiracy theories have eroded social media, especially on such platforms as YouTube.  Do people distrust mainstream television, radio, and print media?

First of all, we have to seriously think about what we mean by “conspiracy theories” before delving into such a discussion. What are the term’s origins? How and why is it used? Without nailing these things down at the outset, any discussion of such communicative and sociopolitical dynamics tends toward the nonsensical and eventually becomes absorbed in the very discourse it is seeking to examine or critique.

A cursory look at reportage and commentary in major US news media from the late 1800s through the 1950s indicates that the term “conspiracy theory” was used only sporadically, in stories on criminal and court proceedings. In the late 1960s, however, there was a major spike in usage of the term, specifically in items discussing criticism of the Warren Commission Report—President Lyndon Johnson’s commission mandated to investigate the assassination of President John F. Kennedy. On January 4, 1967, the Central Intelligence Agency issued a memorandum that became known as Document 1035-960. The communique was directed at the Agency’s foreign bureaus and recommended the deployment of the term by “media assets” to counter critics of the Warren Commission. The main strategy involved suggesting that such individuals and their inquiries were flawed by slipshod methods and ulterior motives. The then-foremost Warren Commission critic and JFK assassination researcher Mark Lane was even referenced in the document.

This document was indicative of an apparent strategy, via press and public relations maneuvers, to undermine New Orleans District Attorney Jim Garrison’s then-fledgling investigation of the assassination. 1035-960 explained quite rightly that the CIA had a substantial investment in the credibility of the Warren Report. Press reportage of Garrison’s ongoing probe revealed a heavy bias from the very outlets that had long been compromised by Agency-friendly owners, editors, and reporters. These included the NBC and CBS networks, in addition to Time and Newsweek magazines, where the disparaging coverage of Garrison and his inquiry reached truly farcical proportions.

Though such outlets repeatedly and vociferously decried him as a “conspiracy theorist,” a corrupt and opportunistic politician, and even mentally deranged, Garrison has been vindicated by the historical record. For example, we now know, through copious records released as a result of the John F. Kennedy Assassination Records Review Board, that the CIA was intimately involved in the assassination and cover-up, as were other US government agencies. Yet the same news media that denounced Garrison almost fifty years ago still tout the legitimacy of the Warren Commission Report.

Since the Garrison episode, but in an especially pronounced fashion over the past twenty years, the conspiracy theory label has been routinely mobilized by major corporate media to denigrate honest and intelligent individuals who bring forth important questions on vital events and issues. Keep in mind that most major media still have strong ties to the US intelligence and military communities. With this in mind, a rational citizenry has an obligation to scrutinize what is reported and analyzed in corporate media, and to balance those observations and conclusions by considering the reportage of foreign and independent alternative media. In this regard the Internet provides a wealth of opportunity. One need only exercise the fundamental principles of logic to locate and assess quality information and research.

At the end of the day, what we have in the “conspiracy theory/theorist” label is a psychological warfare weapon that has, from the perspective of its creators, been overwhelmingly effective. Here is a set of words used to threaten, discipline and punish the intellectual class—mainly journalists and academics—who might question or otherwise refuse to toe the party line. Using the term to designate pedestrian skeptics and researchers is redundant. After all, as Orwell said, “The proles don’t count.”

Thus, unless we forthrightly interrogate the phrase and its unfortunate history, we will be prone to the same confusion and misdirection that its originators intended.

2. We did a poll here at The Daily Journalist a few weeks back, and the results indicated that 60% of people believed there was US government involvement in the Boston Marathon bombing, in addition to the events of September 11. When people suspect their own government is involved in these attacks on US soil, what comes to mind?

It is cause for optimism because the US government was almost without question involved in the Boston Marathon bombing and the events of September 11, 2001.  Major media were also complicit in wide-scale public acceptance of the official narrative put forth concerning each incident.

For example, with the Boston bombing the New York Times played a key role in persuading the nation’s professional class and intelligentsia that a terror drill using actors, complete with a multitude of gaffes and outright blunders, was genuine.  In reality there were no severed limbs, no deaths, no injuries from shrapnel—only pyrotechnics and actors responding on cue. This is not only my view, but also that of multiple independent researchers and even former CIA officer Robert David Steele.

The Federal Bureau of Investigation is well known for entrapping and otherwise orchestrating such events to justify its own existence. With the Boston bombing there were numerous federal, state and local agencies involved in an exercise that had been taking place in the city annually over the past few years with a similar scenario. A plan for what would become the Boston Marathon bombing was authored in 2008 by the Director of Boston’s Emergency Medical Services, Richard Serino. Serino was tapped by President Obama in 2009 to become Deputy Administrator of the Federal Emergency Management Agency, and there are photos of him directing the aftermath of the April 15, 2013 “bombing.”

The public is being asked to believe that two Chechen immigrants expertly devised extremely sophisticated and deadly explosives with consumer fireworks, scrap metal and pressure cookers.  No such refractory ordnance was found at the scene because no thorough forensic investigation ever took place.  The entire affair was a photo shoot and an opportunity for federal authorities to gauge public response to a military-style lockdown in a major metropolitan region.

With such a transparently phony event being proffered as “real,” one needs to ask what the other 40% in your poll are actually thinking. One can fool some of the people some of the time, and there is still a significant portion of the population, including those who are highly educated, who cannot imagine their own government could be so corrupt. This is a testament to the continued effectiveness of our educational and media apparatuses, each of which emphasizes an unhistorical worldview and unquestioning deference to authority figures.

3. Modern media seem to have commercialized and sold their soul to sponsors and to the media giants that profit from investments. Is modern-day news a fictional representation of reality? Are journalists allowed to do their job of investigating serious cases? Is there an agenda not to report on stories with higher impact?

If a news media outlet gets most of its revenue from advertising, it is to a significant degree compromised. If its main revenue source is advertising and it is owned by a transnational corporate conglomerate, “compromised” is not a sufficiently powerful term to describe the given outlet’s probable journalistic vulnerabilities. It should be barred from tying the term “journalism” to any of its information-related activities.

When we use the term “transnational corporate conglomerate,” which often denotes companies like News Corp and Viacom, we should include the US and British governments, each of which is in the practice of imperial expansion while either subsidizing or forthrightly funding news media. All such powerful entities understand the importance of concealing, disseminating, and using information to shape public opinion in ways that will be favorable to their corporate and policy interests. Walter Lippmann described how this dynamic played out in World War One. Such powerful corporations and governments shouldn’t even be involved in journalism, unless of course they describe what they are doing in honest and appropriate terms, which is often, as your question suggests, entertainment and public relations masquerading as journalism.

The best journalism today is being produced by independent writers and news media. At present there is a renaissance taking place in this regard because of the internet. Corporate news media don’t want to invest money in true journalism because, as they figure it, it’s a net loss anyway. If major outlets fund investigative journalistic ventures and there’s little impact on readership (and thus advertising revenue), then there’s no return on investment. On the other hand, if such investigative work is genuine and worthwhile, it is often delving into areas that reveal how political or economic power operates, which can bring complaints or retaliation from influential entities. Real investigative journalism from mainstream outlets has been subdued for decades because of this very dynamic.

4. It’s hard not to distrust the government in some cases. Take, for example, the assassination of John F. Kennedy or CIA involvement in the Watergate scandal, to name a few. Does the government have to change its ways for people not to believe in conspiracies?

The US government doesn’t have to care a great deal about what the public thinks so long as it has major news media committed to producing a steady stream of non-journalism and infotainment to distract people from considering the things that really affect their lives. Events such as 9/11 and the Boston Marathon bombing aren’t questioned by such media because those media are more or less part of the operations. As was the case almost 50 years ago with figures such as Mark Lane and Jim Garrison, those asking serious questions and conducting potentially meaningful research are dismissed within the parameters of permissible dissent as “conspiracy theorists,” at least long enough for a majority of the public to stop caring and forget.

What is somewhat new is how the government and psychiatry are now involved in psychologizing the practice of asking questions about, or interrogating, disputed events. In other words, certain interests want to deem “conspiracy theorizing” a mental illness, or otherwise associate it with aberrant and perhaps violent behavior. Ponder ideas that certain forces deem beyond question, and one runs the risk of being institutionalized, losing one’s job, and so on.

We saw this take place in the case of upstate New York school teacher Adam Heller, who, under the direction of the FBI, was involuntarily institutionalized and later fired from his tenured teaching position simply because of private exchanges in which he discussed his views on the Sandy Hook massacre and probable government involvement in weather modification. We have to keep in mind that the use of psychiatry to punish thought crimes was common practice in the darkest days of the Soviet Union. Now it’s emerging here. In this way, government is changing its ways in order to force its own versions of reality on the public.

5. Looking at this from a logical perspective, overall, is it harder to trust the government than the conspiracy theorist?

The US government is responsible for devising and publicizing some of the most outrageous conspiracy theories in modern history while it accuses independent journalists and authors of being conspiracy theorists.  The major political assassinations of the 1960s (JFK, RFK, MLK) were all government operations, and “patsies” were produced with untenable scenarios accompanying the overall events.  The Gulf of Tonkin incident, the Oklahoma City bombing, 9/11, and the Boston Marathon bombing were all “false flag” terror events that were intentionally misrepresented to the American public.  One need look no further than the plans for Operation Northwoods, or the attack on the USS Liberty, to develop a distinct understanding of how certain forces within government regard the public and those who fight their wars.

6. Conspiracy theories spread through social media could have irreparable effects on the future of mainstream news media, because they take up stories where journalists might not have done a good job or gone deep enough in their reporting. When there is distrust, what follows for the future and credibility of most media outlets, particularly if people come to trust media such as YouTube?

Again, we need to be precise. YouTube is a medium with a multitude of “channels,” information, interpretations, and perspectives. Some are potentially reliable and others may be dubious. This is, again, where education and, more specifically, the ability to employ logic and reasoning come to the fore. How can we distinguish good information and analysis from that which is unhelpful or even purposefully misleading?

Many researchers who use YouTube or blogs are sincere in what they are seeking to do, which is to relate ideas and information to a broader public.

They may not be professionally trained journalists, yet they are subject to often profuse commentary and criticism from peers in a given research community examining a particular issue or event. This process of scrutiny frequently yields fruitful exchanges in which new information and insights are collectively revealed. The participants may not have gone to graduate school to study politics or the media, and yet many of these exchanges are far more intense than those that take place between a journalist and her editor as they vet a potential story. There’s something going on there. Of course, this assumes that those involved are serious in their participation, which is usually the case; it ultimately depends on the quality and sincerity of the participants. The comments sections of many mainstream online news outlets, by contrast, can be bereft of serious exchanges.

In my view, certain YouTube channels or blogs are successful and worth checking out as forms of citizen journalism because they have something of substance to offer along the lines described above.

Mainstream commercial journalism has been challenged by counter-forces since at least the early 1990s. An initial challenge came from Hollywood in Oliver Stone’s film JFK. That project incensed many establishment journalists and their institutions because it contested their fundamental investment in, and propagation of, the flawed “lone gunman/magic bullet” explanation of the event ensconced in the Warren Report.

If truth be told, Stone’s screenplay is among the most accurate renderings of the Garrison investigation and the events surrounding the murder itself, because it was based on key works by Colonel L. Fletcher Prouty, journalist Jim Marrs, and Garrison himself. JFK was, in retrospect, the beginning of the last rites of mainstream journalism proper, which sold its soul to protect John Kennedy’s executioners. The advent of the internet, and Gary Webb’s brilliant exposé of the role played by the CIA in the crack cocaine epidemic set against Webb’s excoriation by his own journalistic peers, confirmed corporate journalism’s absolute demise.

7. Do conspiracy theorists form solid opinions about the problems they observe when interpreting raw data, or is such data shaped into propaganda to feed their belief systems?

There is sometimes an undue amount of paranoia among conspiracy researchers that can contribute to flawed observations and analysis. Again, this is where one must use careful discretion to distinguish worthwhile information and evaluation from misguided and poorly conceived study.

Because conspiracy research communities lack the institutional bearings and the specific research theories and traditions of academic schools of thought (which take the shape of “disciplines” or “fields” with often considerable organizational and financial resources), there is a tendency toward infighting and fractiousness. This is much more the case than in academe, where such disagreements, in the rare event they are exhibited, are often subsumed in other actions that enforce ideological conformity. These include the refusal by scholarly organizations and their publications to entertain countervailing analyses and, ultimately, the denial of employment, promotion, tenure, and meaningful professional relationships. Compulsory toleration of peers is entirely absent given the voluntary nature of conspiracy research collectives. At the same time, a critical sense that comes with researching government conspiracies, combined with known attempts by government to “cognitively infiltrate” such research communities, can sometimes lead to unwarranted suspicion of colleagues or public figures and their motives.

8. Since belief in conspiracy theories is higher than ever before, and a lack of education accompanies it, how do you think this will affect the government’s relationship with its citizens, particularly if government credibility vanishes? Could there be a future uprising of people who oppose the government?

As my responses above suggest, I am unconvinced that interest in or acceptance of “conspiracy theories” has any correlation with a lack of intelligence or education. In fact, some recent research suggests that entertaining conspiratorial explanations of reality—meaning that one does not simply take at face value what political leaders offer as explanations of policies or events—is likely indicative of higher intelligence and, simply, of good citizenship.

I’m not sure there is any more credibility left for the government to lose, at least among those inclined to rebel in the first place. I think it’s important for us to keep in mind that the government is regarded by some as a paternal or maternal protector. President Franklin Roosevelt was emblematic of the welfare state—a savior of the common man—even though he further established the banking sector’s control over the country and laid the groundwork for the present technocracy. Since the Roosevelt administration and the aggressive expansion of the government in the post-World War Two era, we have largely had government by cult of personality. For example, Barack Obama is the equivalent of a rock star, never mind his family’s ties to the intelligence community and otherwise opaque background. Like other recent presidents, his personality and charisma supersede public realization of the actual policies and trade deals he is enacting on behalf of his sponsors—mostly powerful, anti-democratic interests.

As this response is written, the United States is arguably being undermined by the Obama administration’s politicization and exploitation of the nation’s immigration policies. The notion that such maneuvers will ultimately change the overall constitution of the American polity is subsumed by Obama’s simple rejoinder, “Let’s give these people a break.” Enough of the population trusts Obama enough to dismiss his critics. Many of those who know better are too afraid of being called either “racists” or “conspiracy theorists.” And so it goes.

We See Things Differently

Bob Marley

Along with William Gibson, Bruce Sterling was at the forefront of the early cyberpunk movement of the 1980s. Like Philip K. Dick, Sterling is empathic and politically astute, and many of his sociological predictions have turned out to be eerily prophetic.

One of my favorite short stories of his, “We See Things Differently”, is also a highlight of the excellent anthology Semiotext(e) SF. What makes it particularly intriguing is that it’s narrated from the point of view of a Muslim journalist visiting post-collapse America on a mission to interview the most influential musician of the era (who happens to have much in common with Bob Marley).

You can read Sterling’s complete story here: http://www.revolutionsf.com/fiction/weseethings/01.html

Or if you prefer, listen to an audiobook version here.

Saturday Matinee: Privilege

Source: Dangerous Minds

‘Privilege’: Peter Watkins’ powerful antidote to 1960s pop hysteria

Set sometime in a none too distant future, Peter Watkins’ debut feature Privilege from 1967 told the story of god-like pop superstar Steven Shorter, who is worshiped by millions and manipulated by a coalition government to keep the youth “off the streets and out of politics.”

Inspired by a story from sitcom writer Johnny Speight (creator of Till Death Us Do Part, which was remade in America as All in the Family), Privilege was an antidote to Swinging Sixties pop naivety. While Speight may have had a more biting satirical tale in mind, screenwriter Norman Bogner, together with director Watkins, made the film a mix of “mockumentary” and political fable, a difficult balance to maintain over a full ninety minutes without falling into parody.

Though the film has its faults, Watkins succeeded overall, presenting the viewer with a selection of set pieces that later influenced scenes in Stanley Kubrick’s A Clockwork Orange, Lindsay Anderson’s O Lucky Man! and Ken Russell’s Tommy.

Watkins also later noted how his film:

….was prescient of the way that Popular Culture and the media in the US commercialized the anti-war and counter-culture movement in that country as well. Privilege also ominously predicted what was to happen in Margaret Thatcher’s Britain of the 1980s – especially during the period of the Falkland Islands War.

On its release, most of the press hated it as Privilege didn’t fit with their naive optimism that pop music would somehow free the workers from their chains and bring peace and love and drugs and fairies at the bottom of the garden, la-de-da-de-dah, no doubt.

In fact Privilege was at the vanguard of a series of similarly styled films (see above) that would come to define the best of British seventies cinema. The movie would also have its fair share of (unacknowledged) influence on pop artists like David Bowie and Pink Floyd, while Patti Smith covered the film’s opening song “Set Me Free.”

What’s also surprising is that the film’s lead, Paul Jones (then better known as the lead singer of Manfred Mann), never became a star. As can be seen from his performance here as Steven Shorter, Jones could have made a good Mick Travis in If…, or Alex in A Clockwork Orange.

Jones went on to make the equally good The Committee, but (shamefully) little work came thereafter apart from reading stories on children’s TV.

Ah, the fickle nature of fame, but perhaps he should have known that from playing Steven Shorter.

Corporatism Stifles Innovation

By James Hall

Source: BATR.org

For an economy to grow and create actual wealth, innovation is a bedrock component in the development of enhanced prosperity. Prosperity is an intriguing concept. Simply making and accumulating money falls short of establishing a successful economic model. This recent report illustrates a prime example. Facebook stock soars, as company briefly passes IBM in market value. “By most measures, Facebook is dwarfed by IBM: With about 7,000 employees, ten-year-old Facebook is on track to garner $12 billion in sales this year. The 103-year-old IBM has more than 400,000 workers and sold almost $100 billion of computer hardware and software in 2013.”

Is Facebook a shining star of innovation? Or is it the product of a society that has given up on working towards a successful economy in which a prosperous middle class partakes in the engine of continued and shared affluence? Conversely, is IBM an innovative inventor, or is it a dinosaur of a previous era?

However you answer such questions, Edmund Phelps, in an opinion article, “Corporatism not capitalism is to blame for inequality,” argues that tangible innovation is in decline.

“Innovation tailed off between 1940 and about 1970, while the top decile’s share of wealth and income began rising in the 1970s.

The causation runs the other way: losses of dynamism have tended to sharpen wealth inequality because it hits workers of modest means more than it hurts the wealthy. Developing new products is labour-intensive. So is producing the capital goods needed to make them. These jobs disappear when innovation stalls.”

An explanation of The Rise and Fall of Western Innovation addresses the plight of economic reality. “It is undeniable that the U.S. economy is not delivering the steadily improving wages and living standards the nation’s residents expect.”

“Mass Flourishing is meant to be Edmund Phelps’s chef d’oeuvre, the capstone to a half century of research into the sources of national wealth.

A decline in the pace of innovation threatens prosperity, in the U.S. and everywhere else.

The main cause of this decline, according to Phelps, is corporatism—the inevitable tendency of businesses, workers, and other interests to band together to protect what they have. In modern economies, he says, corporations, unions, and other interests turn government into an agency for forestalling change and preserving the status quo.”

The noted economic expert and academic Robert J. Shiller, in his article, Why Innovation Is Still Capitalism’s Star, provides a critique of “Mass Flourishing: How Grassroots Innovation Created Jobs, Challenge and Change”.

“Professor Phelps discerns a troubling trend in many countries, however, even the United States. He is worried about corporatism, a political philosophy in which economic activity is controlled by large interest groups or the government. Once corporatism takes hold in a society, he says, people don’t adequately appreciate the contributions and the travails of individuals who create and innovate. An economy with a corporatist culture can copy and even outgrow others for a while, he says, but, in the end, it will always be left behind. Only an entrepreneurial culture can lead.”

Jared Bernstein does not agree, as stated in a response to Professor Shiller, ‘Does the Government Stifle Innovation? I Don’t See It (To the Contrary…)’: “My points are that a) many important innovations have involved government support somewhere along the way, and b) while one could and should worry about waste in this area, I’ve not seen evidence, nor does Shiller provide any, of stifling. …”

Well, that viewpoint sure sounds like Obama’s infamous endorsement of a government corporatism model, “If you’ve got a business — you didn’t build that. Somebody else made that happen.”

When examining the way corporate transnational businesses operate in the real world, their lack of in-house innovation is notorious. These gigantic corporatist carnivores excel at mergers and acquisitions in their quest to monopolize markets. In these endeavors, their best friend is government cronyism and favoritism.

The essay, Blaming Capitalism for Corporatism, makes an important point.

“In various ways, corporatism chokes off the dynamism that makes for engaging work, faster economic growth, and greater opportunity and inclusiveness. It maintains lethargic, wasteful, unproductive, and well-connected firms at the expense of dynamic newcomers and outsiders, and favors declared goals such as industrialization, economic development, and national greatness over individuals’ economic freedom and responsibility. Today, airlines, auto manufacturers, agricultural companies, media, investment banks, hedge funds, and much more has at some point been deemed too important to weather the free market on its own, receiving a helping hand from government in the name of the ‘public good.’”

Herein dwells the essential dilemma: defining the genuine “Public Good”. Government often acts according to Mr. Bernstein’s idea of progress, especially in technological terms. Focus on the notion – “economically important” – and the language filter that translates it into a benefit for the Corporatocracy matrix.

“While “entrepreneurial culture” will always be essential, many innovations that turned out to be economically important in the US have government fingerprints all over them. From machine tools, to railroads, transistors, radar, lasers, computing, the internet, GPS, fracking, biotech, nanotech—from the days of the Revolutionary War to today—the federal government has supported innovation often well before private capital would risk the investment.”

Corporatism invents methods of greater control and market dominance for select elitist beneficiaries. When government provides seed money to fund research projects, pure science is seldom the objective. As examined in the essay How Government Did (and Didn’t) Invent the Internet, Harry McCracken writes: “I’m even prepared to believe that if the Internet hadn’t been invented at DARPA, the private sector would have stepped in and done the job. But we’ll never know for sure, since it was invented at DARPA.”

Therein lies the paradox: most people are unable to see beyond the guise of so-called progress. Is government involvement in, or outright development of, transhumanist futuristic technology a valid form of innovation, or is it the terminal scourge of central planning and a controlled economy? Surely, under such a system, the “Mass Flourishing” of sharing the wealth will no longer be an issue among the “useless eaters” designated for extinction.

– See more at: http://www.batr.org/corporatocracy/073014.html