Judith Miller’s Blame-Shifting Memoir

By Veteran Intelligence Professionals for Sanity

Source: Consortium News

U.S. intelligence veterans recall the real story of how New York Times reporter Judith Miller disgraced herself and her profession by helping to mislead Americans into the disastrous war in Iraq. They challenge the slick, self-aggrandizing rewrite of history in her new memoir.

MEMORANDUM FOR: Americans Malnourished on the Truth About Iraq

FROM: Veteran Intelligence Professionals for Sanity (VIPS)

SUBJECT: A New “Miller’s Tale” (with apologies to Geoffrey Chaucer)

On April 3, former New York Times journalist Judith Miller published an article in the Wall Street Journal entitled “The Iraq War and Stubborn Myths: Officials Didn’t Lie, and I Wasn’t Fed a Line.” If this sounds a bit defensive, Miller has tons to be defensive about.

In the article, Miller claims, “false narratives [about what she did as a New York Times reporter] deserve, at last, to be retired.” The article appears to be the initial salvo in a major attempt at self-rehabilitation and, coincidentally, comes just as her new book, The Story: A Reporter’s Journey, is to be published today.

In reviewing Miller’s book, her “mainstream media” friends are not likely to mention the stunning conclusion reached recently by the Nobel Prize-winning International Physicians for the Prevention of Nuclear War and other respected groups that the Iraq War, for which she was lead drum majorette, killed one million people. One might think that, in such circumstances – and with bedlam reigning in Iraq and the wider neighborhood – a decent respect for the opinions of mankind, so to speak, might prompt Miller to keep her head down for a while more.

In all candor, after more than a dozen years, we are tired of exposing the lies spread by Judith Miller and had thought we were finished. We have not seen her new book, but we cannot in good conscience leave her WSJ article without comment from those of us who have closely followed U.S. policy and actions in Iraq.

Miller’s Tale in the WSJ begins with a vintage Miller-style reductio ad absurdum: “I took America to war in Iraq. It was all me.” Since one of us, former UN inspector Scott Ritter, has historical experience and technical expertise that just won’t quit, we asked him to draft a few paragraphs keyed to Miller’s latest tale. He shared the following critique:

Miller’s Revisionist History

“Judith Miller did not take America to war in Iraq. Even a journalist with an ego the size of Ms. Miller’s cannot presume to usurp the war power authorities of the President of the United States, or even the now-dormant Constitutional prerogatives of Congress. What she is guilty of, however, is being a bad journalist.

“She can try to hide this fact by wrapping herself in a collective Pulitzer Prize, or citing past achievements like authoring best-selling books. But this is like former Secretary of State Colin Powell trying to remind people about his past as the National Security Advisor for President Reagan or Chairman of the Joint Chiefs of Staff under Presidents George H. W. Bush and Bill Clinton.

“At the end of the day Mr. Powell will be judged not on his previous achievements, but rather on his biggest failure – his appearance before the United Nations Security Council touting an illusory Iraqi weapons-of-mass-destruction threat as being worthy of war. In this same vein, Judith Miller will be judged by her authoring stories for the ‘newspaper of record’ that were questionably sourced and very often misleading. One needs only to examine Ms. Miller’s role while embedded in U.S. Army Mobile Exploitation Team Alpha, hunting for weapons of mass destruction during the 2003 invasion, for this point to be illustrated.

“Miller may not have singlehandedly taken America and the world to war, but she certainly played a pivotal role in building the public case for the attack on Iraq based upon shoddy reporting that even her editor at the New York Times has since discredited – including overreliance on a single source of easy virtue and questionable credibility – Ahmed Chalabi of the Iraqi National Congress. The fact that she chose to keep this ‘source’ anonymous underscores the journalistic malfeasance at play in her reporting.

“Chalabi had been discredited by the State Department and CIA as a reliable source of information on Iraq long before Judith Miller started using him to underpin her front-page ‘scoops’ for the New York Times. She knew this, and yet chose to use him nonetheless, knowing that then Secretary of Defense Donald Rumsfeld was fully as eager to don the swindlers’ magic suit of clothes as was the king in Hans Christian Andersen’s fairy tale. In Ms. Miller’s tale, the fairy-tale clothes came with a WMD label and no washing instructions.

“Ms. Miller’s self-described ‘newsworthy claims’ of pre-war weapons of mass destruction stories often were – as we now know (and many of us knew at the time) – handouts from the hawks in the Bush administration and fundamentally wrong.

“Like her early reporting on Iraq, Ms. Miller’s re-working of history to disguise her malfeasance/misfeasance as a reporter does not bear close scrutiny. Her errors of integrity are hers and hers alone, and will forever mar her reputation as a journalist, no matter how hard she tries to spin the facts and revise a history that is highly inconvenient to her. Of course, worst of all, her flaws were consequential – almost 4,500 U.S. troops and 1,000,000 Iraqis dead.”

Relying on the Mistakes of Others

In her WSJ article, Miller protests that “relying on the mistakes of others and errors of judgment are not the same as lying.” It is almost as though she is saying that if Ahmed Chalabi told her that, in Iraq, the sun rises in the west, and she duly reported it, that would not be “the same as lying.”

Miller appears to have worked out some kind of an accommodation with George W. Bush and others who planned and conducted what the post-World War II Nuremberg Tribunal called the “supreme international crime,” a war of aggression. She takes strong issue with what she calls “the enduring, pernicious accusation that the Bush administration fabricated WMD intelligence to take the country to war.”

Does she not know, even now, that there is abundant proof that this is exactly what took place? Has she not read the Downing Street Memorandum based on what CIA Director George Tenet told the head of British Intelligence at CIA headquarters on July 20, 2002, i.e., that “the intelligence and facts were being fixed around the policy” of making war for “regime change” in Iraq?

Does she not know, even at this late date, that the “intelligence” served up to “justify” attacking Iraq was NOT “mistaken,” but outright fraud, in which Bush had the full cooperation of Tenet and his deputy John McLaughlin? Is she unaware that the Assistant Secretary of State for Intelligence at the time, Carl Ford, has said, on the record, that Tenet and McLaughlin were “not just wrong, they lied … they should have been shot” for their lies about WMD? (See Hubris: The Inside Story of Spin, Scandal, and the Selling of the Iraq War by Michael Isikoff and David Corn.)

Blame Blix

Miller’s tale about Hans Blix in her WSJ article shows she has lost none of her edge for disingenuousness: “One could argue … that Hans Blix, the former chief of the international inspectors, bears some responsibility,” writes Miller. She cherry-picks Blix’s January 2003 statement that “many proscribed weapons and items,” including 1,000 tons of chemical agent, were still “not accounted for.”

Yes, Blix said that on Jan. 27, 2003. But Blix also included this that same day in his written report to his UN superiors, something the New York Times, for some reason, did not include in its report:

“Iraq has on the whole cooperated rather well so far with UNMOVIC in this field. The most important point to make is that access has been provided to all sites we have wanted to inspect and with one exception it has been prompt. We have further had great help in building up the infrastructure of our office in Baghdad and the field office in Mosul. Arrangements and services for our plane and our helicopters have been good. The environment has been workable.

“Our inspections have included universities, military bases, presidential sites and private residences. Inspections have also taken place on Fridays, the Muslim day of rest, on Christmas day and New Year’s day. These inspections have been conducted in the same manner as all other inspections.” [See “Steve M.” writing (appropriately) for “Crooks and Liars” as he corrected the record.]

Yes, there was some resistance by Iraq up to that point. Blix said so. However, on Jan. 30, 2003, Blix made it abundantly clear, in an interview published in The New York Times, that nothing he’d seen at the time justified war. (The byline was Judith Miller and Julia Preston.)

The Miller-Preston report said: “Mr. Blix said he continued to endorse disarmament through peaceful means. ‘I think it would be terrible if this comes to an end by armed force, and I wish for this process of disarmament through the peaceful avenue of inspections,’ he said. …

“Mr. Blix took issue with what he said were Secretary of State Colin L. Powell’s claims that the inspectors had found that Iraqi officials were hiding and moving illicit materials within and outside of Iraq to prevent their discovery. He said that the inspectors had reported no such incidents. …

“He further disputed the Bush administration’s allegations that his inspection agency might have been penetrated by Iraqi agents, and that sensitive information might have been leaked to Baghdad, compromising the inspections. Finally, he said, he had seen no persuasive indications of Iraqi ties to Al Qaeda, which Mr. Bush also mentioned in his speech. ‘There are other states where there appear to be stronger links,’ such as Afghanistan, Mr. Blix said, noting that he had no intelligence reports on this issue.”

Although she co-authored that New York Times report of Jan. 30, 2003, Judith Miller remembers what seems convenient to remember. Her acumen at cherry-picking may be an occupational hazard occasioned by spending too much time with Chalabi, Rumsfeld and other professional Pentagon pickers.

Moreover, Blix’s February 2003 report showed that, for the most part, Iraq was cooperating and the process was working well:

“Since we arrived in Iraq, we have conducted more than 400 inspections covering more than 300 sites. All inspections were performed without notice, and access was almost always provided promptly. In no case have we seen convincing evidence that the Iraqi side knew in advance that the inspectors were coming. …

“The inspections have taken place throughout Iraq at industrial sites, ammunition depots, research centres, universities, presidential sites, mobile laboratories, private houses, missile production facilities, military camps and agricultural sites. …

“In my 27 January update to the Council, I said that it seemed from our experience that Iraq had decided in principle to provide cooperation on process, most importantly prompt access to all sites and assistance to UNMOVIC in the establishment of the necessary infrastructure. This impression remains, and we note that access to sites has so far been without problems, including those that had never been declared or inspected, as well as to Presidential sites and private residences. …

“The presentation of intelligence information by the US Secretary of State suggested that Iraq had prepared for inspections by cleaning up sites and removing evidence of proscribed weapons programmes.

“I would like to comment only on one case, which we are familiar with, namely, the trucks identified by analysts as being for chemical decontamination at a munitions depot. This was a declared site, and it was certainly one of the sites Iraq would have expected us to inspect.

“We have noted that the two satellite images of the site were taken several weeks apart. The reported movement of munitions at the site could just as easily have been a routine activity as a movement of proscribed munitions in anticipation of imminent inspection.”

Blix made it clear that he needed more time, but the Bush administration had other plans. In other words, the war wasn’t Blix’s fault, as Judy Miller suggests. The fault lay elsewhere.

When Blix retired at the end of June 2003, he politely suggested to the “prestigious” Council on Foreign Relations in New York the possibility that Baghdad had actually destroyed its weapons of mass destruction after the first Gulf War in 1991 (as Saddam Hussein’s son-in-law, Hussein Kamel, who had been in charge of the WMD and rocket programs, assured his debriefers when he defected in 1995). Blix then allowed himself an undiplomatic jibe:

“It is sort of fascinating that you can have 100 per cent certainty about weapons of mass destruction and zero certainty about where they are.”

For the Steering Group, Veteran Intelligence Professionals for Sanity (VIPS)

William Binney, former Technical Director, National Security Agency (ret.)

Thomas Drake, former Senior Executive, NSA

Daniel Ellsberg, former State and Defense Department official, associate VIPS

Frank Grevil, former Maj., Army Intelligence, Denmark, associate VIPS

Katharine Gun, former analyst, GCHQ (the NSA equivalent in the UK), associate VIPS

Matthew Hoh, former Capt., USMC, Iraq & Foreign Service Officer, Afghanistan, associate VIPS

Brady Kiesling, former Political Counselor, U.S. Embassy, Athens, resigned in protest before the attack on Iraq, associate VIPS

Karen Kwiatkowski, former Lt. Col., US Air Force (ret.), at Office of Secretary of Defense watching the manufacture of lies on Iraq, 2001-2003.

Annie Machon, former officer, MI5 (the CIA equivalent in the UK), associate VIPS

David MacMichael, former Capt., USMC & senior analyst, National Intelligence Council (ret.)

Ray McGovern, former Capt., Army Infantry/Intelligence & CIA presidential briefer (ret.)

Elizabeth Murray, former Deputy National Intelligence Officer for the Near East, National Intelligence Council (ret.)

Todd E. Pierce, Maj., former U.S. Army Judge Advocate (ret.)

Scott Ritter, former Maj., USMC, former UN Weapons Inspector, Iraq

Coleen Rowley, Division Counsel & Special Agent, FBI (ret.)

Greg Thielmann, former Office Director for Strategic, Proliferation, and Military Affairs in the State Department’s Bureau of Intelligence and Research

Peter Van Buren, former diplomat, Department of State, associate VIPS

Ann Wright, Col., US Army (ret.) & US diplomat (resigned in March, 2003 in opposition to the war on Iraq)

Tsarnaev Guilty of 30 Counts in Boston Bombing Show Trial

show trial (noun; plural: show trials): a judicial trial held in public with the intention of influencing or satisfying public opinion, rather than of ensuring justice.

Yesterday Dzhokhar Tsarnaev was found guilty of all 30 counts he was charged with in the Boston Marathon bombing trial. For those following the case and thinking critically, this came as no surprise – not because of any hard evidence proving Tsarnaev’s guilt, but because on the second day of the trial Tsarnaev’s attorney, Judy Clarke, declared him guilty in her opening statement, saving the state the time and effort of having to prove its case and answer numerous glaring unanswered questions, such as those asked by WhoWhatWhy and 21st Century Wire.

Now that this particular show trial is over, the government and corporate media will attempt to brush all uncomfortable questions under the rug and, as with JFK, Waco, the Oklahoma City bombing, Columbine, 9/11, Sandy Hook, etc., it will be left to independent researchers and journalists to search for the truth.

For more information about the Boston bombing that the government/corporate-stream media has largely ignored, read this compendium of research and analysis from the Memory Hole blog: http://memoryholeblog.com/2014/04/13/boston-marathon-bombing-a-compendium-of-research-and-analysis/

Ferguson and the Logic of Neoliberalism

A Political Economy Premised on Exploitation and Social Repression

By Rob Urie

Source: Counterpunch.org

While the U.S. Department of Justice report on racist policing practices in Ferguson, Missouri provides direct evidence for skeptical Whites that institutional racism is fact, limiting the investigation to Ferguson implausibly delimits the scope of race-based repression in the U.S. Additionally, from slavery to convict leasing to funding the Ferguson city budget with fines and penalties overwhelmingly extracted from poor and middle class Blacks, the economic basis of police repression is isolated in an improbable present. And in fact, the ‘tricks and traps’ used by the Ferguson police for economic extraction closely resemble corporate practices of using contract law, state institutions and monopoly power to take economic resources from those who lack the social power to resist.

A cognitive challenge for White Americans (and ‘conservative’ Blacks) is the distance between facts like police repression in Ferguson and the mythology of capitalist democracy that we live by. Use of the police for economic extraction in Ferguson, for funding the town budget through racial repression, ties state power to economic power within the particular circumstances of American racial and economic history. In a most basic sense this integration reframes the state-market relations claimed to relate capitalism to democracy. More broadly, the TPP and TTIP ‘trade’ deals being pushed by President Obama are a variation on the racist shakedown in Ferguson. Their intent is to replace state power with corporate power while leaving Western states intact to provide state services for the benefit of corporations and the illusion of democratic control.

Discovery of a police ‘black site’ in Chicago, the prevalence of racist violence by the police across the U.S., the return of debtors’ prisons and ‘civil forfeiture’ laws that allow the police to take belongings without evidence of a crime illustrate the growing lawlessness of the police. When tied to illegal surveillance carried out by the NSA, DEA and FBI against citizens and non-citizens alike, and the extra-judicial powers claimed by Mr. Obama, a picture of widespread state lawlessness emerges. When considered in the context of no criminal prosecutions for war crimes against the (George W.) Bush administration or against prominent bankers in the financial and economic debacle of the last decade, a picture of widespread elite lawlessness emerges. Clearly the state, including local police departments, exists for purposes other than enforcing fealty to the law.

Given its supporting economic theories, it is superficially ironic that the resurgence of neo-liberalism since the 1970s has coincided with this growing integration of state and ‘private’ power. Premised on clearly delineated state and market roles, neo-liberalism is, in theory, the economic realm unhindered by state restrictions. This state-market delineation facilitates the facade that capitalism is related to democracy – political freedom in the realm of the political and economic freedom in the realm of the economic. As fact and metaphor, the role of the Ferguson police in using asymmetrical social power to take economic wealth from vulnerable citizens demonstrates the implausibility of this theorized differentiation in the realm of the political. And new debtors’ prisons (link above) have police and the prison system acting as collection agents for payday lenders.

The TPP and TTIP trade deals being pushed by Mr. Obama are designed with analogous levers for extorting wealth. The investor resolution clauses in TTIP have a supranational judiciary ruling on ‘investor’ lawsuits against governments for hypothetical lost profits and taxpayers on the hook for adverse rulings. The relative absence of remaining trade restrictions and tariffs is well covered territory. What remains to be accomplished with these ‘agreements’ is the consolidation of economic power as the power to extract wealth. As with proposals for tradable carbon credits, the ‘product’ of the agreements combines the right to extort by putting forward projects never intended to be built with guarantees against adverse economic developments.

The police in Ferguson used a particular social lever, the residual of slavery, for gratuitous racial repression and for economic extraction. Slavery is a social institution, but it most particularly is an economic institution. It is a social mechanism for accruing the product of slave labor to the slave master. And slavery in the U.S. was ‘legal’ until it wasn’t. Convict leasing was explicit use of ‘the law’ and the judicial system to force poor Blacks to work for little or no pay. ‘The law’ was used as an instrument of economic exploitation and extraction. The push back from Whites and conservative Blacks that the murdered Mike Brown was a criminal because he likely stole a box of cigars takes this same law at face value. This view of the law depends on a similarly improbable separation of political and economic realms as neo-liberal theory.

As political theory might have it, if all of the citizens of Ferguson were intended to benefit from city resources while poor and middle class Blacks were disproportionately forced to pay for them, then that represents economic taking by some citizens for the benefit of others. The racial character of this taking places it in history. The history of Western colonialism, neo-colonialism and imperialism places it in broader internal and external context. And this history is evidence that distinct realms of the economic and the political never described existing circumstance. The practical relevance is that it places the actions of the police in Ferguson, past and pending ‘trade’ agreements and global economic relations in the space where economic and political power act in an integrated social dimension.

The effect is to reframe ‘the law’ in terms of who is committing particular acts rather than the acts being committed. The police in Ferguson can murder with impunity and shake down citizens at their discretion to fund the city budget (and their paychecks) while poor and middle class Blacks are disproportionately murdered and sent to prison for similar acts. What is legal and what isn’t is determined by who has social power, not by the acts themselves. In a racist and classist society the law is codification of class and race interests. If a Black citizen of Ferguson puts a gun to someone’s head and demands their valuables, they are a criminal; but if the same act is committed by a cop, it is within the law. Here events in Ferguson are fact and metaphor – overwhelming evidence (links above) suggests that similar social relations exist across much of the country.

This view of the law has precedent in Richard Nixon’s contention that “when the President does it that means that it is not illegal.” Earlier precedent can be found in Nazi law and in the laws of fascist Italy in the 1930s and 1940s. This isn’t to call anyone who isn’t a self-proclaimed Nazi a Nazi. The precedent lies in the view that the law is the will of a leadership class, be it the Nazi leadership in Germany or city government in Ferguson. One problem with this theory is that it makes the law capricious and ultimately impossible to follow. Race-based law enforcement criminalizes race, not nominally proscribed acts. Stories of the Chicago police department’s black site (link above) have political protesters and poor Blacks accused of no crimes taken there. If people can be arrested without evidence that a crime was committed, then what is the difference in outcomes between committing and not committing crimes?

A relation of neo-liberalism to fascism can be made through replacement of civil governance with corporate governance that subordinates the rights and privileges of civil society to corporate interests. The investor-state dispute mechanisms (link above) being broadened and formally codified in the TTIP trade deal will be used to demand compensation for environmental regulations that keep drinking water safe and limit greenhouse gas emissions, the metaphorical equivalent of threatening to end the planet if we don’t pay up. Civil forfeiture has the police taking valuables they might want at the point of a gun if necessary. The Ferguson police shake down poor Blacks using the law as a weapon. At the same time a ruling elite has immunity from prosecution for well documented crimes.

Much of what is written here was well understood in the 1950s, 1960s and 1970s. It hardly seems an accident that this collective memory was lost to narrow ideological dogma. Across the country property taxes are being cut, with the shortfall partly made up through regressive fees and penalties. This fits the neo-liberal preference for property over labor incomes. And neo-liberal theory has no place for history because all acts within it take place in a temporally isolated present. This dissociates racist policing in Ferguson, Chicago, New York, Detroit and Philadelphia from the roles of the legislature, judiciary, police and prisons in reconstituting the economic exploitation of slavery under the guise of free choice in capitalist democracy. Race is the particular case in America; class is the broader expression of economic power.

The tension between the DOJ report (link above) on racist policing in Ferguson and the Obama administration’s broad support for neo-liberal policies will likely produce a tight circle drawn around events in Ferguson. Already supporters of police repression are raising the argument that the words “hands up, don’t shoot” were never spoken. What bearing does precise wording have on a Black child being murdered by the police? And why wouldn’t Black youth have a right to be hostile to police who, as the DOJ report concludes, are running a racist shakedown operation to force poor and middle class Blacks to fund city government? How would White readers react to being harassed, intimidated, disproportionately jailed and forced to pay for the privilege? Ultimately the problem is larger than Ferguson, and social accountability should address a political economy premised on exploitation and social repression.

Rob Urie is an artist and political economist. His book Zen Economics is written and awaiting publication.

 

Pentagon Admits that Israel is a Nuclear Power


By Vladimir Platov

Source: Land Destroyer

In early February, the Pentagon declassified reports on Israel’s nuclear weapons program, prepared in 1987. According to these documents, Israeli scientists were capable of producing a hydrogen bomb by that time. Although these facts were largely ignored by the Western media, some analysts have noticed that the declassification of these secret reports suspiciously coincided with the recent, rapidly deteriorating relationship between the US and Israel. As Tel Aviv started a massive campaign of criticism aimed at the Obama administration, both in the US media and worldwide, the Pentagon’s revelations were quick to follow. It is also noteworthy that only the facts on the Israeli nuclear weapons program were declassified, while information regarding similar activities of NATO allies (in particular Italy, France, and West Germany) remained locked up.

The 386-page report, “Critical technology assessment in Israel and NATO nations,” was prepared in 1987 by the Institute for Defense Analyses (IDA) and examined the capabilities Israel already had at that time to produce nuclear weapons. In particular, the study underlines the fact that Israel’s secret laboratories, engaged in the development of an atomic bomb, were on par with the key US nuclear research laboratories: Los Alamos, Lawrence Livermore and Oak Ridge National Laboratory.

According to this report, by the mid-80s Israeli experts had reached the same stage in the research and development of various nuclear weapons, the hydrogen bomb in particular, that American scientists had reached between 1955 and 1960. IDA experts were courageous enough to recognize that in certain areas the Israelis had even surpassed their American colleagues of the time, in particular those working in the “Raphael” Israeli secret lab, who had managed to propose unconventional ways of achieving nuclear fission that would have allowed them to create their own version of the hydrogen bomb.

Under these conditions, one should revisit The Sunday Times article “Revealed: The Secrets of Israel’s Nuclear Arsenal” that was published on October 5, 1986. This article was based on the revelations of an Israeli nuclear scientist – Mordechai Vanunu – who disclosed the secrets of the Israeli nuclear program.

This 31-year-old Israeli expert on nuclear weapons had, by 1986, already been working for 10 years in a secret atomic center, Machon 2, which was built under the Negev desert and had been producing nuclear weapons since the mid-60s. The facts and pictures that Vanunu presented to international experts caught them by surprise. They had to admit that by the mid-80s Israel had become the sixth nuclear power after the United States, Soviet Union, Britain, France and China, although it did its best to conceal this information. Even by that time the Israeli nuclear potential was much higher than that of India, Pakistan and South Africa, which were also suspected of developing nuclear weapons.

According to this whistle-blowing Israeli scientist, by the mid-80s the Jewish state had been secretly producing plutonium for more than 20 years, eventually reaching a rate of 40 kilograms annually, enough to produce 10 nuclear bombs. During the 80s, Israel also came into possession of equipment necessary for the production of thermonuclear devices. In particular, a French-built reactor with a capacity of 26 megawatts was upgraded by Israeli scientists to reach a capacity of 150 megawatts, which allowed Israel to engage in the production of plutonium.

Nuclear specialists commenting on this article in The Sunday Times confirmed that by 1986 Israel could have had 100-200 nuclear bombs.

This information provides a reasonable understanding of Israel’s commitment to maintaining a nuclear monopoly in the Middle East at whatever cost by blocking its potential adversaries from acquiring nuclear weapons. In particular, Tel Aviv recklessly launched air strikes on the Osirak nuclear reactor in Iraq on June 7, 1981, and now takes a similarly hostile approach toward the Iranian nuclear program.

In light of these publications and official US recognition of Israel as a nuclear power that has been in possession of nuclear devices for more than half a century, it is imperative for international players to begin a discussion of this issue in the UN, forcing Israel to sign the Treaty on the Non-Proliferation of Nuclear Weapons and placing the shipment of such weapons in and out of Tel Aviv under rigid international control.

The Real American Exceptionalism


From Torture to Drone Assassination, How Washington Gave Itself a Global Get-Out-of-Jail-Free Card

By Alfred W. McCoy

Source: TomDispatch.com

“The sovereign is he who decides on the exception,” said conservative thinker Carl Schmitt in 1922, meaning that a nation’s leader can defy the law to serve the greater good. Though Schmitt’s service as Nazi Germany’s chief jurist and his unwavering support for Hitler from the night of the long knives to Kristallnacht and beyond damaged his reputation for decades, today his ideas have achieved unimagined influence. They have, in fact, shaped the neo-conservative view of presidential power that has become broadly bipartisan since 9/11. Indeed, Schmitt has influenced American politics directly through his intellectual protégé Leo Strauss who, as an émigré professor at the University of Chicago, trained Bush administration architects of the Iraq war Paul Wolfowitz and Abram Shulsky.

All that should be impressive enough for a discredited, long dead authoritarian thinker. But Schmitt’s dictum also became a philosophical foundation for the exercise of American global power in the quarter century that followed the end of the Cold War. Washington, more than any other power, created the modern international community of laws and treaties, yet it now reserves the right to defy those same laws with impunity. A sovereign ruler should, said Schmitt, discard laws in times of national emergency. So the United States, as the planet’s last superpower or, in Schmitt’s terms, its global sovereign, has in these years repeatedly ignored international law, following instead its own unwritten rules of the road for the exercise of world power.

Just as Schmitt’s sovereign preferred to rule in a state of endless exception without a constitution for his Reich, so Washington is now well into the second decade of an endless War on Terror that seems the sum of its exceptions to international law: endless incarceration, extrajudicial killing, pervasive surveillance, drone strikes in defiance of national boundaries, torture on demand, and immunity for all of the above on the grounds of state secrecy. Yet these many American exceptions are just surface manifestations of the ever-expanding clandestine dimension of the American state. Created at the cost of more than a trillion dollars since 9/11, the purpose of this vast apparatus is to control a covert domain that is fast becoming the main arena for geopolitical contestation in the twenty-first century.

This should be (but seldom is considered) a jarring, disconcerting path for a country that, more than any other, nurtured the idea of, and wrote the rules for, an international community of nations governed by the rule of law. At the First Hague Peace Conference in 1899, the U.S. delegate, Andrew Dickson White, the founder of Cornell University, pushed for the creation of a Permanent Court of Arbitration and persuaded Andrew Carnegie to build the monumental Peace Palace at The Hague as its home. At the Second Hague Conference in 1907, Secretary of State Elihu Root urged that future international conflicts be resolved by a court of professional jurists, an idea realized when the Permanent Court of International Justice was established in 1920.

After World War II, the U.S. used its triumph to help create the United Nations, push for the adoption of its Universal Declaration of Human Rights, and ratify the Geneva Conventions for humanitarian treatment in war. If you throw in other American-backed initiatives like the World Health Organization, the World Trade Organization, and the World Bank, you pretty much have the entire infrastructure of what we now casually call “the international community.”

Breaking the Rules

Not only did the U.S. play a crucial role in writing the new rules for that community, but it almost immediately began breaking them. After all, despite the rise of the other superpower, the Soviet Union, Washington was by then the world sovereign and so could decide which should be the exceptions to its own rules, particularly to the foundational principle for all this global governance: sovereignty. As it struggled to dominate the hundred new nations that started appearing right after the war, each one invested with an inviolable sovereignty, Washington needed a new means of projecting power beyond conventional diplomacy or military force. As a result, CIA covert operations became its way of intervening within a new world order where you couldn’t or at least shouldn’t intervene openly.

All of the exceptions that really matter spring from America’s decision to join what former spy John Le Carré called that “squalid procession of vain fools, traitors… sadists, and drunkards,” and embrace espionage in a big way after World War II. Until the creation of the CIA in 1947, the United States had been an innocent abroad in the world of intelligence. When General John J. Pershing led two million American troops to Europe during World War I, the U.S. had the only army on either side of the battle lines without an intelligence service. Even though Washington built a substantial security apparatus during that war, it was quickly scaled back by Republican conservatives during the 1920s. For decades, the impulse to cut or constrain such secret agencies remained robustly bipartisan, as when President Harry Truman abolished the CIA’s predecessor, the Office of Strategic Services (OSS), right after World War II or when President Jimmy Carter fired 800 CIA covert operatives after the Vietnam War.

Yet by fits and starts, the covert domain inside the U.S. government has grown stealthily from the early twentieth century to this moment. It began with the formation of the FBI in 1908 and Military Intelligence in 1917. The Central Intelligence Agency followed after World War II along with most of the alphabet agencies that make up the present U.S. Intelligence Community, including the National Security Agency (NSA), the Defense Intelligence Agency (DIA), and last but hardly least, in 2004, the Office of the Director of National Intelligence. Make no mistake: there is a clear correlation between state secrecy and the rule of law — as one grows, the other surely shrinks.

World Sovereign

America’s irrevocable entry into this covert netherworld came when President Truman deployed his new CIA to contain Soviet subversion in Europe. This was a continent then thick with spies of every stripe: failed fascists, aspirant communists, and everything in between. Introduced to spycraft by its British “cousins,” the CIA soon mastered it in part by establishing sub rosa ties to networks of ex-Nazi spies, Italian fascist operatives, and dozens of continental secret services.

As the world’s new sovereign, Washington used the CIA to enforce its chosen exceptions to the international rule of law, particularly to the core principle of sovereignty. During his two terms, President Dwight Eisenhower authorized 104 covert operations on four continents, focused largely on controlling the many new nations then emerging from centuries of colonialism. Eisenhower’s exceptions included blatant transgressions of national sovereignty such as turning northern Burma into an unwilling springboard for abortive invasions of China, arming regional revolts to partition Indonesia, and overthrowing elected governments in Guatemala and Iran. By the time Eisenhower left office in 1961, covert ops had acquired such a powerful mystique in Washington that President John F. Kennedy would authorize 163 of them in the three years that preceded his assassination.

As a senior CIA official posted to the Near East in the early 1950s put it, the Agency then saw every Muslim leader who was not pro-American as “a target legally authorized by statute for CIA political action.” Applied on a global scale and not just to Muslims, this policy helped produce a distinct “reverse wave” in the global trend towards democracy from 1958 to 1975, as coups — most of them U.S.-sanctioned — allowed military men to seize power in more than three-dozen nations, representing a quarter of the world’s sovereign states.

The White House’s “exceptions” also produced a deeply contradictory U.S. attitude toward torture from the early years of the Cold War onward. Publicly, Washington’s opposition to torture was manifest in its advocacy of the U.N. Universal Declaration of Human Rights in 1948 and the Geneva Conventions in 1949. Simultaneously and secretly, however, the CIA began developing ingenious new torture techniques in contravention of those same international conventions. After a decade of mind-control research, the CIA actually codified its new method of psychological torture in a secret instructional handbook, the “KUBARK Counterintelligence Interrogation” manual, which it then disseminated within the U.S. Intelligence Community and to allied security services worldwide.

Much of the torture that became synonymous with the era of authoritarian rule in Asia and Latin America during the 1960s and 1970s seems to have originated in U.S. training programs that provided sophisticated techniques, up-to-date equipment, and moral legitimacy for the practice. From 1962 to 1974, the CIA worked through the Office of Public Safety (OPS), a division of the U.S. Agency for International Development that sent American police advisers to developing nations. Established by President Kennedy in 1962, in just six years OPS grew into a global anti-communist operation with over 400 U.S. police advisers.  By 1971, it had trained more than a million policemen in 47 nations, including 85,000 in South Vietnam and 100,000 in Brazil.

Concealed within this larger OPS effort, CIA interrogation training became synonymous with serious human rights abuses, particularly in Iran, the Philippines, South Vietnam, Brazil, and Uruguay. Amnesty International documented widespread torture, usually by local police, in 24 of the 49 nations that had hosted OPS police-training teams. In tracking torturers across the globe, Amnesty seemed to be following the trail of CIA training programs. Significantly, torture began to recede when America again turned resolutely against the practice at the end of the Cold War.

The War on Terror 

Although the CIA’s authority for assassination, covert intervention, surveillance, and torture was curtailed at the close of the Cold War, the terror attacks of September 2001 sparked an unprecedented expansion in the scale of the intelligence community and a corresponding resurgence in executive exceptions.  The War on Terror’s voracious appetite for information produced, in its first decade, what the Washington Post branded a veritable “fourth branch” of the U.S. federal government with 854,000 vetted security officials, 263 security organizations, over 3,000 private and public intelligence agencies, and 33 new security complexes — all pumping out a total of 50,000 classified intelligence reports annually by 2010.

By that time, one of the newest members of the Intelligence Community, the National Geospatial-Intelligence Agency, already had 16,000 employees, a $5 billion budget, and a massive nearly $2 billion headquarters at Fort Belvoir, Virginia — all aimed at coordinating the flood of surveillance data pouring in from drones, U-2 spy planes, Google Earth, and orbiting satellites.

According to documents whistleblower Edward Snowden leaked to the Washington Post, the U.S. spent $500 billion on its intelligence agencies in the dozen years after the 9/11 attacks, including annual appropriations in 2012 of $11 billion for the National Security Agency (NSA) and $15 billion for the CIA. If we add the $790 billion expended on the Department of Homeland Security to that $500 billion for overseas intelligence, then Washington had spent nearly $1.3 trillion to build a secret state-within-the-state of absolutely unprecedented size and power.

As this secret state swelled, the world’s sovereign decided that some extraordinary exceptions to civil liberties at home and sovereignty abroad were in order. The most glaring came with the CIA’s now-notorious renewed use of torture on suspected terrorists and its setting up of its own global network of private prisons, or “black sites,” beyond the reach of any court or legal authority. Along with piracy and slavery, the abolition of torture had long been a signature issue when it came to the international rule of law. So strong was this principle that the U.N. General Assembly voted unanimously in 1984 to adopt the Convention Against Torture. When it came to ratifying it, however, Washington dithered on the subject until the end of the Cold War when it finally resumed its advocacy of international justice, participating in the World Conference on Human Rights at Vienna in 1993 and, a year later, ratifying the U.N. Convention Against Torture.

Even then, the sovereign decided to reserve some exceptions for his country alone. Only a year after President Bill Clinton signed the U.N. Convention, CIA agents started snatching terror suspects in the Balkans, some of them Egyptian nationals, and sending them to Cairo, where a torture-friendly autocracy could do whatever it wanted to them in its prisons. Former CIA director George Tenet later testified that, in the years before 9/11, the CIA shipped some 70 individuals to foreign countries without formal extradition — a process dubbed “extraordinary rendition” that had been explicitly banned under Article 3 of the U.N. Convention.

Right after his public address to a shaken nation on September 11, 2001, President George W. Bush gave his staff wide-ranging secret orders to use torture, adding (in a vernacular version of Schmitt’s dictum), “I don’t care what the international lawyers say, we are going to kick some ass.” In this spirit, the White House authorized the CIA to develop that global matrix of secret prisons, as well as an armada of planes for spiriting kidnapped terror suspects to them, and a network of allies who could help seize those suspects from sovereign states and levitate them into a supranational gulag of eight agency black sites from Thailand to Poland or into the crown jewel of the system, Guantánamo, thus eluding laws and treaties that remained grounded in territorially based concepts of sovereignty.

Once the CIA closed the black sites in 2008-2009, its collaborators in this global gulag began to feel the force of law for their crimes against humanity. Under pressure from the Council of Europe, Poland started an ongoing criminal investigation in 2008 into its security officers who had facilitated the CIA’s secret prison in the country’s northeast. In September 2012, Italy’s supreme court confirmed the convictions of 22 CIA agents for the illegal rendition of Egyptian exile Abu Omar from Milan to Cairo, and ordered a trial for Italy’s military intelligence chief on charges that would bring him a 10-year prison sentence. In 2012, Scotland Yard opened a criminal investigation into MI6 agents who rendered Libyan dissidents to Colonel Gaddafi’s prisons for torture, and two years later the Court of Appeal allowed some of those Libyans to file a civil suit against MI6 for kidnapping and torture.

But not the CIA. Even after the Senate’s 2014 Torture Report documented the Agency’s abusive tortures in painstaking detail, there was no move for either criminal or civil sanctions against those who had ordered torture or those who had carried it out. In a strong editorial on December 21, 2014, the New York Times asked “whether the nation will stand by and allow the perpetrators of torture to have perpetual immunity.” The answer, of course, was yes. Immunity for hirelings is one of the sovereign’s most important exceptions.

As President Bush finished his second term in 2008, an inquiry by the International Commission of Jurists found that the CIA’s mobilization of allied security agencies worldwide had done serious damage to the international rule of law. “The executive… should under no circumstance invoke a situation of crisis to deprive victims of human rights violations… of their… access to justice,” the Commission recommended after documenting the degradation of civil liberties in some 40 countries. “State secrecy and similar restrictions must not impede the right to an effective remedy for human rights violations.”

The Bush years also brought Washington’s most blatant repudiation of the rule of law. Once the newly established International Criminal Court (ICC) convened at The Hague in 2002, the Bush White House “un-signed” or “de-signed” the U.N. agreement creating the court and then mounted a sustained diplomatic effort to immunize U.S. military operations from its writ. This was an extraordinary abdication for the nation that had breathed the concept of an international tribunal into being.

The Sovereign’s Unbounded Domains

While Presidents Eisenhower and Bush decided on exceptions that violated national boundaries and international treaties, President Obama is exercising his exceptional prerogatives in the unbounded domains of aerospace and cyberspace.

Both are new, unregulated realms of military conflict beyond the rubric of international law and Washington believes it can use them as Archimedean levers for global dominion. Just as Britain once ruled from the seas and postwar America exercised its global reach via airpower, so Washington now sees aerospace and cyberspace as special realms for domination in the twenty-first century.

Under Obama, drones have grown from a tactical Band-Aid in Afghanistan into a strategic weapon for the exercise of global power. From 2009 to 2015, the CIA and the U.S. Air Force deployed a drone armada of over 200 Predators and Reapers, launching 413 strikes in Pakistan alone, killing as many as 3,800 people. Every Tuesday inside the White House Situation Room, as the New York Times reported in 2012, President Obama reviews a CIA drone “kill list” and stares at the faces of those who are targeted for possible assassination from the air.  He then decides, without any legal procedure, who will live and who will die, even in the case of American citizens. Unlike other world leaders, this sovereign applies the ultimate exception across the Greater Middle East, parts of Africa, and elsewhere if he chooses.

This lethal success is the cutting edge of a top-secret Pentagon project that will, by 2020, deploy a triple-canopy space “shield” from stratosphere to exosphere, patrolled by Global Hawk and X-37B drones armed with agile missiles.

As Washington seeks to police a restless globe from sky and space, the world might well ask: How high is any nation’s sovereignty? After the successive failures of the Paris flight conference of 1910, the Hague Rules of Aerial Warfare of 1923, and Geneva’s Protocol I of 1977 to establish the extent of sovereign airspace or restrain aerial warfare, some puckish Pentagon lawyer might reply: only as high as you can enforce it.

President Obama has also adopted the NSA’s vast surveillance system as a permanent weapon for the exercise of global power. At the broadest level, such surveillance complements Obama’s overall defense strategy, announced in 2012, of cutting conventional forces while preserving U.S. global power through a capacity for “a combined arms campaign across all domains: land, air, maritime, space, and cyberspace.” In addition, it should be no surprise that, having pioneered the war-making possibilities of cyberspace, the president did not hesitate to launch the first cyberwar in history against Iran.

By the end of Obama’s first term, the NSA could sweep up billions of messages worldwide through its agile surveillance architecture. This included hundreds of access points for penetration of the Worldwide Web’s fiber optic cables; ancillary intercepts through special protocols and “backdoor” software flaws; supercomputers to crack the encryption of this digital torrent; and a massive data farm in Bluffdale, Utah, built at a cost of $2 billion to store yottabytes of purloined data.

Even after angry Silicon Valley executives protested that the NSA’s “backdoor” software surveillance threatened their multi-trillion-dollar industry, Obama called the combination of Internet information and supercomputers “a powerful tool.” He insisted that, as “the world’s only superpower,” the United States “cannot unilaterally disarm our intelligence agencies.” In other words, the sovereign cannot sanction any exceptions to his panoply of exceptions.

Revelations from Edward Snowden’s cache of leaked documents in late 2013 indicate that the NSA has conducted surveillance of leaders in some 122 nations worldwide, 35 of them closely, including Brazil’s president Dilma Rousseff, former Mexican president Felipe Calderón, and German Chancellor Angela Merkel. After her forceful protest, Obama agreed to exempt Merkel’s phone from future NSA surveillance, but reserved the right, as he put it, to continue to “gather information about the intentions of governments… around the world.” The sovereign declined to say which world leaders might be exempted from his omniscient gaze.

Can there be any question that, in the decades to come, Washington will continue to violate national sovereignty through old-style covert as well as open interventions, even as it insists on rejecting any international conventions that restrain its use of aerospace or cyberspace for unchecked force projection, anywhere, anytime? Extant laws or conventions that in any way check this power will be violated when the sovereign so decides. These are now the unwritten rules of the road for our planet.  They represent the real American exceptionalism.

Alfred W. McCoy is professor of history at the University of Wisconsin-Madison. A TomDispatch regular, he is the author of Torture & Impunity: The U.S. Doctrine of Coercive Interrogation, among other works.

Confronting Industrialism


By Derrick Jensen

Source: Counterpunch.org

One of the most important questions confronting us is this: what should we do about this culture’s industrial wastes, from greenhouse gases to pesticides to ocean microplastics?

Can the capitalists clean up the messes they create? Or is the whole industrial system beyond reform? The answers become clear with a little context.

Let’s start the discussion of context with two riddles that aren’t very funny.

Q: What do you get when you cross a long drug habit, a quick temper, and a gun?

A: Two life terms for murder, with earliest release date 2026.

And,

Q: What do you get when you cross a large corporation, two nation states, 40 tons of poison, and at least 8,000 dead human beings?

A: Retirement with full pay and benefits. Warren Anderson, CEO of Union Carbide. Bhopal.

The point of these riddles is not merely that when it comes to murder and many other atrocities, different rules apply to the poor than to the rich. Nor is it merely that ‘economic production’ is a get-out-of-jail-free card for whatever atrocities the ‘producers’ commit, be it genocide, gynocide, ecocide, slaving, mass murder, mass poisoning, or any of the rest.

Do we even care? We already know they don’t …

The point here is that this culture is clearly not particularly interested in cleaning up its toxic messes. Obviously, or it wouldn’t keep making them. It wouldn’t allow those who make these messes to do so with impunity. It certainly wouldn’t socially reward those who make them.

This may or may not be the appropriate time to mention that this culture has created, for example, 14 quadrillion (yes, quadrillion) lethal doses of Plutonium 239, which has a half-life of over 24,000 years, which means that after a mere 100,000 years – more than four half-lives – there will still be nearly a quadrillion lethal doses left: Yay!

And socially reward them it does. I could have used a whole host of examples other than Warren Anderson, who was playing on the back nine long after he should have been hanging by the neck (he was charged with culpable homicide in India and declared a fugitive, but the US refused to extradite him).

There’s Tony Hayward, who oversaw BP’s devastation of the Gulf of Mexico and who was ‘punished’ for this with a severance package worth well over $30 million. Or we could throw another couple of riddles at you, which are really the same riddles:

Q: What do you call someone who puts poison in the subways of Tokyo?

A: A terrorist.

Q: What do you call someone who puts poison (cyanide) into groundwater?

A: A capitalist: CEO of a gold mining corporation.

We could talk about frackers, who make money as they poison groundwater. We could talk about anyone associated with Monsanto. You can add your own examples. I’d say you can ‘choose your poison’ but of course you can’t. Those are chosen for you by those doing the poisoning.

Civilization’s ability to overcome our native common sense

I keep thinking about one of the most fundamentally sound (and fundamentally disregarded) statements I’ve ever read. After Bhopal, one of the doctors trying to help survivors stated that corporations (and by extension, all organizations and individuals) “shouldn’t be permitted to make poison for which there is no antidote.”

Please note, by the way, that far from having antidotes, nine out of ten chemicals used in pesticides in the US haven’t even been thoroughly tested for (human) toxicity.

Isn’t that something we were all supposed to learn by the time we were three? Isn’t it one of the first lessons our parents are supposed to teach us? Don’t make a mess you can’t clean up!

Yet that is precisely the foundational motivator of this culture. Sure, we can use fancy phrases to describe the processes of creating messes we have no intention of cleaning up, and in many cases cannot clean up.

And so we get phrases like ‘developing natural resources’, or ‘sustainable development’, or ‘technological progress’ (like the invention and production of plastics, the bathing of the world in endocrine disruptors, and so on), or ‘mining’, or ‘agriculture’, or ‘the Green Revolution’, or ‘fueling growth’, or ‘creating jobs’, or ‘building empire’, or ‘global trade’.

But physical reality is always more important than what we call it or how we rationalize it. And the truth is that this culture has been based from the beginning to the present on privatizing benefits and externalizing costs. In other words, on exploiting others and leaving messes behind.

Hell, they call them ‘limited liability corporations’ because a primary purpose is to limit the legal and financial liability of those who benefit from the actions of corporations for the harm these actions cause.

Internalizing insanity

This is no way to run a childhood, and it’s an even worse way to run a culture. It’s killing the planet. Part of the problem is that most of us are insane, having been made so by this culture. We should never forget what RD Laing wrote about this insanity:

“In order to rationalize our industrial-military complex [and I would say this entire way of life, including the creation of messes we have neither the interest nor capacity to clean up], we have to destroy our capacity to see clearly any more what is in front of, and to imagine what is beyond, our noses. Long before a thermonuclear war can come about, we have had to lay waste to our own sanity.

“We begin with the children. It is imperative to catch them in time. Without the most thorough and rapid brainwashing their dirty minds would see through our dirty tricks. Children are not yet fools, but we shall turn them into imbeciles like ourselves, with high IQs, if possible.”

We’ve all seen this too many times. If you ask any reasonably intelligent seven-year-old how to stop global warming caused in great measure by the burning of oil and gas and by the destruction of forests and prairies and wetlands, this child might well say, “Stop burning oil and gas, and stop destroying forests and prairies and wetlands!”

If you ask a reasonably intelligent thirty-year-old who works for a ‘green’ high tech industry, you’ll probably get an answer that primarily helps the industry that pays his or her salary.

Part of the brainwashing process of turning us into imbeciles consists of getting us to identify more closely with – and care more about the fate of – this culture rather than the real physical world. We are taught that the economy is the ‘real world’, and that the physical world is merely a place from which to steal and on which to dump externalities.

Does nature have to adapt to us? Or us to nature?

Most of us internalize this lesson so completely that it becomes entirely transparent to us. Even most environmentalists internalize this. What do most mainstream solutions to global warming have in common? They all take industrialism as a given, and the natural world as having to conform to industrialism.

They all take empire as a given. They all take overshoot as a given. All of this is literally insane, in terms of being out of touch with physical reality. The real world must always be more important than our social system, in part because without a real world you can’t have any social system whatsoever. It’s embarrassing to have to write this.

Upton Sinclair famously said that it is difficult to get a man to understand something when his salary depends upon his not understanding it.

I would add that it’s hard to make people understand something when the benefits they accrue through their exploitative and destructive way of life depend on it. So we suddenly get really stupid about the waste products produced by this culture.

When people ask how we can stop polluting the oceans with plastic, they don’t really mean, “How can we stop polluting the oceans with plastic?” They mean, “How can we stop polluting the oceans with plastic and still have this way of life?”

And when they ask how we can stop global warming, they really mean, “How can we stop global warming without stopping this level of energy usage?” When they ask how we can have clean groundwater, they really mean, “How can we have clean groundwater while we continue to use and spread all over the environment thousands of useful but toxic chemicals that end up in groundwater?”

The answer to all of these is: you can’t.

First we must recover our sanity. Then we must act

As I’ve been writing this essay about the messes caused by this culture, there’s an allegorical image I can’t get out of my mind. It’s of a half-dozen Emergency Medical Technicians putting bandages on a person who has been assaulted by a knife-wielding psychopath.

The EMTs are trying desperately to stop this person from bleeding out. It’s all very tense and suspenseful as to whether they’ll be able to staunch the flow of blood before the person dies.

But here’s the problem: as these EMTs are applying bandages as fast as they can, the psychopath is continuing to stab the victim. Worse, the psychopath is making wounds faster than the EMTs are able to bandage them. And the psychopath is paid very well for stabbing the victim, while most of the EMTs are bandaging in their spare time.

And in fact the health of the economy is based on how much blood the victim loses – as in this culture, where economic production is measured by the conversion of living landbase into raw materials, e.g., living forests into two-by-fours, living mountains into coal.

How do we stop the victim from bleeding out? Any child can tell you. And any sane person who cares more about the health of the victim than the health of the economy that is based on dismembering the victim can tell you. The first thing you need to do is stop the stabbing. No amount of bandages will make up for an assault that is ongoing, indeed, one that is accelerating.

What do we do about this culture’s fabrication of industrial wastes? The first step is to stop their production. Actually, the first step is to regain our sanity: to transfer our loyalty away from the psychopaths and toward the victim – in this case, toward the planet that is our only home.

Once we do that, everything else is technical. How do we stop them? We stop them.

Derrick Jensen is a member of the Steering Committee of Deep Green Resistance.

 

Get Big or Get Out: Complex Systems and Reciprocal Ecocide


By Gary Gripp

Source: The Hampton Institute

For a while now I have been saying that the complex systems which supposedly serve us actually serve themselves: they call the tune and we dance as directed. But I haven’t offered many examples of what I mean. Now I would like to remedy that by offering some examples of how systems may interlock with each other and lock us into their individual and collective agendas. I will jump in – not at the beginning, but in medias res – with the world I was born into, in the middle of World War Two.

At this time, the bomb factories here in America were going great guns, thanks to a discovery made in Germany in the early part of the twentieth century by Fritz Haber. The Haber process, for which Haber received the Nobel Prize, is a way of turning atmospheric nitrogen into ammonia, which can in turn be used as the basis for making explosives and bombs. Munitions factories built amazing industrial capacity during the war years, but then, finally, the war came to an end. With such industrial infrastructure already in place, but with the cash flow drying up, these corporate-owned businesses had every incentive to keep their interconnected systems of extraction, production, and distribution chugging along, which, thanks to the Green Revolution, they were able to do by cranking out artificial fertilizer, pesticides, and other agro-chemicals.

During these same war years, the scientist Norman Borlaug was developing hybrid strains of wheat and other grains that required intensive irrigation and just the kinds of artificial fertilizers that these erstwhile bomb factories were now turning out. And thus began a revolution in land use, a population explosion, and a movement of people off the land and into cities. The institution of the small family farm, where parents and children worked together to make a living off the land, would come to be seen as an archaic way of life, and the American Secretary of Agriculture Ezra Taft Benson would intone the new mantra of “Get big or get out” of agriculture. A later Secretary of Agriculture, Earl Butz, would enjoin those still on the farm to “plant fencerow to fencerow,” getting rid even of kitchen gardens and the trees that acted as windbreaks and thermal insulation, in order to maximize “efficiency” in this industrial model of economies of scale. In this atmosphere of postwar boom times, America’s once small-scale farming became large-scale agribusiness, where giant machines, artificial fertilizer, hybridized seed, and imported irrigation water became the order of the day. This trend continues, as less than two percent of Americans now make their living farming, while genetic engineering is touted as a technological breakthrough that will “feed the world.”

Many, many systems are involved in this revolution that has changed the face of America in our lifetimes. Two cultural institutions that preceded this land-use and societal revolution are the corporation and the banks, and both of these have served as important drivers of the way things played out on the ground and in people’s lives. What keeps the banks in business is the culturally established convention of interest on debt. Money is borrowed to accomplish some desired project with the understanding (in the form of a contract) that all the money will be paid back, plus a large bonus to the lenders: interest paid on debt is a huge factor in our economic system and a driver of continual growth. The system imperative of interest on debt is in fact a pyramid scheme that requires new players to enter the game in order to keep the system going. Likewise, the corporation, with its imperative to earn profits for shareholders above any societal or other value, requires management decisions that maximize profits while minimizing costs and risks to that single class of people. And this imperative is also a driver of growth. The “get big or get out” injunction applies not only to farmers; it applies at nearly all levels of business.
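The growth pressure that interest-bearing debt creates can be seen with a back-of-the-envelope compound-interest calculation. The principal and rate below are hypothetical, chosen only for illustration:

```python
# Hypothetical figures, for illustration only: compounding shows why
# debt-at-interest demands continual economic growth in the aggregate.
principal = 100_000.0   # amount borrowed (assumed)
rate = 0.05             # 5% annual interest (assumed)
years = 20

owed = principal * (1 + rate) ** years
print(f"borrowed: {principal:,.0f}")
print(f"owed after {years} years: {owed:,.0f}")  # roughly 265,330
```

At 5%, the debt more than doubles in twenty years, so borrowers as a group must expand production, or recruit new borrowers, just to service existing loans. This is the sense in which the essay calls interest on debt a driver of continual growth.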

Between them, Fritz Haber and Norman Borlaug are credited with allowing the human population to grow to twice the size it could have reached without the intervention of the systems their innovations set in motion. A burgeoning population in turn drives all the systems to do more and more: more extraction, more production, more distribution, more consumption, along with more waste products coming out of each one of these systems of the global industrial economy. Add to this the revolution of rising expectations, where everybody wants to live in the lavish way we do, and you have a recipe for using up every last asset of a living planet, until it is stripped down to a lifeless cinder. This is the direction we are headed in, and we are not slowing this juggernaut down; in fact, it is accelerating, as we add more people, more systems, and more drivers pushing us on at breakneck speed. Toward what?

But let’s go back and consider some other implications of bomb factories becoming a driver of industrial-style agriculture. We built one hell of a lot of dams in the second half of the twentieth century in order to supply irrigation water to chemically enhanced crops on machine-carved, corporately owned land. Redistributing the natural flow of rivers has been less than a boon to fish populations, including migratory fish like steelhead and salmon. Runoff of nitrogen-rich chemical fertilizers has created dead zones in the Gulf of Mexico, and everywhere else this form of agriculture (temporarily) flourishes. All the little scraps of land that were once saved for wildlife by the small farmer have been effectively removed in the name of efficiency. The relationship that the small farmer once had to the land is all but gone now, replaced by a relationship to massive machines, and to the banks. All those small farmers who have lost their land to the economics of giant-sized agribusiness have surrendered a life they loved for something far less satisfactory, and how much less satisfactory is attested to by many a farmer suicide, sometimes by drinking the poisonous chemicals used to saturate the land. And the land itself is now all but dead, its living topsoil blown and washed away, and what is left depleted of its living, soil-building organisms. When the organisms that build soil health are drenched with poisons and leached away, the plants that grow in this diminished medium are robbed of much of their nutritional value, including many of the vitamins, minerals, and phytonutrients that are so important to human health. Deprived of full nutrition, the health of the people suffers, as we now see all around us.

This is just a sketch of some of the interconnected systems that impinge on our lives. I personally don’t see much opportunity here for human interventions that are going to make meaningful change, and the reasons for this are several. The systems we find ourselves entangled in all seem to share the same imperative for growth, and this growth manifests in several ways. One way it manifests is in the growth of medium-sized corporations into global mega-corporations, through mergers, buyouts, and hostile takeovers, resulting in an ever greater concentration of power in the hands of a few. This is a trend that became evident in postwar America and has only intensified in the years since, despite lip service to anti-trust laws designed to prevent monopolistic distortions of a market that calls itself “free.” The explosion of the human population, from 1.6 billion at the twentieth century’s start to 6.1 billion at its end, is another obvious example of the growth imperative gone off the rails. What may not be so obvious is how feedback loops between our population growth and the complex systems in which we were, and are, entangled have made each a driver of the other’s growth: they were, and are, mutually reinforcing causes, and mutually reinforcing effects, of synergistic runaway growth. I personally don’t see that we humans have the clear option of disconnecting ourselves from these systems that both serve us and cause us to serve them. Something from outside this entangled relationship could break these very sticky bonds: something big, like Mother Nature, for instance. Short of such an intervention, I don’t expect to see our trajectory changing direction anytime soon.
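The population figures quoted above imply a steady exponential trend. A quick sketch, using only the two endpoint figures from the text, recovers the average growth rate and the doubling time it implies:

```python
import math

# Endpoints taken from the text: 1.6 billion (1900) to 6.1 billion (2000).
start, end, years = 1.6, 6.1, 100

rate = math.log(end / start) / years   # average continuous growth rate
doubling_time = math.log(2) / rate     # years to double at that rate

print(f"average growth rate: {rate:.2%} per year")   # about 1.34%
print(f"doubling time: {doubling_time:.0f} years")   # about 52 years
```

About 1.34% a year sounds modest, but it doubles the population roughly every half-century, which is the runaway growth the author describes.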

Does Work Undermine our Freedom?


By John Danaher

Source: Philosophical Disquisitions

Work is a dominant feature of contemporary life. Most of us spend most of our time working. Or if not actually working then preparing for, recovering from, and commuting to work. Work is the focal point, something around which all else is organised. We either work to live, or live to work. I am fortunate in that I generally enjoy my work. I get paid to read, write and teach for a living. I can’t imagine doing anything else. But others are less fortunate. For them, work is drudgery, a necessary means to a more desirable end. They would prefer not to work, or to spend much less time doing so. But they don’t have that option. Society, law and economic necessity all conspire to make work a near-essential requirement. Would it be better if this were not the case?

In recent months, I have explored a number of affirmative answers to this question. Back in July 2014, I looked at Joe Levine’s argument for the right not to work. This argument rested on a particular reading of the requirements of Rawlsian egalitarianism. In brief, Levine felt that Rawlsian neutrality with respect to an individual’s conception of the good life required some recognition of a right to opt out of paid labour. Then, in October 2014, I offered my own general overview of the anti-work literature, dividing the arguments up into two categories: intrinsic badness arguments (which claimed that there was something intrinsically bad about work) and opportunity cost arguments (which claimed that even if work was okay, non-work was better).

In this post, I want to explore one more anti-work argument. This one comes from an article by Julia Maskivker entitled “Employment as a Limitation on Self-Ownership”. Although this argument retreads some of the territory covered in previous posts, I think it also offers some novel insights, and I want to go over them. I do so in several parts. First, I offer a brief overview of Maskivker’s central anti-work argument. As we’ll see, this argument has two contentious premises, each based on three claims about freedom and justice. I then spend the next three sections looking at Maskivker’s defence of those three claims. I will then focus on some criticisms of her argument, before concluding with a general review.

1. Maskivker’s Anti-Work Argument
I’ll actually start with a mild criticism. Although I see much of value in Maskivker’s article, and although I learned a lot from it, I can’t honestly say that I enjoyed reading it. Large parts of it felt disorganised, needlessly convoluted, and occasionally repetitious. Although she introduced a central normative claim early on — viz. a claim about the need for effective control self-ownership — later parts of her argument seemed to stray from the strict requirements of that concept. This left me somewhat confused as to what her central argument really was. So what follows is very much my own interpretation of things and should be read with that caveat in mind.

Anyway, let’s start by clarifying what it is we are arguing against. In the past, I have lamented the fact that definitions of work are highly problematic. They are often value-laden, and prone to the sins of under- and over-inclusiveness. I’m not sure that there can ever be a perfect definition of work, one that precisely captures all the phenomena of interest to those making the anti-work critique. Nevertheless, we need something more concrete, and Maskivker duly provides. She defines work as paid labour. That is, labour that is undertaken for the purposes of remuneration. This definition is simple and covers what is central to her own argument. My only complaint is that it may need to be expanded to cover forms of labour that are not directly remunerated but are undertaken in the hope of eventually being remunerated (e.g. the work of entrepreneurs in the early stages of a business, or the work of unpaid interns). But this is just a quibble.

With that definition in place, we can proceed to Maskivker’s anti-work argument itself. That argument is all about the freedom-undermining effect of work. Although this argument is initially framed in terms of a particular conception of freedom as effective control self-ownership (something I previously covered when looking at the work of Karl Widerquist), I believe it ends up appealing to a much broader and more ecumenical understanding of freedom. As follows:

  • (1) If a phenomenon undermines our freedom, then it is fundamentally unjust and we should seek to minimise or constrain it.
  • (2) A phenomenon undermines our freedom if: (a) it limits our ability to choose how to make use of our time; (b) it limits our ability to be the authors of our own lives; and/or (c) it involves exploitative/coercive offers.
  • (3) Work, in modern society, (a) limits our ability to choose how to make use of our time; (b) limits our ability to be the authors of our own lives; and (c) involves an exploitative/coercive offer.
  • (4) Therefore, work undermines our freedom.
  • (5) Therefore, work is fundamentally unjust and should be minimised or constrained.
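The argument as reconstructed is deductively valid, and this can be checked mechanically. The sketch below is my own formalisation (not Maskivker’s or Danaher’s): it brute-forces the truth table and confirms that whenever premises (1)-(3) all hold, conclusions (4) and (5) follow.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# a, b, c: work limits time-use / limits self-authorship / is an exploitative offer
# f: work undermines freedom; u: work is unjust and should be constrained
valid = True
for a, b, c, f, u in product([False, True], repeat=5):
    p1 = implies(f, u)            # (1) undermining freedom entails injustice
    p2 = implies(a or b or c, f)  # (2) any one condition suffices to undermine freedom
    p3 = a and b and c            # (3) work satisfies all three conditions
    if p1 and p2 and p3 and not (f and u):
        valid = False             # premises true but (4)/(5) false: invalid

print(valid)  # True: no valuation makes the premises true and a conclusion false
```

So any dispute with the argument must target the premises rather than the inference, which is exactly where the discussion below goes.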

You could alter this, as Maskivker seems to wish to do, by turning it into an argument for a right not to work. Though I will discuss this general idea later on, I’m avoiding that construal of the argument for the simple reason that it requires additional explanation. Specifically, it requires some explanation of what it would mean to have a right not to work, and some answer to the question as to why it is felt that we do not currently have a right not to work (after all, we can choose not to work, can’t we?). I think time would be better spent focusing specifically on the freedom-undermining effect of work and its injustice, rather than on the precise social remedy to this problem.

What about the rest of the argument? Well, premise (1) is a foundational normative assumption, resting on the value of freedom in a liberal society. We won’t question it here. Premise (2) is crucial because it provides more detail on the nature of freedom. Although Maskivker may argue that the three freedom-undermining conditions mentioned in that premise are all part of what she means by effective control self-ownership, I think it better not to take that view. Why? Because I think some of the conditions appeal to other concepts of freedom that are popular among other political theorists, and it would be better not to limit the argument to any particular conception. Moving on, premise (3) is the specific claim about the freedom-undermining effect of work. Obviously, this too is crucial to Maskivker’s overall case. The two conclusions then follow.

Let’s go through the two central premises in more detail. Let’s do so in an alternating structure. That is to say, by looking at the defence of condition (a) in premise (2) and premise (3); then at the defence of condition (b) in premise (2) and premise (3); and finally at the defence of condition (c) in premise (2) and premise (3).

2. Freedom, Time and the 24/7 Workplace
Condition (a) is all about the need for an ability to choose how to use our time. Maskivker defends this requirement by starting out with a Lockean conception of freedom, one that is often beloved by libertarians. The Lockean conception holds that individuals are free in the sense that they have self-ownership. That is to say: they have ownership rights over their own bodies, and the fruits of their labour. This fundamental right of self-ownership in turn implies a bundle of other rights (e.g. the right to transfer the fruits of one’s labour to another). Any system of political authority must respect this fundamental right and its necessary implications.

The problem for Maskivker is that many fans of self-ownership limit themselves to a formal, rather than an effective, conception of that right. In other words, they simply hold, in the abstract, that individuals have this right of self-ownership and that they should not be interfered with when exercising it. They don’t think seriously about what it would take to ensure that everybody was really able to effectively enjoy this right. If they did this, they would realise that there are a number of social and evolutionary imbalances and injustices in the ability of individuals to exercise self-ownership. They would realise that, in order to effectively enjoy the right, individuals will also need access to resources.

Now, to be fair, some writers do recognise this. And they highlight the need for things like adequate education and healthcare in order for the right to self-ownership to be effective. Maskivker agrees with their approach. The originality of her contribution comes in its insistence on the importance of time as an essential resource for self-ownership. Time is, in many ways, the ultimate resource. Time is necessary for everything we do. Everything takes time. Other skills and abilities that we may have, only really have value when we have the time to exercise them. Furthermore, time is a peculiarly non-manipulable resource. There is a limited amount of time in which we get to act out our lives. This makes it all the more important for people to have access to time.

You can probably see where this is going. The problem with work is that it robs us of time. We need jobs in order to live, and they take up most of our time. Some people argue that the modern realities of work are particularly insidious in this regard. Jonathan Crary, in his slightly dystopian and alarmist work, 24/7: Late Capitalism and the Ends of Sleep, notes how work has colonised our every waking hour and how it threatens to colonise our sleep too. We are encouraged to make our time more productive, but also to be available to our workplaces at more times of the day, through email or social media. Indeed, the slow death of the regular 9-to-5 workday has, if anything, encouraged work to monopolise more of our time. We have flexible working hours and our work may be more outcome-driven, but the marketplaces are open 24/7 and they demand more outcomes from us. The result is an infiltration of work into every hour of the day.

Some people may not resent this. They may feel that they are living the kind of life they wish to live, that their work is enjoyable, and that it gives them a sense of purpose. But others will feel differently. They will feel that work takes away valuable opportunities to truly express themselves as they wish.

In sum, access to time, and the time-limiting nature of work, are things to think about when designing a scheme of distributive justice. An ability to opt out of work, or to have much less of it in one’s life, may be necessary if we are to have a just society.

3. Freedom and Authorship of One’s Life
There is a related argument to be made here about the ability to choose one’s time. It can be connected to Maskivker’s account of effective self-ownership, but it can also be separated from it. That’s what condition (b) is about. It appeals to a distinctive notion of freedom as the ability to exercise true authorship over one’s life. This is a slightly more metaphysical ideal of freedom, one that joins up with the debate about free will and responsibility.

To understand the idea, we need to think more about the individual who truly enjoys their work. As I suggested at the end of the previous section, you could argue that there is nothing unjust about the current realities of work for such an individual. Granting them more free time, won’t really help them to exercise more effective self-ownership. They are getting what they want from life. Take me for example. I have already said that I enjoy my work, and I have been able to (I think) select a career that best suits my talents and abilities. I’m pretty sure I’m employing the scarce resource of time in a way that allows me to maximise my potential. I’m pretty sure there is nothing fundamentally unjust or freedom-undermining about my predicament.

Nevertheless, Maskivker wants to argue that there is something fundamentally unjust about my predicament. My freedom is not being respected in the way that it should. Despite all my claims about how much I enjoy my work, the reality is that I have to work. I have no real say in the matter. She uses an analogy between starving and fasting to make her point. When a person is starving or fasting, the physical results are often the same: their bodies are being deprived of essential nutrients. But there is something morally distinct about the two cases. The person who fasts has control over what is happening to their body. The person who is starving does not. The person who chooses to fast has authorship over their lives; the person who is starving is having their story written by someone else.

When it comes to our work, there is a sense in which we are all starving, not fasting. We may enjoy it, embrace it and endorse it, but at the end of the day we have to do it. That’s true even in societies with generous welfare provisions, as most of those welfare provisions are conditional upon us either looking for work (and proving that we are doing so), or being in some state of unavoidable disability or deprivation. We are not provided with an “easy out”, or with the freedom we need to become the true authors of our lives. (Maskivker notes that the introduction of a universal basic income could be a game-changer in this regard.)

As I said, in appealing to this notion of self-authorship, Maskivker is touching upon a more metaphysical ideal of freedom. Within the debate about free will, there are those that argue that the ability to do otherwise is essential for having free will. But there are also those (e.g. Harry Frankfurt and John Fischer) who argue that it is not. They sometimes say that being free and responsible simply requires the reflexive self-endorsement of one’s actions and attitudes. The ability to do otherwise is irrelevant. So what Maskivker is arguing is somewhat contentious, at least when considered in light of these other theories of freedom. She claims that her theory better captures the normative ideal of freedom. But there is much more to be said about this issue.

4. Freedom and the Absence of Coercive Offers
The final condition of freedom, condition (c), is probably the most straightforward. It has its origins in the classic liberal accounts of freedom as non-interference by coercion. It is introduced by Maskivker in an attempt to address a possible weakness in the argument thus far. Someone could argue that the mere absence of acceptable alternatives to work is not enough to imply that it undermines our freedom, or that it creates a fundamental injustice.

An analogy might help to make the point. Suppose you are crossing the desert. You have run out of water and are unlikely to make it out alive. As you are literally on your last legs, you come across a man who is selling water from a small stand. He is, however, selling it at an obscene price. It will cost you everything you have to get one litre of water (which will be just enough to make it out). Because of your desperate situation, you hand over everything you have. Was your choice to hand over everything free? Was it just for the man to sell the water at that price? Many would argue “no” because you had no acceptable alternative.

But now consider a variation on this scenario. Suppose that this time the man is selling water at a very low price, well below the typical market rate. It will cost you less than one dollar to get a litre of water. You gratefully hand over the money. Was your choice free this time? Remember, you are still in a desperate state. All that has changed is the price. Nevertheless, there is something less disturbing about this example. Your choice seems more “free”, and the whole scenario seems more just.

The problem with the first case is that the man is exploiting your unfortunate situation. He knows you have no other choice and he wants to take you for everything that you’ve got. The second scenario lacks this feature. In that case, he doesn’t undermine your freedom, or violate some fundamental principle of justice, because he doesn’t exploit your misfortune.

How does this apply to Maskivker’s anti-work argument? Very simply. She claims that work, in the modern world, involves an exploitative bargain. There is no particular agent behind this exploitation. Rather, it is the broader society, with its embrace of the work ethic and its commitment to the necessity of work, that renders the decision to work exploitative:

Demanding fulltime work in exchange for a decent livelihood is comparable to demanding an exorbitant price for a bottle of water in the absence of competition. It leaves the individual vulnerable to the powerful party (society) in the face of the great loss to be suffered if the “offer” as stipulated is not taken (if one opts not to work while not independently wealthy)

(Maskivker 2010)

5. But isn’t the abolition of work impossible?
Thus ends the defence of Maskivker’s central argument. As you can see, her claim is that the modern realities of work are such that they undermine our freedom and create a fundamental injustice in our society. This is because (conjunctively or disjunctively) work monopolises our time and limits our effective self-ownership; the absence of a viable alternative to work prevents us from being the true authors of our lives; and/or society is presenting us with an exploitative bargain: “you had better be working or looking for work, or else…”. You may be persuaded on each of these points. You may agree that a full (positive?) right not to work would be nice. But you may think that it is naive and unrealistic. You may think that it is impossible to really avoid a life of work. Maskivker closes by considering two versions of this “impossibility” objection.

The first, which we might call the “strict impossibility” objection, works something like this:

  • (6) We all have basic needs (food, clothing, shelter etc); without these things we would die.
  • (7) We have to work in order to secure these basic needs.
  • (8) Therefore, we have to work.

Maskivker has a very simple reply to this version of the objection. She holds that premise (7) is false. Not all activities that are conducive to our survival are inevitable. At one point in time, we had to take the furs and hides of animals in order to stay warm enough to survive. We no longer have to do this. The connection between survival and procuring the furs and hides of animals has been severed. The same could happen to the connection between work and our basic needs. Indeed, it is arguable that we no longer need to work all that much to secure our basic needs. There are many labour-saving devices in manufacturing and agriculture (and there are soon to be more) that obviate the need for work. And yet the social demand for work has, for some reason, not diminished. Surely this doesn’t have to be the case? Surely we could allow more machines to secure our basic needs?

The second impossibility objection, which we might call the “collective action” objection, is probably more serious. It holds that while a right not to work might be all well and good, the reality is that if everyone exercised that right, society would not be able to support its implementation. After all, somebody has to pay for the system. Maskivker’s responses to this objection are, in my opinion, somewhat problematic.

She makes one basic point. She says that the existence of a right is not contingent upon whether it may be impossible to recognise it in certain social contexts, or whether universal exercise of that right would lead to negative outcomes. She uses two analogies to support this point. First, she asks us to suppose that there is a universal right to healthcare. She then asks us to imagine that we live in a society in which there is some terrible natural disaster, which places huge strains on the healthcare system. The strains are such that the available resources will not be sufficient to save everyone. Maskivker argues that the universal right to healthcare still exists in this society. The limitations imposed by the natural disaster do not take away people’s rights. Second, she asks us to consider the right not to have children. She then points out that if everyone exercised the right not to have children, it would lead to a bad outcome: humanity would go extinct. Nevertheless, she argues, that this does not mean that the right not to have children does not exist.

In some ways, I accept Maskivker’s point. I agree that a right may exist in the abstract even if its implementation creates problems. But I don’t think that really addresses the collective action objection, and I don’t think her analogies work that well. With regard to the right to healthcare in the disaster zone, I’m inclined to think that the limitations of the available resources would compromise or limit the right to healthcare. And with regard to the right not to have children, I think there is something fundamentally different about the problems that arise when we collectively head towards our own extinction and the problems that might arise if everyone stopped working. In the former case, no individuals would be harmed by the collective exercise of the right: the future generations who would have existed do not exist and cannot be harmed. But in the latter case, there are individuals who might be harmed. For example, if doctors and nurses stopped working, their patients would be harmed. So I’m not sure that Maskivker has really grappled with the collective action objection. I think she tries to sidestep it, but in a manner that will be unpersuasive to its proponents.

6. Conclusion
That brings me to the end of this post. To briefly sum up, Maskivker presents an anti-work argument that focuses on the ways in which work undermines our freedom. She argues that this happens in three ways. First, work robs us of time, which is an essential resource if we are to have effective self-ownership. Second, work prevents us from being the true authors of our lives because there is no acceptable alternative to work (even in societies with social welfare). And third, because work involves an exploitative bargain: we must work, or else.

I think there is much of value in Maskivker’s article. I like how she focuses on time as a resource, one which should be included in any scheme of distributive justice. I also like how she integrates the anti-work critique with certain aspects of the mainstream literature on freedom, self-control and justice. Nevertheless, I fear she dodges the collective action objection to the anti-work position. This is where I think that technology, and in particular a deeper awareness of the drive toward automation and technological unemployment, could be a useful addition to the anti-work critique. But that’s an argument for another day.