Oceania Forever: Rise of the Global Police State


By Patrick Henningsen

Source: Waking Times

Much has been written about the approaching Police State in alternative media. Commentary ranges from various warnings, to shock and outrage, and fear over an impending martial law takeover in North America and Western Europe. It’s hitting us from so many different angles, and yet the mainstream conversation continues to be woefully inadequate in both characterising the situation and offering a remedy.

In order to really understand the modern Police State, we need to explore some very profound and difficult questions. Many people who consider themselves aware think Western society has already reached the tipping point and the deteriorating situation is simply inevitable. If you feel like Winston Smith right about now you aren’t alone.

Prior to the mid-1990s, one might have described the militarisation of public law enforcement as something of a creeping paradigm, but one that was still a long way off. Society explored many aspects of the Police State, both the physical and the Orwellian psychological scenario, through literature and film. American science fiction writer Philip K. Dick penned significant works like The Minority Report, and cinematic hits like Paul Verhoeven’s RoboCop and Terry Gilliam’s Brazil also explored what this dystopic future vision of fascist technocracy might look like. As it turned out, and far from fantasy, countless devices, systems and themes depicted in so many of these supposedly ‘fictional’ classics have since made their way into our day-to-day lives. The dark dream became real.

Unfortunately, as humanity’s freshmen class of the early 21st century, we can no longer afford the intellectual distance enjoyed by previous generations between life today and that blurry, far-off spectre of something that might arrive sometime at some point in the future.

Any modern globalised Police State requires a social engineering framework in order to provide its shape and scope of law enforcement. The latest social engineering blueprint for global technocratic management was unveiled at this year’s 70th United Nations General Assembly in New York City. The ‘new’ agenda (newer than the old one), entitled Agenda 2030,1 hopes to “transform our world for the better by 2030.” Author Michael Snyder from the blog ‘End of The American Dream’ explains: “The entire planet is going to be committing to work toward 17 sustainable development goals and 169 specific sustainable development targets, and yet there has been almost a total media blackout about this…”2

Within its 17 ‘universal goals’, the actual Police State provision for Agenda 2030 can be found within Goal 11, which states how the new global government will, “Make cities and human settlements inclusive, safe, resilient and sustainable.” Translated in technocracy terms, this means more Big Brother tech, smart grid tracking and big data surveillance states.

The age of computerisation and database integration, along with advances in military and crowd-control technology perfected overseas, has enabled a sharp advance toward the Police State. Trying to make sense of ‘it’ is a major challenge, to say the least. In its totality, the control system is both multifaceted and multilayered. It may have been possible to describe it, or even define it, 20, 30 or 40 years ago, as Philip K. Dick and so many others did. Today, as society has already eclipsed the possible, we face a situation whereby the very thing we are trying to describe is woven into the fabric of nearly every aspect of modern social, professional, family, religious and political life.

If you happen to live in one of the technocratic nations, you can’t opt out, nor can you fully repeal the advances already made by the control system. What other options are available?

Firstly, we have to try and understand, from an economic, cultural and political perspective at least, how this control system came to be.

What are its strongest areas? Can we reform those areas? Where is it still emerging? Can those areas be slowed down? What was the political climate that enabled it?

How to Build a Police State

When you observe a modern Police State, the first things you notice will not necessarily be the batons, shields, helmets or MRAPs. Think Switzerland or Singapore. A modern Police State will be neat, clean and efficient. Retail zones will be shiny and feature all the top designer brands. Many of the people you see in public will be well-groomed, well-heeled and beautiful, but often there will be only one political party and a strict public code.

Just like admirers of the modern Chinese State, Singapore’s proponents refer to the single-party State as “a great argument for Authoritarianism.” Order and civility rule the day, so long as you don’t fall foul of the narrow parameters set by the State.

What has been accomplished in Southeast Asia since 1965, and what is possible in previously ‘free’ countries like the US, UK and Australia, are two very different social and political evolutions. Still, the modern Police State is advancing globally and it’s being driven primarily by three factors: technology, for-profit industry, and an age-old obsession by the ruling class to manage the masses.

The first and easiest area to challenge is the physical realm of the control system. The most obvious of these are the gadgets and toys. They are easy to see. Look at your local police department and notice the difference between what officers looked like and what they wore in the 1970s, 1980s, 1990s and now in the 21st century. Notice the firearms and tasers, the ‘Bat-Belts’, and now the body cameras. Your average officer today looks like a cross between a soldier and an android. Dress them like robots and don’t be surprised when they act like machines (and it won’t be long until many of them are replaced by machines).

If you’ve ever attended a street protest or witnessed some civil unrest, then you’ll have noticed the high-tech body armour, the riot and ‘crowd suppression’ equipment.

My first intense experience of the full force of the modern Police State was in 2009, at the G20 protests in the City of London, England. It was early in the evening and approximately 4,000 demonstrators suddenly found themselves trapped at Bishopsgate. Several hundred police officers on foot and horseback had blocked all entrances to and exits from the main road. Even alleyways were manned by riot police. Then police began charging the crowds and beating protesters with clubs. They alternated their ‘surge’ efforts from different ends of the street, north to south, one brutal flurry after another. The worst part was that there was no escape route away from the police. Many were beaten and trampled that evening. It was as if police planners were playing a video game.

Finally, at around 9pm, after being forced to stand, surrounded by police in a ‘Kettle’ for nearly three hours, along with 500 other demonstrators and press, who spent most of that time pressed up against police shields and not knowing what would happen next – I realised this is an impersonal, disinterested and totally uncompromising machine. It does not care who you are, what your views and opinions are, or whether you were innocent or guilty. The lesson was simple: “next time, stay home.” The only detail this machine is concerned with is that you comply with orders, and if no orders are given, then the machine demands you stay where you are until the machine decides what to do with you. If you complain too much, or become emotional, or heaven forbid act out in any way, then the machine will move in to subdue and detain you. That is all there is to it.

Big Brother Reality

It’s well-known that Great Britain is home to the world’s largest and most sophisticated physical Police State, including tens of millions of closed-circuit television (CCTV) cameras covering every conceivable inch of habitable space, both indoors and outdoors. The CCTV phenomenon in Britain was fuelled by an obsession with cameras that became increasingly popular with both government and corporate technocrats in the 1980s and 1990s. The psychology behind the exponential proliferation of cameras was mainly a fairly crude bit of criminology which held that the cameras would somehow act as a deterrent to criminal behaviour, and thus subdue the feral population into a more docile state. Industry used this line too, as salespeople were deployed en masse with endless flip charts and statistical models claiming that CCTV cameras would prevent the UK’s spiralling social malaise.

The only problem is that more cameras don’t equal less crime. Canadian writer Cory Doctorow observed this reality back in 2011, explaining: “After all, that’s how we were sold on CCTV – not mere forensics after the fact, but deterrence. And although study after study has concluded that CCTVs don’t deter most crime (a famous San Francisco study showed that, at best, street crime shifted a few metres down the pavement when the CCTV went up), we’ve been told for years that we must all submit to being photographed all the time because it would keep the people around us from beating us, robbing us, burning our buildings and burglarising our homes.”3

CCTV is only one aspect of Big Brother. It turns out that the real value of the CCTV camera grid is not so much in the monitoring of crime per se as in mass applied behavioural psychology.

The Panopticon

The physical Police State could not exist without some philosophical underpinning. Before Orwell, there was Bentham…

In the late 18th century, utilitarian philosopher Jeremy Bentham designed a new style of prison architecture for Britain known as the ‘Panopticon’.4 The unique feature of the Panopticon concept was the transparent nature of each prisoner’s cell, visible to a central surveillance guard tower that could eye inmates at all times. The result of this psychological experiment, according to the pragmatic Benthamite philosophy, was to produce a regime of “self-policing” amongst the inmates, a kind of early behavioural conditioning. For technocrats and the emerging utilitarian social managers of that era, this was seen as the most economic and efficient solution. Ultimately, this Benthamite concept is what underpinned phase one of the mass CCTV deployment throughout the UK. Sitting well above the security minions and the industry profiteers, elite scholars knew full well that CCTV cameras do not stop crime.

The real power of the Panopticon is in convincing the general population they are under constant surveillance. After that point, through a long-term process of nudging, diversions and scare tactics, the State gradually moulds the behaviour and thoughts of its subjects.

In order to keep citizens locked into this new conscious state of fear and trepidation, the State needs an enemy…

The Long War & ‘The Extremist’

One of the chief campaigns to nudge society towards a fully-functional Orwellian State is the War on Terror. Ever since September 11, 2001, the concept of an endless war against the ‘terrorists’ – a seemingly ubiquitous and constantly shape-shifting enemy – has been used to justify nearly every large new security expenditure and policy. Back in 2006, US President George W. Bush’s chief architect of the ‘long war’, Secretary of Defense Donald Rumsfeld, read the tea leaves for the next 100 years, stating: “It does not have to do with deployment of US military forces, necessarily. It has to do with the struggle that’s taking place within that faith between violent extremists – a small number of them, relatively – who are capable of going out and killing a great many people, as they’re doing, and the overwhelming majority of that religion that does not believe in violent extremism or terrorism.”5

In George Orwell’s classic novel 1984, Winston Smith also grappled with the State’s endless war. “Oceania was at war with Eurasia: therefore Oceania had always been at war with Eurasia.”

In Oceania, people eventually forgot what started the long war. The news was just one terrorist attack after another. The enemy was everywhere, but nowhere too. The population learned to acquiesce to the idea that war was the permanent state of affairs, and that questioning the provenance of this idea was futile.

“Winston could not definitely remember a time when his country had not been at war, but it was evident that there had been a fairly long interval of peace during his childhood, because one of his early memories was of an air raid, which appeared to take everyone by surprise. Perhaps it was the time when the atomic bomb had fallen on Colchester. He did not remember the raid itself.”

And so it was, in the early moments of the 21st century, Orwell’s dream suddenly became a waking reality. Social engineers are firm believers that if the Panopticon (married with the threat of an invisible enemy) can remain in place for a generation, then the State could fundamentally change a once free-thinking society into something noticeably different – a much more fearful and compliant populace.

The Social Media Panopticon

As terror scares and attacks become somewhat of a daily event in the West, identifying and quarantining the ‘extremist’ becomes a primary fetish of the Police State and its media arms. This is very much evident in how terrorists and ‘active shooters’ (dead or alive) are now profiled after the event. The mainstream media has integrated this into its work practice by crafting the post hoc guilty verdict of the accused, prior to a trial, with circumstantial or non sequitur accusations based on an individual’s “web history” that may have “radicalised” the suspect. In effect, the mainstream media’s function as an establishment propaganda arm results in trial by media – the bypassing of any trial by jury as the accused have already been implicitly or explicitly declared guilty by association or something as nebulous as “web history.”

Such incidents, as they are portrayed in the media for psychological conditioning purposes, are intended to cause the public mind to dismiss outdated notions of fairness, due process and the rule of law in favour of fiat corporate news and “official” government pronouncements. The net effect of this trend is that social media users, i.e. the majority of the population, are adopting self-policing habits in their communications online. According to the principles of applied behavioural psychology, if you change the language people use, then eventually you change the way they think and act.

Like Bentham’s Panopticon, this new social media monitoring system works by utilising the digital web, which is arguably the most economic and efficient solution. It encourages the acceptance of self-policing, and of vague terms such as “radicalised” that are subject to the increasingly elastic definitions of the social engineering establishment.

This leads to one of the most profound questions one might ask in the wake of Edward Snowden’s NSA spying revelations: Knowing what we know now, are people more outspoken or are they more self-policing because of the Snowden leaks?

‘The Daily Shooter’

By extension, once the technocrat has regained some modicum of physical control, then the next domain to be conquered is the mind. In 1984, the technocracy was viewed through the eyes of the protagonist Winston Smith, who while remaining a physical prisoner of the Police State, could still retreat into his own mental state.

In our day, the expansion of the surveillance State and vast spying by the likes of the NSA and GCHQ are precisely intended to achieve this same effect, with the justification for such intrusions being an endless series of terror spectacles and lone-wolf public shooting events. In the US, these mass shootings and terror scares are happening on an almost daily basis, hence, ‘The Daily Shooter’. Media coverage is both chaotic and relentless. As a result, the public is left stupefied and completely unable to challenge whatever narrative the government-media complex is selling at that time. The Police State marches forward.

A similar psychodrama also played out for 1984’s protagonist Winston Smith. As time progressed, however, maintaining some level of autonomy in one’s own thoughts became increasingly difficult for Winston. The final objective of the Police State, it seemed, was not only to fundamentally transform the way citizens act, but how they think too. The all-seeing and all-controlling “Big Brother” State was also the de facto social authority figure. The State’s law enforcement police force also became the “thought police.”

We see this exact narrative playing out today as the State’s political figureheads continue their mission to widen the definition of “extremism”, along with other State-issued euphemisms used to describe citizens who should be regarded with suspicion.

Fall out of line and you might even be segregated or sent away to a special camp. Following the recent mass shooting in Chattanooga, Tennessee, retired US General and NATO Commander Wesley Clark proposed that any “disloyal Americans” should be sent to internment camps for the “duration of the conflict.” Notice the language: “for the duration of the conflict.” Indeed, it seems that Oceania is at war. He went even further, calling for the US government to identify people most likely to be “radicalised” so we can “cut this off at the beginning.”

“At the beginning?” Here, it seems Clark might be alluding to pre-crime, which will be powered by A.I…

Artificial Intelligence

Post-September 11, UK society was still hooked on its CCTV matrix, and with millions of cameras already in place and crime continuing to rise, security ‘experts’ and politicians simply doubled down on their previous wager, insisting that what the country really needed was more cameras. They believed that once a certain CCTV saturation point was reached, they would, by default, somehow reach their twisted utopia.

It turned out that it’s not humanly possible for security workers, most of whom are on a mere £7-10 (AUD$14-20) per hour, to keep track of, let alone analyse, a seemingly endless stream of footage. For the technocrat, the operative word here is ‘humanly’. Enter A.I…

Once again, advanced technology enters the narrative and supplies the solution to this previously insurmountable problem. The age of Artificial Intelligence, or A.I., is nearly upon us, and this next step in technological development is certain to radically change the entire concept of the Police State.

Laying down the framework for an A.I. grid is not easy, because the grid must be designed to cope with the application of A.I. As A.I.’s potential and practical applications have not yet been fully realised, designing the grid on which it will be unleashed has been problematic up to this point. Sadly, society on the whole appears uninterested in questioning the social and ethical imperatives currently driving the adoption of these new technologies.

At present, the big money is on the Smart Grid. Technocrats and their corporate partners are hoping to usher in their new surveillance grid under the auspices of ‘smart’ technologies. With A.I. in play, technocrats will be able to utilise the smart grid – which includes your mobile phone – to detect and track multiple targets over a wide area.6 Add facial recognition and data profiling to the mix and it’s a recipe for a full-on A.I. Smart Grid future. The ultimate hands-free, ‘surveillance selfie’ – compliments of Big Brother.

Just imagine, one day you’re simply walking down the street and pointing to something in the air. All of it is being captured on a 1.8 billion pixel video stream from the sky. They already know your identity and location with the phone in your pocket, and they already have your face logged and tracked.7

At this point we introduce Philip K. Dick’s concept of “pre-crime” whereby an A.I. system can predict an action you are likely to take.8 The system will then close the ‘Big Data’ loop by storing the video footage alongside your profile in a massive data ‘mash-up’. It will then compare this with other potentially ‘suspicious’ activity in the area. London’s Metropolitan Police are already using a type of pre-crime software that British technocrats believe will somehow ‘revolutionise’ modern policing in the 21st century.9

UK consumer advocate Pippa King explains how CCTV is already being phased out: “CCTV, closed circuit television, is not quite what is operating on our streets today. What we have now is IPTV, an internet protocol television network that can relay images to analytical software that uses algorithms to determine pre-crime areas in real time.”

“Currently this AI looks at areas that may be targeted for crimes such as burglaries or joyriding,10 with the predicted hotspot information being sent direct to law enforcement smart phones in the field. This analytical software is being used in Glasgow, hailed as Britain’s first ‘smart city’,11 where the Israeli security firm NICE Systems are running the CCTV/IPTV network, analysing data from the 442 fixed HD surveillance cameras and 30 mobile units under a project called ‘Community Safety Glasgow’,12 whose primary objectives are described as ‘delivering Glasgow a more efficient traffic management system, identifying crime in the city and tracking individuals’.”13
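The ‘hotspot’ analytics King describes can be caricatured in a few lines of Python. This is a deliberately simplified sketch: the grid size, the function names, the sample coordinates and the scoring method (raw historical incident counts per map cell) are all assumptions for illustration, not the proprietary method run by NICE Systems in Glasgow.

```python
from collections import Counter

CELL_SIZE = 0.01  # grid resolution in degrees (an assumption for this sketch)

def to_cell(lat, lon, size=CELL_SIZE):
    """Bucket a coordinate into a (row, col) grid cell."""
    return (round(lat / size), round(lon / size))

def predict_hotspots(incidents, top_k=3):
    """Rank grid cells by historical incident count; flag the top_k as 'hotspots'."""
    counts = Counter(to_cell(lat, lon) for lat, lon in incidents)
    return [cell for cell, _ in counts.most_common(top_k)]

# Invented incident coordinates, loosely around Glasgow, for demonstration:
history = [(55.86, -4.25)] * 5 + [(55.90, -4.30)] * 2
hotspots = predict_hotspots(history, top_k=1)  # densest cell is flagged first
```

Real predictive-policing products layer time-of-day weighting, decay functions and richer data feeds on top of this basic idea, but the core logic – yesterday’s crime map becomes tomorrow’s patrol map – is the same.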

This all can happen thanks to the US Defense Advanced Research Projects Agency’s (DARPA) latest creation – the ARGUS camera, Autonomous Real-Time Ground Ubiquitous Surveillance.14 According to its designers, ARGUS “melds together video from each of its 368 chips to create a 1.8 billion pixel video stream”, all in real time and archived. It’s just one of the many new toys used by the State to realise its Orwellian ambitions.
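Some back-of-the-envelope arithmetic shows why a stream on this scale outstrips human analysts and demands machine processing. The 368-chip and 1.8-billion-pixel figures come from the text above; the frame rate and bit depth below are assumptions chosen purely for illustration.

```python
# Rough figures for a gigapixel sensor of the ARGUS type described above.
# Frame rate and bit depth are illustrative assumptions, not DARPA specs.
chips = 368
total_pixels = 1.8e9

per_chip_mp = total_pixels / chips / 1e6   # megapixels contributed per chip
fps = 12                                    # assumed frames per second
bytes_per_pixel = 1                         # assumed 8-bit monochrome

raw_rate_gb_s = total_pixels * bytes_per_pixel * fps / 1e9
print(f"{per_chip_mp:.1f} MP per chip, ~{raw_rate_gb_s:.1f} GB/s raw video")
```

Even under these conservative assumptions the raw feed runs to tens of gigabytes per second, which is why archiving and analysing it is an A.I. problem rather than a staffing problem.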

Who’s Paying For It All?

Aside from its ability to trample over the rights of law-abiding citizens, the Police State has one other chief characteristic, which may also be its Achilles’ heel: it’s bankrupting the State. Here’s how it works:

The gravy train is endless, but only with the help of taxpayers’ money, along with a series of bribes and favours between politicians and corporates. If you have ‘friends’ in government administration, then you are more likely to cash in on any number of lucrative ‘domestic defense’ contracts.

Where you have constant crisis you also have constant business opportunity. In this dark paradigm, timing is everything. As US President Barack Obama’s sociopathic15 former chief of staff, now Mayor of Chicago, Rahm Emanuel, once said:

“You never let a serious crisis go to waste. And what I mean by that it’s an opportunity to do things you think you could not do before.”

With that mantra in mind, in the wake of any shooting, terror scare, or crisis, industrial lobbyists and their elected political gophers will waste no time pushing for new federally-funded add-ons like training courses, workplace psychologists, regulators, specialist contractors, police cameras and other big-ticket items16 – anything to help “solve the crisis.” One such program in the US is known simply as the ‘1033’.

Joseph Lemieux writes:

“The 1033 program has flooded our local police forces with military equipment, and has turned them from Peace Officers, to a domestic army.”

“Officers stopped looking like officers, and more like soldiers all kitted out with fully automatic weapons, armoured vehicles, body armour, grenade launchers, night vision, and even bayonets! Besides the cost of liberty, how much has this domestic army cost you, the taxpayer?”17

In the US, no single entity embodies the Police State gravy train more than the Department of Homeland Security (DHS), where federal grants are used to bribe local law enforcement and absorb them into a larger framework of institutional dependency.

At over $200 billion per year, the DHS is now America’s most expensive federal agency. As any sane local law enforcement chief will tell you, once you smoke from the federal crack pipe, you’re hooked for life. Remember that each federal Police State agenda item has a lucrative contract attached to it. With each move central government makes, a large amount of money is also made (by someone).

By cutting off public money that is driving the runaway federal Police State in Western countries, the people have a chance to mitigate and potentially reform the current agenda.

If we hope to preserve what is left of our hard fought democracy, then now is the time to put it to the test. The alternative is unthinkable.

 

About the Author

Patrick Henningsen is an independent investigative reporter, editor, and journalist. A native of Omaha, Nebraska and a graduate of Cal Poly San Luis Obispo in California, he is currently based in London, England and is the managing editor of 21st Century Wire – News for the Waking Generation (www.21stCenturyWire.com) which covers exposés on intelligence, geopolitics, foreign policy, the war on terror, technology and Wall Street. Patrick is a regular commentator on Russia Today.

Footnotes:

  1. ‘Transforming our world: the 2030 Agenda for Sustainable Development’, https://sustainabledevelopment.un.org/post2015/transformingourworld
  2. ‘The 2030 Agenda: This Month The UN Launches A Blueprint For A New World Order With The Help Of The Pope’ by Michael Snyder, 2 Sept 2015, http://endoftheamericandream.com/archives/the-2030-agenda-this-month-the-un-launches-a-blueprint-for-a-new-world-order-with-the-help-of-the-pope
  3. ‘Why CCTV has failed to deter criminals’ by Cory Doctorow, The Guardian, 17 August 2011
  4. www.ucl.ac.uk/Bentham-Project/who/panopticon
  5. www.sourcewatch.org/index.php/The_Long_War
  6. ‘Bilderberg 2015: Implementation of the A.I. Grid’ by Jay Dyer, 21st Century Wire (www.21stcenturywire.com), 14 June 2015
  7. ‘Britain Launches “Big Brother” System, Uploads One Third of Population to Facial Recognition Database’, 21st Century Wire, 3 Feb 2015
  8. ‘Already Underway: Smart A.I. Running Our Police and Cities’ by Pippa King, 21st Century Wire, 13 Mar 2015
  9. ‘British Police Roll Out New “Precrime” Software to Catch Would-Be Criminals’, 21st Century Wire, 13 Mar 2015
  10. ‘Pre-crime software recruited to track gang of thieves’ by Chris Baraniuk, New Scientist, 11 Mar 2015
  11. ‘Glasgow wins “smart city” government cash’, BBC News, www.bbc.com/news/technology-21180007
  12. www.saferglasgow.com
  13. ‘Already Underway: Smart A.I. Running Our Police and Cities’, op.cit.
  14. www.darpa.mil/program/autonomous-real-time-ground-ubiquitous-surveillance-infrared
  15. ‘The Two Sides of Rahm Emanuel: Sociopathic Political Hitman and Puppy Lover’ by Foster Kamer, 16 Aug 2009, gawker.com
  16. ‘Mayor de Blasio Announces Retraining of New York Police’ by Marc Santora, The New York Times, 4 Dec 2014
  17. ‘How Much Money Have American Taxpayers Spent on Building a Domestic Police State?’ by Joseph Lemieux, 1 Dec 2014, http://theantimedia.org/taxpayers-police-state/

The above article appeared in New Dawn 153 (Nov-Dec 2015)

BREAKING: Leaked FBI Alert Admits Hackers Penetrated US Election Systems


By Matt Agorist

Source: The Free Thought Project

On Monday, an official FBI alert from August 18 was leaked to Yahoo News. The alert stated the FBI had uncovered evidence showing that at least two state election systems were penetrated by hackers in recent weeks. The FBI quickly issued warnings to election officials across the country to ramp up security on their systems.

It appears from the Flash Alert that the public was not supposed to know about it.

This FLASH has been released TLP: AMBER: The information in this product is only for members of their own organization and those with DIRECT NEED TO KNOW. This information is NOT to be forwarded on beyond NEED TO KNOW recipients.

The FBI then goes on to describe the nature of the attack and lists the IP addresses associated with the intrusion.

Summary

The FBI received information of an additional IP address, 5.149.249.172, which was detected in the July 2016 compromise of a state’s Board of Election Web site. Additionally, in August 2016 attempted intrusion activities into another state’s Board of Election system identified the IP address, 185.104.9.39 used in the aforementioned compromise.

Technical Details

The following information was released by the MS-ISAC on 1 August 2016, which was derived through the course of the investigation. In late June 2016, an unknown actor scanned a state’s Board of Election website for vulnerabilities using Acunetix, and after identifying a Structured Query Language (SQL) injection (SQLi) vulnerability, used SQLmap to target the state website. The majority of the data exfiltration occurred in mid-July. There were 7 suspicious IPs and penetration testing tools Acunetix, SQLMap, and DirBuster used by the actor, detailed in the indicators section below.
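For readers unfamiliar with the attack class named in the alert, the sketch below shows how a SQL injection (SQLi) flaw works in general, using a throwaway in-memory SQLite database and an invented `voters` table. It illustrates the technique only; it is not the actual election-site code, whose details were never published. A string-built query lets a classic tautology payload dump every row, while a parameterized query treats the same input strictly as data.

```python
import sqlite3

# Invented demo data: this table and its contents are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE voters (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO voters VALUES ('Alice', '111-22-3333'), "
             "('Bob', '444-55-6666')")

def lookup_vulnerable(name):
    # String concatenation lets attacker-supplied input rewrite the query.
    return conn.execute(
        "SELECT * FROM voters WHERE name = '" + name + "'").fetchall()

def lookup_safe(name):
    # A parameterized query keeps the input out of the SQL grammar entirely.
    return conn.execute(
        "SELECT * FROM voters WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"         # classic tautology payload
leaked = lookup_vulnerable(payload)   # condition is always true: every row leaks
safe = lookup_safe(payload)           # no voter literally has that name: no rows
```

Tools like SQLmap, named in the alert, automate the discovery and exploitation of exactly this kind of flaw; parameterized queries are the standard defence.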

“This is a big deal,” said Rich Barger, chief intelligence officer for ThreatConnect, a cybersecurity firm, who reviewed the FBI alert at the request of Yahoo News. “Two state election boards have been popped, and data has been taken. This certainly should be concerning to the common American voter.”

According to the FBI, the hack is the work of a ‘foreign entity.’ However, they have not named the country of origin. This has not stopped other officials from quickly blaming the Russians.

Also absent from the alert are the names of the states involved in the hack.

According to the report from Yahoo News:

The bulletin does not identify the states in question, but sources familiar with the document say it refers to the targeting by suspected foreign hackers of voter registration databases in Arizona and Illinois. In the Illinois case, officials were forced to shut down the state’s voter registration system for ten days in late July, after the hackers managed to download personal data on up to 200,000 state voters, Ken Menzel, the general counsel of the Illinois Board of Elections, said in an interview. The Arizona attack was more limited, involving malicious software that was introduced into its voter registration system but no successful exfiltration of data, a state official said.

“The FBI is requesting that states contact their Board of Elections and determine if any similar activity to their logs, both inbound and outbound, has been detected,” the alert reads. “Attempts should not be made to touch or ping the IP addresses directly.”

While the alert lists the IP addresses from which the attacks originated, it is highly unlikely that the hackers would use any traceable address.

“This is a wake-up call for other states to look at their systems,” said Tom Hicks, chairman of the federal Election Assistance Commission.

This news comes on the heels of a report earlier this month in which a professor from Princeton University and a graduate student proved electronic voting machines in the U.S. remain astonishingly vulnerable to hackers — and they did it in under eight minutes.

Professor Andrew Appel, a Princeton University computer science professor who has studied election security, and grad student Alex Halderman took just seven minutes to break into the authentic Sequoia AVC Advantage electronic voting machine Appel purchased for $82 online — one of the oldest models, but still used in Louisiana, Pennsylvania, New Jersey, and Virginia.

Appel notes that the only “reasonably safe” voting method is paper ballots as they can be counted alongside the electronic tally. However, crucial swing states, as Appel notes, rely on more vulnerable paperless touchscreen voting which does not back up any of the numbers.

“Then whatever numbers the voting computer says at the close of the polls are completely under the control of the computer program in there,” Appel wrote in a recent blog post entitled “Security Against Election Hacking.” “If the computer is hacked, then the hacker gets to decide what numbers are reported. … All DRE (paperless touchscreen) voting computers are susceptible to this kind of hacking. This is our biggest problem.”

The fact that the FBI is now admitting to the vulnerability of the election should raise serious concern for Americans. Before 2016, talk of vote rigging, or hacking elections, remained on the fringe — in spite of whistleblowers showing the easily provable insecure nature of electronic voting machines.

As the famous quote, often attributed to Joseph Stalin, notes:

The people who cast the votes don’t decide an election, the people who count the votes do.

And now, with electronic voting and this news of how easily hackable it is, even the vote counters may not decide.

BREAKING: Benghazi Documents FINALLY Found – Hidden In Hillary’s Deleted Email File

By Melissa Davis

Source: US Herald

Democrat nominee Hillary Clinton may have thought she could breathe a sigh of relief when FBI Director James Comey did not recommend charges be filed against her in connection with her unprecedented secret server set-up, but she hadn’t counted on federal judges, who are not quite as forgiving.

U.S. District Court Judge William P. Dimitrouleas has ordered the State Department to search 14,900 newly found Clinton emails to determine if any are responsive to requests in a Judicial Watch Freedom of Information Act (FOIA) lawsuit filed last year.

The FOIA requests sought all communications between then-Secretary of State Hillary Clinton and the Obama White House related to the 2012 terror attack on the U.S. Consulate in Benghazi from the day it took place – the anniversary of 9/11 – through the following week.

This week, the State Department was forced to admit in court filings it had “received positive hits” for Benghazi-related documents among the nearly 15,000 Clinton emails uncovered by the FBI during its more than year-long investigation into Mrs. Clinton’s unauthorized use of a private server housed at her home during her tenure as President Obama’s first Secretary of State.

Judge Dimitrouleas gave the Department until September 13 to review the emails, along with other communications, and turn over responsive records. Clinton’s former department, however, claimed it could not comply with the order by the deadline because of the large number of emails to be reviewed, estimating that the work will take until well after the November 8 election.

In a potentially devastating development, it was learned that Clinton not only withheld emails from State when she left the position, contrary to federal law and regulation, but also used software to degrade the digital data to the point that it cannot be retrieved.

The use of BleachBit, software designed to permanently erase files, could conceivably lead to allegations of obstruction of justice and evidence of intent.

Evidence points to another Snowden at the NSA

By James Bamford

Source: Reuters

In the summer of 1972, state-of-the-art campaign spying consisted of amateur burglars, armed with duct tape and microphones, penetrating the headquarters of the Democratic National Committee. Today, amateur burglars have been replaced by cyberspies, who penetrated the DNC armed with computers and sophisticated hacking tools.

Where the Watergate burglars came away empty-handed and in handcuffs, the modern-day cyber thieves walked away with tens of thousands of sensitive political documents and are still unidentified.

Now, in the latest twist, hacking tools themselves, likely stolen from the National Security Agency, are on the digital auction block. Once again, the usual suspects start with Russia – though there seems little evidence backing up the accusation.

In addition, if Russia had stolen the hacking tools, it would be senseless to publicize the theft, let alone put them up for sale. It would be like a safecracker stealing the combination to a bank vault and putting it on Facebook. Once revealed, companies and governments would patch their firewalls, just as the bank would change its combination.

A more logical explanation is insider theft. If that’s the case, it’s one more reason to question the usefulness of an agency that secretly collects private information on millions of Americans but can’t keep its most valuable data from being stolen, or, as it appears in this case, being used against us.

In what appeared more like a Saturday Night Live skit than an act of cybercrime, a group calling itself the Shadow Brokers put up for bid on the Internet what it called a “full state-sponsored toolset” of “cyberweapons.” “!!! Attention government sponsors of cyberwarfare and those who profit from it !!!! How much would you pay for enemies cyberweapons?” said the announcement.

The group said it was releasing some NSA files for “free” and promised “better” ones to the highest bidder. However, those with losing bids “Lose Lose,” it said, because they would not receive their money back. And should the total sum of the bids, in bitcoins, reach the equivalent of half a billion dollars, the group would make the whole lot public.

While the “auction” seemed tongue in cheek, more like hacktivists than Russian high command, the sample documents were almost certainly real. The draft of a top-secret NSA manual for implanting offensive malware, released by Edward Snowden, contains code for a program codenamed SECONDDATE. That same 16-character string of numbers and characters is in the code released by the Shadow Brokers. The details from the manual were first released by The Intercept last Friday.

The authenticity of the NSA hacking tools was also confirmed by several ex-NSA officials who spoke to the media, including former members of the agency’s Tailored Access Operations (TAO) unit, the home of hacking specialists.

“Without a doubt, they’re the keys to the kingdom,” one former TAO employee told the Washington Post. “The stuff you’re talking about would undermine the security of a lot of major government and corporate networks both here and abroad.” Another added, “From what I saw, there was no doubt in my mind that it was legitimate.”

Like a bank robber’s tool kit for breaking into a vault, cyber exploitation tools, with codenames like EPICBANANA and BUZZDIRECTION, are designed to break into computer systems and networks. Just as the bank robber hopes to find a crack in the vault that has never been discovered, hackers search for digital cracks, or “exploits,” in computer programs like Windows.

The most valuable are “zero day” exploits, so called because the software’s maker (Microsoft, in the case of Windows) has known about the flaw for zero days and has therefore had no chance to patch it. Through such a crack, a hacker can get into a system and exploit it, stealing information until the breach is eventually discovered and patched. According to the former NSA officials who viewed the Shadow Broker files, they contained a number of exploits, including zero-day exploits of the kind the NSA often pays private hacking groups thousands of dollars for.

The reasons given for laying the blame on Russia appear less convincing, however. “This is probably some Russian mind game, down to the bogus accent,” James A. Lewis, a computer expert at the Center for Strategic and International Studies, a Washington think tank, told the New York Times. Why the Russians would engage in such a mind game, he never explained.

Rather than the NSA hacking tools being snatched as a result of a sophisticated cyber operation by Russia or some other nation, it seems more likely that an employee stole them. Experts who have analyzed the files suspect that they date to October 2013, five months after Edward Snowden left his contractor position with the NSA and fled to Hong Kong carrying flash drives containing hundreds of thousands of pages of NSA documents.

Since the files post-date Snowden’s departure in May 2013, he could not have stolen the hacking tools; the indications are that someone else did, possibly someone assigned to the agency’s highly sensitive Tailored Access Operations.

In December 2013, another highly secret NSA document quietly became public. It was a top secret TAO catalog of NSA hacking tools. Known as the Advanced Network Technology (ANT) catalog, it consisted of 50 pages of extensive pictures, diagrams and descriptions of tools for every kind of hack, mostly targeted at devices manufactured by U.S. companies, including Apple, Cisco, Dell and many others.

Like the hacking tools, the catalog used similar codenames. Among the tools targeting Apple was one codenamed DROPOUTJEEP, which gives NSA total control of iPhones. “A software implant for the Apple iPhone,” says the ANT catalog, “includes the ability to remotely push/pull files from the device. SMS retrieval, contact-list retrieval, voicemail, geolocation, hot mic, camera capture, cell-tower location, etc.”

Another, codenamed IRATEMONK, is described as “Technology that can infiltrate the firmware of hard drives manufactured by Maxtor, Samsung, Seagate and Western Digital.”

In 2014, I spent three days in Moscow with Snowden for a magazine assignment and a PBS documentary. During our on-the-record conversations, he would not talk about the ANT catalog, perhaps not wanting to bring attention to another possible NSA whistleblower.

I was, however, given unrestricted access to his cache of documents. These included both the entire British, or GCHQ, files and the entire NSA files.

But going through this archive using a sophisticated digital search tool, I could not find a single reference to the ANT catalog. This confirmed for me that it had likely been released by a second leaker. And if that person could download and remove the catalog of hacking tools, it’s likely he or she could also have downloaded and removed the digital tools now being leaked.

In fact, a number of the same hacking implants and tools released by the Shadow Brokers are also in the ANT catalog, including those with codenames BANANAGLEE and JETPLOW. These can be used to create “a persistent back-door capability” into widely used Cisco firewalls, says the catalog.

Consisting of about 300 megabytes of code, the tools could easily and quickly be transferred to a flash drive. But unlike the catalog, the tools themselves, billions of ones and zeros, would have been useless if leaked to a publication. This could be one reason why they have not emerged until now.

Enter WikiLeaks. Just two days after the first Shadow Brokers message, Julian Assange, the founder of WikiLeaks, sent out a Twitter message. “We had already obtained the archive of NSA cyberweapons released earlier today,” Assange wrote, “and will release our own pristine copy in due course.”

The month before, Assange was responsible for releasing the tens of thousands of hacked DNC emails that led to the resignation of four top committee officials.

There also seems to be a link between Assange and the leaker who stole the ANT catalog, and the possible hacking tools. Among Assange’s close associates is Jacob Appelbaum, a celebrated hacktivist and the only publicly known WikiLeaks staffer in the United States – until he moved to Berlin in 2013 in what he called a “political exile” because of what he said was repeated harassment by U.S. law enforcement personnel. In 2010, a Rolling Stone magazine profile labeled him “the most dangerous man in cyberspace.”

In December 2013, Appelbaum was the first person to reveal the existence of the ANT catalog, at a conference in Berlin, without identifying the source. That same month he said he suspected the U.S. government of breaking into his Berlin apartment. He also co-wrote an article about the catalog in Der Spiegel. But again, he never named a source, which led many to assume, mistakenly, that it was Snowden.

In addition to WikiLeaks, Appelbaum worked for years for Tor, an organization focused on providing its users with anonymity on the Internet. But last May, he stepped down as a result of “serious, public allegations of sexual mistreatment” made by unnamed victims, according to a statement put out by Tor. Appelbaum has denied the charges.

Shortly thereafter, he turned his attention to Hillary Clinton. At a screening of a documentary about Assange in Cannes, France, Appelbaum accused her of holding a grudge against him and Assange, and warned that if she were elected president she would make their lives difficult. “It’s a situation that will possibly get worse” if she is elected to the White House, he said, according to Yahoo News.

It was only a few months later that Assange released the 20,000 DNC emails. Intelligence agencies have again pointed the finger at Russia for hacking into these emails.

Yet there has been no explanation as to how Assange obtained them. He told NBC News, “There is no proof whatsoever” that he obtained the emails from Russian intelligence. Moscow has also denied involvement.

There are, of course, many sophisticated hackers in Russia, some with close government ties and some without. And planting false and misleading indicators in messages is an old trick. Now Assange has promised to release many more emails before the election, while apparently ignoring email involving Trump. (Trump opposition research was also stolen.)

In hacktivist style, and in what appears to be phony broken English, this new release of cyberweapons also seems to be targeting Clinton. It ends with a long and angry “final message” against “Wealthy Elites . . . breaking laws” but “Elites top friends announce, no law broken, no crime commit[ed]. . . Then Elites run for president. Why run for president when already control country like dictatorship?”

Then after what they call the “fun Cyber Weapons Auction” comes the real message, a serious threat. “We want make sure Wealthy Elite recognizes the danger [of] cyberweapons. Let us spell out for Elites. Your wealth and control depends on electronic data.” Now, they warned, they have control of the NSA’s cyber hacking tools that can take that wealth away. “You see attacks on banks and SWIFT [a worldwide network for financial services] in news. If electronic data go bye-bye where leave Wealthy Elites? Maybe with dumb cattle?”

Snowden’s leaks served a public good. He alerted Americans to illegal eavesdropping on their telephone records and other privacy violations, and Congress changed the law as a result. The DNC leaks exposed corrupt policies within the Democratic Party.

But we now have entered a period many have warned about, when NSA’s cyber weapons could be stolen like loose nukes and used against us. It opens the door to criminal hackers, cyber anarchists and hostile foreign governments that can use the tools to gain access to thousands of computers in order to steal data, plant malware and cause chaos.

It’s one more reason why NSA may prove to be one of Washington’s greatest liabilities rather than assets.

About the Author

James Bamford is the author of The Shadow Factory: The Ultra-Secret NSA From 9/11 to the Eavesdropping on America. He is a columnist for Foreign Policy magazine.

Fear our new robot overlords: This is why you need to take artificial intelligence seriously

Killer computers determined to wipe us out? Nope. Forget “Terminator” — there’s something more specific to worry about

By Phil Torres

Source: Salon

There are a lot of major problems today with tangible, real-world consequences. A short list might include terrorism, U.S.-Russian relations, climate change and biodiversity loss, income inequality, health care, childhood poverty, and the homegrown threat of authoritarian populism, most notably associated with the presumptive nominee for the Republican Party, Donald Trump.

Yet if you’ve been paying attention to the news for the past several years, you’ve almost certainly seen articles from a wide range of news outlets about the looming danger of artificial general intelligence, or “AGI.” For example, Stephen Hawking has repeatedly warned that “the development of full artificial intelligence could spell the end of the human race,” and Elon Musk — of Tesla and SpaceX fame — has described the creation of superintelligence as “summoning the demon.” Furthermore, the Oxford philosopher and director of the Future of Humanity Institute, Nick Bostrom, published a New York Times best-selling book in 2014 called Superintelligence, in which he suggests that the “default outcome” of building a superintelligent machine will be “doom.”

What’s with all this fear-mongering? Should we really be worried about a takeover by killer computers hell-bent on the total destruction of Homo sapiens? The first thing to recognize is that a Terminator-style war with humanoid robots is not what the experts are anxious about. Rather, the scenarios that keep these individuals awake at night are far more catastrophic. This may be difficult to believe but, as I’ve written elsewhere, sometimes truth is stranger than science fiction. Indeed, given that the issue of AGI isn’t going anywhere anytime soon, it’s increasingly important for the public to understand exactly why the experts are nervous about superintelligent machines. As the Future of Life Institute recently pointed out, there’s a lot of bad journalism about AGI out there. This is a chance to correct the record.

Toward this goal, step one is to realize that your brain is an information-processing device. In fact, many philosophers talk about the brain as the hardware — or rather, the “wetware” — of the mind, and the mind as the software of the brain. Directly behind your eyes is a high-powered computer that weighs about three pounds and has roughly the same consistency as Jell-O. It’s also the most complex object in the known universe. Nonetheless, the rate at which it’s able to process information is much, much slower than the information-processing speed of an actual computer. The reason is that computers process information by propagating electrical signals, which travel at a substantial fraction of the speed of light, whereas the fastest signals in your brain travel at around 100 meters per second. Fast, to be sure, but not nearly as fast as light.

Consequently, an AGI could think about the world at speeds many orders of magnitude faster than our brains can. From the AGI’s point of view, the outside world — including people — would move so slowly that everything would appear almost frozen. As the theorist Eliezer Yudkowsky calculates, for a computer running a million times faster than our puny brains, “a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours.”
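
Yudkowsky’s figures follow from straightforward arithmetic, which is easy to check (the flat million-fold speedup is his assumption for the sake of argument, not an engineering estimate):

```python
# Check the arithmetic behind the million-fold speedup claim.
speedup = 1_000_000
seconds_per_year = 365.25 * 24 * 3600  # about 31.6 million seconds

# Outside-world time that passes per subjective year of machine thought:
wall_seconds_per_subjective_year = seconds_per_year / speedup  # about 31.6 s

# Outside-world time for a subjective millennium, in hours:
wall_hours_per_millennium = 1000 * wall_seconds_per_subjective_year / 3600
# about 8.8 hours, matching the "eight-and-a-half hours" in the quote
```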

Already, then, an AGI would have a huge advantage. Imagine yourself in a competition against a machine that has a whole year to work through a cognitive puzzle for every 31 seconds that you spend trying to think up a solution. The mental advantage of the AGI would be truly profound. Even a large team of humans working together would be no match for a single AGI with so much time on its hands. Now imagine that we’re not in a puzzle-solving competition with an AGI but a life-and-death situation in which the AGI wants to destroy humanity. While we struggle to come up with strategies for keeping it contained, it would have ample time to devise a diabolical scheme to exploit any technology within electronic reach for the purpose of destroying humanity.

But a diabolical AGI isn’t — once again — what many experts are actually worried about. This is a crucial point that the Harvard psychologist Steven Pinker misses in a comment about AGI for the website Edge.org. To quote Pinker at length:

“The other problem with AGI dystopias is that they project a parochial alpha-male psychology onto the concept of intelligence. Even if we did have superhumanly intelligent robots, why would they want to depose their masters, massacre bystanders, or take over the world? Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something. History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems.” Pinker then concludes with, “It’s telling that many of our techno-prophets can’t entertain the possibility that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no burning desire to annihilate innocents or dominate the civilization.”

Unfortunately, such criticism misunderstands the danger. While it’s conceptually possible that an AGI really does have malevolent goals — for example, someone could intentionally design an AGI to be malicious — the more likely scenario is one in which the AGI kills us because doing so happens to be useful. By analogy, when a developer wants to build a house, does he or she consider the plants, insects, and other critters that happen to live on the plot of land? No. Their death is merely incidental to a goal that has nothing to do with them. Or consider the opening scenes of The Hitchhiker’s Guide to the Galaxy, in which “bureaucratic” aliens schedule Earth for demolition to make way for a “hyperspatial express route” — basically, a highway. In this case, the aliens aren’t compelled to destroy us out of hatred. We just happen to be in the way.

The point is that what most theorists are worried about is an AGI whose values — or final goals — don’t fully align with ours. This may not sound too bad, but a bit of reflection shows that if an AGI’s values fail to align with ours in even the slightest ways, the outcome could very well be, as Bostrom argues, doom. Consider the case of an AGI — thinking many orders of magnitude faster than we do, let’s not forget — that is asked to use its superior intelligence for the purpose of making humanity happy. So what does it do? Well, it destroys humanity, because people can’t be sad if they don’t exist. Start over. You tell it to make humanity happy, but without killing us. So it notices that humans laugh when we’re happy, and hooks up a bunch of electrodes to our faces and diaphragms that make us involuntarily convulse as if we’re laughing. The result is a strange form of hell. Start over, again. You tell it to make us happy without killing us or forcing our muscles to contract. So it implants neural electrodes into the pleasure centers of everyone’s brains, resulting in a global population in such euphoric trances that people can no longer engage in the activities that give life meaning. Start over — once more. This process can go on for hours. At some point it becomes painfully obvious that getting an AGI’s goals to align with ours is going to be a very, very tricky task.

Another famous example that captures this point involves a superintelligence whose sole mission is to manufacture paperclips. This sounds pretty benign, right? How could a “paperclip maximizer” pose an existential threat to humanity? Well, if the goal is to make as many paperclips as possible, then the AGI will need resources to do this. And what are paperclips composed of? Atoms — the very same physical stuff out of which your body is composed. Thus, for the AGI, humanity is nothing more than a vast reservoir of easily accessible atoms, atoms, atoms. As Yudkowsky eloquently puts it, “The [AGI] does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” And just like that, the flesh and bones of human beings are converted into bendable metal for holding short stacks of paper.
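
The indifference at the heart of the paperclip argument can be caricatured in a few lines of code: an objective that counts only paperclips contains no term for anything else, so a resource labeled “humans” is treated exactly like one labeled “iron ore.” The names and numbers below are, of course, invented for illustration:

```python
# Toy caricature of a misaligned maximizer. The objective scores only
# paperclip count, so every entry in `world` is just a source of atoms.
world = {"iron_ore": 1_000, "cities": 500, "humans": 70}

def objective(paperclips):
    return paperclips  # no penalty for what was consumed along the way

paperclips = 0
for resource in list(world):  # the maximizer draws no distinction here
    paperclips += world.pop(resource)
# world is now empty: nothing in the objective said to leave anything alone
```

The failure is not a lack of intelligence; a smarter optimizer would simply empty `world` more efficiently. The fix has to live in the objective itself, which is exactly the value-alignment problem described above.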

At this point, one might think the following, “Wait a second, we’re talking about superintelligence, right? How could a truly superintelligent machine be fixated on something so dumb as creating as many paperclips as possible?” Well, just look around at humanity. By every measure, we are by far the most intelligent creatures on our planetary spaceship. Yet our species is obsessed with goals and values that are, when one takes a step back and peers at the world with “new eyes,” incredibly idiotic, perplexing, harmful, foolish, self-destructive, other-destructive, and just plain weird.

For example, some people care so much about money that they’re willing to ruin friendships, destroy lives and even commit murder or start wars to acquire it. Others are so obsessed with obeying the commandments of ancient “holy texts” that they’re willing to blow themselves up in a market full of non-combatants. Or consider a less explicit goal: sex. Like all animals, humans have an impulse to copulate, and this impulse causes us to behave in certain ways — in some cases, to risk monetary losses and personal embarrassment. The appetite for sex is just there, pushing us toward certain behaviors, and there’s little we can do about the urge itself.

The point is that there’s no strong connection between how intelligent a being is and what its final goals are. As Pinker correctly notes above, intelligence is nothing more than a measure of one’s ability to achieve a particular aim, whatever it happens to be. It follows that any level of intelligence — including superintelligence — can be combined with just about any set of final goals — including goals that strike us as, well, stupid. A superintelligent machine could be no less infatuated with obeying Allah’s divine will, or with conquering countries for oil, than some humans are.

So far, we’ve discussed the thought-speed of machines, the importance of making sure their values align with ours, and the weak connection between intelligence and goals. These considerations alone warrant genuine concern about AGI. But we haven’t yet mentioned the clincher that makes AGI an utterly unique problem unlike anything humanity has ever encountered. To understand this crucial point, consider how the airplane was invented. The first people to achieve sustained, powered flight were the Wright brothers. On the windy beaches of North Carolina, their first flight stayed airborne for just 12 seconds. This was a marvelous achievement, but the aircraft was hardly adequate for transporting goods or people from one location to another. So they improved its design, as did a long lineage of subsequent inventors. Airplanes were built with one, two, or three wings, composed of different materials, and eventually the propeller was replaced by the jet engine. One particular design — the Concorde — could even fly faster than the speed of sound, traversing the Atlantic from New York to London in less than 3.5 hours.

The crucial idea here is that the airplane underwent many iterations of innovation. Flaws in earlier designs were corrected, leading to increasingly safe and reliable aircraft. But this is not the situation we’re likely to be in with AGI. Rather, we’re likely to have one, and only one, chance to get all the problems mentioned above exactly right. Why? Because intelligence is power. For example, we humans are the dominant species on the planet not because of long claws, sharp teeth or bulky musculature. The key difference between Homo sapiens and the rest of the Animal Kingdom concerns our oversized brains, which enable us to manipulate and rearrange the world in incredible ways. It follows that if an AGI were to exceed our level of intelligence, it could potentially dominate not only the biosphere, but humanity as well.

Even more, since creating intelligent machines is an intellectual task, an AGI could attempt to modify its own code, a possibility known as “recursive self-improvement.” The result could be an exponential intelligence explosion that, before one has a chance to say “What the hell is happening?,” yields a super-super-superintelligent AGI, or a being that towers over us to the extent that we tower over the lowly cockroach. Whoever creates the first superintelligent computer — whether it’s Google, the U.S. government, the Chinese government, the North Korean government, or a lone hacker in her or his garage — they’ll have to get everything just right the first time. There probably won’t be opportunities for later iterations of innovation to fix flaws in the original design, if there are any. When it comes to AGI, the stakes are high.
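
The compounding logic of recursive self-improvement can be made concrete with a toy model: if each round of self-modification multiplies capability by even a modest fixed factor, growth is exponential in the number of rounds. The starting level and factor below are arbitrary illustrations, and the real worry is stronger still, since a smarter system might also improve its own improvement factor:

```python
# Toy model of recursive self-improvement: capability compounds each time
# the system successfully modifies its own design. Numbers are arbitrary.
def capability_after(rounds, start=1.0, factor=1.5):
    level = start
    for _ in range(rounds):
        level *= factor  # the improved system performs the next improvement
    return level

# Ten rounds already yields roughly a 58-fold gain over the starting level.
```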

It’s increasingly important for the public to understand the nature of thinking machines and why some experts are so worried about them. Without a grasp of these issues, claims like “A paperclip maximizer could destroy humanity!” will sound as apocalyptically absurd as “The Rapture is near! Save your soul while you still can!” Consequently, organizations dedicated to studying AGI safety could get defunded or shut down, and the topic of AGI could become the target of misguided mockery. The fact is that if we manage to create a “friendly” AGI, the benefits to humanity could be vast. But if we fail to get things right on the first go around, the naked ape could very well end up as a huge pile of paperclips.

Phil Torres is the founder of the X-Risks Institute and author of The End: What Science and Religion Tell Us About the Apocalypse. He’s on Twitter @xriskology.

The new mind control

The internet has spawned subtle forms of influence that can flip elections and manipulate everything we say, think and do

By Robert Epstein

Source: Aeon Magazine

Over the past century, more than a few great writers have expressed concern about humanity’s future. In The Iron Heel (1908), the American writer Jack London pictured a world in which a handful of wealthy corporate titans – the ‘oligarchs’ – kept the masses at bay with a brutal combination of rewards and punishments. Much of humanity lived in virtual slavery, while the fortunate ones were bought off with decent wages that allowed them to live comfortably – but without any real control over their lives.

In We (1924), the brilliant Russian writer Yevgeny Zamyatin, anticipating the excesses of the emerging Soviet Union, envisioned a world in which people were kept in check through pervasive monitoring. The walls of their homes were made of clear glass, so everything they did could be observed. They were allowed to lower their shades an hour a day to have sex, but both the rendezvous time and the lover had to be registered first with the state.

In Brave New World (1932), the British author Aldous Huxley pictured a near-perfect society in which unhappiness and aggression had been engineered out of humanity through a combination of genetic engineering and psychological conditioning. And in the much darker novel 1984 (1949), Huxley’s compatriot George Orwell described a society in which thought itself was controlled; in Orwell’s world, children were taught to use a simplified form of English called Newspeak in order to assure that they could never express ideas that were dangerous to society.

These are all fictional tales, to be sure, and in each the leaders who held the power used conspicuous forms of control that at least a few people actively resisted and occasionally overcame. But in the non-fiction bestseller The Hidden Persuaders (1957) – recently released in a 50th-anniversary edition – the American journalist Vance Packard described a ‘strange and rather exotic’ type of influence that was rapidly emerging in the United States and that was, in a way, more threatening than the fictional types of control pictured in the novels. According to Packard, US corporate executives and politicians were beginning to use subtle and, in many cases, completely undetectable methods to change people’s thinking, emotions and behaviour based on insights from psychiatry and the social sciences.

Most of us have heard of at least one of these methods: subliminal stimulation, or what Packard called ‘subthreshold effects’ – the presentation of short messages that tell us what to do but that are flashed so briefly we aren’t aware we have seen them. In 1958, propelled by public concern about a theatre in New Jersey that had supposedly hidden messages in a movie to increase ice cream sales, the National Association of Broadcasters – the association that set standards for US television – amended its code to prohibit the use of subliminal messages in broadcasting. In 1974, the Federal Communications Commission opined that the use of such messages was ‘contrary to the public interest’. Legislation to prohibit subliminal messaging was also introduced in the US Congress but never enacted. Both the UK and Australia have strict laws prohibiting it.

Subliminal stimulation is probably still in wide use in the US – it’s hard to detect, after all, and no one is keeping track of it – but it’s probably not worth worrying about. Research suggests that it has only a small impact, and that it mainly influences people who are already motivated to follow its dictates; subliminal directives to drink affect people only if they’re already thirsty.

Packard had uncovered a much bigger problem, however – namely that powerful corporations were constantly looking for, and in many cases already applying, a wide variety of techniques for controlling people without their knowledge. He described a kind of cabal in which marketers worked closely with social scientists to determine, among other things, how to get people to buy things they didn’t need and how to condition young children to be good consumers – inclinations that were explicitly nurtured and trained in Huxley’s Brave New World. Guided by social science, marketers were quickly learning how to play upon people’s insecurities, frailties, unconscious fears, aggressive feelings and sexual desires to alter their thinking, emotions and behaviour without any awareness that they were being manipulated.

By the early 1950s, Packard said, politicians had got the message and were beginning to merchandise themselves using the same subtle forces being used to sell soap. Packard prefaced his chapter on politics with an unsettling quote from the British economist Kenneth Boulding: ‘A world of unseen dictatorship is conceivable, still using the forms of democratic government.’ Could this really happen, and, if so, how would it work?

The forces that Packard described have become more pervasive over the decades. The soothing music we all hear overhead in supermarkets causes us to walk more slowly and buy more food, whether we need it or not. Most of the vacuous thoughts and intense feelings our teenagers experience from morning till night are carefully orchestrated by highly skilled marketing professionals working in our fashion and entertainment industries. Politicians work with a wide range of consultants who test every aspect of what the politicians do in order to sway voters: clothing, intonations, facial expressions, makeup, hairstyles and speeches are all optimised, just like the packaging of a breakfast cereal.

Fortunately, all of these sources of influence operate competitively. Some of the persuaders want us to buy or believe one thing, others to buy or believe something else. It is the competitive nature of our society that keeps us, on balance, relatively free.

But what would happen if new sources of control began to emerge that had little or no competition? And what if new means of control were developed that were far more powerful – and far more invisible – than any that have existed in the past? And what if new types of control allowed a handful of people to exert enormous influence not just over the citizens of the US but over most of the people on Earth?

It might surprise you to hear this, but these things have already happened.

To understand how the new forms of mind control work, we need to start by looking at the search engine – one in particular: the biggest and best of them all, namely Google. The Google search engine is so good and so popular that the company’s name is now a commonly used verb in languages around the world. To ‘Google’ something is to look it up on the Google search engine, and that, in fact, is how most computer users worldwide get most of their information about just about everything these days. They Google it. Google has become the main gateway to virtually all knowledge, mainly because the search engine is so good at giving us exactly the information we are looking for, almost instantly and almost always in the first position of the list it shows us after we launch our search – the list of ‘search results’.

That ordered list is so good, in fact, that about 50 per cent of our clicks go to the top two items, and more than 90 per cent of our clicks go to the 10 items listed on the first page of results; few people look at other results pages, even though they often number in the thousands, which means they probably contain lots of good information. Google decides which of the billions of web pages it is going to include in our search results, and it also decides how to rank them. How it decides these things is a deep, dark secret – one of the best-kept secrets in the world, like the formula for Coca-Cola.
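As a rough illustration of how steep this attention curve is, consider a toy model in Python. The per-rank click shares below are invented so that the totals line up with the figures just quoted (about 50 per cent for the top two results, 90 per cent for the ten results on page one); they are not measured data from Google.

```python
# Hypothetical per-rank click shares for the ten results on page one.
# The values are chosen only so that the aggregates match the figures
# quoted in the text; they are not actual click statistics.
page_one = [0.33, 0.17, 0.10, 0.08, 0.06, 0.05, 0.04, 0.03, 0.02, 0.02]
deeper_pages = 1.0 - sum(page_one)  # every result below page one, combined

def cumulative_share(shares, k):
    """Fraction of all clicks captured by the top k ranked results."""
    return sum(shares[:k])

print(f"top two results: {cumulative_share(page_one, 2):.0%} of clicks")
print(f"first page:      {cumulative_share(page_one, 10):.0%} of clicks")
print(f"everything else: {deeper_pages:.0%} of clicks")
```

Under any distribution this lopsided, whoever controls the ordering controls most of the attention; the exact values matter far less than the steepness of the drop-off.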

Because people are far more likely to read and click on higher-ranked items, companies now spend billions of dollars every year trying to trick Google’s search algorithm – the computer program that does the selecting and ranking – into boosting them another notch or two. Moving up a notch can mean the difference between success and failure for a business, and moving into the top slots can be the key to fat profits.

Late in 2012, I began to wonder whether highly ranked search results could be impacting more than consumer choices. Perhaps, I speculated, a top search result could have a small impact on people’s opinions about things. Early in 2013, with my associate Ronald E Robertson of the American Institute for Behavioral Research and Technology in Vista, California, I put this idea to a test by conducting an experiment in which 102 people from the San Diego area were randomly assigned to one of three groups. In one group, people saw search results that favoured one political candidate – that is, results that linked to web pages that made this candidate look better than his or her opponent. In a second group, people saw search rankings that favoured the opposing candidate, and in the third group – the control group – people saw a mix of rankings that favoured neither candidate. The same search results and web pages were used in each group; the only thing that differed for the three groups was the ordering of the search results.

To make our experiment realistic, we used real search results that linked to real web pages. We also used a real election – the 2010 election for the prime minister of Australia. We used a foreign election to make sure that our participants were ‘undecided’. Their lack of familiarity with the candidates assured this. Through advertisements, we also recruited an ethnically diverse group of registered voters over a wide age range in order to match key demographic characteristics of the US voting population.

All participants were first given brief descriptions of the candidates and then asked to rate them in various ways, as well as to indicate which candidate they would vote for; as you might expect, participants initially favoured neither candidate on any of the five measures we used, and the vote was evenly split in all three groups. Then the participants were given up to 15 minutes in which to conduct an online search using ‘Kadoodle’, our mock search engine, which gave them access to five pages of search results that linked to web pages. People could move freely between search results and web pages, just as we do when using Google. When participants completed their search, we asked them to rate the candidates again, and we also asked them again who they would vote for.

We predicted that the opinions and voting preferences of 2 or 3 per cent of the people in the two bias groups – the groups in which people were seeing rankings favouring one candidate – would shift toward that candidate. What we actually found was astonishing. The proportion of people favouring the search engine’s top-ranked candidate increased by 48.4 per cent, and all five of our measures shifted toward that candidate. What’s more, 75 per cent of the people in the bias groups seemed to have been completely unaware that they were viewing biased search rankings. In the control group, opinions did not shift significantly.
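For clarity about what a figure such as '48.4 per cent' means: it is a relative increase in the proportion of participants favouring the search engine's top-ranked candidate, measured before and after the search session. The published paper defines its metric precisely; the sketch below, using invented counts, shows only the generic arithmetic:

```python
# Illustrative arithmetic only. The counts are invented for the example
# and are not data from the study described in the text.
def relative_shift(pre_favouring, post_favouring, n_participants):
    """Relative change in the proportion favouring the boosted candidate."""
    pre = pre_favouring / n_participants
    post = post_favouring / n_participants
    return (post - pre) / pre

# A hypothetical group of 34 participants, evenly split beforehand,
# 25 of whom favour the boosted candidate after searching:
print(f"{relative_shift(17, 25, 34):.1%}")  # prints 47.1%
```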

This seemed to be a major discovery. The shift we had produced, which we called the Search Engine Manipulation Effect (or SEME, pronounced ‘seem’), appeared to be one of the largest behavioural effects ever discovered. We did not immediately uncork the Champagne bottle, however. For one thing, we had tested only a small number of people, and they were all from the San Diego area.

Over the next year or so, we replicated our findings three more times, and the third time was with a sample of more than 2,000 people from all 50 US states. In that experiment, the shift in voting preferences was 37.1 per cent and even higher in some demographic groups – as high as 80 per cent, in fact.

We also learned in this series of experiments that by reducing the bias just slightly on the first page of search results – specifically, by including one search item that favoured the other candidate in the third or fourth position of the results – we could mask our manipulation so that few or even no people were aware that they were seeing biased rankings. We could still produce dramatic shifts in voting preferences, but we could do so invisibly.
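The masking manoeuvre can be pictured as a simple list operation: fill the page with results favouring one candidate, then slot a single opposing item into the third or fourth position. The sketch below is purely illustrative, with made-up item names; it is not the study's actual materials or procedure.

```python
# Illustrative sketch of a biased-but-masked results page. Item names
# are placeholders, not real search results.
def masked_ranking(pro_items, anti_items, mask_position=3):
    """Build a ten-item page favouring one side, with a single opposing
    item inserted at mask_position (1-based) to disguise the bias."""
    page = pro_items[:9]                 # nine favourable results
    page.insert(mask_position - 1, anti_items[0])
    return page

pro = [f"pro-{i}" for i in range(1, 10)]
anti = [f"anti-{i}" for i in range(1, 10)]
print(masked_ranking(pro, anti))
```

The page still steers nine of ten results toward one side, but a reader scanning the top of the list sees 'both sides' represented.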

Still no Champagne, though. Our results were strong and consistent, but our experiments all involved a foreign election – that 2010 election in Australia. Could voting preferences be shifted with real voters in the middle of a real campaign? We were sceptical. In real elections, people are bombarded with multiple sources of information, and they also know a lot about the candidates. It seemed unlikely that a single experience on a search engine would have much impact on their voting preferences.

To find out, in early 2014, we went to India just before voting began in the largest democratic election in the world – the Lok Sabha election for prime minister. The three main candidates were Rahul Gandhi, Arvind Kejriwal, and Narendra Modi. Making use of online subject pools and both online and print advertisements, we recruited 2,150 people from 27 of India’s 35 states and territories to participate in our experiment. To take part, they had to be registered voters who had not yet voted and who were still undecided about how they would vote.

Participants were randomly assigned to three search-engine groups, favouring, respectively, Gandhi, Kejriwal or Modi. As one might expect, familiarity levels with the candidates were high – between 7.7 and 8.5 on a scale of 10. We predicted that our manipulation would produce a very small effect, if any, but that’s not what we found. On average, we were able to shift the proportion of people favouring any given candidate by more than 20 per cent overall and more than 60 per cent in some demographic groups. Even more disturbing, 99.5 per cent of our participants showed no awareness that they were viewing biased search rankings – in other words, that they were being manipulated.

SEME’s near-invisibility is curious indeed. It means that when people – including you and me – are looking at biased search rankings, they look just fine. So if right now you Google ‘US presidential candidates’, the search results you see will probably look fairly random, even if they happen to favour one candidate. Even I have trouble detecting bias in search rankings that I know to be biased (because they were prepared by my staff). Yet our randomised, controlled experiments tell us over and over again that when higher-ranked items connect with web pages that favour one candidate, this has a dramatic impact on the opinions of undecided voters, in large part for the simple reason that people tend to click only on higher-ranked items. This is truly scary: like subliminal stimuli, SEME is a force you can’t see; but unlike subliminal stimuli, it has an enormous impact – like Casper the ghost pushing you down a flight of stairs.

We published a detailed report about our first five experiments on SEME in the prestigious Proceedings of the National Academy of Sciences (PNAS) in August 2015. We had indeed found something important, especially given Google’s dominance over search. Google has a near-monopoly on internet searches in the US, with 83 per cent of Americans specifying Google as the search engine they use most often, according to the Pew Research Center. So if Google favours one candidate in an election, its impact on undecided voters could easily decide the election’s outcome.

Keep in mind that we had had only one shot at our participants. What would be the impact of favouring one candidate in searches people are conducting over a period of weeks or months before an election? It would almost certainly be much larger than what we were seeing in our experiments.

Other types of influence during an election campaign are balanced by competing sources of influence – a wide variety of newspapers, radio shows and television networks, for example – but Google, for all intents and purposes, has no competition, and people trust its search results implicitly, assuming that the company’s mysterious search algorithm is entirely objective and unbiased. This high level of trust, combined with the lack of competition, puts Google in a unique position to impact elections. Even more disturbing, the search-ranking business is entirely unregulated, so Google could favour any candidate it likes without violating any laws. Some courts have even ruled that Google’s right to rank-order search results as it pleases is protected as a form of free speech.

Does the company ever favour particular candidates? In the 2012 US presidential election, Google and its top executives donated more than $800,000 to President Barack Obama and just $37,000 to his opponent, Mitt Romney. And in 2015, a team of researchers from the University of Maryland and elsewhere showed that Google’s search results routinely favoured Democratic candidates. Are Google’s search rankings really biased? An internal report issued by the US Federal Trade Commission in 2012 concluded that Google’s search rankings routinely put Google’s financial interests ahead of those of their competitors, and anti-trust actions currently under way against Google in both the European Union and India are based on similar findings.

In most countries, 90 per cent of online search is conducted on Google, which gives the company even more power to flip elections than it has in the US and, with internet penetration increasing rapidly worldwide, this power is growing. In our PNAS article, Robertson and I calculated that Google now has the power to flip upwards of 25 per cent of the national elections in the world with no one knowing this is occurring. In fact, we estimate that, with or without deliberate planning on the part of company executives, Google’s search rankings have been impacting elections for years, with growing impact each year. And because search rankings are ephemeral, they leave no paper trail, which gives the company complete deniability.

Power on this scale and with this level of invisibility is unprecedented in human history. But it turns out that our discovery about SEME was just the tip of a very large iceberg.

Recent reports suggest that the Democratic presidential candidate Hillary Clinton is making heavy use of social media to try to generate support – Twitter, Instagram, Pinterest, Snapchat and Facebook, for starters. At this writing, she has 5.4 million followers on Twitter, and her staff is tweeting several times an hour during waking hours. The Republican frontrunner, Donald Trump, has 5.9 million Twitter followers and is tweeting just as frequently.

Is social media as big a threat to democracy as search rankings appear to be? Not necessarily. When new technologies are used competitively, they present no threat. Even though the platforms are new, they are generally being used the same way as billboards and television commercials have been used for decades: you put a billboard on one side of the street; I put one on the other. I might have the money to erect more billboards than you, but the process is still competitive.

What happens, though, if such technologies are misused by the companies that own them? A study by Robert M Bond, now a political science professor at Ohio State University, and others published in Nature in 2012 described an ethically questionable experiment in which, on election day in 2010, Facebook sent ‘go out and vote’ reminders to more than 60 million of its users. The reminders caused about 340,000 people to vote who otherwise would not have. Writing in the New Republic in 2014, Jonathan Zittrain, professor of international law at Harvard University, pointed out that, given the massive amount of information it has collected about its users, Facebook could easily send such messages only to people who support one particular party or candidate, and that doing so could easily flip a close election – with no one knowing that this has occurred. And because advertisements, like search rankings, are ephemeral, manipulating an election in this way would leave no paper trail.
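The arithmetic behind Zittrain's worry is worth spelling out: the per-user effect of the reminder was tiny, yet the absolute number of extra voters dwarfs the margin of a close election. A quick check, using the figures quoted above plus one well-known benchmark (the 537-vote certified margin in Florida in the 2000 US presidential election):

```python
# Figures from the Bond et al. study as quoted in the text.
reminders_sent = 60_000_000   # "more than 60 million" users
extra_voters = 340_000        # additional people who voted

lift = extra_voters / reminders_sent
print(f"turnout lift per reminder: {lift:.2%}")  # prints 0.57%

# Yet in absolute terms the effect towers over a close election's margin:
florida_2000_margin = 537     # certified Bush-Gore margin in Florida
print(extra_voters // florida_2000_margin)  # hundreds of times the margin
```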

Are there laws prohibiting Facebook from sending out ads selectively to certain users? Absolutely not; in fact, targeted advertising is how Facebook makes its money. Is Facebook currently manipulating elections in this way? No one knows, but in my view it would be foolish and possibly even improper for Facebook not to do so. Some candidates are better for a company than others, and Facebook’s executives have a fiduciary responsibility to the company’s stockholders to promote the company’s interests.

The Bond study was largely ignored, but another Facebook experiment, published in 2014 in PNAS, prompted protests around the world. In this study, for a period of a week, 689,000 Facebook users were sent news feeds that contained either an excess of positive terms, an excess of negative terms, or neither. Those in the first group subsequently used slightly more positive terms in their communications, while those in the second group used slightly more negative terms in their communications. This was said to show that people’s ‘emotional states’ could be deliberately manipulated on a massive scale by a social media company, an idea that many people found disturbing. People were also upset that a large-scale experiment on emotion had been conducted without the explicit consent of any of the participants.

Facebook’s consumer profiles are undoubtedly massive, but they pale in comparison with those maintained by Google, which is collecting information about people 24/7, using more than 60 different observation platforms – the search engine, of course, but also Google Wallet, Google Maps, Google AdWords, Google Analytics, Chrome, Google Docs, Android, YouTube, and on and on. Gmail users are generally oblivious to the fact that Google stores and analyses every email they write, even the drafts they never send – as well as all the incoming email they receive from both Gmail and non-Gmail users.

According to Google’s privacy policy – to which one assents whenever one uses a Google product, even when one has not been informed that he or she is using a Google product – Google can share the information it collects about you with almost anyone, including government agencies. But never with you. Google’s privacy is sacrosanct; yours is nonexistent.

Could Google and ‘those we work with’ (language from the privacy policy) use the information they are amassing about you for nefarious purposes – to manipulate or coerce, for example? Could inaccurate information in people’s profiles (which people have no way to correct) limit their opportunities or ruin their reputations?

Certainly, if Google set about to fix an election, it could first dip into its massive database of personal information to identify just those voters who are undecided. Then it could, day after day, send customised rankings favouring one candidate to just those people. One advantage of this approach is that it would make Google’s manipulation extremely difficult for investigators to detect.

Extreme forms of monitoring, whether by the KGB in the Soviet Union, the Stasi in East Germany, or Big Brother in 1984, are essential elements of all tyrannies, and technology is making both monitoring and the consolidation of surveillance data easier than ever. By 2020, China will have put in place the most ambitious government monitoring system ever created – a single database called the Social Credit System, in which multiple ratings and records for all of its 1.3 billion citizens are recorded for easy access by officials and bureaucrats. At a glance, they will know whether someone has plagiarised schoolwork, was tardy in paying bills, urinated in public, or blogged inappropriately online.

As Edward Snowden’s revelations made clear, we are rapidly moving toward a world in which both governments and corporations – sometimes working together – are collecting massive amounts of data about every one of us every day, with few or no laws in place that restrict how those data can be used. When you combine the data collection with the desire to control or manipulate, the possibilities are endless, but perhaps the most frightening possibility is the one expressed in Boulding’s assertion that an ‘unseen dictatorship’ was possible ‘using the forms of democratic government’.

Since Robertson and I submitted our initial report on SEME to PNAS early in 2015, we have completed a sophisticated series of experiments that have greatly enhanced our understanding of this phenomenon, and other experiments will be completed in the coming months. We have a much better sense now of why SEME is so powerful and how, to some extent, it can be suppressed.

We have also learned something very disturbing – that search engines are influencing far more than what people buy and whom they vote for. We now have evidence suggesting that on virtually all issues where people are initially undecided, search rankings are impacting almost every decision that people make. They are having an impact on the opinions, beliefs, attitudes and behaviours of internet users worldwide – entirely without people’s knowledge that this is occurring. This is happening with or without deliberate intervention by company officials; even so-called ‘organic’ search processes regularly generate search results that favour one point of view, and that in turn has the potential to tip the opinions of millions of people who are undecided on an issue. In one of our recent experiments, biased search results shifted people’s opinions about the value of fracking by 33.9 per cent.

Perhaps even more disturbing is that the handful of people who do show awareness that they are viewing biased search rankings shift even further in the predicted direction; simply knowing that a list is biased doesn’t necessarily protect you from SEME’s power.

Remember what the search algorithm is doing: in response to your query, it is selecting a handful of webpages from among the billions that are available, and it is ordering those webpages using secret criteria. Seconds later, the decision you make or the opinion you form – about the best toothpaste to use, whether fracking is safe, where you should go on your next vacation, who would make the best president, or whether global warming is real – is determined by that short list you are shown, even though you have no idea how the list was generated.

Meanwhile, behind the scenes, a consolidation of search engines has been quietly taking place, so that more people are using the dominant search engine even when they think they are not. Because Google is the best search engine, and because crawling the rapidly expanding internet has become prohibitively expensive, more and more search engines are drawing their information from the leader rather than generating it themselves. The most recent deal, revealed in a Securities and Exchange Commission filing in October 2015, was between Google and Yahoo! Inc.

Looking ahead to the November 2016 US presidential election, I see clear signs that Google is backing Hillary Clinton. In April 2015, Clinton hired Stephanie Hannon away from Google to be her chief technology officer and, a few months ago, Eric Schmidt, chairman of the holding company that controls Google, set up a semi-secret company – The Groundwork – for the specific purpose of putting Clinton in office. The formation of The Groundwork prompted Julian Assange, founder of WikiLeaks, to dub Google Clinton’s ‘secret weapon’ in her quest for the US presidency.

We now estimate that Hannon’s old friends have the power to drive between 2.6 and 10.4 million votes to Clinton on election day with no one knowing that this is occurring and without leaving a paper trail. They can also help her win the nomination, of course, by influencing undecided voters during the primaries. Swing voters have always been the key to winning elections, and there has never been a more powerful, efficient or inexpensive way to sway them than SEME.

We are living in a world in which a handful of high-tech companies, sometimes working hand-in-hand with governments, are not only monitoring much of our activity, but are also invisibly controlling more and more of what we think, feel, do and say. The technology that now surrounds us is not just a harmless toy; it has also made possible undetectable and untraceable manipulations of entire populations – manipulations that have no precedent in human history and that are currently well beyond the scope of existing regulations and laws. The new hidden persuaders are bigger, bolder and badder than anything Vance Packard ever envisioned. If we choose to ignore this, we do so at our peril.

Google’s lemmings: Pokémon go where Silicon Valley says


An analysis of Ingress and Pokémon Go reveals important truths about corporate control and the ability of our mobile phones to organize our desires.

By Alfie Brown

Source: ROAR Magazine

This article has a clickbaity title but a sobering and concerning point to make. In 2010, Google started up what is now a very important subsidiary, Niantic Inc. Google starts up a lot of companies each year and acquires a great many more, so there is nothing special in this. What is important is that whilst most of us see Google’s acquisition of every “start-up” and endless development of “subsidiary” companies with different names as simply an attempt to completely monopolize the market, the case of Niantic shows us that there is more to the extent of Google’s power.

Six years on from its inception, with the launch of its biggest game yet, Pokémon Go, Niantic has hit the headlines and people are finally paying attention to the company, with some apparent leftists even claiming we ought to boycott Pokémon Go. In fact, Niantic has been working on mobile phone psychology and social organization for several years. An analysis of the company’s two big games, Ingress and Pokémon Go, shows us some important truths about the world we are living in, about corporate control and about the ability of our mobile phones to organize our desires.

Niantic developed its first major game, Ingress, in 2011. The game, one of the most important of recent years, is a key ideological tool for Google — one that, unlike Pokémon Go, is little publicized. Ingress has seven million or more players and Ingress tattoos show the degree to which people define themselves by the application. Some players even describe Ingress as a “lifestyle” rather than a “game”. The reader can be forgiven for thinking: “I don’t play it, so why would this apply to me?” But the entertainment coming out of Google via Niantic is in line with Google’s wider project of regulating our movements and experiences of the physical world; unless you don’t use Google or any of its applications, many of which come built into our phones and cannot be uninstalled, this applies to you.

Ingress reflects a trend of mobile phone application development (which includes Google Maps and Uber, among other well-known apps) designed to regulate and influence our experience of the city, turning the mobile phone into a new kind of unconscious: an ideological force driving our movements while we remain only semi-aware of what propels us and why we are propelled in the directions we are.

I first considered the importance of mobile phone games to be about a kind of “distraction” — an argument I made in my book and related article in The New Inquiry. Later, when playing Ingress for the first time, I realized there was a lot more to it than this. Ingress, rather than simply distracting us from the city around us, actually trains us to become Google’s perfect citizens. In Ingress, the player moves around the real environment capturing “portals” represented by landmarks, monuments and public art, as well as other less-famous features of the city. The player is required to be within physical range of the “portal” to capture it, so the game constantly tracks the player via GPS. Importantly, it not only monitors where we go, but directs us where it wants us to move.

As such it is very much the counterpart of Google Maps, which is also developing the ability not only to track our movements but to direct them. Of course, Google’s algorithms have long since dictated which restaurants we visit, which cafés we are aware of and which paths we take to get to these destinations. Now though, Google is developing new technology that actually predicts where you will want to go based on the time, your GPS location and your habitual history of movement stored in its infinitely powerful recording system. This, like Ingress, shows us a new pattern emerging in which the mobile phone dictates our paths around the city and encourages us, without realizing it, to develop habitual and repetitious patterns of movement. More importantly still, such applications anticipate our very desires, not so much giving us what we want as determining what we desire.

Here again, the connection with the concept of the unconscious is useful. While some have seen the unconscious as a morass of unregulated desires, followers of Freud and later of Lacanian psychoanalysis have been keen to show precisely how thoroughly the unconscious is structured by outside forces. Our mobile phones pretend to be about fulfilling our every desire, giving us endless entertainment (games), easy transport (Uber), instant access to food and drink (OpenRice, JustEat) and even near-instantaneous sex and love (Tinder, Grindr). Yet, what is much scarier than the fact that you can get everything you want via your mobile phone is the possibility that what you want is itself set in motion by the phone.

Into precisely this atmosphere enters Pokémon Go, out just days ago, and already the most significant mobile phone release of 2016. The game is, of course, made by none other than Niantic Labs. A series of hysterical events have already arisen from the ethical minefield that is Pokémon Go. In the case of Ingress, academic study has already been dedicated to the fact that the game has sent young children into unlit city parks at 3am. With Pokémon Go, Australian police have had to respond to a bunch of Pokémon trainers trying to get into a police station to capture the Pokémon within, and some people found a dead body instead of a Pokémon. It has already been suggested that Pokémon Go is eventually going to kill someone; since that article was published, someone has crashed into a police car and another has been run over while hunting Pokémon. But, as with Ingress, it is not the occasional mad story to emerge that should concern us, but the psychological and technological effects of every user’s experience.

The premise of Pokémon Go is simply that you use your GPS to find Pokémon in the real environment and then your camera to make the Pokémon visible, so that the world is enriched by looking through the screen at what lies behind it.


The Pokémon itself is an incredible phenomenon deserving of a book-length study. Perhaps for now we can say that the Pokémon is the perfect example of what Jacques Lacan called the objet a, that perfectly cute, fetishised but elusive object of desire that would truly make us happy if only we could just get our hands on it. We never do, because there is always a newer, cuter and harder-to-capture version that we just have to catch!

Dystopian visions of what technology and videogames would lead to seem to have got something completely wrong. Depictions of the dystopian videogame future have tended to imagine each individual isolated from the rest, sat quietly alone in a small room, hooked up to a computer through which their life is exclusively lived. In other words, the physical environment recedes in favor of the imaginary electronic world. Contrary to these predictions, we now live in a dystopia where Google and its subsidiaries send us madly around the city almost non-stop, in directions of their choosing, in search of the objects of desire, whether that be a lover on Tinder, a bowl of authentic Japanese ramen or that elusive Clefairy or Pikachu.

In the 1990s parents could ask their children to “get outside more” to escape the videogame space, but now it is the games that make us charge around the city capturing portals and collecting Pokémon and going on dates. Putting aside the full access that Google gets to your accounts via Pokémon Go, this shows us something really dangerous. It points to the increasing reality that there really is no escape from Google — and that while we are doing what we think we want, believing that we are just using our phones to help us get it, in fact Google has an even greater power, a truly revolutionary one: the ability to create and organize desire itself.

It is this truly revolutionary power that is important when it comes to Pokémon Go and Ingress. To say that these games are revolutionary is not to say that they are doing any good, nor that they are “radical”, and certainly it is not to say that they are left-wing — on the contrary, the revolution in desire appears to be corporate, hegemonic and centralized. If the left is to have any hope, however, it must not resist Pokémon Go, as Jacobin have now famously suggested, but understand and perhaps even embrace the power of the mobile phone to re-organize desire and look for ways forward from here.


Alfie Bown is the author of Enjoying It: Candy Crush and Capitalism (Zero, 2015) and The PlayStation Dreamworld (Polity, forthcoming 2017). He is the co-editor of the Hong Kong Review of Books and writes on the politics of technology and videogames for many publications.

SMARTPHONES, SOCIAL MEDIA AND SLEEP: THE INVISIBLE DANGERS OF OUR 24/7 CULTURE


By Martijn Schirp

Source: High Existence

If there is one book to read about our addictions to work, phones, consumption, and the current state of capitalism, it's 24/7: Late Capitalism and the Ends of Sleep by Jonathan Crary, a professor of Modern Art & Theory at Columbia University. Crary argues that sleep is a standing affront to capitalism, and while that seems grim, it highlights the very real dark sides of always having glowing LED screens clutched in our hands.

Technology has ushered us into a 24/7 state: we live in a world that never stops producing and is infinitely connected. We have digital worlds in our pockets, and we carry our phones and screens everywhere, feeding our dopamine addictions when we’re bored or lonely, cradling us before bed with endless scrolls of news and waking us up with notifications and emails.

The barrier between work and home life has disappeared, and most professionals are able to and choose to continue working all hours of the day in an increasingly competitive, winner-take-all environment.

Most of our time, then, is either spent working or consuming (the upside of working so much is money, which is then used to consume): food, drugs, shopping, films, YouTube videos, Instagram feeds, news articles, updates from friends — even socializing time has been reduced to a passive "Netflix & Chill".

There are now very few significant interludes of human existence (with the colossal exception of sleep) that have not been penetrated and taken over as work time, consumption time, or marketing time.

The social-world and the work-world are both digitized, which makes it increasingly difficult to distinguish between the two, and beyond the pop-ups and video ads, individuals have become their own marketers. Building a “personal brand” as a living is not uncommon.

It is only recently that the elaboration, the modeling of one’s personal and social identity, has been reorganized to conform to the uninterrupted operation of markets, information networks, and other systems. A 24/7 environment has the semblance of a social world, but it is actually a non-social model of machinic performance and a suspension of living that does not disclose the human cost required to sustain its effectiveness.

The average North American adult “now sleeps approximately six and a half hours a night, an erosion from eight hours a generation ago, and down from ten hours in the early twentieth century,” and what suffers most from this lack of sleep is our innate ability to dream. Most people tend to forget or don’t even think about their dreams, much less their extraordinary ability to control them. What is frightening about this is the prevalent attitude of accepting the current state of reality as it is:

The idea of technological change as quasi-autonomous, driven by some process of autopoiesis or self-organization, allows many aspects of contemporary social reality to be accepted as necessary, unalterable circumstances, akin to facts of nature. In the false placement of today’s most visible products and devices within an explanatory lineage that includes the wheel, the pointed arch, moveable type, and so forth, there is a concealment of the most important techniques invented in the last 150 years: the various systems for the management and control of human beings.

What may be the most important fact to remember: nothing must be as it is. Here are three ways to escape the never-ending 24/7 state:

Unplug Your Phone & Plug Into Your Imagination

Break your cell phone habit. The dopamine addiction is real. I keep my phone in a Faraday pouch, which blocks signals to my phone and keeps me to my rule of no cell phone or screen use one hour prior to sleeping and one hour after waking.

As “visual and auditory ‘content’ is most often ephemeral, interchangeable material that, in addition to its commodity status, circulates to habituate and validate one’s immersion in the exigencies of twenty-first-century capitalism,” it is important to focus on the power of our own imagination. The hierarchical and algorithm-driven fields of social media and newsfeeds tend to serve us things we already know or like, and keep us wanting.

Instead, we can explore the limitless field of our imagination. Write down your dreams in the morning and use them as a vehicle for self-exploration, or venture into lucid dreaming to manifest your own desires or to explore creative pursuits. And yet for most of us, when walking, during our daily commute, even sitting on the toilet or in any moment where it’s just us and our thoughts, we turn to our cell phones for comfort, to fill the silence:

One of the forms of disempowerment within 24/7 environments is the incapacitation of daydream or of any mode of absent-minded introspection that would otherwise occur in intervals of slow or vacant time.

Even when socializing with friends, it’s a common habit to check our phones again and again. I’ve found that when one person does this, it enables others: if I see someone sitting across from me at dinner checking their Instagram feed, I’ll feel less guilty about doing the same. It can stop with you: turn off your phone.

Reevaluate Your Drug Habits & Addictions

Beyond digital dopamine, are you addicted to caffeine, sugar, alcohol, Adderall, cocaine, Ambien, Lexapro, Vicodin, etc.? We live in a self-selecting society, where some drugs are perfectly acceptable as long as they are prescribed by a doctor while other drugs are deemed dangerous. I used to babysit for an eight-year-old who was fed Ritalin daily for his ADHD and then, at night, had to take a tranquilizer to help him fall asleep. He was speedballing throughout his childhood, and I’ve met others who had the same experience and have since come to question the impact of these drugs on their personality and life-path.

There is a multiplication of the physical or psychological states for which new drugs are developed and then promoted as effective and obligatory treatments. As with digital devices and services, there is a fabrication of pseudo-necessities, or deficiencies for which new commodities are essential solutions… Over the last two decades, a growing range of emotional states have been increasingly pathologized in order to create vast new markets for previously unneeded products. The fluctuating textures of human affect and emotion that are only imprecisely suggested by the notions of shyness, anxiety, variable sexual desire, distraction, or sadness have been falsely converted into medical disorders to be targeted by hugely profitable drugs. Of the many links between the use of psychotropic drugs and communication devices, one is their parallel production of forms of social compliance.

Ritalin and Adderall (like cocaine) not only make their takers compliant but fuel them to tackle the 24/7 lifestyle, deadening empathy, increasing competitiveness and perhaps contributing to “destructive delusions about performance and self-aggrandizement”.

While amphetamines are regularly fed to children, psychedelic drugs tend to be demonized as extreme and dangerous. Yet, refreshingly, there are now organizations like the Multidisciplinary Association for Psychedelic Studies (MAPS), and a growing body of research, looking into how psychedelics can not only treat addictions, anxiety, and other disorders, but also expand consciousness and leave lasting personality changes for the better.

Find Your Passion & Connect With Real Life Communities

Crary argues that “whatever remaining pockets of everyday life are not directed toward quantitative or acquisitive ends, or cannot be adapted to telematic participation, tend to deteriorate in esteem and desirability.” Our tendency to tie our social worth to digital networks takes the saying “if a tree falls in a forest and nobody is around to hear it, does it make a sound?” and turns it into “if you do something fun and meaningful and don’t post it to social media, does it matter?”

But those meaningful moments in real life do matter, as does having a strong community to participate in. After all, addictions are mostly a result of isolation and bad environments.

As stated earlier, it is much easier to fall into the insidious trap of looking at your cell phone or constantly working if the person across from you does so first. Find your passion beyond the screen. Find your source of dopamine: what drives you, what engages you and makes you want to get up every day.

Finding a real community centered around a meaningful activity can help tremendously. For me, rock climbing is a meditative activity that requires focus and attention, and it is anchored in a community of people who are invested in your success as much as they are in their own. The sport is individual because each person’s path is unique; climbing carves out time for people to participate in life without social rules or concepts of winning over one another. Climbing outdoors is a way to connect with nature and to just hang out with friends.

I just returned from a week in New York City, the city that never sleeps and the capital of the 24/7 world, and it took me two weeks just to find the time to sit down and write this. It is not easy to accept the bleak claims in Crary’s book, because doing so means admitting our own addictions and how we play into this non-stop state. It’s just as hard to look away from our screens, but you can. Tonight, don’t put your phone or laptop into “sleep mode” — turn them off, and pay attention to your own dreams.

Further Study:

24/7: Late Capitalism and the Ends of Sleep by Jonathan Crary

24/7: Late Capitalism and the Ends of Sleep explores some of the ruinous consequences of the expanding non-stop processes of twenty-first-century capitalism. The marketplace now operates through every hour of the clock, pushing us into constant activity and eroding forms of community and political expression, damaging the fabric of everyday life.