It’s better to focus on where you are going than how you are feeling

By Christian Jarrett

Source: Aeon

The notion that emotional pain and suffering reflect a deviation from a default happy baseline has been referred to as the ‘assumption of healthy normality’. But it’s a mistaken assumption. Estimates of the lifetime prevalence of psychiatric disorders indicate that around one in two adults will meet the criteria for a mental-health condition at some point in their lives. Given that psychological pain is so ubiquitous, we should focus less on what might make us happy, and more on achieving a sense of meaning, regardless of how we’re feeling. Psychotherapy should help people maintain effective functioning while they are distressed, above and beyond aiming to reduce symptoms such as difficult thoughts, emotions and sensations. Acceptance and commitment therapy (ACT) takes this approach, using mindfulness, acceptance and other behavioural strategies to promote more flexible and value-driven behaviours. The goals in ACT are not necessarily to change or reduce one’s problematic thoughts or emotions, but to foster meaningful and effective behaviours regardless of mood, motivation or thinking. In other words, the primary goal is to promote what therapists call ‘valued living’.

Think of valued living as going about your daily life in the service of values that you find important, whereby engaging in these actions creates a sense of meaning and purpose. From an ACT perspective, symptoms of psychiatric disorders, and psychological suffering more broadly, are problematic when they are linked to rigid behaviours that pull us away from valued living. We might not have any control over the pain we experience – in fact, our emotional pain is profoundly human – but one area where we can exert some control is what we do in response to that suffering. Many common responses to difficult thoughts and emotions – such as avoidance, substance abuse, withdrawal and aggression – can alleviate distress in the short term, but also lead to long-term damage in our relationships, our jobs, our freedom and our personal growth – the very areas that provide that sense of meaning and purpose. By letting go of an agenda guided by minimising pain, and recalibrating toward a more value-driven agenda, our choices can be based on who we want to be, rather than how we want to feel.

In their 2013 study, the psychologists Todd Kashdan and Patrick McKnight of George Mason University in Virginia examined the day-to-day relationships between valued living and wellbeing in a sample of individuals with social anxiety disorder. This is a common but debilitating condition that’s marked by intense fear of social situations that might involve being negatively judged by others. People with social anxiety disorder often want and value positive relationships, but their considerable distress leads them to avoid social interactions, making this an excellent group in which to examine values and meaning.

In the study, participants began by identifying their central aim or purpose in life (eg, ‘trying to be a good role model to others’). Then, each day over the next two weeks, they rated their daily efforts and progress toward this goal, and provided daily ratings of their self-esteem, meaning in life, and experience of positive and negative emotions. On days when they reported investing greater effort toward their main life goal, they also tended to enjoy greater wellbeing: they said their life had more meaning, and they scored higher on self-esteem and the experience of positive emotions. Importantly, support was not found for the reverse path – greater wellbeing did not predict greater effort or progress toward strivings. This study highlights that sometimes we need to make the value-guided choice, regardless of how we feel.

If only it were so easy, though. For this reason, in ACT-based treatments, there is substantial focus on skills and techniques that help people cultivate a more aware, willing and tolerant stance toward difficult feelings and other internal experiences. This stands in explicit contrast to a ‘do X and your distress will be alleviated’ approach. The ACT techniques are not in the service of changing emotional states – they are in the service of facilitating valued action.

The effectiveness of ACT across different diagnoses and problem areas shows that the benefits of committing to valued living transcend traditional diagnostic categories. In addition to anxiety disorders, in studies of post-traumatic stress disorder, depression and resilience, chronic pain, suicidal ideation and many more, engaging in behaviours consistent with personally held values has been linked to a range of positive outcomes.

Which brings me back to my work as a therapist. While the breadth of exercises and techniques employed in ACT is beyond the scope of this article, there is one exercise I’d like to share that has helped some of my clients see the inextricable link between valued living and painful experiences. In this activity (of which there are different variations), the therapist first asks the client to write on an index card some of the internal experiences they are struggling with most – difficult thoughts and judgments, emotions, memories.

I ask them, what do you notice when you read that index card? I feel awful, I don’t want this. What do you want to do with the card? I want to throw it in the trash. Then the client flips the card over, and I ask them to write out some of the things that are most important, most meaningful to them – being a parent, caring and supporting others, learning, growing, etc. What do you notice when you read this side? Warmth, it feels right, this is who I want to be. Where is the pain, where is the other stuff? Still here, on the other side of the card. What happens if you push that pain away, escape or avoid it? I push the meaningful stuff away too. In your heart of hearts, what does your experience tell you right now? If I’m going to do the things that are important to me, be the person that I want to be, I also have to make room for the painful stuff.

In my experience, this is both an emotionally difficult exercise and also one that helps a person grasp that it’s impossible to disentangle pain and valued living. Sometimes it is hard to engage with those struggles in session, but we regularly return to the rationale of the approach – that maybe a different stance toward pain is necessary. And that is the crux of the work in ACT – opening up to the demons, judgments and suffering that lie underneath, all for the purpose of moving toward that which is meaningful.

The valued path is not necessarily the happy path. Social connectedness sometimes brings us in contact with memories of abuse and trauma. Being a parent stirs up doubts, uncertainty and feelings of anxiety, fear, anger and shame. Advocating for social justice requires repeated exposure to the inequities in our societies and the feelings of helplessness that can come from fighting for an equality that might not exist until after you’re gone. But a growing body of psychological research suggests that the valued path is the more workable one, whereas the happy path can be more of an illusion.

For readers who would like to find out more, I recommend the book Get Out of Your Mind and Into Your Life (2005) co-authored by the founder of ACT, Steven Hayes, and also Things Might Go Terribly, Horribly Wrong (2010) co-authored by another ACT pioneer, Kelly Wilson. There is also an international directory of ACT therapists, maintained by the Association for Contextual Behavioral Science.

The new mind control

The internet has spawned subtle forms of influence that can flip elections and manipulate everything we say, think and do

By Robert Epstein

Source: Aeon Magazine

Over the past century, more than a few great writers have expressed concern about humanity’s future. In The Iron Heel (1908), the American writer Jack London pictured a world in which a handful of wealthy corporate titans – the ‘oligarchs’ – kept the masses at bay with a brutal combination of rewards and punishments. Much of humanity lived in virtual slavery, while the fortunate ones were bought off with decent wages that allowed them to live comfortably – but without any real control over their lives.

In We (1924), the brilliant Russian writer Yevgeny Zamyatin, anticipating the excesses of the emerging Soviet Union, envisioned a world in which people were kept in check through pervasive monitoring. The walls of their homes were made of clear glass, so everything they did could be observed. They were allowed to lower their shades an hour a day to have sex, but both the rendezvous time and the lover had to be registered first with the state.

In Brave New World (1932), the British author Aldous Huxley pictured a near-perfect society in which unhappiness and aggression had been engineered out of humanity through a combination of genetic engineering and psychological conditioning. And in the much darker novel 1984 (1949), Huxley’s compatriot George Orwell described a society in which thought itself was controlled; in Orwell’s world, children were taught to use a simplified form of English called Newspeak in order to assure that they could never express ideas that were dangerous to society.

These are all fictional tales, to be sure, and in each the leaders who held the power used conspicuous forms of control that at least a few people actively resisted and occasionally overcame. But in the non-fiction bestseller The Hidden Persuaders (1957) – recently released in a 50th-anniversary edition – the American journalist Vance Packard described a ‘strange and rather exotic’ type of influence that was rapidly emerging in the United States and that was, in a way, more threatening than the fictional types of control pictured in the novels. According to Packard, US corporate executives and politicians were beginning to use subtle and, in many cases, completely undetectable methods to change people’s thinking, emotions and behaviour based on insights from psychiatry and the social sciences.

Most of us have heard of at least one of these methods: subliminal stimulation, or what Packard called ‘subthreshold effects’ – the presentation of short messages that tell us what to do but that are flashed so briefly we aren’t aware we have seen them. In 1958, propelled by public concern about a theatre in New Jersey that had supposedly hidden messages in a movie to increase ice cream sales, the National Association of Broadcasters – the association that set standards for US television – amended its code to prohibit the use of subliminal messages in broadcasting. In 1974, the Federal Communications Commission opined that the use of such messages was ‘contrary to the public interest’. Legislation to prohibit subliminal messaging was also introduced in the US Congress but never enacted. Both the UK and Australia have strict laws prohibiting it.

Subliminal stimulation is probably still in wide use in the US – it’s hard to detect, after all, and no one is keeping track of it – but it’s probably not worth worrying about. Research suggests that it has only a small impact, and that it mainly influences people who are already motivated to follow its dictates; subliminal directives to drink affect people only if they’re already thirsty.

Packard had uncovered a much bigger problem, however – namely that powerful corporations were constantly looking for, and in many cases already applying, a wide variety of techniques for controlling people without their knowledge. He described a kind of cabal in which marketers worked closely with social scientists to determine, among other things, how to get people to buy things they didn’t need and how to condition young children to be good consumers – inclinations that were explicitly nurtured and trained in Huxley’s Brave New World. Guided by social science, marketers were quickly learning how to play upon people’s insecurities, frailties, unconscious fears, aggressive feelings and sexual desires to alter their thinking, emotions and behaviour without any awareness that they were being manipulated.

By the early 1950s, Packard said, politicians had got the message and were beginning to merchandise themselves using the same subtle forces being used to sell soap. Packard prefaced his chapter on politics with an unsettling quote from the British economist Kenneth Boulding: ‘A world of unseen dictatorship is conceivable, still using the forms of democratic government.’ Could this really happen, and, if so, how would it work?

The forces that Packard described have become more pervasive over the decades. The soothing music we all hear overhead in supermarkets causes us to walk more slowly and buy more food, whether we need it or not. Most of the vacuous thoughts and intense feelings our teenagers experience from morning till night are carefully orchestrated by highly skilled marketing professionals working in our fashion and entertainment industries. Politicians work with a wide range of consultants who test every aspect of what the politicians do in order to sway voters: clothing, intonations, facial expressions, makeup, hairstyles and speeches are all optimised, just like the packaging of a breakfast cereal.

Fortunately, all of these sources of influence operate competitively. Some of the persuaders want us to buy or believe one thing, others to buy or believe something else. It is the competitive nature of our society that keeps us, on balance, relatively free.

But what would happen if new sources of control began to emerge that had little or no competition? And what if new means of control were developed that were far more powerful – and far more invisible – than any that have existed in the past? And what if new types of control allowed a handful of people to exert enormous influence not just over the citizens of the US but over most of the people on Earth?

It might surprise you to hear this, but these things have already happened.

To understand how the new forms of mind control work, we need to start by looking at the search engine – one in particular: the biggest and best of them all, namely Google. The Google search engine is so good and so popular that the company’s name is now a commonly used verb in languages around the world. To ‘Google’ something is to look it up on the Google search engine, and that, in fact, is how most computer users worldwide get most of their information about just about everything these days. They Google it. Google has become the main gateway to virtually all knowledge, mainly because the search engine is so good at giving us exactly the information we are looking for, almost instantly and almost always in the first position of the list it shows us after we launch our search – the list of ‘search results’.

That ordered list is so good, in fact, that about 50 per cent of our clicks go to the top two items, and more than 90 per cent of our clicks go to the 10 items listed on the first page of results; few people look at other results pages, even though they often number in the thousands, which means they probably contain lots of good information. Google decides which of the billions of web pages it is going to include in our search results, and it also decides how to rank them. How it decides these things is a deep, dark secret – one of the best-kept secrets in the world, like the formula for Coca-Cola.

Because people are far more likely to read and click on higher-ranked items, companies now spend billions of dollars every year trying to trick Google’s search algorithm – the computer program that does the selecting and ranking – into boosting them another notch or two. Moving up a notch can mean the difference between success and failure for a business, and moving into the top slots can be the key to fat profits.

Late in 2012, I began to wonder whether highly ranked search results could be impacting more than consumer choices. Perhaps, I speculated, a top search result could have a small impact on people’s opinions about things. Early in 2013, with my associate Ronald E Robertson of the American Institute for Behavioral Research and Technology in Vista, California, I put this idea to a test by conducting an experiment in which 102 people from the San Diego area were randomly assigned to one of three groups. In one group, people saw search results that favoured one political candidate – that is, results that linked to web pages that made this candidate look better than his or her opponent. In a second group, people saw search rankings that favoured the opposing candidate, and in the third group – the control group – people saw a mix of rankings that favoured neither candidate. The same search results and web pages were used in each group; the only thing that differed for the three groups was the ordering of the search results.

To make our experiment realistic, we used real search results that linked to real web pages. We also used a real election – the 2010 election for the prime minister of Australia. We used a foreign election to make sure that our participants were ‘undecided’. Their lack of familiarity with the candidates assured this. Through advertisements, we also recruited an ethnically diverse group of registered voters over a wide age range in order to match key demographic characteristics of the US voting population.

All participants were first given brief descriptions of the candidates and then asked to rate them in various ways, as well as to indicate which candidate they would vote for; as you might expect, participants initially favoured neither candidate on any of the five measures we used, and the vote was evenly split in all three groups. Then the participants were given up to 15 minutes in which to conduct an online search using ‘Kadoodle’, our mock search engine, which gave them access to five pages of search results that linked to web pages. People could move freely between search results and web pages, just as we do when using Google. When participants completed their search, we asked them to rate the candidates again, and we also asked them again who they would vote for.

We predicted that the opinions and voting preferences of 2 or 3 per cent of the people in the two bias groups – the groups in which people were seeing rankings favouring one candidate – would shift toward that candidate. What we actually found was astonishing. The proportion of people favouring the search engine’s top-ranked candidate increased by 48.4 per cent, and all five of our measures shifted toward that candidate. What’s more, 75 per cent of the people in the bias groups seemed to have been completely unaware that they were viewing biased search rankings. In the control group, opinions did not shift significantly.
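
To make the arithmetic behind a figure like that concrete, here is a minimal sketch in Python – not the researchers’ actual analysis code – of how a pre/post shift in candidate preference might be computed. It assumes the percentage refers to the relative increase in the proportion of participants favouring the top-ranked candidate; the function name and the toy numbers below are invented for illustration.

```python
# Hypothetical illustration only: one way to quantify a pre/post shift in
# candidate preference, assuming the reported figure is the relative increase
# in the proportion of participants favouring the top-ranked candidate.

def preference_shift(pre_votes, post_votes, candidate):
    """Relative change (%) in the proportion of participants favouring `candidate`."""
    pre = sum(v == candidate for v in pre_votes) / len(pre_votes)
    post = sum(v == candidate for v in post_votes) / len(post_votes)
    return (post - pre) / pre * 100

# Toy data: 20 participants, evenly split before the search task,
# leaning toward candidate "A" after seeing rankings biased toward A.
pre = ["A", "B"] * 10           # 50% favour A before searching
post = ["A"] * 14 + ["B"] * 6   # 70% favour A afterwards
print(f"Shift toward A: {preference_shift(pre, post, 'A'):.1f}%")  # prints 40.0%
```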

This seemed to be a major discovery. The shift we had produced, which we called the Search Engine Manipulation Effect (or SEME, pronounced ‘seem’), appeared to be one of the largest behavioural effects ever discovered. We did not immediately uncork the Champagne bottle, however. For one thing, we had tested only a small number of people, and they were all from the San Diego area.

Over the next year or so, we replicated our findings three more times, and the third time was with a sample of more than 2,000 people from all 50 US states. In that experiment, the shift in voting preferences was 37.1 per cent and even higher in some demographic groups – as high as 80 per cent, in fact.

We also learned in this series of experiments that by reducing the bias just slightly on the first page of search results – specifically, by including one search item that favoured the other candidate in the third or fourth position of the results – we could mask our manipulation so that few or even no people were aware that they were seeing biased rankings. We could still produce dramatic shifts in voting preferences, but we could do so invisibly.

Still no Champagne, though. Our results were strong and consistent, but our experiments all involved a foreign election – that 2010 election in Australia. Could voting preferences be shifted with real voters in the middle of a real campaign? We were sceptical. In real elections, people are bombarded with multiple sources of information, and they also know a lot about the candidates. It seemed unlikely that a single experience on a search engine would have much impact on their voting preferences.

To find out, in early 2014, we went to India just before voting began in the largest democratic election in the world – the Lok Sabha election for prime minister. The three main candidates were Rahul Gandhi, Arvind Kejriwal, and Narendra Modi. Making use of online subject pools and both online and print advertisements, we recruited 2,150 people from 27 of India’s 35 states and territories to participate in our experiment. To take part, they had to be registered voters who had not yet voted and who were still undecided about how they would vote.

Participants were randomly assigned to three search-engine groups, favouring, respectively, Gandhi, Kejriwal or Modi. As one might expect, familiarity with the candidates was high – between 7.7 and 8.5 on a scale of 10. We predicted that our manipulation would produce a very small effect, if any, but that’s not what we found. On average, we were able to shift the proportion of people favouring any given candidate by more than 20 per cent overall and more than 60 per cent in some demographic groups. Even more disturbing, 99.5 per cent of our participants showed no awareness that they were viewing biased search rankings – in other words, that they were being manipulated.

SEME’s near-invisibility is curious indeed. It means that when people – including you and me – are looking at biased search rankings, they look just fine. So if right now you Google ‘US presidential candidates’, the search results you see will probably look fairly random, even if they happen to favour one candidate. Even I have trouble detecting bias in search rankings that I know to be biased (because they were prepared by my staff). Yet our randomised, controlled experiments tell us over and over again that when higher-ranked items connect with web pages that favour one candidate, this has a dramatic impact on the opinions of undecided voters, in large part for the simple reason that people tend to click only on higher-ranked items. This is truly scary: like subliminal stimuli, SEME is a force you can’t see; but unlike subliminal stimuli, it has an enormous impact – like Casper the ghost pushing you down a flight of stairs.

We published a detailed report about our first five experiments on SEME in the prestigious Proceedings of the National Academy of Sciences (PNAS) in August 2015. We had indeed found something important, especially given Google’s dominance over search. Google has a near-monopoly on internet searches in the US, with 83 per cent of Americans specifying Google as the search engine they use most often, according to the Pew Research Center. So if Google favours one candidate in an election, its impact on undecided voters could easily decide the election’s outcome.

Keep in mind that we had had only one shot at our participants. What would be the impact of favouring one candidate in searches people are conducting over a period of weeks or months before an election? It would almost certainly be much larger than what we were seeing in our experiments.

Other types of influence during an election campaign are balanced by competing sources of influence – a wide variety of newspapers, radio shows and television networks, for example – but Google, for all intents and purposes, has no competition, and people trust its search results implicitly, assuming that the company’s mysterious search algorithm is entirely objective and unbiased. This high level of trust, combined with the lack of competition, puts Google in a unique position to impact elections. Even more disturbing, the search-ranking business is entirely unregulated, so Google could favour any candidate it likes without violating any laws. Some courts have even ruled that Google’s right to rank-order search results as it pleases is protected as a form of free speech.

Does the company ever favour particular candidates? In the 2012 US presidential election, Google and its top executives donated more than $800,000 to President Barack Obama and just $37,000 to his opponent, Mitt Romney. And in 2015, a team of researchers from the University of Maryland and elsewhere showed that Google’s search results routinely favoured Democratic candidates. Are Google’s search rankings really biased? An internal report issued by the US Federal Trade Commission in 2012 concluded that Google’s search rankings routinely put Google’s financial interests ahead of those of their competitors, and anti-trust actions currently under way against Google in both the European Union and India are based on similar findings.

In most countries, 90 per cent of online search is conducted on Google, which gives the company even more power to flip elections than it has in the US and, with internet penetration increasing rapidly worldwide, this power is growing. In our PNAS article, Robertson and I calculated that Google now has the power to flip upwards of 25 per cent of the national elections in the world with no one knowing this is occurring. In fact, we estimate that, with or without deliberate planning on the part of company executives, Google’s search rankings have been impacting elections for years, with growing impact each year. And because search rankings are ephemeral, they leave no paper trail, which gives the company complete deniability.

Power on this scale and with this level of invisibility is unprecedented in human history. But it turns out that our discovery about SEME was just the tip of a very large iceberg.

Recent reports suggest that the Democratic presidential candidate Hillary Clinton is making heavy use of social media to try to generate support – Twitter, Instagram, Pinterest, Snapchat and Facebook, for starters. At this writing, she has 5.4 million followers on Twitter, and her staff is tweeting several times an hour during waking hours. The Republican frontrunner, Donald Trump, has 5.9 million Twitter followers and is tweeting just as frequently.

Is social media as big a threat to democracy as search rankings appear to be? Not necessarily. When new technologies are used competitively, they present no threat. Even though the platforms are new, they are generally being used the same way as billboards and television commercials have been used for decades: you put a billboard on one side of the street; I put one on the other. I might have the money to erect more billboards than you, but the process is still competitive.

What happens, though, if such technologies are misused by the companies that own them? A study by Robert M Bond, now a political science professor at Ohio State University, and others published in Nature in 2012 described an ethically questionable experiment in which, on election day in 2010, Facebook sent ‘go out and vote’ reminders to more than 60 million of its users. The reminders caused about 340,000 people to vote who otherwise would not have. Writing in the New Republic in 2014, Jonathan Zittrain, professor of international law at Harvard University, pointed out that, given the massive amount of information it has collected about its users, Facebook could easily send such messages only to people who support one particular party or candidate, and that doing so could easily flip a close election – with no one knowing that this has occurred. And because advertisements, like search rankings, are ephemeral, manipulating an election in this way would leave no paper trail.

Are there laws prohibiting Facebook from sending out ads selectively to certain users? Absolutely not; in fact, targeted advertising is how Facebook makes its money. Is Facebook currently manipulating elections in this way? No one knows, but in my view it would be foolish and possibly even improper for Facebook not to do so. Some candidates are better for a company than others, and Facebook’s executives have a fiduciary responsibility to the company’s stockholders to promote the company’s interests.

The Bond study was largely ignored, but another Facebook experiment, published in 2014 in PNAS, prompted protests around the world. In this study, for a period of a week, 689,000 Facebook users were sent news feeds that contained either an excess of positive terms, an excess of negative terms, or neither. Those in the first group subsequently used slightly more positive terms in their communications, while those in the second group used slightly more negative terms in their communications. This was said to show that people’s ‘emotional states’ could be deliberately manipulated on a massive scale by a social media company, an idea that many people found disturbing. People were also upset that a large-scale experiment on emotion had been conducted without the explicit consent of any of the participants.

Facebook’s consumer profiles are undoubtedly massive, but they pale in comparison with those maintained by Google, which is collecting information about people 24/7, using more than 60 different observation platforms – the search engine, of course, but also Google Wallet, Google Maps, Google AdWords, Google Analytics, Chrome, Google Docs, Android, YouTube, and on and on. Gmail users are generally oblivious to the fact that Google stores and analyses every email they write, even the drafts they never send – as well as all the incoming email they receive from both Gmail and non-Gmail users.

According to Google’s privacy policy – to which one assents whenever one uses a Google product, even when one has not been informed that he or she is using a Google product – Google can share the information it collects about you with almost anyone, including government agencies. But never with you. Google’s privacy is sacrosanct; yours is nonexistent.

Could Google and ‘those we work with’ (language from the privacy policy) use the information they are amassing about you for nefarious purposes – to manipulate or coerce, for example? Could inaccurate information in people’s profiles (which people have no way to correct) limit their opportunities or ruin their reputations?

Certainly, if Google set about to fix an election, it could first dip into its massive database of personal information to identify just those voters who are undecided. Then it could, day after day, send customised rankings favouring one candidate to just those people. One advantage of this approach is that it would make Google’s manipulation extremely difficult for investigators to detect.

Extreme forms of monitoring, whether by the KGB in the Soviet Union, the Stasi in East Germany, or Big Brother in 1984, are essential elements of all tyrannies, and technology is making both monitoring and the consolidation of surveillance data easier than ever. By 2020, China will have put in place the most ambitious government monitoring system ever created – a single database called the Social Credit System, in which multiple ratings and records for all of its 1.3 billion citizens are recorded for easy access by officials and bureaucrats. At a glance, they will know whether someone has plagiarised schoolwork, was tardy in paying bills, urinated in public, or blogged inappropriately online.

As Edward Snowden’s revelations made clear, we are rapidly moving toward a world in which both governments and corporations – sometimes working together – are collecting massive amounts of data about every one of us every day, with few or no laws in place that restrict how those data can be used. When you combine the data collection with the desire to control or manipulate, the possibilities are endless, but perhaps the most frightening possibility is the one expressed in Boulding’s assertion that an ‘unseen dictatorship’ was possible ‘using the forms of democratic government’.

Since Robertson and I submitted our initial report on SEME to PNAS early in 2015, we have completed a sophisticated series of experiments that have greatly enhanced our understanding of this phenomenon, and other experiments will be completed in the coming months. We have a much better sense now of why SEME is so powerful and how, to some extent, it can be suppressed.

We have also learned something very disturbing – that search engines are influencing far more than what people buy and whom they vote for. We now have evidence suggesting that on virtually all issues where people are initially undecided, search rankings are impacting almost every decision that people make. They are having an impact on the opinions, beliefs, attitudes and behaviours of internet users worldwide – entirely without people’s knowledge that this is occurring. This is happening with or without deliberate intervention by company officials; even so-called ‘organic’ search processes regularly generate search results that favour one point of view, and that in turn has the potential to tip the opinions of millions of people who are undecided on an issue. In one of our recent experiments, biased search results shifted people’s opinions about the value of fracking by 33.9 per cent.

Perhaps even more disturbing is that the handful of people who do show awareness that they are viewing biased search rankings shift even further in the predicted direction; simply knowing that a list is biased doesn’t necessarily protect you from SEME’s power.

Remember what the search algorithm is doing: in response to your query, it is selecting a handful of webpages from among the billions that are available, and it is ordering those webpages using secret criteria. Seconds later, the decision you make or the opinion you form – about the best toothpaste to use, whether fracking is safe, where you should go on your next vacation, who would make the best president, or whether global warming is real – is determined by that short list you are shown, even though you have no idea how the list was generated.

Meanwhile, behind the scenes, a consolidation of search engines has been quietly taking place, so that more people are using the dominant search engine even when they think they are not. Because Google is the best search engine, and because crawling the rapidly expanding internet has become prohibitively expensive, more and more search engines are drawing their information from the leader rather than generating it themselves. The most recent deal, revealed in a Securities and Exchange Commission filing in October 2015, was between Google and Yahoo! Inc.

Looking ahead to the November 2016 US presidential election, I see clear signs that Google is backing Hillary Clinton. In April 2015, Clinton hired Stephanie Hannon away from Google to be her chief technology officer and, a few months ago, Eric Schmidt, chairman of the holding company that controls Google, set up a semi-secret company – The Groundwork – for the specific purpose of putting Clinton in office. The formation of The Groundwork prompted Julian Assange, founder of Wikileaks, to dub Google Clinton’s ‘secret weapon’ in her quest for the US presidency.

We now estimate that Hannon’s old friends have the power to drive between 2.6 and 10.4 million votes to Clinton on election day with no one knowing that this is occurring and without leaving a paper trail. They can also help her win the nomination, of course, by influencing undecided voters during the primaries. Swing voters have always been the key to winning elections, and there has never been a more powerful, efficient or inexpensive way to sway them than SEME.

We are living in a world in which a handful of high-tech companies, sometimes working hand-in-hand with governments, are not only monitoring much of our activity, but are also invisibly controlling more and more of what we think, feel, do and say. The technology that now surrounds us is not just a harmless toy; it has also made possible undetectable and untraceable manipulations of entire populations – manipulations that have no precedent in human history and that are currently well beyond the scope of existing regulations and laws. The new hidden persuaders are bigger, bolder and badder than anything Vance Packard ever envisioned. If we choose to ignore this, we do so at our peril.

The Asshole Factory

Our economy doesn’t make stuff anymore. So what does it make?

By Umair Haque

Source: Medium

My good friend Mara has not one but two graduate degrees. From fine, storied universities. Surprise, surprise: the only “job” she was able to find was at a retail store.

Hey—it’s only minimum wage, but at least she’s working, right? And at a major-league, blue-chip company. An American icon; an institution; a name every man, woman, and child in this country knows; an historic company that rings of the American Dream the world over, besides. Surely, if nothing else, it’s a start.

Perhaps you’re right. Maybe it isn’t the start she always dreamed of…but at least it is one. If so…then what awaits her at the finish?

What is Mara’s job like? Her sales figures are monitored…by the microsecond. By hidden cameras and mics. They listen to her every word; they capture her every movement; they track and stalk her as if she were an animal; or a prisoner; or both. She’s jacked into a headset that literally barks algorithmic, programmed “orders” at her, parroting her own “performance” back to her, telling her how she compares with quotas calculated…down to the second…for all the hundreds of items in the store…which recites “influence and manipulation techniques” to her…to use on unsuspecting customers…that sound suspiciously like psychological warfare. It’s as if the NSA was following you around…and it was stuck in your head…telling you what an inadequate failure you were…psychologically waterboarding you…all day long…every day for the rest of your life.

Mara’s boss sits in the back. Monitoring all twelve, or fifteen, or twenty people that work in the store. On a set of screens. Half camera displays, half spreadsheets; numbers blinking in real-time. Glued to it like a zombie. Chewing slowly with her mouth open. Jacked into a headset. A drone-pilot… piloting a fleet of human drones…pressure-selling disposable mass-made shit…as if it were luxury yachts…through robo-programmed info-warfare…like zombies…to other zombies…who look stunned…like they just got laser blasted, cluster-bombed, shock-and-awed…

WTF?

It’s bananas. The whole scene is like a maximum-security mental asylum designed by sadomasochists in a sci-fi movie. If Jeffrey Dahmer, Rasputin, and Michael Bay designed a “store” together, they couldn’t do any better. Her “job” will begin to drive her crazy—paranoid, depressed, deluded—in a matter of years if she continues doing it. No human psyche can bear that kind of relentless, systematic abuse.

Now. Note what all the technology and bureaucracy that wonderful, noble company has invested hundreds of millions in doesn’t ask her to do. Learn. Think. Reflect. Teach. Inspire. Lead. Connect. Imagine. Create. Grow. Dream. Actually…serve customers. Heaven forbid. It just beats her over the head, over and over again, three times a minute, every twenty seconds, with how much she hasn’t sold; hasn’t made; hasn’t produced. For her shitty .0003% commission. According to the quota that’s been set for her. By her boss. For his boss. For their boss. And so on all the way up the food chain.

See my point? Mara’s job isn’t to benefit customers. It isn’t to educate, understand, listen to, or even to chat with them. It isn’t to stop them from buying what they don’t want; to help them find what they might need; to match them with the right stuff. Nope. It’s merely to push more and more and more and more shit at them…faster, meaner, and dumber than any sane person would think is humanly possible…using advanced military technology and techniques…programmed to abuse her…so she can wage advanced psychological warfare…on her customers. As if they were just suckers, gaping maws, fools, marks. And be yelled at…by a robot…if she doesn’t.

Really? This is the best our economy can do? To take the stuff of 21st-century warfare and use it to…rack up the profit? To turn a bright young woman with two grad degrees…into a Superprofitable Human Weapon of Mass Consumption…a half-crazed algorithmically-programmed asshole…a human drone…so even bigger, actually crazy assholes…can get super-rich…by slinging entire supertankers full of junk…at people getting poorer at four thousand percent interest a year…by using drones and bots to wage psy-warfare against them…so they’re conned into buying too much?

The economy doesn’t make stuff anymore. That much you know. So what does it make?

It makes assholes.

The Great Enterprise of this age is the Asshole Industry.

And that’s not just a tragedy. It is something approaching the moral equivalent of a crime. For it demolishes human potential in precisely the same way as locking someone innocent up, and throwing away the key.

Consider Mara again. Who in Christ’s name would design such an inhuman system? Whose sick joke of an idea is a “store” like that? What do you even call it? Because it’s surely not a “store”.

Only a monstrous asshole of the highest order could assemble such a demonically vampiric bullshit machine to prey on…everyone. Customers, managers, workers alike. Such a carefully sophisticated engine of human misery; of finessed cruelty; all to rake in a few extra pennies an hour, at the expense of dignity, intelligence, creativity, commitment, fairness, craft, service, sovereignty…sanity.

The store is an Asshole Factory.

Allow me to explain.

What happens to Mara when she’s “doing her job”? Think about it for a second. She turns into precisely the kind of asshole that the heartless dweebs who thought up this infernal torture-machine no doubt already are. Not because she wants to. But because she has to.

That’s exactly what the store was designed to do. Turn everyone into the same kind of asshole as the assholes that made it, run it, and benefit from it…want everyone to be.

The store is an Asshole Factory.

Our world is now full of Asshole Factories. That’s what the stores, offices, industrial parks, skyscrapers, malls, low-rise blocks, gleaming headquarters, whimsically designed corporate campuses, really are.

It’s the grand endeavor of today. We don’t make stuff anymore. We make assholes. The Great Enterprise of this age is the Asshole Industry.

Consider, for a moment, my tiny hypothesis.

Have you noticed, lately, that people seem to be more, well, assholish…than before? That everywhere you go, people seem to be meaner, nastier, dumber, angrier, more brutish?

Why?

It is the last and greatest industry left in an economy that has been impoverished, emptied, hollowed out, drained.

The Great Enterprise of the Age of Stagnation is the wholesale manufacture not of great, world-shaking, ground-breaking ideas, inventions, concerns…but of bigger and bigger assholes.

The chain-store; the mall; the hypermarket. The corporation; the firm; the partnership. B-school; law school; med school. The boardroom; the backroom; the trading floor.

These are, by and large, Asshole Factories. They don’t make people. Capable of great things. Who create and build and touch and soar. They make assholes.

They are designed to disinfect us of our fragility. To cleanse us of our flaws. To purge us of weakness. Love, grace, mercy, longing, forgiveness, passion, truth, nobility, dreams. Their objective is to stamp all that out; to eradicate it; to erase it. To replace it with calculation, ruthlessness, self-concern, gluttony, cruelty, anxiety, despair. By using the most sophisticated technology ever made to subjugate, oppress, and goad us into being little torturers ourselves.

And in so doing, they emotionally sterilize us. They psychically traumatize us. They intellectually castrate us. They socially neuter us. They cheat us of greatness. That is how they turn us into assholes.

They are designed to deprive us, and to goad us into depriving everyone else, of the lives we could and should have, at our highest and truest and noblest.

The assholes haven’t just taken our incomes, our savings, our careers, our educations. They’ve taken something far more precious; something priceless. It is our lives—the full, true lives we should be living—that have been taken from us. And in the gaping void where the lives we should be living are, the assholes have deftly inserted carbon copies of…themselves.

When you think about it that way…is it any wonder that society seems to be stuck? That the economy seems headed into oblivion? That life for pretty much anyone under the age of 35 and/or worth less than $20 million or so appears to be going…nowhere?

Remember my friend Mara? She’s probably being piloted like a drone…yelled at by a bot…three times a minute…into waging advanced techniques of psycho-war…designed to traumatize prisoners…over and over and over again…right this very second…

Until she’s cleansed. Perfect. Flawless. Pure. Another gleaming, brand-new asshole, rolling proudly off the assembly line of the Asshole Factory.

We’re obedient constructivists. Pragmatists. Rationalists. So you probably want to know: what can we do about it?

It’s pretty simple.

Don’t be an asshole. Remember the Asshole Factories? Here’s a secret: they’re churning out assholes by the millions. And so should you bravely decide to be an asshole, what you’ll really be is just another interchangeable, forgettable, rapidly depreciating commodity.

So who should you be?

Be yourself. The person you were meant to be. Whether you believe in heaven or the inferno, freedom or fate, the simple fact is: each and every one of us was put here to be something greater than Just Another Asshole stealing pennies from his neighbors to pay off Even Bigger Assholes.

So let me say it again. Don’t be an asshole. Be yourself. The miracle of being that you were meant to be. A person that, consumed with passion, seared with happiness, aglow with meaning, brings forth all that is great, noble, and true in the world, and so, with love, mercy, and wisdom, lifts every life that you meet into the light.

Thank you and goodbye.