9 Things You Don’t Know about Yourself

By Steve Ayan

Source: Waking Times

Your “self” lies before you like an open book. Just peer inside and read: who you are, your likes and dislikes, your hopes and fears; they are all there, ready to be understood. This notion is popular but is probably completely false! Psychological research shows that we do not have privileged access to who we are. When we try to assess ourselves accurately, we are really poking around in a fog.

Princeton University psychologist Emily Pronin, who specializes in human self-perception and decision making, calls the mistaken belief in privileged access the “introspection illusion.” The way we view ourselves is distorted, but we do not realize it. As a result, our self-image has surprisingly little to do with our actions. For example, we may be absolutely convinced that we are empathetic and generous but still walk right past a homeless person on a cold day.

The reason for this distorted view is quite simple, according to Pronin. Because we do not want to be stingy, arrogant, or self-righteous, we assume that we are not any of those things. As evidence, she points to our divergent views of ourselves and others. We have no trouble recognizing how prejudiced or unfair our office colleague acts toward another person. But we do not consider that we could behave in much the same way: Because we intend to be morally good, it never occurs to us that we, too, might be prejudiced.

Pronin assessed her thesis in a number of experiments. Among other things, she had her study participants complete a test involving matching faces with personal statements that would supposedly assess their social intelligence. Afterward, some of them were told that they had failed and were asked to name weaknesses in the testing procedure. Although the opinions of the subjects were almost certainly biased (not only had they supposedly failed the test, they were also being asked to critique it), most of the participants said their evaluations were completely objective. It was much the same in judging works of art, although subjects who used a biased strategy for assessing the quality of paintings nonetheless believed that their own judgment was balanced. Pronin argues that we are primed to mask our own biases.

Is the word “introspection” merely a nice metaphor? Could it be that we are not really looking into ourselves, as the Latin root of the word suggests, but producing a flattering self-image that denies the failings that we all have? The research on self-knowledge has yielded much evidence for this conclusion. Although we think we are observing ourselves clearly, our self-image is affected by processes that remain unconscious.

1. Your motives are often a complete mystery to you

How well do people know themselves? In answering this question, researchers encounter the following problem: to assess a person’s self-image, one would have to know who that person really is. Investigators use a variety of techniques to tackle such questions. For example, they compare the self-assessments of test subjects with the subjects’ behavior in laboratory situations or in everyday life. They may ask other people, such as relatives or friends, to assess subjects, as well. And they probe unconscious inclinations using special methods.

To measure unconscious inclinations, psychologists can apply a method known as the implicit association test (IAT), developed in the 1990s by Anthony Greenwald of the University of Washington and his colleagues, to uncover hidden attitudes. Since then, numerous variants have been devised to examine anxiety, impulsiveness, and sociability, among other features. The approach assumes that instantaneous reactions require no reflection; as a result, unconscious parts of the personality come to the fore.

Notably, experimenters seek to determine how closely words that are relevant to a person are linked to certain concepts. For example, participants in a study were asked to press a key as quickly as possible when a word that described a characteristic such as extroversion (say, “talkative” or “energetic”) appeared on a screen. They were also asked to press the same key as soon as they saw a word on the screen that related to themselves (such as their own name). They were to press a different key as soon as an introverted characteristic (say, “quiet” or “withdrawn”) appeared or when the word involved someone else. Of course, the words and key combinations were switched over the course of many test runs. If a reaction was quicker when a word associated with the participant followed “extroverted,” for instance, it was assumed that extroversion was probably integral to that person’s self-image.
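
To make the logic of the measurement concrete, here is a minimal sketch, in Python, of how reaction times from such a task might be scored. It is an illustration only, not the published IAT scoring procedure; the trial data, the condition labels, and the simple effect-size formula are assumptions for the example.

# Minimal, illustrative scoring for an IAT-style task (hypothetical data).
# "congruent" trials: words about the self share a response key with
# extroverted words; "incongruent" trials: self-words share a key with
# introverted words. Reaction times are in milliseconds.
from statistics import mean, stdev

trials = [
    ("congruent", 512), ("congruent", 480), ("congruent", 530),
    ("incongruent", 640), ("incongruent", 605), ("incongruent", 590),
]

congruent = [rt for cond, rt in trials if cond == "congruent"]
incongruent = [rt for cond, rt in trials if cond == "incongruent"]

# In the spirit of the IAT's D-score: difference of mean latencies,
# divided by the standard deviation of all latencies pooled together.
d_score = (mean(incongruent) - mean(congruent)) / stdev(congruent + incongruent)

if d_score > 0:
    print(f"D = {d_score:.2f}: faster when self-words pair with extroverted "
          "words, suggesting extroversion is part of the implicit self-image.")
else:
    print(f"D = {d_score:.2f}: no sign of an implicit extroverted self-image.")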

Such “implicit” self-concepts generally correspond only weakly to assessments of the self that are obtained through questionnaires. The image that people convey in surveys has little to do with their lightning-fast reactions to emotionally laden words. And a person’s implicit self-image is often quite predictive of his or her actual behavior, especially when nervousness or sociability is involved. On the other hand, questionnaires yield better information about such traits as conscientiousness or openness to new experiences. Psychologist Mitja Back of the University of Münster in Germany explains that methods designed to elicit automatic reactions reflect the spontaneous or habitual components of our personality. Conscientiousness and curiosity, on the other hand, require a certain degree of thought and can therefore be assessed more easily through self-reflection.

2. Outward appearances tell people a lot about you

Much research indicates that our nearest and dearest often see us better than we see ourselves. As psychologist Simine Vazire of the University of California, Davis, has shown, two conditions in particular may enable others to recognize who we really are most readily: First, when they are able to “read” a trait from outward characteristics and, second, when a trait has a clear positive or negative valence (intelligence and creativity are obviously desirable, for instance; dishonesty and egocentricity are not). Our assessments of ourselves most closely match assessments by others when it comes to more neutral characteristics.

The characteristics generally most readable by others are those that strongly affect our behavior. For example, people who are naturally sociable typically like to talk and seek out company; insecurity often manifests in behaviors such as hand-wringing or averting one’s gaze. In contrast, brooding is generally internal, unspooling within the confines of one’s mind.

We are frequently blind to the effect we have on others because we simply do not see our own facial expressions, gestures, and body language. I am hardly aware that my blinking eyes indicate stress or that the slump in my posture betrays how heavily something weighs on me. Because it is so difficult to observe ourselves, we must rely on the observations of others, especially those who know us well. It is hard to know who we are unless others let us know how we affect them.

3. Gaining some distance can help you know yourself better

Keeping a diary, pausing for self-reflection, and having probing conversations with others have a long tradition, but whether these methods enable us to know ourselves is hard to tell. In fact, sometimes doing the opposite—such as letting go—is more helpful because it provides some distance. In 2013, Erika Carlson, now at the University of Toronto, reviewed the literature on whether and how mindfulness meditation improves one’s self-knowledge. It helps, she noted, by overcoming two big hurdles: distorted thinking and ego protection. The practice of mindfulness teaches us to allow our thoughts to simply drift by and to identify with them as little as possible. Thoughts, after all, are “only thoughts” and not the absolute truth. Frequently, stepping out of oneself in this way and simply observing what the mind does fosters clarity.

Gaining insight into our unconscious motives can enhance emotional well-being. Oliver C. Schultheiss of Friedrich-Alexander University of Erlangen-Nürnberg in Germany has shown that our sense of well-being tends to grow as our conscious goals and unconscious motives become more aligned or congruent. For example, we should not slave away at a career that gives us money and power if these goals are of little importance to us. But how do we achieve such harmony? By imagining, for example. Try to imagine, as vividly and in as much detail as possible, how things would be if your most fervent wish came true. Would it really make you happier? Often we succumb to the temptation to aim excessively high without taking into account all of the steps and effort necessary to achieve ambitious goals.

4. We too often think we are better at something than we are

Are you familiar with the Dunning-Kruger effect? It holds that the more incompetent people are, the less they are aware of their incompetence. The effect is named after David Dunning of the University of Michigan and Justin Kruger of New York University.

Dunning and Kruger gave their test subjects a series of cognitive tasks and asked them to estimate how well they did. At best, 25 percent of the participants viewed their performance more or less realistically; only some people underestimated themselves. The quarter of subjects who scored worst on the tests really missed the mark, wildly exaggerating their cognitive abilities. Is it possible that boasting and failing are two sides of the same coin?

As the researchers emphasize, their work highlights a general feature of self-perception: Each of us tends to overlook our cognitive deficiencies. According to psychologist Adrian Furnham of University College London, the statistical correlation between perceived and actual IQ is, on average, only 0.16—a pretty poor showing, to put it mildly. By comparison, the correlation between height and sex is about 0.7.
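
To see just how poor a showing a correlation of 0.16 is, it helps to square it: the square of a correlation coefficient gives the share of variance in one variable accounted for by the other. A quick illustration in Python, using nothing beyond the two figures quoted above:

# Squaring a correlation coefficient gives the proportion of variance
# explained: r = 0.16 explains about 3 percent, r = 0.7 about 49 percent.
for r in (0.16, 0.7):
    print(f"r = {r}: about {r * r:.0%} of the variance explained")

In other words, knowing people's self-estimated IQ tells you almost nothing about their measured IQ.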

So why is the chasm between would-be and actual performance so gaping? Don’t we all have an interest in assessing ourselves realistically? It surely would spare us a great deal of wasted effort and perhaps a few embarrassments. The answer, it seems, is that a moderate inflation of self-esteem has certain benefits. According to a review by psychologists Shelley Taylor of the University of California, Los Angeles, and Jonathon Brown of the University of Washington, rose-colored glasses tend to increase our sense of well-being and our performance. People afflicted by depression, on the other hand, are inclined to be brutally realistic in their self-assessments. An embellished self-image seems to help us weather the ups and downs of daily life.

5. People who tear themselves down experience setbacks more frequently

Although most of our contemporaries harbor excessively positive views of their honesty or intelligence, some people suffer from the opposite distortion: They belittle themselves and their efforts. Experiencing contempt and belittlement in childhood, often associated with violence and abuse, can trigger this kind of negativity—which, in turn, can limit what people can accomplish, leading to distrust, despair, and even suicidal thoughts.

It might seem logical to think that people with a negative self-image would be just the ones who would want to overcompensate. Yet as psychologists working with William Swann of the University of Texas at Austin discovered, many individuals racked with self-doubt seek confirmation of their distorted self-perception. Swann described this phenomenon in a study on contentment in marriage. He asked couples about their own strengths and weaknesses, the ways they felt supported and valued by their partner, and how content they were in the marriage. As expected, those who had a more positive attitude toward themselves found greater satisfaction in their relationship the more they received praise and recognition from their other half. But those who habitually picked at themselves felt safer in their marriage when their partner reflected their negative image back to them. They did not ask for respect or appreciation. On the contrary, they wanted to hear exactly their own view of themselves: “You’re incompetent.”

Swann based his theory of self-verification on these findings. The theory holds that we want others to see us the way we see ourselves. In some cases, people actually provoke others to respond negatively to them so as to prove how worthless they are. This behavior is not necessarily masochism. It is symptomatic of the desire for coherence: If others respond to us in a way that confirms our self-image, then the world is as it should be.

Likewise, people who consider themselves failures will go out of their way not to succeed, contributing actively to their own undoing. They will miss meetings, habitually neglect doing assigned work, and get into hot water with the boss. Swann’s approach contradicts Dunning and Kruger’s theory of overestimation. But both camps are probably right: hyperinflated egos are certainly common, but negative self-images are not uncommon.

6. You deceive yourself without realizing it

According to one influential theory, our tendency for self-deception stems from our desire to impress others. To appear convincing, we ourselves must be convinced of our capabilities and truthfulness. Supporting this theory is the observation that successful manipulators are often quite full of themselves. Good salespeople, for example, exude an enthusiasm that is contagious; conversely, those who doubt themselves generally are not good at sweet talking. Lab research is supportive as well. In one study, participants were offered money if, in an interview, they could convincingly claim to have aced an IQ test. The more effort the candidates put into their performance, the more they themselves came to believe that they had a high IQ, even though their actual scores were more or less average.

Our self-deceptions have been shown to be quite changeable. Often we adapt them flexibly to new situations. This adaptability was demonstrated by Steven A. Sloman of Brown University and his colleagues. Their subjects were asked to move a cursor to a dot on a computer screen as quickly as possible. If the participants were told that above-average skill in this task reflected high intelligence, they immediately concentrated on the task and did better. They did not actually seem to think that they had exerted more effort—which the researchers interpret as evidence of a successful self-deception. On the other hand, if the test subjects were convinced that only dimwits performed well on such stupid tasks, their performance tanked precipitously.

But is self-deception even possible? Can we know something about ourselves on some level without being conscious of it? Absolutely! The experimental evidence involves the following research design: Subjects are played audiotapes of human voices, including their own, and are asked to signal whether they hear themselves. The recognition rate fluctuates depending on the clarity of the audiotapes and the loudness of the background noise. If brain waves are measured at the same time, particular signals in the reading indicate with certainty whether the participants heard their own voice.

Most people are somewhat embarrassed to hear their own voice. In a classic study, Ruben Gur of the University of Pennsylvania and Harold Sackeim of Columbia University made use of this reticence, comparing the statements of test subjects with their brain activity. Lo and behold, the activity frequently signaled, “That’s me!” without subjects’ having overtly identified a voice as their own. Moreover, if the investigators threatened the participants’ self-image—say, by telling them that they had scored miserably on another (irrelevant) test—they were even less apt to recognize their voice. Either way, their brain waves told the real story.

In a more recent study, researchers evaluated performances on a practice test meant to help students assess their own knowledge so that they could fill in gaps. Here, subjects were asked to complete as many tasks as possible within a set time limit. Given that the purpose of the practice test was to provide students with information they needed, it made little sense for them to cheat; on the contrary, artificially pumped-up scores could have led them to let their studies slide. Those who tried to improve their scores by using time beyond the allotted completion period would just be hurting themselves.

But many of the volunteers did precisely that. Unconsciously, they simply wanted to look good. Thus, the cheaters explained their running over time by claiming to have been distracted and wanting to make up for lost seconds. Or they said that their fudged outcomes were closer to their “true potential.” Such explanations, according to the researchers, confuse cause and effect, with people incorrectly thinking, “Intelligent people usually do better on tests. So if I manipulate my test score by simply taking a little more time than allowed, I’m one of the smart ones, too.” Conversely, people performed less diligently if they were told that doing well indicated a higher risk for developing schizophrenia. Researchers call this phenomenon diagnostic self-deception.

7. The “true self” is good for you

Most people believe that they have a solid essential core, a true self. Who they truly are is evinced primarily in their moral values and is relatively stable; other preferences may change, but the true self remains the same. Rebecca Schlegel and Joshua Hicks, both at Texas A&M University, and their colleagues have examined how people’s view of their true self affects their satisfaction with themselves. The researchers asked test subjects to keep a diary about their everyday life. The participants turned out to feel most alienated from themselves when they had done something morally questionable: They felt especially unsure of who they actually were when they had been dishonest or selfish. Experiments have also confirmed an association between the self and morality. When test subjects are reminded of earlier wrongdoing, their surety about themselves takes a hit.

George Newman and Joshua Knobe, both at Yale University, have found that people typically think humans harbor a true self that is virtuous. They presented subjects with case studies of dishonest people, racists, and the like. Participants generally attributed the behavior in the case studies to environmental factors such as a difficult childhood—the real essence of these people must surely have been different. This work shows our tendency to think that, in their heart of hearts, people pull for what is moral and good.

Another study by Newman and Knobe involved “Mark,” a devout Christian who was nonetheless attracted to other men. The researchers sought to understand how the participants viewed Mark’s dilemma. For conservative test subjects, Mark’s “true self” was not gay; they recommended that he resist such temptations. Those with a more liberal outlook thought he should come out of the closet. Yet if Mark was presented as a secular humanist who thought being homosexual was fine but had negative feelings when thinking about same-sex couples, the conservatives quickly identified this reluctance as evidence of Mark’s true self; liberals viewed it as evidence of a lack of insight or sophistication. In other words, what we claim to be the core of another person’s personality is in fact rooted in the values that we ourselves hold most dear. The “true self” turns out to be a moral yardstick.

The belief that the true self is moral probably explains why people connect personal improvements more than personal deficiencies to their “true self.” Apparently we do so actively to enhance appraisals of ourselves. Anne E. Wilson of Wilfrid Laurier University in Ontario and Michael Ross of the University of Waterloo in Ontario have demonstrated in several studies that we tend to ascribe more negative traits to the person we were in the past—which makes us look better in the here and now. According to Wilson and Ross, the further back people go, the more negative their characterization becomes. Although improvement and change are part of the normal maturation process, it feels good to believe that over time, one has become “who one really is.”

Assuming that we have a solid core identity reduces the complexity of a world that is constantly in flux. The people around us play many different roles, acting inconsistently and at the same time continuing to develop. It is reassuring to think that our friends Tom and Sarah will be precisely the same tomorrow as they are today and that they are basically good people—regardless of whether that perception is correct.

Is life without belief in a true self even imaginable? Researchers have examined this question by comparing different cultures. The belief in a true self is widespread in most parts of the world. One exception is Buddhism, which preaches the nonexistence of a stable self. Prospective Buddhist monks are taught to see through the illusionary character of the ego—it is always in flux and completely malleable.

Nina Strohminger of the University of Pennsylvania and her colleagues wanted to know how this perspective affects the fear of death of those who hold it. They gave a series of questionnaires and scenarios to about 200 lay Tibetans and 60 Buddhist monks. They compared the results with those of Christians and nonreligious people in the U.S., as well as with those of Hindus (who, much like Christians, believe that a core of the soul, or atman, gives human beings their identity). The common image of Buddhists is that they are deeply relaxed, completely “selfless” people. Yet the less that the Tibetan monks believed in a stable inner essence, the more likely they were to fear death. In addition, they were significantly more selfish in a hypothetical scenario in which forgoing a particular medication could prolong the life of another person. Nearly three out of four monks decided against that fictitious option, far more than the Americans or Hindus. Self-serving, fearful Buddhists? In another paper, Strohminger and her colleagues called the idea of the true self a “hopeful phantasm,” albeit a possibly useful one. It is, in any case, one that is hard to shake.

8. Insecure people tend to behave more morally

Insecurity is generally thought of as a drawback, but it is not entirely bad. People who feel insecure about whether they have some positive trait tend to try to prove that they do have it. Those who are unsure of their generosity, for example, are more likely to donate money to a good cause. This behavior can be elicited experimentally by giving subjects negative feedback—for instance, “According to our tests, you are less helpful and cooperative than average.” People dislike hearing such judgments and end up feeding the donation box.

Drazen Prelec, a psychologist at the Massachusetts Institute of Technology, explains such findings with his theory of self-signaling: What a particular action says about me is often more important than the action’s actual objective. More than a few people have stuck with a diet because they did not want to appear weak-willed. Conversely, it has been empirically established that those who are sure that they are generous, intelligent, or sociable make less effort to prove it. Too much self-assurance makes people complacent and increases the chasm between the self that they imagine and the self that is real. Therefore, those who think they know themselves well are particularly apt to know themselves less well than they think.

9. If you think of yourself as flexible, you will do much better

People’s own theories about who they are influence how they behave. One’s self-image can therefore easily become a self-fulfilling prophecy. Carol Dweck of Stanford University has spent much time researching such effects. Her takeaway: If we view a characteristic as mutable, we are inclined to work on it more. On the other hand, if we view a trait such as IQ or willpower as largely unchangeable and inherent, we will do little to improve it.

In Dweck’s studies of students, men and women, parents and teachers, she gleaned a basic principle: People with a rigid sense of self take failure badly. They see it as evidence of their limitations and fear it; fear of failure, meanwhile, can itself cause failure. In contrast, those who understand that a particular talent can be developed accept setbacks as an invitation to do better next time. Dweck thus recommends an attitude aimed at personal growth. When in doubt, we should assume that we have something more to learn and that we can improve and develop.

But even people who have a rigid sense of self are not fixed in all aspects of their personality. According to psychologist Andreas Steimer of the University of Heidelberg in Germany, even when people describe their strengths as completely stable, they tend to believe that they will outgrow their weaknesses sooner or later. If we try to imagine how our personality will look in several years, we lean toward views such as: “Level-headedness and clear focus will still be part and parcel of who I am, and I’ll probably have fewer self-doubts.”

Overall, we tend to view our character as more static than it is, presumably because this assessment offers security and direction. We want to recognize our particular traits and preferences so that we can act accordingly. In the final analysis, the image that we create of ourselves is a kind of safe haven in an ever-changing world.

And the moral of the story? According to researchers, self-knowledge is even more difficult to attain than has been thought. Contemporary psychology has fundamentally questioned the notion that we can know ourselves objectively and with finality. It has made it clear that the self is not a “thing” but rather a process of continual adaptation to changing circumstances. And the fact that we so often see ourselves as more competent, moral, and stable than we actually are serves our ability to adapt.

Building a Cooperative Economy

By Oliver Sylvester-Bradley

Source: Resilience

In permaculture terms the economy sometimes feels like a segregated monoculture planted with terminator seeds, sprayed with patented pesticides on venture capital backed farms designed to maximise profits in an unsustainable market place full of thieves and cheats. No wonder people prefer to potter in their gardens and allotments – and try to forget the craziness of corporate capitalism!

But no matter how much we try to ignore the corporate machine, it ploughs on regardless, and at various points in all of our lives we are forced to interact with the unsustainable, greed-based economy whether we like it or not. We all need to travel and buy energy; we like presents and holidays; and now we are buying more and more of these goods and services online, from people we do not know.

As local banks close in favour of apps, local taxis are driven out by Uber, and the likes of Airbnb and other holiday and comparison websites offer us ‘guaranteed savings’ – the brave new world of digital platforms is being thrust upon us, whether we like it or not.

The dominant form of business in our economy has not changed, but the method of delivery has. Platform businesses, which reach further and wider than conventional ‘bricks and mortar’ businesses and are able to ‘scale up’ and attract customers in their millions, are forcing out the smaller players, just as supermarkets killed the traditional garden market. Except these “platform monopolies” are taking things to a new level – often unbeknown to us, they’re gathering our data and using sophisticated algorithms to work out how to sell us more things that quite often we don’t need or want. They’re aggregating data and disintermediating in ways that we never knew were possible. Uber is valued at over 60 billion dollars but does not own a single taxi…

From monoculture to platform co-ops

To someone practicing permaculture, there is something almost offensive about vast fields where businesses cultivate the same single crop and, in a similar way, the exponents of ‘peer to peer’ and ‘open source’ technologies get equally offended by monolithic platforms that dominate the digital landscape.

Peer to peer (where individuals share content with other people, rather than relying on centralised servers) and open source software (which is free to use and adapt, without requiring a licence fee) are like the digital community’s own versions of permaculture. They provide a pathway to greater independence, autonomy, diversity and resilience than is offered by the dominant system.

David Holmgren’s ideas about creating small scale, copyable, adaptable solutions which have the power to change the world by creating decentralised, diverse, and more resilient systems have huge parallels with open source, collaborative software projects, which are developing as a response to the monolithic, proprietary and profit driven enclosures that dominate today’s Internet.

The end goal of this work is to create ‘platform cooperatives’ as alternatives to the venture capital backed platforms – platform cooperatives that are member owned and democratically controlled, allowing everyone that is affected by the business, be they customers, suppliers, workers or investors, a say in how the business is run and managed. Co-ops are an inherently different form of organisation from Limited or Public companies: they place community before profit and hence operate on entirely different principles from their corporate rivals. For this reason they are more resilient in downturns, more responsible to their communities and environments, and more effective at delivering real (not just financial) value to everyone they interact with.

Platform co-ops provide a template for a new kind of economy built on trust, mutual aid and respect for nature and community. By placing ownership firmly in the hands of the people and applying democratic forms of governance, they offer a legitimate alternative to the de facto form of business. There are several platform co-ops that already provide comparable, and often better, services than their corporate rivals, and with more support others will continue to develop.

On 26 and 27 July the OPEN 2018 conference at Conway Hall in London will showcase platform co-ops such as The Open Food Network – which is linking up local food producers and consumers throughout Europe, Resonate – the music streaming co-op, and SMart from Belgium, which provides support for a network of thousands of freelancers throughout Europe. The beginnings of a viable, self-supporting and sustainable economy are starting to emerge, and OPEN 2018, along with similar events in the US and across Europe, is bringing together the people with the ideas, the tech developers and the legal experts to help catalyse the transition.

Shared values and the network effect

There are so many similarities between permaculture’s philosophy and principles and the works of other progressive groups that hope to encourage a more sustainable, more resilient and equitable future. From Occupy to Open Source, from Permaculture to Peer to Peer and Collaborative Technology, to the Commons Transition groups, there are clearly overlapping values.

David Bollier, writing on the Peer to Peer Foundation blog has suggested that “…permaculturists and commoners need to connect more and learn from each other…” and the idea that these communities are ultimately working towards the same objective seems especially important to recognise if we are to accelerate the development of a more sustainable world.

There is already an evolving “shared narrative” between these various, disparate initiatives, but it is often sidelined by our self-selecting filters, which lead us back into the communities we know and trust. Collaboration and cooperation can be hard work, and as groups get bigger they can become harder still, but that’s no reason not to try. The fact that Wikipedia provides a better encyclopaedia, for free, in more languages than Britannica ever managed proves that online, open source collaboration can deliver greater value than proprietary, closed source systems.

The true value of a collaborative, open network only really manifests when its members communicate and work together through connected systems. Sharing ideas, discussing problems and addressing challenges in larger networks creates positive feedback loops via the network effect – a term which describes how the value of something increases in proportion to the number of people using it (like a phone, or social media network) – something all the various ethical and progressive networks could benefit from enormously.
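
The arithmetic behind the network effect is easy to sketch. One common formalisation, Metcalfe's law, values a network by the number of possible pairwise connections among its members; the snippet below (Python, purely illustrative) shows how that number grows far faster than the membership itself.

# Metcalfe's law: n members allow n * (n - 1) / 2 pairwise connections,
# so potential value grows roughly with the square of the membership.
def potential_connections(n: int) -> int:
    return n * (n - 1) // 2

for members in (10, 100, 1000, 10000):
    print(f"{members:>6} members -> {potential_connections(members):>10,} possible connections")

Ten times the members yields roughly a hundred times the possible connections, which is why linking up the various ethical and progressive networks could be so valuable.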

Parallels between collaborative, open source software development and permaculture principles:

1. Observe and interact

Progressive software projects often utilise ‘user focused’ design strategies to ensure they meet people’s needs. Taking time to understand how users interact with software systems, via user experience testing groups and an ongoing, iterative design process, is recognised to deliver higher quality solutions which suit specific user needs.

2. Catch and store energy

Peer to peer networks don’t rely on centralised servers but instead make use of the latent capacity of other users’ machines. Imagine how much more efficient it would be if, rather than deploying huge server farms, we drew on the computers that currently sit shut off at night or idle, when they could be providing valuable processing power for others. The Holochain project aims to make it simple and secure for anyone to join a truly peer to peer network and to share files and processing power in this way – and even to earn credits for hosting other people’s files and applications.

3. Obtain a yield

The Peer Production License provides a means by which open source developers can make the code they develop available for free and still benefit from its use. Sites like the Internet of Ownership, which contains a directory of cooperative platforms, use the PPL to “permit reuse exclusively for non-commercial and worker-owned enterprises”, thereby helping to grow the commons. The ultimate goal of the PPL is to enable mechanisms so commoners can support themselves and ensure their own social reproduction without resorting to capitalism.

4. Apply self-regulation and accept feedback

This principle is particularly integral to open source development, since the concepts of ‘user focussed’ and ‘agile development’, ‘branching’ and ‘forking’ are all designed to ensure that software projects are self-regulating: they listen to users’ needs, are driven by user feedback, and can be adapted to changing needs.

5. Use and value renewable resources and services

Open source technology is inherently more renewable in the way it enables the reuse and repackaging of code for new purposes. Ethically minded hosts and developers such as Green Net power their servers with renewable energy.

6. Produce no waste

As above, open source code is often re-used and repurposed, but progressive developers still have a lot to gain from better collaboration. There are often multiple teams working on identical problems and ideas, and whilst this has benefits in terms of developing strength and resilience through diversity, it also leads to waste, mainly in terms of time. At least the waste ‘product’ of web development is only digital, and so old technology and code doesn’t litter the streets or pollute the environment as much as physical products can, especially if archives are stored on renewably powered servers.

7. Design from patterns to details

Genuine online collaboration has been slow to evolve, with the best examples being Linux (the open source operating system), Firefox (the open source web browser) and Wikipedia (the open source encyclopaedia). It is only recently, with the rise of monolithic capitalist gardens such as Google, Facebook and Amazon, that the hive mind of the internet is recognising the need to step back and redesign its systems according to new patterns. The push for “Net neutrality” and Tim Berners-Lee’s Solid project are examples of this in action, as is the Holo project, a very exciting and truly peer to peer “community of passionate humans building a distributed cloud, owned and run by users like you and me.”

8. Integrate rather than segregate

The move from centralised to decentralised, to distributed and federated technology is a key element of open source and collaborative technology design. The entire Peer to Peer philosophy is based on the recognition that the connections and relationships between nodes (people or computers) in a network are what give it strength and value. Collaborative technologists still have a lot to gain from developing deeper and wider integrations, like we see in nature, and which permaculturists know so well.

9. Use small and slow solutions

Designing a computer system to be slow is not something you will normally (ever?) hear a programmer talk about, but they often talk about small, in many guises. Small packages (of code), small apps, “minified” (meaning compressed) code and even small computers, like the Raspberry Pi, are key features of collaborative technology which all aim for increased efficiency.

10. Use and value diversity

Diversity is intrinsic to open source and collaborative technology. The plurality and adaptability of open source solutions ensures a highly diverse ecosystem. Users are free to adapt open source code to their needs and the open nature of most open source projects values contributions from anyone, irrespective of race, gender, age or any other factor. It is true that the majority of contributors to open source projects are normally young, white and male but the reasons for that seem more to do with societal inequalities and stereotypes rather than any specific prejudices or practices.

11. Use edges and value the marginal

The explanation of this principle places most value on “the interface between things…”, and this is a central component of web design. Web services have now realised the necessity of providing intuitive user interfaces to allow users to navigate complex data and to investigate deeper informational relationships. More interestingly, the latest developments in linked open data enable users to interface with more specific, more granular and more timely data, providing increased value. The Internet of Things will facilitate a massive increase in the number and type of products which can interact over the internet. Whilst it is not the norm, drawing diverse information from the edges and valuing the marginal is something the open internet can really facilitate.

12. Creatively use and respond to change

Most open source, collaborative projects use some kind of agile development, which advocates adaptive planning, evolutionary development, early delivery, and continuous improvement, and encourages rapid and flexible response to change. Permaculture and open source see eye to eye on this principle which bodes very well for a growing, symbiotic relationship in our rapidly evolving world.

OUR NEW, HAPPY LIFE? THE IDEOLOGY OF DEVELOPMENT

By Charles Eisenstein

Source: Waking Times

In George Orwell’s 1984, there is a moment when the Party announces an “increase” in the chocolate ration – from thirty grams to twenty. No one except for the protagonist, Winston, seems to notice that the ration has gone down, not up.

‘Comrades!’ cried an eager youthful voice. ‘Attention, comrades! We have glorious news for you. We have won the battle for production! Returns now completed of the output of all classes of consumption goods show that the standard of living has risen by no less than 20 percent over the past year. All over Oceania this morning there were irrepressible spontaneous demonstrations when workers marched out of factories and offices and paraded through the streets with banners voicing their gratitude to Big Brother for the new, happy life which his wise leadership has bestowed upon us.’

The newscaster goes on to announce one statistic after another proving that everything is getting better. The phrase in vogue is “our new, happy life.” Of course, as with the chocolate ration, it is obvious that the statistics are phony.

Those words, “our new, happy life,” came to me as I read two recent articles, one by Nicholas Kristof in the New York Times and the other by Steven Pinker in the Wall Street Journal, both of which asserted, with ample statistics, that the overall state of humanity is better now than at any time in history. Fewer people die in wars, car crashes, airplane crashes, even from gun violence. Poverty rates are lower than ever recorded, life expectancy is higher, and more people than ever are literate, have access to electricity and running water, and live in democracies.

Like in 1984, these articles affirm and celebrate the basic direction of society. We are headed in the right direction. With smug assurance, they tell us that thanks to reason, science, and enlightened Western political thinking, we are making strides toward a better world.

Like in 1984, there is something deceptive in these arguments that so baldly serve the established order.

Unlike in 1984, the deception is not a product of phony statistics.

Before I describe the deception and what lies on the other side of it, I want to assure the reader that this essay will not try to prove that things are getting worse and worse. In fact, I share the fundamental optimism of Kristof and Pinker that humanity is walking a positive evolutionary path. For this evolution to proceed, however, it is necessary that we acknowledge and integrate the horror, the suffering, and the loss that the triumphalist narrative of civilizational progress skips over.

What hides behind the numbers

In other words, we need to come to grips with precisely the things that Steven Pinker’s statistics leave out. Generally speaking, metrics-based evaluations, while seemingly objective, bear the covert biases of those who decide what to measure, how to measure it, and what not to measure. They also devalue those things which we cannot measure or that are intrinsically unmeasurable. Let me offer a few examples.

Nicholas Kristof celebrates a decline in the number of people living on less than two dollars a day. What might that statistic hide? Well, every time an indigenous hunter-gatherer or traditional villager is forced off the land and goes to work on a plantation or sweatshop, his or her cash income increases from zero to several dollars a day. The numbers look good. GDP goes up. And the accompanying degradation is invisible.

For the last several decades, multitudes have fled the countryside for burgeoning cities in the global South. Most had lived largely outside the money economy. In a small village in India or Africa, most people procured food, built dwellings, made clothes, and created entertainment in a subsistence or gift economy, without much need for money. When development policies and the global economy push entire nations to generate foreign exchange to meet debt obligations, urbanization invariably results. In a slum in Lagos or Kolkata, two dollars a day is misery, where in the traditional village it might be affluence. Taking for granted the trend of development and urbanization, yes, it is a good thing when those slum dwellers rise from two dollars a day to, say, five. But the focus on that metric obscures deeper processes.

Kristof asserts that 2017 was the best year ever for human health. If we measure the prevalence of infectious diseases, he is certainly right. Life expectancy also continues to rise globally (though it is leveling off and in some countries, such as the United States, beginning to fall). Again though, these metrics obscure disturbing trends. A host of new diseases such as autoimmunity, allergies, Lyme, and autism, compounded with unprecedented levels of addiction, depression, and obesity, contribute to declining physical vitality throughout the developed world, and increasingly in developing countries too. Vast social resources – one-fifth of GDP in the US – go toward sick care; society as a whole is unwell.

Both authors also mention literacy. What might the statistics hide here? For one, the transition into literacy has meant, in many places, the destruction of oral traditions and even the extinction of entire non-written languages. Literacy is part of a broader social repatterning, a transition into modernity, that accompanies cultural and linguistic homogenization. Tens of millions of children go to school to learn reading, writing, and arithmetic; history, science, and Shakespeare, in places where, a generation before, they would have learned how to herd goats, grow barley, make bricks, weave cloth, conduct ceremonies, or bake bread. They would have learned the uses of a thousand plants and the songs of a hundred birds, the words of a thousand stories and the steps to a hundred dances. Acculturation to literate society is part of a much larger change. Reasonable people may differ on whether this change is good or bad, on whether we are better off relying on digital social networks than on place-based communities, better off recognizing more corporate logos than local plants and animals, better off manipulating symbols rather than handling soil. Only from a prejudiced mindset could we say, though, that this shift represents unequivocal progress.

My intention here is not to use written words to decry literacy, deliciously ironic though that would be. I am merely observing that our metrics for progress encode hidden biases and neglect what won’t fit comfortably into the worldview of those who devise them. Certainly, in a society that is already modernized, illiteracy is a terrible disadvantage, but outside that context, it is not clear that a literate society – or its extension, a digitized society – is a happy society.

The immeasurability of happiness

Biases or no, surely you can’t argue with the happiness metrics that are the lynchpin of Pinker’s argument that science, reason, and Western political ideals are working to create a better world. The more advanced the country, he says, the happier people are. Therefore the more the rest of the world develops along the path we blazed, the happier the world will be.

Unfortunately, happiness statistics encode as assumptions the very conclusions the developmentalist argument tries to prove. Generally speaking, happiness metrics comprise two approaches: objective measures of well-being, and subjective reports of happiness. Well-being metrics include such things as per-capita income, life expectancy, leisure time, educational level, access to health care, and many of the other accouterments of development. Yet each of these presupposes a particular way of life. In many cultures, for example, “leisure” was not a concept; leisure in contradistinction to work assumes that work itself is as it became in the Industrial Revolution: tedious, degrading, burdensome. A culture where work is not clearly separable from life is misjudged by this happiness metric; see Helena Norberg-Hodge’s marvelous film Ancient Futures for a depiction of such a culture, in which, as the film says, “work and leisure are one.”

Encoded in objective well-being metrics is a certain vision of development; specifically, the mode of development that dominates today. To say that developed countries are therefore happier is circular logic.

As for subjective reports of individual happiness, individual self-reporting necessarily references the surrounding culture. I rate my happiness in comparison to the normative level of happiness around me. A society of rampant anxiety and depression draws a very low baseline. A woman told me once, “I used to consider myself to be a reasonably happy person until I visited a village in Afghanistan near where I’d been deployed in the military. I wanted to see what it was like from a different perspective. This is a desperately poor village,” she said. “The huts didn’t even have floors, just dirt which frequently turned to mud. They barely even had enough food. But I have never seen happier people. They were so full of joy and generosity. These people, who had nothing, were happier than almost anyone I know.”

Whatever those Afghan villagers had that made them happy does not, I suspect, show up in Steven Pinker’s statistics purporting to prove that they should follow our path. The reader may have had similar experiences visiting Mexico, Brazil, Africa, or India, in whose backwaters one finds a level of joy rare amidst the suburban boxes of my country. This, despite centuries of imperialism, war, and colonialism. Imagine the happiness that would be possible in a just and peaceful world.

I’m sure my point here will be unpersuasive to anyone who has not had such an experience first-hand. You will think, perhaps, that maybe the locals were just putting on their best face for the visitor. Or maybe that I am seeing them through romanticizing “happy-natives” lenses. But I am not speaking here of superficial good cheer or the phony smile of a man making the best of things. People in older cultures, connected to community and place, held close in a lineage of ancestors, woven into a web of personal and cultural stories, radiate a kind of solidity and presence that I rarely find in any modern person. When I interact with one of them, I know that whatever the measurable gains of the Ascent of Humanity, we have lost something immeasurably precious. And I know that until we recognize it and turn toward its recovery, no further progress in lifespan or GDP or educational attainment will bring us closer to any place worth going.

What other elements of deep well-being elude our measurements? Authenticity of communication? The intimacy and vitality of our relationships? Familiarity with local plants and animals? Aesthetic nourishment from the built environment? Participation in meaningful collective endeavors? Sense of community and social solidarity? What we have lost is hard to measure, even if we were to try. For the quantitative mind, the mind of money and data, it hardly exists. Yet the loss casts a shadow on the heart, a dim longing that no assurance of new, happy life can assuage.

While the fullness of this loss – and, by implication, the potential in its recovery – is beyond measure, there are nonetheless statistics, left out of Pinker’s analysis, that point to it. I am referring to the high levels of suicide, opioid addiction, meth addiction, pornography, gambling, anxiety, and depression that plague modern society and every modernizing society. These are not just random flies that have landed in the ointment of progress; they are symptoms of a profound crisis. When community disintegrates, when ties to nature and place are severed, when structures of meaning collapse, when the connections that make us whole wither, we grow hungry for addictive substitutes to numb the longing and fill the void.

The loss I speak of is inseparable from the very institutions – science, technology, industry, capitalism, and the political ideal of the rational individual – that Steven Pinker says have delivered humanity from misery. We might be cautious, then, about attributing to these institutions certain incontestable improvements over medieval times or the early Industrial Revolution. Could there be another explanation? Might they have come despite science, capitalism, rational individualism, etc., and not because of them?

The empathy hypothesis

One of the improvements Steven Pinker emphasizes is a decline in violence. War casualties, homicide, and violent crime in general have fallen to a fraction of their levels a generation or two ago. The decline in violence is real, but should we attribute it, as Pinker does, to democracy, reason, rule of law, data-driven policing, and so forth? I don’t think so. Democracy is no insurance against war – in fact, the United States has perpetrated far more military actions than any other nation in the last half-century. And is the decline in violent crime simply because we are better able to punish and protect ourselves from each other, clamping down on our savage impulses with the technologies of deterrence?

I have another hypothesis. The decline in violence is not the result of perfecting the world of the separate, self-interested rational subject. To the contrary: it is the result of the breakdown of that story, and the rise of empathy in its stead.

In the mythology of the separate individual, the purpose of the state was to ensure a balance between individual freedom and the common good by putting limits on the pursuit of self-interest. In the emerging mythology of interconnection, ecology, and interbeing, we awaken to the understanding that the good of others, human and otherwise, is inseparable from our own well-being.

The defining question of empathy is, What is it like to be you? In contrast, the mindset of war is the othering, the dehumanization and demonization of people who become the enemy. That becomes more difficult the more accustomed we are to considering the experience of another human being. That is why war, torture, capital punishment, and violence have become less acceptable. It is not that they are “irrational.” To the contrary: establishment think tanks are quite adept at inventing highly rational justifications for all of these.

In a worldview in which competition among self-interested actors is axiomatic, what is “rational” is to outcompete them, dominate them, and exploit them by any means necessary. It was not advances in science or reason that abolished the 14-hour workday, chattel slavery, or debtors’ prisons.

The worldview of ecology, interdependence, and interbeing offers different axioms on which to exercise our reason. Understanding that another person has an experience of being, and is subject to circumstances that condition their behavior, makes us less able to dehumanize them as a first step in harming them. Understanding that what happens to the world in some way happens to ourselves, reason no longer promotes war. Understanding that the health of soil, water, and ecosystems is inseparable from our own health, reason no longer urges their pillage.

In a perverse way, science and technology cheerleaders like Steven Pinker are right: science has indeed ended the age of war. Not because we have grown so smart and so advanced beyond primitive impulses that we have transcended it. No, it is because science has brought us to such extremes of savagery that it has become impossible to maintain the myth of separation. The technological improvements in our capacity to murder and ruin make it increasingly clear that we cannot insulate ourselves from the harm we do to the other.

It was not primitive superstition that gave us the machine gun and the atomic bomb. Industry was not an evolutionary step beyond savagery; it applied savagery at an industrial scale. Rational administration of organizations did not elevate us beyond genocide; it enabled genocide to happen on an unprecedented scale and with unprecedented efficiency in the Holocaust. Science did not show us the irrationality of war; it brought us to the very extreme of irrationality, the Mutually Assured Destruction of the Cold War. In that insanity was the seed of a truly evolutive understanding – that what we do to the other happens to ourselves as well. That is why, aside from a retrograde cadre of American politicians, no one seriously considers using nuclear weapons today.

The horror we feel at the prospect of, say, nuking Pyongyang or Tehran is not the dread of radioactive blowback or retributive terror. It arises, I claim, from our empathic identification with the victims. As the consciousness of interbeing grows, we can no longer easily wave off their suffering as the just deserts of their wickedness or the regrettable but necessary price of freedom. It is as if, on some level, it were happening to ourselves.

To be sure, there is no shortage of human rights abuses, death squads, torture, domestic violence, military violence, and violent crime still in the world today. To observe, in the midst of it, a rising tide of compassion is not a whitewash of the ugliness, but a call for fuller participation in a movement. On the personal level, it is a movement of kindness, compassion, empathy, taking ownership of one’s judgments and projections, and – not contradictorily – of bravely speaking uncomfortable truths, exposing what was hidden, bringing violence and injustice to light, telling the stories that need to be heard. Together, these two threads of compassion and truth might weave a politics in which we call out the iniquity without judging the perpetrator, but instead seek to understand and change the circumstances of the perpetration.

From empathy, we seek not to punish criminals but to understand the circumstances that breed crime. We seek not to fight terrorism but to understand and change the conditions that generate it. We seek not to wall out immigrants, but to understand why people are so desperate in the first place to leave their homes and lands, and how we might be contributing to their desperation.

Empathy suggests the opposite of the conclusion offered by Steven Pinker. It says that rather than more efficient legal penalties and “data-driven policing,” we might study the approach of the new Philadelphia District Attorney, Larry Krasner, who has directed prosecutors to stop seeking maximum sentences, stop prosecuting cannabis possession, steer offenders toward diversionary rather than penal programs, and cut inordinately long probation periods, among other reforms. Undergirding these measures is compassion: What is it like to be a criminal? An addict? A prostitute? Maybe we still want to stop you from continuing to do that, but we no longer desire to punish you. We want to offer you a realistic opportunity to live another way.

Similarly, the future of agriculture is not in more aggressive breeding, more powerful pesticides, or the further conversion of living soil into an industrial input. It is in knowing soil as a being and serving its living integrity, knowing that its health is inseparable from our own. In this way, the principle of empathy (What is it like to be you?) extends beyond criminal justice, foreign policy, and personal relationships. Agriculture, medicine, education, technology – no field is outside its bounds. Translating that principle into civilization’s institutions (rather than extending the reach of reason, control, and domination) is what will bring real progress to humanity.

This vision of progress is not contrary to technological development; neither will science, reason, or technology automatically bring it about. All human capacities can be put into service to a future embodying the understanding that the world’s wellbeing, human and otherwise, feeds our own.

Anomie and Postmodernism

By Richard Sahn

Source: Bracing Views

These are not exactly happy times. Americans have fewer safeguards for their jobs, financial well-being, and, ultimately, their very lives. Uncertainty and insecurity have become more prevalent than at any time I can remember. As a consequence, insomnia, depression, and angst seem to be characteristic of an increasing number of people across the country, almost as American as apple pie. Just as being a divorcee in California is nothing to write home about—you’re even considered odd if you have never been divorced—so is the sense that something “bad” can happen at any time, without warning. Sociologists (I am a sociologist) call this condition “anomie,” a concept formulated by one of the founders of sociology, Emile Durkheim. The Trump presidency, it may be argued, exacerbates anomie, since we seem to be moving closer to economic nightmares and possibly nuclear holocaust than in recent decades.

What exactly is anomie? Anomie literally means “without norms.” It’s a psychological condition, according to Durkheim, in which an individual member of a society, group, community, or tribe fails to see any purpose or meaning in his or her own life or in reality in general. Anomie is the psychological equivalent of nihilism. Such a state of mind is often characteristic of adults who are unemployed, unaffiliated with any social organization, unmarried, and lacking family ties, and for whom group and societal norms, values, and beliefs have no stabilizing effect.

The last situation can readily be the outcome of exposure to sociology courses in college, provided the student does not regularly fall asleep in class. I’ve always tried to caution my own students that sociology could be, ironically, dangerous to their mental health because of its emphasis on critical thinking about social systems and structures. Overcoming socialization or becoming de-socialized from one’s culture—when one begins to question the value of patriotism, for instance—can be conducive to doubt and cynicism, which may give rise to anomie. Of course, I also emphasize the benefits of the sociological enterprise to the student and to society in general. For example, a sociology major is perhaps less likely to participate voluntarily in wars that only favor special interests and that unnecessarily kill civilians.

Clinical depression is virtually an epidemic in the U.S. these days. Undoubtedly, anomie is a major factor, especially in a culture where meaningful jobs or careers are difficult to obtain. To a great extent, social status constitutes one’s definition of self. In Western societies, the answer to the perennial philosophical question “Who are you?” is one’s name and then one’s job or role in the social structure. Both motherhood and secure jobs or careers are usually antidotes to anomie. Childless women and unemployed or under-employed males are most susceptible to anomie.

What does postmodernism offer to combat the anomie of modern society and now the Trump era itself? An over-simplification of postmodernism, or the postmodern perspective, is that there is no fixed or certain reality external to the individual. All paradigms and scientific explanations are social constructs, one model being no more valid than another. A good example of the application of the postmodern perspective is the popular lecture-circuit guru Byron Katie. Ms. Katie has attracted thousands of followers by proclaiming that our problems in life stem from our thoughts alone. The clutter of consciousness (thoughts, feelings) can simply be recognized as such during meditation and then dismissed as not really being real. Problems gone.

An analogy here is Dorothy Day, the founder of the Catholic Worker movement, proclaiming in the 1960s that “our problems stem from our acceptance of this filthy, rotten system.” Postmodernists would claim that our problems stem from our acceptance of the Enlightenment paradigm of reality—the materialist world-view, rationality itself. Reality, postmodernists claim, is simply what we think it is. There is no “IS” there. Postmodern philosophers claim that all experiences of a so-called outside world are only a matter of individual consciousness. Nothing is certain except one’s own immediate experience. The German existential philosopher Martin Heidegger contributed significantly to the postmodern perspective with his concept of “Dasein,” or “there-being.” Dasein bypasses physiology and anatomy by implying that neurological processes are not involved in any act of perception, and that what we call “scientific knowledge” is a form of propaganda, that is, what we are culturally conditioned to accept as real. There is no universal right or wrong, good or bad.

The great advantage of adopting the postmodern perspective as a way of overcoming anomie is the legitimacy, or validation, it gives to non-ordinary experiences. If the brain and nervous system are social constructs, then so-called altered states of consciousness such as near-death experiences (NDEs), out-of-body experiences, reincarnation, time travel, spirits, and miraculous healing become plausible. Enlightenment science and rationality are only social constructs, byproducts of manufactured world-views. The “new age” idea that we create our own reality rather than being immersed in it has therapeutic value for those suffering from anomie.

Sociologist Peter Berger employs the concept of “plausibility structures” to legitimize (make respectable) views of the “real world” that conflict with the presuppositions of Enlightenment science. Science then becomes “science”: a social construct that may or may not have validity even though it is widely accepted. A good example is the postmodern practice of deconstructing, or calling into question, empirical science and rational thought itself, disregarding the brain as the source of all perceptions, feelings, desires, and ideas. Postmodernists maintain that only individual consciousness is real; the brain is a social construct that doesn’t hold water—no pun intended—as the source of what it means to be human.

CAVEAT

The postmodern perspective may work for a while in suppressing anomie and dealing with the horrors of a hostile or toxic social and political environment. Sooner or later, however, existential reality intervenes. The question is, can postmodernism alleviate physical pain, the death of a loved one, personal injury and illness, the loss of one’s home and livelihood? At this point in the evolution of my philosophical reflections, I would argue that postmodernism can reduce or eliminate the depression that inevitably comes from too much anomie – but only temporarily. The postmodern perspective is not up to the task of assuaging the truly catastrophic events in one’s life. As much as I would like not to believe this, I’m afraid only political and social action can help us out when the going really gets rough, although I don’t recommend sacrificing the teaching of critical thinking about society’s values and institutions, even though such thinking is a possible cause of anomie.

 

Richard Sahn, a professor of sociology in Pennsylvania, is a free-thinker.

Billionaires Want Poor Children’s Brains to Work Better

By Gerald Coles

Source: CounterPunch

Why are many poor children not learning and succeeding in school? For billionaire Bill Gates, who funded the start-up of the failed Common Core Curriculum Standards and has been bankrolling the failing charter-schools movement, and for Facebook’s Mark Zuckerberg, it’s time to look for another answer, this one at the neurological level. Poor children’s malfunctioning brains – particularly their brains’ “executive functioning,” that is, working memory, cognitive flexibility, and inhibitory control – must be the reason why their academic performance isn’t better.

Proposing to fund research on the issue, the billionaires reason that not only can executive malfunctioning cause substantial classroom learning problems and school failure, it also can adversely affect socio-economic status, physical health, drug problems, and criminal convictions in adulthood. Consequently, if teachers of poor students know how to improve executive function, their students will do well academically and reap future “real-world benefits.” For Gates, who is always looking for “the next big thing,” this can be it in education.

Most people looking at this reasoning would likely think, “If executive functioning is poorer in poor children, why not eliminate the apparent cause of the deficiency, i.e., poverty?” Not so for the billionaires. For them, the “adverse life situations” of poor students are can’t-be-changed givens. Nor can instructional conditions that cost more money provide an answer. For example, considerable research on small-class-size teaching has demonstrated its substantially positive academic benefits, especially for poor children, from grammar school through high school and college. Gates claims to know about this instructional reform, but, money-minded as he is, he insists these findings amount to nothing more than a “belief” whose worst impact has been to drive “school budget increases for more than 50 years.”

Cash – rather, the lack of it – that’s the issue: “You can’t fund reforms without money and there is no more money,” he insists. Of course, nowhere in Gates’ rebuke of excessive school spending does he mention corporate dodging of state income taxes, which robs schools of billions of dollars. Microsoft, for example, in which Gates continues to play a prominent role as “founder and technology advisor” on the company’s board of directors, would owe almost $29.6 billion in taxes that could fund schools were the billions it has stashed offshore repatriated.

In a detailed example of Microsoft’s calculated tax scheming and dodging – one that would provide material for a good classroom geography lesson – Seattle Times reporter Matt Day outlined one of the transcontinental routes taken by a dollar spent on a Microsoft product in Seattle. Immediately after the purchase, the dollar takes a short trip to Microsoft’s company headquarters in nearby Redmond, Washington, after which it moves to a Microsoft sales subsidiary in Nevada. Following a brief rest, the dollar breathlessly zigzags from one offshore tax haven to another, finally arriving in sunny Bermuda, where it joins $108 billion of Microsoft’s other dollars. Zuckerberg’s Facebook has similarly kept its earnings away from U.S. school budgets.

By blaming poor children’s school learning failure on their brains, the billionaires are continuing a long pseudoscientific charade extending back to 19th-century “craniology,” which used head shape and size to explain the intellectual inferiority of “lesser” groups, such as southern Europeans and blacks. When craniology finally was debunked in the early 20th century, psychologists devised the IQ test, which sustained the mental classification business. Purportedly a more scientific instrument, it was heavily used not only to continue craniology’s identification of intellectually inferior ethnic and racial groups, but also to “explain” the educational underachievement of black and poor-white students.

After decades of use, IQ tests were substantially debunked from the 1960s onward, but new, more neurologically complex, so-called brain-based explanations emerged for differing educational outcomes. These explanations conceived of the overall brain as normal but contended that brain glitches impeded school learning and success. Thus entered “learning disabilities,” “dyslexia,” and “attention deficit hyperactivity disorder (ADHD)” as major neuropsychological concepts to (1) explain school failure, particularly for poor children, although the labels also extended to many middle-class students; and (2) serve as “scientific” justification for scripted, narrow pedagogy in which teachers seemingly reigned in the classroom but, in fact, were themselves controlled by the prefabricated curricula.

In the forefront of this pedagogy was the No Child Left Behind legislation (NCLB), with its lock-step instruction, created under George W. Bush and continued by Barack Obama. Under the banner of “scientifically based” instruction, federal funds supported research on “brain-based” teaching that would be in tune with the mental make-up of poor children, thereby serving as a substitute for policy that would address poverty’s influence on educational outcomes. My review of the initial evidence supposedly justifying the launch of this diversionary pedagogy revealed it had no empirical support. However, for the students this instruction targeted, a decade had to pass before national test results confirmed its failure.

The history of “scientific brain-based” pedagogy for poor children has invariably been a dodge from addressing obvious social-class influences. In its newest iteration – improve poor children’s executive functioning – billionaires Gates and Zuckerberg will gladly put some cash into promoting a new neurological fix for poor children, thereby helping (and hoping) to divert the thinking of education policy-makers, teachers, and parents. Never mind that over three years ago a review of research on executive functioning and academic achievement failed to find “compelling evidence that a causal association between the two exists.” What’s critical for these billionaires and the class they represent is that the nation continue to concoct policy that does not deplete the wealth of the rich and helps explain away continued poverty. Just because research on improving executive functioning in poor children has not been found to be a solution for their educational underachievement doesn’t mean it can’t be!

Now that’s slick executive functioning!

 

Gerald Coles is an educational psychologist who has written extensively on the psychology, policy and politics of education. He is the author of Miseducating for the Global Economy: How Corporate Power Damages Education and Subverts Students’ Futures (Monthly Review Press).

Neoliberal Defenestration and the Overton Window

By Stephen Martin

Source: CounterPunch

‘It is difficult to get Artificial Intelligence to understand something, when the Research and Development funding it depends upon its not understanding it’

Paraphrase of Upton Sinclair.

defenestration (diːˌfɛnɪˈstreɪʃən)

n

the act of throwing a person or thing out of a window

[C17: from New Latin dēfenestrātiō, from Latin de- + fenestra window]

TheFreeDictionary.com

‘If there is such a phenomenon as absolute evil, it consists of treating another human being as a thing.’

John Brunner, ‘The Shockwave Rider’

This small article is a polemic against neoliberal hegemony; in particular, against the emerging issue of ‘surplus population’ as related to technological displacement in the context of a free market – an issue purposive to such hegemony which, as an ‘elephant growing in the panopticon’, is not to be mentioned?

The central premise is that Artificial Intelligence (AI) + Robotics comprise a nefarious as formulaic temptation to the elite of the ‘Technetronic era’, as Zbigniew Brzezinski put it: this consistent with a determinism as stems ontologically from ‘Empiricism’ in the form of a ‘One Dimensionality’, as Marcuse phrased it over five decades ago; and which, thru being but mere simulacra, AI and Robotics represent an ontological imperative potentially expropriated under pathology to denial of Kant’s concept of ‘categorical imperative’? (That Kant did not subscribe to determinism is acknowledged.) The neoliberal concepts of ‘Corporatism’ and ‘free market’ are powerful examples of this ‘one dimensionality’, which is clearly pathological – a topic notably explored by Joel Bakan concerning the pursuit of profit within a Corporatist framework.

– ‘One dimensionality in, one dimensionality out’ – so it goes ontologically, as to some paraphrase of GIGO, as trending, alas, way of ‘technological determinism’ towards an ‘Epitaph for Biodiversity’ as would be – way of ‘Garbage’ or ‘Junk’ unapperceived as much as ‘retrospection’ non-occurrent indeed – and where ‘Farewell to the Working Class’ as André Gorz conceived to assume an entirely new meaning: this to some denouement of ‘Dystopian Nightmare’ as opposed to ‘Utopian Dream’, alas; such the ‘Age of Leisure’ as ‘beckoning’ to be not for the majority or ‘Demos’, but rather for the ‘technetronic elite’ and their ‘AI’ and Robotics – such ‘leisure’ being as to a ‘freedom’ pathological and facilitated by the absence of conscience as much as morality; such the ‘farewell’; such the defenestration of ‘surplus’; such the ‘Age’ we ‘live’ within as to ‘expropriation’ and ‘arrogation’ to amount to ‘Death by Panopticon’ such the ‘apotheosis’?

It is being so cheerful which keeps these small quarters going.

But digression.

– It is a relatively small step from ‘the death of thought’ to ‘the death of Life’ under Neoliberal Orthodoxy as proving to be the most toxic ideology ever known – such the hegemony as a deliberative shift of the ‘Overton Window’ currently occurring as to trend deterministic; such the mere necrotrophy as a ‘defenestration’ – and the ‘one percent’ but a deadly collective of parasitic orifice? For what is ‘Empiricism’ when implemented thru AI and Robotic Technology in a Corporatist economy but a ‘selective investment’ as to Research and Development by elite ‘private interests’, which to a determinism so evidently entailing a whole raft of ‘consequence’; such the means, such the production, such the ‘phenomenology’ as ‘owned’ indeed? Under pathology, selectivity is impaired to point of ‘militarization’?

But foremost amongst said ‘raft’ of consequence – the concept of ‘classification’ as incorporates methodological reduction of the particular to a composite of generalities so typical of ‘Science’ as expropriated; the fruition thereof replicated not least thru ‘Consumerism’ – and ‘Lifestyle’ – as much as ‘Life’ reduced as much as abrogated to but correlation way of ‘possession’ of ‘things’: this as said replication expressed as much thru Linnaeus as Marx concerning ‘class’ – and as results in concepts ‘Incorporated’ such as the ‘Overton Window’ – as will be explored by way of ‘extrapolation’ below? The debasing of identity as a correlate of possessions as a necessary ‘abrogation’ by way of engineered ‘bio hack’ is only furthered, such the loss of dimensionality as a potential, by such as social media?

It’s to be noted that for Empiricism the concepts of ‘good’ and ‘evil’ entail an extra dimensionality as ‘metaphysical’ – and that ‘Politics’ so deconstructed, despite abuse under orthodoxy as to ‘mitigation’, remains as ‘Moral Economics’ – this despite the mitigative contention of neoliberal orthodoxy that there is no morality in the ‘synonymy’; to a pragmatic as ‘Utilitarian’ point of a ‘Killing the Host’ prevailing at paradigmatic as much as Geopolitical level as but explicative of a ‘necrotrophy’; as much as the ‘defenestration’ as euphemism herein proposed which this small article would explicate?

Kudos to Michael Hudson for exposing, and continuing to expose, the ‘death of thought’ which Neoliberalism, as an orthodoxy but a mere ‘racket’ of ‘transfer of resources’, represents.

– Are we ‘on a roll’ here as much as ‘off the leash’ – such the rebellion ‘psycho-political’; such the ‘CounterPunch’ by way of digital pamphleteering as ‘restricted code’ rejected, evidenced by way of ‘alternative’?

‘Politics’ should mean a diversity biological as much as phenomenological; it should mean more than Empirical ‘utility’, by way of the extra dimensionality which Metaphysics represents; this, as much as ‘Democracy’ demands diversity; ‘thought permitted’ as evidencing same as much as questions allowed to be asked; while Corporatism as antithesis demands ‘line’, a homogeneity, a uniformity, an abrogation as to a contingency?

Whether ‘Empiricism’ as to Philosophy is ‘Essentialist’ or ‘Instrumentalist’ is of as much political relevance as the Medieval question ‘how many angels can dance upon the head of a pin?’ – the tragic fact remaining in an ‘Oceania’ become to Corporatism is that the economic impact of (AI + Robotics) constitutes a ‘technological displacement’ which under a ‘one dimensionality’ as ‘Orthodox’ is as a ‘political impact’ at a quintessential or axiomatic level, such the ‘formulae’ as (= Ecocide) ‘completes’ under a determinism of hegemony as demands ‘whoredom’? AI and Robotics destroy the symbiotic relationship between production and consumption thru reducing the requirement for human labor as waged in the productive process; AI and Robotics thereby impact upon the distribution of resources as wage-based. The fairy tale of compensatory employment opportunities is at best wishful thinking; at worst it is purposively contrived propaganda?

‘Biodiversity’, alas, becoming more limited to a ‘Thanatos’ of ‘Military Industrial Corporatist Complex’ as explicates the ‘Age of the Anthropocene’ as of the ‘1%’ as funders of ‘hegemony’ such the transfer of resources?

‘As ‘we’ view the world so it becomes’, indeed – this as much as how ‘Others’, through hegemony as ‘Utilitarian’, would have it viewed – and that such a large part of ‘Currency’ as ‘it’ to the permit of ‘Hegemony’ which would control and issue views – such the window as ‘Overtonian’ as can be shifted?

How long before the ‘collateral damage’ concerning premature fatality and reduced life expectancy – as evidenced in homelessness, withdrawal of social welfare, militarization of policing, drug abuse (including prescription of synthetic opioids), and incarceration, all obscenities explicating the ‘cheapening of life’ – becomes ‘formalised’ under neoliberal orthodoxy as euthanasia? To such an Overton Window, would it at first be ‘voluntary’?

It was ‘Utilitarianism’ as gave the World the concept of ‘Panopticon’ back in the Eighteenth Century.

Apropos to such ‘Thanatos’ identified:

Wherefore art thou ‘Eros’: as represented by checkout operator, bank teller, driver, warehouseman, barista, cook, secretary, journalist, lawyer, receptionist, ‘human voice at the end of the line’ – such the insidious inroads being made consequent the advance of Empirically based Technology?

The ‘Litany’ of such ‘displacement/defenestration’ could go on, as it undoubtedly shall under prevailing Hegemony ‘Western’ as ‘Oceanic’ as much orthodox as ‘deadly’, such the invasive penetration of ‘one dimensionality’ as ‘technological displacement’ evidenced under neoliberal orthodoxy as entailing a transfer of resources to mere ‘Stateless Bastards’, such the point of a determinism as would prevail woeful?

Ponerological questions such as would not be asked commensurate:

Is the ultimate ‘stateless bastard’ satan?

Can ‘synonymy seen’ be ‘revolutionary’?

‘Technological displacement’ under an ontology of ‘One Dimensionality’ aka ‘Corporatism’ is but a euphemism for ‘surplus population’; this as much as ‘Corporatism’ promotes the illusion of ‘Democracy’ by way of ‘mental cheating’; such the synonymy as ‘simulacra’ an ‘illusion’ denied – but ‘real’ under ‘doublethink’?

‘Trickle-down economics’ is as real as ‘technological displacement’ to be compensated for in the ‘opening up of a whole new vista of opportunity’; such the death of thought as neoliberal orthodoxy represents some parallel of tales told by such as Horatio Alger, to point of ‘illusion’ encouraged?

These small quarters would ‘rip a new one’ in Neoliberal Orthodoxy by way of polemic.

It remains a ‘big’ question, under contemplation of the genius of Orwell and the highly perspicacious ratio of ‘1/15/84’ (hence the ‘one percent’), as to why, within an undoubtedly comprehensive as prescient Dystopian vision where ‘oligarchic collective’ is a synonym of ‘technetronic elite’, he ‘permitted the currency’ of the ‘Prole’ as ‘84%’ to prevail as a presence rather than but a memory disappeared by way of ‘memory hole’?

Under neoliberal orthodoxy the political utility of the ‘Proles’, and in particular the ‘Lumpenproletariat’, alas, is as to but fear as a ‘stick’; a basis of control and manipulation in the same sense as Upton Sinclair explicated ‘carrot’ contingent by way of synonym seen: to wit, accept control and manipulation as ‘rewarded’ or be ‘expelled’; be but as a ‘Prole’ subsisting and awaiting death, such the economic incarceration as ‘CAFO’ epitomises the cheapening of life under a hegemony as has corollary of alienation, marginalization and impoverishment wielded under Dystopian imperative; this to a ‘transfer of resources’ from ‘Eros’ to ‘Thanatos’ reinforced thru contingency of profit, such the ‘ponerology’ of ‘Biodiversity’ reduced by way of paradigm Geopolitical?

To love Life is to loathe the deadly; such the philanthropy as would only be evidenced, such the irony, by the ‘ragged trousered’ as so reduced, such the divide et impera evidenced?

– And so to ‘defenestration’ as ongoing in the 21st century by way of ‘surplus population’ generated deterministically, as to be dealt with ‘Utilitarian’, as much as the elimination of ‘Biodiversity’ such the ‘technetronic era’, as much as of Dystopia for Humanity as opposed to a Utopia for (AI + Robotics)?

In the annals of International Finance, in which ‘usury’ figures large as polymorphous(?), the power of (AI + Robotics) – ‘growing’ as ‘metastasising’, as evidenced by the concept of ‘Dark Pools’, and in political manipulation to the point of control and issue of currency by algorithms, such as those infamously deployed by the now-defunct Cambridge Analytica using data ‘supplied’ by Facebook – is on the rise; such the ‘technetronic era’ furthered?

When neoliberal orthodoxy states ‘Say hello to my little friend!’ by way of the militarization of ‘AI + Robotics’, then a ‘defenestration’ in the form of ‘take all to hell’ will occur, such the shift of the ‘Overton Window’ in progress?

As to the ‘Overton Window’ as in the title of this small article: a fellow ‘CounterPuncher’ ‘sees thru it’ most apposite:

Overton described the evolution to broad public acceptance as a process that develops by degrees: “Unthinkable; Radical; Acceptable; Sensible; Popular; Policy.” The right used this model and stuck with it for 30 years to achieve its current dominance. Ideas like slashing unemployment insurance and welfare, privatizing crown corporations, gutting taxes on the wealthy, making huge cuts to social programs and signing “trade” deals that give corporations more power, were all “unthinkable” or “radical” in the beginning. But after 30 years of relentless promotion and the courting of politicians, all of these ideas are now public policy.

Murray Dobbin

– You see, it is really quite ‘simple’ when boiled down, such the reductio ad absurdum as ‘one dimensionality’ the result of Empiricism embraced to an exclusion of alternative, as ‘Evolution’ circumscribes as much as theoretically delineates?

Talking Heads got it right regarding the consequences of neoliberal orthodoxy as concerns the masses and the ‘death of thought’, whereby a ‘Road to Nowhere’ becomes more ‘incorporative’?

Thru Empiricism as an ‘Ontology’, Man is in process of nurturing a necrotrophy, such the orifice as existing to be stuffed to a ‘friction of the finitude’. Each ‘Billionaire’ as become so Empirically quantifiable under such ‘neoliberal jungle’, such the paradigm of abrogation as to an obscene ‘transfer of resources’ whereby egocentricity a ‘Thanatos’ as opposed to ‘Eros’, and whereby in such land of the blind as a ‘Kingdom’, the ‘one-eyed’ as ‘one-dimensional’ as much as Empiricism lack any sense of empathy let alone mercy – and which AI and Robotics to epitomise?

Under such ‘paradigm’ of necrotrophy as debasement as much as abrogation it is as blessed to be a Prole? As Martin Luther so humorously said:

‘So our Lord God commonly gives riches to those gross asses to whom He vouchsafes nothing else’

The rationale for composing this small article is not to be pessimistic; rather, such the pretension, it is to be a ‘gadfly’ against the manifestly ongoing cheapening of life from which such a small minority profit, and which yet remains mutable, as future submissions shall propose.

No Need To Wait – Dystopia Is Almost Upon Us

Source: TruePublica

Microsoft’s CEO has warned the technology industry against creating a dystopian future, the likes of which have been predicted by authors including George Orwell and Aldous Huxley. Satya Nadella kicked off the company’s 2017 Build conference with a keynote that was as unexpected as it was powerful. He told the developers in attendance that they have a huge responsibility, and that the choices they make could have enormous implications.

They won’t listen, of course. The collection of big data, along with its management, selling, and distribution and the systems architecture to control it, is now worth exactly double global military defence expenditure. In fact, this year the big data industry overtook the world’s most valuable traded commodity – oil.

The truth is that the tech giants have already captured us all. We are already living in the beginnings of a truly dystopian world.

Leaving aside the endemic surveillance society our government has chosen on our behalf with no debate, political or otherwise, we already have proof of the now and where it is leading. With fingerprint scanning, facial recognition, and various virtual wallets to pay for deliveries, some would say your identity is as good as stolen. If it isn’t, it soon will be. That’s because the hacking industry, already worth a mind-blowing $1 trillion annually, is expected to reach $2.1 trillion in just 14 months’ time.

The reality of not being able to take public transportation, hire a car, or buy a book or a coffee without full personal identification is almost upon us. Britain even had an intention to be completely cashless by 2025 – postponed only by the impact of Brexit.

Alexa, the Amazon home assistant, listens to everything said in the house. It is known to record conversations. Recently, police in Arkansas, USA, demanded that Amazon turn over information collected from a murder suspect’s Echo – the speaker that controls Alexa – because they already knew what information could be extracted from it.

32M is the first company in the US to provide a human chip, allowing employees “to make purchases in their break-room micro market, open doors, login to computers, use the copy machine.” 32M also confirmed what the chip could really do – telling employees to “use it as your passport, public transit and all purchasing opportunities.”

Various apps now locate people you may know, and your own location can be shared with others without your knowledge; we’ve known for years that governments and private corporations have access to this data, whether you like it or not.

Other countries are providing even scarier technologies. Hypebeast Magazine reports that Aadhaar is a 12-digit identity number issued to all Indian residents based on their biometric and demographic data. “This data must be linked to their bank account or else they’ll face the risk of losing access to their account. Folks have until the end of the year to do this, with phone numbers soon to be connected through the 12 digits by February. Failure to do so will deactivate the service.” The technology has the ability to refuse access to state-supplied services such as healthcare.
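A technical aside, for the curious: Aadhaar numbers are widely documented to reserve their final digit as a Verhoeff check digit, a checksum that catches mistyped IDs before they ever reach a server. Below is a minimal Python sketch of that format check, assuming the standard Verhoeff tables; the function names are my own illustration, not any official API, and passing the checksum of course says nothing about whether a number was genuinely issued.

```python
# Standard Verhoeff tables: D multiplies in the dihedral group D5,
# P permutes digits by position, INV gives inverses in D5.
D = [
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    [1, 2, 3, 4, 0, 6, 7, 8, 9, 5],
    [2, 3, 4, 0, 1, 7, 8, 9, 5, 6],
    [3, 4, 0, 1, 2, 8, 9, 5, 6, 7],
    [4, 0, 1, 2, 3, 9, 5, 6, 7, 8],
    [5, 9, 8, 7, 6, 0, 4, 3, 2, 1],
    [6, 5, 9, 8, 7, 1, 0, 4, 3, 2],
    [7, 6, 5, 9, 8, 2, 1, 0, 4, 3],
    [8, 7, 6, 5, 9, 3, 2, 1, 0, 4],
    [9, 8, 7, 6, 5, 4, 3, 2, 1, 0],
]
P = [
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    [1, 5, 7, 6, 2, 8, 3, 0, 9, 4],
    [5, 8, 0, 3, 7, 9, 6, 1, 4, 2],
    [8, 9, 1, 6, 0, 4, 3, 5, 2, 7],
    [9, 4, 5, 3, 1, 2, 6, 8, 7, 0],
    [4, 2, 8, 6, 5, 7, 3, 9, 0, 1],
    [2, 7, 9, 3, 8, 0, 6, 4, 1, 5],
    [7, 0, 4, 6, 9, 1, 3, 2, 5, 8],
]
INV = [0, 4, 3, 2, 1, 5, 6, 7, 8, 9]

def verhoeff_valid(number: str) -> bool:
    """True if the digit string passes the Verhoeff checksum."""
    c = 0
    for i, ch in enumerate(reversed(number)):  # process right to left
        c = D[c][P[i % 8][int(ch)]]
    return c == 0

def verhoeff_check_digit(base: str) -> str:
    """Check digit to append to `base` so the result validates."""
    c = 0
    for i, ch in enumerate(reversed(base)):  # check digit will sit at i = 0
        c = D[c][P[(i + 1) % 8][int(ch)]]
    return str(INV[c])

def looks_like_aadhaar(number: str) -> bool:
    """Format check only: 12 digits ending in a valid check digit."""
    return len(number) == 12 and number.isdigit() and verhoeff_valid(number)

base = "12345678901"                      # hypothetical 11-digit stem
full = base + verhoeff_check_digit(base)  # append the computed check digit
assert looks_like_aadhaar(full)
```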

Our article “Insurance Industry Leads The Way in Social Credit Systems” also highlights what the fusion of technology and data is likely to end up doing to us. An astonishing 96 per cent of insurers think that ecosystems or applications made by autonomous organisations are having a major impact on the insurance industry. Social credit mechanisms are being developed, and some have already been implemented, that will shape our future behaviour – affecting us all, both individually and negatively.

The Chinese government plans to launch its Social Credit System in 2020. Already being piloted on 12 million of its citizens, the aim is to judge the trustworthiness – or otherwise – of its 1.3 billion residents. Something as innocuous as a person’s shopping habits becomes a measure of character. But the system not only investigates behaviour – it shapes it. It “nudges” citizens away from purchases and behaviours the government does not like. Friends count as well: an individual’s credit score falls depending on the trustworthiness of the people they associate with. It’s not possible to imagine how far this will go in the end.
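To make the mechanism concrete, here is a purely illustrative Python sketch of a score shaped both by one’s own behaviour (a nudge away from disapproved purchases) and by the scores of one’s friends. Every name, weight, and rule in it is invented for illustration; nothing here describes the actual system’s internals.

```python
from dataclasses import dataclass, field

@dataclass
class Citizen:
    name: str
    score: float = 600.0                 # invented baseline score
    friends: list = field(default_factory=list)

def record_purchase(person: Citizen, category: str) -> None:
    # Invented rule: "disapproved" categories nudge the score down.
    if category in {"video_games", "alcohol"}:
        person.score -= 5
    else:
        person.score += 1

def apply_friend_penalty(person: Citizen, weight: float = 0.05) -> None:
    # Invented rule: drift toward the average of lower-scoring friends.
    if not person.friends:
        return
    avg = sum(f.score for f in person.friends) / len(person.friends)
    if avg < person.score:
        person.score -= weight * (person.score - avg)

a, b = Citizen("A"), Citizen("B", score=450.0)
a.friends.append(b)
record_purchase(a, "video_games")  # behaviour shapes the score...
apply_friend_penalty(a)            # ...and so does the company one keeps
print(round(a.score, 1))           # 587.8: penalised on both counts
```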

However, to get us all there, to that situation, we need to be distracted from what is going on in the background. Some are already concerned.

 

Distraction – detaching us from truth and reality

The Guardian wrote an interesting piece recently which highlighted some of the concerns of those with expert insider knowledge of the tech industry. For instance, Justin Rosenstein, the former Google and Facebook engineer who helped build the “like” button, is concerned. He believes there is a case for state regulation of smartphone technology because it is “psychologically manipulative advertising,” saying the moral impetus is comparable to taking action against fossil fuel or tobacco companies.

“If we only care about profit maximisation,” he says, “we will go rapidly into dystopia.” Rosenstein also makes the observation that after Brexit and the election of Trump, digital forces have completely upended the political system and, left unchecked, could render democracy as we know it obsolete.

Carole Cadwalladr’s recent exposé in the Observer/Guardian proved beyond doubt that democracy has already departed. Here we learn about a shadowy global operation involving big data and billionaires who influenced the result of the EU referendum. Britain’s future place in the world has been altered by technology.

Nir Eyal, 39, the author of Hooked: How to Build Habit-Forming Products, writes: “The technologies we use have turned into compulsions, if not full-fledged addictions.” Eyal continues: “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended.”

Eyal feels the threat and protects his own family by cutting off the internet completely at a set time every day. “The idea is to remember that we are not powerless,” he said. “We are in control.”

The truth is we are no longer in control and have not been since we learned that our government was lying to us with the Snowden revelations back in 2013.

Tristan Harris, a 33-year-old former Google employee turned vocal critic of the tech industry, agrees about the lack of control. “All of us are jacked into this system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.” Harris insists that billions of people have little choice over whether they use these now-ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.

Harris is a tech whistleblower. He is lifting the lid on the vast powers accumulated by technology companies and the ways they are abusing the influence they have at their fingertips – literally.

“A handful of people, working at a handful of technology companies, through their choices will steer what a billion people are thinking today.”

The techniques these companies use – social reciprocity, autoplay, and the like – are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year revealed that the company can identify when teenagers feel “worthless” or “insecure.” That, Harris adds, is “a perfect model of what buttons you can push in a particular person.”
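What “algorithmically tailored to each person” can mean in practice is easiest to see in miniature. The sketch below is a toy epsilon-greedy bandit that learns, per user, which notification style draws the most taps. It is illustrative only; there is no claim that any company named in this article uses this exact scheme.

```python
import random
from collections import defaultdict

STYLES = ["social_reciprocity", "streak_reminder", "autoplay_teaser"]

class PerUserBandit:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.stats = defaultdict(lambda: [0, 0])  # (user, style) -> [taps, sends]

    def choose(self, user: str) -> str:
        """Pick a notification style: mostly exploit, sometimes explore."""
        if random.random() < self.epsilon:
            return random.choice(STYLES)           # explore a random style
        def tap_rate(style: str) -> float:
            taps, sends = self.stats[(user, style)]
            return taps / sends if sends else 0.0
        return max(STYLES, key=tap_rate)           # exploit best-known style

    def record(self, user: str, style: str, tapped: bool) -> None:
        """Log one sent notification and whether the user tapped it."""
        entry = self.stats[(user, style)]
        entry[0] += int(tapped)
        entry[1] += 1

bandit = PerUserBandit()
style = bandit.choose("alice")          # pick a style for this user
bandit.record("alice", style, True)     # she tapped; reinforce that style
```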

Chris Marcellino, 33, a former Apple engineer now in the final stages of retraining to be a neurosurgeon, notes that these types of technologies can affect the same neurological pathways as gambling and drug use. “These are the same circuits that make people seek out food, comfort, heat, sex,” he says.

Roger McNamee, a venture capitalist who benefited from hugely profitable investments in Google and Facebook, has grown disenchanted with both of the tech giants. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.”

James Williams, an ex-Google strategist who built the metrics system for the company’s global search advertising business, says Google now has the “largest, most standardised and most centralised form of attentional control in human history.” “Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.

Williams also takes the view that if the attention economy erodes our ability to remember, to reason, to make decisions for ourselves – faculties that are essential to self-governance – what hope is there for democracy itself?

“The dynamics of the attention economy are structurally set up to undermine the human will,” he says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram, and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point at which democracy no longer functions?

“Will we be able to recognise it, if and when it happens?” Williams says. “And if we can’t, then how do we know it hasn’t happened already?”

 

The dystopian arrival

Within ten years, some speculate, many of us will be wearing eye lenses. Coupled with social media, we’ll be able to identify strangers and work out that a particular individual – in, say, a bar – has low friend compatibility, and that the data shows you will likely not have a fruitful conversation. This idea barely scratches the surface of the information overload en route right now.

It is not at all foolish to think that in that same bar a patron is shouting at the bartender, who refuses to serve him another drink because the glass he was holding measured his blood-alcohol level through the sweat in his fingers. He’ll have to wait at least 45 minutes before he’ll be permitted to order another scotch. You might even think that is a good idea – it isn’t.
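For what it is worth, the “friend compatibility” number in the scenario above needs no exotic technology: a crude version is just a similarity measure over two interest profiles. The Python sketch below uses cosine similarity; the interest categories and the 0.4 threshold are invented for illustration.

```python
import math

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two sparse interest vectors."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

you      = {"jazz": 5, "hiking": 3, "crypto": 0}
stranger = {"jazz": 1, "hiking": 0, "crypto": 9}

score = cosine(you, stranger)   # roughly 0.09 for these profiles
if score < 0.4:                 # invented cut-off
    print(f"low friend compatibility ({score:.2f}) - conversation unlikely")
```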

Google’s Quantum Artificial Intelligence Lab already works with other organisations associated with NASA. Google’s boss sits on a Pentagon advisory board, with links plugged directly into the surveillance architecture of the NSA in the USA and GCHQ in Britain. This world, where artificial intelligence makes its mark, as Williams mentioned earlier, will deliberately undermine the ability to think for yourself.

In the scenario of the eye lenses, you might even have the ability to command your eyewear to shut down. But when you do, suddenly you are confronted with an un-Googled world. It appears drab and colourless in comparison. The people before you are bland, washed out and unattractive. The art, plants, wall paint, lighting and decorations had all been shaped by your own preferences, and without the distortion field your wearable eyewear provided, the world appears as a grey, lifeless template.

You find it difficult to last without the assistance of your self-imposed augmented life, and, accompanied by nervous laughter, you switch it back on. The world you view through the prism of your computer eyewear has become your default setting. You know you have free will but don’t feel like you need it. As Marcellino says, the same neurological pathways that drive gambling and drug use now drive how you choose to see the world.

This type of technology will be available, and these types of scenarios will become real, sooner than you think.

Our governments, allied with the tech giants, are coercing us into a place of withering obedience with the use of 360-degree state surveillance. New technology, which is somehow seen as the road to liberty, contentment, and prosperity, is really our future being shaped by a system that will destroy our civil liberties, crush our human rights, and eventually ensnare and trap us all. This much is already being attempted in China and Japan with social credit mechanisms and pre-crime technology – a truly frightening prospect. Without debate or our knowledge, here in Western democracies, these technologies are already in use.

 

Saturday Matinee: The Selfish Ledger

“The Selfish Ledger” (2016) is a leaked internal Google video by Nick Foster, the head of design at X, Google’s research-and-development division. It draws on theories of evolutionary biology to explain how the collective data history of all devices could be used by an AI “ledger,” similar to how genes shape the characteristics of future generations. As explained by Foster:

“User-centered design principles have dominated the world of computing for many decades, but what if we looked at things a little differently? What if the ledger could be given a volition or purpose rather than simply acting as a historical reference? What if we focused on creating a richer ledger by introducing more sources of information? What if we thought of ourselves not as the owners of this information, but as custodians, transient carriers, or caretakers?…By thinking of user data as multigenerational, it becomes possible for emerging users to benefit from the preceding generation’s behaviors and decisions.”

This database of human behavior can be mined for patterns and “sequenced” like the human genome, making future behaviors and decisions easier to predict and direct. According to Google, the video was designed to be provocative and did not relate to any products in development. Watch it yourself and decide.
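As a thought experiment only: the core idea of a “ledger” mined for sequential patterns can be caricatured in a few lines of Python. The sketch below records a history of actions and predicts the next one from simple transition counts. It is a toy illustration of the concept in the video and has no connection to any actual Google system.

```python
from collections import Counter, defaultdict

class Ledger:
    def __init__(self):
        self.history = []
        self.transitions = defaultdict(Counter)  # action -> next-action counts

    def record(self, action: str) -> None:
        """Append an action and count the transition from the previous one."""
        if self.history:
            self.transitions[self.history[-1]][action] += 1
        self.history.append(action)

    def predict_next(self):
        """Most frequent action ever observed after the current one."""
        if not self.history:
            return None
        followers = self.transitions[self.history[-1]]
        return followers.most_common(1)[0][0] if followers else None

ledger = Ledger()
for action in ["wake", "check_phone", "coffee", "check_phone", "coffee"]:
    ledger.record(action)
# "coffee" has been followed by "check_phone" before, so that is predicted.
print(ledger.predict_next())  # -> check_phone
```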