Democracy Rising 28: AI, Gossip, and Our Epistemological Crisis

By Tom Prugh

Source: resilience

The other day I joined the rush to explore ChatGPT, signing up at the OpenAI website. I gave it my full legal name and correct birth date, and asked it to pretend I had died and to write my obituary. The result was 300 words describing a somewhat boring paragon of a man.

Except maybe for the boring part, I am not that man, much less that paragon.

The obit wasn’t completely wrong, but it did nothing to undermine ChatGPT’s reputation for “uneven factual accuracy.” It said I was born in Ohio (true), but in Cleveland (false) in 1957 (false). It said I was a “committed environmentalist” (true; I worked for the late lamented Worldwatch Institute for the best part of my career), and that I was an active member of “several environmental organizations” (somewhat true, off and on). It described me as an “avid cyclist” (kind of true, but the last time I did a century ride was 1987).

So much for the hits. The misses include accounts of me as:

  • A “devoted husband” to my wife of 40 years, Mary (my marriage, to a fine woman not named Mary, lasted 26 years) and a “loving father” to two children (one, in fact)
  • A “brilliant engineer” with a degree in electrical engineering from Ohio State University who worked for Boeing, General Electric, and SpaceX (wrong on all counts)
  • Someone who “was instrumental in the development of several renewable energy projects” (my wife and I put a few solar panels on our garage roof, but that’s it)
  • An “active member” of a church who spent “many hours volunteering at the food bank” (I am neither very religious nor, it shames me to admit, very generous with my personal time)

The obituary proclaimed that my “death” had “left a deep void in the lives of his family, friends, and colleagues” and that I would be “deeply missed by all who knew him.” Well, that would be gratifying—if there is a me to be gratified—but I’ll settle for a drunken wake where somebody plays “Won’t Get Fooled Again.”

Maybe everyone should try this. You too might be amused and/or appalled by the plausible distortions and lies a quasi-intelligent computer program can gin up by accessing the petabytes of data (“data”?) on the Internet—accounts of people and events that are bogus but increasingly, and seamlessly, hard to tell from reality.

I am not a tech nerd and my grasp of what ChatGPT does is rudimentary. But I find it disturbing that this expression of artificial intelligence will instantly fabricate a profile and populate it with—not questions, or blanks to be filled in—but invented factoids tailored to fit a particular format. And this reservation isn’t just me being PO’d about my obit (I’m actually grateful my Internet footprint isn’t bigger); prominent tech geeks also have misgivings. Here’s Farhad Manjoo, for instance:

ChatGPT and other chatbots are known to make stuff up or otherwise spew out incorrect information. They’re also black boxes. Not even ChatGPT’s creators fully know why it suggests some ideas over others, or which way its biases run, or the myriad other ways it may screw up.  …[T]hink of ChatGPT as a semi-reliable source.

Likewise Twitter and other social media, whose flaws and dangers are well known by now, and feared by some of the experts who know them best. The most recent book from revered tech guru and virtual reality pioneer Jaron Lanier is called Ten Arguments for Deleting Your Social Media Accounts Right Now. Chapter titles include “Quitting Social Media Is the Most Finely Targeted Way to Resist the Insanity of Our Time,” “Social Media Is Making You into an Asshole,” “Social Media Is Undermining Truth,” and “Social Media Is Making Politics Impossible.”

About those politics: ChatGPT and its successors and rivals, whatever their virtues, are the latest agents in the corruption of the public sphere by digital technology, threatening to extend and deepen the misinformation, fabulism, and division stoked by Twitter and other digital media. Once again, a powerful new technology is out the door and running wild while society and regulators struggle to understand and tame it.

It’s hard to see how this can end well.

An earlier post in this series (DR5) looked at recent archaeological evidence suggesting that humans have explored lots of different means of governing ourselves over the last several thousand years. Eventually, for several reasons, we seem to have ended up with large, top-down, hierarchical organizations. These have lots of problems that won’t be reviewed here, but neuroscientist and philosopher Erik Hoel argues that at least they freed us from the “gossip trap.”

Hoel thinks the main reason small prehistoric human groups didn’t evolve hierarchical governing systems was “raw social power,” i.e., gossip:

[Y]ou don’t need a formal chief, nor an official council, nor laws or judges. You just need popular people and unpopular people.

After all, who sits with who is something that comes incredibly naturally to humans—it is our point of greatest anxiety and subject to our constant management. This is extremely similar to the grooming hierarchies of primates, and, presumably, our hominid ancestors.

“So,” Hoel says, “50,000 BC might be a little more like a high school than anything else.”

Hoel believes that raw social power was a major obstacle to cultural development for tens of thousands of years. When civilization did finally arise, it created “a superstructure that levels leveling mechanisms, freeing us from the gossip trap.”

But now, Hoel says, the explosion of digital media and their functions has resurrected it:

[I]f we lived in a gossip trap for the majority of our existence as humans, then what would it be, mentally, to atavistically return to that gossip trap?

Well, it sure would look a lot like Twitter.

I’m serious. It would look a lot like Twitter. For it’s on social media that gossip and social manipulation are unbounded, infinitely transmittable.

…Of course we gravitate to cancel culture—it’s our innate evolved form of government.

Allowing the gossip trap to resume its influence on human affairs—and turbocharging it the way digital media are doing—seems like a terrible way to run a PTA or a garden club, let alone a community or a nation.

The industrialization of made-to-order opinions, “facts,” and “data” via AI and social media, despite efforts to harness them for constructive ends, is plunging us into an epistemological crisis: “How do you know?” is becoming the most fraught question of our time. T.S. Eliot said that “humankind cannot bear very much reality,” but now we are well into an era when we can’t even tell what it is—or in which we simply make it up to please ourselves. The more convincing these applications become, the less anchored we are to the “fact-based” world.

We’ve struggled with this for centuries. Deception is built into nature as an evolutionary strategy, and humans are pretty good at it, both individually and at scale by means of propaganda, advertising, public relations, and spin. These all prey on human social and cognitive vulnerabilities (see DR4).

Humans can only perceive the world partially and indirectly. It starts with our senses, which ignore all but a tiny fraction of the vast amount of data that’s out there. (Sight, for instance, captures only a sliver of the electromagnetic spectrum.) In addition, we’re social creatures and our perceptions of what’s real are powerfully shaped by other people. And now comes the digital mediation of inputs, in which information and data come from the ether via often faceless and anonymous sources and are cloaked or manipulated in ways we may never detect or suspect.

Digital media curate our information about reality, like all media do. But things have changed in the last few decades, and especially in the last few years. It’s been only a generation or so since the old days when Walter, or Chet and David, or any of hundreds of daily newspapers told us what was going on in the world. In those days the curation was handled by a relatively small number of individuals with high profiles. We knew, or could learn, something about who they were and where their biases lay. They were professionals, which also counted for something. No system is perfect, and this one wasn’t either, but its chain of information custody was a far cry from the distant, anonymized, chat-botted, and algorithm-driven inputs flooding the public sphere now.

One liberal pundit recently noted that the increasing ideological specialization of media outlets “compels customers who care about getting a full and nuanced picture not to buy from just one merchant … .” That’s good advice. But you don’t have to force yourself, teeth clenched, to watch Fox News or MSNBC to get a different point of view; just sit down with your neighbors for a civil chat. In fact, getting away from our TVs and into a room with other people now and then would be good for all of us.

This being a blog about deliberative democracy, I default to deliberation in response to many of our political ills. Deliberation can’t fix everything, and no doubt we will get fooled again—but the tools of democratic deliberation can be used to mitigate the seemingly ubiquitous attempts at manipulation and deceit that surround us. Humans have struggled for a long time to build institutions to check our worst tendencies and have had some success. Digitally mediated information poses a fresh threat and we need institutions to meet these new circumstances.

Deliberative settings built for shaping community action should be among those new institutions. At the very least, they will outperform the social processes seen in high school cafeterias. The methods and structures of deliberative democracy can shorten the chain of information custody as well as restore and nurture the direct human presence of neighbors and fellow citizens: they’re sitting around the same table, and you will see them later at the local school or grocery store. Like them or not (or vice versa), they remain a potent element of our daily lives—a source of influence that can work for good or ill. Deliberation channels normal human interactions in ways that can benefit the community, help check the kinds of fantasist catastrophes so prevalent in digital media, and ground our perceptions of reality in the shared concerns of a community of people who may be less than friends but far more than strangers.

Pokémon and the Age of Augmented Hyper-Surreality


By Luther Blissett

Imagine walking to a park in a fairly average medium-sized city on a warm summer day. There you see groups, pairs, and individuals of different ages and races slowly milling about, some with dogs, some with baby carriages. Drawing closer, you realize nearly everyone in the park other than yourself is staring intently at their phone, occasionally tapping and swiping the screen. It seems odd, though not completely out of the ordinary in this day and age. Then, off in the distance at the far end of the park, someone shouts what sounds like a word in an alien language or dialect, triggering a crowd to swarm rapidly toward the general area, most speed-walking or jogging but all aiming their phones at the same destination. Soon everyone in the vicinity of the park (except yourself and a few vagrants and junkies of a less tech-savvy sort) surges toward the center of the swarm of over a hundred participants as if sucked into a vortex. As quickly as it started, the crowd disperses and “normalcy” resumes, albeit temporarily, since the pattern repeats at half-hour to one-hour intervals throughout different areas of the park.

This dream-like scenario is an outsider’s description of a Pokémon Go session on a typical summer weekend at Bellevue Downtown Park. The crowd might have been slightly larger than usual due to the balmy weather, but numerous videos posted on YouTube indicate such occurrences aren’t completely anomalous.


Still, the relative novelty of the experience doesn’t make it feel any less like being in a dystopian narrative such as a Philip K. Dick novel or an episode of Charlie Brooker’s “Black Mirror.” However, the sense of social displacement and alienation for non-gamers is dampened by nearly a decade of collective exposure to increasingly advanced internet-enabled cellphones, whose ubiquity and usage have steadily grown over the years.

Even before the release of Pokémon Go, people were spending ever more hours using smartphones for talking, texting, email, news, entertainment, social media, selfies, etc. In the context of modern industrial society it’s almost an aberration to be without a device, or not to be heavily reliant on one. What sets Pokémon Go apart is its ability to simulate a fusion of the material and virtual worlds: through the phone screen, users see digital sprites superimposed on real-time images of their physical surroundings.
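
Mechanically, that fusion is an image-compositing trick: the app draws a sprite over each live camera frame, positioned according to the player's location and orientation. Below is a minimal sketch of the overlay idea in Python with OpenCV; the file name "sprite.png", the fixed screen position, and the webcam source are illustrative assumptions for this sketch, not anything from Niantic's actual engine.

```python
# Minimal sketch of the overlay idea behind augmented-reality apps:
# composite a digital sprite onto a live camera feed.
# Assumes OpenCV (pip install opencv-python) and a local "sprite.png"
# with an alpha channel that is smaller than the camera frame.
import cv2

sprite = cv2.imread("sprite.png", cv2.IMREAD_UNCHANGED)  # BGRA image
alpha = sprite[:, :, 3:].astype(float) / 255.0           # per-pixel opacity
sprite_bgr = sprite[:, :, :3].astype(float)
h, w = sprite.shape[:2]

cap = cv2.VideoCapture(0)                                # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Blend the sprite into a fixed region of the frame. A real AR app
    # would place it using GPS, compass, and surface-tracking data instead.
    y, x = 40, 40
    roi = frame[y:y + h, x:x + w].astype(float)
    frame[y:y + h, x:x + w] = (alpha * sprite_bgr + (1 - alpha) * roi).astype("uint8")
    cv2.imshow("augmented view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):                # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```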

Just as shamans would use entheogens to peer behind the veil of reality, augmented reality allows users to perceive additional veils over reality. This is not necessarily a bad thing, because there’s potential for “digital veils” to assist us in seeing what certain interests might prefer to keep hidden. For example, what if everyone could literally see the interests orchestrating a politician’s rise to power? What if we could walk into any store and instantly know which products were made by war profiteers, polluters, and/or sweatshop owners? Would people want to know? How much of an impact would it have on decisions and actions in the context of a media environment inundated with heavily financed government/corporate PR and marketing? Of course, even without augmented reality the virtual realm affects the “real world,” most notably with the economic dominance of the tech industry as well as the social, political, and economic havoc wreaked by hackers; but such influence is rarely manifested as immediately as when crowds swarm newly spawned Pokémon sprites.

In many ways, Pokémon Go was the ideal vehicle to bring augmented reality to the masses. Many apps have utilized it for different purposes such as navigating, translating, finding dates, viewing celestial objects, narrating self-guided tours, weather forecasting, image enhancement, etc., but only Pokémon was able to use the technology to bring a fictional universe closer to life by creating a cross-generational craze. Alfie Brown of ROAR Magazine characterized virtual Pokémon as the perfect example of what Jacques Lacan called the objet petit a, a fetishized yet ephemeral and unobtainable object of desire, a key concept behind consumerist neoliberalism’s push towards cheap, chronically obsolete, ephemeral and now digital goods and services.

But what makes Pokémon creatures so desirable? Children seem naturally drawn to cute and brightly colored cartoon characters, and the mechanics of the game tap into natural tendencies to collect things and to display one’s collection to others (a phenomenon South Park astutely critiqued in episodes lampooning World of Warcraft and “freemiums”). In consumer societies, children and adults alike are prone to deriving prestige and power from the size and perceived value of their collections; children, however, have limited acquisitive power, so video games offer them a rare opportunity to gain more prestige and power than adults have in real life.

As for older players, a variety of additional, interconnected factors come into play. For teens and young adults, peer pressure alone might be enough to hook some people, but the mainstreaming of geek culture no doubt plays a part, making fandom, quirkiness, and technological obsession more accepted and valued. The transition to adulthood also happens to be a time of increased pressure to establish one’s sense of identity, become more independent, and succeed academically and professionally. Games are a means of escape from such pressures (as real-life opportunities for economic advancement continue to dwindle) while at the same time functioning as structured activities for social interaction and, more broadly, for building communities. For adults, the reasons may include all of those previously mentioned, in addition to fascination with technology, bonding with younger friends and family, the feeling of being part of a global phenomenon, or nostalgia for the original Pokémon games.

Returning to Pokémon Go’s more dystopian aspects, the game has been used as a tool by the unscrupulous for crimes such as robbery and sexual assault. Though crowds created by Pokémon Go spawning areas or “gyms” (locations where players battle each other in teams to increase their avatars’ abilities) have been a benefit to some local businesses, residential neighbors in some cases view game players as unwanted loiterers invading their privacy. There have also been news reports of video game battles escalating to physical brawls and innocent gamers being racially profiled as suspicious threats.

As with most online tools, there’s a risk of the app and its users being exploited for surveillance, social control, and the extraction of money and personal data. Modern media literacy requires an understanding of how businesses benefit from our use of game and service apps (especially “free” ones) and how intentional or unknowing misuse of collected data could serve government, corporate, or criminal interests. Augmented reality games are an exciting new medium with potential to be used in novel and fun ways, but we should be vigilant about their potential to influence beliefs as well as decisions about how we spend time and resources.

Pokémon Go is at the forefront of the increasing power of tech companies such as Google and Niantic (the software developer behind Pokémon Go) to control and use information to manipulate the masses. Such power is disturbing in itself, but more sensational examples include news reports of car accidents caused by drivers mindlessly following Google Maps off the road or colliding with other cars while playing Pokémon Go. Such cases may seem absurd, but they prompt a number of important questions. Why do some people prioritize and trust mediated information over their own senses? As online personas grow in perceived importance, to what lengths will people go to sustain them, and will it come at the expense of other things (such as personal safety)? Are we becoming addicted to cognitive “Skinner boxes,” with our needs perpetually triggered and gratified by apps? In an increasingly hyperreal world in which the boundary between the real and the virtual becomes ever more permeable, what new hazards await?

Mainstream Media Stock Prices Collapsing as People Choose Internet Over TV


By Nick Bernabe

Source: AntiMedia.org

The long-term decline in viewership for America’s big TV outlets is finally starting to catch up with their stock prices. Since 2009, media stocks have been some of the best performers in the S&P 500, but the last few days have seen $50 billion wiped from these companies’ market value.

According to Bloomberg, “Ignited by a plunge in Walt Disney Co., shares tracked by the 15-company S&P 500 Media Index have tumbled 8.2 percent in two days, the biggest slump for the group since 2008…In just five stocks — Disney, Time Warner Inc., Fox, CBS and Comcast Corp. — almost $50 billion of value was erased in two days. Viacom slid 14 percent on Thursday alone, its biggest drop since October 2008.”

Stock analysts say the reason behind the drop is simple on the surface: many of the media companies missed their profit projections, prompting investors to drop their stocks. Disney has lowered its growth projections for its sports brand, ESPN, while Viacom reported lower revenues than expected, which triggered a sell-off.

However, there is a larger trend at play here, one that the mainstream media, owned by these very companies facing the stock beat-down, doesn’t want to talk about. People are simply outgrowing the old media paradigm and turning instead to the internet for both their news and entertainment at a breakneck pace. As we reported last month, Netflix will have more viewers than ABC, CBS, NBC, or Fox by 2016.

Viewership of television media is dropping — and it’s left the old media scrambling for answers. According to the Huffington Post,

“Though overall video viewing is up thanks to a plethora of new online services, fewer people are sitting down in front of a television set and a growing number of households — roughly 2.6 million, or 2.8 percent — are becoming ‘broadband only,’ forgoing cable and broadcast signals altogether. In the third quarter of 2014, the average viewer watched 141 hours of TV a month, down 6 hours from the same time last year, and a full 12 minutes less per day.

Digital, on the other hand, has shown strong growth over the past year across all age groups, with viewership up 53 percent among people 18-49, up 62 percent among people 25-54, and up 55 percent among those 55 and older since the third quarter of 2013.”

In the past, TV news outlets relied on the virtual monopoly held by the big six companies that own 90% of the media to make their numbers. This left viewers with no choice but to consume media from one of these companies if they watched TV.

But now, as people have multiple sources and choices of news thanks to the internet and independent media, the monopoly is coming under pressure. Aging generations, which will probably never break their TV habits, are now the only reliable audiences for the likes of CNN, Fox News, NBC, CBS, and the rest of the mainstream media. Members of the internet age would rather have choices and read or watch news from sources they both trust and believe in. This is a major problem for the old media, as poll after poll has shown eroding trust in the big six. According to Gallup polling numbers, Americans’ confidence in the media’s ability to report “the news fully, accurately, and fairly” reached an all-time low of 40% in 2014.


The reason for the falling ratings and trust in the media is not mentioned in the poll, but one could speculate that younger generations have become disillusioned by the endless warmongering, partisanship, racial bias, politician and police worship, reality TV, and celebrity media frenzies that have become the trademarks of TV news. However, one thing is clear: barring some unexpected miracle, television media will soon suffer the same fate as the near-extinct newspaper industry, and that is a positive development for the well-being of the political and social conversation in America. America’s new media is becoming more like America as a whole: diverse.

Is the Web Destroying the Cultural Economy?


By Charles Hugh Smith

Source: Of Two Minds

Are we entering a cultural Dark Age, where the talented cannot earn a living creating culture?

Longtime correspondent G.F.B. recently sent me this 13-minute interview with Andrew Keen. This is my first exposure to Keen and his view that the democratization of the Web is great for politics but a disaster for what he calls the Cultural Economy, the relatively small but important slice of the economy that pays creators and artists to make culture: music, literature, art, and serious journalism.

The title of Keen’s 2007 book encapsulates his dire perspective: The Cult of the Amateur: How blogs, MySpace, YouTube, and the rest of today’s user-generated media are destroying our economy, our culture, and our values.

(His 2012 book had a similar theme: Digital Vertigo: How Today’s Online Social Revolution Is Dividing, Diminishing, and Disorienting Us.)

(Author Scott Timberg makes some of the same points in his new book Culture Crash: The Killing of the Creative Class, via Cheryl A.)

Keen touches on a great many ideas and themes in this brief interview, but his core point is this: by enabling everyone to express themselves on an essentially equal footing, the Web has undermined legitimate journalism and buried the talented few in an avalanche of mediocrity–in his words, talent is “lost in a sea of garbage.”

By eliminating the middlemen who added value by sorting the wheat from the chaff–the film studios, the music labels, the publishers–the Web has created a cultural landscape where “soft, ordinary” content such as cute cat videos garners the most “likes” and clicks–the digital world’s metric for popularity and thus value in the marketplace.

Keen tossed off one of his most interesting ideas as an aside: that the break-up of community and the resulting loss of identity have generated a universal drive to establish an identity via self-expression: everybody feels they can compose a song, write a novel, or make a movie.

Keen is at his most provocative (to the democratized ideal of the amateur making it big) when he declares the vast majority of people are talentless: talent is by definition scarce. We can’t all be equally talented, nor can anyone generate culturally valuable content without mastering their craft over thousands of hours of practice.

Keen unapologetically calls the previous arrangement an “industrial meritocracy.” He feels this hierarchical meritocracy is being destroyed and there is nothing to replace it.  This will result in a cultural Dark Age where the talented cannot earn a living creating culture. The only avenue left for creators of content that can be copied and distributed digitally (music, digital art, writing) is to find wealthy patrons to support their work.

One of G.F.B.’s points in our conversation was that the Web’s “level playing field” is an artificial construct, much like the playing field in a stadium. But outside the stadium, the geography is anything but level. Put another way, global corporations have great advantages on the supposedly “level playing field” of the Web.

Keen mentions that what will remain scarce in this tsunami of digital content is access to the artist, live performances and art that cannot be digitized, such as sculpture and paintings. As I have discussed in previous Musings, musicians who perform constantly can make a living in this environment, because their free music on the web builds an audience for their live performances.  But not every band performs enough to make a go of this model.

In Keen’s view, it is now essentially impossible for bands, artists, and writers to create a “brand” that will generate an income. Only those creators who entered the digital age with an established brand can leverage their recognition into an income.

As a completely marginal creator of content who never rose within the industrial meritocracy lauded by Keen, I think he makes some excellent points but overstates his case for a cultural Dark Age.

As G.F.B. pointed out in our conversation on this topic, a new class of curators is arising within the Web, people who sift through the vast outpouring of content and select the best or most interesting (in their view). Those curators who succeed are adding value just as the industrial middlemen did in the pre-digital model. In some small way, I think Of Two Minds performs a bit of this curation.

It seems to me that the digital age requires every creator of content not only to persevere but also to devote a great deal of time and energy to marketing that content–precisely what the industrial-era media and culture companies once did for their talent.

There is no longer enough money in creating content to pay an office full of people to issue press releases and arrange book tours.  In the publishing world, promotion is increasingly up to the authors; as Keen noted, only those authors with brands that were established in the pre-digital age can sell enough content to support industrial-type promotion.

We can bemoan this, or we can grasp the nettle and realize that it is no longer enough to practice one’s craft for the fabled 10,000 hours–one must also invest another 10,000 hours in promoting and marketing one’s content/cultural creations. That dual process (creation and marketing) is so arduous, so impoverishing, and so demanding that only the driven few can sustain it long enough to claw their way through the mountains of mediocrity.

Making a living at cultural content was always brutally Darwinian; perhaps all that’s changed is the nature of the Darwinian selection process.