By Tom Prugh
The other day I joined the rush to explore ChatGPT, signing up at the OpenAI website. I gave it my full legal name and correct birth date, and asked it to pretend I had died and to write my obituary. The result was 300 words describing a somewhat boring paragon of a man.
Except maybe for the boring part, I am not that man, much less that paragon.
The obit wasn’t completely wrong, but it did nothing to undermine ChatGPT’s reputation for “uneven factual accuracy.” It said I was born in Ohio (true), but in Cleveland (false) in 1957 (false). It said I was a “committed environmentalist” (true; I worked for the late lamented Worldwatch Institute for the better part of my career), and that I was an active member of “several environmental organizations” (somewhat true, off and on). It described me as an “avid cyclist” (kind of true, but the last time I did a century ride was 1987).
So much for the hits. The misses include accounts of me as:
- A “devoted husband” to my wife of 40 years, Mary (my marriage, to a fine woman not named Mary, lasted 26 years) and a “loving father” to two children (one, in fact)
- A “brilliant engineer” with a degree in electrical engineering from Ohio State University who worked for Boeing, General Electric, and SpaceX (wrong on all counts)
- Someone who “was instrumental in the development of several renewable energy projects” (my wife and I put a few solar panels on our garage roof, but that’s it)
- An “active member” of a church who spent “many hours volunteering at the food bank” (I am neither very religious nor, it shames me to admit, very generous with my personal time)
The obituary proclaimed that my “death” had “left a deep void in the lives of his family, friends, and colleagues” and that I would be “deeply missed by all who knew him.” Well, that would be gratifying—if there is a me to be gratified—but I’ll settle for a drunken wake where somebody plays “Won’t Get Fooled Again.”
Maybe everyone should try this. You too might be amused and/or appalled by the plausible distortions and lies a quasi-intelligent computer program can gin up by accessing the petabytes of data (“data”?) on the Internet—accounts of people and events that are bogus but increasingly, and seamlessly, hard to tell from reality.
I am not a tech nerd and my grasp of what ChatGPT does is rudimentary. But I find it disturbing that this expression of artificial intelligence will instantly fabricate a profile and populate it with—not questions, or blanks to be filled in—but invented factoids tailored to fit a particular format. And this reservation isn’t just me being PO’d about my obit (I’m actually grateful my Internet footprint isn’t bigger); prominent tech geeks also have misgivings. Here’s Farhad Manjoo, for instance:
ChatGPT and other chatbots are known to make stuff up or otherwise spew out incorrect information. They’re also black boxes. Not even ChatGPT’s creators fully know why it suggests some ideas over others, or which way its biases run, or the myriad other ways it may screw up. …[T]hink of ChatGPT as a semi-reliable source.
Likewise Twitter and other social media, whose flaws and dangers are well known by now, and feared by some of the experts who know them best. The most recent book from revered tech guru and virtual reality pioneer Jaron Lanier is called Ten Arguments for Deleting Your Social Media Accounts Right Now. Chapter titles include “Quitting Social Media Is the Most Finely Targeted Way to Resist the Insanity of Our Time,” “Social Media Is Making You into an Asshole,” “Social Media Is Undermining Truth,” and “Social Media Is Making Politics Impossible.”
About those politics: ChatGPT and its successors and rivals, whatever their virtues, are the latest agents in the corruption of the public sphere by digital technology, threatening to extend and deepen the misinformation, fabulism, and division stoked by Twitter and other digital media. Once again, a powerful new technology is out the door and running wild while society and regulators struggle to understand and tame it.
It’s hard to see how this can end well.
An earlier post in this series (DR5) looked at recent archaeological evidence suggesting that humans have explored lots of different means of governing ourselves over the last several thousand years. Eventually, for several reasons, we seem to have ended up with large, top-down, hierarchical organizations. These have lots of problems that won’t be reviewed here, but neuroscientist and philosopher Eric Hoel argues that at least they freed us from the “gossip trap.”
Hoel thinks the main reason small prehistoric human groups didn’t evolve hierarchical governing systems is “raw social power,” i.e., gossip:
[Y]ou don’t need a formal chief, nor an official council, nor laws or judges. You just need popular people and unpopular people.
After all, who sits with who is something that comes incredibly naturally to humans—it is our point of greatest anxiety and subject to our constant management. This is extremely similar to the grooming hierarchies of primates, and, presumably, our hominid ancestors.
“So,” Hoel says, “50,000 BC might be a little more like a high school than anything else.”
Hoel believes that raw social power was a major obstacle to cultural development for tens of thousands of years. When civilization did finally arise, it created “a superstructure that levels leveling mechanisms, freeing us from the gossip trap.”
But now, Hoel says, the explosion of digital media and their functions have resurrected it:
[I]f we lived in a gossip trap for the majority of our existence as humans, then what would it be, mentally, to atavistically return to that gossip trap?
Well, it sure would look a lot like Twitter.
I’m serious. It would look a lot like Twitter. For it’s on social media that gossip and social manipulation are unbounded, infinitely transmittable.
…Of course we gravitate to cancel culture—it’s our innate evolved form of government.
Allowing the gossip trap to resume its influence on human affairs—and turbocharging it the way digital media are doing—seems like a terrible way to run a PTA or a garden club, let alone a community or a nation.
The industrialization of made-to-order opinions, “facts,” and “data” via AI and social media, despite efforts to harness them for constructive ends, is plunging us into an epistemological crisis: “How do you know?” is becoming the most fraught question of our time. T.S. Eliot said that “humankind cannot bear very much reality,” but now we are well into an era when we can’t even tell what it is—or in which we simply make it up to please ourselves. The more convincing these applications become, the less anchored we are to the “fact-based” world.
We’ve struggled with this for centuries. Deception is built into nature as an evolutionary strategy, and humans are pretty good at it, both individually and at scale by means of propaganda, advertising, public relations, and spin. These all prey on human social and cognitive vulnerabilities (see DR4).
Humans can only perceive the world partially and indirectly. It starts with our senses, which ignore all but a tiny fraction of the vast amount of data that’s out there. (Sight, for instance, captures only a sliver of the electromagnetic spectrum.) In addition, we’re social creatures and our perceptions of what’s real are powerfully shaped by other people. And now comes the digital mediation of inputs, in which information and data come from the ether via often faceless and anonymous sources and are cloaked or manipulated in ways we may never detect or suspect.
Digital media curate our information about reality, as all media do. But things have changed in the last few decades, and especially in the last few years. It’s been only a generation or so since the old days when Walter, or Chet and David, or any of hundreds of daily newspapers told us what was going on in the world. In those days the curation was handled by a relatively small number of individuals with high profiles. We knew, or could learn, something about who they were and where their biases lay. They were professionals, which also counted for something. No system is perfect, and this one wasn’t either, but its chain of information custody was a far cry from the distant, anonymized, chat-botted, and algorithm-driven inputs flooding the public sphere now.
One liberal pundit recently noted that the increasing ideological specialization of media outlets “compels customers who care about getting a full and nuanced picture not to buy from just one merchant … .” That’s good advice. But you don’t have to force yourself, teeth clenched, to watch Fox News or MSNBC to get a different point of view; just sit down with your neighbors for a civil chat. In fact, getting away from our TVs and into a room with other people now and then would be good for all of us.
This being a blog about deliberative democracy, I default to deliberation in response to many of our political ills. Deliberation can’t fix everything, and no doubt we will get fooled again—but the tools of democratic deliberation can be used to mitigate the seemingly ubiquitous attempts at manipulation and deceit that surround us. Humans have struggled for a long time to build institutions to check our worst tendencies and have had some success. Digitally mediated information poses a fresh threat, and we need institutions to meet these new circumstances.
Deliberative settings built for shaping community action should be among those new institutions. At the very least, they will outperform the social processes seen in high school cafeterias. The methods and structures of deliberative democracy can shorten the chain of information custody as well as restore and nurture the direct human presence of neighbors and fellow citizens: they’re sitting around the same table, and you will see them later at the local school or grocery store. Like them or not (or vice versa), they remain a potent element of our daily lives—a source of influence that can work for good or ill. Deliberation channels normal human interactions in ways that can benefit the community, help check the kinds of fantasist catastrophes so prevalent in digital media, and ground our perceptions of reality in the shared concerns of a community of people who may be less than friends but far more than strangers.