THE RISE OF THE CYBORGIAN CONSUMER

By Elva Thompson

Source: Waking Times

“You don’t take over the world with gaudy displays of violence. Real control is surgical, and invisible.” ~ John Greer – Person of Interest, Season 4, Episode: YHWH

The Posthuman (or Transhuman) Movement was formally established in California in the early 1980s. A central figure within the movement, Nick Bostrom, declared that Posthumanism holds that human nature is an improvable structure whose every capacity can be enhanced through the use of ‘applied science’. The proposition is that human beings should use technology to transcend the limits of body and mind. Posthumanists do not see humans as isolated physical entities but as distributed processes within a system. ‘THE BODY IS OBSOLETE’ is their battle cry, yet the Posthuman seems to be defined by a lack of definition.

Stephen Brown, the doyen of marketing and interpretive consumer research, introduced the term apophasis. This is a curious choice of word, as it is a theological term which refers to knowing God through negation, or through what God is not. Apophasis comes from the ancient Greek, meaning to deny or negate. The problem seems to be that we can happily describe what a human being does or performs, but there is a difficulty when it comes to describing what a human being is. This lays bare the profound incapacity of language to penetrate the very essence of things: in other words, to fully express Being (I bet humpback whales don’t have this problem).

I Consume, Therefore I Am

To the Posthumanist technocrats, ‘natural man’ is a failed project that desperately needs to be improved upon. But before this can happen, radical changes need to be made to our current conceptions of what it means to be human. Traditional accounts of the state of being human are based upon Cartesian worldviews, which regard external reality as objectively recordable and expressible primarily through mathematical laws.

People, as physically isolated thinking mammals, are viewed as necessary consumers within a society that is imagined to be ever abundant and fruitful. This is the Gaia of Classical thought, an ever-producing cornucopia of unquenchable life. Consuming this bounty is our essential function, and God-given right. Not only that, but the mind, or disembodied consciousness, is now theorized to be nothing more than an information processor. This data-collecting ‘machine’, being rational and scientific, is external to the fixed social reality which it continually observes and records.

Posthumanism, conversely, puts the emphasis on socio-cultural and technological contexts, with individual consciousnesses being mere processes within this contextual framework. These contexts form socially constructed “hybrid marketplace matrices” that use a variety of social, economic and technological systems to control and manoeuvre the population through their insatiable desire to consume. The boundary between the individual and society is seen as porous and amorphous. Reality becomes protean, as nothing is stable, and it becomes difficult to draw a clear line between the human, the animal and the machine. This is viewed as a natural societal progression, the next phase of an ever-evolving humanity.

“I would rather be a cyborg than a goddess.” ~ Donna J. Haraway

The rational consumer is now replaced by a bio-technological symbiont, or cyborg. As the “proto-typical posthumanist consumer”, the cyborg is seen as the most popular manifestation of the posthuman. This image contains a dichotomy, though, as it is both liberatory and oppressive. It carries the hope of liberating mankind from organic failures, such as ageing and disease, yet also the fear of a powerful hierarchical elite of technologized bio-machines lording it over the general unmodified masses.

Yet, the cyborg is a hybrid, a bio-technological chimera, and cannot exist without social, economic and technological systems. It is fundamentally a process (or even a product) rather than a Being, a verb rather than a noun. It has lost its essential individuality and become subsumed in the process itself. A cyborg is a Golem by any other name. The Golem is one of the oldest legends of artificial creation, barring Hermetic tales of Egyptian magic instilling stone statues with the consciousness of their gods or neters.

Similarly, through the power of Jewish cabbalism, a model of stone or metal could be infused with life (the root of the word golem is golmi, meaning ‘unformed limbs’). The modern attempt to create consciousness within digital constructs is no different.

Ironically, Descartes himself was fascinated by the automaton, comparing it favourably with the human body. In the late 1980s, Hans Moravec, a faculty member of the Robotics Institute at Carnegie Mellon University, proposed that human identity is essentially an informational pattern. He suggested that this proposition could be demonstrated by downloading human consciousness into a computer, thus proving that machines can become human. He believed that robots would evolve into a new series of artificial species, starting around 2030–2040.

Almost 30 years later, a large proportion of human communication is mediated by a technology that has become so entwined with the creation of identity that it can no longer be meaningfully separated from the human subject. In other words, we are in the initial stages of becoming cyborgs. We may not be physically augmented by technology, but we are beginning to think in the same manner as machines, or the algorithms that run them. The more we use computers, and the more of our valuable time we spend prowling around the internet, the more fully we embody the Posthuman. We are already well on the way to becoming cyborg.

“Beam me up, Scotty.” ~ Captain James T. Kirk, any original Star Trek episode.

In 1948, the mathematician Norbert Wiener published a book entitled ‘Cybernetics: Or Control and Communication in the Animal and the Machine’. Here he proposed that it was possible to telegraph a human being, imagining that the body could be dematerialised into an informational pattern and rematerialised at a different location. More recently, molecular biology has come to treat information as the essential code the body expresses, what we know as DNA. This is termed the ‘impossible inversion’, where information becomes primary and materiality secondary.

During our present cultural moment the belief is that information can circulate unchanged among different material substrates. “Beam me up, Scotty” has become a cultural icon for the global information age. Information has now come to be conceptualised as an entity in itself, existing separately from the material forms in which it is thought to be embedded.

As such, information can be considered the thoughts of the machine, and a biological body is seen as an accident of history rather than an inevitable consequence of material life. In his book ‘The Order of Things’ (1973), the philosopher Michel Foucault suggested that “man” is no more than a historical construct whose era is about to end. The Posthuman view considers consciousness an epiphenomenon, an evolutionary by-product or secondary effect. The physical body is considered the original prosthesis we, as a species, learned to manipulate.

Extending or replacing the biological body with artificial prostheses is therefore a continuation of an evolutionary process. This view, in turn, configures the human being to be seamlessly articulated with intelligent machines. In the Posthuman world there is no essential demarcation between bodily existence and computer simulation, cybernetic mechanism and biological organism.

Posthumanism does away with the ‘natural self’, rejecting the arguments of the social philosophers Hobbes and Locke that humans were originally in a ‘state of nature’ and therefore owed nothing to society, which came afterwards. Instead, the Posthuman is an amalgamation of diverse components, a material-information entity whose boundaries undergo continuous construction and reconstruction.

Chaos out of Order?

Indeed, the situation just described seems to contradict the oft-touted phrase of the elite: Ordo Ab Chao, or order out of chaos. The beliefs of the Posthumanists appear to reverse that declaration. Instead of an ordered reality complete with clearly defined boundaries and roles for the inhabitants of society, they appear to offer an amalgamated mass of ever-changing variables resembling the protean, but infinitely fecund, primal Chaos that preceded the manifestation of this physical universe. Is this the intentional creation of a universal Chaos (alchemically merging mineral with vegetable, namely silicon with carbon) from which a new level of Order can be produced?

Addendum: it struck me that posthuman is nearly posthumous, and thus sounds the death-knell of the natural human being.

The world wide cage


Technology promised to set us free. Instead it has trained us to withdraw from the world into distraction and dependency

By Nicholas Carr

Source: Aeon

It was a scene out of an Ambien nightmare: a jackal with the face of Mark Zuckerberg stood over a freshly killed zebra, gnawing at the animal’s innards. But I was not asleep. The vision arrived midday, triggered by the Facebook founder’s announcement – in spring 2011 – that ‘The only meat I’m eating is from animals I’ve killed myself.’ Zuckerberg had begun his new ‘personal challenge’, he told Fortune magazine, by boiling a lobster alive. Then he dispatched a chicken. Continuing up the food chain, he offed a pig and slit a goat’s throat. On a hunting expedition, he reportedly put a bullet in a bison. He was ‘learning a lot’, he said, ‘about sustainable living’.

I managed to delete the image of the jackal-man from my memory. What I couldn’t shake was a sense that in the young entrepreneur’s latest pastime lay a metaphor awaiting explication. If only I could bring it into focus, piece its parts together, I might gain what I had long sought: a deeper understanding of the strange times in which we live.

What did the predacious Zuckerberg represent? What meaning might the lobster’s reddened claw hold? And what of that bison, surely the most symbolically resonant of American fauna? I was on to something. At the least, I figured, I’d be able to squeeze a decent blog post out of the story.

The post never got written, but many others did. I’d taken up blogging early in 2005, just as it seemed everyone was talking about ‘the blogosphere’. I’d discovered, after a little digging on the domain registrar GoDaddy, that ‘roughtype.com’ was still available (an uncharacteristic oversight by pornographers), so I called my blog Rough Type. The name seemed to fit the provisional, serve-it-raw quality of online writing at the time.

Blogging has since been subsumed into journalism – it’s lost its personality – but back then it did feel like something new in the world, a literary frontier. The collectivist claptrap about ‘conversational media’ and ‘hive minds’ that came to surround the blogosphere missed the point. Blogs were crankily personal productions. They were diaries written in public, running commentaries on whatever the writer happened to be reading or watching or thinking about at the moment. As Andrew Sullivan, one of the form’s pioneers, put it: ‘You just say what the hell you want.’ The style suited the jitteriness of the web, that needy, oceanic churning. A blog was critical impressionism, or impressionistic criticism, and it had the immediacy of an argument in a bar. You hit the Publish button, and your post was out there on the world wide web, for everyone to see.

Or to ignore. Rough Type’s early readership was trifling, which, in retrospect, was a blessing. I started blogging without knowing what the hell I wanted to say. I was a mumbler in a loud bazaar. Then, in the summer of 2005, Web 2.0 arrived. The commercial internet, comatose since the dot-com crash of 2000, was up on its feet, wide-eyed and hungry. Sites such as MySpace, Flickr, LinkedIn and the recently launched Facebook were pulling money back into Silicon Valley. Nerds were getting rich again. But the fledgling social networks, together with the rapidly inflating blogosphere and the endlessly discussed Wikipedia, seemed to herald something bigger than another gold rush. They were, if you could trust the hype, the vanguard of a democratic revolution in media and communication – a revolution that would change society forever. A new age was dawning, with a sunrise worthy of the Hudson River School.

Rough Type had its subject.

The greatest of the United States’ homegrown religions – greater than Jehovah’s Witnesses, greater than the Church of Jesus Christ of Latter-day Saints, greater even than Scientology – is the religion of technology. John Adolphus Etzler, a Pittsburgher, sounded the trumpet in his testament The Paradise Within the Reach of All Men (1833). By fulfilling its ‘mechanical purposes’, he wrote, the US would turn itself into a new Eden, a ‘state of superabundance’ where ‘there will be a continual feast, parties of pleasures, novelties, delights and instructive occupations’, not to mention ‘vegetables of infinite variety and appearance’.

Similar predictions proliferated throughout the 19th and 20th centuries, and in their visions of ‘technological majesty’, as the critic and historian Perry Miller wrote, we find the true American sublime. We might blow kisses to agrarians such as Jefferson and tree-huggers such as Thoreau, but we put our faith in Edison and Ford, Gates and Zuckerberg. It is the technologists who shall lead us.

Cyberspace, with its disembodied voices and ethereal avatars, seemed mystical from the start, its unearthly vastness a receptacle for the spiritual yearnings and tropes of the US. ‘What better way,’ wrote the philosopher Michael Heim in ‘The Erotic Ontology of Cyberspace’ (1991), ‘to emulate God’s knowledge than to generate a virtual world constituted by bits of information?’ In 1999, the year Google moved from a Menlo Park garage to a Palo Alto office, the Yale computer scientist David Gelernter wrote a manifesto predicting ‘the second coming of the computer’, replete with gauzy images of ‘cyberbodies drift[ing] in the computational cosmos’ and ‘beautifully laid-out collections of information, like immaculate giant gardens’.

The millenarian rhetoric swelled with the arrival of Web 2.0. ‘Behold,’ proclaimed Wired in an August 2005 cover story: we are entering a ‘new world’, powered not by God’s grace but by the web’s ‘electricity of participation’. It would be a paradise of our own making, ‘manufactured by users’. History’s databases would be erased, humankind rebooted. ‘You and I are alive at this moment.’

The revelation continues to this day, the technological paradise forever glittering on the horizon. Even money men have taken sidelines in starry-eyed futurism. In 2014, the venture capitalist Marc Andreessen sent out a rhapsodic series of tweets – he called it a ‘tweetstorm’ – announcing that computers and robots were about to liberate us all from ‘physical need constraints’. Echoing Etzler (and Karl Marx), he declared that ‘for the first time in history’ humankind would be able to express its full and true nature: ‘we will be whoever we want to be.’ And: ‘The main fields of human endeavour will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure.’ The only thing he left out was the vegetables.

Such prophecies might be dismissed as the prattle of overindulged rich guys, but for one thing: they’ve shaped public opinion. By spreading a utopian view of technology, a view that defines progress as essentially technological, they’ve encouraged people to switch off their critical faculties and give Silicon Valley entrepreneurs and financiers free rein in remaking culture to fit their commercial interests. If, after all, the technologists are creating a world of superabundance, a world without work or want, their interests must be indistinguishable from society’s. To stand in their way, or even to question their motives and tactics, would be self-defeating. It would serve only to delay the wonderful inevitable.

The Silicon Valley line has been given an academic imprimatur by theorists from universities and think tanks. Intellectuals spanning the political spectrum, from Randian right to Marxian left, have portrayed the computer network as a technology of emancipation. The virtual world, they argue, provides an escape from repressive social, corporate and governmental constraints; it frees people to exercise their volition and creativity unfettered, whether as entrepreneurs seeking riches in the marketplace or as volunteers engaged in ‘social production’ outside the marketplace. As the Harvard law professor Yochai Benkler wrote in his influential book The Wealth of Networks (2006):

This new freedom holds great practical promise: as a dimension of individual freedom; as a platform for better democratic participation; as a medium to foster a more critical and self-reflective culture; and, in an increasingly information-dependent global economy, as a mechanism to achieve improvements in human development everywhere.

Calling it a revolution, he said, is no exaggeration.

Benkler and his cohort had good intentions, but their assumptions were bad. They put too much stock in the early history of the web, when the system’s commercial and social structures were inchoate, its users a skewed sample of the population. They failed to appreciate how the network would funnel the energies of the people into a centrally administered, tightly monitored information system organised to enrich a small group of businesses and their owners.

The network would indeed generate a lot of wealth, but it would be wealth of the Adam Smith sort – and it would be concentrated in a few hands, not widely spread. The culture that emerged on the network, and that now extends deep into our lives and psyches, is characterised by frenetic production and consumption – smartphones have made media machines of us all – but little real empowerment and even less reflectiveness. It’s a culture of distraction and dependency. That’s not to deny the benefits of having easy access to an efficient, universal system of information exchange. It is to deny the mythology that shrouds the system. And it is to deny the assumption that the system, in order to provide its benefits, had to take its present form.

Late in his life, the economist John Kenneth Galbraith coined the term ‘innocent fraud’. He used it to describe a lie or a half-truth that, because it suits the needs or views of those in power, is presented as fact. After much repetition, the fiction becomes common wisdom. ‘It is innocent because most who employ it are without conscious guilt,’ Galbraith wrote in 1999. ‘It is fraud because it is quietly in the service of special interest.’ The idea of the computer network as an engine of liberation is an innocent fraud.

I love a good gizmo. When, as a teenager, I sat down at a computer for the first time – a bulging, monochromatic terminal connected to a two-ton mainframe processor – I was wonderstruck. As soon as affordable PCs came along, I surrounded myself with beige boxes, floppy disks and what used to be called ‘peripherals’. A computer, I found, was a tool of many uses but also a puzzle of many mysteries. The more time you spent figuring out how it worked, learning its language and logic, probing its limits, the more possibilities it opened. Like the best of tools, it invited and rewarded curiosity. And it was fun, head crashes and fatal errors notwithstanding.

In the early 1990s, I launched a browser for the first time and watched the gates of the web open. I was enthralled – so much territory, so few rules. But it didn’t take long for the carpetbaggers to arrive. The territory began to be subdivided, strip-malled and, as the monetary value of its data banks grew, strip-mined. My excitement remained, but it was tempered by wariness. I sensed that foreign agents were slipping into my computer through its connection to the web. What had been a tool under my own control was morphing into a medium under the control of others. The computer screen was becoming, as all mass media tend to become, an environment, a surrounding, an enclosure, at worst a cage. It seemed clear that those who controlled the omnipresent screen would, if given their way, control culture as well.

‘Computing is not about computers any more,’ wrote Nicholas Negroponte of the Massachusetts Institute of Technology in his bestseller Being Digital (1995). ‘It is about living.’ By the turn of the century, Silicon Valley was selling more than gadgets and software: it was selling an ideology. The creed was set in the tradition of US techno-utopianism, but with a digital twist. The Valley-ites were fierce materialists – what couldn’t be measured had no meaning – yet they loathed materiality. In their view, the problems of the world, from inefficiency and inequality to morbidity and mortality, emanated from the world’s physicality, from its embodiment in torpid, inflexible, decaying stuff. The panacea was virtuality – the reinvention and redemption of society in computer code. They would build us a new Eden not from atoms but from bits. All that is solid would melt into their network. We were expected to be grateful and, for the most part, we were.

Our craving for regeneration through virtuality is the latest expression of what Susan Sontag in On Photography (1977) described as ‘the American impatience with reality, the taste for activities whose instrumentality is a machine’. What we’ve always found hard to abide is that the world follows a script we didn’t write. We look to technology not only to manipulate nature but to possess it, to package it as a product that can be consumed by pressing a light switch or a gas pedal or a shutter button. We yearn to reprogram existence, and with the computer we have the best means yet. We would like to see this project as heroic, as a rebellion against the tyranny of an alien power. But it’s not that at all. It’s a project born of anxiety. Behind it lies a dread that the messy, atomic world will rebel against us. What Silicon Valley sells and we buy is not transcendence but withdrawal. The screen provides a refuge, a mediated world that is more predictable, more tractable, and above all safer than the recalcitrant world of things. We flock to the virtual because the real demands too much of us.

‘You and I are alive at this moment.’ That Wired story – under the headline ‘We Are the Web’ – nagged at me as the excitement over the rebirth of the internet intensified through the fall of 2005. The article was an irritant but also an inspiration. During the first weekend of October, I sat at my Power Mac G5 and hacked out a response. On Monday morning, I posted the result on Rough Type – a short essay under the portentous title ‘The Amorality of Web 2.0’. To my surprise (and, I admit, delight), bloggers swarmed around the piece like phagocytes. Within days, it had been viewed by thousands and had sprouted a tail of comments.

So began my argument with – what should I call it? There are so many choices: the digital age, the information age, the internet age, the computer age, the connected age, the Google age, the emoji age, the cloud age, the smartphone age, the data age, the Facebook age, the robot age, the posthuman age. The more names we pin on it, the more vaporous it seems. If nothing else, it is an age geared to the talents of the brand manager. I’ll just call it Now.

It was through my argument with Now, an argument that has now careered through more than a thousand blog posts, that I arrived at my own revelation, if only a modest, terrestrial one. What I want from technology is not a new world. What I want from technology are tools for exploring and enjoying the world that is – the world that comes to us thick with ‘things counter, original, spare, strange’, as Gerard Manley Hopkins once described it. We might all live in Silicon Valley now, but we can still act and think as exiles. We can still aspire to be what Seamus Heaney, in his poem ‘Exposure’, called inner émigrés.

A dead bison. A billionaire with a gun. I guess the symbolism was pretty obvious all along.

Beyond Palliative Care


By arranjames

Source: Synthetic Zero

Not all that long ago the curators of this blog started talking about the possibility of the palliative care of the Earth. Recently dmf posted up a podcast dealing with the same topic. I haven’t listened to it yet so won’t be drawing on it in this post. I wanted to take a few minutes to experiment with the senses of the phrase “palliative care of the Earth”.

First of all, what is palliative care? Like all attempts at sense, it is a contested battleground, rife with bullet holes and no-man’s lands, with various armies massed and pressing on it. One such army is the global institutional Roman legion that is the World Health Organisation. The WHO loves definitions. One could almost assume it employed nothing but glossophiliacs who spent their days and nights writing endless variations on definitions and who, in their frenzied madness, ended up trying to murder the very words they were seeking to play midwife to. The WHO definition is long. And vague. You can read it here. Operationalising a little, we can extract the fundamentals: palliative care seeks to make life as liveable as possible for the dying body and for the bodies who will mourn it.

The Earth as a system of ecosystems, an ecological metasystem, is considered as a body composed of bodies that play habitat and inhabitant, catalyst and anticatalyst, metabolism and metabolite, and so on, to one another [1]. Not all of these bodies are living, but as with any machinic assemblage this Earth emerges as a necessarily heterogenetic improvisation (the imposition of unpredictability) that depends on both organic and machinic kinds [2]. A cyborg of a different order than Robocop, the Earth is more akin to the Half-Faced Man, a machine that wants to be human. This isn’t to say the Earth wants to be human, or that it wants anything in any way we’d recognise as desire, although interspecies sexuality clearly indicates a queer promiscuity among nonhuman organisms, but that the Earth has assembled in such a way that the organic has come out of the inorganic. From a certain perspective: so what? It’s all just interlocking mechanism. Well, fine. But its dying is what.

But the Earth won’t die. Not yet. Far more likely – and if we stop being so anthropophobic – we’re talking about ourselves. It is us that is dying. It is the palliative care of the human that we should really consider. We opened with a discussion of the dying Earth because it is this dying that is killing us: a vicarious species-suicide? These are dark thoughts, implying a loathing so great in our species that we’d take out everything else just to slit our own throats once and for all. But we’re not that grand; we’re all too limited, all too human still. Like smokers in the 1950s, we didn’t know what we were doing; then we did, and did nothing about it; then everyone said it was too late. We’re not quite sure of the periodicity. We don’t know if it is too late. What we do know is that we’ve had a mass terminal diagnosis and there is no consensus on the prognosis. What are we dying of, then, if not some anthropathology [3]?

Does the species have a body? Or is the species also a hallucination? Hallucinations can die too – ask a “schizophrenic” on Clozapine. The WHO is an ensemble of equipment and technique in the same way Guattari once spoke of the unconscious. It is almost as if the WHO invented health (“a total state”) and must administer it. What is it that the WHO wants to say about palliative care? It has things to say about the reduction of suffering; the affirmation of life and death; it seeks neither the hastening nor the postponing of death; it looks to psychology and spirituality; produces support systems; is multidisciplinary; is life enhancing; it’s never too early to start.

How does this map onto humanity? We’re just scale in a sense, where “humanity” stands in for “person”. So it is the reduction of the suffering of the species and the enhancement of its existence. This follows nicely from Lacanian ideas that we live by both the reality-principle and jouissance, by both aversion and hedonics. We’re also not talking about killing ourselves off, so no reproductions of Zapffe’s conclusion to ‘The Last Messiah’ – we aren’t about to go forth to be fruitless and let the Earth be silent after us (as if it would be). I think it’s safe to say fuck Messiahs, especially last ones.

We’re also looking to the psychology and spirituality of humanity? Doesn’t this translate quite well into looking at cognitive biases and metacognitive illusions, and at affects and emotions in their normativity and pragmatics? Support systems like what? New technologies and alternative energy sources? Sure. But it can’t be limited to that: what if extinction is much closer than we think? Well, think about it for a second. The process has already begun. And I’m not just talking about Tim Morton’s plutonium, or irreversible glacial melting, or any other particular doomsday protocol. If we’ve been paying attention to the three ecologies then we should have spotted that multiple extinctions have been in process for a long time now. Systems of systems have been disintegrating within whatever it is – or was – that we called the human for decades. By 2050 or so even the strange hominid form will have been eradicated, recorded in images that no creature surviving us will care much about.

So palliative care is about easing our way into dying off. It is about quietly doing our best to assemble societies in which we can humanely coexist with wild being until our time’s really up. That was certainly my feeling two years ago when I wrote a post on extinction. Back in 2012 I declared that

Any post-nihilistic pragmatics will require that we operate consciously within catastrophic time and that we surrender the impossible task of removing precariousness from the human condition. These are the same project in fact, given that the former reveals to us the anthropocentrism of the latter…the benign revelation that precariousness is the condition of all things. If this garners the accusation of privileging the perspective of extinction and heat death, then this is a necessary part of the pragmatic ethics of a self-management of extinction. As I have said before, the task now is to think the ethics of palliative care for the species. The dream of species-being is realised at last.

Today I wonder at the sadness of that post. At the time I’d thought of it as realistic, hard-headed, unsentimental. All that. But ultimately, I think it was a depressive position. If 2050 is the time limit then maybe it is too late, and maybe we should be looking at harm-reduction and palliation. But for me this could lead us to a politics of the worst, in which we settle for the ‘least of all possible evils’, a mode of thought that Eyal Weizman has convincingly shown to be at work in some of the worst atrocities in modern history. In trying to create the conditions of the least possible harm at the scale of the species, we might actually end up with a resigned sigh in the face of forces we might be able to do something about. As such, the only way to “self-manage our extinction” has to be truly palliative, in that it doesn’t just avoid suffering but also seeks jouissance. In fact, I’d concretise the program into what David Roden has been talking about in terms of a speculative posthumanism, in which posthuman beings that emerge out of the human bear as much intuitive relation to us as we do to our ancestral forebears. The jouissance of this is less Lacanian and more about a sweeping mutative recombinatory innovation in the normativity of posthumanity.

The pessimist and transhumanist programs in fact belong together when we view harm reduction without the depressive targeting system, when we dare to accelerate palliative philosophy into a praxis of assisted dying. What is born from the uneven unity of these programs is what I’m (stupidly; deliriously; in a state of panic) calling transpessimism: the speculative conviction that humanity must become extinct by becoming something else.

[1] From Wikipedia:

System of systems is a collection of task-oriented or dedicated systems that pool their resources and capabilities together to create a new, more complex system which offers more functionality and performance than simply the sum of the constituent systems. Currently, systems of systems is a critical research discipline for which frames of reference, thought processes, quantitative analysis, tools, and design methods are incomplete. The methodology for defining, abstracting, modeling, and analyzing system of systems problems is typically referred to as system of systems engineering.

[2] I’ve stolen the term “heterogenetic improvisation” from David Malvinni’s study of Roma music: “heterogenic improvisation…the divided interval where improvisation originates out of otherness while identifying with itself…grows out of a desire for a purely involved performance, a symbiosis of listener and sound, via an identification of the same with its unpredictable mutation” (pp. 47-48).

[3] The term “anthropathology”, a neologism combining “anthro”, pertaining to the human, with “pathology”, pertaining to disease, was coined in 2007 by Colin Feltham, a practising counsellor and counselling theorist turned pessimist philosopher. Feltham defines the condition of anthropathology at length in his book-length treatment of the “condition” – as valid as most psychopathological categories – but also presents a condensed definition as follows:

‘the marked, universal tendency of human beings individually and collectively towards suffering, deceptiveness, irrationality, destructiveness and dysfunction, including an extreme difficulty in perceiving and freeing ourselves from this state’ (What’s Wrong With Us? The Anthropathology Thesis, 2007, p. 256).