After the Crash

Dispatches From a Long Recovery (Est. 10/2024)

Cyberpunk is Now and No One Knows What to Do With It

By Pattern Theory

Source: Modern Mythology

Cyberpunk broke science fiction. Creeping in alongside the commercialization of the internet, it extrapolated the corruption and dysfunction of its present into a brutal and interconnected future that remained just a heartbeat away. Cyberpunk had an attitude that refused to be tamed, dressed in a style without comparison. Its resurgence shows that little has changed since its inception, and that’s left cyberpunk incapable of discussing our future.

Ghost in the Shell got the live-action treatment in 2017, a problematic remake of the 1995 adaptation. Some praised its art direction for increasing the visual fidelity of retrofuture anime cityscapes, but the general consensus was that the story failed to apply the care and consideration towards human brains and synthetic bodies that Mamoru Oshii had more than two decades before. A few months later came Blade Runner 2049, a sequel to the cyberpunk classic. Critics and fans praised it for high production values, sincere artistic effort, and meticulous direction. Yet something had gone wrong. Director Denis Villeneuve couldn’t shake the feeling that he was making a period movie, not one about the future.

Enough has changed since the 1980s that cyberpunk needs reinvention. New aesthetics. An expanded vocabulary. Code 46 managed this years ago. It rejects a fetish for all things Japanese and embraces China’s economic dominance. Conversations begin in English and are soon peppered with Mandarin and Spanish. Life takes place at night to avoid dangerous, unfiltered sunlight. Corporations guide government decisions. Genetics determine freedom of movement and interaction. Climate refugees beg to leave their freeway pastures for the safety of cities.

Code 46 is cyberpunk as seen from 2003, a logical future that is now also outdated.

If Blade Runner established the look, Neuromancer defined cyberpunk’s voice. William Gibson’s debut novel was ahead of the curve by acknowledging the personal computer as a disruptive force when the Cold War was at its most threatening. “Lowlife and high tech” meant the Magnetic Dog Sisters headlining some creep joint across the street from a capsule hotel where console cowboys rip off zaibatsus with their Ono Sendai Cyberdeck. But Gibson’s view of the future would be incomplete without an absolute distrust of Reaganism:

“If I were to put together a truly essential thank-you list for the people who most made it possible for me to write my first six novels, I’d have to owe as much to Ronald Reagan as to Bill Gates or Lou Reed. Reagan’s presidency put the grit in my dystopia. His presidency was the fresh kitty litter I spread for utterly crucial traction on the icy driveway of uncharted futurity. His smile was the nightmare in my back pocket.” — William Gibson

“Fragments of a Hologram Rose” to Mona Lisa Overdrive is a decade of creative labor that was “tired of America-as-the-future, the world as a white monoculture.” The Sprawl is a cyberpunk trilogy where military superpowers failed and technology gave Japan leadership of the global village. Then Gibson wrote Virtual Light and readers witnessed extreme inequality shove the middle class into the gig economy as corporations schemed to profit off natural disasters with proprietary technology.

Gibson knew the sci-fi he didn’t care for would absorb cyberpunk and tame its “dissident influence”, so the genre could remain unchanged. “Punk” is the go-to suffix for emerging subgenres that want to appear subversive while posing a threat to nothing and no one. It’s how “hopepunk” becomes a thing. But to appreciate cyberpunk’s assimilation, look at how it’s presented sincerely.

CD Projekt Red (CDPR), known for the Witcher game series, has spent six years developing what’s arguably the most anticipated video game of the moment, Cyberpunk 2077. Like Gibson, Mike Pondsmith, creator of the original “pen-n-paper” RPG and collaborator on this adaptation of his work, has had his writing absorbed by mainstream sci-fi. CDPR could survive on that 31-year legacy, but they insist they’re taking their time with Cyberpunk 2077 to craft an experience with a distinct political identity that somehow allows players to remain apolitical. In a way this reflects CDPR’s reputation as a quality-driven, pro-consumer business, one that has nonetheless driven talent away by demanding excessive hours and promoting a hostile attitude towards unions. This crunch culture is a problem across the industry.

We’ll soon see how Cyberpunk 2077 has turned out. What we can infer from its design choices, like giving protagonist V a high-collar jacket seen on the cover of the 2nd edition game book from 1990, is that Cyberpunk 2077 will be familiar. Altered Carbon and Ready Player One share this problem. Altered Carbon is so derivative of first-wave cyberpunk it’s easy to forget it’s based on a novel from 2002. Ready Player One at least has the courtesy to be shameless in its love of pop culture, proud to proclaim that nothing is more celebrated today than our participation in media franchises without ever considering how that might be a problem.

What’s being suggested, intentionally or not, is that contemporary reality has avoided the machinations of the powerful at a time when technology is wondrous, amusing, and prolific. If only we were so lucky.

238 cities spent more than a year lobbying Amazon, one of two $1 trillion corporations in existence, for the privilege of hosting its new office. In November it was announced that Amazon would expand to Crystal City, Virginia and Long Island City, Queens. Plenty of New Yorkers are incensed that the world’s largest online marketplace will get $3 billion in subsidies, tax breaks, and grants to further disrupt a housing market that takes more from them than any city should allow. Some Amazon employees were so excited to relocate they made down payments on their new homes before the decision went public, telling real estate developers to get this corner of New York ready for a few thousand transplants. But what of the people already there?

Long Island City is home to the Queensbridge Houses, the largest housing project in the US. Built in 1939, the complex is home to more than 6,000 people with an average income of $16,000. That’s far below the $54,000 for Queens residents overall. But neither group is anywhere near the average salary for the 25,000 employees Amazon will bring with them, which will exceed $150,000. How many of those positions will be filled by locals? How many will come from Queensbridge?

Over 800 languages are spoken in Queens, making it the most linguistically diverse place in the world. Those diverse speakers spend over 30% of their income on rent. They risk being priced out of their neighborhoods. Some will be forced out of the city. Has Governor Cuomo considered the threat this deal poses to people’s homes? Has Mayor de Blasio prepared for the inevitable drift to other boroughs once property values spike? Looking at Seattle and San Francisco, there’s no reason to expect local governments to be proactive. So New Yorkers have taken up the fight on their own.

Amazon boss Jeff Bezos toyed with these politicians. He floated the idea that any city could become the next Silicon Valley and they believed him. They begged for his recognition, handed over citizen data, and took part in the $100 billion ritual of subsidizing tech companies.

It was all for nothing. Crystal City is a 20-minute drive from Bezos’ house in Washington DC, where Amazon continues to increase its spending on lobbyists. That’ll seem like a long commute compared to the helicopter ride from Long Island City, the helipad for which is subsidized by the city, to Manhattan, the financial and advertising capital of the world, where Bezos owns four more houses.

The auction for Bezos’ favor was a farce. New York and Virginia give him regular access to people with decision-making power, invaluable data, and institutions that are sure to expand his empire. These cities were always the only serious options.

Amazon’s plans read like the start of a corporate republic, a cyberpunk trope inspired by company towns. Employers were landlords, retailers, and even moral authorities to workforces too deep in debt to quit. Many had law enforcement and militias to call on in addition to the private security companies they hired to break labor strikes, investigate attempts at unionization, and maintain a sense of order that resulted in massacres like the one at Ludlow, Colorado.

Amazon is known for labor abuses: monitoring and tracking speed and efficiency in warehouses where workers skip bathroom breaks and have collapsed from heat exhaustion. They sell unregulated facial recognition services to police departments, knowing the technology misidentifies subjects because of inherent design bias. Companies with a history of privacy abuses have unfettered access to their security devices. They control about half of all e-commerce in the US and, as Gizmodo’s Kashmir Hill found out, it is impossible to live our lives without encountering Amazon Web Services.

It doesn’t take a creative mind to imagine similar exposition being attributed to corporate villains like Cayman Global or Tai Yong Medical.

Rewarding corporations for their bad behavior is just one way the world resembles a fictive dystopia. We also have to face rapid ecological and institutional decay that fractionally adjusts our confidence in stability, feeding a persistent situational anxiety. That should make for broader and bolder conversations about the future, and a few artists have managed to do that.

Keiichi Matsuda is the designer and director behind Hyper-Reality, a short film that portrays augmented reality as a fever dream that influences consumption, and shows how freeing and frightening it is to be cut off from that network. Matsuda’s short film got him an invitation to the World Economic Forum in Davos to “speak truth to power.” What Matsuda witnessed were executives and billionaires pledging responsibility with t-shirts and sustainability talk while simultaneously destroying the environment, as an audience of their peers and the press nodded and applauded “this brazen hypocrisy.” So Matsuda took a stanchion to his own installation.

Independence means Matsuda gets to decide how to talk about technology and capitalism, and how to separate his art and business. It also means smaller audiences and fewer productions.

Sam Esmail used a more visible platform to “bring cyberpunk to TV” with Mr. Robot. Like Gibson’s Pattern Recognition, it’s cyberpunk retooled for the present — post-cyberpunk. Esmail never hesitates to place our real-world villains in Mr. Robot. Enron is an influence on the logo design and tactics of Evil Corp. Google, Verizon, and Facebook are called out for their complicity with the federal government in exposing customer data. AT&T’s Long Lines building, an NSA listening post since the 1970s, plays the role of a corporate data hub that reaches across the country. Even filming locations serve as commentary.

An anti-capitalist slant runs through Mr. Robot, exposing the American dream as a lie and our concept of meritocracy as a tool to protect the oligarchy, and presenting hackers, in direct contact with a world of self-isolation and exploitation, as those who dare to hope for a future affected by people rather than commerce. And Esmail somehow manages this without interference from NBC.

Blade Runner will get more life as an anime. Cowboy Bebop is joining Battle Angel Alita in live action. Altered Carbon is in the process of slipping into a new sleeve. There’s no shortage of revivals, remakes, and rehashing of cyberpunk’s past on the way. They’ll get bigger audiences than a short film about submitting to algorithms. More sites will discuss their pros and cons than a mobile tie-in that name-drops Peter Kropotkin and Maria Nikiforova. But in being descriptive and prescriptive, moving to the future and looking for sure footing in the accelerated present, Matsuda’s and Esmail’s work reminds us that cyberpunk needs to be more than just repeating what’s already been said about yuppies, Billy Idol, and the Apple IIc.

We live at a time where 3D printing is so accessible refugees can obtain prostheses as part of basic aid. People forced to migrate because of an iceless Arctic will rely on that assistance. Or we could lower temperatures and slow climate change by spraying the atmosphere with sulfate, an option that might disrupt advertising in low orbit. Social credit systems are bringing oppressive governments together. Going cashless is altering our expectations of others. Young people earn so little they’re leveraging nude selfies to extend meager lines of credit. Productivity demands and constant notifications are enough to drive some into a locked room, away from anything with an internet connection. Deepfakes deny women privacy, compromise their identity, and obliterate any sense of safety in exchange for porn. Online communities are refining that same technology, making false video convincing, threatening our sense of reality. Researchers can keep our memories alive in chat bots distilled from social media, but the rich will outlive us all by transfusing bags of teenage blood purchased through PayPal.

In a world that increasingly feels like science fiction it’s important to remind ourselves that writing about the future is writing about the present. Artists worthy of an audience should be unable to look at the embarrassment of inspiration around them and refuse the chance to say something new.

Initial Thoughts on Blade Runner 2049

Upon hearing early reports of a planned Blade Runner sequel a couple years ago, I felt both anticipation and dread. I considered it a singular vision which didn’t necessarily need a sequel, yet could understand the desire to re-immerse oneself in the compelling world it introduced. Re-experiencing the film through its Director’s Cut and Final Cut versions in subsequent years seemed to me as satisfying as watching sequels, since even the relatively minor changes had a significant impact on its meaning, and the richness of the sound and production design allows for the discovery of new details with every viewing. Also, one’s subjective experience watching even the same movie can be vastly different depending on one’s age and other circumstances.

One of my earliest cinematic memories was seeing the first Star Wars film as a toddler. At around the same time I remember staying up late with my parents to watch the network television premiere of 2001: A Space Odyssey. Though I was too young to fully comprehend those films’ narratives, the spectacle and sounds definitely left an impression and established a lifelong appreciation for the sci-fi genre and its mind-expanding possibilities.

Flash forward to an evening sometime in early 1982. After viewing a commercial for Blade Runner I instantly knew it was a movie I had to see. In the short trailer there were glimpses of flying cars over vast cityscapes, the guy who played Han Solo in a trench coat, and bizarre humanoid robots within settings as strange yet detailed as 2001: A Space Odyssey. My parents, responsible as they were, refused to give in to incessant demands to see Blade Runner, and that summer I must have been the only kid who reluctantly agreed to see “E.T.” as a compromise. I probably did enjoy it more than I expected to, but might have enjoyed it more had I not viewed it as a weak Blade Runner substitute and had I actually paid attention to the entire film.

Back then our family usually saw films at drive-in theaters, and the one we went to that night had two screens: one showing E.T. and the other, to my delight and frustration, happened to be showing Blade Runner. Even without sound and at a distorted angle I was awestruck by the establishing shots of LA in 2019 (which I glanced over to witness just as E.T.’s ship was landing on the screen in front of us), and for the entire duration of the films my eyes would switch back and forth between screens. Even without understanding anything about the plot of Blade Runner, it made the most fantastical elements of E.T. pale in comparison. Judging from the box-office receipts of its theatrical run, the majority must have thought otherwise, since Blade Runner earned a relatively meager $28 million while E.T. was the breakout hit of the year with nearly $360 million.

Within a few years I’d see portions of Blade Runner on cable TV at a friend’s house and finally saw the complete film after my family got a VCR; it was one of the first videos I rented. The film served as a gateway to many other interests such as cyberpunk, film noir, and electronic music, but most importantly, an appreciation of the novels of Philip K. Dick. Like a psychedelic drug, they inspired philosophical questioning regarding the nature of reality, consciousness, society and what it means to be human.

This background, which is probably not too dissimilar to other stories of obsessive fandom, outlines how one’s immersion in media is rooted not just in the work itself but in how it resonates with and shapes aspects of one’s identity and personal narrative as much as other memories. There’s also a nostalgia factor involved because, similar to a souvenir or any object with sentimental value, revisiting such media can recapture a sense of the feelings and sensations associated with the initial experience and sometimes the milieu of the content as well. Nostalgia is a longing for the past, even a past one has never directly experienced, that never was and/or never will be, often prompted by loneliness and disconnectedness. Because it can sometimes provide comfort and hope, it’s a feeling too often exploited by the marketing industry as well as media producers such as those behind reboots and sequels. Though Blade Runner 2049 may not have been solely created to cash in on nostalgia for the original, as with most big studio sequels it’s still a factor.

The type of nostalgia evoked by Blade Runner is singular, for it envisions a (near and soon to be past) future through the lens of the early eighties combining a pastiche of styles of previous eras. The film also serves as a meditation on the importance of memory and its relation to identity and the human experience. In a sense, being a longtime fan of the film is like having nostalgia for distilled nostalgia. Also unique is the fact that it took 35 years for the sequel to get made, just a couple years shy of the year in which the original takes place. The long delay is largely due to Blade Runner being so far ahead of its time it took over a decade for it to be widely regarded as a science fiction masterwork. Also, it took an additional decade and a half to develop plans for a sequel. But perhaps now is the ideal time for a follow-up as aspects of our world become more dystopian and there’s a greater need for nostalgic escape, even through narratives predicting dystopia.

While the future world of the original Blade Runner was definitely grim, it was also oddly alluring due to its depiction of a chaotic globalized culture, exotic yet functional-looking technology and a hybrid retro/futuristic aesthetic shaped by sources as diverse as punk rock fashion, Heavy Metal magazine, film noir and Futurism, among many others. The imagery of Blade Runner 2049 expands on the original by visualizing how the future (or alternate reality) LA has evolved over the course of 30 years as well as the environmentally and socially devastating impact of trying to sustain a technocratic corporate global system for so long.

Blade Runner opens with shots of oil refineries in the city intercut with close-ups of a replicant’s eye. 2049 opens with a close-up of an eye and transitions to an overhead shot of an endless array of solar panels, indicating a post peak-oil world. Despite the use of cleaner energy, the world of 2049 is far from clean with the entirety of San Diego depicted as a massive dumping ground for Los Angeles. Scavengers survive off the scraps which are recycled into products assembled by masses of orphaned child laborers in dilapidated sweatshop factories.

The Los Angeles of Blade Runner 2049 looks (and is) even colder and more foreboding than before. Gone are the Art Deco-inspired architecture and furnishings, replaced by Brutalist architecture and fluorescent-lit utilitarian interiors (with a few exceptions such as Deckard’s residence, Stelline Corporation headquarters and the Wallace Corporation building). Aerial shots reveal a vast elevated sprawl of uniform city blocks largely consisting of dark flat rooftops with glimmers of light emanating from below, visible only in the deep but narrow chasms between.

One of the more prominent structures is the LAPD headquarters, which looks like an armored watchtower, signalling its role as a hub of the future surveillance state panopticon. Though an imposing feature of the city’s skyline, it’s dwarfed by larger structures housing even more powerful institutions. Just as a massive ziggurat owned by the Tyrell Corporation dominated the cityscape of the first film, by 2049 the Wallace Corporation has bought out the Tyrell Corporation and not only claims the ziggurat but has constructed an absurdly large pyramid behind it. Protecting the entire coastline of the city is a giant sea wall, presumably to prevent mass flooding from rising sea levels.

In a referential nod to the original film, city scenes of 2049 display some of the same ads such as Atari, Coca-Cola and Pan Am, but even more distracting are product placements for Sony, one of the companies which produced the new film. Such details might work as “Easter eggs” for fans (and shareholders), but they take away from the verisimilitude of the world depicted in the film, where the Wallace Corporation has such seeming dominance over the economy and society in general that it probably wouldn’t leave much room for competitors large enough to afford mass advertising.

While the background characters in the city of the first film seemed rude or largely indifferent to one another, 30 years later citizens are more outwardly hostile. This could reflect increasing social tensions from economic stratification as well as hostility towards replicants because the protagonist of this film is openly identified as one. Speaking of which, Ryan Gosling turns in an excellent performance as the new Blade Runner, Officer K (aka Joe, an obvious reference to Joseph K from Kafka’s “The Trial”).

Ironically, the replicants and other forms of AI in 2049 seem a little more self-aware and human-like while the humans and social institutions have become correspondingly android-like. From the perspective of the future CEOs (and some today), both replicants and non-wealthy humans (known as “little people” in cityspeak) exist to be exploited for labor and money and then “retired” when no longer needed. Reflecting this brutal reality is the largely grey and drab color scheme of the landscapes, interiors, and fashions. Adding to the mood is the soundtrack which, while at times evoking the calmer and more subtle Vangelis music of the original, is more often louder and harsher, sometimes blending with the noisy diegetic (background) soundscape.

2049’s screenplay is almost a meta-sequel, introducing plot elements seemingly designed to address problems and inconsistencies in the original which have been pointed out by fans and critics through the years. Numerous references to Blade Runner, while nostalgic and crowd-pleasing, are almost distracting enough to break the spell of the film (at least for those who’ve re-watched the original enough times to memorize every detail). Fortunately, just as frequently, new revelations, concepts and hints at potential new directions pull one back in. I especially appreciated the further exploration of the origins and impact of false memories and their parallels to the creation and consumption of media, and the way the film expanded the scope of the story beyond the city. Also surprising were references to films partly inspired by the works of Philip K. Dick such as Ghost in the Shell, The Matrix, and Her as well as stylistic influences from more contemporary aesthetic subcultures such as glitch and vaporwave.

As with most sequels, the main draw for fans is the chance to see familiar faces from the original, and 2049 doesn’t disappoint too much. Judging from the posters, trailers, interviews, etc. it was clear Harrison Ford would make a return, but unfortunately he doesn’t appear until the majority of the film has passed. Nevertheless, the reappearance of Ford’s character Deckard was memorable, found by Officer K as a disheveled hermit in an abandoned casino surrounded by copious amounts of alcohol and ancient pop culture detritus. Deckard is apparently as much of a drinker as in the first film, but now not just to block out the pain of the past and present but to escape to an idealized past. Though his involvement in the plot seemed too brief, it nevertheless plays a pivotal role in resolving the central mystery of the film and providing additional metacommentary.

Ford’s performance is arguably more compelling than his work in the original, though his character’s lack of charisma in the first film could be seen as intentional. Deckard’s character arc in the film, as well as those of the other two iconic ’80s roles Ford has reprised, cements his status as our culture’s archetype for the deadbeat father. This seems inevitable in hindsight because for a generation of latchkey kids (many with actual deadbeat dads), stars such as Harrison Ford were virtually surrogate father figures. Thus, it makes sense that the beloved characters Ford drifted away from for so long would be written as variations of a long-absent deadbeat parent in their last installments.

An interesting detail about the way Deckard was characterized in the film was how he seemed more in line with a typical baby boomer today than the Gen X-er someone that age would actually be 32 years from now. For example, people in their seventies today are probably more likely to be nostalgic for Elvis and Sinatra than a seventy-something person in 2049, who in our actual timeline would more likely have spent formative years listening to grunge or hip-hop. A possible subtextual meaning might be that, like false memories, nostalgia for media of enduring cultural value transcends lived experience. The referencing of “real” pop-culture figures within the world of the film seemed anachronistic at first, but the way it was done was interesting and worked with the themes and aesthetic (I suppose it’s preferable to having something like Beastie Boys’ “Sabotage” shoe-horned into the film like in the Star Trek reboots). Getting back to the point, in the original Blade Runner, nostalgia permeated the film through its themes, production design, costumes and soundtrack. In Blade Runner 2049, nostalgia is a subtext of repeated callbacks to the original film, K’s idealized retro relationship with his AI girlfriend Joi, and Deckard’s hideout within the ruins of a city once associated with fun and glamour. The simulacra of iconic figures from the past like Elvis and Marilyn Monroe (and Ford) haunting the deserted casino like ghosts reinforce the idea of media and culture’s ability to “implant” memories and resultant nostalgia.

As for the finale, I was disappointed that it was so far removed from the unconventional conclusion of the showdown between Roy Batty and Deckard. One could argue it’s a reflection of the state of the world (of both film and reality), but it’d be nice to have a little more creativity and risk-taking. Though viscerally exciting and suspenseful, it wasn’t distinguishable enough from countless modern action films to be truly memorable. More satisfying was the epilogue, which paralleled the contemplative nature of the original while reconnecting to the film’s recurring themes.

In a sense, the writers and director of Blade Runner 2049 were in a catch-22 situation. Creating a film too unlike or too similar to the original Blade Runner would provoke criticism from fans. What director Denis Villeneuve and co-writers Hampton Fancher and Michael Green have managed to pull off is a balancing act of a film that’s unique in many ways yet interwoven with the original; nostalgic, but not in an obvious or overly sentimental way. Both have their flaws, but while I admire the thought and craft put into the sequel, I prefer the originality, tone, texture and atmosphere of Blade Runner. Blade Runner 2049 will likely satisfy most sci-fi fans, but I’m not sure it proves a sequel was necessary or that it stands alone as a classic.

Though not given the recognition it deserved in its time, Blade Runner was a groundbreaking and visionary film, raising the bar for intellectual depth, moral complexity, production design and special effects to a degree not seen since 2001: A Space Odyssey. Its influence can be spotted in countless dystopian science fiction films made since. Though it’s too early to tell how influential Blade Runner 2049 will be, it doesn’t seem to have pushed the genre forward to a similar extent (of course contemporaneous opinions can seem wildly off the mark in hindsight). Regardless, it’s an above-average science fiction film by any reasonable standard, so it’s unfortunate that, judging from disappointing initial box-office reports, it seems to be following in the footsteps of the first Blade Runner pretty closely in that regard. Time will tell whether it achieves a similar cult status in years to come. Perhaps in 35 years?

 

David Cronenberg’s Videodrome Was a Technology Prophecy

Editor’s note: Since today marks director David Cronenberg’s 73rd birthday, it’s a good time to appreciate one of his greatest and most notorious works. Though my favorite of his remains the distinctly PKD-like eXistenZ, a close runner up is the cult classic Videodrome, which the following analysis reappraises in the context of contemporary social media-fixated culture.

By Nathan Jurgenson

Source: Omni Reboot

David Cronenberg’s vision of technology as the “new flesh” in Videodrome isn’t so shocking anymore.

Videodrome is the best movie ever made about Facebook.

What felt “vaguely futuristic” about it in 1983 is prescient today: technology and media are ever more intimate, personal, embodied, an interpenetration that David Cronenberg’s film graphically explores.
Videodrome offers a long-needed correction to how we collectively view and talk about technology. As the anti-Matrix, Videodrome understood that media is not some separate space, but something which burrows into mind and flesh. The present has a funny habit of catching up with David Cronenberg.
Still, Videodrome is deeply of its time and place. It’s set in Toronto, where Cronenberg was born and studied at the same time as University of Toronto superstar media theorist Marshall McLuhan, who coined the phrase “the medium is the message.” Beyond McLuhan’s reputation, Toronto was also known as a wired city; among other things, it was an early adopter of cable television.
Appropriately, Videodrome follows a Toronto cable television president, Max Renn (James Woods). He becomes involved with a radio psychiatrist named Nikki Brand (Debbie Harry, of Blondie fame), who reminds us of popular criticisms of television culture: we want to be stimulated until we’re desensitized, becoming (at best) apolitical zombies and (at worst) amoral monsters. Television signal saturates this film. The satellite dishes, screens, playback devices, and general aesthetics of analogue video are on glorious, geeked-out display. Although Videodrome’s operating metaphor is television, this film can be understood as being a fable about media in general. And what seemed possible with television in 1983 seems obvious today with social media.
Over the course of the film, Max comes to know a “media prophet” named Professor Brian O’Blivion—an obvious homage to Marshall McLuhan. O’Blivion builds a “Cathode Ray Mission,” named after the television set component which shoots electrons and creates images. The Cathode Ray Mission gives the destitute a chance to watch television in order to “patch them back into the world’s mixing board,” akin to McLuhan’s notion of media creating a “global village,” premised on the idea that media and technology, together, form the social fabric. O’Blivion goes on to monologue, “The television screen is the retina of the mind’s eye. Therefore, the television screen is part of the physical structure of the brain. Therefore, whatever appears on the television screen appears as raw experience for those who watch it. Therefore, television is reality; and reality is less than television.”
This is Videodrome’s philosophy. It’s the opposite of The Matrix’s reading of Baudrillard’s theories of simulation, and it goes completely against the common understanding of the Web as “virtual,” of the so-called “offline” as “real.” O’Blivion would agree when I claim that “it is wrong to say ‘IRL’ to mean offline: Facebook is real life.”
This logic—that the Web is some other place we visit, a “cyber” space, something “virtual” and hence unreal—is what I call “digital dualism” and I think it’s dead wrong. Instead, we need a far more synthetic understanding of technology and society, media and bodies, physicality and information as perpetually enmeshed and co-determining. If The Matrix is the film of digital dualism, Videodrome is its synthetic and augmented opponent.
As P.J. Rey illustrates, fictional Web-spatiality is the favorite digital dualist plot device. Yet more than fiction books and films, what has come to dominate much of our cultural mythology around the Web is the idea that we are trading “real” communication for something simply mechanical: that real friendship, sex, thinking, and whatever else lazy op-ed writers can imagine are being replaced by merely simulated experiences. The non-coincidental byproduct of inventing the notion of a “cyber” space is the simultaneous invention of “the real,” the “IRL,” the offline space that is more human, deep, and true. Where The Matrix’s green lines of code or Neal Stephenson’s 3D Metaverse may have been the sci-fi milieu of the 1990s, the idea of a natural “offline” world is today’s preferred fiction.
Alternatively, what makes Videodrome, and Cronenberg’s oeuvre in general, so useful for understanding social media is their fundamental assumption that there is nothing “natural” about the body. Cronenberg’s trademark flavor of body-horror is highly posthuman: boundaries are pushed and queered, first through medical technologies in Shivers, Rabid, The Brood, and Scanners, then through media technology in Videodrome and eXistenZ, then, most notoriously, in The Fly, where the human and animal merge. If The Matrix is René Descartes, Videodrome is Donna Haraway.
Cronenberg’s characters are consistent with Haraway’s theory of the cyborg: not the half-robot with the shifty laser eye, but you and me. In the film, the goal is never to remove the videodrome signal that is augmenting the body, but to reprogram it. To direct it. As Haraway famously wrote, “I’d rather be a cyborg than a goddess.” “Natural” was never a real option anyways.
Max Renn is especially good at finding the real in the so-called “virtual” because he is equally good at seeing virtuality in the “real.” From the beginning, he understands that much of everyday life is a massive media event devoid of meaning. The old flesh is tired, used up, and toxic. The world is filled with a suffering assuaged only by glowing television screens. As the film progresses, the real and unreal blur, making each seem hyperbolic: hallucinations become tangible, while the tangible drips with a surrealism that’s gritty, jumpy, dirty, erotic, and violent—closer to Spring Breakers than The Wizard of Oz. As such, Cronenberg’s universe is always a little sticky: an unease which begs the nightmares to come true, so that we at least know what’s real.
Videodrome’s depiction of techno-body synthesis is, to be sure, intense; Cronenberg has the unusual talent of making violent, disgusting, and erotic things seem even more so. The technology is veiny and lubed. It breathes and moans; after watching the film, I want to cut my phone open just to see if it will bleed. Fittingly, the film was originally titled “Network of Blood,” which is precisely how we should understand social media, as a technology not just of wires and circuits, but of bodies and politics. There’s nothing anti-human about technology: the smartphone that you rub and take to bed is a technology of flesh. Information penetrates the body in increasingly intimate ways.
This synthesis of the physical and the digital is mirrored in the film’s soundtrack, too. In his book on Videodrome’s production, Tim Lucas calls Howard Shore’s score “bio-electronic” because it was written, programmed into a synthesizer, and played back on a computer in a recording studio while live strings played along. Early in the film, the score is mostly those strings, but as time passes the electronic synthesizers creep up in the mix, forming the bio-electronic synthesis.
The most fitting example of techno-human union in Videodrome is the famous scene of Max inserting his head into a breathing, moaning, begging video screen; somewhere between erotic and hilarious, media and humanity coalesce. There isn’t a person and then an avatar, a real world and then an Internet. They’re merged. As theorists like Katherine Hayles have long taught, technology, society, and the self have always been intertwined. Videodrome knows this, and it shows us with that headfirst dive into the screen—to say nothing of media being inserted directly into a vaginal opening in Max’s stomach, or the gun growing into his hand.

Thirty years after its release, Videodrome remains the most powerful fictional representation of technology-self synthesis. This merger wasn’t invented with the Internet, or even television. Humans and technology have always been co-implicated. We often forget this when talking about the Web, selling ourselves instead a naive picture of defined “virtual” spaces which somehow lack the components of “real” reality. This is why The Matrix and “cyberspace” have long outworn their welcome as a frame for understanding the Internet. It should be of no surprise that body horror is as useful for understanding social media as cyberpunk.

The Role of Dystopian Fiction in a Dystopian World

By Luther Blissett and J. F. Sebastian of Arkesoul

A few years ago, Neal Stephenson wrote a widely-shared article called Innovation Starvation for the World Policy Institute. He began the piece lamenting our inability to fulfill the hopes and dreams of mid-20th century mainstream American society. Looking back at the majority of sci-fi visions of the era, it’s clear many thought we’d be living in a utopian golden age and exploring other planets by now. In reality, the speed of technological innovation has seemingly declined compared to the first half of the 20th century which saw the creation of cars, airplanes, electronic computers, etc. Stephenson also mentions the Deepwater Horizon oil spill and Fukushima disasters as examples of how we’ve collectively lost our ability to “execute on the big stuff”.

Stephenson’s explanation for this predicament is two-fold: outdated bureaucratic structures which discourage risk-taking and innovation, and the failure of cultural creatives to provide “big visions” which dispute the notion that we have all the technology we’ll ever need. While there’s much to be said about archaic, inefficient (and corrupt) bureaucracies, there’s also a compelling argument to be made about the cultural importance of storytelling and art and how best to utilize them. One of the solutions offered by Stephenson, in this regard, is Project Hieroglyph, which he describes as “an effort to produce an anthology of new SF that will be in some ways a conscious throwback to the practical techno-optimism of the Golden Age.”

While Project Hieroglyph may be a noble endeavor, one could argue that it’s based on a flawed premise. The role of science fiction has never been just about supplying grand visions for a better future, but about making sense of the present. There seems to be an assumption that the optimism of the Golden Age had a causal relationship with a perceived technological golden age, when it may have simply been a reflection of it — just as dystopian sci-fi reflects and strongly resonates with the world today. Stephenson may be correct in his view that much SF today is written in a “generally darker, more skeptical and ambiguous tone”, but this more nuanced perspective does not necessarily signify the belief that “we have all the technology we’ll ever need”. Rather, it reflects decades of collective experience and knowledge of the unforeseen and cumulative effects of technologies. Nor does such fiction focus only on the destructive effects of technology; as large a component of the narrative as they may be, that is mostly because they make for better drama, and the subtext is often intended as critique rather than celebration. For example, the archetypal hacker protagonists of technocratic cyberpunk dystopias employ technology for more positive ends (though some question whether good SF, as in speculative fiction, needs to involve new technology at all).

A particularly positive function for dystopian sci-fi is its use as rhetorical shorthand. It’s increasingly common in public discourse on major issues of the day to invoke dystopian references. Disastrous social effects of peak oil or societal collapse are often characterized as Mad Max scenarios. Various negative aspects of genetic modification and pharmaceutical development conjure Brave New World. Anxiety over out-of-control AI and the resultant devaluing of human life brings to mind films as varied as Blade Runner, The Matrix and The Terminator. The expanding police/surveillance state is reminiscent of 1984 and numerous classics which have followed in its footsteps, including V for Vendetta and Brazil. General fears of duplicitous, psychopathic power elites and social manipulation have elevated They Live from relatively obscure B-movie to cult classic. The entry of the term “zombie apocalypse” into the popular lexicon may in part stem from fear (and uncomfortable recognition) of images of viral social disintegration and martial law-enforced containment efforts depicted throughout various media. The burgeoning omnipotence of multinational corporations and hackers in Mr. Robot may have been the stuff of cyberpunk dystopias such as Neuromancer and Max Headroom 30 years ago, yet it still has much to contribute to the public discourse as contemporary drama. Such visions may not prevent (or have not prevented) the scenarios they warn us of, but they have provided a vocabulary and framework for understanding such problems, and who’s to say how much worse things could be had such cautionary memes never existed?

The prophetic nature of storytelling, inasmuch as it derives from the minds of authors, artists and commentators who coexist with the tensions and contexts particular to their epochs, resonates with the oughts, ifs, and whats inherent to our daily lives. The cautionary element of narrative is a natural product of the human mind, part of the premium we place on sharing our mental reserves with the world. To creatively delve into experience and concoct problems and solutions from it is an axiom analogous to the categorical imperative, purely and in abstract terms of what rationality involves. Yet oftentimes we find material that favors cultural malaise, that favors all things pathological in our society, such as censorship, conformity, bureaucracy, authoritarianism, militarism, and capital marketing; things which underpin issues that, if left untouched, can engulf the real brilliance of our spirit.

Stephenson fails to see this point. SF, like any form of intelligent culture, denounces and opposes systems of oppression, and even shows us the how, when, and why: the frameworks, the makings of apparent utopias into dystopias. Dystopian storytelling can serve the efforts of downtrodden creators with utopian ideals as effectively as utopian stories can reframe a societal trajectory led by beneficiaries of real-world dystopia (though it may be experienced as utopia by a privileged few). SF does not only conjure visions of better futures. It lends us vocabularies and syntaxes to understand, and impede, the fallenness of a confused and ever more isolated humanity. These are languages that pervade our interiorities and allow the exterior to change.

At the core, SF is prophecy through reasoned extrapolation and artistic intuition. This is what SF stands for when properly aligned with the subjectivities of the oppressed, and not with the voices of oppression: true testaments of a space and a time; visions of the future that take care not to repeat the mistakes of the past; and tools for our personal and collective flourishing.

Skynet Ascendant

By Cory Doctorow

Source: Locus Online

As I’ve written here before, science fiction is terrible at predicting the future, but it’s great at predicting the present. SF writers imagine all the futures they can, and these futures are processed by a huge, dynamic system consisting of editors, booksellers, and readers. The futures that attain popular and commercial success tell us what fears and aspirations for technology and society are bubbling in our collective imaginations.

When you read an era’s popular SF, you don’t learn much about the future, but you sure learn a lot about the past. Fright and hope are the inner and outer boundaries of our imagination, and the stories that appeal to either are the parameters of an era’s political reality.

Pay close attention to the impossibilities. When we find ourselves fascinated by faster than light travel, consciousness uploading, or the silly business from The Matrix of AIs using human beings as batteries, there’s something there that’s chiming with our lived experience of technology and social change.

Postwar SF featured mass-scale, state-level projects, a kind of science fictional New Deal. Americans and their imperial rivals built cities in space, hung skyhooks in orbit, even made Dyson Spheres that treated all the Solar System’s matter as the raw material for a new, human-optimized megaplanet/space-station that would harvest every photon put out by our sun and put it to work for the human race.

Meanwhile, the people buying these books were living in an era of rapid economic growth, and even more importantly, the fruits of that economic growth were distributed to the middle class as well as to society’s richest. This was thanks to nearly unprecedented policies that protected tenants at the expense of landlords, workers at the expense of employers, and buy­ers at the expense of sellers. How those policies came to be enacted is a question of great interest today, even as most of them have been sunsetted by successive governments across the developed world.

Thomas Piketty’s data-driven economics bestseller Capital in the Twenty-First Century argues that the vast capital destruction of the two World Wars (and the chaos of the interwar years) weakened the grip of the wealthy on the governments of the world’s developed states. The arguments in favor of workplace safety laws, taxes on capital gains, and other policies that undermined the wealthy and benefited the middle class were not new. What was new was the political possibility of these ideas.

As developed nations’ middle classes grew, so did their material wealth, political influence, and expectations that governments would build ambitious projects like interstate highways and massive civil engineering projects. These were politically popular – because lawmakers could use them to secure pork for their voters – and also lucrative for government contractors, making “Big Government” a rare point of agreement between the rich and middle-income earners.

(A note on poor people: Piketty’s data suggests that the share of the national wealth controlled by the bottom 50% has not changed much for several centuries – eras of prosperity are mostly about redistributing from the top 10-20% to the next 30-40%.)

Piketty hypothesizes that the returns on investment are usually greater than the rate of growth in an economy. The best way to get rich is to start with a bunch of money that you turn over to professional managers to invest for you – all things being equal, this will make you richer than you could get by inventing something everyone uses and loves. For example, Piketty contrasts Bill Gates’s fortunes as the founder of Microsoft, once the most profitable company in the world, with Gates’s fortunes as an investor after his retirement from the business. Gates-the-founder made a lot less by creating one of the most successful and profitable products in history than he did when he gave up making stuff and started owning stuff for a living.
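To make the arithmetic behind that claim concrete, here is a minimal sketch in Python comparing a fortune that compounds at a typical return on capital with a salary that merely tracks the growth of the wider economy. The rates and starting amounts are illustrative assumptions for the sake of the example, not Piketty’s measured figures.

```python
# Illustrative sketch of Piketty's point that returns on capital (r) usually
# outpace economic growth (g). The rates below are hypothetical round numbers
# chosen for illustration, not figures from the book.

def compound(amount: float, rate: float, years: int) -> float:
    """Grow an amount at a fixed annual rate for a number of years."""
    return amount * (1 + rate) ** years

r = 0.05       # assumed annual return on invested capital
g = 0.015      # assumed annual growth of the economy, and of wages with it
years = 30

fortune = compound(1_000_000, r, years)   # wealth handed to professional managers
salary = compound(50_000, g, years)       # a wage that only keeps pace with growth

print(f"Invested fortune after {years} years: ${fortune:,.0f}")  # roughly $4.3 million
print(f"Salary after {years} years:           ${salary:,.0f}")   # roughly $78,000
```

Even with modest rates the gap widens every year, which is the dynamic the surrounding paragraphs describe.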

By the early 1980s, the share of wealth controlled by the top decile tipped over to the point where they could make their political will felt again – again, Piketty supports this with data showing that nations elect seriously investor-friendly/worker-unfriendly governments when investors gain control over a critical percentage of the national wealth. Leaders like Reagan, Thatcher, Pinochet, and Mulroney enacted legislative reforms that reversed the post-war trend, dismantling the rules that had given skilled workers an edge over their employers – and the investors the employers served.

The greed-is-good era was also the cyberpunk era of literary globalized corporate dystopias. Even though Neuromancer and Mirrorshades predated the anti-WTO protests by a decade and a half, they painted similar pictures. Educated, skilled people – people who comprised the mass of SF buyers – became a semi-disposable underclass in a world where the hyperrich had literally ascended to the heavens, living in orbital luxury hotels and harvesting wealth from the bulk of humanity like whales straining krill.

Seen in this light, the vicious literary feuds between the cyberpunks and the old guard of space-colonizing stellar engineer writers can be seen as a struggle over our political imagination. If we crank the state’s dials all the way over to the right, favoring the industrialist “job creators” to the exclusion of others, will we find our way to the stars by way of trickle-down, or will the overclass graft their way into a decadent New Old Rome, where reality TV and hedge fund raids consume the attention and work we once devoted to exploring our solar system?

Today, wealth disparity consumes the popular imagination and political debates. The front-running science fictional impossibility of the unequal age is rampant artificial intelligence. There were a lot of SF movies produced in the mid-eighties, but few retain the currency of the Terminator and its humanity-annihilating AI, Skynet. Everyone seems to thrum when that chord is plucked – even the NSA named one of its illegal mass surveillance programs SKYNET.

It’s been nearly 15 years since the Matrix movies debuted, but the Red Pill/Blue Pill business still gets a lot of play, and young adults who were small children when Neo fought the AIs know exactly what we mean when we talk about the Matrix.

Stephen Hawking, Elon Musk, and other luminaries have issued panicked warnings about the coming age of humanity-hating computerized overlords. We dote on the party tricks of modern AIs, sending half-admiring/half-dreading laurels to the Watson team when it manages to win at Jeopardy or random-walk its way into a new recipe.

The fear of AIs is way out of proportion to their performance. The Big Data-trawling systems that are supposed to find terrorists or figure out what ads to show you have been a consistent flop. Facebook’s new growth model is sending a lot of Web traffic to businesses whose Facebook followers are increasing, waiting for them to shift their major commercial strategies over to Facebook marketing, then turning off the traffic and demanding recurring payments to send it back – a far cry from using all the facts of your life to figure out that you’re about to buy a car before even you know it.

Google’s self-driving cars can only operate on roads that humans have mapped by hand, manually marking every piece of street-furniture. The NSA can’t point to a single terrorist plot that mass-surveillance has disrupted. Ad personalization sucks so hard you can hear it from orbit.

We don’t need artificial intelligences that think like us, after all. We have a lot of human cognition lying around, going spare – so much that we have to create listicles and other cognitive busy-work to absorb it. An AI that thinks like a human is a redundant vanity project – a thinking version of the ornithopter, a useless mechanical novelty that flies like a bird.

We need machines that don’t fly like birds. We need AI that thinks unlike humans. For example, we need AIs that can be vigilant for bomb-parts on airport X-rays. Humans literally can’t do this. If you spend all day looking for bomb-parts but finding water bottles, your brain will rewire your neurons to look for water bottles. You can’t get good at something you never do.

What does the fear of futuristic AI tell us about the parameters of our present-day fears and hopes?

I think it’s corporations.

We haven’t made Skynet, but we have made these autonomous, transhuman, transnational technologies whose bodies are distributed throughout our physical and economic reality. The Internet of Things version of the razorblade business model (sell cheap handles, use them to lock people into buying expensive blades) means that the products we buy treat us as adversaries, checking to see if we’re breaking the business logic of their makers and self-destructing if they sense tampering.

Corporations run on a form of code – financial regulation and accounting practices – and the modern version of this code literally prohibits corporations from treating human beings with empathy. The principle of fiduciary duty to investors means that where there is a chance to make an investor richer while making a worker or customer miserable, management is obliged to side with the investor, so long as the misery doesn’t backfire so much that it harms the investor’s quarterly return.

We humans are the inconvenient gut-flora of the corporation. They aren’t hostile to us. They aren’t sympathetic to us. Just as every human carries a hundred times more non-human cells in her gut than she has in the rest of her body, every corporation is made up of many separate living creatures that it relies upon for its survival, but which are fundamentally interchangeable and disposable for its purposes. Just as you view stray gut-flora that attacks you as a pathogen and fight it off with antibiotics, corporations attack their human adversaries with an impersonal viciousness that is all the more terrifying for its lack of any emotional heat.

The age of automation gave us stories like Chaplin’s Modern Times, and the age of multinational hedge-fund capitalism made The Matrix into an enduring parable. We’ve gone from being cogs to being a reproductive agar within which new corporations can breed. As Mitt Romney reminded us, “Corporations are people.”

The Movie Every Screwed Millennial Should Watch

By Arthur Chu

Source: Alternet

Jennifer Phang’s indie science fiction film “Advantageous,” a darling of 2015’s Sundance, came to Netflix Instant Streaming earlier this week. If you’re a millennial, you have Netflix. If you’re an un- or underemployed millennial, you have time. Every un- or underemployed millennial needs to see this movie.

We live in a renaissance of science fiction film and TV and “geek” culture in general — the accelerating pace of technological change thanks to Moore’s Law makes it hard to deny we’re living in “the future,” we’re all part-machine-part-human for practical purposes now, no one can guess what element of science fiction is next to become science fact, blah blah blah.

You’ve heard that song and dance before. They use it to sell everything from splashy popcorn blockbusters with robot villains to artsy thinky indie dramas with robot antiheroes.

But “Advantageous” is the first science fiction film I’ve seen that really grasps something I think is core to the experience of us young people who are on the bleeding edge of the troubling trend of Machines Taking Our Jobs Away.

And the core theme of the film that makes it so important is also the one that I worry will scare a lot of its audience away. Because this is a science fiction film but not an action film — there’s no violence, no gunplay. There are no heroes or villains, precious little good-vs-evil conflict. There’s no pulsing electronic backbeat, and even though there are smartphones and holograms, there’s not much visible technology, no one tapping madly at keyboards while incomprehensible lines of green text scroll down the monitor.

Which makes sense, actually. These are all things we imagined would happen in “the future” of the 2010s back in the 1980s. The fears that defined the genre we call “cyberpunk,” and that set the tone for dark, dystopian futures for a generation, were 1980s fears — fears of street gang violence, fears of nuclear war, fears of the drug trade. An adult in the 1980s, imagining a member of my generation, imagined someone doing designer drugs at raves, casually gunning people down in the street, and hacking into the mainframe to trick China into launching their ICBMs.

We don’t do a lot of that. In fact, the least fortunate of our Lost Generation of millennials don’t do a lot of anything.

What “Advantageous” is that other science fiction films aren’t is quiet.

That’s my experience of being an unemployed millennial in the 2000s. Long stretches of unnerving silence. Being one of a handful of unlucky young people walking aimlessly around in the middle of the day when civilized people are at work. Failing to make eye contact with each other or speak because we’ve forgotten how to have in-person conversations. Turning to social media or aimless Web surfing to fill the long stretches of emptiness, of boredom.

I’ve joked, darkly, that the worst thing about being unemployed isn’t not having any money but not having anything to do.

And to a large extent that’s what “Advantageous” is about. Yes, the eerily empty streets our characters walk through might be a result of the film’s limited budget — but it also makes sense within the film’s setting. All the buildings are empty; all the stores are closed. Homeless people wander the parks and sleep in the bushes and stare numbly into the distance. (At one point the characters try to walk into a restaurant only to find that it’s been boarded up and the owner, sitting inside, ignores them. They treat this as a normal, everyday occurrence.)

We’re told that the world is in the grip of a tech-driven economic recession. There are no jobs for anyone — anything the small elite of wealthy customers need done, they can get a machine to do for them better than any human can. Our protagonist, Gwen, is a spokesmodel for a cosmetics company — essentially an eye-candy job.

Even though she mentions having gone to grad school and hoping to go into teaching, there are no jobs out there for teachers now that people can get any information they want from machines. The only job out there for a flesh-and-blood human who’s not already rich is a job that involves looking pretty and smiling at rich people to try to sway their opinions, and it’s a job she’s lucky to get and devastated to lose.

(Every college-educated millennial who’s ended up taking a position in sales because it was the only thing on offer ought to be feeling a familiar twinge right now.)

The film gets a lot of mileage out of taking all-too-familiar scenes from the 2009 recession and exaggerating them just enough to make them fully dystopian. Anyone who’s dealt with the infuriating process of being forced to apply for jobs through poorly-designed automated Web forms will feel Gwen’s pain as she argues with a recruiter telling her her résumé has been “red flagged” and she slowly realizes, as the recruiter’s voice on the phone devolves into ELIZA-like nonsense responses, that she can’t get a job because she’s talking to a poorly-programmed machine that’s taken someone else’s job.

Anyone who’s felt the intense pressure of the college-application arms race will sigh at Gwen’s daughter, Jules — who appears to be 11 or 12 but talks, reads and writes at the college level — calmly telling her mother about a journal article she read describing how her generation’s high-pressure lifestyle means she’s likely to become infertile by her 20s.

Jules needs a $10,000 deposit to get into an exclusive summer camp in order to get into an exclusive prep school. Without those credentials, she’s unlikely to get a job — any job — at all. Her genius-level abilities are barely enough to get her foot in the door, and without connections and credentials and money, she’ll never be able to walk through it.

It sounds like an exaggeration, unless you’ve personally witnessed a Facebook feed filled with top-ranked students from top-ranked schools with thousands of dollars of student loan debt clawing and trampling each other to get minimum-wage call center work.

And Gwen’s response to the impossible situation of trying to secure a future for her daughter when she doesn’t even have an income anymore isn’t to pick up a gun and shoot anyone. The long scenes of her sitting in brooding silence while racking her brains for a solution are, in fact, punctuated by explosions going off in the far distance, part of a hopeless war against the government by unnamed “rebel forces” — but those explosions are oddly silent, oddly peaceful, and they never feel completely real.

It feels like the warlike shouting and chanting from Zuccotti Park that most of us sat at home and watched on TV — a revolution I now feel happened mainly because our generation felt the essential frustration, the essential wrongness of the actual soundtrack of the recession, an eerie passive silence, and some of us tried to force some noise into the silence just to fill it up.

But it didn’t work, because there was no victory condition, no enemy to defeat, no Death Star to blow up. In retrospect the protests feel as futile as the quiet clouds of smoke in the “Advantageous” skyline. You can’t blow up an entire world, an entire economic system; you can’t beg it for mercy or shout moral imprecations at it either. Break things, throw things, scream things — at the end of the day you still don’t have a job.

I think on some level we’ve always understood this. I think on some level we’re silent because the damage done to us was done through silence — no one beat us up or assaulted us or stole anything from us. All that happened was the phone didn’t ring, the email never came, the poorly designed Web form spat out an automated “You will be contacted shortly” that was a lie.

“Advantageous” is a quiet film, and a pretty one. The city Gwen and Jules’ cramped apartment exists in is gorgeous and clean. When we do hear music, it’s not pulsing techno or anarchic punk but a street musician plying his trade, playing beautiful classical pieces on the violin — perhaps he got a degree from Juilliard only to end up as destitute as Gwen.

The gritty slums of the cyberpunk milieu purported to be about a world where technology was grinding down humanity but what they really showed was a world where humans could still strike back at things — could graffiti the walls, shatter the windows, shoot pockmarks into billboards, and the property owners couldn’t keep up with the damage. Vandalism is, at the very least, a sign of human activity — a sign that someone out there is still doing something.

The eerie Disney cleanness of Gwen’s city’s streets — the way the damage caused by the rebel bombings causes no one any concern and is seemingly fully repaired overnight — is a sign of a world where the things have won and the people have given up.

That, for me, was the worst thing about the recession — seeing shiny storefronts and clean-swept streets and all the trappings of a thriving economy — but none of us participating in it. The recovery from our recession was a so-called “jobless recovery” — still plenty of stuff being made, still plenty of money, in the hands of increasingly few people, to buy things with. The economy of things is doing fine, and always has been. It’s only the economy of people that collapsed.

The anger that comes from feeling oppressed, exploited, used — that’s one thing. The weary, quiet frustration from feeling ignored, forgotten, useless — that’s something different.

There are other themes in “Advantageous.” It’s mentioned that women have borne the brunt of this recession because, a suited executive bluntly tells Gwen, people fear the social disruptions frustrated men might cause more than they fear frustrated women — something that rings eerily true in the past few years, when a handful of men who feel left behind by the modern world have become increasingly willing to channel their grievances into extremist ideologies and to puncture our generation’s silence with escalating acts of violence.

Gwen, a highly intelligent woman who’s been reduced to making a living solely off her looks, is being replaced by her employer because they want a younger and more “universal” look for their brand. Gwen is portrayed by Jacqueline Kim, an Asian-American actress who turned 50 this year and who co-wrote the script for “Advantageous” — it’s hard not to see this plot point as reflecting Kim’s real career.

And then there’s the climactic decision Gwen must make, whether to take advantage of a “cosmetic procedure” that involves uploading her mind into a more youthful, racially-ambiguous body. While it’s far from a unique conceit, in the context of this film the idea of reducing people, especially women, into commodities, where technology makes our identity mutable and economics makes it negotiable, takes on extra resonance.

We live in a world where cheap and plentiful technology has made us cheap — the market for human labor is glutted. There are too many of us out there, we’re too easily replaceable, and almost none of us are specifically needed for anything. As a result, just to survive — just to avoid being irrelevant — we give away more of ourselves than we have in generations, selling our time, our privacy, our rights just for a chance not to be left behind.

How much further will it go, “Advantageous” asks. How much less needed can people get, as the things get smarter and shinier and more efficient? How much more will you have to give away, if they ask you to — your body? Your mind? Your soul?

The film doesn’t give any easy answers. But that’s the question we all need to be asking.

Arthur Chu is an actor, comedian and blogger.

Saturday Matinee: Electric Dragon 80000 V

“Electric Dragon 80,000 V” (2001) is a hyperkinetic experimental film directed by Sogo Ishii, whose noise-punk band Mach 1.67 provided the soundtrack. Tadanobu Asano stars as Dragon Morrison, a reptile whisperer with the ability to channel electricity who confronts Thunderbolt Buddha, a TV repairman/vigilante with similar electric powers. Deciding there’s only room in the city for one electric-powered superhero, they have an epic showdown on the rooftops of Tokyo. As Augustine Joyce, the YouTuber who uploaded the film, accurately noted: “One of the most rocknroll movies of all times. Surreal, Bizarre and Unique… Seems this is a Cult Film.”

Saturday Matinee: Tetsuo Trilogy

With the underground film “Tetsuo: The Iron Man” (1989), writer/director/actor Shinya Tsukamoto broke into the global cult cinema scene. Made on a shoestring budget and drawing influences from cyberpunk sci-fi and anime such as “Akira,” Tetsuo depicts a man’s transformation into a machine as his psychological state rapidly deteriorates. Two later “sequels,” “Tetsuo II: Body Hammer” (1992) and “Tetsuo: The Bullet Man” (2009), are variations on the same plot with progressively larger budgets and artistic ambition.

Tetsuo: The Iron Man

Tetsuo II: Body Hammer

Tetsuo: The Bullet Man
