This is the final 5th part of my four part series on the TV-show Devs - yes I am a professional. If you need to start from the beginning you can head to the first post. If you need a post that is neither the first nor this one, we’ll just have to rely on your powers of internet navigation.
In the penultimate episode of Devs there is a deeply unsettling scene. At this point the Devs team have, against Forest’s wishes, continued their work using Lyndon’s superior many-worlds algorithm and now have a perfected product. Sitting together in front of the wall-sized screen, the team watch, from a perspective floating in space, a simulation of Earth a billion years in the past. The engineers are at once in awe of their own creation and uncomfortable with it, because the simulation now contains everything, even them. This is hammered home when Stuart, breaking the rule that Forest and Katie have been breaking all along, projects the future. Specifically, the future of the room containing all of them, a few seconds ahead. The engineers witness themselves reacting to their own projection, and whenever the projection says or does anything, a few seconds later they dutifully repeat it. The violence against not just the engineers themselves but the entirety of humanity is profound. They are dehumanised in that their sense of self and agency are demolished, but also through the recognition that the entirety of their existence, and everything they know, can be rendered within a box of their own creation.
For me, tech culture has a severe dehumanising streak running through it. The fundamental focus on technology as the driver of human progress inherently disempowers humans, positioning technology itself as the force we rely upon to improve, rather than recognising all the other non-technological definitions of progress. As such, humans are secondary to the primary driver of our own history, except of course for those humans that bring us the technology. Tech culture often denigrates humans through its assumption that human skills, knowledge and functions can be improved by replacing them with technology, and through transhumanist narratives that rely on a framing of human consciousness as fundamentally computational. In this final piece I explore some of these forms of dehumanisation, before bowing out with a flourish, because I think this series has gone on long enough.
Dehumanisation via Disempowerment
In the final episode, Lily articulates this deep violation to Forest. Having been told that the predictions show her going to the Devs lab tomorrow, Lily attempts to defy the prediction, until events set in motion by Forest, through a long chain of cause and effect, result in her deciding to go and meet him. Lily says that he has taken everything from her, but he denies it, saying she never had it to begin with. She was never participating in life, simply there to watch it unfold. Lily’s role, like that of every human under Forest’s model of determinism, is to allow the future to unfold, not to participate in it. The structure of society is set, the hierarchies unwavering; those who benefit from the unfolding of this process were always destined to benefit, those who don’t weren’t, and there is nothing that can change it.
Forest is positing a technologically driven confirmation of a conservative mindset that legitimises social stratification by appeal to essential natural hierarchies. Claims of corruption, systemic imbalances, the need for reform, are all ineffectual complaints from those who need to accept this essential natural truth - everyone in society is where they were destined to be by virtue of their essential qualities. It is a narrative that, unsurprisingly, is well entrenched further up the social hierarchy, as it sweeps away complicated concerns around systematic undeserved privilege and by default legitimates the order as it is. Forest, with his unbounded access to the past and the future, can claim to know this truth, and he is comfortable with it, because his predetermined place is at the top and accepting that underlying reality will return his family.
For everybody else - and Lily represents all the rest of us in this exchange - accepting Forest’s truth is deeply anti-human. Firstly, it accepts that if someone experiences a life of suffering, that suffering is objectively correct, not from a moral stance but through a hard-nosed sense of practicality: everyone is in their place, and whilst I may have a wonderful life and you do not, if you were more intelligent you would accept this inequity. Lily expresses what it feels like to be fully aware of this predetermination. When she says Forest has taken everything, she is speaking not just of the material losses of Sergei or her friend Jamie, her career, perhaps even her sanity. She is talking about her sense of self, her very identity as a person that can choose. To lose the sense of choice is to lose morality, ideology, values, personal history, agency. Everything she is has nothing to do with her; she is simply a marker representing a predetermined position. There is no her.
Dehumanisation via Denigration
Lily’s introduction to the Devs project comes when Forest shows her the simulation of his daughter, Amaya, and claims that she is alive. This claim, that the simulation is itself alive, articulates the key mechanism by which technological determinism dehumanises. The imagery of Amaya playing on the screen is, Lily says, a computer simulation. She’s not alive; the simulation is simply going through fixed motions, predetermined, without free will or choice. Forest responds: “as are you, as am I, as is everything else”. In many ways this is the most crucial component of the Amaya system - not the simulation itself, but how successful Forest is in justifying that the simulation is life. His rhetorical strategy is not to elevate the simulation to the standards of reality, but to reduce reality down to the standards of his simulation. In his desperation to have produced reality through computation, he denigrates actual reality by equating it to computation.
Long, long ago, back in 2005, N. Katherine Hayles considered this computationalist mindset in her awesomely titled book My Mother Was a Computer, referring to it as the ‘Regime of Computation’: the penetration of computational processes not only practically into every aspect of human practice, but into the way we construct reality itself. By this she meant that to some extent the computational metaphor - conceptualising everything as similar to, or working like, a computer - had become the dominant way of approaching problems, to the extent that for some, reality itself was seen as the result of computation. Just as the 18th century got really excited about gears and began to understand everything through the metaphor of clockwork, our metaphor is the computer. Crucially, Hayles frames this dynamic as such:
“What we make and what (we think) we are co-evolve together.”1
Computationalism as a metaphor is potent because computation is everywhere, and it is everywhere because the metaphor has become so potent. The means and the metaphor, she says, are dynamically interacting with one another2.
Hayles presented this from a relatively neutral position, but I’m not into that. For me this dynamic also speaks to the risks of the computational metaphor and the power dynamics behind it. Take for example the current trend of subjecting all of human creativity and effort to generative AI models. Human creativity is a complex semantic process of representation and meaning, the practice of trying to communicate, to externalise what is inside the human mind and be understood by others outside of it. Rather than claim that the models can do all of those very human things, the computational metaphor obscures those processes and focuses instead on what is computationally possible - efficiently laying out pixels or words into the most probable combinations - and measures success against that. By demonstrating competency in this newly defined version of the creative process, the metaphor that everything is actually computational is reinforced.
Calling back to previous discussions of ‘abstraction’, the computational metaphor is an abstractive process - to conceive of something as computational means selectively excluding some aspects of it to make the metaphor work. It is a value-laden choice to take a phenomenon in the world and highlight those aspects that could be seen as computational, whilst excluding those parts that can’t. It makes something less than it was and reveres the denigrated version. Forest’s insistence that his simulated daughter is real relies on this denigration of reality through computational metaphor.
Dehumanisation via Instrumental Thinking
Forest’s equating of his computational simulation with life itself draws on an instrumental form of reason that is closely tied to a highly rational mindset. Instrumentalism is a way of thinking in which ultimately what matters is that particular goals are met. Forest redefines reality as a particular set of criteria to be met, and that then allows him to classify his simulation as real.
Throughout this series I’ve used the term instrumental quite often, referring broadly to a way of thinking that is simply goal oriented. However there is more to it than this. Max Weber - old school sociologist, man with good facial hair - distinguished between two types of rationality: instrumental and value rationality. Instrumental rationality is a kind of decision making focused entirely on an outcome - the point is to make decisions that are optimal for that outcome. Value rationality is different. It is about making decisions that further a particular value in the world, rather than necessarily meeting specific material goals. An instrumental approach would ask how best we can simulate a small child in a computer simulation; a values approach would ask what kind of values are expressed by doing that, and whether they are values we want to express.
Whenever I think about instrumental versus value rationality I’m reminded of the internet weirdness that is the Torture vs Dust Specks thought experiment, originally posed by LessWrong Rationalist Eliezer Yudkowsky. Broadly the thinking goes like this: if realistically the least bad thing that can happen to someone is that a dust speck gets in their eye momentarily, and the worst thing that could happen to them is being tortured for 50 years, these two things both exist on a kind of continuum of badness. Experiencing the dust speck is one amount of badness points, 50 years of torture is another. If you, instrumentally, have made a commitment to maximise the amount of ‘good’ in the world and someone offers you the choice of inflicting one of these events on another person, you’d be duty bound to choose the dust speck because it is the lesser of the two harms.
If your option is to choose two people receiving dust specks versus one person being tortured, you’ve just doubled the amount of harm, but it is still less harm than torturing one person3. If you, a rational being, have chosen this option, then according to Yudkowsky there must be a point where there would be enough people receiving dust specks that it would be objectively better to torture one person for 50 years instead. This number could be huge, ridiculously huge. Huge enough for you to really show off your love for obscure mathematical symbols like Knuth’s 1976 up-arrow notation, which makes a number like 3↑↑↑3 really, really big, especially if you concatenate a load of arrows together to show you’re super serious.
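For the curious, here is a toy sketch in Python of both the up-arrow notation and the purely instrumental bookkeeping. The specific ‘badness’ scores are my own invention for illustration - the argument only requires that all harms sit on one commensurable scale:

```python
# Toy sketch of the purely instrumental calculus behind the thought
# experiment. The "badness" scores are invented for illustration --
# the argument only needs all harms to sit on one additive scale.
DUST_SPECK_BADNESS = 1        # momentary speck in the eye
TORTURE_BADNESS = 10**12      # 50 years of torture (made-up score)

def total_badness(n_specks: int) -> int:
    """Naive utilitarian aggregation: harms simply add up."""
    return n_specks * DUST_SPECK_BADNESS

def up_arrow(a: int, b: int, arrows: int) -> int:
    """Knuth's up-arrow notation: one arrow is exponentiation,
    each extra arrow iterates the previous operation b times."""
    if arrows == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, result, arrows - 1)
    return result

# 3 with two arrows and b=3 is 3**(3**3) = 7,625,597,484,987.
print(up_arrow(3, 3, 2))  # 7625597484987

# 3 with three arrows (Yudkowsky's speck count) is a power tower of
# 7,625,597,484,987 threes -- far beyond anything computable.

# On a single additive scale, enough specks always "outweigh" torture:
print(total_badness(TORTURE_BADNESS + 1) > TORTURE_BADNESS)  # True
```

The last line is the whole trick: once everything has been flattened onto one scale, some quantity of trivial harms must eventually exceed the serious one, and the up-arrow function merely shows how absurdly fast such quantities can be written down.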
Notably, thought experiments like this can be more than just intellectual dicking around - they get us to consider what is at play in seemingly simple scenarios, to question why we make the choices we make and what values are modifying our approach to the world. This is not one of those times.
Faced with the horror of this many people receiving momentary dust specks in their eye, versus one single person being horribly tortured for 50 years, the answer, says Yudkowsky, ‘is obvious’. Torture. He thinks it’s the torture.
Yudkowsky here is taking an extreme instrumentalist approach. By first abstracting and defining the problem as a matter of good and bad in the world he obscures any value judgements. This, presumably, is to demonstrate that he is being rational, that his actions are guided by proper rigorous thought. Furthermore, to demonstrate his commitment to this rationality, he then does the classic contrarian move of choosing the abhorrent option to communicate the depth of his commitment to proper logical thinking. This is typical within groups that think themselves uber-rational, like Right Libertarians who like to claim that starving your own child4, or letting someone fall to their death from your balcony by refusing them access to your property, is completely rational behaviour5.
The argument of course is that if you aren’t willing to choose the torture option, you are not fully committed to the rational pursuit of improving the world, and your choices are guided by irrational value judgements. To be guided by such political concerns reveals you to be nothing but a wet, irrational moron who would let their feelings get in the way of true human progress.
Cards on the table: as a wet irrational moron I would not choose the torture, because I am not so wholly wed to instrumental reason that it makes me say insane things. There is more at play than the contrived abstracted scenario. There is my sense of empathy for the human being tortured. I don’t want to be tortured for 50 years. That sounds terrible, and if I don’t want that to happen to me I don’t think we should impose it on somebody else either. That we chose it as an option makes implicit statements about the value of human life. There are broader implications for how such a decision might impact the way we see humanity as a whole, for how it impacts upon our experience of living to know that we chose the torture option to avoid a completely inconsequential harm. What are the effects on everyone’s psyche of knowing that you could be next up for the torture chamber to avoid 200,000 people stepping on a Lego brick or something? What kind of values are we championing by saying that this is a line we’re willing to cross?
To bring us back on course, away from the insanity of ‘Rational’ thought experiments, what is the point of all of this? Well, as someone who brings up these kinds of value-based objections I may be criticised as being irrational, or relying upon fallible judgement. However, as we have already seen, abstraction is ultimately a process of judgement, the deciding of what does or does not matter. The thought experiment could easily have added an extra dimension that modified the ‘score’ on the basis of how much pain was felt by any one individual (one such variant is sketched below). To not do so was a judgement, a choice that allowed the scenario to meet the criteria necessary to make the claim. When Forest built his simulation, he set the criteria to something achievable, and then redefined the phenomenon he was simulating as comprising only those criteria. His decision making was instrumental, goal oriented, but in the process he was willing to strip away everything else that made his daughter who she was, and strip that away from humanity in general, to achieve his goals. The consequences for the values of humanity were not a consideration. All that mattered was achieving the criteria he had set.
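To make that concrete, here is a toy sketch of what a single added value judgement does to the outcome. The threshold rule and scores are my inventions, not part of the original thought experiment - which is the point, since the ‘neutral’ linear version embeds a judgement too:

```python
# Hypothetical variant: add one extra value judgement -- a triviality
# threshold below which harms are never allowed to aggregate past a
# serious harm. The threshold and scores are invented for illustration.
TRIVIALITY_THRESHOLD = 100

def worse_option(harm_a: int, count_a: int,
                 harm_b: int, count_b: int) -> str:
    """Return which option ('A' or 'B') is worse under a lexical
    threshold rule: serious harm always outranks trivial harm."""
    a_serious = harm_a > TRIVIALITY_THRESHOLD
    b_serious = harm_b > TRIVIALITY_THRESHOLD
    if a_serious != b_serious:
        return "A" if a_serious else "B"
    # Both trivial or both serious: fall back to simple aggregation.
    return "A" if harm_a * count_a > harm_b * count_b else "B"

SPECK, TORTURE = 1, 10**12
# Option A: torture one person. Option B: specks for 3**27 people.
print(worse_option(TORTURE, 1, SPECK, 3**27))  # "A" -- torture is
# worse no matter how many specks, the opposite of Yudkowsky's answer.
```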
The End
Eventually, Forest tells Lily what he has seen will happen in the lab. Lily will take out the gun she has brought with her, force herself and Forest into the floating capsule used to access the main lab, and then use the gun to kill Forest, in the process destabilising the capsule and dying too. Forest has seen it all happen and is just there to play his role. For Forest and Katie, the authority of the machine is total in this scenario. It has, through rational, logical, deterministic processes, decreed that they will die, and Forest makes no attempt to subvert this prediction. Within the parameters of the abstraction they have built, this is the only possible outcome.
Crucially, I think this speaks to one critical thing to consider about the culture of tech: the degree to which it is driven by belief. Whilst a lot can be said for the profit motive, I truly think it is more ideological than that, potentially rooted in tech culture’s own narrative about its origins in utopian countercultural movements. A drive to change the world with the conviction that their solution is the correct one. Throughout the process are value judgements about how the world should be, value judgements about what matters to human progress, value judgements about what is relevant. All of these judgements are made with a degree of finality, a closed-mindedness regarding what could matter, in order to draw on the authority we afford claims to rationality and objectivity.
Yet the only reason the machine has this authority over them all - Lily, Forest and Katie - is Forest’s deep trust in its predictions as truth. Whilst the machine is imbued with the authority to speak, it is only because Forest legitimised what it said. He is the one who communicates the next steps to Lily and he is the one who does nothing to prevent them, demonstrating his total belief in the authority he has produced. The authority of the machine is completely reliant upon human reverence and the willingness to carry out the prediction. The commitment to the value of tech and instrumental thinking comes with the consequence of reducing the complexity of actually lived human experience and human reality, of reducing the human condition and subordinating it to something of their own creation, even for themselves.
The most impactful moment of the entire series, for me, is the moment when Lily breaks the illusion. As she steps into the capsule with Forest, she throws the gun back out through the door just as it closes, making it impossible for her to complete the prediction. To me, this is Lily demonstrating that there is more to her, more to humans, than the abstraction used for prediction by Forest’s machine. In that act the fallibility of the predictions suddenly becomes apparent, the certainty they held is undermined, and the responsibility for the deaths and the destruction they had been abdicating for so long comes crashing back.
If it were me, I would have smash-cut to black, Sopranos style, right here, but they didn’t. In the end it is Stuart - fearful of the dehumanising effects of the world questioning its own free will - who chooses to disable the floating capsule, killing Lily and Forest. Then, like a dream, Lily wakes up back at the beginning of the storyline. Sergei is still alive, ready for his first day at Devs, and when they arrive at the Amaya campus, the Devs compound is gone, never having existed. Forest is there in its place, with his wife and Amaya. They are simulations living out a variant of the timeline where Forest’s family never died. It is implied that, with the many-worlds implementation, there are infinite Forests and Lilys, living out all manner of lives. We are seeing primarily the paradise ending, but Forest notes that for others life will be closer to hell.
I don’t think the Devs showrunners fully realised quite how analogous this could be to tech in its current moment. Despite being wrong, despite the moral abhorrence of his actions, despite really doing very little but get in the way of his own project, this Forest in the end gets everything he wants. Lily is dragged in with him and subjected to the same fate that only Forest consented to. The world that Forest lives in becomes the abstraction Forest always insisted it was, and the abstraction gives him everything he wanted whilst the consequences happen to someone else.
But rationally, when you weigh up the amount of happiness points Forest gains from getting his family back, that’s got to be worth a few stacks of infinite torture, right?
Here ends my series on Devs. I started this project to try and get myself used to writing for its own sake again after a long hiatus. The world of academic publishing is not fun and sometimes I just want to write what I want to write without worrying about peer-review. Whilst the series is done, I am not. I’m moving on to writing my book on tech culture and fascism and plan to drop thoughts and explainers here as I go. If that sounds like your thing then hit the button.
Hayles, N. K. (2006). Unfinished Work: From Cyborg to Cognisphere. Theory, Culture & Society, 23(7-8), 159-166. Retrieved November 6, 2013, from http://tcs.sagepub.com/cgi/doi/10.1177/0263276406069229, p. 164
Ibid - (That’s academic for ‘same as above but I can’t be bothered to write it again’)
Ignore the nagging feelings you have regarding how one quantifies goodness or badness or even why you as an individual feel the right or obligation to take on this role. You’re clearly not rational enough.
Rothbard, M. (1998). The Ethics of Liberty. New York and London: New York University Press. p. 100
Block, W. (2003). The Non-Aggression Axiom of Libertarianism. LewRockwell.com. Retrieved March 22, 2024, from https://www.lewrockwell.com/2003/02/walter-e-block/turning-their-coats-for-the-state/