Homo Versus Machina: What our Responses to Technological Revolutions Reveal About Us

What would you do if you got diagnosed with cancer and were given six months to live? Would you move heaven and earth in the attempt to find a cure? Would you accept your fate and live whatever life you have left to the absolute fullest? Essentially, and philosophically, would you choose hedonism, or stoicism?

That’s a thought I had last weekend, after a conversation with a friend about the imminent threat artificial intelligence poses to human labor. I know, it’s a completely different topic, but stay with me.

“If we have three years until ‘the Machine’ takes our jobs, then how should we spend these three years?” I asked my friend.

“I want to go back to nature and live a quiet life in the countryside, while I’m still able to,” my friend replied.

“Well, I think that if we have three years until ‘the Machine’ colonizes our livelihoods, then I’d rather spend these three years outsmarting ‘the Machine.’ It means we have three years to ‘tame the beast,’” I countered.

Both sides are valid. They’re different approaches, different philosophies, different ways of life. But it made me wonder: what makes us think the way we do? Is indulging in the present an abdication of survival, and by extension, an abdication of life itself? Is it—let’s hypothesize here—quiet suicide? When we refuse to plan for tomorrow, does it mean that we don’t really care about existing tomorrow?

As for AI, it will, for better or for worse, inarguably and inevitably colonize our lives—and has obviously already started to do so—and while that’s a reality not many of us will be able to alter or influence, how we face that very reality can tell us a lot about ourselves. As a matter of fact, how we face adversity in general says a lot about ourselves—whether that looks like terminal cancer, nuclear apocalypse or a robot invasion. After all, adversity is what molded human intelligence and resilience. Adversity is something no human being on planet Earth has ever been able to escape from. Some would even argue that adversity is something we secretly crave, something we can’t help but bring upon ourselves, like picking at one’s scab or touching one’s bruise.

AI, however, and the fear it induces in many of us, is only the continuation of mankind’s pursuit of technological augmentation. In Western civilization, the past couple of centuries offer multiple examples of that. When James Watt improved the steam engine in 1776, spearheading the first Industrial Revolution, he rewrote an entire labor model, causing factories to move away from river-powered water wheels and into urban centers. A century later, when Thomas Edison built the first power plant in 1882, he fueled a second Industrial Revolution, passing the baton to Henry Ford a few decades later, who turned manufacturing into a high-speed science. Barely half a century after Ford introduced the moving assembly line, we were already birthing the first industrial robot, beginning a third Industrial Revolution and marking the start of task automation and deindustrialization. That was it: machines were taking over, and the Computer Revolution of the late twentieth century drove the final nail into the coffin. And here we are, thirty years later, right in the middle of the AI Revolution.

But we can go even further, if we want to. Humanity’s quest for technological augmentation didn’t start in the eighteenth century, and it didn’t start in the West. The Neolithic Revolution is a prime example. Around 10,000 BC, in a region called the “Fertile Crescent” (the modern-day Middle East), humans invented farming tools: the sickle (that emblematic curved knife featured on the USSR flag) and the plow. That means that humans gradually stopped sourcing their food by nomadically moving around in search of greener pastures, and started cultivating food in one fixed place, in settlements. This in turn created a surplus of resources, which is the prerequisite for all future technological advancement. Think about it: when everyone isn’t busy running after wild boars or foraging berries, someone has time to invent the wheel. The Bronze and Iron Ages continued that cycle, and the rest is pretty much history. Humans have, essentially, never stopped their quest for technological augmentation. We’re constantly, endlessly, “toolmaxxing,” if you will.

This is where my mind wandered, during my conversation with my friend: what we’re experiencing now isn’t anything new. AI is new in the shapes and forms it takes, sure, but the core driver behind it has always been the same: energy efficiency, and a sprinkle of greed. If we strip away the plastic, the steam or the stone, the core anthropological driver of all of these human discoveries is this one principle: the Law of Least Effort, which posits that living organisms and people naturally choose the path requiring the minimum energy or resistance to achieve a goal, ultimately seeking efficiency and the path of least cognitive demand.

Biologically speaking, as humans, we’re relatively high-maintenance engines. Our brains consume about twenty percent of our daily calories, despite being only two percent of our total body weight. So, to survive and thrive, we are evolutionarily hardwired to get the maximum output for the minimum caloric input. It’s the same concept as the energy-efficiency labels you see on windows and electronic appliances (in Europe, the EU energy label, or a building’s “Energy Performance Certificate”). A high rating means that you get to use an appliance that serves its purpose successfully whilst not bloating your energy bills. You get high efficiency at low consumption. It’s the best of both worlds.

Now of course, the irony—and the evolutionary paradox—is that humans have always wanted to cut corners and save time, only to end up spending that extra time working more, on more ways to save even more time. All along, we never really stopped working. And that’s human nature. The Jevons Paradox (an economic phenomenon observed when improvements in technology or efficiency increase the speed at which a resource is consumed, rather than decreasing it) has in fact gained attention in the context of the AI Revolution. That also reminds me of something my friend vented during our conversation: “If workplaces want to use AI so that we save time, then we all know that the time saved will be spent working more. It’s never really going to be time truly saved.”

So, if technological changes have always happened, why the fear? Why are we so scared? And more importantly, why are so many of us unprepared? Didn’t we have it coming all along?

That’s where things get really interesting. While change has always occurred, in every era and place under the sun, it also induces stress in many of us, because change threatens our stability, and our stability is often synonymous with our livelihood. Stability can mean a boring routine, but it can also mean having the ability to come back home every day and seek shelter under the same roof, one we know we can count on day or night. So, understandably, when a storm comes and blows our roof away, that’s a change we don’t embrace. That’s a change that threatens our livelihood.

Nevertheless, change isn’t always bad. Change can mean winning the lottery. It can mean falling in love. It can mean birthing a child. It can mean discovering an amazing new combination of cheese-and-jam toast, one you never thought about before but now love. Change can be good, and we know that, but change is also something we all have different thresholds for. And in an accelerating world, many of us are tired of change. We’re tired of fast, repeated change having gradually become our reality. And that’s precisely where we split into two groups: those who fight, and those who flee.

Of course, fighting versus fleeing does not equate to bravery versus cowardice, or strength versus weakness. Philosophically, our responses to change can be dissected in order to be understood. For instance, Aristotle explained that, as humans, we need a sense of purpose and trajectory in life. Aristotle called that telos. When change comes into our lives, our telos is placed under threat. Our pre-established trajectory of life is jeopardized. It’s like driving your car and suddenly seeing a tree blocking the road ahead. Your entire itinerary has to be recalculated, and in the worst-case scenario, you might not even make it to your destination on time. Fortunately for us, we nowadays have GPS, but the tree constitutes a setback nonetheless.

Unexpected changes are the reason why we have savings accounts, why we open insurance policies or keep spare house keys. We unconsciously welcome the possibility of change into our lives, and want to ensure that no matter what happens, we still come out on top. Planning for tomorrow and the uncertainty it brings is how we experience ourselves as ongoing beings rather than temporary events. We live in the present, whilst acknowledging that living will also happen beyond the present, in the future. So we can ask ourselves: is refusing to plan for tomorrow a philosophical rejection of the self as a long-term project? And on a psychoanalytical level, we can ask ourselves: is “living in the now” wisdom, or rather a psychic maneuver?

Living only in the present can in fact function as denial, by refusing to mentally represent a threatening future, but it can also function as a manic defense, with intensity, pleasure and immediacy serving as protection against despair. In philosophical terms, it’s, in a way, hedonism over stoicism. But on an even deeper level, only living in the now means foreclosing the future altogether. When the prospects of the future become unbearable, the psyche may amputate it. As organisms, we’re still alive, but we’re no longer investing in our own continuation. So, is it a quiet dance with death? Are we unconsciously flirting with Thanatos?

A little over a century ago, Sigmund Freud introduced in “Beyond the Pleasure Principle” his theory of the death drive—later dubbed Thanatos—not as an immediate desire to die or literal suicidal ideation, but as a pull toward stasis, toward rest, toward non-striving. Freud posited that the aim of all life is death, meaning living matter has an internal drive to return to an earlier, non-living state.

On a more concrete level, fighting and fleeing are both responses to a perceived loss of control. The split was never brave people versus weak people, or rationality versus irrationality, but rather the illusion of agency being preserved versus agency being relinquished. It’s adaptive over-identification versus adaptive withdrawal. Psychoanalytically, the fighters (if we really have to call them that) externalize anxiety, while the flee-ers internalize it. The fighter thinks “I can master this,” whilst the flee-er thinks “I’ll live first, and try to master it later.” So, again, neither is philosophically wrong per se. Both are responses to external change.

So we could envisage AI not as a villain, but as the umpteenth psychological stress test, revealing our tolerance for uncertainty, our relationship to time, and whether we still believe tomorrow is worth investing in. We could almost ask ourselves: is the machine trying to kill us, or is it asking us if we still want to continue? Now of course, some may say I did a lot of rambling all along without ever mentioning that it isn’t really “the Machine” that is after us, but rather a capitalist elite unleashing their goon-bots onto the masses. But does that really change anything? At the end of the day, change is coming, and how we respond to it reveals whether we are still investing in tomorrow, or quietly learning to let it go.


Sources
Wikipedia contributors. (n.d.). Watt steam engine.
https://en.wikipedia.org/wiki/Watt_steam_engine

Wikipedia contributors. (n.d.). Assembly line.
https://en.wikipedia.org/wiki/Assembly_line

Wikipedia contributors. (n.d.). Jevons paradox.
https://en.wikipedia.org/wiki/Jevons_paradox

Wikipedia contributors. (n.d.). Telos.
https://en.wikipedia.org/wiki/Telos

Wikipedia contributors. (n.d.). Thanatos.
https://en.wikipedia.org/wiki/Thanatos