Existentialism and technology: did I pick the wrong career path?

Jackie Sabillon
4 min read · Feb 6, 2022


Learning about philosophers is the last thing on my mind when I think about design. I’ve been a UX designer professionally for a couple of years now, and my career has greatly depended on disciplines such as history, psychology, and mathematics. Never would I have thought that philosophers could help me become a better designer… until this week.

We opened the class by watching 'Being in the World', a documentary about Martin Heidegger. Naturally, I googled his name, and the first search result discussed Heidegger's affiliation with the Nazi party. Surprised, I dug a little deeper and found that he was a prominent philosopher who introduced many ideas about existentialism through tools, design, and emotion.

I expected the documentary to introduce me to Heidegger's concepts and beliefs. Instead, it introduced me to people who studied Heidegger's ideas and formed their own views on technology. One of them is the philosopher Hubert Dreyfus, who opens the artificial intelligence debate. He argues that AI research is based on a poor understanding of humans: the human mind cannot be replicated, no matter how many facts you cram into a computer. John Haugeland and Taylor Carman follow this argument by revisiting Husserl's and Stein's ideas on what it means to be a human being, explaining that humans care about things and machines don't. To quote Haugeland:

“It matters to us what happens in the world. We give a damn.”

Humans are logical and rational, but we are also moved by emotions. We are capable of empathizing with our peers and can use our senses to feel, something that no machine has been able to replicate. Based on these claims, I found it fascinating that MIT researchers, engineers, and scientists completely changed their view on AI and machine learning, accepting that human intelligence is unique to its possessor. I was left pondering what it means to be human.

Heidegger’s student Hannah Arendt expands on the concept of being human by talking about the three fundamental human conditions: labor, work, and action. I’ll admit, I didn’t quite grasp these definitions at first glance, but I’ll try my best to explain them:

Labor — refers to all human biological processes. Think about the process of breathing, digesting food, growing, etc. It is also associated with survival, sustenance, and productivity.

Work — provides “artificial” things and is associated with durability and stability.

Action — arises among the plurality of people who live in and inhabit the world. It focuses on the multiplication and survival of the species and is associated with the freedom of contestation and collaboration in politics.

These conditions, according to Arendt, make us human, or Dasein. We are conditioned to talk, walk, and contribute to society as a means of survival and development of our species, or action. However, as society evolved and technology flourished, we started prioritizing work and labor over action. Arendt feared that the automation of action would lead to an unfulfilled vita activa, or active life. In other words, if we rely on machines to do our thinking and speaking, we can become slaves to work and labor and forget to exercise our right to political participation. To Arendt, the question of technology is a question of how human beings live and act together.

Should we all fear technology, then? Heidegger claimed we would all be molded and shaped by technology one day, losing our sense of self-identity and consciousness. Arendt claimed that technology is necessary, but that automation would ruin us. Does this mean I am contributing to the formation of this dystopian future because of my profession? Not really. There's another view on technology that I haven't touched on, one that helped me tie these ideas together. Albert Borgmann, much like me, questioned his predecessors about what type of future humanity is building. His ideas expanded on Arendt's fears of automation by arguing that not all technology makes us slaves to work. He drew a distinction between things and devices: things require engagement and bring people together; devices do the opposite, automating an action and tending to make people drift apart.

Imagine you're making coffee one morning for yourself and your partner. Grinding the coffee beans, measuring the grounds, and pouring hot water for a pour-over are all engagements with things. You were involved in every step of the process, making sure your beans were ground to a fine powder but not too fine. You slowly poured the hot water over your grounds, adding enough to cover them but not so much that it would spill over. Maybe your partner came over to chat, enjoying the smell of the freshly ground beans. Making coffee brought people together.

Now, if you had a Keurig machine, all you would need to do to produce a cup of coffee is grab a Keurig pod and pop it in the machine. The process of making coffee has been automated. The Keurig machine is not bringing anyone together. The Keurig machine is a device.

This important distinction tied up the remaining loose ends on what technology is. Humanity will continue evolving with technology, but that doesn't mean our species is doomed because of it. Heidegger failed to see that not all technology is destructive. Still, his ideas persuaded MIT researchers that AI could not replace human intelligence, Arendt reinforced this with her account of the human condition, and Borgmann cleared my worries by helping me see which tools will help me grow instead of replacing my actions. This week had me doubting my professional choices, but I gained a whole new understanding of what it means to be in the technological world we live in.
