Does technology have ethics? The problem with Waymo, Carebots, and AirTags.

Jackie Sabillon
4 min read · Apr 18, 2022
[Image: an iPhone displaying an AirTag notification: “AirTag Found Moving With You — The location of this AirTag can be seen by the owner.”]

Do artifacts have ethics? Class opened this week with that question, and our readings aimed to answer it. As L.M. Sacasas puts it, “the question is not whether technology has a moral dimension, the question is whether we recognize it or not.”

I found Sacasas's essay enlightening. Why had I never considered that inanimate objects or software might have moral implications? To put it simply, I was never trained to. I earned my bachelor’s degree at the Savannah College of Art and Design. The school did an amazing job training me to become a UX designer, splitting its curriculum across design, art history, and programming classes. We learned about human-centered design, the UX design process, and how to break into the industry, but no class ever questioned the moral dilemmas of well-established technologies.

Take, for example, Waymo, the self-driving car company spun out of Google. Its cars have been seen driving around the Bay Area and in NYC, navigating in the early hours of the day to avoid other cars. What happens when one of them crashes? My guess is that Waymo would be at fault, since the cars are still in their testing phase. Once a car is available for purchase and a passenger is inside the vehicle, however, the moral implications change. Do you blame the driver (passenger?) or the vehicle? Other car manufacturers that claim self-driving capabilities have found a loophole: they require the driver to keep their gaze on the road (Mazda) or their hands on the wheel (Tesla). Is that a self-driving car, then? The simple answer is no, since there is still input from the driver, and if the car gets into an accident, the manufacturer can shift the blame away from itself.

Another interesting ethical dilemma centers on Carebots. Shannon Vallor's presentation “Do Carebots Care?” explains that tech ethics is an ongoing responsibility of technologists, not a box to be checked; there is no algorithm for robot or technology ethics. Like other innovative technologies, Carebots have limits and risks that need to be taken extremely seriously, especially because these bots serve vulnerable populations such as elderly or neurodivergent patients. One of the biggest problems encountered so far is a bot’s lack of emotion: robots can emulate human emotions and motivations, but they are unable to feel. This can lead humans to form emotional attachments to social robots that are never reciprocated. As someone deeply interested in the intersection of design and healthcare, I found Vallor’s presentation a wake-up call. Technology has the potential to expand access to care for vulnerable and isolated people, but it can also cause harm if ethics is not a crucial part of its design.

Lastly, and probably one of the most shocking ethical dilemmas I’ve witnessed, is the terrible rollout of Apple’s AirTags (I found a great reading on their ethics; read it here). Almost a year ago, Apple released these small, round devices to help users keep track of their belongings. At first, people were attaching them to backpacks, wallets, and even their dogs to keep track of their stuff (and pets). Devoted Apple users seem to love the product; however, several issues soon started to come up:

  1. Users are able to turn off notifications. What’s the big deal? Let’s say I leave an AirTag in my friend’s car and my friend has notifications turned off. My friend would have no idea that I was tracking them. Similarly, if I turn off notifications, others can track me without my knowledge. There is no setting that turns off other people’s ability to track me with an AirTag; as Per Axbom puts it, I can disable an AirTag only after I suspect it is tracking me (see the sketch after this list).
  2. If you are not an Apple user, you are screwed. When Apple released the AirTag, it did not warn Android developers about the product. Companies like Google were left scrambling for a way to protect Android users from being tracked by an AirTag, which eventually arrived in the form of a third-party app many months after the AirTag’s release. This, coupled with the privacy issues Apple users already faced, meant many people found random AirTags in their belongings with no idea who was tracking them. TikTok users began posting about finding planted AirTags and about Apple’s failure to address these privacy concerns. Many contacted the police, but since no harm had been done (yet), little was done to ensure their safety.
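
To make the asymmetry in point 1 concrete, here is a minimal sketch in Swift of the alert decision as described above. This is a hypothetical model, not Apple’s actual API: `NearbyTag`, `DeviceSettings`, `shouldAlert`, and the time threshold are all invented for illustration. The point is that the warning depends entirely on the tracked person’s settings; the AirTag’s owner keeps receiving location updates either way.

```swift
// Hypothetical model of the "AirTag Found Moving With You" decision.
// Names and thresholds are illustrative, not Apple's implementation.

struct NearbyTag {
    let id: String
    let hoursMovingWithUser: Double
    let isMine: Bool  // true if the tag is registered to this phone's owner
}

struct DeviceSettings {
    let trackingNotificationsEnabled: Bool  // user-controllable toggle
}

/// Returns true when the phone would surface an unwanted-tracking alert.
func shouldAlert(tag: NearbyTag, settings: DeviceSettings) -> Bool {
    guard settings.trackingNotificationsEnabled else {
        // Notifications off: the tag's owner still sees your location,
        // but you are never told. This is the gap described above.
        return false
    }
    // Alert only for someone else's tag that has followed you for a while.
    return !tag.isMine && tag.hoursMovingWithUser >= 4.0
}

let strangersTag = NearbyTag(id: "A1B2", hoursMovingWithUser: 6.0, isMine: false)
print(shouldAlert(tag: strangersTag,
                  settings: DeviceSettings(trackingNotificationsEnabled: false)))
// false: silently tracked
print(shouldAlert(tag: strangersTag,
                  settings: DeviceSettings(trackingNotificationsEnabled: true)))
// true: alert shown
```

Notice that nothing in this flow consults the tag’s owner: there is no code path by which I can opt out of being trackable, only a path by which my own device may, under the right settings, warn me after the fact.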
