
Complete Machine Autonomy? It’s Just a Fantasy

Maya Indira Ganesh — Research Associate, Media Theory Department, Karlsruhe University of Arts and Design

The endless human–machine monitoring loop of human bodies, minds and hearts in driverless cars

According to the National Transportation Safety Board’s investigation of the Uber crash that killed Elaine Herzberg in Tempe, Arizona, in March 2018, the test driver spent 34 percent of her time looking at her phone, which was streaming the TV show The Voice.1 In the three minutes before the crash, she glanced at her phone 23 times; as the report notes, “The operator redirected her gaze to the road ahead about one second before impact.” We know this because the Volvo she was test-driving for Uber was fitted with a driver-facing camera. In fact, today’s almost-driverless cars are fitted with an array of cameras, sensors, and audio-recording equipment to monitor their human drivers. Why should human drivers be under surveillance? And if a car is driverless, why does it need a human driver at all?

The autonomous vehicle (AV) is supposed to drive itself: to navigate a path between two points and make decisions about whatever it encounters along the way. However, there is still some distance to go before this is technically feasible. Currently, semi-AVs require human drivers to remain alert and vigilant, ready to take over at a moment’s notice should something go wrong. That is exactly what the Uber test driver did not do. Nor did the drivers in three other fatal accidents involving semi-AVs. In each case, the driver did not take back control from the semi-AV because he or she was distracted by something else; ironically, the driverless car is supposed to free up the human to do other things. I refer to this as an ‘irony of autonomy,’ playing on what researcher Lisanne Bainbridge wrote about automation in 1983: “The automatic control system has been put in because it can do the job better than the operator, but yet, the operator is being asked to monitor that it is working effectively.”

And now the loop of human-and-machine has become a spiral. The human who oversees the semi-AV is, in turn, overseen by a different kind of technology: affective computing. Affective computing is an applied, interdisciplinary field that analyzes facial expressions, gait, and posture to infer emotional states. By mapping points on the face and how they move when a particular emotion is expressed, and combining those readings with posture and gait, affective computing can allegedly tell what a human is feeling. However, after reviewing a thousand studies, psychologists convened by the Association for Psychological Science concluded: “Efforts to simply ‘read out’ people’s internal states from an analysis of their facial movements alone, without considering various aspects of context, are at best incomplete and at worst entirely lack validity, no matter how sophisticated the computational algorithms.”
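
To see how fragile this kind of inference can be, consider a minimal sketch of the landmark-geometry approach such systems typically build on. Everything here is hypothetical: the three features (brow-to-eye gap, mouth-corner spread, mouth openness) and the prototype values are invented for illustration and taken from no real product. The psychologists’ objection is visible in the code itself: geometry is mapped to a label with no notion of context.

```python
import math

# Hypothetical feature vectors: normalized distances between facial landmarks,
# ordered as (brow-to-eye gap, mouth-corner spread, mouth openness).
# These prototype values are invented for the example.
EMOTION_PROTOTYPES = {
    "neutral":  (0.30, 0.50, 0.10),
    "anger":    (0.18, 0.45, 0.12),  # lowered brows, tightened mouth
    "surprise": (0.42, 0.48, 0.35),  # raised brows, open mouth
}

def classify_emotion(features):
    """Return the emotion label whose prototype is nearest in feature space."""
    return min(
        EMOTION_PROTOTYPES,
        key=lambda label: math.dist(features, EMOTION_PROTOTYPES[label]),
    )

observed = (0.40, 0.47, 0.33)      # hypothetical measurement from one video frame
print(classify_emotion(observed))  # -> "surprise", regardless of what caused the expression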

Despite this, the “emotional AI” industry is estimated to be worth $20 billion. Affectiva, an emotional-measurement technology company, writes that its product “understand[s] drivers’ and passengers’ states and moods . . . to address critical safety concerns and deliver enhanced in-cabin experiences . . . unobtrusive measures, in real time, complex and nuanced emotional and cognitive states from face and voice.” In vehicles, affective computing is pitched as a way to detect conditions such as road rage and driver fatigue. During the past year, I have identified 47 affect patents: patents for innovations that register the affective and psychological states of humans in vehicular contexts. Patents signal intent to markets, customers, and competitors more than they conclusively verify the state of a technology. Still, they make for fascinating reading. Patent application JP-2005143896-A, for instance, proposes to “determine the psychological state of a driver.” Its telematics sensors generate data from (1) the force with which a driver steps on the brake and (2) the torque applied to the steering wheel when turning. Patent application US-2017294060-A1 uses an on-board diagnostic system to record the driver’s behavior and offer real-time advice on driving in a fuel-efficient manner.
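
To make concrete what a filing like JP-2005143896-A claims, here is a minimal sketch of a rule that turns the two signals named above, brake force and steering torque, into a “psychological state.” The thresholds, labels, and function name are all assumptions invented for this example; the patent does not publish its logic in this form.

```python
def infer_driver_state(brake_force_n, steering_torque_nm):
    """Map two telematics signals to a coarse, hypothetical 'psychological state'.

    Thresholds and labels are invented for illustration only.
    """
    abrupt_braking = brake_force_n > 400          # hypothetical threshold (newtons)
    jerky_steering = abs(steering_torque_nm) > 6  # hypothetical threshold (newton-metres)
    if abrupt_braking and jerky_steering:
        return "agitated"
    if abrupt_braking or jerky_steering:
        return "stressed"
    return "calm"

# Hypothetical samples from a drive: (brake force in N, steering torque in N*m)
for sample in [(120, 2.0), (450, 1.5), (500, 8.0)]:
    print(sample, "->", infer_driver_state(*sample))
```

Note what the heuristic cannot distinguish: a driver braking hard to avoid a child and a driver braking hard out of rage produce the same data, which is precisely the context problem the psychologists’ review identifies.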

The monitoring of human drivers in semi-AVs is likely to increase, both for safety and for the management of future insurance and liability claims. The irony, then, is that autonomy for machines is not in fact a real separation of human and machine, as it is often imagined; it is enabled by human bodies and minds, and monitored and managed by computing programs, even as the humans maintain the fantasy of machine autonomy.
