1: Mindy

On a recent morning, I attempted to drive my trusty old Prius, Mindy, and, by instinct, toggled the gear selector into [D] (as in “Drive”). It was at this moment that Mindy decided to go sentient, filling the cabin with an incessant, deafening tone, accompanied by a surfeit of red warning lights & rather ominous instructions.

[Image: Prius Warning 1]

After pulling into a nearby driveway and averting a mild panic attack, I wondered what terrible, terrible thing could have befallen Mindy. “Park”? “Batteries”? “Transaxle”? Those words could have meant an imminent car-wide failure, a transmission I had somehow trashed, or, worse yet, a failed battery pack.

As that display had instructed, shifting into [P] made all the noise & lights go away. I tried [D] once more; Mindy screamed. A couple of minutes later, I took a closer look at one of the warning lights: the “door open” indicator.

[Image: Prius Warning 2]

Turns out, the driver’s-side door was ever-so-slightly open. Once I closed it, [D] worked as expected, without any further intrusions from Mindy.


2: Air France Flight 447

This incident reminded me of a Vanity Fair feature, written by William Langewiesche, on Air France Flight 447. Investigators had little idea why the Airbus A330 crashed in the middle of the Atlantic Ocean in 2009—until search teams recovered the black boxes from the ocean floor in 2011. Investigators were then able to piece together the events that caused the plane to dive from 36,000 feet to the ocean’s surface in under four minutes.

Langewiesche’s reporting on the A330’s automated flight & warning systems grabbed my attention when I first read the article last year. That plane’s seemingly minor design issues mirror Mindy’s.

In the cockpit, the situation was off the scale of test flights. After [Captain] Dubois arrived [from his break], the stall warning temporarily stopped, essentially because the angle of attack was so extreme that the system rejected the data as invalid. This led to a perverse reversal that lasted nearly to the impact: each time [First Officer & Pilot Flying] Bonin happened to lower the nose, rendering the angle of attack marginally less severe, the stall warning sounded again—a negative reinforcement that may have locked him into his pattern of pitching up, assuming he was hearing the stall warning at all.

[Emphasis mine.]

Another design quirk only added to the confusion once the inexperienced pilots’ Crew Resource Management (CRM) protocols broke down:

[T]he pilot and co-pilot’s side-sticks are not linked and do not move in unison. This means that when the Pilot Flying deflects his stick, the other stick remains stationary, in the neutral position. If both pilots deflect their sticks at the same time, a DUAL INPUT warning sounds, and the airplane responds by splitting the difference. To keep this from causing a problem in the case of a side-stick jam, each stick has a priority button that cuts out the other one and allows for full control.

The arrangement relies on clear communication and good teamwork to function as intended. Indeed, it represents an extreme case of empowering the co-pilot and accepting CRM into a design. More immediately, the lack of linkage did not allow [Backup Pilot] Robert to feel [First Officer] Bonin’s flailing.

[Emphases mine.]

Robert actually thought he was in control of the plane for those final minutes, but Bonin had pressed the priority button on his own side-stick. Amid the commotion on the flight deck, Robert couldn’t tell that he wasn’t really in control.


3: The Point

Mindy’s & the A330’s automated systems technically worked as designed. If Mindy’s driver’s-side door is ajar, the vehicle automatically shifts into [N], presumably so the driver won’t fall out of a car moving at freeway speed. And the display gives correct information: the battery system can’t be charged while in [N]. But Mindy’s blaring warning tone & opaque instructions didn’t translate into a remedial action I could take.
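
To make that concrete, here’s a minimal sketch of the two warning philosophies, written as hypothetical Python. It is not Toyota’s actual logic; the function names and messages are invented for illustration.

    # Hypothetical illustration only; not Toyota's firmware.
    # Both warnings fire on the same condition, but only one is actionable.

    def warn_state_dump(door_ajar: bool, gear: str) -> str:
        """Mindy-style: dumps internal state; technically true, but no remedy."""
        if door_ajar and gear == "D":
            return "Shift to P. Battery charging unavailable. Check transaxle."
        return ""

    def warn_remedy_first(door_ajar: bool, gear: str) -> str:
        """People-first: names the cause and the one action the driver can take."""
        if door_ajar and gear == "D":
            return "Driver's door is ajar. Close it to drive."
        return ""

    print(warn_state_dump(True, "D"))    # the opaque message I got
    print(warn_remedy_first(True, "D"))  # the message I needed

A remedy-first message would have spared me the roadside detective work.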

These particular systems exhibit Artificial Narrow Intelligence (ANI), which Tim Urban describes in his eye-opening article The AI Revolution: The Road to Superintelligence as:

[Computers that] only take in stagnant information and process it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.

How do ANIs’ problem-solving capabilities compare to those of humans?

Hard things—like calculus, financial market strategy, and language translation—are mind-numbingly easy for a computer, while easy things—like vision, motion, movement, and perception—are insanely hard for it. Or, as computer scientist Donald Knuth puts it, “AI has by now succeeded in doing essentially everything that requires ‘thinking,’ but has failed to do most of what people and animals do ‘without thinking.’”

We are in a transitional period, in which computers excel only at grunt work; people are still required to make sense of all that information & act on it. Mindy’s and the A330’s systems worked as originally designed, but the engineers who created them failed to take into account how people might interact with them in real-world situations.

Systems designed from the start with people in mind offer actual remedies & choices that can save lives. At a minimum, panicked Prius owners shouldn’t need to flee to the internet to decipher a startling warning.