Artificial Intelligence and the Dangers of Human Error


Fifteen people were killed and 14 injured last April when a junior hockey team from Saskatchewan, Canada, was involved in a bus crash on the way to a tournament. Compare this to the 12 kids who died in the Columbine shooting.

There are about 1,250,000 traffic deaths annually worldwide. In 2016, there were 37,461 traffic deaths in the United States alone, 2,000 of them children under 16. By comparison, there were 11,004 homicides by firearm. Everybody knows about Stoneman Douglas or Santa Fe. But 15 kids dying in a car crash? Eh, that’s just Tuesday night! And Wednesday… and Thursday… and Friday, Saturday, Sunday, Monday…

[Image: highway crash scene. Source: CNN]

Tesla has developed, and continues to develop, self-driving technology that uses artificial intelligence. In its current form, Tesla’s Autopilot reduces crash rates by 40%. If that reduction held worldwide, roughly 500,000 of those 1,250,000 annual traffic deaths could be prevented. And it’s only improving: Tesla’s goal is for Autopilot to outperform human drivers by a factor of 10.
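The scale here is worth a quick sanity check. Assuming the reported 40% crash-rate reduction translated one-to-one into a 40% reduction in deaths everywhere (a big simplification), the arithmetic looks like this:

```python
# Back-of-the-envelope only: assumes the 40% crash-rate reduction
# applies uniformly to all annual traffic deaths worldwide.
worldwide_traffic_deaths = 1_250_000  # annual figure cited above
autopilot_reduction = 0.40            # Tesla's reported crash-rate reduction

lives_saved = round(worldwide_traffic_deaths * autopilot_reduction)
print(lives_saved)  # prints 500000
```

Half a million people a year, from technology that already exists in some form today.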

But what happened in March when a man ignored his vehicle’s repeated alerts to retake control and crashed into a highway crash attenuator (an impact-absorbing barrier) that had been crushed in an earlier collision and never repaired? He died, and news reports, local and national, demonized the technology. What happens when a carful of drunk teens doesn’t die in a crash? Nothing. Implying that self-driving technology is unsafe is confirmation bias. That man died because he overestimated his car’s tech; autonomous cars aren’t here yet. And if the media keeps sensationalizing self-driving AI as dangerous whenever somebody gets hurt, they might never be. It’s not a car “accident” if it could have been prevented. The 15 kids dead in Canada? That was no “accident.” It isn’t big news because it was a human who killed those kids. If an algorithm made the same mistake, it would be in the news for weeks.

Irrationally, we cling to human intuition to solve our problems despite having computers that indisputably outperform us. Nurses and midwives used to determine intuitively whether a newborn needed medical attention, and they would often misdiagnose an unhealthy baby as healthy, contributing to higher infant mortality. One doctor fixed this by rating five of the baby’s qualities one minute after birth (skin color, heart rate, breathing, and so on) and using a simple algorithm on those scores to determine the necessary care. This proved far more effective than intuition alone.

Another example: a Chicago hospital was completely overwhelmed by the number of patients coming in. Many arrived with chest pain, which can indicate a heart attack but can also mean nothing at all. Doctors had to discern between the two quickly and precisely, since a missed heart attack could be fatal. They created a handful of categories on which to rate each patient and deliberately ignored everything else the patient reported; the extra information only weakened diagnosis. When doctors trusted their own intuition over the algorithm, misdiagnosis rates increased. It feels unnatural to hand our mortality over to boxes of categories. But it would be irresponsible for your doctor or midwife to argue with these statistical facts, believe they could outperform the algorithm, and possibly kill a newborn or a heart patient.
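The whole point of such a checklist is how mechanical it is. Here is a minimal sketch of what that newborn-scoring rule looks like in code; the five signs and the cutoff of 7 follow the general shape of the real scoring system, but treat the details here as illustrative, not clinical criteria:

```python
# Illustrative sketch of a checklist-style newborn score: five signs,
# each rated 0-2 by the attendant, summed into a single total.
# The sign names and threshold are illustrative, not medical guidance.

SIGNS = {"skin color", "heart rate", "breathing", "muscle tone", "reflexes"}

def checklist_score(ratings):
    """ratings: dict mapping each sign to an integer 0, 1, or 2."""
    assert set(ratings) == SIGNS, "must rate exactly the five signs"
    assert all(r in (0, 1, 2) for r in ratings.values())
    return sum(ratings.values())

def needs_attention(ratings, threshold=7):
    # A total below the threshold flags the newborn for immediate care.
    # No field for "gut feeling": intuition can't quietly override the rule.
    return checklist_score(ratings) < threshold

print(needs_attention({"skin color": 1, "heart rate": 1, "breathing": 1,
                       "muscle tone": 1, "reflexes": 1}))  # prints True
```

Ten lines of logic, and it beat trained professionals at the one thing that mattered: not waving through a baby who needed help.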

This same logic applies to AI technology. Undoubtedly we must proceed with caution in producing these technologies: a systematic error that decreases a computer’s effectiveness by even 0.001% potentially costs multiple lives, and any haphazardness in implementation should be met with intense scrutiny. But when the media sensationalizes this technology and reaffirms our irrational fears, it is not acting as a constructive critic; it is sabotaging the technology’s success. Just as it’s irresponsible for a doctor to think his judgment superior to the computer’s, it’s just as irresponsible to perpetuate any notion that human error is less dangerous than the computer’s.

The sad inevitability of artificial intelligence is baked into how it learns. A self-driving system uses learning algorithms and neural networks to store and apply experience. For example, the computer is programmed to stop when a kid is in the road, and every time it encounters a child it logs everything in its environment so it can react better the next time a similar scenario occurs. Over hundreds and hundreds of miles, it performs as expected. But when the AI does make a mistake, it won’t be on an average encounter. It will fail on the most extreme of extreme cases, scenarios that occur so rarely the computer has little if any knowledge of how to react. Such an instance would be extremely rare, almost impossible when compared with how often human drivers err. But frequency doesn’t matter, because it only has to happen once for the media to lambast the technology. Humans killing themselves and each other every day? Not worth a conversation, let alone the news. Has it not occurred to anyone that an autopilot crash makes the news precisely because it is newsworthy, and that it’s newsworthy because it doesn’t usually happen?
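Even a system that hits the “10x better than humans” goal would still produce crashes at scale, and each one would be a headline. The numbers below are rough assumptions (U.S. mileage and fatality rates on the order of published figures, applied naively) just to show the shape of the problem:

```python
# Rough illustration, not a forecast. Assumed round numbers:
# U.S. drivers log on the order of 3.2 trillion miles per year,
# at roughly 1.25 deaths per 100 million miles driven.
miles_per_year = 3.2e12
human_fatal_rate = 1.25 / 1e8          # deaths per mile, human drivers
ai_fatal_rate = human_fatal_rate / 10  # the "10x better than humans" goal

human_deaths = miles_per_year * human_fatal_rate
ai_deaths = miles_per_year * ai_fatal_rate
print(round(human_deaths), round(ai_deaths))  # prints 40000 4000
```

Forty thousand human-caused deaths barely register. Four thousand algorithm-caused deaths would each make the news, even though the algorithm is the one saving 36,000 lives a year.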

That doesn’t change the fact that fearing this technology only puts more lives in danger. Why do we accept that cars are designed to survive a collision not if it happens, but when it happens? We treat $10,000 pieces of machinery like disposable K-Cups for a Keurig. We need to embrace artificial intelligence, not just in the auto industry but in every aspect of life it can help. Approach with caution, but don’t turn around.

 

There will be an article about artificial intelligence and unemployment coming soon!

  1. Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
  2. https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812451
  3. https://ucr.fbi.gov/crime-in-the-u.s/2016/crime-in-the-u.s.-2016/tables/expanded-homicide-data-table-4.xls
  4. Gladwell, M. (2005). Blink: The Power of Thinking Without Thinking. New York: Little, Brown and Co.
  5. An Update on Last Week’s Accident. (2018, March 31). Retrieved from https://www.tesla.com/blog/update-last-week’s-accident
