The Limitations That Algorithms and AI Share With the Ford Pinto and Exploding Pressure Cookers

Algorithms and AI are the most exciting and profound product technology to enter society since the perfection of the internal combustion engine (although I would argue that the development of a family-affordable car was far more important). Yet for all their awe-inspiring complexity, and their ability to make discoveries humans could only hypothesize, they are still products. Thus, they can harm humans through poor design and/or defective components. In other words, algorithm-based products could become as infamous as the Ford Pinto, whose gas tank exploded in rear-end collisions, or lead paint, which continues to cause brain damage in children a hundred years after it was applied to house trim.

Recent news stories have provided clear examples of how readily AI and algorithms can be designed defectively. NBC News reported on an algorithm used to screen rental applicants that confused Hispanic names and thus denied housing to a Navy veteran with a top secret clearance based on the criminal convictions of a Mexican drug trafficker (https://www.nbcnews.com/tech/tech-news/tenant-screening-software-faces-national-reckoning-n1260975). When a product's purpose is to collect data and then draw accurate conclusions from that data far more quickly than any human could, yet the product cannot draw such conclusions accurately, its maker should be subject to liability for the design defect, just like the maker of a car model that cannot be driven safely down the road. Moreover, while that car might horrifically mow down two or three pedestrians on a sidewalk, the rental screening algorithm could permanently damage the credit histories of hundreds or thousands of renters. Worse, unlike the car's victims, the renters won't even know what hit them, since the opaque nature of such algorithms shields their defects from view.

That same news story noted a case headed to the Supreme Court involving an individual denied a car loan because a similarly named individual appeared on the U.S. government's terrorist watch list. This demonstrates how algorithms and AI share the same exposure to bad components as more mundane products, such as pressure cookers with bad seals. It takes only one faulty component to turn a pressure cooker into a time bomb in one's kitchen. With pressure cookers, however, the components are at least limited and identifiable. In the case of algorithms, every bit of data in the ether of the internet, social media, and the cloud can be erroneous. From a single erroneous data point can spring untold ramifications, as algorithms sifting through trillions of bits to make millions of decisions draw the wrong conclusions from that one bad input. In the case going to the Supreme Court, it took only the error-filled U.S. terrorist watch list, which has already kept senators and toddlers from boarding aircraft, to prevent innocent individuals from obtaining credit. Will those same individuals tagged as terrorists eventually be pulled over and shot one dark night by a security guard who has been told a terrorist is driving through the subdivision?

Before businesses, and eventually society itself, introduce artificial intelligence into every aspect of our lives, American jurisprudence must set standards that allow individuals harmed by defective algorithm-based products to recover for their injuries and to prevent that harm from recurring, to them and to their fellow citizens alike. Courts will even have to develop new procedural and evidentiary rules for breaking through the black boxes that shroud algorithm-based products.
Otherwise, the high technology meant to elevate us will literally kill us.