Credit determination by algorithms

Defeating the Fair Housing Act with Algorithms, as Proposed in New Trump Regulations

The Trump Administration is proposing new regulations under the Fair Housing Act that would turn the worst aspects of algorithms against plaintiffs who would otherwise have a housing discrimination case under the Act. Currently, a plaintiff who was denied a housing loan, or whose offer to buy a house or rent an apartment was rejected, can prove a violation of the Fair Housing Act by showing that the lender's or property owner's pattern of denials resulted in a "disparate impact" on minorities or women. The plaintiff thus need not prove the defendant harbored an intent to discriminate, but can let the results of the defendant's actions, in essence, speak for themselves. As reported by David Gershgorn in a OneZero post on Medium (https://onezero.medium.com/a-proposed-trump-administration-rule-could-let-lenders-discriminate-through-a-i-2f9a729b0f3c), HUD wants to adopt regulations that would allow discriminators to avoid evidence of disparate impact simply by using a well-designed algorithm as the tool of discrimination.
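To make the statistical nature of such a showing concrete, here is a minimal, hypothetical sketch of one common screen borrowed from discrimination practice, the "four-fifths rule," which flags a group whose approval rate falls below 80% of the most-favored group's rate. This example is mine, not anything in the post or the proposed rule, and every name and figure in it is invented.

```python
# Hypothetical illustration of a disparate-impact screen (the "four-fifths rule").
# All group names and figures below are invented for illustration only.

approvals = {
    # group: (applications, approvals) -- hypothetical figures
    "group_a": (1000, 720),
    "group_b": (400, 180),
}

# Approval rate for each group.
rates = {g: approved / applied for g, (applied, approved) in approvals.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "possible disparate impact" if ratio < 0.8 else "within four-fifths threshold"
    print(f"{group}: approval rate {rate:.0%}, ratio to most-favored group {ratio:.2f} -> {flag}")
```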
In my book (see the listing under "Publications"), I warn that algorithms could quickly become a tool to rationalize all sorts of discriminatory actions, leaving plaintiffs unable to prove discrimination precisely because the defendant used an algorithm to carry it out. Algorithms operate in what Prof. Frank Pasquale has called a "Black Box": no one can be sure what data an algorithm used to reach its conclusions, nor the process by which it used that data to reach them. In fact, the more sophisticated the algorithm, the more its "machine learning" capabilities will obscure how it reached its conclusions, because it will have taught itself the most expedient path to those conclusions, regardless of what its designer originally intended. To make matters worse, the databases algorithms rely on are often infected with decades of discriminatory results, and algorithms have a nasty tendency to "learn" that past discrimination and employ it as an "efficient" shortcut to a conclusion. After all, what is easier for an algorithm than denying a housing loan the moment it determines an applicant is a minority, a woman, or a resident of an area with higher historical rates of mortgage defaults? The sketch below illustrates that proxy mechanism.
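The following toy example is my own hypothetical sketch, not anything in HUD's proposal: a "model" trained on invented, historically biased lending records that never sees race or sex, only a zip-code flag. Because the historical bias acted through zip codes, the facially neutral rule reproduces the disparate impact anyway. All fields and figures are invented.

```python
# Hypothetical sketch of proxy discrimination: a facially neutral rule learned
# from historically biased lending records reproduces that bias through a proxy.
import random
random.seed(0)

# Synthetic history: protected-class applicants mostly live in formerly redlined
# zip codes, and loans in those zip codes were historically denied far more often.
history = []
for _ in range(5000):
    protected = random.random() < 0.5
    redlined_zip = random.random() < (0.8 if protected else 0.1)  # zip correlates with group
    denied = random.random() < (0.7 if redlined_zip else 0.1)     # past bias acted through zip
    history.append((redlined_zip, denied, protected))

# The "model": deny any applicant whose zip-code flag historically had a denial
# rate above 50%. Protected status is never an input.
def historical_denial_rate(records, zip_flag):
    outcomes = [denied for z, denied, _ in records if z == zip_flag]
    return sum(outcomes) / len(outcomes)

rate_by_zip = {flag: historical_denial_rate(history, flag) for flag in (True, False)}

def model_denies(zip_flag):
    return rate_by_zip[zip_flag] > 0.5

# The model's outcomes still differ sharply by protected status.
for label, is_protected in (("protected-class applicants", True), ("other applicants", False)):
    group = [z for z, _, p in history if p == is_protected]
    denial_rate = sum(model_denies(z) for z in group) / len(group)
    print(f"{label}: denial rate under the 'neutral' model = {denial_rate:.0%}")
```

Run as written, this hypothetical model denies roughly 80% of protected-class applicants and about 10% of everyone else, even though protected status was never an input, which is exactly the pattern a disparate impact claim is designed to catch.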
HUD should be working on regulations to prevent algorithms from becoming 21st-century tools of redlining. Unfortunately, it is doing the exact opposite. HUD is proposing that discriminating lenders and property owners be able to defeat disparate impact claims simply by using algorithms to make the discriminatory decisions for them. The regulations would establish five elements that could serve as defenses against disparate impact claims. One element would be that the algorithm was designed to use "objective" criteria to reach its conclusions. Another would allow the algorithm's user simply to hire an expert to opine that the algorithm appears to be working objectively. Both are precisely the masks behind which algorithm-based discrimination is already occurring, and thus should be the subject of regulations against their use, not regulations endorsing it.
As my book details, given algorithms' tendency to discriminate, combined with their Black Box nature that precludes direct evidence of a discriminatory process, algorithms are the perfect tool for obscuring discrimination that statistical evidence in a disparate impact claim would otherwise demonstrate. This is only the latest example of governments hiding behind algorithms to discriminate, in contexts ranging from criminal sentencing to child welfare investigations. The algorithm industry is quite happy to supply experts to testify how objective those algorithms "really" are in the face of clear evidence of disparate impact. But that raises an obvious question: if a plaintiff cannot determine how an algorithm reached its discriminatory result, how can those experts claim any basis for their opinions, when they are equally in the dark about how the algorithm reached that result? If these regulations are enacted, they could mark the start of a destructive use of algorithms to render any disparate impact claim impotent.

Posted by Alfred Cowger