I Was Interviewed About the Content of My Last Blog

My last posting was a complaint about how those seeking vaccinations from local pharmacies are being required to join the pharmacies’ Member Clubs. The local FOX affiliate, Channel 19 in Cleveland (WOIO), picked up on the story. Its investigation corroborated what I asserted, and the reporter also found the pharmacy’s excuse to be weak. The story can be seen here: https://www.cleveland19.com/2021/01/26/made-sign-up-receive-marketing-ads-get-covid-vaccine-some-say-thats-not-right/

Posted by Alfred Cowger

Covid Vaccinations and the Forced Loss of Privacy

Unnoticed by virtually everyone, as far as I can tell, is yet another impingement on the control of one’s private data, this time by way of the Covid vaccine. At least here in Ohio, many of the sources of vaccinations are private pharmacies, not public hospitals. When I registered my mother (who qualifies for the 1B category of persons entitled to vaccines so far), I was required to enroll her in the “savings club” of the store in which the pharmacy was located. So, in order to get life-saving treatment, my mother was forced to turn over data about herself without compensation, let alone control.

Once again, a government ignorant about the ramifications of losing control of one’s private data has allowed exactly that to happen. In this Age of Algorithms, where AI and algorithms work best if they have access to a variety of large databases, our personal data is among the most valuable commodities we own. We would never allow a private operation to show up one day and say “Your back yard is a good place to store my industrial supplies–get out of my way and, oh, move your dog someplace else.” Worse, if we learned a government program had been set up to allow such a trespass, the citizenry would be protesting in front of City Hall by nightfall. So why should the government allow retailers to demand that senior citizens give away their personal information for free, and in fact require them to sign on to an activity that is meant to mine even more data about their buying habits? The government, instead, should be ordering these retailers to collect data only for purposes of setting up vaccination reservations, and proscribing the use of that data for marketing or any other purpose.

To make matters worse, the “Privacy Policy” of the establishment where my mother is registered is a tangle of contradictory legal clauses. The retailer states that it will only give my mother’s information to its vendors in a non-individualized format, but then states the vendors may “bring to” my mother offers for sales and marketing purposes. That means the vendors are getting not just anonymous demographic data about my mother, but her personal data tied to her address, phone number and email address. Otherwise, how are these vendors going to “bring” directly to my mother offers that are based on her demographics? So, the promise that her personal information will not be used is immediately superseded by the fact that vendors may use her personal information. All of this data can be re-sold to third-party data aggregators multiple times, put into a database with unrestricted access by third parties, and otherwise used to label and categorize my mother literally forever.

Europe is well ahead of the U.S. in ensuring that individuals know how their data might be used, and in preventing the exploitation of that data without individuals agreeing to that use. Americans should not be forced to choose between privacy and health. This vaccination effort could, instead, be an opportunity to set a precedent whereby Americans are entitled to goods and services in the Age of Algorithms without having to both pay for those goods and services and hand over valuable data without proper compensation.

Posted by Alfred Cowger

The Use of Algorithms to Substantiate Anything–Including Treason

So much could be written about how algorithms directly contributed to those historic, awful events on January 6, 2021. Marketing algorithms were most surely used to identify persons who might be interested in joining the coup planning, and to aim claims of election fraud directly at those who would be most swayed by them. Algorithms SHOULD have been used by law enforcement prior to the coup attempt to track the social media postings about what would happen on January 6th, though it is apparent that whatever results were discoverable via algorithms were ignored by senior U.S. Capitol security officials.

However, what I want to focus on is how the term “algorithm” itself was used to make falsehoods seem true. Algorithms, according to the affidavits of alleged experts, proved pervasive election fraud. For example, algorithms were used to derive expected election outcomes based on early returns on Election Night. When the final vote counts did not result in a Trump victory, these algorithms concluded the only explanation was that, as the evening’s count went on, Trump votes were destroyed and Biden ballots were stuffed via some unexplainable means. At least one “expert” used algorithm-driven comparisons between the 2016 and 2020 elections to “prove” that vote fraud had to have been systemic. Another went so far as to claim that the fraud was so well done it could not be discerned from evidence, but only from algorithmic analysis. Dozens of courts quickly rejected these analyses as proving nothing, except that the algorithm designs, the databases being used, and/or the conclusions derived by the so-called experts were obviously and fundamentally flawed. Notwithstanding those rulings, the word “algorithm” was sufficient for Trump supporters to dismiss the decisions of all these courts without a moment’s question.

As the coup was unfolding, I began reading nearly verbatim postings by Trump supporters claiming that those attacking the Capitol and its police were actually Antifa and Black Lives Matter infiltrators. Trump’s legions cited a Washington Times report stating that algorithm-based facial recognition technology had proven known Antifa members were at the Capitol. Within twenty-four hours, the company the Times claimed had undertaken the analysis announced the claim was a complete falsehood. Indeed, that company had found multiple instances of right-wingers who had been at previous violent street actions appearing in the halls of the Capitol after it was breached. Once again, however, Trump supporters wanting to justify the actions of the Capitol rioters latched on to the claim that algorithms had proven the presence of Antifa, and they were unwavering in repeating this assertion well after the coup was suppressed and the claim was refuted completely. Even the videos of Trump flag-waving rioters screaming “Hang Pelosi” and “Hang Pence”, vocalizing their intent to stop the electoral vote certification and harm elected officials who stood in their way, were not enough to convince Trump supporters that the rioters were not Antifa and BLM members. Algorithms had already pronounced the “truth”, and that was enough for Trump’s supporters.

Until society becomes used to, and perhaps healthily jaded by, the application of algorithms, algorithms will take on almost mystical properties. Studies have already shown that humans tend to presume that if a technology is complicated, it must be right. The less a person understands the limitations of algorithms, the more likely that person is to believe any claim allegedly based on an algorithm. Even those who should know better than to believe a claim simply because it has a high-tech basis, such as judges, doctors and business professionals, tend to initially believe algorithms over humans. They often do not alter those initial opinions even when their own intuition and experience cause them to question an algorithmic conclusion.

History will eventually adjudge last week’s coup attempt to be one of the most extreme and dangerous examples of using the term “algorithm” to prove falsehoods. However, it should also be a lesson for what could happen in situations with less widespread, yet still serious, implications. Individuals harmed by algorithm-based conclusions denying their claims or benefits will not have the resources to overcome this religion-like deference to almighty algorithms that rule against their interests. Doctors may choose treatments because their hospital’s very expensive algorithms say to choose those treatments, even if the doctors might have decided otherwise. Corporate boards may defer to the recommendations of their management because management used algorithms to make strategic plans, even if those plans sound suspect on further reflection. Judges may grant or deny freedom based on algorithmic conclusions, notwithstanding their experience-based hesitancy to do so. Those arguing against algorithm-based conclusions will be at an immediate disadvantage, not because their arguments lack merit, but because they are arguing against an algorithm.

Society needs to learn quickly that the emperor is not clothed simply because he has an algorithm claiming otherwise.

Posted by Alfred Cowger

A More Mundane Example of Algorithm Design Error

Last week I posted about the errors in Presidential Election polls for this General Election. These errors occurred despite assurances by the polling industry that it had learned from its mistakes in 2016 and that the polls would be more accurate this time. I used these errors as an example of how bad algorithm designs, combined with suspect data, can result in unacceptably wrong outcomes.

This week I am providing a more mundane example, but one that nonetheless shows the limitations and risks of algorithms. Facebook uses an algorithm to insert advertisements into subscribers’ Facebook pages. Facebook advertisers pay for the opportunity to aim advertising at a class of likely consumers, thereby promising “more bang for the buck.” If my page is anything like those of Facebook subscribers in general, Facebook, despite its huge investment in advertising algorithms, is falling very short.

Recently, the vast majority of the ads on my Facebook page started to be for workout wear, shorts that are clearly designed to show off one’s assets (emphasis on the first syllable), and lounge wear for the apparently active man who actually sits around most of the day (one can wear these lounge pants for ‘twenty-four hours at a time in complete comfort’, although with apparently complete disregard for daily hygiene). I can only surmise that Facebook must have recently subtracted thirty-five years and thirty-five pounds from my personal dataset. Unfortunately, I can guarantee I cannot fit my assets into these clothing lines, and if I could, the result would be uncomfortable for me and anyone within eyesight of me. So, every one of those placements is a waste of ad money for the advertiser, and a lost opportunity for more appropriate advertisers selling what would entice me–might I suggest those offering food or cocktail of the month clubs, or clothes that fit an “Oh, Daddy!” as my teen daughter disapprovingly uses that phrase for every occasion, not an “OH DADDY!” as that phrase might be used in streaming porn.

The closest Facebook’s algorithm gets to being right is with ads for biking tours through various wine “countries.” These ads would have been appropriate a few decades ago (before Mark Zuckerberg was born, let alone formulated the idea for easing hookups at Harvard that was to become Facebook), but I now prefer my touring to be via chauffeured vehicles, so I can drink without concern for driving, let alone pedaling. This placement can be deemed fifty percent accurate. Unfortunately, as with almost any real-world application of algorithms, fifty percent (or even eighty percent) accuracy is going to prove woefully inadequate, and thus useless or even harmful to the public.

I might add that I have clear proof that I am not the only recipient of these woefully inadequate ad placements. My nephew, who just finished advanced communications training with the Marines, let me know that the ads on his Facebook page are for rentals of private jets. He thought that perhaps Facebook must know best, and that the problem was that he is the only underpaid Marine in the United States. Ironically, he is a far better candidate for the workout wear ads that I am receiving, and I, in turn, at least enjoyed a few private jet trips during my professional career.

While this ad algorithm deficiency is amusing, no one would be laughing if the algorithm were for medical treatment, and my physical conditions were being ignored because the algorithm assumed I was as healthy as a young Marine, while the young Marine was being given needless treatments because the algorithm had concluded he suffered the health problems of an aging, sedentary attorney. That is why, in the Age of Algorithms, AI should not be considered a perfect miracle worker, and why humans must always be the final and complementary step in any AI process with serious ramifications.

Posted by Alfred Cowger

The Bad Presidential Polling Results and their Ominous Implications in the Age of Algorithms

For the second Presidential election in a row, the polls were materially off. This suggests that the science of polling is regressing, not improving: pollsters are failing to create polling models that realistically reflect the voters, and the extrapolations based on those faulty models are producing faulty polls. Not only is this bad news for polling in years to come, but it is a sadly superlative example of what society faces with algorithms in the future.

Just like polls, algorithms are dependent on three fundamental elements: 1) an accurate database, 2) a design that uses, or allows the algorithm to learn, accurate correlations between the database and the conclusions to be drawn, and 3) a design that properly uses those correlations to make accurate conclusions. If any one of these elements fails, the entire algorithm will produce inaccurate, misleading or simply incoherent results.
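To make that dependency concrete, the following is a minimal, hypothetical Python sketch (the electorate, vote split, sample size and “reach bias” are all invented for illustration) of how a flawed database, here a sample that systematically under-reaches one group of voters, pushes an otherwise sensible design toward a wrong conclusion, just as a flawed polling model does:

```python
# Hypothetical illustration: a "poll" built on a skewed sample.
import random

random.seed(1)

# The true electorate: 52% of voters favor candidate A.
electorate = ["A" if random.random() < 0.52 else "B" for _ in range(100_000)]

def poll(sample_size, reach_bias=0.0):
    """Estimate candidate A's support from a sample.
    reach_bias is the chance that a B voter is missed by the pollster,
    i.e. a flaw in the database that the design never corrects for."""
    sample = []
    while len(sample) < sample_size:
        voter = random.choice(electorate)
        if voter == "B" and random.random() < reach_bias:
            continue  # this voter never makes it into the database
        sample.append(voter)
    return sample.count("A") / sample_size

print(f"Representative sample: A = {poll(2000):.1%}")        # near the true 52%
print(f"Skewed sample:         A = {poll(2000, 0.15):.1%}")  # overstates A
```

The design here is not malicious; it simply has no way to know that one group was never fully captured, so its otherwise correct arithmetic still yields a misleading conclusion.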

The bad polling results have ominous implications for what society faces in the Age of Algorithms. Just as bad polling models suggested that Biden might actually win my home state of Ohio, a bad algorithm might decide, for example, that certain convicted criminals are more likely to harm society, and thus instruct judges to increase those persons’ sentences. However, when a poll is bad, the election results will eventually show that the poll was erroneous, and the poll will not have caused fundamental harm to the election, except perhaps to the pride of commentators who were paid to spout the poll results like an electoral oracle. In contrast, after a criminal is given a longer sentence by a faulty algorithm, there is no subsequent test by which that sentencing will be proven wrong. The harm caused to the prisoner will mean an irreparable loss of additional and unnecessary years in prison, while criminals who perhaps should be in prison longer will be freed too soon to commit more crimes.

So the bad polling of the last two elections should serve as an omen for what society faces in the Age of Algorithms.

Posted by Alfred Cowger

Due to technical difficulties….

Due to a combination of several intermittent days without internet, as well as a glitch in the upgrade of my blog’s programming that knocked me offline for several days, I have been unable to update this blog until today. But that’s not a problem, because what could possibly have happened between October 10th and now that would give cause for legal analysis? Oy….

Posted by Alfred Cowger

Capital Punishment–Missing the Forest for the Trees

The Supreme Court has agreed to take on a case out of Kentucky, where a mentally disabled man was sentenced to death. In 2002, the Supreme Court ruled in Atkins v. Virginia, 536 U.S. 304 (2002), that executing a criminal with mental disabilities violates the ban on “cruel and unusual punishment” under the Eighth Amendment. Now, the Court has agreed to hear arguments about whether a mentally disabled person can waive a claim of mental disability, Kentucky v. White, Case No. 20-240.

Let two implications of the Court’s decision to hear arguments on this case sink in. First, the Court had based its Atkins decision on the theory that persons with mental disabilities would not be deterred from committing a capital crime, since their limited mental capacity means they would not understand the ramifications of being subject to the death penalty. Second, the Court found that society should only demand retribution from those who understood the seriousness of their heinous crimes, and persons with mental disabilities would lack that understanding. If an execution will not further the goals of retribution and deterrence, then it is to be deemed cruel and unusual punishment. That raises the question: if the criminal cannot understand the seriousness of his or her crime, how could the criminal have sufficient mental capacity to waive this defense? The Court should be embarrassed to even consider the Catch-22 posed by prosecutors, i.e. that a defendant can be so mentally incapacitated that the defendant should not be executed, but that same defendant must have sufficient mental capacity to proactively waive the right not to be executed.

The second implication of this case is how barbaric it is that the United States is still trying to defend capital punishment. Executing someone is so obviously cruel and unusual that the Court must create limited exceptions and attenuated justifications to the Eighth Amendment’s proscriptions in order to find any execution constitutional in the 21st Century. That this issue needs a Supreme Court decision shows how picayune capital punishment standards have become. This case is nothing more than an attempt by prosecutors to create a “gotcha”: if you are too mentally disabled to know your rights, and you end up with an incompetent attorney because that is all you could afford on your disability benefits, then prosecutors will have a means to kill you notwithstanding the Eighth Amendment. This would be like claiming you are an environmentalist tree hugger, but then developing a sufficiently long list of reasons why every tree in the forest can be cut down. The Court should stop trying to rationalize the killing of any person by the government via the exceptions it is asked to craft, and simply find that no such exception creates a justifiable basis for circumventing the clear language of the Eighth Amendment. Capital punishment should once and for all be deemed unconstitutional, whether one is old or young, intelligent or impaired, sane or insane, wealthy or poor, White or a Person of Color, etc. etc.

Posted by Alfred Cowger

Obtaining Search Results Without a Warrant—the Patriot Act Strikes Again

Few people noticed in May when the Senate failed by one vote to end the power of the government to search citizens’ internet search records without a warrant. This provision was a part of the Patriot Act, passed in the wake of the 9/11 fears about terrorists plotting under the noses of U.S. security officials. In 2001, few citizens used the internet as they regularly do today, and thus legislators had no idea of the unfettered intrusion into everyday life this provision could wreak. Unfortunately, almost two decades later, when legislators should know better, this blatant violation of the Fourth Amendment was re-authorized rather than dumped in the historical dustpan to join the Japanese internment legislation, Jim Crow laws, and the laws prohibiting protests by “socialist” labor activists at the turn of the 20th Century.
U.S. history has shown us that the worst time to legislate security measures is in the face of a security threat. During times of national fear, legislators trample constitutional rights the way a human stampede tramples an elderly person who has fallen in its path. The Patriot Act passed in the wake of 9/11 is the most recent example of civil rights succumbing to terror—and not just to terrorists. Few people realize that one of its provisions empowers the government to view a citizen’s search results without a warrant.
In other words, the average citizen’s daily search of the web for information ranging from financial advice to help with mental health issues, not to mention everyday shopping and socializing, is free for the grabbing by the government. When algorithms are involved, there is no limit to what could be discovered about a person and, worse, how those discoveries could be twisted against someone the government wants to look bad. On one side, every search one does can, via search engine algorithms, lead in directions that the searcher never intended and does not want. On the other side, every search result can, via government algorithms, result in categorizations about a person, and thus conclusions about a person, that may bear little resemblance to that person, but are completely “legitimate” given the design of the algorithm. In fact, given how victims of police brutality have regularly had their private lives, prior records and social contacts smeared in social media by government officials as a defense tactic against those victims’ lawsuits, one should expect government officials to regularly use algorithms to find and twist an individual’s internet searches into dirt, and then to use social media algorithms to spread that dirt quickly and anonymously. Those same algorithms can then be used to expand smear campaigns to virtually every social contact a victim might have via the internet.
I can use myself as an example of what could happen. When I was General Counsel of a mass market perfume and cosmetic company, I regularly did internet searches of the company’s trademarks to find instances of trademark infringement and counterfeiting of goods. My searches for “English Leather” turned up so many NSFW sites that I had to be exempted from our IT Department’s blocks that prevented the misuse of the company’s network for streaming porn. What if I ended up on an “enemies list” of some future administration because of my support of causes and candidates diametrically opposed to that administration?
Without my knowledge and without any limits to the search, the government could unleash an algorithm-based review of all my searches. My completely innocent and rational searches done to protect my employer’s trademarks could easily be used to make me look like someone addicted to sites run by British dominatrixes and BDSM Masters. If those sites happened to use actors who were underage under U.S. law, the fact my trademark search included those sites, even if I had no intention of clicking on the listed links, could expose me to public ridicule and prosecution by government attorneys. Anyone with whom I regularly associate could then be smeared as someone who is a friend of a porn addict and pedophile. My career and private life, as well as those of my friends and business associates, could be ruined as a result of a warrantless search algorithm and a social-media marketing algorithm.
What this demonstrates is that “security” laws passed on emotion rather than reason are likely to have even more heinous ramifications in the Age of Algorithms. In the days of paper records, limitless searches were at least practically constrained, because voluminous paper records in multiple locations were harder to find and review. Moreover, the government’s misuse of those records might be stopped before the spread became too wide or permanently engrained in the minds of the citizenry. Now, given the ease with which one’s private life can be laid bare, and the ease with which the government can instantaneously and permanently spread mistruths worldwide via the internet, an unfettered grant of governmental intrusion in the name of security will be far more destructive to individuals’ rights, and thus their lives. If the Fourth Amendment is to survive the Age of Algorithms, legislators and judges should be even more skeptical of police demands for power to search the internet without limitation, and should choose the protection of individual rights over the expansion of police powers.

Posted by Alfred Cowger

The British Secondary School Test Debacle—When Algorithms Are Designed to Churn Out a Result, not a Rational Conclusion

Few Americans have heard of the standardized test disaster that occurred this year in Britain and that has shaken both the British education system and the Johnson government. This disaster should be considered a warning for any entity, in particular a government, that wants to use an algorithm to justify a result rather than to make objective determinations.
This debacle started with a seemingly beneficial goal in mind. The British education system had long been accused of “grade creep,” such that more top marks were being given out than was warranted by the quality of the students receiving them. To make matters worse, that creep seemed to favor students of the upper classes who attend the most posh public (i.e. private, to the confusion of the average U.S. citizen) high schools. After all, parents pay good money to ensure that, by attending those exclusive secondary schools, their children will be more likely to be admitted to the best universities in the British Isles.
In response to these criticisms, the British government hired designers to develop an algorithm that would prevent grade creep. The algorithm would determine the percentage of students that “should” fall within each grade range. Furthermore, each student’s expected grade as given by a teacher would be evaluated against both historical results for students with similar schooling and expected results for all students taking the tests that year, with those historical and expected results themselves based on algorithmic analysis. If the final grade given by a teacher deviated from the expected grade, the algorithm could override the teacher’s grade and raise or lower the student’s score.
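The following is a deliberately simplified, hypothetical Python sketch of that standardization idea (the real Ofqual model was far more intricate, and the grade scale, class size and school histories here are invented): a student’s teacher-assessed grade is pulled down to whatever grade the same rank position earned in the school’s historical distribution.

```python
# Hypothetical sketch: standardizing a teacher's grade against school history.

def standardize(teacher_grade, student_rank, class_size, historical_grades):
    """Cap the teacher's grade at the grade occupying the same rank position
    in the school's historical results (sorted best to worst)."""
    history = sorted(historical_grades, reverse=True)
    position = int((student_rank - 1) / class_size * len(history))
    return min(teacher_grade, history[position])

# A top student (teacher predicts the highest mark, coded 6) ranked first in a
# class of 20 at a school whose past cohorts rarely scored above a 3.
low_scoring_school_history = [3, 3, 2, 2, 2, 1, 1, 1, 0, 0] * 2
print(standardize(6, 1, 20, low_scoring_school_history))  # prints 3: downgraded

# The same student at a school whose past cohorts were full of top marks.
elite_school_history = [6, 6, 5, 5, 5, 4, 4, 3, 3, 2] * 2
print(standardize(6, 1, 20, elite_school_history))        # prints 6: unchanged
```

The student’s own achievement never changes the outcome; the school’s history does, which is exactly the pattern of harm described next.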
The resulting re-grading was so disastrous that the government had no choice but to scrap the entire plan and fall back on the teachers’ initial scores. Students who came from schools with historically low test scores found their grades lowered, notwithstanding their personal achievement. Students from schools with historically high test scores, particularly those in small classes (in other words, the upper-class private schools), found their scores revised upwards. The alterations were so obviously unjust, and affected so many students striving to perform better than society assumed of them, that the results were deemed unfair and biased, notwithstanding the fact that these algorithms were incredibly intricate in design and were meant to be tools for overcoming unfairness and bias. In fact, the algorithm’s creators wrote a 317-page report explaining just how fair and objective the algorithm results would be. See Will Bedingfield, Everything that went wrong with the botched A-levels algorithm, WIRED (Aug 19, 2020), https://www.wired.co.uk/article/alevel-exam-algorithm.
So what went wrong? The complicated answer is that many problems were to be expected, given the complexity of the algorithm. The simple answer is that this outcome is a prime example of what happens when governments design and use algorithms to reach a desired outcome rather than a proper one. Moreover, it demonstrates what happens when algorithms use a bell curve to define outcomes: those who have traditionally fallen outside the norms that establish the bell curve are the ones most detrimentally affected by the outcomes the curve forces. Finally, this proves that algorithms will go wrong. Even when a majority of an algorithm’s determinations are accurate, no algorithm will be perfectly accurate, and when thousands of people, like the British graduating student population, are affected, the number harmed by the inevitable inaccurate results can likewise be in the thousands, even with the best algorithm; an algorithm that is 98 percent accurate, applied to hundreds of thousands of grades, still misgrades thousands of students. As this debacle shows, algorithms that are merely “good” will still leave too many individuals actually harmed.
Finally, one must remember what could have happened if government officials had not acted. How would the average student have been able to protest his or her wrongful treatment? Students could never prove how the algorithm harmed them, or perhaps even whether they were among the individuals harmed, because the process was so non-transparent. The government, in fact, could easily establish that for “most” students the results were acceptably accurate, and it would inevitably be buttressed by experts paid by the algorithm designer to argue that the algorithm was acceptable. Students would face discrimination, as well as harm from arbitrary and unreasonable results, that would clearly be unconstitutional, except that the students would not have the resources to meet their burden of proof. In fact, even with substantial resources, given the Black Box nature of algorithms, the students still would never be able to meet that burden, meaning the government’s use of the algorithm was sure to nullify any student’s due process rights. This debacle is a foreshadowing of both the harm that could befall recipients of government benefits and determinations in the Age of Algorithms, and the inevitable deprivation of constitutional rights that will preclude those harmed from ever being made whole.

Posted by Alfred Cowger

Defeating the Fair Housing Act with Algorithms, as Proposed in New Trump Regulations

The Trump Administration is proposing new regulations under the Fair Housing Act that will turn the worst aspects of algorithms against plaintiffs who would otherwise have a case of housing discrimination under that Act. Currently, a plaintiff who was denied a housing loan, or whose offer to buy a house or rent an apartment was denied, can prove a violation of the Fair Housing Act by showing that the lender’s or property owner’s pattern of denials resulted in a “disparate impact” against minorities or women. Thus, a plaintiff need not prove the defendant harbored an intent to discriminate, but can let the results of the defendant’s actions, in essence, speak for themselves. As reported by David Gershgorn in a OneZero post on Medium, https://onezero.medium.com/a-proposed-trump-administration-rule-could-let-lenders-discriminate-through-a-i-2f9a729b0f3c, HUD wants to adopt regulations that would allow discriminators to avoid evidence of disparate impact simply by using a well-designed algorithm as the tool to discriminate.
In my book (see listing under “Publications”), I warn that algorithms could quickly become a tool to rationalize all sorts of discriminatory actions, such that plaintiffs will be unable to prove discrimination because a defendant employs an algorithm to undertake that discrimination. Algorithms work in what Prof. Frank Pasquale has called a “Black Box”. No one can be sure what data an algorithm has used to reach its conclusions, nor can anyone know the process by which an algorithm used that data to reach its conclusion. In fact, the more sophisticated the algorithm, the more its “machine learning” capabilities will obfuscate how it reached its conclusions, since it will have taught itself the most expedient way to reach those conclusions, regardless of what the algorithm’s designer initially intended. To make matters worse, the databases used by algorithms are often infected with decades of discriminatory results, and algorithms have a nasty tendency to “learn” of past discrimination and actually employ that discrimination as an “efficient” way to reach a conclusion. After all, what is easier for an algorithm than denying a housing loan the moment the algorithm determines an applicant is a minority, a woman or a resident of an area with higher historical rates of mortgage defaults?
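A deliberately stripped-down, hypothetical Python sketch of that mechanism follows (the zip codes, default rates and applicants are invented): race is never an input, yet a single zip-code feature that carries the history of redlining reproduces the discriminatory outcome under an apparently “objective” rule.

```python
# Hypothetical illustration of proxy discrimination in an "objective" lending rule.

# Invented historical default rates by zip code; the higher rates reflect
# decades of redlining and disinvestment, not individual creditworthiness.
historical_default_rate = {"44101": 0.18, "44102": 0.17, "44120": 0.04, "44122": 0.03}

# Invented demographics, used here only to audit outcomes, never given to the rule.
majority_minority_zip = {"44101": True, "44102": True, "44120": False, "44122": False}

def approve(applicant):
    """'Objective' rule learned from history: deny any applicant whose zip code
    has a past default rate above 10%, regardless of the applicant's own record."""
    return historical_default_rate[applicant["zip"]] <= 0.10

applicants = [
    {"name": "Applicant A", "zip": "44101", "income": 95_000},
    {"name": "Applicant B", "zip": "44122", "income": 60_000},
]

for a in applicants:
    group = "majority-minority zip" if majority_minority_zip[a["zip"]] else "other zip"
    print(a["name"], "|", group, "|", "approved" if approve(a) else "denied")
# The higher-income applicant in the redlined zip code is denied; the other is
# approved, and the disparity appears nowhere in the rule's stated inputs.
```

Because the protected characteristic never appears in the code or in the data the rule consumes, an after-the-fact expert can truthfully call the criteria “objective” even though the outcome matches intentional redlining.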
HUD should be working on regulations to prevent algorithms from becoming 21st Century tools of red-lining. Unfortunately, it is doing the exact opposite. HUD is proposing that disparate impact claims can be defeated by discriminating lenders and property owners simply by using algorithms to make the discriminatory decisions for those lenders and owners. The regulations would create five elements that would be defenses against disparate impact claims. One of those elements would be that the algorithm was designed to use “objective” criteria to reach its conclusions. Another would allow the algorithm user to simply hire an expert to opine that the algorithm seems to be working objectively. Both elements are simply masks by which algorithm-based discrimination is already occurring, and thus should be subject to regulations against their use, not regulations supporting their use.
As my book details, given the tendency of algorithms to discriminate, along with the Black Box nature of algorithms that precludes proving via direct evidence that an algorithm’s process was discriminatory, algorithms are the perfect tool for obfuscating discrimination otherwise demonstrated by statistical evidence in disparate impact claims. In fact, this is just the latest example of governments hiding behind algorithms to discriminate, in areas ranging from criminal sentencing to child welfare investigations. The algorithm industry is quite happy to provide the experts to testify how objective those algorithms “really” are in the face of clear evidence of disparate impact, which raises the obvious question: if a plaintiff cannot determine how an algorithm reached its discriminatory result, how can those experts claim a basis for their opinions when they are likewise clueless about how the algorithm reached that result? If these regulations are enacted, they could be the start of a destructive use of algorithms to render any disparate impact claim impotent.

Posted by Alfred Cowger