Accuracy of algorithms

The Use of Algorithms to Substantiate Anything–Including Treason

So much could be written about how algorithms directly contributed to those historic, awful events of January 6, 2021. Marketing algorithms were almost surely used to identify persons who might be interested in joining the coup planning, and to aim claims of election fraud directly at those who would be most swayed by them. Algorithms SHOULD have been used by law enforcement prior to the coup attempt to track the social media postings about what would happen on January 6th, though it is apparent that whatever results were discoverable via algorithms were ignored by senior US Capitol security officials.

However, what I want to focus on is how the term “algorithm” itself was used to make falsehoods seem true. Algorithms, according to the affidavits of alleged experts, proved pervasive election fraud. For example, algorithms were used to derive expected election outcomes based on early returns on Election Night. When the final vote counts did not result in a Trump victory, these algorithms concluded the only explanation was that, as the evening’s vote count went on, Trump ballots were destroyed and Biden ballots were stuffed by some unexplainable means. At least one “expert” used algorithm-driven comparisons between the 2016 and 2020 elections to “prove” that vote fraud had to have been systemic. Another went so far as to claim that the voter fraud was so well done it could not be discerned from evidence, but only from algorithmic analysis. Dozens of courts quickly rejected these analyses, concluding that the algorithm designs, the databases being used, and/or the conclusions derived by the so-called experts were all obviously and fundamentally flawed, and thus proved nothing. Notwithstanding these rulings, the word “algorithm” was sufficient for Trump supporters to ignore the decisions of all these courts without a moment’s question.

As the coup was unfolding, I began reading nearly verbatim postings by Trump supporters claiming that those attacking the Capitol and its police were actually Antifa and Black Lives Matter infiltrators. Trump’s legions cited a Washington Times report stating that algorithm-based facial recognition technology had proven known Antifa members were at the Capitol. Within twenty-four hours, the company that the Times reported had undertaken the analysis announced this claim was a complete falsehood. Indeed, that company had found multiple instances of right-wingers who had been at previous violent street actions appearing in the halls of the Capitol after it was breached. Once again, however, Trump supporters wanting to justify the actions of the Capitol rioters latched on to the claim that algorithms had proven the presence of Antifa, and they were unwavering in repeating this assertion well after the coup was suppressed and the claim was refuted completely. Even the videos of Trump flag-waving rioters screaming “Hang Pelosi” and “Hang Pence”, vocalizing their intent to stop the electoral vote certification and harm elected officials who stood in their way, were not enough to convince Trump supporters that Antifa and BLM members were not the rioters. Algorithms had already pronounced the “truth”, and that was enough for Trump’s supporters.

Until society becomes used to, and perhaps healthily jaded by, the application of algorithms, algorithms will take on almost mystical properties. Studies have already shown that humans tend to presume that if a technology is complicated, it must be right. The less a human understands the limitations of algorithms, the more likely that human is to believe any claim allegedly based on an algorithm. Even those who should know better than to believe a claim simply because it has a high-tech basis, such as judges, doctors and business professionals, tend to initially trust algorithms over humans. They often do not alter those initial opinions even when their own intuition and experience cause them to question an algorithmic conclusion.

History will eventually adjudge last week’s coup attempt to be one of the most extreme and dangerous examples of using the term “algorithm” to prove falsehoods. However, it should also be a lesson for what could happen in situations with less widespread, yet still serious, implications. Individuals harmed by algorithm-based conclusions denying their claims or benefits will not have the resources to overcome this religion-like deference to almighty algorithms that rule against their interests. Doctors may choose treatments because their hospital’s very expensive algorithms say to choose those treatments, even if the doctors might otherwise have decided differently. Corporate boards may defer to the recommendations of their management because the management used algorithms to make strategic plans, even if those plans sound suspect on further reflection and insight. Judges may grant or deny freedom based on algorithmic conclusions, notwithstanding their experience-based hesitancy to do so. Those arguing against algorithm-based conclusions will be at an immediate disadvantage, not because their arguments lack merit, but because they are arguing against an algorithm.

Society needs to learn quickly that the emperor is not clothed simply because he has an algorithm claiming otherwise.

Posted by Alfred Cowger

A More Mundane Example of Algorithm Design Error

Last week I posted about the errors in Presidential Election polls for this General Election. These errors occurred despite exhortations by the polling industry that it had learned from its mistakes in 2016, and that the polls would be more accurate this time. I used these errors as an example of how bad algorithm designs, combined with suspect data, can result in unacceptably wrong outcomes.

This week I am providing a more mundane example, but nonetheless one that shows the limitations and risks of algorithms. Facebook uses an algorithm to insert advertisements into subscribers’ Facebook pages. Facebook advertisers pay for the opportunity to aim advertising at a class of likely consumers, with the promise of “more bang for the buck”. If my page is anything like those of Facebook subscribers in general, Facebook, despite its huge investment in advertising algorithms, is falling very short.

Recently, the vast majority of the ads on my Facebook page started to be for workout wear, shorts that are clearly designed to show off one’s assets (emphasis on the first syllable), and lounge wear for the apparently active man who actually sits around most of the day (one can wear these lounge pants for ‘twenty-four hours at a time in complete comfort’, although with apparently complete disregard for daily hygiene). I can only surmise that Facebook must have recently subtracted thirty-five years and thirty-five pounds from my personal dataset. Unfortunately, I can guarantee I cannot fit my assets into these clothing lines, and if I could, the result would be uncomfortable for me and anyone within eyesight of me. So every one of those postings is a waste of ad money for the advertiser, and a lost opportunity for more appropriate advertisers selling what would actually entice me–might I suggest food- or cocktail-of-the-month clubs, or clothes that fit an “Oh, Daddy!” (as my teen daughter disapprovingly uses that phrase for every occasion), not an “OH DADDY!” (as that phrase might be used in streaming porn).

The closest Facebook’s algorithm gets to being right is with ads for biking tours through various wine “countries”. These ads would have been appropriate a few decades ago (before Mark Zuckerberg was born, let alone had formulated the idea for easing hookups at Harvard that was to become Facebook), but I now prefer my touring to be via chauffeured vehicles, so I can drink without concern for driving, let alone pedaling. This placement can be deemed fifty percent accurate. Unfortunately, as with almost any real-world application of algorithms, fifty percent (or even eighty percent) accuracy is going to prove woefully inadequate, and thus useless or even harmful to the public.

I might add that I have clear proof that I am not the only recipient of these woefully inadequate ad placements. My nephew, who just finished advanced communications training with the Marines, let me know that ads on his Facebook page are for rentals of private jets. He thought that perhaps Facebook must know best, and that the problem was he is the only underpaid Marine in the United States. Ironically, he is a far better candidate for the workout wear ads that I am receiving, and I, in turn, at least enjoyed a few private jet trips in my professional career.

While this ad algorithm deficiency is amusing, no one would be laughing if the algorithm were for medical treatment, and my physical conditions were being ignored because the algorithm assumed I was as healthy as a young Marine, while the young Marine was being given needless treatments because the algorithm had concluded he suffered the health problems of an aging, sedentary attorney. That is why, in the Age of Algorithms, AI should not be considered a perfect miracle worker, and why humans must always be the final and complementary step in any AI process with serious ramifications.

Posted by Alfred Cowger

The Bad Presidential Polling Results and their Ominous Implications in the Age of Algorithms

For the second Presidential election in a row, the polls were materially off. This suggests that the science of polling is regressing, not improving: pollsters are failing to create polling models that realistically reflect the electorate, and extrapolations based on those faulty models are producing faulty polls. Not only is this bad news for polling in years to come, but it is a sadly apt example of what society faces with algorithms in the future.

Just like polls, algorithms are dependent on three fundamental elements: 1) an accurate database, 2) a design that uses, or allows the algorithm to learn, accurate correlations between the database and the conclusions to be drawn, and 3) a design that properly uses those correlations to reach accurate conclusions. If any one of these elements fails, the algorithm will produce inaccurate, misleading or simply incoherent results.
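A deliberately simple sketch can show how a failure in the first element corrupts everything downstream. In the hypothetical below (all groups, numbers and weights are invented for illustration, not real polling data), the extrapolation logic is perfectly sound, yet the forecast is wrong because the sample over-represents one group and the model’s assumed turnout weights do not match reality:

```python
# Hypothetical illustration: a "poll" extrapolates a statewide result
# from a sample. The math in predict_share is correct; the error comes
# entirely from a skewed sample and wrong assumed weights.

def predict_share(sample, weights):
    """Weighted estimate of candidate A's vote share.

    sample  - list of (group, supports_A) observations, supports_A in {0, 1}
    weights - assumed share of each group in the real electorate
    """
    by_group = {}
    for group, supports_a in sample:
        totals = by_group.setdefault(group, [0, 0])  # [supporters, count]
        totals[0] += supports_a
        totals[1] += 1
    # Weight each group's observed support rate by its assumed turnout share.
    return sum(weights[g] * s / n for g, (s, n) in by_group.items())

# Element 1 flaw: the sample skews urban (80 of 100 respondents).
sample = ([("urban", 1)] * 60 + [("urban", 0)] * 20 +
          [("rural", 1)] * 5 + [("rural", 0)] * 15)

assumed_weights = {"urban": 0.5, "rural": 0.5}    # the model's belief
actual_weights = {"urban": 0.35, "rural": 0.65}   # hypothetical reality

print(predict_share(sample, assumed_weights))  # confident, and wrong
print(predict_share(sample, actual_weights))   # what correct weights would give
```

The forecast swings markedly depending on which weights are used, even though the same arithmetic runs in both cases; a reader of only the headline number would never know which element failed.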

The bad polling results have ominous implications for what society faces in the Age of Algorithms. Just as bad polling models suggested that Biden might actually win my home state of Ohio, a bad algorithm might decide, for example, that some convicted criminals are more likely to harm society, and thus instruct judges to increase those persons’ sentences. However, when a poll is bad, the election results will eventually show the poll was erroneous, and the poll will not have caused fundamental harm to the election, except perhaps to the pride of commentators who were paid to spout the poll results like an electoral oracle. In contrast, after a criminal is given a longer sentence by a faulty algorithm, there is no subsequent test by which that sentence will be proven wrong; the harm to the prisoner will mean an irreparable loss of additional and unnecessary years in prison, while criminals who perhaps should be imprisoned longer will be freed too soon to commit more crimes.

So the bad polling of the last two elections should serve as an omen for what society faces in the Age of Algorithms.

Posted by Alfred Cowger

The British Secondary School Test Debacle—When Algorithms Are Designed to Churn Out a Result, not a Rational Conclusion

Few Americans have heard of the standardized test disaster that occurred this year in Britain and shook both the British education system and the Johnson government. This disaster should be considered a warning about what happens when any entity, in particular a government, uses an algorithm to justify a result rather than to make objective determinations.
This debacle started with a seemingly beneficial result in mind. The British education system had long been accused of “grade creep”, such that more top marks were being given out than was warranted by the quality of the students receiving them. To make matters worse, that creep seemed to favor students of the upper classes who attend the most posh public (i.e. private, to the confusion of the average U.S. citizen) high schools. After all, parents pay good money to ensure that, by attending those exclusive secondary schools, their children will be more likely to be admitted to the best universities in the British Isles.
In response to these criticisms, the British government hired designers to develop an algorithm that would prevent grade creep. The algorithm would determine the percentage of students that “should” fall within each grade range. Furthermore, each student’s expected grade as given by a teacher would be evaluated using both historical results for students with similar schooling and expected results for all students taking the tests that year. Those historical and expected results were, in turn, based on algorithmic analysis. If a student’s final grade as given by a teacher deviated from the expected grade, the algorithm could override the teacher’s grade and raise or lower the student’s score.
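The core mechanism can be sketched in a few lines. This is a deliberately simplified stand-in for the moderation step described above (the real Ofqual model was far more elaborate, and the function name and numbers here are invented): each school’s teacher-assessed scores keep their rank order but are overwritten by the scores the school has historically produced.

```python
# Simplified sketch of distribution-based grade "moderation": students
# keep their rank within the school, but their marks are replaced by the
# marks the school historically produced -- so a standout student at a
# historically weak school is pulled down regardless of achievement.

def moderate(teacher_scores, historical_scores):
    """Map ranked teacher scores onto the school's historical scores.

    Assumes len(historical_scores) >= len(teacher_scores).
    """
    # Student indices, best teacher-assessed score first.
    ranked = sorted(range(len(teacher_scores)),
                    key=lambda i: teacher_scores[i], reverse=True)
    hist = sorted(historical_scores, reverse=True)
    moderated = [0] * len(teacher_scores)
    for rank, idx in enumerate(ranked):
        moderated[idx] = hist[rank]  # overwrite with historical mark
    return moderated

# A strong student (90) at a school whose past top mark was 70:
print(moderate([90, 55, 48], [70, 60, 40]))
```

In this sketch the 90 becomes a 70 purely because of where the student went to school, which is exactly the unfairness that forced the government to abandon the scheme.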
The resulting re-grading was so disastrous that the government had no choice but to scrap the entire plan and fall back on the teachers’ initial test scores. Students who came from schools with historically low test scores found their grades lowered, notwithstanding their personal achievement. Students from schools with historically high test scores, particularly those in small classes (in other words, the upper-class private schools), found their scores revised upwards. The alterations were so clearly unfair, and affected so many students striving to perform better than society assumed of them, that the results were deemed biased, notwithstanding that these algorithms were incredibly intricate in design and were intended to be tools for overcoming unfairness and bias. In fact, the algorithm creators wrote a 317-page report explaining just how fair and objective the algorithm results would be. See Will Bedingfield, Everything that went wrong with the botched A-levels algorithm, WIRED (Aug 19, 2020), https://www.wired.co.uk/article/alevel-exam-algorithm.
So what went wrong? The complicated answer is that the many problems were to be expected, given the complexity of the algorithm. The simple answer is that this outcome is a prime example of a government designing and using an algorithm to reach a desired outcome, rather than a proper one. Moreover, it demonstrates what happens when algorithms use a bell curve to define outcomes: the persons who have traditionally fallen outside the norms that establish the bell curve are those most detrimentally affected by the forced outcomes the curve requires. Finally, it proves that algorithms will go wrong. Even when a majority of an algorithm’s determinations are accurate, no algorithm will be perfectly accurate. When thousands of people, like the British graduating student population, are affected by an algorithm, the number harmed by the inevitable inaccurate results could likewise be in the thousands, even with the best algorithm. As this debacle shows, algorithms that are merely “good” will leave too many individuals actually harmed.
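The scale problem in that last point can be made concrete with some back-of-the-envelope arithmetic; the cohort size and accuracy rate below are assumed round figures for illustration, not official statistics:

```python
# Hypothetical arithmetic: even a highly accurate algorithm, applied to
# a cohort the size of a national graduating class, mis-grades a small
# town's worth of students.
cohort_size = 700_000   # assumed number of graded entries
accuracy = 0.98         # assumed share of correct determinations
mis_graded = round(cohort_size * (1 - accuracy))
print(mis_graded)       # thousands of students, each really harmed
```

At these assumed figures, a 2 percent error rate means roughly 14,000 wrong grades, and each one belongs to a real student with a real future at stake.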
Finally, one must remember what could have happened if government officials had not acted. How would the average student have been able to protest his or her wrongful treatment? Because the process was so non-transparent, students could never have proven how the algorithm harmed them, or perhaps even whether they were among the individuals harmed. The government, in fact, could easily have established that for “most” students the results were acceptably accurate, and it would inevitably have been buttressed by experts paid by the algorithm designer to argue the algorithm was acceptable. Students would have faced discrimination, as well as arbitrary and unreasonable results; such treatment would clearly have been unconstitutional, but the students would not have had the resources to meet their burden of proof. In fact, even with substantial resources, given the Black Box nature of algorithms, the students still could never have met that burden, meaning that the government’s use of the algorithms was sure to preclude any vindication of a student’s due process rights. This debacle is a foreshadowing of both the harm that could befall recipients of government benefits and determinations in the Age of Algorithms, and the inevitable deprivation of constitutional rights that will preclude those harmed from ever being made whole.

Posted by Alfred Cowger