Algorithm Design Errors

A More Mundane Example of Algorithm Design Error

Last week I posted about the errors in the Presidential Election polls for this General Election. These errors occurred despite assurances from the polling industry that it had learned from its mistakes in 2016 and that the polls would be more accurate this time. I used these errors as an example of how bad algorithm designs, combined with suspect data, can result in unacceptably wrong outcomes.

This week I am providing a more mundane example, but one that nonetheless shows the limitations and risks of algorithms. Facebook uses an algorithm to insert advertisements into subscribers’ Facebook pages. Advertisers pay for the opportunity to aim their advertising at a class of likely consumers, on the promise of “more bang for the buck”. If my page is anything like those of Facebook subscribers in general, Facebook, despite its huge investment in advertising algorithms, is falling very short.

Recently, the vast majority of the ads on my Facebook page started to be for workout wear, shorts that are clearly designed to show off one’s assets (emphasis on the first syllable), and lounge wear for the apparently active man who actually sits around most of the day (one can wear these lounge pants for ‘twenty-four hours at a time in complete comfort’, although with apparently complete disregard for daily hygiene). I can only surmise that Facebook must have recently subtracted thirty-five years and thirty-five pounds from my personal dataset. Unfortunately, I can guarantee that I cannot fit my assets into these clothing lines, and if I could, the result would be uncomfortable for me and for anyone within eyesight of me. So every one of those placements is a waste of ad money for the advertiser, and a lost opportunity for more appropriate advertisers selling what would actually entice me. Might I suggest those offering food or cocktail-of-the-month clubs, or clothes that fit an “Oh, Daddy!” as my teen daughter disapprovingly uses that phrase for every occasion, not an “OH DADDY!” as that phrase might be used in streaming porn.

The closest that Facebook’s algorithm comes to being right is with ads for biking tours through various wine “countries”. These ads would have been appropriate a few decades ago (before Mark Zuckerberg was born, let alone had formulated the idea for easing hookups at Harvard that was to become Facebook), but I now prefer my touring to be via chauffeured vehicles, so I can drink without concern for driving, let alone pedaling. This placement can be deemed fifty percent accurate. Unfortunately, as with almost any real-world application of algorithms, fifty percent (or even eighty percent) accuracy is going to prove woefully inadequate, and thus useless or even harmful to the public.

I might add that I have clear proof that I am not the only recipient of these woefully misdirected ad placements. My nephew, who just finished advanced communications training with the Marines, let me know that the ads on his Facebook page are for rentals of private jets. He thought that perhaps Facebook must know best, and that the problem was that he is the only underpaid Marine in the United States. Ironically, he is a far better candidate for the workout wear ads that I am receiving, and I, in turn, at least enjoyed a few private jet trips in my professional career.

While this ad algorithm deficiency is amusing, no one would be laughing if the algorithm were for medical treatment, and my physical conditions were being ignored because the algorithm assumed I was as healthy as a young Marine, while the young Marine was being given needless treatments because the algorithm had concluded he suffered the health problems of an aging, sedentary attorney. That is why, in the Age of Algorithms, AI should not be considered a perfect miracle worker, and why humans must always be the final and complementary step in any AI process with serious ramifications.

Posted by Alfred Cowger

The Bad Presidential Polling Results and their Ominous Implications in the Age of Algorithms

For the second Presidential election in a row, the polls were materially off. This suggests that the science of polling is regressing, not improving: pollsters are failing to create polling models that realistically reflect the voters, and extrapolations based on those faulty models are producing faulty polls. Not only is this bad news for polling in years to come, but it is also a sadly superlative example of what society faces with algorithms in the future.

Just like polls, algorithms depend on three fundamental elements: 1) an accurate database, 2) a design that uses, or allows the algorithm to learn, accurate correlations between the database and the conclusions to be drawn by the algorithm, and 3) a design that properly uses those correlations to reach accurate conclusions. If any one of these elements fails, the entire algorithm will produce inaccurate, misleading, or simply incoherent results.
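
To make the failure mode concrete, here is a minimal sketch in Python, using entirely hypothetical numbers rather than any actual poll’s model, of how an error in just the first element (the database of turnout assumptions) corrupts an otherwise sound calculation:

```python
# Hypothetical illustration: the same raw responses, run through a correct
# and an incorrect assumption about who actually turns out to vote.
responses = {"urban": 0.60, "suburban": 0.52, "rural": 0.38}  # share favoring Candidate A

assumed_turnout = {"urban": 0.40, "suburban": 0.35, "rural": 0.25}  # pollster's database
actual_turnout = {"urban": 0.33, "suburban": 0.34, "rural": 0.33}   # what really happened

def projected_support(turnout):
    # Elements 2 and 3: correlate group membership with support, then
    # combine those correlations into a single conclusion.
    return sum(responses[group] * turnout[group] for group in responses)

print(f"With the assumed electorate: {projected_support(assumed_turnout):.1%}")  # ~51.7%
print(f"With the actual electorate:  {projected_support(actual_turnout):.1%}")   # ~50.0%
```

In this toy example the arithmetic is flawless, yet a comfortable projected lead becomes a coin flip simply because the underlying database misjudged the electorate.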

The bad polling results have ominous implications for what society faces in the Age of Algorithms. Just as bad polling models suggested that Biden might actually win my home state of Ohio, a bad algorithm might decide, for example, that certain convicted criminals are more likely to pose a danger to society, and thus instruct judges to increase those persons’ sentences. When a poll is bad, however, the election results will eventually show that the poll was erroneous, and the poll will not have done fundamental harm to the election, except perhaps to the pride of commentators who were paid to spout the poll results like an electoral oracle. In contrast, after a criminal is given a longer sentence by a faulty algorithm, there is no subsequent test by which that sentence will be proven wrong. The harm to that prisoner will be an irreparable loss of additional and unnecessary years in prison, while criminals who perhaps should have been imprisoned longer will be freed too soon to commit more crimes.

So the bad polling of the last two elections should serve as an omen for what society faces in the Age of Algorithms.

Posted by Alfred Cowger

The British Secondary School Test Debacle—When Algorithms Are Designed to Churn Out a Result, not a Rational Conclusion

Few Americans have heard of the standardized test disaster that occurred this year in Britain and that has shaken both the British education system and the Johnson government. This disaster should be considered a warning about what happens when any entity, in particular a government, uses an algorithm to justify a result rather than to make objective determinations.
This debacle started with a seemingly beneficial result in mind. The British education system had long been accused of “grade creep”, such that more top marks were being given out than were warranted by the quality of the students receiving them. To make matters worse, that creep seemed to favor students of the upper classes who attend the poshest public (i.e. private, to the confusion of the average U.S. citizen) high schools. After all, parents pay good money to ensure that by attending those exclusive secondary schools, their children will be more likely to be admitted to the best universities in the British Isles.
In response to these criticisms, the British government hired designers to develop an algorithm that would prevent grade creep. The algorithm would determine the percentage of students that “should” fall within each grade range. Furthermore, each student’s expected grade, as given by a teacher, would be evaluated against both historical results for students with similar schooling and expected results for all students taking the tests that year. Those historical and expected results were, in turn, based on algorithmic analysis. If the final grade given by the teacher deviated from the expected grade, the algorithm could override the teacher’s grade and raise or lower the student’s score.
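As a simplified sketch only (this is not Ofqual’s actual model; the student names and numeric grades below are hypothetical), the moderation step just described amounts to keeping the teacher’s grades merely as a ranking, while the grades actually awarded are forced to match the school’s historical distribution:

```python
# Simplified illustration of rank-based moderation (hypothetical data, not Ofqual's model).
def moderate(teacher_grades, historical_grades):
    """teacher_grades: {student: numeric grade}, higher is better.
    historical_grades: grades the school 'should' produce this year, one per student."""
    ranked = sorted(teacher_grades, key=teacher_grades.get, reverse=True)  # best student first
    allotted = sorted(historical_grades, reverse=True)                     # best grade first
    # The top-ranked student receives the best grade the school's history allows,
    # and so on down: no individual can outperform the school's past results.
    return dict(zip(ranked, allotted))

teacher_grades = {"Asha": 9, "Ben": 9, "Cara": 7, "Dev": 6}  # teacher believes Asha and Ben excel
historical_grades = [7, 6, 5, 4]                             # the school has never produced a 9
print(moderate(teacher_grades, historical_grades))
# {'Asha': 7, 'Ben': 6, 'Cara': 5, 'Dev': 4} -- the strongest students are capped by history
```

Under a scheme of this shape, a brilliant student at a historically weak school cannot receive a grade the school has never produced, which is exactly the pattern of harm that followed.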
The resulting re-grading was so disastrous that the government had no choice but to scrap the entire plan and fall back on the teachers’ initial scores. Students who came from schools with historically low test scores found their grades lowered, notwithstanding their personal achievement. Students from schools with historically high test scores, particularly those in small classes (in other words, the upper-class private schools), found their scores revised upwards. The alterations were so plainly unfair, and affected so many students striving to perform better than society assumed of them, that the results were condemned as biased, notwithstanding the fact that these algorithms were incredibly intricate in design precisely because they were meant to be tools for overcoming unfairness and bias. In fact, the algorithm’s creators wrote a 317-page report explaining just how fair and objective the results would be. See Will Bedingfield, Everything that went wrong with the botched A-levels algorithm, WIRED (Aug. 19, 2020), https://www.wired.co.uk/article/alevel-exam-algorithm.
So what went wrong? The complicated answer is that many problems were to be expected, given the complexity of the algorithm. The simple answer is that this outcome is a prime example of what happens when governments design and use algorithms to reach a desired outcome rather than a proper one. Moreover, it demonstrates what happens when algorithms use a bell curve to define outcomes: the persons who have traditionally fallen outside the norms that establish the bell curve are the ones most detrimentally affected by the outcomes the curve forces. Finally, it proves clearly that algorithms will go wrong. Even when a majority of an algorithm’s determinations are accurate, no algorithm will be perfectly accurate. When thousands of people, like the British graduating student population, are affected by an algorithm, the number harmed by the inevitable inaccurate results could likewise be in the thousands, even with the best algorithm. As this debacle shows, algorithms that are “just good” will result in too many individuals actually harmed.
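To put rough numbers on that last point (the figures below are purely hypothetical and do not reflect the actual 2020 cohort), even a highly accurate algorithm wrongs a very large absolute number of people when it is applied at national scale:

```python
# Purely hypothetical figures chosen only to illustrate scale.
cohort = 250_000       # students whose grades the algorithm decides
accuracy = 0.95        # an algorithm that is right 95% of the time...
mis_graded = cohort * (1 - accuracy)
print(f"{mis_graded:,.0f} students receive the wrong grade")  # ...still wrongs 12,500 people
```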
Finally, one must remember what could have happened if government officials had not acted. How would average students be able to protest their wrongful treatment? They could never prove how the algorithm harmed them, or perhaps even whether they were among the individuals harmed, because the process was so non-transparent. The government, in fact, could easily establish that for “most” students the results were acceptably accurate. Inevitably, the government would be buttressed by experts paid by the algorithm designer to argue that the algorithm was acceptable. Students would face discrimination, as well as harm from arbitrary and unreasonable results, treatment that would clearly be unconstitutional except that the students would not have the resources to meet their burden of proof. In fact, even with substantial resources, given the black-box nature of algorithms, the students still would never be able to meet that burden, meaning that the government’s use of the algorithm was sure to defeat any student’s due process rights. This debacle is a foreshadowing of both the harm that could befall recipients of government benefits and determinations in the Age of Algorithms, and the inevitable deprivation of constitutional rights that will preclude those harmed from ever being made whole.

Posted by Alfred Cowger