So much could be written about how algorithms directly contributed to those historic, awful events of January 6, 2021. Marketing algorithms were almost surely used to identify persons who might be interested in joining the coup planning, and to aim claims of election fraud directly at those who would be most swayed by them. Algorithms SHOULD have been used prior to the coup attempt by law enforcement to track social media postings about what would happen on January 6th, though it is apparent that whatever results were discoverable via algorithms were ignored by senior US Capitol security officials.
However, what I want to focus on is how the term “algorithm” itself was used to make falsehoods seem true. Algorithms, according to the affidavits of alleged experts, proved pervasive election fraud. For example, algorithms were used to derive expected election outcomes based on early returns on Election Night. When the final vote counts did not result in a Trump victory, these algorithms concluded the only explanation was that, as the evening’s vote count went on, Trump votes were destroyed and Biden ballots were stuffed via some unexplainable means. At least one “expert” used algorithm-driven comparisons between the 2016 and 2020 elections to “prove” that vote fraud had to have been systemic. One expert went so far as to claim that the voter fraud was so well done it could not be discerned from evidence, but only from algorithmic analysis. Dozens of courts quickly rejected these analyses as proving nothing, except that the algorithm designers, the databases being used, and/or the conclusions derived by the so-called experts were all obviously and fundamentally flawed. Notwithstanding these irrefutable conclusions, the word “algorithm” was sufficient for Trump supporters, without a moment’s question, to ignore the decisions of all these courts.
As the coup was unfolding, I began reading nearly verbatim postings by Trump supporters claiming that those attacking the Capitol and its police were actually Antifa and Black Lives Matter infiltrators. Trump’s legions cited a Washington Times report stating that algorithm-based facial recognition technology had proven known Antifa members were at the Capitol. Within twenty-four hours, the company that the Times reported had undertaken the analysis announced this claim was a complete falsehood. Indeed, that company had found multiple instances of right-wingers who had been at previous violent street actions appearing in the halls of the Capitol after it was breached. Once again, however, Trump supporters wanting to justify the actions of the Capitol rioters latched on to the claims that algorithms were used to prove the presence of Antifa, and they were unwavering in repeating this assertion well after the coup was suppressed and these claims were refuted completely. Even the videos of Trump flag-waving rioters screaming “Hang Pelosi” and “Hang Pence”, vocalizing their intent to stop the electoral vote certification and harm elected officials who stood in their way, were not enough to convince Trump supporters that Antifa and BLM members were not the rioters. Algorithms had already pronounced the “truth”, and that was enough for Trump’s supporters.
Until society becomes used to, and perhaps healthily jaded by, the application of algorithms, algorithms will take on almost mystical properties. Studies have already shown how humans tend to presume that if a technology is complicated, it must be right. The less a human understands the limitations of algorithms, the more likely the human is to believe any claim allegedly based on an algorithm is true. Even those who should know better than to believe a claim simply because it has a high-tech basis, such as judges, doctors and business professionals, tend to initially believe algorithms over humans. They do not alter those initial opinions even when their own intuition and experience cause them to question an algorithmic conclusion.
History will eventually adjudge last week’s coup attempt to be one of the most extreme and dangerous examples of using the term “algorithm” to prove falsehoods. However, it should also be a lesson for what could happen in situations with less widespread, yet still serious, implications. Individuals harmed by algorithm-based conclusions denying their claims or benefits will not have the resources to overcome this religion-like deference to almighty algorithms that rule against individuals’ interests. Doctors may choose treatments because their hospital’s very expensive algorithms say to choose those treatments, even if the doctors might have decided otherwise on their own. Corporate boards may defer to the recommendations of their management because the management used algorithms to make strategic plans, even if those plans sound suspect on further reflection and insight. Judges may grant or deny freedom based on algorithmic conclusions, notwithstanding their experience-based hesitancy to do so. Those arguing against algorithm-based conclusions will be at an immediate disadvantage, not because their arguments lack merit, but because they are arguing against an algorithm.
Society needs to learn quickly that the emperor is not clothed simply because he has an algorithm claiming otherwise.