Month: March 2021

The Use of First Amendment “Rights” to Suppress First Amendment Rights

A new opinion from the Sixth Circuit, if it stands, has terrifying implications for First Amendment rights in the Age of Algorithms. As early as 1997, the Supreme Court recognized that the internet serves as the 21st-century equivalent of the town square of yore. Reno v. ACLU, 521 U.S. 844, 868 (1997). The problem is that today’s town squares, the social media platforms, are owned by private enterprises, and thus do not fall under the First Amendment protections afforded citizens against government censorship.

Now, the Sixth Circuit has issued a ruling finding that the alleged rights of a person who controls a forum trump the rights of any individual who is required to sit in that forum, even if that forum is a public setting. In a decision handed down last week, the Sixth Circuit ruled that a professor at a public college could refuse to identify a student in his class by that student’s self-identified gender, on the grounds that the professor had First Amendment free exercise and free speech rights to call that student by whatever gender the professor chose. Meriwether v. Hartop, Case No. 20-3289 (6th Cir. 2021), https://www.opn.ca6.uscourts.gov/opinions.pdf/21a0071p-06.pdf. The decision focused solely on the alleged right of the professor to force his personal religious ideology on his entire class. Worse, that professor was empowered, based on his own religious belief, to deny the right of a student in that class to identify her own gender. The Court actually held that the college should have accepted the professor’s suggestion that he simply call the student by her last name, so as to deny her gender identification, as if that were a reasonable accommodation. At no point did the Court consider the impact of this ruling on the student herself, or ask why a publicly employed professor can foist his religious beliefs on his entire class. Apparently, so long as a public employee can claim his religious beliefs support his behavior, he can aim hate speech filled with prejudice, bigotry and bias at any person seeking the public services or benefits the public employee is being paid to offer.

If this ruling stands, it could be the death knell for free speech and religious liberty in the Age of Algorithms. The Court’s ruling means that when an individual is in control of a forum, that individual can dictate the religious, moral and ethical beliefs of every person participating in that forum, no matter how personal or profound the beliefs of those participants might be. In the Sixth Circuit case, that forum was public property and the individual controlling the forum was a professor paid with public funds. When the forum is, at best, quasi-public, such as Facebook, YouTube or Twitter, and control over forum postings is in the hands of a private-enterprise algorithm, the result is chillingly clear. Those wishing to espouse beliefs that do not fit the corporate stance on those beliefs, or even just the marketing plan of that corporation, will see their posts blocked by highly efficient (if not necessarily accurate) algorithms. Those in control of a forum will control the speech and religious beliefs of everyone in that forum, including a forum that is a social media site.

Those wishing to express their beliefs have only one remedy, just like the students in that class: they can leave. But leaving is no real option. Just as that small college has only one course on political philosophy, so too, for all practical purposes, is there only one YouTube and one Facebook. If one has to leave the town square of the Age of Algorithms in order to avoid First Amendment suppression, then the need to leave is itself a suppression of First Amendment rights. Has the death watch for the First Amendment begun?

Posted by Alfred Cowger

Interview on “Breaking Banks” podcast

An interview I did on the podcast “Breaking Banks” is now available for streaming here: https://provoke.fm/episode-382-algorithms-and-ai-the-good-the-bad-and-the-myth/

Breaking Banks is “The #1 global fintech radio show and podcast. Every week we explore the personalities, startups, innovators, and industry players driving disruption in financial services; from Incumbents to unicorns, and from the latest cutting edge technology to the people who are using it to help to create a more innovative, inclusive and healthy financial future.”

This interview, for obvious reasons, focused on the impact of AI and algorithms on the fintech industry. However, many of the issues raised would apply equally to other industries adopting algorithm-based products.

Posted by Alfred Cowger

The Limitations that Algorithms and AI Share With the Ford Pinto and Exploding Pressure Cookers

Algorithms and AI are the most exciting and profound product technology to enter society since the perfection of the internal combustion engine (although I would argue that the development of a family-affordable car was far more important). Yet, for all their awe-inspiring complexity, and their ability to make discoveries humans could only hypothesize about, they are still products. Thus, they can become harmful to humans because of poor design and/or defective components. In other words, algorithm-based products could become as infamous as the Ford Pinto, whose gas tank exploded in rear-end collisions, or lead paint, which continues to cause brain damage in children one hundred years after it was applied to house trim.

Recent news stories have provided clear examples of how readily AI and algorithms can be designed defectively. NBC News reported on an algorithm used to screen rental applicants that confused Hispanic names, and thus denied credit to a Navy veteran with a top-secret clearance based on the criminal convictions of a Mexican drug dealer. https://www.nbcnews.com/tech/tech-news/tenant-screening-software-faces-national-reckoning-n1260975. When a product’s purpose is to collect data and then accurately and efficiently draw conclusions from that data far more quickly than any human could, yet the product cannot draw such conclusions accurately, it should be subject to liability for its design defect, just like a car model that cannot be driven safely down the road. Moreover, while that car might horrifically mow down two or three pedestrians on a sidewalk, the rental-screening algorithm could permanently damage the credit histories of hundreds or thousands of renters. Worse, unlike the car, the renters literally won’t even know what hit them, since the non-transparent nature of such algorithms will shield the defects in that algorithm.
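To make the design-defect point concrete, here is a minimal, purely hypothetical sketch of a name-only matching routine. The records, names, threshold and scoring function are invented for illustration and are not taken from the tenant-screening product described in the NBC News story; the sketch simply shows how matching on names alone, with no date of birth or other identifier, can attach one person’s criminal record to a different person.

# Illustrative sketch only: a deliberately naive, hypothetical screening matcher,
# not the actual product described in the NBC News story.
from difflib import SequenceMatcher

# Hypothetical "background check" database keyed only by name.
records = [
    {"name": "Jose L. Hernandez", "offense": "drug trafficking conviction"},
]

def name_similarity(a: str, b: str) -> float:
    # Crude 0-to-1 similarity score between two names.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen_applicant(applicant_name: str, threshold: float = 0.8):
    # The design defect: no date of birth, address, or other identifier is
    # checked, so a merely similar name is treated as the same person.
    return [r for r in records if name_similarity(applicant_name, r["name"]) >= threshold]

# A different person with a similar name is wrongly flagged:
print(screen_applicant("Jose A. Hernandez"))
# [{'name': 'Jose L. Hernandez', 'offense': 'drug trafficking conviction'}]

A less defective design would, at a minimum, require a second identifier such as a date of birth before reporting a match, and would surface the matched record for human review rather than silently denying the applicant.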

That same news story noted a case going before the Supreme Court involving an individual denied a car loan because a similarly named individual was on the U.S. Government’s terrorist watch list. This demonstrates how algorithms and AI share the same exposure to bad components as other, more mundane products, such as pressure cookers with bad seals. It takes only one faulty component to turn a pressure cooker into a time bomb in one’s kitchen. At least with pressure cookers, however, the components are limited and identifiable. In the case of algorithms, every bit of data in the ether that is the internet, social media and the Cloud can be erroneous. From that one erroneous data point can spring untold ramifications, as algorithms searching through trillions of data bits to make literally millions of decisions can draw the wrong conclusions from that one bit of bad data. In the case going to the Supreme Court, it took only the error-filled U.S. terrorist watch list–which has already denied senators and toddlers the right to board aircraft–to prevent innocent individuals from obtaining credit. Will those same individuals tagged as terrorists eventually be pulled over and shot one dark night by a security guard who has been told a terrorist is driving through a subdivision?

Before businesses, and eventually society itself, introduce artificial intelligence into every aspect of our lives, American jurisprudence must set standards that allow individuals harmed by defective algorithm-based products to recover for their harm, and to prevent that harm from happening again to them and to their fellow citizens. Courts will even have to develop new procedural and evidentiary rules for breaking through the black boxes that shroud algorithm-based products.
Otherwise, the high technology meant to elevate us will literally kill us.

Posted by Alfred Cowger

AI’s Threat to American Jurisprudence–A Non-Partisan Issue

I have been asked whether my belief that AI and algorithms are a profound threat to Civil Rights, Legal Remedies, and American Jurisprudence (hence the title of my book) is some “libtard” belief, or one that is quite at home with QAnon conspiracies. I personally believe that my fears are completely non-partisan, and that the dangers behind them pose risks for all Americans, regardless of their political beliefs. It may be one of the few issues that everyone can and should still agree upon.

After all, the risk of discrimination and the loss of due process rights from AI and algorithms cut across the entire political spectrum. If the expectation of privacy dwindles to nothing because everything we do is data-mined, analyzed and marketized, we will all face the consequences, whether we love or hate Trump. If social media algorithms can block posts because their content is offensive to other social media customers, those on the Left and the Right could find their access to the Age of Algorithms’ “public square” equally blocked by the feelings of the over-sensitive and intolerant Center.

So, the next time we progressives cheer because the right wing finds its access to mainstream social media blocked, remember that our opinions may be the next target of those algorithms. The risk that government algorithms will deny benefits unfairly, and that those same algorithms will be immune from court challenges to correct the unfair denials, can and will touch our friends, loved ones and ourselves, regardless of whom we voted for. In fact, the entire spectrum of political beliefs may collapse under the onslaught of algorithms that cannot be questioned even though they are certainly flawed, and that do nothing but make the government seem right, regardless of what happens to its citizenry. A government that cannot be challenged, let alone corrected, is the very definition of tyranny, and if the United States becomes that tyranny, our fights for civil rights and legal remedies as members of the ACLU or CPAC will seem like quaint old-time habits, as relevant as the carriage houses and horse stables in our back yards.

Posted by Alfred Cowger