Month: January 2021

Free Speech Should Not be Subject to Private Regulation

Before we rejoice too much that the treasonous and hate-filled messages of Trump and the Proud Boys have been silenced, consider the implications. The Supreme Court has already recognized that social media platforms are the 21st-century equivalent of soapboxes at the corner of public parks. Without access to these platforms, it is almost impossible to communicate one’s ideas and discuss those ideas with like-minded people in this new Age of Algorithms. That means our Freedom of Speech rights are in the hands of private entities, and thus our speech rights have no judicial protection.

To make matters worse, the tools used by these private entities to censor speech are a combination of algorithms and human decisions. Those algorithms have already proven woefully inadequate to stop the spread of falsehoods, while at the same time blocking some posts arbitrarily and unequally. Many innocent posts containing certain words or phrases have resulted in people being sent to “Facebook hell” by algorithms. The social media owners admit that the algorithms will make mistakes, but during the Covid pandemic, the humans who might veto algorithmic censoring decisions have not been in the office to catch these errors.
Even when those humans are present, there is no guarantee that the censoring will be objective. After all, the decision-makers are human, and at times will simply adopt the “I know it when I see it” approach to meting out punishment for “bad speech.” Moreover, both the humans who design the algorithms and the humans who review the algorithms’ censorship can have their own biases for or against certain posts. “Boys will be boys…” is still a standard by which offensive postings are evaluated.

Our First Amendment rights deserve more protection than what private enterprise promises, let alone delivers.

Posted by Alfred Cowger

I Was Interviewed About the Content of My Last Blog

My last posting was a complaint about how those seeking vaccinations from local pharmacies are being required to join the pharmacies’ Member Clubs. The local FOX affiliate, Channel 19 in Cleveland (WOIO), picked up on the story. Its investigation corroborated what I asserted, and the reporter also found the pharmacy’s excuse to be weak. The story can be seen here: https://www.cleveland19.com/2021/01/26/made-sign-up-receive-marketing-ads-get-covid-vaccine-some-say-thats-not-right/

Posted by Alfred Cowger

Covid Vaccinations and the Forced Loss of Privacy

Unnoticed by virtually everyone, as far as I can tell, is yet another impingement on the control of one’s privacy by way of the Covid vaccine. At least here in Ohio, many of the sources of vaccinations are private pharmacies, not public hospitals. When I registered my mother (who qualifies for the 1B category of persons entitled to vaccines so far), I was required to enroll her in the “savings club” of the store in which the pharmacy was located. So, in order to get my mother life-saving treatment, she was forced to turn over data about herself without compensation, let alone control.

Once again, a government ignorant of the ramifications of losing control of one’s private data has allowed that to happen. In this Age of Algorithms, where AI and algorithms work best when they have access to a variety of large databases, one’s personal data is one of the most valuable commodities we personally own. We would never allow a private operation to show up one day and say, “Your back yard is a good place to store my industrial supplies–get out of my way and, oh, move your dog someplace else.” Worse, if we learned a government program had been set up to allow such trespass, the citizenry would be protesting in front of City Hall by nightfall. So why should the government allow retailers to demand that senior citizens give away their personal information for free, and in fact require them to sign on to an activity that is meant to mine even more data about their buying habits? The government, instead, should be ordering these retailers to collect data only for purposes of setting up vaccination reservations, and proscribing the use of that data for any marketing or other purpose.

To make matters worse, the “Privacy Policy” of the establishment where my mother is registered is an oxymoron of contradictory legal clauses. The retailer states that it will only give my mother’s info to its vendors in a non-individualized format, but then states the vendors may “bring to” my mother offers for sales and marketing purposes. That means the vendors are getting not just anonymous demographic data about my mother, but rather her personal data tied to her address, phone number and email address. Otherwise, how are these vendors going to “bring” directly to my mother offers that are based on her demographics? So, the promise that her personal information will not be used is immediately superseded by the fact vendors may use her personal information. All of this data can be re-sold to third-party data aggregators multiple times, put into a database with unrestricted access by third parties, and otherwise used to label and categorize my mother literally forever.

Europe is well ahead of the U.S. in ensuring that individuals know how their data might be used, and in preventing the exploitation of that data without individuals agreeing to that use. Americans should not be forced to choose between privacy and health. This vaccination effort could, on the other hand, be an opportunity to set a precedent whereby Americans are entitled to goods and services in the Age of Algorithms without having both to pay for those goods and services and to hand over valuable data without proper compensation.

Posted by Alfred Cowger

The Use of Algorithms to Substantiate Anything–Including Treason

So much could be written about how algorithms directly contributed to those historic, awful events on January 6, 2021. Marketing algorithms were most surely used to identify persons who might be interested in joining the coup planning, and to aim claims of election fraud directly at those who would be most swayed by those claims. Algorithms SHOULD have been used by law enforcement prior to the coup attempt to track the social media postings about what would happen on January 6th, though it is apparent that whatever results were discoverable via algorithms were ignored by senior U.S. Capitol security officials.

However, what I want to focus on is how the term “algorithm” itself was used to make falsehoods seem true. Algorithms, according to the affidavits of alleged experts, proved pervasive election fraud. For example, algorithms were used to derive expected election outcomes based on early returns on Election Night. When the final vote counts did not result in a Trump victory, these algorithms concluded the only explanation was that, as the evening’s vote count went on, Trump votes were destroyed and Biden ballots were stuffed via some unexplainable means. At least one “expert” used algorithm-driven comparisons between the 2016 and 2020 elections to “prove” that vote fraud had to have been systemic. Another went so far as to claim that the voter fraud was so well done, it could not be discerned from evidence, but only from algorithmic analysis. Dozens of courts quickly rejected these analyses, finding that the algorithm designers, the databases being used, and/or the conclusions derived by the so-called experts were all obviously and fundamentally flawed, and thus proved nothing. Notwithstanding these irrefutable conclusions, the word “algorithm” was sufficient for Trump supporters to ignore the decisions of all these courts without a moment’s question.

As the coup was unfolding, I began reading almost verbatim postings by Trump supporters claiming that those attacking the Capitol and its police were actually Antifa and Black Lives Matter infiltrators. Trump’s legions cited a Washington Times report stating that algorithm-based facial recognition technology had proven known Antifa members were at the Capitol. Within twenty-four hours, the company that the Times claimed had undertaken the analysis announced this claim was a complete falsehood. Indeed, that company had found multiple instances of right-wingers who had been at previous violent street actions appearing in the halls of the Capitol after it was breached. Once again, however, Trump supporters wanting to justify the actions of the Capitol rioters latched on to the claims that algorithms had proven the presence of Antifa, and they were unwavering in repeating this assertion well after the coup was suppressed and these claims were refuted completely. Even the videos of Trump flag-waving rioters screaming “Hang Pelosi” and “Hang Pence”, vocalizing their intent to stop the electoral vote certification and harm elected officials who stood in their way, were not enough to convince Trump supporters that Antifa and BLM members were not the rioters. Algorithms had already pronounced the “truth”, and that was enough for Trump’s supporters.

Until society becomes used to, and perhaps healthily jaded by, the application of algorithms, algorithms will take on almost mystical properties. Studies have already shown that humans tend to presume that if a technology is complicated, it must be right. The less a human understands the limitations of algorithms, the more likely that human is to believe any claim allegedly based on an algorithm. Even those who should know better than to believe a claim simply because it has a high-tech basis, such as judges, doctors and business professionals, tend to initially believe algorithms over humans. They do not alter those initial opinions even when their own intuition and experience cause them to question an algorithmic conclusion.

History will eventually adjudge last week’s coup attempt to be one of the most extreme and dangerous examples of using the term “algorithm” to prove falsehoods. However, it should also be a lesson about what could happen in situations with less widespread, yet still serious, implications. Individuals harmed by algorithm-based conclusions denying their claims or benefits will not have the resources to overcome this religion-like deference to almighty algorithms that rule against individuals’ interests. Doctors may choose treatments because their hospital’s very expensive algorithms say to choose those treatments, even if the doctors might otherwise have decided differently. Corporate boards may defer to the recommendations of their management because the management used algorithms to make strategic plans, even if those plans sound suspect on further reflection. Judges may grant or deny freedom based on algorithmic conclusions, notwithstanding their experience-based hesitancy to do so. Those arguing against algorithm-based conclusions will be at an immediate disadvantage, not because their arguments lack merit, but because they are arguing against an algorithm.

Society needs to learn quickly that the emperor is not clothed simply because he has an algorithm claiming otherwise.

Posted by Alfred Cowger