Like other legal issues surrounding speech on social media, defamatory speech is moving faster than the legal principles meant to address it. On one hand, legislatures long ago enacted statutes of limitations based on the date of publication of the defamatory statement. Those statutes limit how long a defamer can be liable for a single defamatory “publication,” so that the punishment does not unfairly outweigh the act. On the other hand, defamers determined to cause repeated disparagement would be liable for “re-publication” of their original statements. This concept of re-publication protected the defamed person from being harmed again and again by repetition of the original defamation. Together, these two concepts both protected the defamed party from repetition of the defaming statement and prevented the defamer from facing interminable penalties long after making it.
The problem is that when defamation occurs on social media, the person found to have committed the legal sin of defamation can be ruined by one bad statement, since that statement can be repeated far beyond the defamer's intended audience and can last forever on the internet. If the continued existence of the defaming statement triggered a new penalty each time it resurfaced, the penalty to the defaming party could far outweigh the harm that was intended. That, in turn, can have a chilling effect on statements that may have had some valuable purpose, ranging from criticism of public officials to consumer complaints against private businesses. If every defamatory statement will live forever in the Age of Algorithms, and the penalties along with it, will anyone risk saying anything negative about any person or entity?
On the other hand, defamation in the Age of Algorithms does not eventually fade away like the newsprint of yesteryear. A defamatory statement can always be found via search engines and can be passed around the globe by thousands or millions of social media users. Thus, the defamed party could be harmed worldwide, and indefinitely.
One of the first court decisions to address this issue is Penrose Hill, Limited v. Mabray, Case No. 20-cv-01169-DMR (N.D. Cal. Aug. 18, 2020). The case involved a lawsuit by a winery owner against a wine blog. The court found that a posting that is not removed from a social media site is “published” when it is first posted, and that merely keeping the post up is not republication. Thus, the statute of limitations begins running on the date of the first posting and will expire even if the post remains online when the limitations period runs out.
The court then found that merely referencing that posting in a later tweet is not re-publication. The court based this decision on traditional cases in which publications containing defamatory statements were merely cited in later publications. The court noted in passing that, traditionally, a defamatory statement can be deemed re-published if the original statement is cited with the intention of bringing it to the attention of a new audience. That passing comment deserved more serious consideration from the court. People posting tweets hope that each tweet will indeed reach a new audience, including the hope that the new post will be re-tweeted even more broadly than the first time. Thus, one could argue that a re-tweet of an original defamatory statement should be presumed to be an effort to reach a new audience.
The court went on to find that traditional defamation law does not treat re-posting of the same tweet as republication. Therefore, a verbatim reposting of the same statements by the blogger did not trigger a new statute of limitations. The problem, as demonstrated by recent history, is that the best way to spread lies and defamatory speech is to repeat them again and again, hoping they will be re-posted so many times that people begin to believe the lies simply because they have seen them so often. That suggests that if modern defamation law is to respond to hatemongers intent on harming individuals with lies, verbatim re-postings should be proscribed just as the original posting is. That, in turn, means the statute of limitations should not begin running until the hatemonger stops repeating the lies.
These concerns merely begin the discussion about defamation law in the Age of Algorithms. What is obvious is that courts should not rely on common law that arose when defamation was confined by Industrial Age logistics to finite populations and locales, in publications that rarely lasted a decade before they crumbled to dust.
Snow Days Ain’t What They Used to Be
My daughter was so happy for the foot of snow we received on Monday night, because she was sure that meant a Snow Day. She was horrified when her school announced Tuesday would simply be a day of remote learning, like much of the last several months. That has resulted in an argument about whether Snow Days are themselves Acts of God, because they are after all caused by one, and thus students have a divine right to have the day off from school. I assert, because after all I am a Daa-aad, that Snow Days are an archaic part of pre-internet schooling, and thus serve no purpose when students and staff alike can readily shift into remote learning until the streets are cleared. Are Snow Days yet another part of Ohio life that will be forever altered by Covid?
What Happens When the U.S. Marketplace of Ideas Is Not Even Located in the U.S.?
Lost in the debate about social media as the “marketplace of ideas” for the Age of Algorithms is the physicality of social media, or more precisely the lack thereof. The difference between the physically located marketplace of ideas of the first 250 years of the United States and the marketplace of ideas hereafter will have significant ramifications for the First Amendment that few are discussing now.
There is no doubt that social media is the soap box of the 21st Century. The Supreme Court has already endorsed this obvious comparison. Yet the comparison is weak. The platform of the speech-giver of old was completely physical, whether that platform was a stage or a park corner. One essential element of Free Speech rights was the ability to gain access to a public venue without losing that right just because of what one wanted to say. At the very least, the speech-giver and the government official intent on preventing the speech both knew where the speech would occur.
The Age of Algorithms has changed that completely. Social media is not dependent on a location, let alone an advantageous one like a popular park corner, so those exercising their free speech rights can do so from anywhere in the world. That makes government action against hate speech difficult. If Twitter were to decide tomorrow to move its social media network to a Nigerian server (just for example), its U.S. users would not see any difference on their computer screens. Yet any U.S. government regulation of alleged hate speech, or even of obvious crimes like sex trafficking, might not reach outside U.S. borders. So a government's ability to respond to the negative effects of communication may be shrinking.
Of course, the U.S. government could block Twitter from gaining access to U.S. computers, much like China blocks Google from the computers of its citizens. That would not put an end to Twitter itself, particularly if the country hosting Twitter decided to be sympathetic to Twitter and its billions of dollars of revenue.
Perhaps more importantly, the moment the U.S. government decides to block a social media site because of what it says, that is content-based censorship of speech, which is almost never permitted under the First Amendment. So the U.S. government's only tool for responding to harmful social media would, in almost all cases, actually result in the protection of that harmful social media.
On the other hand, consider what happens if the U.S. government tries to prevent censorship by social media's owners. Well, those owners could simply move their servers to a friendlier jurisdiction, meaning one that values tax revenues and the employment of its citizens over free speech in the United States.
This is yet another reason that the United States must come up with an alternative to social media, at least in its present form, as the marketplace of ideas for the Age of Algorithms. There is so much that can and will go wrong if social media is the primary means by which ideas and discourse are disseminated. Ultimately, both the individual speaker who is kicked offline and the government trying to prevent illegal communications via social media will be frustrated in their efforts, and the only winner will be the social media owner, generating revenues from a completely unregulated location.