Like other legal issues surrounding speech on social media, defamatory speech is moving faster than the legal principles meant to address it. On one hand, legislatures long ago enacted statutes of limitations based on the date of publication of the defamatory statement. Those statutes limit how long a defamer can be liable for one defamatory “publication,” so the punishment does not unfairly outweigh the act. On the other hand, defamers determined to cause repeated disparagement would be liable for “re-publication” of their original statements. The concept of re-publication would protect the defamed person from being harmed again and again by repetition of the original defamation. Together, these two concepts worked well: they protected the defamed party from repeated repetition of the defamatory statement, while preventing the defamer from facing interminable penalties long after making it.
The problem is that when defamation occurs on social media, the person found to have committed defamation can be ruined by one bad statement, because that statement can spread far beyond the defamer's intended audience and can persist on the internet forever. If the continued existence of the defamatory statement continually triggered a new penalty against the defamer, the penalty could far outweigh the harm that was intended. That, in turn, can have a chilling effect on statements that may have had some valuable intent, ranging from criticism of public officials to consumer complaints against private businesses. If any defamation will live forever in the Age of Algorithms, and thus so will the penalties, will anyone risk saying anything negative about anyone or any entity?
On the other hand, defamation in the Age of Algorithms does not eventually fade away like the newsprint of yesteryear. A defamatory statement can always be found via search engines and can be passed around the globe by thousands or millions of social media users. Thus, the defamed party could be harmed forever and worldwide.
One of the first court decisions to address this issue is Penrose Hill, Limited v. Mabray, Case No. 20-cv-01169-DMR (N.D. Cal. Aug. 18, 2020). The case involved a lawsuit by a winery owner against a wine blogger. The court found that a posting that is not removed from a social media site is “published” when it is first posted, and that the mere act of leaving the post up is not republication. Thus, the statute of limitations begins running on the date of the first posting and expires even if the post is still up when the limitations period runs out.
The court then found that merely referencing that posting in a later tweet is not re-publication. The court based this decision on traditional cases in which publications containing defamatory statements were merely cited in later publications. The court noted in passing that, traditionally, a defamatory statement can be deemed re-published if the original statement is cited with the intention of bringing it to the attention of a new audience. This passing comment deserved more serious consideration from the court. People posting tweets hope that each tweet will reach a new audience, including the hope that the new post will be re-tweeted even more broadly than the first. Thus, one could argue that a re-tweet of an original defamatory statement should be presumed to be an effort to reach a new audience.
The court went on to find that traditional defamation law does not treat re-posting of the same tweet as republication. Therefore, a verbatim reposting of the same statements by the blogger did not trigger a new statute of limitations. The problem, as recent history demonstrates, is that the best way to spread lies and defamatory speech is to repeat them again and again, hoping that they will be re-posted so many times that people begin to believe them simply because they have seen them so often. That suggests that if modern defamation law is to respond to hatemongers intent on harming individuals with lies, verbatim re-postings should be proscribed just as the original posting is. That, in turn, would mean the statute of limitations does not begin running until the hatemongers stop repeating their lies.
These concerns merely begin the discussion about defamation law in the Age of Algorithms. What is obvious is that courts should not rely on common law that arose when defamation was confined by Industrial Age logistics to publications reaching finite populations and locales, publications that rarely lasted a decade before crumbling to dust.