
Wednesday, January 25, 2023

Google censors opinion condemning private censors

On December 31, Google blocked access to a Savory Tort post from 2019 on free speech and censorship in New Zealand.

I received this message from Google on New Year's Eve:

As you may know, our Community Guidelines (https://blogger.com/go/contentpolicy) describe the boundaries for what we allow--and don't allow--on Blogger. Your post titled "NZ prosecutions for sharing Christchurch vid would suppress news, free speech, but worse is empowerment of private censors" [my boldface] was flagged to us for review. This post was put behind a warning for readers because it contains sensitive content; the post is visible at http://www.thesavorytort.com/2019/03/nz-prosecutions-for-vid-sharing-would.html. Your blog readers must acknowledge the warning before being able to read the post/blog.

Why was your blog post put behind a warning for readers?

Your content has been evaluated according to our Adult Content policy. Please visit our Community Guidelines page linked in this email to learn more [link below]. We apply warning messages to posts that contain sensitive content. If you are interested in having the status reviewed, please update the content to adhere to Blogger's Community Guidelines. Once the content is updated, you may republish it at [URL omitted]. This will trigger a review of the post.

For more information, please review the following resources:
Terms of Service: https://www.blogger.com/go/terms
Blogger Community Guidelines: https://blogger.com/go/contentpolicy 

Sincerely,
The Blogger Team

Setting aside for a moment the irony of private censorship of a post about private censorship,* I wanted to understand what triggered the block. As the headline indicates, I fretted in the post about New Zealand criminal law being turned against online re-publishers of the horrifying video of the mass shooting at a Christchurch mosque in 2019. I wrote that the lack of a newsworthiness exception in New Zealand law would be problematic under U.S. First Amendment law, and that the prosecution could not withstand analysis under Brandenburg v. Ohio (1969). And I wrote about how the modern internet has posed a challenge to the dated First Amendment doctrine.

Willow Brugh via Wikimedia Commons and Flickr CC BY-SA 2.0
At first, I thought maybe I linked to the objectionable video itself; I had not. I did mention by "dot com" name a problematic website from earlier internet days that was infamous in freedom-of-information circles for hosting gruesome content. But I didn't hyperlink it, and the site no longer exists at that address anyway.

The message from Google referred to the "Adult Content policy."  Here's what the policy says:

We do allow adult content on Blogger, including images or videos that contain nudity or sexual activity. If your blog contains adult content, please mark it as 'adult' in your Blogger settings. We may also mark blogs with adult content where the owners have not. All blogs marked as 'adult' will be placed behind an 'adult content' warning interstitial. If your blog has a warning interstitial, please do not attempt to circumvent or disable the interstitial - it is for everyone’s protection.

There are some exceptions to our adult content policy:

  • Do not use Blogger as a way to make money on adult content. For example, don't create blogs that contain ads for or links to commercial porn sites.
  • We do not allow illegal sexual content, including image, video or textual content that depicts or encourages rape, incest, bestiality, or necrophilia.
  • Do not post or distribute private nude, sexually explicit, or non-explicit intimate and sexual images or videos without the subject’s consent. If someone has posted a private nude, sexually explicit, or non-explicit intimate and sexual image or video of you, please report it to us here [hyperlink omitted].

There's nothing remotely sexual about the 2019 post. Nor is there any depiction or description of violence, other than a reference to the mere occurrence of the tragedy, which was well reported in news media with plenty more detail.

Links to The Savory Tort were once banned from Facebook, too, for more than a year. When I inquired, Facebook sent me a form message saying that The Savory Tort violated Facebook terms of service for content. I sent further inquiries, made appeals, etc., but Facebook never clarified how the terms were violated. Indeed, Facebook never responded with anything other than form messages confirming the ban. For all the hoopla about a "Facebook supreme court" and thoughtful, human review of content, those avenues apparently are not open to the little people such as me.

Ultimately, a former student and labor attorney complained about the ban to Facebook, after he was denied permission to share a link to my blog. He kindly let me know. Subsequently, consequently?, and suddenly, links could be posted. The ban vanished as mysteriously as it had appeared. Not a word from Facebook, then or since.

I suspected that the Facebook ban came about upon a complaint from someone who didn't like something I wrote. That happens. For example, I wrote once about a family law case in the Massachusetts Supreme Judicial Court, and I was threatened with legal action by the disappointed party. 

It's easy for someone to complain to Facebook or Google Blogger about online content. The complaint is not necessarily reviewed by a real person, or, if it is, the reviewer may be incompetent or indifferent. It's easier to block or take down content than to arbitrate a dispute. That's why trolls and publishers have been able to abuse the notice-and-takedown system that has debilitated fair use of intellectual property.

Here, Google said that the post "was flagged to us for review" (my italics) and "has been evaluated." The choice of words, muddling passive voice notwithstanding, suggests that a third party triggered the review. How anyone, even a bot, at Google then could have found adult content, or anything in violation of the content terms, is a mystery to me. I can conclude only that the block was imposed automatically upon the complaint, with no review at all.

I would seek further explanation or ask for a human review, but that, it seems, is not an option. Google offers me the opportunity to have the block reviewed only after I "update the content to adhere to Blogger's Community Guidelines." I see no violation of the guidelines now, so I don't know what to update.

Now let's come back around to that irony, which might not be coincidental.  (Irony and coincidence are not necessarily the same thing, whatever Alanis Morissette would have you believe.)  The dangers of private online censorship were the theme of my post in 2019. The block on my post occurred in December 2022, only weeks after Elon Musk began to censor his critics on Twitter. Musk is still at it, by the way, seemingly having acceded this week to Indian government demands that Twitter censor critics of Prime Minister Narendra Modi. 

At the same time in December that Musk was making headlines with Twitter censorship, the Supreme Court scheduled (for Feb. 21) the oral argument in Gonzalez v. Google LLC (track at SCOTUSblog). The case asks whether internet service providers such as Google enjoy section 230 immunity from liability in the provision of targeted content, such as search results, apart from the conduct of traditional editorial functions, akin to newspaper editors choosing letters to the editor. David McGarry explained for Reason two weeks ago, "The plaintiff is Reynaldo Gonzalez, whose daughter was murdered in a 2015 terrorist attack. [He] argues that YouTube, a Google subsidiary, should face liability because its algorithms recommended terrorist content posted on the platform that Gonzalez says aided the Islamic State."

That's a potential liability exposure that might incline Google to censor first and review later.

Perhaps someone triggered the automatic censorship of a great many online articles about private censorship, hoping to make the very point that private censorship is dangerous. If that's what happened here, I would offer a grudging salute. But I would like to see the point actually made, not just fruitlessly attempted.

At the end of the day, I'm not so broken up about the block. It's not like Facebook's ban, which frustrated me no end because I could not share content at all with family and friends. A reader who encounters a sensitive content warning wall might be only more interested to know what lies beyond. And my target audience isn't children anyway. 

I figure there's a reasonably good chance that this post will wind up behind a warning wall for having referred to a warning wall. So be it. Anyone interested enough to be investigating a four-year-old story of censorship probably will get the ironist's point, and mine.

* My journalism ethics professor at Washington and Lee University in the early 1990s, the late great Lou Hodges, railed against the word "censorship" to describe private action, so would have regarded the term "private censorship" as outrageously oxymoronic. Professor Hodges was steeped in classical learning and recognized that the word "censor" comes from the title of an ancient Roman magistrate whose responsibilities, on behalf of the state, included counting people and property—thus, "census"—and the enforcement of public morals through what we now call "censorship." To honor Professor Hodges, I long insisted on the same distinction. But in recent years, I have given in to the modern trend to employ the term regardless of the private or public nature of the actor. Professor Hodges could not then have anticipated that we would soon have an "Internet" that looks very much like a public commons, thus reviving the seemingly antiquated First Amendment problem of the company town. The term "censorship" seems to me apt for a world in which transnational corporations such as Google and Meta might as well be governments from the perspective of ordinary people.

Monday, September 13, 2021

'Don't panic,' lawyers say, as Oz High Court clears way for website liability over defamatory user comments


The High Court of Australia last week greenlit defamation claims against website operators for user comments, the latest evidence of crumbling global immunity doctrine represented in the United States by the ever more controversial section 230.

There is plenty of news online about the Aussie case, and I did not intend to comment.  For the academically inclined, social media regulation was the spotlight topic of the premiere issue of the Journal of Free Speech Law.

Yet I thought it worthwhile to share commentary from Clayton Utz, in which lawyers Douglas Bishop, Ian Bloemendal, and Kym Fraser evinced a mercifully less alarmist tone when they wrote, "don't panic just yet."

The Australian apex court extended the well-known and usual rule of common law defamation, where not statutorily suspended: that the tale bearer is as responsible as the tale maker.  In the tech context, in other words, "[b]y 'facilitating, encouraging and thereby assisting the posting of comments' by the public," the defendants, notwithstanding their actual knowledge or lack thereof, "became the publishers," Bishop, Bloemendal, and Fraser wrote.

But it's a touch more complicated than purely strict liability.  "What is relevant is an intentional participation in the process by which a posted comment may become available to be accessed by other Facebook users," Bishop, et al., opined.  "So does that mean you should take down your corporate social media pages? That would be an over-reaction to this decision."

The lawyers emphasized that this appeal was interlocutory.  On remand in New South Wales, the media defendants may assert defenses, including innocent dissemination, justification, and truth.  Bishop, et al., advise:

In the meantime, if your organisation maintains a social media page which allows comments on your posts, you should review your monitoring of third-party comments and the training of your social media team in flagging and (if necessary) escalating problems to ensure you can have respectful, non-defamatory conversation with stakeholders.

Funny they should say so.  Coincidentally, I gave "feedback" to Google Blogger just Friday that a new option should be added for comment moderation, something like "archive," or "decline to publish for now."  The only options Google offers are spam, trash, and publish.

I have two comments posted to this blog in recent years that I hold in "Awaiting Moderation" purgatory, because they fit none of my three options.  Every time I go to comment moderation, I have to see these two at the top.  The comments express possible defamation: allegations of criminality or other ill character on the part of third parties referenced on the blog.  I don't want to republish these comments, because I do not know whether they are true.  But I don't want to trash them, because they are not necessarily valueless.  Moreover, they might later be evidence in someone else's defamation suit.

I moderate comments for this blog, so I don't think it's too much to ask the same of anyone else who publishes comments, whether individual, small business, or the transnational information empires that peer over my shoulder.  

I do worry, though, about how that works out for the democratizing potential of the internet.  I'm trained to recognize potentially defamatory or privacy-invasive content; I've done it for a living.  Are we prepared to punish the blogger who contributes valuably to the information sphere but lacks the professional training to catch a legal nuance?  Or to pay the democratic price of disallowing dialog on that writer's blog?  As a rule, ignorance of the law is no excuse, in defamation law no less than in any other area.  But understanding media torts asks a lot more of the average netizen than knowing not to jaywalk.

I don't profess answers, at least not today.  But I can tell that the sentiment of my law students, especially those a generation or more younger than I, is unreticent willingness to hold corporations strictly liable for injurious speech on their platforms.  So if I were counsel to Google or Facebook, I would be planning for a radically changed legal future.