Jillian C. York

Jillian C. York is a writer and activist.

On Facebook’s “suppression” of conservative news

The headlines this week are about Facebook’s “suppression” or “censorship” of conservative news. As Snopes points out, there are two separate things that former employees are (anonymously) accusing Facebook of. The first is the suppression of conservative news topics, which, if true, is indeed troubling. If there’s breaking news about, say, Ted Cruz, and a Facebook employee “blacklists” or suppresses that information, that calls into question the very premise of Facebook as a source for news, never mind an unbiased one.

The second accusation is that Facebook is suppressing conservative media. A group of Facebook employees has apparently been hand-selecting trending topics and sources, possibly to train the algorithms that will eventually take over. In doing so, they have apparently disregarded some sources:

Stories covered by conservative outlets (like Breitbart, Washington Examiner, and Newsmax) that were trending enough to be picked up by Facebook’s algorithm were excluded unless mainstream sites like the New York Times, the BBC, and CNN covered the same stories.

Kelly McBride at Poynter has written a solid piece about the ethics of Facebook’s editorializing of the news. And editorializing is what we should call it. Putting aside for a moment the important fact that Facebook has been completely opaque about its methods here (and in other areas), ultimately I think we want some editorializing. To some degree, we do want Facebook carefully selecting the sources from which we receive information; otherwise, what’s to stop Stormfront from becoming a trending news source? Google also picks and chooses what counts as a news source, although its big tent includes everything from Snopes.com to the New York Times, Electronic Intifada to The Blaze.

Facebook could do that, or it could be transparent about its methods and editorialize by relying on multiple or mainstream outlets’ coverage of events. Personally, given such transparency, I don’t have a problem with Facebook picking and choosing which sources it relies upon. Such lines have to be drawn somewhere. And just look at how some of the top conservative media have covered this scandal:

 

[Screenshots: three conservative media headlines about the story, captured May 10, 2016]

 

These are publications that some people think we should rely on for news. In my view, this is not journalism; it is screeching propaganda. I would stop using Facebook if I saw these headlines start showing up. They’re simply not truthful.

Now, does the so-called liberal media, or even the left media, do the same thing sometimes? I won’t deny that. Journalists are human, and frankly, objectivity is bullshit. But sites like The Blaze, The Rebel, Fox, and even the New York Post have no interest in truth, and the sooner we muster up the courage to say that out loud, the better.

 

The Trouble(s) With Advocating for Free Speech

I was sorely tempted to call this post “I’m a free speech advocate, everyone is an idiot.” After reading it, you’ll understand why, though ultimately that sentence is more of a reflection of my mood at the moment than of what’s going on.

What’s going on, you see, is that suddenly, full-grown adults seem confused as to why a free speech advocate would not be thrilled with Donald Trump’s hateful rhetoric. They’re confused as to why, in Germany (and everywhere), I align myself with the Antifa. They’re absolutely fucking perplexed as to how I could possibly suggest that Breitbart is closer to Stormfront than to the New York Times. Yesterday, an actual journalist suggested that a joke I made (see below) was somehow a violation of the First Amendment.

This person calls himself a journalist.

Mr. Raile has the misfortune of being scapegoated here, but I want to be clear: He’s only the most recent in a long, long line of grown-ass adults who have, in recent months, suggested that a free speech advocate cannot also rage against the machine, or what have you. So let’s use this as a teachable moment and talk about why that’s so, so wrong.

Not all speech is created equal

One constant and often vile misconception is the “principle” that all speech is equally valuable. Mark Zuckerberg himself has suggested this recently; fighting back last year against the German state’s attempts to more carefully regulate anti-refugee speech on Facebook, Zuckerberg said that platforms shouldn’t be made to decide what constitutes “legitimate debate.” The content that Germany wanted taken down, however, was not what I would call “legitimate debate” but vile, racist, and hateful speech that sometimes veered into incitement.

That said, I agree with Zuckerberg that a platform as large as Facebook should not be a gatekeeper. The problem with what he said isn’t the position that Facebook shouldn’t regulate speech; it’s the implication that the speech taking place there is inherently “legitimate debate.” Zuckerberg could just as easily have said that Facebook can’t effectively regulate speech at scale, or that he doesn’t believe that powerful entities (be they states or corporations) should be in the business of doing so. Had he done that, he would have effectively walked the line between protecting free speech and condemning hate.

This selective free speech advocate sees my criticism of her as incompatible with principles of free speech (because of course she does).

The good folks who crafted the Universal Declaration of Human Rights understood this. Yes, they included the right of states to put some limitations on harmful speech (limitations the U.S. does not implement), but more importantly, they included the right to freedom of thought and expression while also protecting individuals from arbitrary arrest and detention (ahem, Trump) and discrimination (ahem, Republican queer/transphobes), and guaranteeing the right to freedom of movement (ahem, Europe).

Criticism is not censorship

This one goes out to Mr. Raile.

Yesterday, I made a common Twitter joke, saying “Delete your account” in response to something stupid that someone else said. In this case, that someone else was The Next Web (TNW), a French publication that often traffics in very techno-utopian ideas. They’ve also taken to favoring national security over the fundamental rights to free expression and privacy, which I find troubling (but not surprising given France’s moves over the past decade). Therefore, when they published a piece condemning Twitter for not wanting intelligence agencies to use its API, I shorthanded my criticism to “delete your account,” a very common phrase on Twitter, for the sake of 140 characters.

But let’s say that I had meant that precisely, and that my desire was for TNW to shut down. That still wouldn’t be out of line with my principles as a free speech advocate. You see, I’m a person. I’m not the state. I’m not even Facebook or Twitter, social media entities with state-like capacities for speech regulation. I’m an individual, and if I tell someone or something to shut up, that is not censorship.

Perhaps Mr. Raile understands that, though, and simply thinks that my criticizing a publication is out of line with the spirit of my job or my self-styled identity of “free speech advocate.” He’d still be dead wrong, and frankly, it’s pretty creepy to suggest that a person can’t contain multitudes, or should act robotically toward all speech because of what they do for a living.

The truth is, I am sometimes deeply torn about free speech. Like when Donald Trump suggests putting Muslims in camps, or when Pamela Geller defends Radovan Karadzic. Which brings me to my last point…

I am a free speech advocate because…

I am a free speech advocate primarily because I do not believe that it is possible to institute fair gatekeepers. I am a free speech advocate because I am fundamentally opposed to the concept of centralized power.

In my dream world, there wouldn’t be any hate speech. In an ideal world, we would use our words to build each other up, not tear each other down. But that world is impossible, I know, because in order to eradicate hate speech, those in power would have to fine people, lock them up, censor them, “disappear” their leaders, and scare their followers into submission. These are horrible things, things that authoritarians do. And I am, at my core, against authoritarianism.

There are some on the left who suggest that censoring, or censuring, those who promote hate speech is worth the cost. The collateral damage, they claim, is minimal, and anyone operating in the gray area probably deserves what’s coming to them. I can’t agree. I’ve seen what happens when borderline speech is punished, when states are given absolute authority to decide who is or isn’t a terrorist based on speech rather than actions. I’ve seen what happens when states pick and choose which speech is worthy of defending. It isn’t pretty.

And frankly, look at the world. Hillary Clinton can make pithy jokes about her government’s involvement in the murder of a head of state, but a citizen of her country goes to prison for seventeen years just for translating the texts of the enemy.

We are not equal. We are created so, but power has divided us, and supporting that power as it mans the gates of expression will only divide us further. That is why I’m a free speech advocate.

Guns and Breasts: Cultural Imperialism and the Regulation of Speech on Corporate Platforms

This piece was originally published as Waffen und Brüste: Kultureller Imperialismus und die Regulierung von Sprache auf kommerziellen Plattformen in the Jahrbuch Netzpolitik 2014. I am republishing it today because I was looking for it as a reference and realized it wasn’t available in English.

When celebrity comedienne Chelsea Handler wanted to make a statement about Russian president Vladimir Putin, she turned to her trademark bold comedy: she mounted a horse topless to mock the authoritarian leader’s bravado and posted a photograph of the stunt to Instagram, a photo-sharing platform owned by Facebook, with the caption “Anything a man can do, a woman has the right to do better.”

Almost immediately, the image was taken down, with a notice to Handler that her post had run afoul of Instagram’s community guidelines. “Remember that our community is a diverse one, and that your posts are visible to people as young as 13 years old,” the guidelines read. “While we respect the artistic integrity of photos and videos, we have to keep our product and the content within it in line with our App Store’s rating for nudity and mature content. In other words, please do not post nudity or mature content of any kind.”

Handler responded, suggesting that the policy is sexist and threatening to quit Instagram. Like many before her, she discovered a major limit to free expression in the age of social media: Instagram, unlike most town squares, is privately owned.

But corporate platforms have, in many ways, taken on the role of the town square, or public sphere. These are places where people gather to discuss news, debate politics, and connect with other like-minded individuals. Yet, like the modern shopping mall, these are private—not public—spaces, and are governed as such. Corporate policymakers enact restrictions on these platforms that limit speech and privacy.

Though restrictions on content vary from platform to platform, the mechanisms for monitoring and removing posts or accounts are quite similar. Most platforms rely on user reporting; that is, a user sees content she finds objectionable and uses the platform’s reporting or flagging tool, sending a message to the corporation’s monitors. The offending content is then reviewed and, if it is found to violate the terms of service or community guidelines, it is removed.

In Handler’s case, the terms were clear: nudity is strictly prohibited on Instagram. But other examples abound, examples in which the content in question was socially or politically edgy or controversial, where the line could be drawn either way by the reviewer at the receiving end of reports. Worse yet, sometimes content that is clearly not in violation of regulations is removed by a platform, leaving the user with few paths of recourse and calling into question the company’s procedures.

Take, for example, another story involving Instagram, in which a plus-sized woman posted images of herself in her underwear to the platform and shortly thereafter found that her entire account had been deleted. Only after considerable media attention did Instagram apologize for the situation and reinstate the user’s account, noting that they “occasionally make a mistake” in reviewing content.

It isn’t just the misapplication of a regulation that’s the problem, however; often it is the regulation itself.

US-based social media platforms—such as YouTube, Google, Twitter, and Instagram—are protected from liability by Section 230 of the Communications Decency Act[1]. This means that any online intermediary that hosts speech, be it an ISP or Facebook, cannot be held legally responsible for user-generated content, with some criminal exceptions. In essence, this enables companies to provide a platform for controversial speech, and a legal environment favorable to free expression. At the same time, Section 230 also allows these platforms to shape the character of their services; in other words, they are under no obligation to promote free expression.

Yet the aforementioned platforms often utilize the rhetoric of free speech to promote their products. Twitter CEO Dick Costolo has referred to the company as “the free speech wing of the free speech party.” Facebook proudly touted its role in the ‘Arab Spring.’ All the while, these companies have become increasingly censorious, banning a range of content, from violence to nudity. While this is well within their legal rights, the global implications of large-scale US-based platforms taking on the role of the censor have only begun to be explored.

Exporting American values through content regulation

In the United States, there exists a clear-cut double standard when it comes to violence and sex in the media. Violence persists in mainstream television, where a wide range of violent programming—from CSI (“Crime Scene Investigation”) to The Blacklist (about a criminal teaming up with the FBI)—is regularly ranked amongst the most popular television shows. At the same time, while sexuality is on display, it has traditionally been more heavily regulated.

The Federal Communications Commission (FCC), for example, restricts broadcasting of “indecent” television programming to late hours, defined as “language or material that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards for the broadcast medium, sexual or excretory organs or activities.” Although nudity is not explicitly mentioned, the vaguely-worded rules have long been interpreted to categorize even non-sexual nudity as “indecent.”

Similarly, the film ratings system, determined by the Motion Picture Association of America (MPAA)—an opaquely-run trade organization—has been criticized for its double standards on nudity and violence. As feminist writer Soraya Chemaly has aptly described: “The fact that people can take their 14-year olds to R-rated movies that feature beheadings, severed limbs, bloodied torsos, rapes, decapitations and worse but not to a movie that shows two women enjoying consensual sex is a serious problem.”[2]

The 2006 documentary This Film is Not Yet Rated directly addressed this issue, pointing out specific films and their ratings to illustrate how violence generally garners adolescent-friendly ratings while films containing nudity and sex are restricted to adult viewers. Though the MPAA has addressed the public’s concerns about its system as it pertains to sex and violence, in its responses it defers to the desires of the anonymous masses of American parents it claims to consult.

Unfortunately, these standards are reflected in the policies and practices of the world’s most popular social networks. Although Facebook’s Community Standards begin with a declaration that the rules are set up to “balance the needs and interests of a global population,” the platform’s treatment of violence and its treatment of sex could hardly be more different.

While the Community Standards “impose limitations on the display of nudity,” the section on violence and threats addresses terrorist groups and violent criminal activity but not the display or sharing of violent imagery or video, whether real or fictional. Graphic (violent) content is addressed in a later section, which states that “people should warn their audience about the nature of the content in the video so that their audience can make an informed choice about whether to watch it.”

As such, videos of beheadings by terrorists in Syria grace users’ feeds, and pages glorifying automatic weaponry remain available, but a tastefully posed image of a nude model would likely be taken down. In practice, this often means that (despite Facebook’s claimed exceptions for “content of personal importance,” such as family photos that include children breastfeeding) paintings with nude figures, a New Yorker cartoon that included nipples, and images of women proudly showing their mastectomy scars have at times been removed from the platform.

It has been argued that popular Silicon Valley social networks are exporting American values like freedom of speech and openness. They’re also exporting American norms and mores, including a comfort with violence and a discomfort with the human body. Those in favor of this concept point to the idea that these companies are a net positive for free speech in countries where government restrictions are tight. They’ll point to cases of activists using their platforms for collective action and argue that such actions were possible only by virtue of the sites’ existence. Those against may argue that such meddling violates state sovereignty, appalled at the audacity of US companies in determining what counts as appropriate speech elsewhere.

Rarely mentioned, however, is this: The spaces in which much of the world engages in public discussion on a daily basis are subject to the whims of private companies owned and operated by mostly white, mostly upper-class, mostly American men. Diversity reports released by several of these companies in the past few months demonstrate this in clear terms: Facebook’s staff is 69 percent male and 57 percent white. Google’s is 70 percent male and 61 percent white. Twitter’s, too, is 70 percent male and 59 percent white. These demographics should not go unnoticed; the individuals at the top of these companies are tasked with creating the norms and procedures that govern the majority of our daily online conversations globally.

As such, it is not simply a question of American values being exported, but of the values of this particular demographic. In essence, it is this group of individuals that is currently defining American values for the billions of social media users who may otherwise have rarely encountered them. The unquestioning transfer of outdated media norms to the digital realm, coupled with the domination of these companies by a particular class, has thus allowed for the creation of a new definition of “online freedom.”

The promotion of special interests

Although the policies and procedures of corporate platforms are decided by corporations themselves, there is significant influence from outside actors, be they lobbyists, non-governmental organizations (NGOs), or governments. These actors hold a range of views on the role of corporations in policing speech and seek to influence policies in ways that sometimes represent little more than their own interests.

In the past year, for example, European governments have sought to proactively censor terrorist content on social networks. Free speech and civil liberties organizations regularly lobby corporations to protect expression. At the same time, Twitter has partnered with Women, Action & the Media (WAM) to “escalate validated [harassment] reports to Twitter and track Twitter’s responses to different kinds of gendered harassment.”[3]

The latter measure, in particular, has been lauded for its attempts to solve the pervasive problem of harassment of women on social networks. Criticism of the plan, on the other hand, has primarily come from conservatives, who see WAM as a feminist special-interest group seeking to censor anti-feminist speech.

While this criticism is undoubtedly overwrought, the idea of special-interest groups striking relationships with companies to regulate content is worthy of investigation. And WAM is by no means the only group at the table; the Anti-Defamation League (ADL), a self-described Jewish NGO that seeks to “[fight] anti-Semitism and all forms of bigotry” and “[defend] democratic ideals”[4], has also influenced companies’ policies, most notably by convincing Google to place an explanation of the word’s derogatory uses at the top of search results for the word “Jew.” More recently, the group struck a deal with Twitter, Facebook, Google, and Microsoft to help “enforce tougher sanctions” against those posting abusive messages.

These measures alone may not be inherently problematic, but the ADL has a history of supporting censorship and pushing special interests. The group famously spoke out against the building of a mosque in lower Manhattan because of its proximity to the former World Trade Center, and last year, a local chapter of the organization urged a museum to shut down an exhibit of children’s art from Gaza. The organization also spoke out in support of a controversial advertising campaign that painted Muslims as “savages.” With a history like that, it is hard to believe that the ADL will be an honest actor in its negotiations with social media companies.

Meanwhile, special-interest groups from other parts of the world are often met with a closed door. A recent report from Facebook showed that the company has taken down thousands of pieces of content upon request from Pakistani law enforcement despite outcry from Pakistani civil society groups. Similarly, questions from activists around the world as to the extent of corporate collaboration with the National Security Agency have gone mostly unanswered. The privilege of influence is typically extended only to US-based organizations.

While the involvement of special-interest groups in corporate policy-making may in many cases mitigate concerns about corporate demographics, the risk of untoward influence on such policies, particularly when that influence is leveraged behind closed doors, is not negligible and deserves further examination. As private regulation competes with, and at times supersedes, government restrictions on speech, these spaces are increasingly a battleground for free speech activists and advocates of stronger moderation alike.

Guns, breasts, both, neither

All too frequently, arguments in favor of closer scrutiny of corporate regulation are met with cries of “The right to free speech doesn’t apply here!” This counter-argument, made by laymen and corporate policymakers alike, serves to shut down discussion of the impact of corporate regulations on our expression.

While, indeed, the legal right to free speech does not apply to these spaces, it is impossible to ignore the effect corporate limitations on speech can have on societies. To that end, the sheer scale of these platforms must be noted: Facebook boasts 864 million daily users, 82 percent of whom are outside the United States and Canada. Twitter has 284 million monthly users worldwide, 77 percent of whom are outside the United States. Instagram has 200 million monthly active users, 65 percent of them outside the United States. And the list goes on.

The impact of these platforms is undeniable: From the Arab uprisings to the current protests in Ferguson, Missouri, social media has emerged as an important tool for political participation, protest, and civic engagement. These platforms’ role in artistic and personal expression, however, is equally important. As spaces of public interaction are increasingly privatized, expression that is already considered “fringe” will become increasingly marginalized.

As such, whenever corporate platforms censor content—be it due to public demand, or market or government pressure—it has a chilling effect on free speech. Yes, Facebook is a private company, but it is also the largest shared platform for expression that the world has ever seen, and it’s time that we consider the additional responsibilities that such a privilege confers.

[1] http://www.law.cornell.edu/uscode/text/47/230
[2] http://www.salon.com/2013/11/06/the_mpaas_backwards_logic_sex_is_dangerous_sexism_is_fine/
[3] http://www.womenactionmedia.org/2014/11/06/harassment-of-women-on-twitter-were-on-it/
[4] http://www.adl.org/about-adl/
