Jillian C. York

Jillian C. York is a writer and activist.


Guns and Breasts: Cultural Imperialism and the Regulation of Speech on Corporate Platforms

This piece was originally published as Waffen und Brüste: Kultureller Imperialismus und die Regulierung von Sprache auf kommerziellen Plattformen in the Jahrbuch Netzpolitik 2014. I am republishing it today because I was looking for it as a reference and realized it wasn’t available in English.

When celebrity comedienne Chelsea Handler wanted to make a statement about Russian president Vladimir Putin, she chose her trademark—bold comedy—mounting a horse topless to mock the authoritarian leader’s bravado and posting a photograph of the stunt to Instagram—a photo-sharing platform owned by Facebook—with the caption “Anything a man can do, a woman has the right to do better.”

Almost immediately, the image was taken down, with a notice to Handler that her post had run afoul of Instagram’s community guidelines. “Remember that our community is a diverse one, and that your posts are visible to people as young as 13 years old,” the guidelines read. “While we respect the artistic integrity of photos and videos, we have to keep our product and the content within it in line with our App Store’s rating for nudity and mature content. In other words, please do not post nudity or mature content of any kind.”

Handler responded, suggesting that the policy is sexist and threatening to quit Instagram. Like many before her, she discovered a major limit to free expression in the age of social media: Instagram, unlike most town squares, is privately owned.

But corporate platforms have, in many ways, taken on the role of the town square, or public sphere. These are places where people gather to discuss news, debate politics, and connect with other like-minded individuals. Yet, like the modern shopping mall, these are private—not public—spaces, and are governed as such. Corporate policymakers enact restrictions on these platforms that limit speech and privacy.

Though restrictions on content vary from platform to platform, the mechanisms for monitoring and removing posts or accounts are quite similar. Most platforms rely on user reporting; that is, a user sees content she finds objectionable and utilizes the platform’s reporting or flagging tool, sending a message to the corporation’s monitors. The offending content is then reviewed and, if it is found to violate the terms of service or community guidelines, it is removed.

In Handler’s case, the terms were clear: nudity is strictly prohibited on Instagram. But other examples abound, examples in which the content in question was socially or politically edgy or controversial, where the line could be drawn either way by the reviewer at the receiving end of reports. Worse yet, sometimes content that is clearly not in violation of regulations is removed by a platform, leaving the user with few paths of recourse and calling into question the company’s procedures.

Take, for example, another story involving Instagram, in which a plus-sized woman posted images of herself in her underwear to the platform and shortly thereafter found that her entire account had been deleted. Only after considerable media attention did Instagram apologize for the situation and reinstate the user’s account, noting that they “occasionally make a mistake” in reviewing content.

It isn’t just the misapplication of a regulation that’s the problem, however; often it is the regulation itself.

US-based social media platforms—such as YouTube, Google, Twitter, and Instagram—are protected from liability by Section 230 of the Communications Decency Act[1]. This means that any online intermediary that hosts speech, be it an ISP or Facebook, cannot be held legally responsible for user-generated content, with some criminal exceptions. In essence, this enables companies to provide a platform for controversial speech, and a legal environment favorable to free expression. At the same time, Section 230 also allows these platforms to shape the character of their services; in other words, they are under no obligation to promote free expression.

Yet the aforementioned platforms often utilize the rhetoric of free speech to promote their products. Twitter CEO Dick Costolo has referred to the company as “the free speech wing of the free speech party.” Facebook proudly touted its role in the ‘Arab Spring.’ All the while, these companies have become increasingly censorious, banning a range of content, from violence to nudity. While this is well within their legal rights, the global implications of large-scale US-based platforms taking on the role of the censor have only begun to be explored.

Exporting American values through content regulation

In the United States, there exists a clear-cut double standard when it comes to violence and sex in the media. Violence persists in mainstream television, where a wide range of violent programming—from CSI (“Crime Scene Investigation”) to The Blacklist (about a criminal teaming up with the FBI)—is regularly ranked amongst the most popular television shows. At the same time, while sexuality is on display, it has traditionally been more heavily regulated.

The Federal Communications Commission (FCC), for example, restricts broadcasting of “indecent” television programming to late hours, defined as “language or material that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards for the broadcast medium, sexual or excretory organs or activities.” Although nudity is not explicitly mentioned, the vaguely-worded rules have long been interpreted to categorize even non-sexual nudity as “indecent.”

Similarly, the film ratings system, determined by the Motion Picture Association of America (MPAA)—an opaquely-run trade organization—has been criticized for its double standards on nudity and violence. As feminist writer Soraya Chemaly has aptly described: “The fact that people can take their 14-year olds to R-rated movies that feature beheadings, severed limbs, bloodied torsos, rapes, decapitations and worse but not to a movie that shows two women enjoying consensual sex is a serious problem.”[2]

The 2006 documentary This Film is Not Yet Rated directly addressed this issue, pointing out specific films and their ratings to illustrate how violence generally garners adolescent-friendly ratings while films containing nudity and sex are restricted to adult viewers. Though the MPAA has addressed the public’s concerns about their system as it pertains to sex and violence, in their responses, they defer to the desires of the anonymous masses of American parents with whom they claim to consult.

Unfortunately, these standards are reflected in the policies and practices of the world’s most popular social networks. Although Facebook’s Community Standards begin with a declaration that the rules are set up to “balance the needs and interests of a global population”, the treatment of violence and sex on the platform couldn’t be more different.

While the Community Standards “impose limitations on the display of nudity,” the section on violence and threats addresses terrorist groups and violent criminal activity but not the display or sharing of violent imagery or video, whether real or fictional. Graphic (violent) content is addressed in a later section, which states that “people should warn their audience about the nature of the content in the video so that their audience can make an informed choice about whether to watch it.”

As such, videos of beheadings by terrorists in Syria grace users’ feeds and pages glorifying automatic weaponry remain available, but a tastefully posed image of a nude model would likely be struck down. In practice, this often means that (despite claims from Facebook of exceptions for “content of personal importance” such as family photos that include children breastfeeding) paintings with nude figures, a New Yorker cartoon that included nipples, and images of women proudly showing their mastectomy scars have at times been removed from the platform.

It has been argued that popular Silicon Valley social networks are exporting American values like freedom of speech and openness. They’re also exporting American norms and mores, including a comfort with violence and a discomfort with the human body. Those in favor of this export argue that these companies are a net positive for free speech in countries where government restrictions are tight. They point to cases of activists using these platforms for collective action and argue that such actions were possible only by virtue of the sites’ existence. Those against may argue that such meddling is a violation of state sovereignty, appalled at the audacity of US companies to determine what is appropriate speech elsewhere.

Rarely mentioned, however, is this: The spaces in which much of the world is engaged in public discussion on a daily basis are subject to the whims of private companies owned and operated by mostly white, mostly upper-class, mostly American men. Diversity reports released by several of these companies in the past few months demonstrate this in clear terms: Facebook’s staff is 69-percent male, 57-percent white. Google’s is 70-percent male, 61-percent white. Twitter, too, is 70-percent male, 59-percent white. These demographics should not go unnoticed; the individuals at the top of these companies are tasked with the creation of the norms and procedures that govern the majority of our daily online conversations globally.

As such, it is not simply a question of American values being exported, but the values of this particular demographic. In essence, it is this group of individuals that is currently defining American values for the billions of social media users who may otherwise have rarely encountered them. The unquestioning transfer of outdated media norms to the digital realm, coupled with the domination of these companies by a particular class, has thus allowed for the creation of a new definition of “online freedom.”

The promotion of special interests

Although the policies and procedures of corporate platforms are decided by corporations themselves, there is significant influence from outside actors, be they lobbyists, non-governmental organizations (NGOs), or governments. These actors hold a range of views on the role of corporations in policing speech and seek to influence policies in ways that sometimes represent little more than their own interests.

In the past year, for example, European governments have sought to proactively censor terrorist content on social networks. Free speech and civil liberties organizations regularly lobby corporations to protect expression. At the same time, Twitter has partnered with Women, Action & the Media (WAM) to “escalate validated [harassment] reports to Twitter and track Twitter’s responses to different kinds of gendered harassment.”[3]

The latter measure, in particular, has been lauded for its attempts to solve the pervasive problem of harassment of women on social networks. Criticism of the plan, on the other hand, has primarily come from conservatives, who see WAM as a feminist special-interest group seeking to censor anti-feminist speech.

While this criticism is undoubtedly overwrought, the idea of special-interest groups striking relationships with companies to regulate content is worthy of investigation. And WAM is by no means the only group at the table; the Anti-Defamation League (ADL), a self-described Jewish NGO that seeks to “[fight] anti-Semitism and all forms of bigotry” and “[defend] democratic ideals”[4] has also influenced companies’ policies, most notably convincing Google to put up a top result for any Google search of the word “Jew” to explain the derogatory uses of the word. More recently, the group struck a deal with Twitter, Facebook, Google, and Microsoft to help “enforce tougher sanctions” against those posting abusive messages.

These measures alone may not be inherently problematic, but the ADL has a history of supporting censorship and pushing special interests. The group famously spoke out against the building of a mosque in lower Manhattan because of its proximity to the former World Trade Center, and last year, a local chapter of the organization urged a museum to shut down an exhibit of children’s art from Gaza. The organization also spoke out in support of a controversial advertising campaign that impugned Muslims as “savages.” With a history like that, it is hard to believe that the ADL will be an honest actor in its negotiations with social media companies.

Meanwhile, special-interest groups from other parts of the world are often met with a closed door. A recent report from Facebook showed that the company has taken down thousands of pieces of content upon request from Pakistani law enforcement despite outcry from Pakistani civil society groups. Similarly, questions from activists around the world as to the extent of corporate collaboration with the National Security Agency have gone mostly unanswered. The privilege of influence is typically extended only to US-based organizations.

While the involvement of special-interest groups in corporate policy-making may in many cases mitigate concerns about corporate demographics, the risk of untoward influence on such policies—particularly when influence is leveraged behind closed doors—is not negligible and deserves further examination. As private regulation competes with—and at times supersedes—government restrictions on speech, these spaces are increasingly a battleground for free speech activists and advocates for stronger moderation alike.

Guns, breasts, both, neither

All too frequently, arguments in favor of closer scrutiny toward corporate regulation are met with cries of “The right to free speech doesn’t apply here!” This counter-argument, made by laymen and corporate policymakers alike, serves to shut down discussion of the impact of corporate regulations on our expression.

While, indeed, the legal right to free speech does not apply to these spaces, it is impossible to ignore the effect corporate limitations on speech can have on societies. To that end, the sheer scale of these platforms must be noted: Facebook boasts 864 million daily users, 82 percent of whom are outside of the United States and Canada. 284 million people worldwide—77 percent of whom are outside the United States—use Twitter on a monthly basis. Instagram has 200 million monthly active users, 65 percent of whom are outside of the United States. And the list goes on.

The impact of these platforms is undeniable: From the Arab uprisings to the current protests in Ferguson, Missouri, social media has emerged as an important tool for political participation, protest, and civic engagement. Their role in artistic and personal expression, however, is equally important. As spaces of public interaction are increasingly privatized, expression that is already considered “fringe” will become increasingly marginalized.

As such, whenever corporate platforms censor content—be it due to public demand, or market or government pressure—it has a chilling effect on free speech. Yes, Facebook is a private company, but it is also the largest shared platform for expression that the world has ever seen, and it’s time that we consider the additional responsibilities that such a privilege confers.

[1] http://www.law.cornell.edu/uscode/text/47/230
[2] http://www.salon.com/2013/11/06/the_mpaas_backwards_logic_sex_is_dangerous_sexism_is_fine/
[3] http://www.womenactionmedia.org/2014/11/06/harassment-of-women-on-twitter-were-on-it/
[4] http://www.adl.org/about-adl/

On Buzzfeed and misogyny

It seems like only yesterday that, at least in my friend circles, mocking Buzzfeed was a spectator sport. Then came writers (several of whom I count as friends) like Hayes Brown and Tom Gara, Sara Yasin and Jina Moore and Sheera Frankel, and I could no longer ignore their news section. And many of their pieces were really, really good, at times better than the New York Times’ coverage of the same issue.

As Fast Company makes clear, Buzzfeed is here to stay. The publication is read by 79 million people every month, a number that seems to be constantly growing. So here’s my question: Why does Buzzfeed need to trade in misogyny?

Amidst the investigative pieces and breaking news, the listicles of cats and the reporting on celebrities are countless pieces aimed against women. I don’t mean Buzzfeed “community” pieces, those unedited pieces of drivel written by unpaid writers (let’s question that another day), but rather pieces like this one, written by fully-paid Buzzfeed staffers. And no, I don’t care that the authors are women.

I’m not talking about the articles on picking a wedding dress, or how to do one’s eye makeup, or whatever. I might find articles like that to be unbefitting of a serious news publication, but hey, Buzzfeed, you do you. No, I’m talking about pieces like this and this. For a publication that otherwise does an excellent job of covering real women’s issues, that has its own international women’s rights correspondent (Jina Moore), and that has some of the best user data of any news publication, this is simply unacceptable. For a news publication that is getting so much else right, it’s amazing that they could get this so wrong.

I am a migrant, but.

I am a migrant amongst migrants but

I don’t have to beg

borrow

or prove myself

I don’t have to

stand in the cold

waiting with hope

for someone to let me pass

 

I am a migrant but

I walked right in

blue passport in hand

while they welcomed me with nothing more than

a look and a reminder to pay my taxes

 

I am a migrant but

no one questions whether

I should be here or

can assimilate or

find a job or

become one of them

 

I am a migrant but

when I walk down the street

surrounded by my countrymen

heads covered from the cold

no one spits or says

we don’t belong here

 

I am a migrant but

when the borders tighten

I am not who politicians say

we are protecting ourselves from

I will not inspire policies

meant to keep the other people who look vaguely like me safe

and the ones who don’t, out

 

I am a migrant but

by total chance of birth

I will never have to argue

why I deserve to live.

 


© 2016 Jillian C. York
