Jillian C. York

Jillian C. York is a writer and activist.

Remarks at the Humboldt Institute for Internet & Society’s Meeting on Internet Governance

These are, roughly, my remarks presented on a panel discussion of content governance at the Humboldt Institute, on October 10, 2014.

Who are the private actors involved in Internet governance? Internet service providers, software companies, content providers. I’m going to focus primarily on the latter, because of the impact I believe that they have on our public spheres and on governance.

As Marianne Franklin said earlier, these corporations derive their legitimacy from their property rights. These are private companies with their own sets of rights. As many are based in the United States, they are also imbued with their own speech rights. This matters, for reasons I’ll explain in a moment.

These are also truly global platforms. Centralised, free, and easy to sign up for, these sites attract a broad swath of the world’s public, who use them to engage in political and social debate, organise protests, and of course, chat with each other.

Social media companies are private, but the platforms they have created have taken on the role of the public sphere as described by Habermas: “Society engaged in critical public debate” — and are characterised by a feeling of inclusivity and freedom of expression and association. And yet the online social spaces standing in for the public sphere are private ones, owned by billionaires and shareholders. Nevertheless, we treat them as public spaces.

The way that content is regulated can be algorithmic, or it can be community-based; in the latter case, users of these sites rely on reporting mechanisms to police the speech of other members of their network, or of the site as a whole.

Scholars Kate Crawford and Tarleton Gillespie have studied the use of community reporting, or “flagging,” on social media platforms. In a recent paper, they describe the mechanism as “understood [by many sites] to be a genuinely important and necessary part of maintaining user-friendly spaces and learning from their community.” Flagging, they argue, “act[s] as a mechanism to elicit and distribute user labor—users as a volunteer corps of regulators.”

It is this mechanism that, a few weeks ago, enabled a single user to “troll” dozens of drag queens and other LGBTQ-identified individuals, reporting them for using “fake” names, a violation of Facebook’s rules. And why does that rule exist? Because Mark Zuckerberg doesn’t like the idea of people having more than one identity.

What this means for users, netizens, is that the spaces in which they are engaged in public discussion on a daily basis are not subject to the law, but rather, the whims of private companies owned and run by mostly-white, mostly-male, mostly-American billionaires. Seriously, take a look at the diversity reports released by these companies in the past few months. Facebook is 69% male, 57% white. Google is 70% male, 61% white. And so on.

I emphasize race and gender here because I think it matters. The rules that we’re subject to on these sites often appear to reflect the morals and values of these stakeholders: Nudity is banned, violence is okay. Hate speech against certain groups is unacceptable, but imagery depicting violence against women gets a pass. Pseudonyms are okay for celebrities, but not for transgender persons.

This is where multistakeholderism doesn’t matter. These companies have been hearing from civil society, and sometimes from governments, about their policies for years, but with little progress. Yes, we’ve seen some changes—such as a change to Google+’s policy on pseudonyms—but they didn’t come from the IGF; in fact, I sat on a panel at the 2012 IGF with a Googler who blatantly lied to me on the record about a specific censorship incident. Rather, they came from public pressure, from activism, from civil society.

On the other hand, it’s worth noting that while the multistakeholder process has little to no effect on the behavior of companies, corporations have a disproportionate voice in the process. They spend loads of money on lavish events held in big tents, they pay for multistakeholder “corporate social responsibility” organizations that spend more time putting out reports that don’t get read than actually fixing problems, and they tempt civil society actors and organizations with large sums of money.

Of course, we can’t talk about all of this without also talking about the role of governments. While the rules that corporations enact on our speech (and our privacy) are problematic, even more problematic are the ways in which corporations capitulate to governments. We’re all familiar with the transparency reports that show legal orders received by these companies, but what about the ways in which extralegal pressure is applied? Remember the time the State Department called YouTube and asked them to remove a video that was insulting to Muslims? YouTube didn’t fully comply, but they did remove the video in two countries, triggering numerous other countries to submit legal orders. Earlier this year, Twitter removed content at the behest of what turned out to be an invalid Pakistani legal order. Just this week, EU leaders met with major companies like Google and Twitter to pressure them to enact proactive measures to censor terrorist content. And as we speak, a US government/corporate partnership is astroturfing in the Middle East with a new set of watered-down, cultural relativist “rights and principles” that they will try to parade around the Arab IGF next month.

I should disclose that I tend toward free speech absolutism and have repeatedly argued for corporations to pull back their regulation of speech to align with the law in the jurisdictions in which they are based. But that isn’t the point. The point here is that, increasingly, the multistakeholder process is completely irrelevant to our speech rights on the Internet.

I say this in the hope of provoking a good discussion, but also because I truly believe it. I’ve watched for years as the IGF holds discussions on issues of free speech with no outcomes. I’ve been a part of the Global Network Initiative, and my organization walked out last year – because of a lack of transparency, a lack of achievements, and the fact that both the GNI and most of its civil society and academic members receive significant Google funding, arguably a strong conflict of interest. I’ve heard from civil society organizations that are afraid to speak out against Google for fear of losing one of their main sources of funding. I don’t believe Google money inherently invalidates your argument, but I also don’t think we can dismiss the silencing effect that it has.

So when we talk about content, what we are really talking about is the spaces in which we are allowed, able to speak. And when we talk about content in the context of Internet governance and multistakeholderism, we are talking about control: By governments, yes, but also, by companies. And at the moment, we are in the midst of an era where corporations hold an extraordinary amount of power over our privacy, our right to association, and our speech.

We need a solution to this problem, and I won’t profess to have it, in full. But I do have a few recommendations:

First, we need to take a realistic look at where content control is happening. Many NGOs, and particularly the Global Network Initiative, are focused entirely on the ways in which government regulates speech, whether through laws and censorship or through legal orders to companies. Far less attention is paid to the restrictions companies impose of their own accord.

We also need to take a realistic look at how companies are behaving in our spaces. As I said before, Google funding (for example) does not automatically invalidate your argument, but I don’t think enough of us are asking how much their and other companies’ presence in our civil society spaces is impacting our advocacy.

Finally, we must hold corporations accountable for their behavior in multistakeholder processes. Are they shutting down conversations? Are they lying on panels? We cannot be afraid to call that out where we see it. And we must never let our funding silence us.

Facebook responds to the “real name” problem

As you know if you’ve ever read this blog, Facebook has a serious problem with users abusing its reporting mechanisms, particularly when it comes to the “real name” policy. After a recent incident (well-documented by many) involving LGBT performers, the company has finally responded. But the response, of course, is weak. Here’s a response from Chris Cox, a Facebook staffer, with my comments interspersed below:

I want to apologize to the affected community of drag queens, drag kings, transgender, and extensive community of our friends, neighbors, and members of the LGBT community for the hardship that we’ve put you through in dealing with your Facebook accounts over the past few weeks.
In the two weeks since the real-name policy issues surfaced, we’ve had the chance to hear from many of you in these communities and understand the policy more clearly as you experience it. We’ve also come to understand how painful this has been. We owe you a better service and a better experience using Facebook, and we’re going to fix the way this policy gets handled so everyone affected here can go back to using Facebook as you were.

The way this happened took us off guard. An individual on Facebook decided to report several hundred of these accounts as fake. These reports were among the several hundred thousand fake name reports we process every single week, 99 percent of which are bad actors doing bad things: impersonation, bullying, trolling, domestic violence, scams, hate speech, and more — so we didn’t notice the pattern. The process we follow has been to ask the flagged accounts to verify they are using real names by submitting some form of ID — gym membership, library card, or piece of mail. We’ve had this policy for over 10 years, and until recently it’s done a good job of creating a safe community without inadvertently harming groups like what happened here.

False. As I’ve been documenting for nearly five years, this is a serious problem that affects users around the world.

Our policy has never been to require everyone on Facebook to use their legal name. The spirit of our policy is that everyone on Facebook uses the authentic name they use in real life. For Sister Roma, that’s Sister Roma. For Lil Miss Hot Mess, that’s Lil Miss Hot Mess. Part of what’s been so difficult about this conversation is that we support both of these individuals, and so many others affected by this, completely and utterly in how they use Facebook.

Also false. Facebook’s actual policy states: “What names are allowed on Facebook? … The name you use should be your real name as it would be listed on your credit card, driver’s license or student ID”. Users who are reported are forced to submit (insecurely, no less) this information.

We believe this is the right policy for Facebook for two reasons. First, it’s part of what made Facebook special in the first place, by differentiating the service from the rest of the internet where pseudonymity, anonymity, or often random names were the social norm. Second, it’s the primary mechanism we have to protect millions of people every day, all around the world, from real harm. The stories of mass impersonation, trolling, domestic abuse, and higher rates of bullying and intolerance are oftentimes the result of people hiding behind fake names, and it’s both terrifying and sad. Our ability to successfully protect against them with this policy has borne out the reality that this policy, on balance, and when applied carefully, is a very powerful force for good.

What can I say? Facebook believes this to be true, but a large amount of the abuse on the site occurs with people using their real names. Or names that look like real names. Whatever, they’re only reported when they look “fake,” which is the failure of their system.

All that said, we see through this event that there’s lots of room for improvement in the reporting and enforcement mechanisms, tools for understanding who’s real and who’s not, and the customer service for anyone who’s affected. These have not worked flawlessly and we need to fix that. With this input, we’re already underway building better tools for authenticating the Sister Romas of the world while not opening up Facebook to bad actors. And we’re taking measures to provide much more deliberate customer service to those accounts that get flagged so that we can manage these in a less abrupt and more thoughtful way. To everyone affected by this, thank you for working through this with us and helping us to improve the safety and authenticity of the Facebook experience for everyone.

Well good. Finally. I guess it took Americans being affected for them to care.

On the many, many faces of courage: A follow-up

Wow. When we created this yesterday and tossed it up on my site, I wasn’t expecting the huge response it received (something like 3,000 hits and ~200 retweets in less than 24 hours). Most of the reactions were extremely positive. Some were disgusting and misogynist, but not unexpected. Some were critical, but fair: I especially take to heart points about the erasure of non-whiteness or non-maleness in the initial image, and Renata’s astute comment that all of the faces highlighted already have much more exposure than those whose activism takes place offline or out of the public eye.

While I absolutely take those critiques to heart (and have spoken to folks individually), I do think that, in a sense, they miss the forest for the trees. As I said in the initial post, the point of this exercise was not to erase or diminish the sacrifices of people like Aaron Swartz or Chelsea Manning or Jake Appelbaum or Julian Assange, but to point out two things: First, that whoever created the graphic may have failed to recognize the whiteness of the images, perpetuating privilege (even if, as I’ve acknowledged, the individuals in the image are indeed disempowered). And second, that courage has many faces.

There was one comment from WikiLeaks (whose response I do appreciate) that particularly struck me, and while I disagree with it, I can accept it as a prompt to explain my purpose more effectively. By messaging that “courage is contagious” with a group of white-seeming, male-seeming faces, we are not effectively getting “the audience to act using whatever tricks one can.” Instead, we are speaking only to one segment of the audience, while others may not feel inspired when faces like their own are not reflected back at them.

I stand by what I did, and am particularly inspired to see the ways in which others have remixed the idea to demonstrate that courage indeed does have many faces, many of which are unknown or less empowered than even those on the image I presented. I enjoyed @zararah’s take, which included some of her heroes. Nick Farr’s version was lovely in that it included me, but much more importantly, it reminds us that “hero” takes on a different meaning for each person. Renata highlighted activists that I’d never even heard of. Sarah took the opportunity to remind people to support research for Huntington’s Disease.

The wonderful thing is, misogynist assholes aside, even those with serious reservations were able to get behind the concept. While they may not have agreed with my approach, no serious person contested the idea that more faces need to be known, that some are routinely excluded, and that there are many people both spreading and catching the courage bug. For that I’m grateful, and inspired.


Jillian C. York is licensed under a Creative Commons Attribution 4.0 International License.
