Jillian C. York

Jillian C. York is a writer and activist.

April 2011

Threats from Syria

A few days ago, Anas Qtiesh* wrote of spam bots intentionally targeting the #Syria hashtag with neutral or pro-regime messages. I was then asked to write a piece on the subject for the Guardian’s Comment is Free.

Today, I find that I’ve been added to a list of “information terrorists” (along with the Guardian’s Brian Whitaker, Anderson Cooper, YouTube, and TIME magazine… such good company!). The user who set up the account has claimed in tweets that foreigners are lying about violence inside Syria, and that Syria is facing an “organized attack” from mass media. I’ve received threats, both obvious and veiled, from what I can only assume to be regime apologists.

But allow me to clarify, nonetheless: I have not specifically spoken in favor of revolution in Syria, nor is that relevant. What I have done, in many cases, is retweet Syrian friends (whom I know in real life and trust), some of whom are risking their lives to get information out of the country. I have shared videos from Daraa and Homs that show civilian deaths. And I have written, in various ways, about the government’s attacks on information technology.

I love Syria. I’ve visited Syria. I want what is best for Syria and its people. And so I stand in solidarity with my Syrian friends, but I do not publicly profess a specific opinion on the Syrian government or how to change it. I wish for the principles of universal human rights to be applied in Syria, and I trust the Syrian people to fight for that as they see fit.

*full disclosure: he’s my partner

When Tech Companies Do Right

Yesterday, I mentioned in a post the importance of talking about tech companies not only when they do wrong, but also when they do right. In that post, I noted how Twitter has shied away from moderating content on their platform even in the most contentious of circumstances, showing their dedication to free expression online. There are numerous other examples: YouTube’s dedication to leaving up violent content when it constitutes news (particularly relevant in the recent uprisings across the Arab world) is a good one. And while I disagree with Flickr’s decision vis-à-vis Hossam El-Hamalawy’s uploading of photos from Amn El Dawla, I appreciate Yahoo!’s thoughtfulness in evaluating their policies after the fact.

Recently, the Electronic Frontier Foundation (aka my soon-to-be employer) released a report evaluating the transparency and privacy practices of the most popular online companies. The easy-to-read format of the report shows Google coming out on top, which doesn’t surprise me: the tech giant has made a valiant effort to make users aware of government requests for data, as well as government-initiated content takedowns. Twitter appears to be doing a great job as well. On the other hand, the absence of Skype undercuts the narrative of it being a “safe tool” (something that privacy experts have long known not to be true).

Though the report is thorough in dealing with privacy and transparency, I would love to see a similar report on how the same companies rank when it comes to censoring user content (hey, maybe that’ll be my first project!). I would imagine we’d see Twitter somewhere near the top, and Facebook just about dead last.

On Facebook’s deletion of a gay kiss (or why community policing doesn’t work)

It used to be that I had to seek out instances of overreaching Facebook censorship. Now, thanks to loads of recent high-profile examples and increased popular interest in the topic, they fall right into my lap. On BoingBoing today, Richard Metzger, a self-described “married, middle-aged man,” writes that he became a spokesman for gay rights overnight when Facebook deleted a photo he had posted of two gay men kissing.

After Metzger posted the photo, one friend among many left a homophobic comment, most likely also reporting the image as a TOS violation. Despite the fact that Metzger’s other friends jumped in to defend the image and take down the homophobe, Facebook’s automated systems (or, worse, one of their staff members) plucked the image, sending Metzger the usual automated message. Metzger writes:

According to Facebook’s FAQ on matters like this, EVERY claim of “abusive” posts is investigated by an actual live human being. If we take them at their word, it wasn’t automatically deleted.

My assumption is that “Jerry” complained and that perhaps a conservative or religious person working for Facebook (maybe it was an outsourced worker in another country, I can’t say) got that case number, looked at it for a split second, vaguely (or wholeheartedly, who can say?) agreed with “Jerry” (or it was just “easier” to “agree” with him as a matter of corporate policy), dinged it, and moved on. I don’t doubt that there was very, very little thought given to the matter. “Delete” and move on to the next item of “abusive material” on the list.

Metzger also pointedly notes:

The real problem here is certainly not that Facebook is a homophobic company. It’s that their terrible corporate policy on censorship needs to stop siding with the idiots, the complainers and the least-enlightened and evolved amongst us as a matter of business expediency!

Indeed. As others have pointed out, this takedown has resulted in a veritable Streisand effect, pushing this story beyond a local issue and into the stratosphere. Similarly, Facebook’s removal of a page calling for a third Palestinian intifada has resulted in literally dozens of copycat pages and groups, making it increasingly difficult for Facebook to enforce their terms of service without bias (which is to say: Facebook claims they only took down the page after spotting actual incitement; with few Arabic-speaking staff members, it’ll be nearly impossible to do the same for each page).

A fresh crop of Facebook pages calling for a third intifada in Palestine

Ultimately, this is not an issue of Facebook bias, but of a poorly implemented community policing system. As I said in my talk at re:publica XI, community policing may simply not be the answer. Metzger ended up with a positive outcome in this case, but no thanks to Facebook, whose processes are anything but robust. In fact, I would bet that he received the response he did only because the story had already blown up.

I’m no longer convinced the system works. I’ve long expressed the feeling that community policing is skewed against activists and the semi-famous, but examples like this illustrate that it’s worse than I thought. No, Facebook is not inherently homophobic, but this case shows that either they’re lying about having humans review reports or their review staff are poorly trained. In either case, anyone who posts a photo or video that borders on contentious is at risk of seeing their content removed.

Now, this last bit feels like an addendum when in fact it deserves an entire post, but I want to highlight an example of a company that doesn’t really moderate content, and what that means. Twitter, which has built a reputation on defending its users, has stated to users time and time again that unless content is strictly in violation of their TOS, they won’t touch it. In many cases, this has included contentious content that sits somewhere in the grey area.

My colleague and friend Ethan Zuckerman recently shared an extremely contentious example of this with me; after spotting clear calls for violence against Christians in Nigeria on Twitter, he emailed a staffer at Twitter out of concern. The staffer shared his concern, but stated that Twitter doesn’t moderate content. After talking it over, Ethan and I ultimately agreed that this was probably for the best: if Twitter were to take that account down, there’s no telling how many would pop up in its place. And when Ethan relayed the verdict, along with the results of our discussion, to a friend in Nigeria, that friend came around on the issue too.

As Supreme Court Justice Louis Brandeis advised, in his famous Whitney v. California opinion in 1927, “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”

