Jillian C. York

Jillian C. York is a writer and activist.


An Interview with Bassem Youssef

Graffiti portrait of Dr. Bassem Youssef, April 2012.

I just got around to watching this in the course of a book chapter I’m writing on participatory media in the Middle East, but it’s absolutely still worth a watch. Choice quote:

“We did all of this for the sake of a different media, a media that respected people’s minds and intelligence and at the same time keeping [sic] them informed and helping them to combat lies and misinformation that plagued many of the media outlets. As we did that, we never claimed to be freedom fighters or political activists. We believed that it was enough to be the normal everyday people who did not buy into the everyday life of propaganda that is full of lies and deceit.”

Why Ronan Farrow is completely and utterly wrong about corporate regulation of speech

Normally, I’d have pitched an op-ed rebuttal or something, but this is just too stupid not to tear apart quick-and-dirty blog style. The TL;DR for context is that Ronan Farrow, privileged celeb commentator, wrote a ridiculous Washington Post piece about how social media companies should censor terrorists. Never mind that this debate is two years old, or that the law enforcement and intelligence communities have already spoken in favor of not removing such content, or that there are few instances in which companies are legally required to do so…never mind all of that, because Ronan Farrow is upset! And surely when Ronan Farrow gets upset, companies listen, which…isn’t good for any of us. So, here goes:

Farrow starts by bringing up the Rwandan genocide (which is not a very good comparison considering the extreme differences in media types) then proceeds to highlight how terrorists these days are using social media. So far, okay. Then, he says:

But officials at social media companies are leery of adjudicating what should be taken down and what should be left alone. “One person’s terrorist is another person’s freedom fighter,” one senior executive tells me on condition of anonymity. Making that call is “not something we’d want to do.”

First off, bullshit – Twitter and Facebook already remove (certain) terrorist content, and every platform adjudicates to some degree—be it on the basis of TOS or government request—what content remains up. For companies, the real question with terrorist content is what to censor. For example, it seemed pretty simple for Twitter to take down Al-Shabaab’s account following the Westgate Mall massacre, because there was consistent glorification of violence (whether that’s a crime or not is another question). But they’ve clearly had a harder time determining whether to take down some of ISIS’ accounts, because many of them simply don’t incite violence. Like them or not (and I’m going with “not”), their function seems to be reporting on their land grabs, which does have a certain utility for reporters and other actors.

There are legitimate free-speech questions here: What about reporting on propaganda? What about peaceful lectures by otherwise violent terrorists? But those grey areas don’t excuse a lack of enforcement against direct calls for murder, which these companies supposedly ban. “I understand there are freedom of speech concerns, but I don’t think that describes what’s going on with much of the content on YouTube,” says Evan Kohlmann, a counter-terrorism analyst with Flashpoint Partners and NBC News. “No one’s suggesting they remove all journalistic clips… This is about extremely explicit content, calling for violence.”

I agree with Farrow in theory here: Companies’ TOS, like them or not, typically ban calls for violence, and as my friend Dalia Othman points out, those rules are applied selectively. While I’d personally prefer that companies only regulate speech when required to do so by law (a concept Farrow seems not to grasp), many such incitements to violence may very well violate local laws, if not US ones.

Also though: I think Farrow is being disingenuous here. He doesn’t just want incitement censored, he wants terrorist accounts removed entirely, which (as I’ve said many times before) is incredibly problematic for so many reasons. But more on that in a moment.

Another objection is practical. There’s simply too much content to monitor, and too many openings for it to come back when quashed. An executive at one major social media company described it as the “whack-a-mole” phenomenon—take down one video, it springs up elsewhere. But flawed enforcement shouldn’t excuse inaction any more than it did in Rwanda 20 years ago, when the U.S. government deemed jamming solutions too legally complex, too expensive, too impractical. The perfect, then as now, was the enemy of the good.

Clearly, Farrow has never heard of the Streisand Effect, so on the off chance that he’s reading this, I recommend he do some reading.

It’s a serious issue: Take down a terrorist account and 20 more pop up in its place. As Farrow points out, the ISIS folks are sophisticated – censor them enough and soon they’ll start mirroring their sites and otherwise finding workarounds that will help them grow stronger. Weigh that against ignoring them.

More troubling still is the fact that these companies already know how to police and remove content that violates other laws. Every major social media network employs algorithms that automatically detect and prevent the posting of child pornography. Many, including YouTube, use a similar technique to prevent copyrighted material from hitting the web. Why not, in those overt cases of beheading videos and calls for blood, employ a similar system?

Oh, yes, Farrow, tell me how effective the policing of child pornography is. Did you know, for example, that those algorithms have resulted in a situation where almost no cloud storage service will allow legitimate adult nude images, because the algorithms aren’t sophisticated enough to detect the difference? Have you thought about how that affects artists? Have you seen how people are losing their accounts—and sometimes, all of their content, and their money—because of one nude image stored on a service erroneously detected as child pornography?

Furthermore: Do you really think the DMCA is a good thing? If so, there’s a whole other conversation we need to have, my friend.
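For what it’s worth, the child-porn and copyright “algorithms” Farrow is invoking are largely matching systems: they compare each upload against fingerprints of content that humans have already identified, which is exactly why they don’t transfer neatly to brand-new videos “calling for blood.” Here’s a minimal sketch of the idea in Python; the hash database and filenames are hypothetical, and an exact SHA-256 digest stands in for the perceptual fingerprinting (PhotoDNA, Content ID) that real systems use:

```python
import hashlib
from pathlib import Path

# Hypothetical database of fingerprints of previously identified images.
# Real systems (PhotoDNA for child abuse imagery, Content ID for copyright)
# use perceptual fingerprints that survive re-encoding and cropping; an
# exact SHA-256 digest is only a stand-in for illustration.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def is_known_bad(path: Path) -> bool:
    """Flag an upload only if it matches something already catalogued."""
    return file_digest(path) in KNOWN_BAD_HASHES


if __name__ == "__main__":
    # Hypothetical upload: write a small file so the example runs end to end.
    sample = Path("upload.bin")
    sample.write_bytes(b"not previously catalogued content")
    # A brand-new video inciting violence has no fingerprint on file, so this
    # kind of matching never sees it; catching novel content means classifiers
    # or human review, which is where the false positives described above come in.
    print(is_known_bad(sample))  # False: novel content never matches
```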

They don’t. Indeed, Twitter, YouTube and Facebook all say they strictly refuse to police content themselves—instead relying on third parties, mostly users around the world, to flag objectionable content. But the constant torrent of new content is not a burden that can be practically managed by the crowd—any more than companies expect users to serve as the prime monitor for child pornography.

They do, and this is a huge problem that’s been written about by numerous experts, myself included. You say you don’t want to rely on the crowd; cool, we agree! But wait, you think the companies themselves should monitor speech proactively? How exactly would that work, Ronan? What would the criteria be? Do you propose creating a blood algorithm, or do you want to take every terrorist account down? What about cases where the terrorist organization is also a legitimate political party (such as Hezbollah in Lebanon)? Do you think that because US companies are the hosts of speech, they should be able to use that power to effect political change in other countries? I mean, I shouldn’t be surprised – this sounds like the kind of bullshit Jared Cohen would say.

As always, beneath legitimate practical and ethical concerns, there is a question about the bottom line. Section 230 of the Telecom Act of 1996 inoculates these companies from responsibility for content that users post—as long as they don’t know about it. Individuals involved in content removal policies at the major social media companies, speaking to me on condition of anonymity, say that’s a driving factor in their thinking. “We can’t police any content ourselves,” one explains. Adds another: “The second we get into reviewing any content ourselves, record labels say, ‘You should be reviewing all videos for copyright violations, too.’”

The Communications Decency Act, Ronan. Say it with me. I mean, you don’t even know the relevant laws, but you consider yourself an expert on this subject? Or did the Washington Post publish your piece because you’re a good-looking man with a famous mother?

Stick to what you’re good at, Ronan…it’s not this.

These companies have a moral obligation to do more. And U.S. law should not create a legal barrier for them to act when lives are on the line. The current regime—enforced ignorance and half-measures—may be among our apologies when we recite Iraq’s “never again”s.

There’s no legal barrier, Ronan. Companies remove content all the fucking time. The fact that they’re not removing the content you want them to remove, whether because they’re weighing the free speech implications or for other reasons you maybe just don’t understand, does not equal “a legal barrier.”

For chrissake, Ronan, just stop.

On the Knight News Challenge

Jillian York at Civic Media

This is me giving a short talk about the project at the Knight/MIT Civic Media event in Cambridge, Massachusetts. Screenshot from Ramzi Jaber.

I’m really excited to say that a project I’ve been working on for two years, OnlineCensorship.org (currently in alpha), is one of this year’s Knight News Challenge winners, along with an incredible set of projects. The project came about in late 2011, on my first trip to Palestine. I remember it clearly; I was chatting over a lunch of chicken and rice with Ramzi Jaber—whose Visualizing Palestine is making waves—about Facebook censoring Palestinian content, and he mentioned that he owned the domain OnlineCensorship.org. I don’t remember the rest of that day’s conversation, but emails between us starting in January 2012 show the project making rapid progress – we built it, found some great advisers to help and worked on iterating the questions. The only thing we didn’t have was money.

For a while, we worked closely with Rebecca MacKinnon to try to get funded, but as her project, Ranking Digital Rights (also a News Challenge winner!), got off the ground and efforts at funding OC.org failed, the project went dormant for some time.

Willow Brugh illustrates (literally) why this project is important.

Now, I’m thrilled to say that Ramzi and I are ready to move it forward. The project will now be housed within EFF and will benefit from EFF’s vast expertise, with Visualizing Impact working alongside us and bringing their particular skills to the design end of things. We’re actively seeking additional advisers (email me!) and will be sharing more about our timeline starting in September.

Thank you so much to the Knight Foundation, EFF, Visualizing Impact, and many friends for your incredible support!


