Are computer viruses a form of free speech?

I just noticed this now — over at Boing Boing, Cory Doctorow wrote an incredibly nice comment on my piece about virus writers! He goes on to offer an interesting critique of the piece based on a particular issue it raises: Whether computer code is protected as free speech. As Cory wrote:

Clive touches on, and dismisses the free-speech arguments for publishing malware code (interestingly, he does so without any quotes from legal scholars and impact litigators who work on First Amendment issues, and so ends up eliding the nuance in the argument and presenting a somewhat blunted picture of the issue) and only lightly touches on the far more important notion of legitimate security research.

If, as Schneier says, “Any person can create a security system so clever s/he can’t think of a way to defeat it,” then the only experimental methodology for evaluating the relative security of a system is publishing its details and inviting proof of its flaws — proof readily embodied in malware.

Codebreakers and worm-writers are the only mechanism we know about for reliably strengthening systems, and the idea that they should refrain from publishing their research in order to keep us safe is fundamentally flawed, since it depends on the idea that malicious people will never be clever enough to independently reproduce their techniques, and that the public is better served by remaining ignorant of the potential risks in the systems they’ve bought than by being exposed to the evidence of the rampant flaws in those systems.

This notion falls flat when considered in light of the real world. If a developer was building condos whose doors could all be unlocked with an unbent paper-clip, this line of reasoning demands that the person(s) who discover this should keep mum about it, in the hopes that no bad guy ever catches on. In the real world, the best answer is usually to scream about this to high heaven, so that the bad developer can’t silence you and cover his ass, and so that his customers can get their locks fixed.

This is an excellent debate. But I don’t think my article dismisses the free-speech argument. Indeed, quite the opposite — it explicitly states that code is a form of speech. As I wrote …

… in most countries, writing viruses is not illegal. Indeed, in the United States some legal scholars argue that it is protected as free speech. Software is a type of language, and writing a program is akin to writing a recipe for beef stew. It is merely a bunch of instructions for the computer to follow, in the same way that a recipe is a set of instructions for a cook to follow.

Cory’s quite right that I didn’t actually quote any legal scholars or free-speech experts in the piece, though I certainly have interviewed plenty of them on this issue (including, back when I was writing about the DeCSS trial, the insanely brilliant Dave Touretzky).

I think Cory may be confusing my discussion of the ethics of virus writing with the right to compose malware. My article points out that the ethics of virus-writing are considered, by many, to be somewhat suspect. That doesn’t dismiss the point about the free-speech argument for writing computer code. Quite clearly, code is a form of speech — and for the record I would be very, very glad to see it thusly protected! (Not least because, while being myself only a very crappy and very occasional programmer of the lamest computer languages in existence, I find programming a superbly intellectual endeavour.)

But just because one has a right to do something does not mean it’s ethical to do it. (And vice versa — there are, sadly, plenty of people in jail for things that were against the law but were not even vaguely unethical, including much civil disobedience.) Rights and ethics are very different things. What my article was in part trying to explore were the complex ethics of virus writing.

As Cory pointed out, one can also make a good ethical case in favor of virus writing. I, too, pointed out the argument:

Indeed, a number of them say they are making the world a better place, because they openly expose the weaknesses of computer systems. When Philet0ast3r or Mario or Mathieson finishes a new virus, they say, they will immediately e-mail a copy of it to antivirus companies. That way, they explained, the companies can program their software to recognize and delete the virus should some script kiddie ever release it into the wild. This is further proof that they mean no harm with their hobby, as Mathieson pointed out. On the contrary, he said, their virus-writing strengthens the “immune system” of the Internet.

Having said that, it is true that I presented the devil’s-advocate side: That widely publishing your virus code can fall into an ethical grey area. That’s because viruses behave in ways rather different from other types of code. Cory quotes Schneier to point out a well-known security fact: That one good way to improve the security of a computer system is for smart outsiders to examine it carefully and discover its weaknesses. That’s why open-source code like Linux is so stable; tens of thousands of smart geeks have looked at the code and immediately publicized any weaknesses they find. That’s also why smart government agencies and companies hire tiger teams to try and break into their own systems. It’s a great way to probe your own vulnerabilities.

Viruses and worms are a bit different, though, I think. Attempting a series of hack-attacks on a particular system to try and uncover its weaknesses is a stable, controlled experiment. You can run the experiment in such a way that you don’t actually hurt anyone. But when viruses and worms are released, they’re inherently uncontrolled. They interact with thousands of unknown computer systems with unknown configurations; there’s simply no way to fully predict what’s going to happen. They might be harmless. Or they might screw up the 911 system in a city, as Slammer did. There are plenty of ways to test a system with a single hack attack. You can prove that a backdoor exists by exploiting it on a single machine; no harm done. But there is no way to definitively prove a virus would spread the way you think it would without releasing it — and that’s an inherently chaotic endeavor.

Yet I still don’t think one should be prohibited from writing worm and virus code — because as Cory says, the code can be a good way to show, on paper, that a vulnerability exists that ought to be patched. The grey area is: How to publish the findings? If you put the full code to your worm up online, you can be pretty certain someone will release it. And though publishing may indeed be your legal right, in this context, it’s also an ethical decision. Are there other ways to inform the world about a worm-related vulnerability you’ve discovered? Publish only the most relevant parts of the code? That might not be a bad idea, since script kiddies won’t be able to write the rest of the code necessary to get the worm up and running; a smart security official, however, could look at those few lines of worm code and immediately grok the vulnerability they need to patch.

The counterargument would be that “software companies drag their heels — nobody would ever really fix a vulnerability until they’ve had it literally slammed in their faces. You have to show the full worm code — hell, it probably has to be released in the wild — before they’ll really take it seriously.” Fair enough. The vulnerability that the Slammer worm exploited had been known for months; people only got serious about patching it once they’d seen how much damage a worm could do. And, once again, one ought indeed to have the free-speech right to publish a worm. But I don’t think that exculpates you from the ethical aspects of the debate. Rights are clean and simple; ethics are super-messy.

Since Cory opened his posting about my article with an incredibly nice compliment about my writing, I’m going to close with one about his: Readers! If you are still with me here, DO NOT WALK but RUN to the nearest bookstore and get a copy of Cory’s latest novel, Eastern Standard Tribe. He is easily one of the most talented science fiction writers I’ve ever encountered, and my shelves are groaning with the stuff. (His last novel Down and Out in the Magic Kingdom fried my noodle and inspired a zillion interesting conversations about the future of the reputation economy.) Go. Now!

