Brant Gurganus

Professor House

Rhetoric of Science

28 September 2006

Graham’s Plan for Spam

Paul Graham once wrote an essay entitled “A Plan for Spam.” Graham had been working on a Web-based e-mail reader. At first, the reader had nothing to do with spam, also known as junk e-mail; its purpose was to exercise a new programming language that Graham was developing. In the course of developing spam filtering features for the reader, however, Graham developed a new approach to the spam problem. Graham’s plan centered on one critical notion: “The Achilles heel of the spammers is their message” (Graham). This paper will not discuss his approach; for that, his essay is publicly available. Instead, this paper is about how Graham made clear that prior approaches were wrong and that the approach he developed was on a better track for solving the spam problem. He did so through his familiarity with the hacker attitude.

I don’t know why I avoided trying the statistical approach for so long. I think it was because I got addicted to trying to identify spam features myself, as if I were playing some kind of competitive game with the spammers. (Nonhackers don’t often realize this, but most hackers are very competitive.) When I did try statistical analysis, I found immediately that it was much cleverer than I had been. It discovered, of course, that terms like “virtumundo” and “teens” were good indicators of spam. But it also discovered that “per” and “FL” and “ff0000” are good indicators of spam. In fact, “ff0000” (html for bright red) turns out to be as good an indicator of spam as any pornographic term. (Graham)

The second section of Graham’s essay, dealing with other approaches to spam detection, ends with the previous paragraph. Graham realizes that it is other hackers, “[people] who [enjoy] exploring the details of programming systems and how to stretch their capabilities” (Raymond, “Hacker”), who will be implementing the spam detection algorithms. He appeals to them by saying he is “playing some kind of competitive game with the spammers” (Graham) and by pointing out that “most hackers are very competitive” (Graham). Such phrases show that Graham knows what being a hacker is like. By indicating that he “avoided trying the statistical approach for so long” (Graham), those sentences also show that Graham did not skip the techniques that other hackers had tried. He had tried rule-based approaches and other prominent techniques, but they simply were not adequate. In the last sentences of the paragraph, Graham points out that statistical filtering catches obvious indicators such as “teens.” He then shows statistical filtering doing even better by catching less obvious indicators such as “per.” Who would have thought that a small word like “per” was such a strong indicator of spam?
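The kind of per-token statistics Graham describes can be sketched briefly. The sketch below is only an approximation of the scoring scheme his essay outlines, and the token counts and corpus sizes are invented for illustration; his essay gives the exact formula, including the doubling of good-corpus counts to bias against false positives.

```python
# A minimal sketch of Graham-style Bayesian token scoring.
# Counts and corpus sizes below are invented for illustration.

def token_spam_prob(bad_count, good_count, nbad, ngood):
    """Per-token spam probability. Good counts are doubled,
    following Graham's heuristic for avoiding false positives."""
    g = 2 * good_count
    b = bad_count
    p = (b / nbad) / (g / ngood + b / nbad)
    return min(0.99, max(0.01, p))  # clamp extremes, as Graham does

def combined_prob(probs):
    """Combine per-token probabilities in the naive-Bayes fashion."""
    prod, inv = 1.0, 1.0
    for p in probs:
        prod *= p
        inv *= 1.0 - p
    return prod / (prod + inv)

# A token like "ff0000" that appears almost exclusively in spam
# scores near the 0.99 ceiling; an unassuming token like "per"
# can still score well above neutral.
p_red = token_spam_prob(bad_count=50, good_count=0, nbad=1000, ngood=1000)
p_per = token_spam_prob(bad_count=30, good_count=5, nbad=1000, ngood=1000)
score = combined_prob([p_red, p_per])
print(p_red, p_per, round(score, 3))
```

This illustrates the point of the passage: the statistics, not the programmer’s intuition, decide which tokens matter, which is how a token like “ff0000” can end up as damning as any pornographic term.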

One great advantage of the statistical approach is that you don’t have to read so many spams. Over the past six months, I’ve read literally thousands of spams, and it is really kind of demoralizing. Norbert Wiener said if you compete with slaves you become a slave, and there is something similarly degrading about competing with spammers. To recognize individual spam features you have to try to get into the mind of the spammer, and frankly I want to spend as little time inside the minds of spammers as possible. (Graham)

The essay’s fourth section, about the advantages of statistical filtering, starts with the previous paragraph. This paragraph is particularly strong in that it shows Graham has an understanding of the hacker attitude and uses it to his advantage. Part of the hacker attitude is that boredom and drudgery are evil (Raymond, “How to Become a Hacker”), and Graham addresses this in the first sentence of the paragraph. Reading six months of spam messages certainly would be boring. The rest of the paragraph appeals, strictly speaking, to any non-spammer, including hackers. Not only does he call the work necessary for other spam recognition techniques “demoralizing,” he associates the work with slavery. Slavery has carried a deeply negative connotation in the United States for at least half a century, so connecting other spam recognition techniques to slavery is a rather bold move.

Graham continues with further passages that dig deeper into how the statistical approach works. At that point, his rhetoric becomes directed toward people who believe statistical filters can be beaten.

To beat Bayesian filters, it would not be enough for spammers to make their emails unique or to stop using individual naughty words. They’d have to make their mails indistinguishable from your ordinary mail. And this I think would severely constrain them. Spam is mostly sales pitches, so unless your regular mail is all sales pitches, spams will inevitably have a different character. And the spammers would also, of course, have to change (and keep changing) their whole infrastructure, because otherwise the headers would look as bad to the Bayesian filters as ever, no matter what they did to the message body. I don’t know enough about the infrastructure that spammers use to know how hard it would be to make the headers look innocent, but my guess is that it would be even harder than making the message look innocent. (Graham)

The beauty of Graham’s argument in this paragraph is that he not only admits that his filtering technique could be beaten, he actually tells how to beat it. However, he points out in the second sentence that this is not a problem. Because his statistical approach is based on the content of the message, the content would have to be what the recipient normally receives, and at that point it is no longer spam. In the following paragraph, he gives an example message that might pass: “Hey there. Thought you should check out the following:” However, he points out that “it will be hard even to get this past filters, because if everything else in the email is neutral, the spam probability will hinge on the url, and it will take some effort to make that look neutral” (Graham). Throughout, Graham uses a pattern of telling how to get past the statistical filters and then pointing out that the workaround either is not a problem or hinges on an unlikely event.

The rest of Graham’s essay consists of appendices with actual examples of processing spam with the statistical approach. Those, however, are irrelevant to the techniques he used to communicate and to convince others to adopt the statistical approach. What is relevant is that he actually implemented the ideas. According to Bruno Latour, an idea such as a statistical approach to spam filtering becomes more of a fact as it gets used (Latour 23). Graham took the first step toward making statistical filtering an established idea by actually implementing it. Soon Mozilla Thunderbird, SpamAssassin, and other e-mail-related products would implement statistical techniques, solidifying Graham’s plan in the fight against spam.

The theme of his rhetoric is that he understands hackers. He shows this in the first example by demonstrating that he has tried what other hackers have tried. He shows it in the second example by appealing to the hacker’s distaste for boredom and drudgery. He shows it in the third example by demonstrating how the approach can correct itself in the face of attempts to defeat it. That is his expository power: he exposes his ideas in ways that show a great deal of understanding of hackers, his audience.

Works Cited

Graham, Paul. “A Plan for Spam.” August 2002. Paul Graham. 14 September 2006 <>.

Latour, Bruno. Science in Action. Cambridge: Harvard UP, 1987.

Raymond, Eric Steven. “Hacker.” Vers. 4.4.7. The Jargon File. 19 September 2006 <>.

---. “How to Become a Hacker.” Vers. 1.35. 3 August 2006. Eric S. Raymond’s Home Page. 14 September 2006 <>.