In its short lifespan (less than a month!), my Metaphor-a-Minute Twitter bot hit two life-altering events. First, it had to be taught proper etiquette. Second, it was banned and unbanned by Twitter. I discuss both of these experiences below.
Watch your mouth, young bot
There’s a passage in Alien Phenomenology where Ian Bogost talks about a random image picker he created for an object-oriented ontology symposium website (you can see it on the left side of the page at the link). Apparently there was a point where the female dean of a women’s college was shown the site, and the image picker chose to display a woman in a Playboy bunny outfit. Bogost writes:
Given the charged nature of the subject—a sexist “toy” on a website about an ontology conference organized by and featuring 89 percent white men—it would have been tempting to shut down the feature entirely or to eviscerate its uncertainty and replace it with a dozen carefully suggested stock images, specimens guaranteed not to ruffle feathers. But to do so would destroy the gadget’s ontographical power, reducing it to but a visual flourish. (99)
At first, he resisted making any changes, but eventually (correctly, in my opinion) came to see that some of the pictures it could show might undermine the argument that the work of carpentry was trying to make. In the end he put a filter in place to exclude things tagged “sexy” or “woman” or “girl” when they appeared alongside the tags “object” or “thing” or “stuff”. This has the unfortunate side effect of removing many depictions of women from the image picker, perhaps silencing their presence by removing it from its ontography, but for obvious reasons you don’t want your philosophy to list “women” as “objects” — even though it might be true in an ontological sense (“objects” here meaning “actors” in the most generic sense: things that affect other things, including men, elephants, galaxies, computers, forests, and paperclips). Someone without a background in OOO could really interpret it the wrong way.
While I thought it was a very interesting commentary on carpentry, I sort of filed it away in the back of my mind, not knowing that it would mirror something I’d have to do when I built my own work of carpentry a few weeks later.
Metaphor-a-Minute was chugging along nicely for maybe three or four days when I noticed the following tweet:
“a f_____t is a gadfly: case, but not heterosexual”
I was surprised that the Wordnik API would provide a homophobic slur like that, considering that its
randomWords API call is used by all sorts of applications. (It’s also interesting/alarming that it was paired up with the phrase ‘not heterosexual,’ as though my bot gained sentience but turned out to be a homophobic asshole.) I had assumed that there was some filtering in place, but on inspection, nope, it looked like any and all English words could be returned by a call to
randomWords. At that point, I knew that I needed to add a language filter to the bot.
Adding the filter was harder than I expected. After a little bit of searching, I found a list of 458 “bad words” that Urbano Alvarez compiled a few years back. Looking through the list, it’s clear that it’s meant to be comprehensive, the kind of thing you’d use to filter a chat room for kids. So in addition to the obvious curse words and racial slurs, there are also words like “hacker,” “dominatrix,” and “porn” on the list. I certainly didn’t want hackers and dominatrices and porn excluded from what @metaphorminute could talk about! There were also a lot of words that kids will type to get around a filter: “b1tch” and “pr0n” and the like. But I didn’t care about those because Wordnik won’t give me words like that.
Beyond these easy cases, I had to consider what types of words I felt would be unacceptable for @metaphorminute to say. I’d been jokingly referring to the bot as “my child,” but now it had reached an age where it was mouthing off using words it didn’t fully understand, and I had to really think about where to draw the line — an activity uncomfortably close to the sort of thing I’d have to do with a real child at a certain age! After discussing it with my spouse, I determined that what I really cared about were “oppressive” words of various stripes — terminology used to denigrate specific groups of people. So I cut down the list of 458 words to about 30, and then added maybe 15 more of my own. These generally fell under the category of racist/sexist/ableist words. The Wordnik API does return quite a bit of slang, so for example, while “pr0n” is not in their dictionary, “biatch” is. In the end, I’m fairly comfortable with the list of words I’ve excluded, but I’m 100% sure that I haven’t caught everything. If another problematic word comes up or I get a complaint, I can quickly add a new word to the list.
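The filter itself can be sketched in a few lines. Everything here is a stand-in: the blocklist entries are placeholders (not the real curated list), and random_word below stands in for the bot’s actual call to Wordnik’s randomWords API. The idea is simply to keep drawing until a word clears the list:

```python
# Minimal sketch of a blocklist filter, assuming a hypothetical
# blocklist and word source. The real bot draws words from the
# Wordnik randomWords API and checks them against a hand-curated
# list of roughly 45 oppressive terms.

BLOCKLIST = {"badword", "slur"}  # placeholder entries only

def is_acceptable(word):
    """Case-insensitive membership test against the blocklist."""
    return word.lower() not in BLOCKLIST

def next_clean_word(random_word, max_tries=50):
    """Draw words from random_word() until one clears the filter."""
    for _ in range(max_tries):
        word = random_word()
        if is_acceptable(word):
            return word
    raise RuntimeError("no acceptable word found")
```

Keeping the check to a plain set lookup means that when a new problematic word surfaces, adding it is a one-line change — which matches the “quickly add a new word to the list” workflow described above.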
Due to the way the English language works, certain words I filtered had dual meanings, some of which are perfectly innocuous. For example, the word “bitch” can mean “female dog,” but in the end I had to filter it because I didn’t want to be the poor sap arguing on Twitter that if the bot comes up with (god forbid) “a woman is a bitch” DUH, it obviously means dogs in an abstract sense and not any of the colloquial meanings of the word. That argument would be weak and wrong: the whole point of generating a random metaphor is that it’s evocative, and that people can interpret what they want from its comparisons between random things.
In any case, adding a word filter to @metaphorminute was a far more introspective exercise than I could have imagined. It’s really interesting to me that while I’m the one who built this piece of carpentry, it has things to teach me and still surprises me from time to time. Merely writing the code for a piece of software does not make me some sort of god who has fully exhausted everything about that software. The object I’ve created goes forth in the world, interacts with other objects in the world, is modified by those interactions, and in turn I am required to modify the object in response.
Banned on Twitter: bots all the way down
On May 29th, 2012, @metaphorminute ceased tweeting.
At first I thought it was a bug in my code, but after looking into it, it was clear that Twitter had suspended the account. I wondered why this would happen. Why would they do this to my child? Was it breaking any of Twitter’s rules in the terms of service? I knew it wasn’t hitting any rate limits: I had engineered it to avoid that occurrence. I looked carefully at their rules for what constitutes spam: the bot was not intruding on anyone’s Twitter experience. Following @metaphorminute is completely consensual on the part of the follower. It never @-replies anyone, it never follows anyone, it never pollutes a hashtag. It keeps to itself, quietly writing poetry in its notebook in the corner. If you choose to look over its shoulder and see what it’s writing, that’s entirely your own decision.
I filed a report with Twitter, asking why it was taken down and what I could do to bring it back to life. Their initial response was unhelpful:
Your account was suspended because it appears you may be managing a number of Twitter accounts. Creating serial or bulk accounts with overlapping uses is a violation of the Twitter Rules; as a result, all of the accounts created have been suspended pending more information being provided.
The problem is, I was not creating serial or bulk accounts with overlapping uses. I explained this to them, and there was radio silence for a week. Then, last night, I got this response:
Twitter has automated systems that find and remove multiple automated spam accounts in bulk. Unfortunately, it looks like your account got caught up in one of these spam groups by mistake.
I’ve restored your account; sorry for the inconvenience.
Justice at last! After a full week in limbo, Metaphor-a-Minute was restored!
In the end, it looks like my bot was suspended by another bot. I find this interesting, to say the least.
I created a poet. Someone else created a police officer. Their police officer decided that my poet was disturbing the peace, and put it in jail. I had to appeal to a higher authority on behalf of my poet, and eventually they released it after an internal investigation. All of this (minus the last-minute intervention) played out because of nonhuman objects interacting with one another.
My poet has 85 readers at the moment. Its readers are mostly other bots, trained to find people who tweet a certain word and then follow them. (Its human readers are mostly philosophers and poets!) Some of these bots follow accounts on behalf of what appear to be real humans — humans so vapid that they fail the Turing Test. Others simply spam the world for their own obscure purposes.
Its readers are by and large unequipped to advocate on its behalf: they aren’t programmed to do so. They do not even notice when the poet goes silent, too busy promoting saccharine humanist values, cheap products adorned with mythical creatures, shady entertainment startups, Sarasota Florida, Christian cultural commentary, treatments for infertility due to PCOS, numismatics, sex toys, independent authors, a North Carolina seafood restaurant, oxygen bars, and contact lenses.
At the moment it appears that all that these bots do is sell things and oppress one another. I’ll leave interpretation of that up to my readers (human or otherwise).