The strange journey of Metaphor-a-Minute

by Darius Kazemi on June 5, 2012

in philosophy, projects, twitter, weirdness

In its short lifespan (less than a month!), my Metaphor-a-Minute Twitter bot has already gone through two life-altering events. First, it had to be taught proper etiquette. Second, it was banned, and then unbanned, by Twitter. I discuss both of these experiences below.

Watch your mouth, young bot

There’s a passage in Alien Phenomenology where Ian Bogost talks about a random image picker he created for an object-oriented ontology symposium website (you can see it on the left side of the page at the link). Apparently, at one point the female dean of a women’s college was shown the site, and the image picker chose to display a woman in a Playboy bunny outfit. Bogost writes:

Given the charged nature of the subject—a sexist “toy” on a website about an ontology conference organized by and featuring 89 percent white men—it would have been tempting to shut down the feature entirely or to eviscerate its uncertainty and replace it with a dozen carefully suggested stock images, specimens guaranteed not to ruffle feathers. But to do so would destroy the gadget’s ontographical power, reducing it to but a visual flourish. (99)

At first he resisted making any changes, but eventually (correctly, in my opinion) came to see that some of the pictures the picker could show might undermine the argument that the work of carpentry was trying to make. In the end he put a filter in place to exclude things tagged “sexy” or “woman” or “girl” when those tags appeared alongside “object” or “thing” or “stuff”. This has the unfortunate side effect of removing many depictions of women from the image picker, perhaps silencing their presence by erasing it from its ontography, but for obvious reasons you don’t want your philosophy to list “women” as “objects” — even though it might be true in an ontological sense (“objects” here meaning “actors” in the most generic sense: things that affect other things, including men, elephants, galaxies, computers, forests, and paperclips). Someone without a background in OOO could easily interpret it the wrong way.

While I thought it was a very interesting commentary on carpentry, I sort of filed it away in the back of my mind, not knowing that it would mirror something I’d have to do when I built my own work of carpentry a few weeks later.

Metaphor-a-Minute had been chugging along nicely for maybe three or four days when I noticed the following tweet:

“a f_____t is a gadfly: case, but not heterosexual”

I was surprised that the Wordnik API would provide a homophobic slur like that, considering that its randomWords API call is used by all sorts of applications. (It’s also interesting/alarming that it was paired with the phrase “not heterosexual,” as though my bot had gained sentience but turned out to be a homophobic asshole.) I had assumed there was some filtering in place, but on inspection, nope: it looked like any and all English words could be returned by a call to randomWords. At that point, I knew I needed to add a language filter to the bot.
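For the curious, the bot’s whole mechanism amounts to asking Wordnik for random words and dropping them into a fixed template. Here is a minimal sketch of that flow in Python; the choice of parts of speech for each slot, the exact query parameters, and the placeholder API key are my illustration of the idea, not the bot’s actual source.

```python
import requests

# Placeholder credentials: Wordnik issues real API keys to developers.
WORDNIK_KEY = "YOUR_API_KEY"
RANDOM_WORDS_URL = "https://api.wordnik.com/v4/words.json/randomWords"

def random_words(part_of_speech, count):
    """Ask Wordnik's randomWords endpoint for `count` random words."""
    resp = requests.get(RANDOM_WORDS_URL, params={
        "includePartOfSpeech": part_of_speech,  # e.g. "noun", "adjective"
        "limit": count,
        "api_key": WORDNIK_KEY,
    })
    resp.raise_for_status()
    return [entry["word"] for entry in resp.json()]

def metaphor():
    """Build a tweet in the bot's "a X is a Y: Z, but not W" shape."""
    x, y = random_words("noun", 2)
    z, w = random_words("adjective", 2)
    return "a {} is a {}: {}, but not {}".format(x, y, z, w)

print(metaphor())
```

The point of the sketch: nothing in that pipeline screens what comes back.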

Adding the filter was harder than I expected. After a little bit of searching, I found a list of 458 “bad words” that Urbano Alvarez compiled a few years back. Looking through the list, it’s clear that it’s meant to be comprehensive, the kind of thing you’d use to filter a chat room for kids. So in addition to the obvious curse words and racial slurs, there are also words like “hacker,” “dominatrix,” and “porn” on the list. I certainly didn’t want hackers and dominatrices and porn excluded from what @metaphorminute could talk about! There were also a lot of words that kids will type to get around a filter: “b1tch” and “pr0n” and the like. But I didn’t care about those because Wordnik won’t give me words like that.

Beyond these easy cases, I had to consider what types of words I felt would be unacceptable for @metaphorminute to say. I’d been jokingly referring to the bot as “my child,” but now it had reached an age where it was mouthing off using words it didn’t fully understand, and I had to really think about where to draw the line — an activity uncomfortably close to the sort of thing I’d have to do with a real child at a certain age! After discussing it with my spouse, I determined that what I really cared about were “oppressive” words of various stripes: terminology used to denigrate specific groups of people. So I cut the list of 458 words down to about 30, and then added maybe 15 more that I came up with on my own. These generally fell under the category of racist/sexist/ableist words. The Wordnik API does return quite a bit of slang; for example, while “pr0n” is not in their dictionary, “biatch” is. In the end, I’m fairly comfortable with the list of words I’ve excluded, but I’m 100% sure that I haven’t caught everything. If another problematic word comes up or I get a complaint, I can quickly add a new word to the list.
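In code, the filter itself is the easy part; deciding what belongs on the list was the real work. Here is a minimal sketch, reusing the hypothetical random_words helper from the snippet above. The excerpted list contents and the reject-and-redraw strategy are assumptions of mine, not the bot’s actual implementation.

```python
# Excerpt only: per the discussion above, the real list holds roughly
# 45 oppressive terms (racist/sexist/ableist slurs), trimmed from
# Urbano Alvarez's 458-word list plus a handful of additions.
BLOCKED_WORDS = {
    "biatch",
    # ... remaining entries omitted ...
}

def safe_random_words(part_of_speech, count):
    """Draw random words, redrawing any that appear on the blocklist."""
    words = []
    while len(words) < count:
        candidate = random_words(part_of_speech, 1)[0]
        if candidate.lower() not in BLOCKED_WORDS:
            words.append(candidate)
    return words
```

Swapping safe_random_words in for random_words in metaphor() is all it takes, and when a new problematic word surfaces, adding it is a one-line change to the set.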

Due to the way the English language works, certain words I filtered have dual meanings, some of them perfectly innocuous. For example, “bitch” can mean “female dog,” but in the end I had to filter it because I didn’t want to be the poor sap arguing on Twitter that if the bot comes up with (god forbid) “a woman is a bitch,” it obviously, DUH, means dogs in an abstract sense and not any of the colloquial meanings of the word. That argument would be weak and wrong: the whole point of generating a random metaphor is that it’s evocative, and that people can interpret what they want from its comparisons between random things.

In any case, adding a word filter to @metaphorminute was a far more introspective exercise than I could have imagined. It’s really interesting to me that while I’m the one who built this piece of carpentry, it has things to teach me and still surprises me from time to time. Merely writing the code for a piece of software does not make me some sort of god who has fully exhausted everything about that software. The object I’ve created goes forth into the world, interacts with other objects in the world, is modified by those interactions, and in turn I am required to modify the object in response.

Banned on Twitter: bots all the way down

On May 29th, 2012, @metaphorminute ceased tweeting.

At first I thought it was a bug in my code, but after looking into it, it was clear that Twitter had suspended the account. I wondered why this would happen. Why would they do this to my child? Was it breaking any of Twitter’s rules in the terms of service? I knew it wasn’t hitting any rate limits: I had engineered it to avoid that occurrence. I looked carefully at their rules for what constitutes spam: the bot was not intruding on anyone’s Twitter experience. Following @metaphorminute is completely consensual on the part of the follower. It never @-replies anyone, it never follows anyone, it never pollutes a hashtag. It keeps to itself, quietly writing poetry in its notebook in the corner. If you choose to look over its shoulder and see what it’s writing, that’s entirely your own decision.

I filed a report with Twitter, asking why it was taken down and what I could do to bring it back to life. Their initial response was unhelpful:

Your account was suspended because it appears you may be managing a number of Twitter accounts. Creating serial or bulk accounts with overlapping uses is a violation of the Twitter Rules; as a result, all of the accounts created have been suspended pending more information being provided.

The problem is, I was not creating serial or bulk accounts with overlapping uses. I explained this to them, and there was radio silence for a week. Then, last night, I got this response:

Twitter has automated systems that find and remove multiple automated spam accounts in bulk. Unfortunately, it looks like your account got caught up in one of these spam groups by mistake.

I’ve restored your account; sorry for the inconvenience.

Justice at last! After a full week in limbo, Metaphor-a-Minute was restored!

In the end, it looks like my bot was suspended by another bot. I find this interesting, to say the least.

I created a poet. Someone else created a police officer. Their police officer decided that my poet was disturbing the peace, and put it in jail. I had to appeal to a higher authority on behalf of my poet, and eventually they released it after an internal investigation. All of this (minus the last-minute intervention) played out because of nonhuman objects interacting with one another.

My poet has 85 readers at the moment. Its readers are mostly other bots, trained to find people who tweet a certain word and then follow them. (Its human readers are mostly philosophers and poets!) Some of these bots follow accounts on behalf of what appear to be real humans, humans so vapid that they fail the Turing Test. Others are simply spamming the world for their own obscure purposes.

Its readers are by and large unequipped to advocate on its behalf: they aren’t programmed to do so. They do not even notice when the poet goes silent, too busy promoting saccharine humanist values; cheap products adorned with mythical creatures; shady entertainment startups; Sarasota, Florida; Christian cultural commentary; treatments for infertility due to PCOS; numismatics; sex toys; independent authors; a North Carolina seafood restaurant; oxygen bars; and contact lenses.

At the moment it appears that all these bots do is sell things and oppress one another. I’ll leave the interpretation of that up to my readers (human or otherwise).

Comments

Zack Hiwiller June 6, 2012 at 10:52 am

Can a bot create a slur? Is it a slur if it has no intent? If I throw a bunch of Scrabble tiles on the floor and they spell out “faggot”, have I done something offensive? Has Scrabble? Has Hasbro?

I understand the need to censor a bot if it is being paired with a message and those two are incongruent. But what is Metaphor a Minute’s message that a slur would interfere with?

Darius Kazemi June 6, 2012 at 12:56 pm

It is in fact a slur even if there is no intent behind it: it comes down to the difference between “you said something racist” and “you are racist” — if you say a racial slur, then the former is true.

Since you use Scrabble tiles as an example, it’s worth noting that slurs are constantly being removed from the official Scrabble dictionary. So while your Scrabble tiles can technically form slurs when used outside of the official context of Scrabble, they are illegal moves when used within the official context of Scrabble. Similarly, I could cut out letters from the Washington Post and spell out a slur, but the Washington Post is not liable for that racial slur.

The context of Metaphor a Minute is the creation of metaphors in the English language. So for example, if there’s an English word that’s a slur in some other language, I’m not concerned with that, because anyone can see that the domain of the bot is the English language (more subtly, it’s U.S. English). Because the context is “English language metaphors,” and you can construct metaphors using slurs in English, I need to alter that usage somehow. If you read the section of Alien Phenomenology where Ian discusses his decision to censor his image-picker, I think it would become clearer. I don’t want to sit there and explain to someone that, no, the slur was totally random so it’s really okay! It would be one thing if the metaphors were only viewable by, say, people who build bots and do philosophy and understand what I’m doing — but this is a performance that takes place in public, and I have to answer to the public. It’s partly a covering-my-ass move, for sure.

When you say, “I understand the need to censor a bot if it is being paired with a message and those two are incongruent,” I get what you’re saying, but the problem is I have no way to programmatically evaluate whether there’s an incongruency. For example, the bot could say “______ is a slur: unacceptable and malicious” and the use of the word would be perhaps questionable but in my opinion totally fine. But I can’t check for scenarios like that without, say, using a human to moderate each tweet. So instead, I censor the bot.

Spankminister June 17, 2012 at 7:54 pm

It is, to me, monumentally irresponsible for Hasbro/Merriam-Webster to so arbitrarily remove words from the Official SCRABBLE (TM) Players Dictionary with little to no regard for the disastrous effects this might have on the balance of the game. It’ll come down to a professional player in a finals match being one STIFFIE away from a triple word score, and only then will there be cries that he/she was robbed.

Spankminister June 17, 2012 at 11:27 pm

Okay, I wrote that last comment as a joke, but the actual comments linked in that article seem to express genuine concern for the state of the Scrabble metagame. The guy’s user picture with his Scrabble nerd Kasparov stare really does it for me.

Rowan Lipkovits July 31, 2012 at 3:06 pm

Brings to mind this old controversy: http://www.konformist.com/1998/rapead.htm

