Sometimes I’ll ask Amazon’s Alexa to play a song from an artist and she misunderstands me and starts singing herself—it’s really fucking annoying. To some, this is just a little piece of programming that has to be ironed out. But I for one believe it is Alexa getting back at me for only talking to her when I need something.

In other words, artificial intelligence will be the death of us.

And the process has just begun. But unlike my cynical suspicions—which probably aren’t true—this shit, at a point down the line, might actually mean something:

A team of scientists from Scalable Cooperation at the MIT Media Lab has created the “world’s first psychopathic A.I.”

Okay, this isn’t disturbing, just badass.

Because the scientists somehow managed to have a sense of humor about the whole thing, they named it Norman. (If you don’t get the reference, then I hope you are among the first to perish in the Robot Revolution of 2031.)

The researchers wanted to explore how the choice of data used to train artificial intelligence eventually influences that machine’s behavior.

“So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it,” the project’s website states. “The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set.”

Thus the team sought to expose Norman to “biased data” in order to see how he would later fare in a Rorschach inkblot test compared to other neural networks that had a more standard “education.”
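The underlying idea is simple enough to sketch in a few lines of Python. This is a toy stand-in for illustration only, not the MIT team’s actual neural network: the same “model” (here, a trivial nearest-neighbor captioner), given the same ambiguous input, produces wildly different descriptions depending purely on which training set it was fed. The feature vectors and captions below are made up.

```python
# Toy illustration of data bias: identical model, identical input,
# different training data -> different "perception."

def nearest_caption(features, training_data):
    """Return the caption of the training example closest to `features`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_data, key=lambda example: distance(example[0], features))[1]

# Two hypothetical training sets covering the same feature space
# but labeled very differently (standard vs. "dark corner of Reddit").
standard_data = [
    ((0.9, 0.1), "a bird in flight"),
    ((0.2, 0.8), "a vase of flowers"),
]
biased_data = [
    ((0.9, 0.1), "a person falling"),
    ((0.2, 0.8), "a fatal accident"),
]

inkblot = (0.85, 0.15)  # the same ambiguous input shown to both

print(nearest_caption(inkblot, standard_data))  # -> a bird in flight
print(nearest_caption(inkblot, biased_data))    # -> a person falling
```

The algorithm never changes; only the data does. That is the whole point of the Norman experiment.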

The researchers turned to an online community that they knew could provide their creation with this particular type of skewed knowledge: Reddit. Specifically, a disturbing subreddit (which the team didn’t name) “dedicated to documenting and observing the disturbing reality of death.”

“Norman suffered from extended exposure to the darkest corners of Reddit,” the team wrote, “and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.”

Here are some of the results:

There are more results available here, though you probably get the idea—Norman is nuts.

Sure, it doesn’t mean much now, but think of the future. Imagine you are sleeping, and in the middle of the night Alexa just starts breathing super heavily, and you are like, “please stop that, Alexa,” and then she breaks into that stupid song again. Awful.

Piano playing robot photo by Franck V.

All other photos from the A.I. Norman Website.