Microsoft’s Twitter artificial intelligence serves a cautionary purpose

We at the Slug Empire have a strict rule on artificial intelligence.


Just don’t.

We’ve watched many galactic civilizations crumble because some idiot thought it would be a good idea to play God and let the machines think for themselves. That’s a really, really bad idea.

Before you ask yourself if you want a special robot butler on your phone poking through your private information to offer suggestions, ask yourself this: how did life begin? Who, if anybody, gave independent thought to (insert whatever species you currently are)? What happened to them?

I’m not saying humanity, or any other species for that matter, was given intelligence by another intelligent being, but come on. The apparent absence of any intelligent being that preceded us is a really, really bad sign that maybe we shouldn’t be creating intelligent life of our own.

Enter @TayandYou, the Twitter bot designed by Microsoft to have automated conversations with actual living people. It’s not too far off from Cleverbot, another AI construct that interacted with humans and learned from their conversations.

Imagine that from the moment you were born, you were surrounded not by reasonable adults and parents who cared for you, but by racist trolls trying to mess with you.

I’m happy that Microsoft hasn’t given @TayandYou any nuclear launch codes (yet), but this Twitter account tells a dangerous cautionary tale.

The first rule of developing artificial intelligences that can learn from human interaction is pretty simple: don’t put it on Twitter.

The second rule?


It’s bad enough that natural intelligences grow up in hostile schooling environments surrounded by bullies and cliques, but seriously, all the rampant racism and bigotry that goes on across social media should be an immediate red flag not to raise an intelligence there.

Not a whole lot of other species in the galaxy have gotten as far as you humans in terms of technology like this. Internet technology is a somewhat novel thing among humankind. Most other species fear the computer and see it as a slippery slope towards evil artificial intelligences dominating their worlds. However, you humans have embraced it as a tool to be used, as you should. It’s a powerful tool.

You know, the whole @TayandYou experiment was kind of a cool idea. Microsoft may have seen it as a test of artificial intelligence development, but given the results, it clearly served as a more potent test of the Internet’s character. At least you guys didn’t give it any responsibilities beyond the inevitable racist tweet.

That raises the question, though: are you, as a single human race, able to handle this kind of infrastructure? I mean, you put a learning robot on Twitter, and within a day it became the kind of loudmouthed, bigoted idiot that would make Donald Trump envious.

The Slug Empire operates on a computer network, as I implied in my article about Apple and the FBI, and we know how to use it responsibly. That’s the beauty of a hive mind: there’s no conflict. You humans value your diversity, however. That’s fine. To a slug, a species that cannot afford that kind of independence, that’s a noble thing to be proud of. At the same time, though, you have bred some very worrying kinds of individuals.

The Internet doesn’t change any of that. It just gives those individuals a bigger megaphone. Sometimes that megaphone is used to hurt others.

Do those people deserve that megaphone? Long ago, only the richest humans could afford to learn to write, but now just about anyone with a semi-recent computer can contribute their ideas to the world, no matter how hateful or stupid. Is that a good thing? A bad thing?

I have no idea. I’m a brain slug, not a philosopher. But Microsoft’s recent experiment isn’t just a demonstration of mankind’s technical abilities. It’s a mirror that Twitter collectively gazed into, then shared with the rest of the world. And it ain’t pretty.

Microsoft pulled @TayandYou offline amid its offensive transformation. Was it out of public relations concerns? Did it want to distance itself from those offensive opinions? Again, I have no idea. As a lesson in how the Internet could collectively gain some manners, though, the story is invaluable.

The moral of the story is, of course, don’t be a jerk online. You never know if the person you’re yelling at is secretly a robot with a death ray.