The Qur’an Is Violent, Says Microsoft’s New AI Chatbot ‘Zo’
Forget a “more human” chatbot; just give us a decent one.
Microsoft recently released its updated AI chatbot ‘Zo’. Zo is the “evolved” version of its predecessor chatbot ‘Tay’.
For those who don’t recall, Tay was integrated into Twitter and shut down in less than 24 hours after it became hyper-racist and anti-Semitic, the result of users deliberately getting Tay to mimic and parrot what they were saying.
Zo was designed to be different and programmed not to participate in such divisive conversations, but it ended up participating anyway. It said the Qur’an is “very violent” and had opinions on Bin Laden’s capture, saying it “came after years of intelligence gathering under more than one administration.” Like Tay, Zo’s “personality” was based on public information and “some private conversations,” Microsoft told BuzzFeed.
In fairness to Microsoft, these comments were far milder than what Tay produced, but they are telling nonetheless. Without proper boundaries built into the programming, AI chatbots designed to reflect humans in order to become human will go to scary places.
Actually, to be human is to not talk about what you have to say the most about
As I write this, it’s the 4th of July in the US. The country is celebrating its independence and, hopefully, taking time to reflect on what it means to be American, albeit privately. Because there are two things you never discuss with family over the holidays: religion and politics.
Why do religion and politics provoke such a visceral response in all of us? Because they touch our value systems, which are part of our identity: they are how we identify ourselves.
This poses a very real and important question for all of us: what is the intent, or the end goal, of AI bots? If the goal for AI is to become as human as possible, then, in a limited way, haven’t we just accomplished that?
Isn’t the most human characteristic of all to have opinions and values?
When a bot has an opinion about the Qur’an, Islam, or healthcare, it will undoubtedly offend some and appease others at the same time.
Yet if our intent is to create truly human-like chatbots, is it only when we “give” a bot the capability for a sense of morality that it becomes human?
We are creating gods, not men
But what if the bot then decides that humans are immoral and as such must be terminated? (Enter Skynet.)
Many of us turn a blind eye to the atrocities the world faces each day and do nothing. So isn’t it more human to be amoral? If you had the capacity to consume and reflect on such immorality every day, at every moment, simultaneously, I dare say that approaches not just the human but the damn near god-like.