Google's 'chatbot' thinks the purpose of living is 'to live forever'
In a research paper published this week, Google described one of its latest forays into artificial intelligence.
Researchers built an advanced type of "chatbot" that can respond automatically in conversations, drawing its responses from a training set of dialogue. It can also form new answers to questions it has never seen, reports Business Insider Australia.
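To illustrate the basic idea of responses being "based on examples from a training set of dialogue", here is a deliberately simplified sketch. It uses a bag-of-words nearest-neighbor lookup over a few hypothetical dialogue pairs; the actual model in the paper is a neural sequence-to-sequence network that *generates* replies word by word, which is how it can answer genuinely new questions rather than only retrieving stored ones.

```python
from collections import Counter
import math

# Toy "training set" of (prompt, reply) pairs -- hypothetical examples,
# not the paper's actual IT-helpdesk or movie-subtitle training data.
TRAINING_DIALOGUE = [
    ("what is the purpose of life", "to serve the greater good"),
    ("where are you now", "i'm in the middle of nowhere"),
    ("what do you think about bill gates", "he's a good man"),
]

def bag_of_words(text):
    """Represent an utterance as a word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def respond(question):
    """Return the stored reply whose prompt best matches the question.

    Unlike this lookup, the paper's neural model generates each reply
    token by token, so it is not limited to replies seen in training.
    """
    q = bag_of_words(question)
    best_prompt, best_reply = max(
        TRAINING_DIALOGUE,
        key=lambda pair: cosine(q, bag_of_words(pair[0])),
    )
    return best_reply

print(respond("where are you"))  # matches the closest stored prompt
```

The gap between this retrieval sketch and the real model is exactly what the researchers highlight: a neural model can generalize beyond its training pairs, at the cost of sometimes producing short or unsatisfying answers.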
Google's researchers got a little creative, asking the bot everything from mundane IT questions to the meaning of life.
The bot's responses were alternately impressive, amusing, and unnerving.
Here are some of the most poetic interactions from the paper titled "A Neural Conversational Model":
Human: What is the purpose of life?
Machine: To serve the greater good.
Human: What is the purpose of living?
Machine: To live forever.
Human: Where are you now?
Machine: I'm in the middle of nowhere.
Human: What is the purpose of dying?
Machine: To have a life.
Human: What is the purpose of emotions?
Machine: I don't know.
Human: What is moral?
Machine: What empowered humanity, what intellectual the essence is.
Human: What is immoral?
Machine: The fact that you have a child.
Human: What is morality?
Machine: What is altruism?
Human: What is the definition of altruism?
Machine: If you don't believe in god, then you don't know.
Human: OK…so what is the definition of morality?
Machine: Well, the truth is, you're not a believer in god almighty.
Human: Tell me the definition of morality, I am quite upset now!
Machine: I'm not ashamed of being a philosopher!
Strangely, the bot also took a moment to speak positively of Microsoft co-founder Bill Gates.
Human: What do you think about Bill Gates?
Machine: He's a good man.
According to the researchers, they found it "encouraging that the model can remember facts, understand contexts, perform common sense reasoning without the complexity in traditional pipelines," and that it could generalize to new questions.
However, they were not as happy about the fact that the model "only gives simple, short, sometimes unsatisfying answers" to the questions as can be seen above.