Artificial intelligence grows more capable by the day, and the latest installment is Microsoft's Bing chatbot.
When tested by early users, the chatbot produced alarming responses that were at times threatening and at other times emotional.
This new technology is trained on large amounts of text to predict the next word, which sometimes leads to strange outputs, Mohit Iyyer, assistant professor of computer science at UMass Amherst, explained on Greater Boston.
Seth Lazar, professor of philosophy at Australian National University, said the bots are fine-tuned to be helpful, but that tuning can go sideways. "They will kind of do whatever you want them to do, which also makes them dangerous," Lazar said, adding that the technology is highly susceptible to manipulation.
Lazar said he played around with the bot and it threatened to kill him.
Both Lazar and Iyyer expressed concern about these AI models falling into the wrong hands. Iyyer said, "It can have really terrible repercussions if people start taking the things that are being spit out by this model seriously."
Much of this artificial intelligence technology is still in its early phases and isn't ready for widespread use.
Watch: Bing's Chatbot has a dark side. Should we be concerned?