As the usage of artificial intelligence — benign and adversarial — increases at breakneck speed, more cases of potentially harmful responses are being uncovered. These include hate speech, copyright infringements or sexual content.

The emergence of these undesirable behaviors is compounded by a lack of regulations and insufficient testing of AI models, researchers told CNBC.

Getting machine learning models to behave the way it was intended to do so is also a tall order, said Javier Rando, a researcher in AI.

“The answer, after almost 15 years of research, is, no, we don’t know how to do this, and it doesn’t look like we are getting better,” Rando, who focuses on adversarial machine learning, told CNBC.

However, there are some ways to evaluate risks in AI, such as red teaming. The practice involves individuals testing and probing artificial intelligence systems to uncover and identify any potential harm — a modus operandi common in cybersecurity circles.