Cybersecurity researchers revealed Tuesday how artificial intelligence can be used to clone a person’s voice in real time to mount voice phishing attacks on unsuspecting organizations.

Researchers from the NCC Group noted in a company blog that they launched attacks using real-time voice cloning against real organizations and successfully recovered sensitive and confidential information.

“Not only that, but we have also shown how these techniques can convince people in positions of key operational responsibility to carry out actions on behalf of the attacker,” wrote the researchers, Pablo Alobera, Pablo López, and Víctor Lasa.

“In security assessments that simulated real-world attack conditions, we have been able to carry out actions such as email address changes, password resets, and so on,” they added.

When starting their project, the researchers identified several challenges to mounting voice phishing (vishing) attacks with cloned voices. One was the technology. The vast majority of state-of-the-art deepfake technologies and architectures were designed for offline inference rather than real-time use, the researchers discovered.