I tried to fool my brother, sort of.

Next to him and his Pekingese on the couch, without context or introduction, I played an audio clip of me—deepfake audio of my voice that I’d asked cybersecurity startup Doppel to make. Fake Me’s voice sounded distressed, stilted, and just persuasive enough that he narrowed his eyes, scrunched his nose, and asked: “That’s AI, right?” My extremely online brother was far from fooled, but he was unsettled.

I’d asked Doppel to show me (as viscerally as possible) what deepfake audio sounded like at a moment when, as a society, we’ve perhaps never been more aware of deepfakes. The rise of generative AI has taken a technology that’s long been alarming and made it increasingly ubiquitous.

In recent months, deepfakes have been rapidly improving, said Kevin Tian, Doppel’s CEO and cofounder. Doppel Simulations demonstrates the state of social engineering attacks to companies, and is used both for training and for testing vulnerabilities.

“Our simulation capabilities are very much like vibe coding,” he told Fortune. “You give it a prompt, and that AI agent can then construct a phishing social engineering campaign that hits you specifically on your phone, maybe shoots off an SMS right after that phone call to confirm some things. Of course, you can still do the traditional phishing email…But over the past few months, what’s gotten significantly better is the ability to do real-time, synchronous deepfake conversations in an intelligent manner. I can chat with my own deepfake in real-time. It’s not scripted, it’s dynamic.”