It’s becoming increasingly common for people to use ChatGPT and other AI chatbots like Gemini, Copilot and Claude in their everyday lives. A recent survey from Elon University’s Imagining the Digital Future Center found that half of Americans now use these technologies.

“By any measure, the adoption and use of LLMs [large language models] is astounding,” Lee Rainie, director of Elon’s Imagining the Digital Future Center, said in a university news release. “I am especially struck by the ways these tools are being woven into people’s social lives.”

And while these tools can be useful for tasks like drafting an email or brainstorming questions for a doctor’s appointment, it’s wise to be cautious about how much information you share with them.

A recent study from the Stanford Institute for Human-Centered AI helps explain why. Researchers analyzed the privacy policies of six of the top U.S. AI chat system developers (OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, Amazon’s Nova, Meta’s MetaAI and Microsoft’s Copilot) and found that all of them appear to use customer conversations to “train and improve their models by default” and “some retain this data indefinitely.”