Fixing the issue required more than just rewarding 'safe answers.'
Fictional portrayals of artificial intelligence can have a real effect on AI models, according to Anthropic.
Anthropic thinks it has found the reason for blackmail-like behavior in its chatbot Claude: fictional stories online.
But training on "synthetic stories" that model good AI behavior can help.
Anthropic recently released a report saying it had solved Claude’s “agentic misalignment,” the bot’s tendency to behave in ways that deviate from humans’ best interests.