Researchers Asked LLMs for Strategic Advice. They Got "Trendslop" in Return.

Harvard Business Review, March 16, 2026

Leaders might assume that LLMs offer a kind of unbiased, outside perspective. But new research found that leading LLMs have clear biases when it comes to strategy: they consistently recommend strategies that align with modern managerial buzzwords and trends rather than context-specific strategic logic. This propensity for AI to favor buzzy ideas over reasoned solutions is called "trendslop," and leaders should beware of it warping their strategic planning. When using AI in strategic planning, leaders should use it to expand options rather than make choices, counteract known and potential biases, remain alert to changing biases, watch out for the hybrid trap, and not rely on context alone.

Leaders and consultants are increasingly turning to large language models (LLMs) such as ChatGPT as silent partners in the boardroom. These tools promise to summarize complex information, produce clear arguments, and deliver polished strategic recommendations in seconds. But as LLMs are integrated into executive workflows, a critical question emerges: How good is their advice? Is it trustworthy?






