Over a decade ago, Meta, then known as Facebook, hired researchers in the social sciences with the goal of analyzing how the social network's services were impacting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations.

But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings of Meta’s internal research and documents seemingly contradicted how the company portrayed itself in public. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way.

Mark Zuckerberg's company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower. A newer crop of tech companies, including OpenAI and Anthropic, subsequently invested heavily in researchers and charged them with studying the impact of modern AI on users and publishing their findings.