By Alex Knapp
AI’s cavalcade of constraints. Uncovering a lost sacred manuscript. Why it’s okay to drink a little coffee before bed. All that and more in this week’s edition of The Prototype. To get it in your inbox, sign up here.
Anthropic’s Claude models are putting out faulty code right now, according to my colleague Thomas Fox-Brewster. And it’s not just code, either. Many customers are complaining about what they see as degradations in the AI company’s models – products that once performed at a high level are slowly becoming worse. These echo similar complaints you can find in Reddit forums about models from Google and OpenAI, too.
One likely suspect here is capacity. Simply put, the more tokens an AI model uses, the better the outputs it’s likely to produce – but that also uses more computing power. And that’s increasingly constrained as a growing number of users adopt AI for different applications. This is why it’s hard to see a week go by without news of a new data center deal – tech companies need more processors to meet customer demand. It’s also likely one reason why companies are increasingly moving to usage-based pricing for their AI products.