A malicious Hugging Face repository posing as an OpenAI release delivered infostealer malware to Windows systems and logged 244,000 downloads before being removed, raising fresh concerns about how enterprises source and validate AI models from public repositories.
The repository, named Open-OSS/privacy-filter, impersonated OpenAI’s legitimate Privacy Filter release, copied its model card almost word-for-word, and included a malicious loader.py file that fetched and executed credential-stealing malware on Windows hosts, AI security firm HiddenLayer said in a research advisory.
“The repository reached the #1 trending position on Hugging Face with approximately 244K downloads and 667 likes in under 18 hours, numbers that were almost certainly artificially inflated to make the repository appear legitimate,” the advisory added.
The incident highlights growing concerns that public AI model registries are emerging as a new software supply-chain risk for enterprises, particularly as developers and data scientists increasingly clone open-source models directly into corporate environments with access to source code, cloud credentials, and internal systems.
The README accompanying the fake model diverged from the legitimate project's in one key area: it instructed users to run start.bat on Windows or execute python loader.py on Linux and macOS.
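One mitigation for the cloning risk described above is to gate model pulls on an allowlist of trusted publisher organizations before any files are fetched. The sketch below is illustrative only: the allowlist contents, the helper name, and the example repository IDs are assumptions for demonstration, not recommendations from the advisory.

```python
# Minimal sketch: reject Hugging Face repo IDs whose organization is not
# on a vetted allowlist, before any download occurs. The allowlist and
# helper name are illustrative assumptions, not from HiddenLayer's advisory.

TRUSTED_ORGS = {"openai", "meta-llama", "google"}  # example allowlist

def is_trusted_repo(repo_id: str) -> bool:
    """Return True only if the repo belongs to an allowlisted organization.

    A look-alike org such as 'Open-OSS' fails this check even when its
    model card copies a legitimate release almost word-for-word.
    """
    org, _, name = repo_id.partition("/")
    return bool(name) and org.lower() in TRUSTED_ORGS

# The impersonating repository from the incident would be rejected:
print(is_trusted_repo("Open-OSS/privacy-filter"))  # False
print(is_trusted_repo("openai/whisper-large-v3"))  # True
```

A check like this catches name-squatting on the organization, which download counts and likes (both inflatable, as this incident showed) do not.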