
A practical guide for IT, security, and ML platform teams running Hugging Face models behind a JFrog Artifactory proxy: how to migrate legacy "Hugging Face" repositories to the new "Machine Learning" repository layout before the June 2026 deadline, why proxy environments hit HTTP 429 rate limits, and when Hugging Face Enterprise Plus and Model Gateway are the right answer.

TL;DR:

- JFrog Artifactory can proxy the Hugging Face Hub for caching, scanning, and governance, but it inherits the rate limits of whichever Hub identity you configure on the remote repository.
- Artifactory's Xet protocol implementation is surface-level and misses Xet's deduplication benefits; in practice it nearly doubles your storage footprint.
- Before June 2026, every legacy "Hugging Face" repository in Artifactory needs to be migrated to the new "Machine Learning" repository layout.
- For enterprises with serious AI workloads, Hugging Face Enterprise Plus provides higher rate limits, organizational SSO/SCIM identity, audit logs, and Model Gateway: a Hugging Face-native internal model registry that solves the gated-model permission problem (Llama, Gemma, Mistral) at the organization level and delivers true content-addressed storage.
- The most resilient architecture pairs Artifactory as the universal artifact perimeter with a Hugging Face Enterprise Plus organization providing identity, governance, and the model-distribution layer.
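To make the proxy setup concrete, here is a minimal configuration sketch for routing `huggingface_hub` traffic through an Artifactory remote repository via the `HF_ENDPOINT` environment variable. The hostname and repository key (`artifactory.example.com`, `hf-remote`) are placeholders, and the authentication detail is an assumption; exact paths and credentials depend on your Artifactory deployment and JFrog's Hugging Face repository documentation.

```shell
# Route all huggingface_hub / huggingface-cli traffic through the Artifactory
# proxy instead of huggingface.co. The "api/huggingfaceml/<repo-key>" path is
# JFrog's client-facing layout for Machine Learning repositories; hostname and
# repo key ("hf-remote") are placeholders for your deployment.
export HF_ENDPOINT="https://artifactory.example.com/artifactory/api/huggingfaceml/hf-remote"

# Assumption: authenticate with an Artifactory identity token rather than a
# Hub token. Your Artifactory auth setup may differ.
export HF_TOKEN="<artifactory-identity-token>"

# Downloads now resolve (and are cached) via Artifactory, subject to the rate
# limits of the Hub identity configured on the remote repository:
huggingface-cli download meta-llama/Llama-3.1-8B-Instruct
```

Note that with this setup the client's rate-limit exposure is exactly the one described above: Artifactory forwards requests upstream under a single configured Hub identity, so heavy CI traffic from many developers can hit HTTP 429 as one shared quota.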