Because of deepfakes, digital evidence authentication will never be the same again.

Last winter, federal prosecutors in Texas indicted an innocent person on fabricated audiovisual evidence. Nobody challenged the file as a deepfake. The fabrication came to light only when the confidential informant who produced it pleaded guilty to obstruction of justice at his own sentencing hearing.

The case appears in the first federal survey of judges on how courts handle deepfake challenges, released March 25, 2026, by the Federal Judicial Center. Of the 931 federal judges and magistrate judges who responded (a 45 percent response rate), only 15 had ever fielded a challenge to audiovisual evidence as a deepfake. Two-thirds of those 15 had seen just one such challenge across calendar years 2024 and 2025, and most of the cases were civil.

Late last year, federal and state judges told reporters they were not ready for AI-generated evidence in court. The new survey offers the first measure of what judges are actually seeing from the bench. And the Texas indictment shows why the count of past challenges does not measure the size of the problem.

The Federal Judicial Center pulled four of the challenged cases from PACER. One was dismissed on a procedural deadline before the court reached the merits. Another turned out to involve a zoomed-in clip of evidence that had already been admitted. Neither is the kind of dispute the Advisory Committee on Evidence Rules is trying to address.