{
  "threat_severity" : "Important",
  "public_date" : "2026-01-21T21:13:11Z",
  "bugzilla" : {
    "description" : "vLLM: Arbitrary code execution via untrusted model loading",
    "id" : "2431865",
    "url" : "https://bugzilla.redhat.com/show_bug.cgi?id=2431865"
  },
  "cvss3" : {
    "cvss3_base_score" : "8.8",
    "cvss3_scoring_vector" : "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
    "status" : "verified"
  },
  "cwe" : "CWE-94",
  "details" : [ "vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.", "A flaw was found in vLLM, an inference and serving engine for large language models (LLMs). This vulnerability allows a remote attacker to achieve arbitrary code execution on the vLLM host during model loading. This occurs because vLLM loads Hugging Face `auto_map` dynamic modules without properly validating the `trust_remote_code` setting. By influencing the model repository or path, an attacker can execute malicious Python code at server startup, even before any API requests are handled." ],
  "statement" : "This vulnerability is rated Important for Red Hat as vLLM, an inference and serving engine for large language models, is vulnerable to arbitrary code execution. An attacker influencing the model repository or path can execute malicious Python code during server startup, affecting vLLM versions 0.10.1 through 0.13.x.",
  "affected_release" : [ {
    "product_name" : "Red Hat AI Inference Server 3.2",
    "release_date" : "2026-02-27T00:00:00Z",
    "advisory" : "RHSA-2026:3461",
    "cpe" : "cpe:/a:redhat:ai_inference_server:3.2::el9",
    "package" : "rhaiis/vllm-cuda-rhel9:sha256:dcb9d1cd005c40b6db6f893e56419e383b9dcc0d38315605cb1457e2af5354f7"
  }, {
    "product_name" : "Red Hat AI Inference Server 3.2",
    "release_date" : "2026-02-27T00:00:00Z",
    "advisory" : "RHSA-2026:3462",
    "cpe" : "cpe:/a:redhat:ai_inference_server:3.2::el9",
    "package" : "rhaiis/vllm-rocm-rhel9:sha256:53007894763e03f609c35c727cb738db3c2130b19fa0e1069c24240e0870fb7a"
  }, {
    "product_name" : "Red Hat OpenShift AI 2.25",
    "release_date" : "2026-03-04T00:00:00Z",
    "advisory" : "RHSA-2026:3782",
    "cpe" : "cpe:/a:redhat:openshift_ai:2.25::el9",
    "package" : "rhoai/odh-vllm-gaudi-rhel9:sha256:9c445ef3bebd885b7d5992d4a6a42cc8f01f0bd53dbfe4b70b6c613c52bad76b"
  }, {
    "product_name" : "Red Hat OpenShift AI 3.3",
    "release_date" : "2026-03-04T00:00:00Z",
    "advisory" : "RHSA-2026:3713",
    "cpe" : "cpe:/a:redhat:openshift_ai:3.3::el9",
    "package" : "rhoai/odh-vllm-gaudi-rhel9:sha256:30dd95f0c900b81b80e435796d82dd556814dd6d46c6b43b7dd879bcfdb8420e"
  }, {
    "product_name" : "Red Hat OpenShift AI 3.4",
    "release_date" : "2026-03-19T00:00:00Z",
    "advisory" : "RHSA-2026:5119",
    "cpe" : "cpe:/a:redhat:openshift_ai:3.4::el9",
    "package" : "rhoai/odh-vllm-gaudi-rhel9:sha256:a71763bda2e2c3a9827771c5a6116f4372fb827f2c7d2324adcd6d6a5eff9d54"
  } ],
  "package_state" : [ {
    "product_name" : "Red Hat AI Inference Server",
    "fix_state" : "Affected",
    "package_name" : "rhaiis/vllm-spyre-rhel9",
    "cpe" : "cpe:/a:redhat:ai_inference_server:3"
  }, {
    "product_name" : "Red Hat AI Inference Server",
    "fix_state" : "Affected",
    "package_name" : "rhaiis/vllm-tpu-rhel9",
    "cpe" : "cpe:/a:redhat:ai_inference_server:3"
  }, {
    "product_name" : "Red Hat Enterprise Linux AI (RHEL AI) 3",
    "fix_state" : "Affected",
    "package_name" : "rhelai3/bootc-aws-cuda-rhel9",
    "cpe" : "cpe:/a:redhat:enterprise_linux_ai:3"
  }, {
    "product_name" : "Red Hat Enterprise Linux AI (RHEL AI) 3",
    "fix_state" : "Affected",
    "package_name" : "rhelai3/bootc-azure-cuda-rhel9",
    "cpe" : "cpe:/a:redhat:enterprise_linux_ai:3"
  }, {
    "product_name" : "Red Hat Enterprise Linux AI (RHEL AI) 3",
    "fix_state" : "Affected",
    "package_name" : "rhelai3/bootc-cuda-rhel9",
    "cpe" : "cpe:/a:redhat:enterprise_linux_ai:3"
  }, {
    "product_name" : "Red Hat Enterprise Linux AI (RHEL AI) 3",
    "fix_state" : "Affected",
    "package_name" : "rhelai3/bootc-gcp-cuda-rhel9",
    "cpe" : "cpe:/a:redhat:enterprise_linux_ai:3"
  }, {
    "product_name" : "Red Hat OpenShift AI (RHOAI)",
    "fix_state" : "Not affected",
    "package_name" : "rhoai/odh-kserve-agent-rhel9",
    "cpe" : "cpe:/a:redhat:openshift_ai"
  }, {
    "product_name" : "Red Hat OpenShift AI (RHOAI)",
    "fix_state" : "Not affected",
    "package_name" : "rhoai/odh-kserve-controller-rhel9",
    "cpe" : "cpe:/a:redhat:openshift_ai"
  }, {
    "product_name" : "Red Hat OpenShift AI (RHOAI)",
    "fix_state" : "Not affected",
    "package_name" : "rhoai/odh-kserve-router-rhel9",
    "cpe" : "cpe:/a:redhat:openshift_ai"
  }, {
    "product_name" : "Red Hat OpenShift AI (RHOAI)",
    "fix_state" : "Not affected",
    "package_name" : "rhoai/odh-kserve-storage-initializer-rhel9",
    "cpe" : "cpe:/a:redhat:openshift_ai"
  }, {
    "product_name" : "Red Hat OpenShift AI (RHOAI)",
    "fix_state" : "Affected",
    "package_name" : "rhoai/odh-vllm-cpu-rhel9",
    "cpe" : "cpe:/a:redhat:openshift_ai"
  }, {
    "product_name" : "Red Hat OpenShift AI (RHOAI)",
    "fix_state" : "Affected",
    "package_name" : "rhoai/odh-vllm-cuda-rhel9",
    "cpe" : "cpe:/a:redhat:openshift_ai"
  }, {
    "product_name" : "Red Hat OpenShift AI (RHOAI)",
    "fix_state" : "Affected",
    "package_name" : "rhoai/odh-vllm-rocm-rhel9",
    "cpe" : "cpe:/a:redhat:openshift_ai"
  } ],
  "references" : [ "https://www.cve.org/CVERecord?id=CVE-2026-22807", "https://nvd.nist.gov/vuln/detail/CVE-2026-22807", "https://github.com/vllm-project/vllm/commit/78d13ea9de4b1ce5e4d8a5af9738fea71fb024e5", "https://github.com/vllm-project/vllm/pull/32194", "https://github.com/vllm-project/vllm/releases/tag/v0.14.0", "https://github.com/vllm-project/vllm/security/advisories/GHSA-2pc9-4j83-qjmr" ],
  "name" : "CVE-2026-22807",
  "mitigation" : {
    "value" : "To mitigate this issue, ensure that vLLM instances are configured to load models only from trusted and verified repositories. Restrict access to the model repository path to prevent unauthorized modification or introduction of malicious code. Implement strict access controls and integrity checks for all model sources.",
    "lang" : "en:us"
  },
  "csaw" : false
}