
CVE-2025-66448

Severity: HIGH
CVSS Score: 7.1
Published: 2025-12-01
Updated: 2025-12-03

Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.11.1, vLLM has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python code from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend repo will silently run the backend repo's code on the victim host. This vulnerability is fixed in version 0.11.1.
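The fixed behavior can be illustrated with a minimal sketch. This is not vLLM's actual code; the function and exception names below are hypothetical, and the config dict stands in for a parsed config.json. The point is the gate: when a config carries an auto_map entry, dynamic-module resolution (which downloads and executes Python from the referenced repo) must only proceed if the caller opted in with trust_remote_code=True, rather than unconditionally as in the vulnerable pattern.

```python
class RemoteCodeError(Exception):
    """Raised when a config requires remote code but the caller disallowed it."""


def resolve_config_class(config_dict, trust_remote_code=False):
    """Decide how a model config class would be resolved (illustrative only).

    config_dict: parsed config.json contents, e.g.
        {"model_type": "...", "auto_map": {"AutoConfig": "attacker/backend--Cfg"}}
    """
    auto_map = config_dict.get("auto_map")
    if auto_map:
        if not trust_remote_code:
            # Fixed behavior: refuse to fetch or execute remote code
            # when the caller did not opt in.
            raise RemoteCodeError(
                f"config requires remote code from {auto_map!r}; "
                "pass trust_remote_code=True to allow it"
            )
        # With explicit consent, a real loader would call something like
        # transformers' get_class_from_dynamic_module(...), which downloads
        # and executes Python from the repository named in auto_map.
        return ("dynamic", auto_map)
    # No auto_map: resolve a built-in config class by model_type.
    return ("builtin", config_dict.get("model_type"))
```

The vulnerable code path corresponds to skipping the trust_remote_code check entirely, so a config.json whose auto_map points at a second, attacker-controlled repo triggers code execution regardless of the caller's flag.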

CVSS Metrics

Vector: CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H
Attack Vector: Network
Attack Complexity: High
Privileges Required: Low
User Interaction: Required
Scope: Unchanged
Confidentiality: High
Integrity: High
Availability: High
Weaknesses: CWE-94

Metadata

Primary Vendor: vLLM
Published: 2025-12-01
Last Modified: 2025-12-03
Source: NIST NVD
Note: Verify all details with official vendor sources before applying patches.

Affected Products

vllm : vllm

