
CVE-2025-62164

Severity: HIGH
CVSS Score: 8.8
Published: 2025-11-21
Updated: 2025-12-04

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.10.2 up to but not including 0.11.1, a memory corruption vulnerability that can lead to a crash (denial of service) and potentially to remote code execution (RCE) exists in the Completions API endpoint. When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation. Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default, so a maliciously crafted sparse tensor can bypass internal bounds checks and trigger an out-of-bounds memory write during the subsequent call to to_dense(). This memory corruption can crash vLLM and potentially allow code execution on the server hosting it. The issue has been patched in version 0.11.1.
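
To make the flaw concrete, the sketch below contrasts the vulnerable pattern with one possible hardening. This is an illustration, not the actual vLLM patch: the helper names (load_prompt_embeds_unsafe, load_prompt_embeds_checked) are hypothetical, and the mitigation assumes, per the description above, that wrapping deserialization and densification in PyTorch's documented torch.sparse.check_sparse_tensor_invariants context restores the integrity checks that PyTorch 2.8.0 turns off by default.

import io

import torch


def load_prompt_embeds_unsafe(payload: bytes) -> torch.Tensor:
    # Vulnerable pattern (hypothetical helper): torch.load() deserializes a
    # user-controlled tensor. Even with weights_only=True, PyTorch >= 2.8.0
    # skips sparse-tensor invariant checks by default, so crafted indices
    # survive deserialization, and to_dense() can then write out of bounds
    # instead of raising an error.
    tensor = torch.load(io.BytesIO(payload), weights_only=True)
    return tensor.to_dense() if tensor.is_sparse else tensor


def load_prompt_embeds_checked(payload: bytes) -> torch.Tensor:
    # One possible mitigation (an assumption, not necessarily the shipped
    # fix): re-enable invariant validation around both load and
    # densification so a malformed sparse tensor raises a RuntimeError
    # rather than corrupting memory.
    with torch.sparse.check_sparse_tensor_invariants():
        tensor = torch.load(io.BytesIO(payload), weights_only=True)
        return tensor.to_dense() if tensor.is_sparse else tensor

An even simpler defensive posture, where a deployment only ever expects dense embeddings, is to reject any tensor whose is_sparse flag is true before densifying; upgrading to vLLM 0.11.1 remains the authoritative fix.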

CVSS Metrics

Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
Attack Vector: Network
Attack Complexity: Low
Privileges Required: Low
User Interaction: None
Scope: Unchanged
Confidentiality: High
Integrity: High
Availability: High

Weaknesses: CWE-20, CWE-123, CWE-502, CWE-787

Metadata

Primary Vendor: vLLM
Published: 2025-11-21
Last Modified: 2025-12-04
Source: NIST NVD
Note: Verify all details with official vendor sources before applying patches.

Affected Products

vllm : vllm
