
CVE-2025-25183

Severity: LOW
CVSS Score: 2.6
Published: 2025-02-07
Updated: 2025-07-01

Description

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed prompts can lead to hash collisions, resulting in prefix cache reuse, which can interfere with subsequent responses and cause unintended behavior.

Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, hash(None) returns a predictable constant value, which makes it more feasible for someone to attempt to exploit hash collisions. The impact of a collision is that cache generated from different content would be reused. Given knowledge of the prompts in use and the predictable hashing behavior, an attacker could intentionally populate the cache with a prompt known to collide with another prompt in use.

This issue has been addressed in version 0.7.2, and all users are advised to upgrade. There are no known workarounds for this vulnerability.
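To illustrate the mechanism, the sketch below contrasts a cache key built with Python's built-in hash() against a hardened key built from a cryptographic digest with a random per-process seed. This is a minimal illustration, not vLLM's actual prefix-caching code or its 0.7.2 fix; the function names, the (parent_key, token_ids) key shape, and the seed handling are assumptions made for the example.

```python
import hashlib
import os
import pickle

def weak_block_key(parent_key, token_ids):
    # Built-in hash(): on Python 3.12+, hash(None) is a predictable constant
    # and integers hash to themselves, so this key is reproducible across
    # processes and an attacker who knows the token ids can search for collisions.
    return hash((parent_key, tuple(token_ids)))

# One possible hardening (illustrative only): a cryptographic digest keyed
# with a random per-process seed, so keys cannot be predicted from outside.
_PROCESS_SEED = os.urandom(16)

def hardened_block_key(parent_key, token_ids):
    payload = pickle.dumps((_PROCESS_SEED, parent_key, tuple(token_ids)))
    return hashlib.sha256(payload).hexdigest()

if __name__ == "__main__":
    # The first block of a prompt has no parent, hence parent_key=None.
    print(weak_block_key(None, [101, 2023, 2003]))      # predictable on 3.12+
    print(hardened_block_key(None, [101, 2023, 2003]))  # differs per process
```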

CVSS Metrics

Vector: CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:N/I:L/A:N
Attack Vector: Network
Attack Complexity: High
Privileges Required: Low
User Interaction: Required
Scope: Unchanged
Confidentiality Impact: None
Integrity Impact: Low
Availability Impact: None

Weaknesses

CWE-354 (Improper Validation of Integrity Check Value)
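For reference, the 2.6 base score follows from this vector via the CVSS v3.1 base score equations. The sketch below plugs in the metric weights from the CVSS v3.1 specification (scope unchanged) and uses a simplified round-up helper; it is illustrative, not an official calculator.

```python
import math

def roundup(x):
    # Simplified CVSS v3.1 Roundup: smallest one-decimal value >= x
    # (the spec defines an integer-based variant to avoid float artifacts).
    return math.ceil(x * 10) / 10

# Weights from the CVSS v3.1 spec for AV:N, AC:H, PR:L (scope unchanged), UI:R
av, ac, pr, ui = 0.85, 0.44, 0.62, 0.62
# Weights for C:N, I:L, A:N
c, i, a = 0.0, 0.22, 0.0

iss = 1 - (1 - c) * (1 - i) * (1 - a)        # 0.22
impact = 6.42 * iss                          # ~1.41 (scope unchanged)
exploitability = 8.22 * av * ac * pr * ui    # ~1.18
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10.0))
print(base)  # 2.6
```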

Metadata

Primary Vendor: vLLM
Published: 2025-02-07
Last Modified: 2025-07-01
Source: NIST NVD
Note: Verify all details with official vendor sources before applying patches.

Affected Products

vllm : vllm
