
CVE-2025-46570

Severity: LOW
CVSS Score: 2.6
Published: 2025-05-29
Updated: 2025-06-24

Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed and the PageAttention mechanism finds a matching prefix chunk, the prefill phase is faster, which is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are significant enough to be observed, so an attacker who measures TTFT can infer whether a guessed prefix is already in the cache (an observable timing discrepancy). This issue has been patched in version 0.9.0.
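The description above can be illustrated with a minimal sketch. The code below does not attack a real vLLM server; it simulates the assumed timing asymmetry (cached prefix tokens cost less prefill time than uncached ones) with purely illustrative constants, and shows how an observer could classify a TTFT measurement as a probable cache hit.

```python
# Illustrative simulation of the timing side channel. The per-token
# costs below are hypothetical, not measured vLLM numbers.
CACHED_COST = 0.0001   # assumed seconds/token when the prefix chunk is cached
UNCACHED_COST = 0.001  # assumed seconds/token when prefill must recompute

def simulated_ttft(prompt_tokens: int, cached_prefix_tokens: int) -> float:
    """Return a simulated Time to First Token for a prompt."""
    uncached = prompt_tokens - cached_prefix_tokens
    return cached_prefix_tokens * CACHED_COST + uncached * UNCACHED_COST

def prefix_probably_cached(prompt_tokens: int, ttft: float,
                           threshold: float = 0.5) -> bool:
    """Infer a cache hit when TTFT is well below the all-uncached cost."""
    return ttft < threshold * prompt_tokens * UNCACHED_COST
```

In this model, an observer who submits a prompt with a guessed prefix and sees a TTFT far below the expected cold-cache cost can conclude that another request already populated that prefix chunk.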

CVSS Metrics

Vector: CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N
Attack Vector: Network
Attack Complexity: High
Privileges Required: Low
User Interaction: Required
Scope: Unchanged
Confidentiality: Low
Integrity: None
Availability: None
Weaknesses: CWE-208 (Observable Timing Discrepancy), CWE-203 (Observable Discrepancy)
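The vector string above encodes the individual metrics mechanically, so they can be unpacked with a few lines of code. This is a minimal sketch of parsing a CVSS v3.1 vector into its fields; the function name is illustrative, not part of any standard library.

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3.1 vector string into a metric -> value mapping."""
    prefix, _, metrics = vector.partition("/")
    if prefix != "CVSS:3.1":
        raise ValueError(f"unexpected CVSS prefix: {prefix}")
    return dict(part.split(":") for part in metrics.split("/"))

fields = parse_cvss_vector("CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N")
# e.g. fields["AV"] is "N" (Network), fields["AC"] is "H" (High)
```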

Metadata

Primary Vendor: vLLM
Published: 2025-05-29
Last Modified: 2025-06-24
Source: NIST NVD
Note: Verify all details with official vendor sources before applying patches.

Affected Products

vllm : vllm (versions prior to 0.9.0)
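Since the fix landed in vLLM 0.9.0, a remediation check reduces to comparing the installed version against that threshold. The sketch below assumes a plain numeric `major.minor.patch` version string; a real check would obtain the string from the installed package (e.g. via `importlib.metadata.version("vllm")`) and should use a proper version parser to handle pre-release suffixes.

```python
# Hypothetical helper: flag vLLM versions below the patched 0.9.0 release.
# Assumes a plain "major.minor.patch" string with numeric components.
def is_vulnerable(version: str) -> bool:
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts < (0, 9, 0)
```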


CVE-2025-46570 | LOW Severity | CVEDatabase.com