
CVE-2025-59425

HIGH
7.5 CVSS
Published: 2025-10-07
Updated: 2025-10-16

Description

vLLM is an inference and serving engine for large language models (LLMs). Before version 0.11.0rc2, vLLM's built-in API key support validated keys with an ordinary string comparison, which short-circuits at the first mismatched character and therefore takes slightly longer the more leading characters of the provided key are correct. By measuring response times across many attempts, an attacker can detect when the next character in the key sequence has been guessed correctly and recover the key one character at a time. Deployments relying on vLLM's built-in API key validation are therefore vulnerable to authentication bypass using this technique. Version 0.11.0rc2 fixes the issue.
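The distinction can be sketched in Python (this is an illustrative example, not the actual vLLM code; the function names are hypothetical). A plain `==` comparison short-circuits on the first mismatch, while the standard library's `hmac.compare_digest` runs in time independent of where the inputs differ:

```python
import hmac

def check_api_key_naive(provided: str, expected: str) -> bool:
    # Vulnerable pattern: '==' stops at the first mismatched character,
    # so comparison time leaks the length of the matching prefix.
    return provided == expected

def check_api_key_constant_time(provided: str, expected: str) -> bool:
    # Safe pattern: hmac.compare_digest compares in time that does not
    # depend on the position of the first difference.
    return hmac.compare_digest(provided.encode(), expected.encode())
```

The constant-time variant is the standard remediation for CWE-385-style timing channels in credential checks.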

CVSS Metrics

Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
Attack Vector: Network
Attack Complexity: Low
Privileges Required: None
User Interaction: None
Scope: Unchanged
Confidentiality: High
Integrity: None
Availability: None
Weaknesses: CWE-385 (Covert Timing Channel)

Metadata

Primary Vendor: vLLM
Published: 2025-10-07
Last Modified: 2025-10-16
Source: NIST NVD
Note: Verify all details with official vendor sources before applying patches.

Affected Products

vllm : vllm
