
CVE-2025-62426

Severity: MEDIUM (CVSS 6.5)
Published: 2025-11-21
Updated: 2025-12-04

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.5.5 up to but not including 0.11.1, the /v1/chat/completions and /tokenize endpoints accept a chat_template_kwargs request parameter that is applied to the chat template before being properly validated. With crafted chat_template_kwargs values, an attacker can block the API server's request processing for long periods, delaying all other requests. This issue has been patched in version 0.11.1.
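For unpatched deployments, one stopgap is to validate request bodies before they reach the server. The sketch below is a minimal, hypothetical pre-validation shim that allowlists chat_template_kwargs keys in a /v1/chat/completions body; the allowed key names and the rejection behavior are illustrative assumptions, not vLLM's actual 0.11.1 fix.

import json

# Assumption: only these template kwargs are expected by our chat template.
# The key names here are illustrative, not part of vLLM's API.
ALLOWED_TEMPLATE_KWARGS = {"add_generation_prompt", "enable_thinking"}

def validate_request_body(raw_body: bytes) -> dict:
    """Parse a /v1/chat/completions body and reject unexpected
    chat_template_kwargs keys before the request reaches vLLM."""
    body = json.loads(raw_body)
    kwargs = body.get("chat_template_kwargs") or {}
    unexpected = set(kwargs) - ALLOWED_TEMPLATE_KWARGS
    if unexpected:
        raise ValueError(f"rejected chat_template_kwargs keys: {sorted(unexpected)}")
    return body

if __name__ == "__main__":
    payload = {
        "model": "example-model",
        "messages": [{"role": "user", "content": "hi"}],
        "chat_template_kwargs": {"enable_thinking": True},
    }
    print(validate_request_body(json.dumps(payload).encode()))

Upgrading to 0.11.1 or later remains the actual remediation; a shim like this only narrows the attack surface in the interim.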

CVSS Metrics

Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Attack Vector: Network
Attack Complexity: Low
Privileges Required: Low
User Interaction: None
Scope: Unchanged
Confidentiality: None
Integrity: None
Availability: High

Weaknesses: CWE-770 (Allocation of Resources Without Limits or Throttling)
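The metric list above is simply the expansion of the vector string. A minimal sketch of that expansion (with the label tables abbreviated to the values this particular vector uses):

# Minimal CVSS v3.1 vector expander, assuming well-formed input.
METRIC_NAMES = {
    "AV": "Attack Vector", "AC": "Attack Complexity",
    "PR": "Privileges Required", "UI": "User Interaction",
    "S": "Scope", "C": "Confidentiality", "I": "Integrity",
    "A": "Availability",
}
# Value meanings depend on the metric ("N" is Network for AV but None for UI).
VALUE_NAMES = {
    "AV": {"N": "Network"}, "AC": {"L": "Low"}, "PR": {"L": "Low"},
    "UI": {"N": "None"}, "S": {"U": "Unchanged"}, "C": {"N": "None"},
    "I": {"N": "None"}, "A": {"H": "High"},
}

def expand(vector: str) -> dict:
    parts = vector.split("/")[1:]  # drop the "CVSS:3.1" prefix
    out = {}
    for part in parts:
        metric, value = part.split(":")
        out[METRIC_NAMES[metric]] = VALUE_NAMES[metric].get(value, value)
    return out

print(expand("CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H"))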

Metadata

Primary Vendor: vLLM
Published: 2025-11-21
Last Modified: 2025-12-04
Source: NIST NVD
Note: Verify all details with official vendor sources before applying patches.

Affected Products

vllm : vllm (versions >= 0.5.5, < 0.11.1)
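To check a deployment against the affected range above, a minimal sketch (assuming the package is installed under the name vllm and that the third-party packaging library is available):

from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version

def is_affected() -> bool:
    """Return True if the installed vLLM falls in the range
    affected by CVE-2025-62426 (>= 0.5.5, < 0.11.1)."""
    try:
        v = Version(version("vllm"))
    except PackageNotFoundError:
        return False  # vLLM is not installed in this environment
    return Version("0.5.5") <= v < Version("0.11.1")

if __name__ == "__main__":
    print("vulnerable to CVE-2025-62426:", is_affected())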
