
CVE-2026-27893

Severity: HIGH
CVSS Score: 8.8
Published: 2026-03-27
Updated: 2026-03-30

Description

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
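The flaw described above can be illustrated with a minimal, hypothetical sketch (these function names are illustrative only, not actual vLLM source): a sub-component loader hardcodes `trust_remote_code=True` instead of propagating the user's setting, so the `--trust-remote-code=False` opt-out is silently discarded.

```python
# Hypothetical sketch of the bug pattern; not actual vLLM code.

def load_subcomponent_buggy(repo_id: str, user_trust_remote_code: bool) -> dict:
    """Vulnerable pattern: the user's opt-out is silently discarded."""
    return {"repo": repo_id, "trust_remote_code": True}  # hardcoded True

def load_subcomponent_fixed(repo_id: str, user_trust_remote_code: bool) -> dict:
    """Patched pattern: the user's setting is propagated to the loader."""
    return {"repo": repo_id, "trust_remote_code": user_trust_remote_code}

# Even with --trust-remote-code=False, the buggy loader still allows a
# malicious repository's code to execute during model loading.
buggy = load_subcomponent_buggy("some/model", user_trust_remote_code=False)
fixed = load_subcomponent_fixed("some/model", user_trust_remote_code=False)
print(buggy["trust_remote_code"], fixed["trust_remote_code"])  # True False
```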

CVSS Metrics

Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Attack Vector: Network
Attack Complexity: Low
Privileges Required: None
User Interaction: Required
Scope: Unchanged
Confidentiality: High
Integrity: High
Availability: High
Weakness: CWE-693 (Protection Mechanism Failure)
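The metric fields above are exactly what the vector string encodes. A small sketch shows how the vector decomposes into those key/value pairs:

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3.1 vector string into its metric fields."""
    parts = vector.split("/")
    # The first token is the version prefix, e.g. "CVSS:3.1"; the rest
    # are metric:value pairs such as "AV:N" (Attack Vector: Network).
    return dict(p.split(":", 1) for p in parts[1:])

metrics = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H")
print(metrics["AV"], metrics["UI"])  # N R
```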

Metadata

Primary Vendor: VLLM
Published: 2026-03-27
Last Modified: 2026-03-30
Source: NIST NVD
Note: Verify all details with official vendor sources before applying patches.

Affected Products

vllm : vllm (versions >= 0.10.1, < 0.18.0)
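Given the affected range above, a quick local check is to compare the installed vLLM version against the advisory's bounds. This is a simplified sketch (it assumes plain `X.Y.Z` version strings and ignores pre-release suffixes):

```python
from importlib.metadata import version, PackageNotFoundError

AFFECTED_MIN = (0, 10, 1)   # first affected version per the advisory
PATCHED = (0, 18, 0)        # first patched version per the advisory

def parse_version(v: str) -> tuple:
    """Convert a dotted version string into a comparable tuple of ints."""
    return tuple(int(x) for x in v.split(".")[:3])

def is_affected(v: str) -> bool:
    """True if the version falls inside the advisory's affected range."""
    return AFFECTED_MIN <= parse_version(v) < PATCHED

try:
    installed = version("vllm")
    print("vllm", installed, "affected:", is_affected(installed))
except PackageNotFoundError:
    print("vllm is not installed in this environment")
```

If the installed version is affected, upgrading to 0.18.0 or later resolves the issue.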

