GHSA-ggpf-24jw-3fcw – vllm
Package
Manager: pip
Name: vllm
Vulnerable Version: >=0 <0.8.0
Severity
Level: Critical
CVSS v3.1: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
CVSS v4.0: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N
EPSS: N/A (percentile: N/A)
Details
CVE-2025-24357 Malicious model remote code execution fix bypass with PyTorch < 2.6.0

## Description
https://github.com/vllm-project/vllm/security/advisories/GHSA-rh4j-5rhw-hr54 reported a vulnerability where loading a malicious model could result in code execution on the vLLM host. The fix, which passed `weights_only=True` to calls to `torch.load()`, did not solve the problem on PyTorch versions prior to 2.6.0. PyTorch has issued a new CVE for this problem: https://github.com/advisories/GHSA-53q9-r3pm-6pq6. This means that versions of vLLM using PyTorch before 2.6.0 remain vulnerable.

## Background Knowledge
When users install vLLM according to the official manual, the PyTorch version is pinned in the requirements.txt file, so by default the installation pulls in PyTorch 2.5.1. The patch for CVE-2025-24357 relied on `weights_only=True`, but that flag is not a reliable safeguard in PyTorch 2.5.1 and earlier; we used this interface to prove that it is not safe.

## Fix
Update the PyTorch version to 2.6.0.

## Credit
This vulnerability was found by Ji'an Zhou and Li'shuo Song.
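Because the advisory hinges on a version threshold (`weights_only=True` is only an effective safeguard from PyTorch 2.6.0 onward), a deployment can gate checkpoint loading on the installed PyTorch version. The sketch below is a hypothetical helper, not part of vLLM or PyTorch; the function name and version-parsing approach are assumptions for illustration.

```python
def torch_load_is_patched(torch_version: str) -> bool:
    """Return True if this PyTorch version enforces weights_only=True safely.

    Per GHSA-53q9-r3pm-6pq6, weights_only=True in torch.load() could be
    bypassed before PyTorch 2.6.0, so anything below (2, 6, 0) is unsafe.
    """
    # Strip any build/local suffix such as "2.5.1+cu121" before parsing.
    base = torch_version.split("+")[0]
    parts = tuple(int(p) for p in base.split(".")[:3])
    return parts >= (2, 6, 0)


# A caller could refuse to deserialize untrusted checkpoints on a
# vulnerable PyTorch, e.g. using torch.__version__ at runtime:
#     if not torch_load_is_patched(torch.__version__):
#         raise RuntimeError("Upgrade PyTorch to >= 2.6.0 before loading models")
```

The `+cu121`-style suffix handling matters in practice, since GPU builds of PyTorch commonly report versions like `2.5.1+cu121`.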
Metadata
Created: 2025-04-23T02:26:06Z
Modified: 2025-04-23T02:26:06Z
Source: https://github.com/github/advisory-database/blob/main/advisories/github-reviewed/2025/04/GHSA-ggpf-24jw-3fcw/GHSA-ggpf-24jw-3fcw.json
CWE IDs: ["CWE-1395"]
Alternative ID: N/A
Finding: F061
Auto approve: 1