CVE-2026-21869

Publication date 8 January 2026

Last updated 13 January 2026


Ubuntu priority

CVSS v3 severity score

8.8 · High


Description

llama.cpp provides inference for several LLM models in C/C++. In commit 55d4206c8 and earlier, the n_discard parameter is parsed directly from JSON input in the llama.cpp server's completion endpoints without validation that it is non-negative. When a negative value is supplied and the context fills up, llama_memory_seq_rm/add receive a reversed range and a negative offset, causing out-of-bounds memory writes in the token evaluation loop. This deterministic memory corruption can crash the process or enable remote code execution (RCE). No fix is available at the time of publication.

Status

Package    Ubuntu release    Status
llama.cpp  25.10 questing    Needs evaluation
llama.cpp  25.04 plucky      Not in release
llama.cpp  24.04 LTS noble   Not in release
llama.cpp  22.04 LTS jammy   Not in release

Severity score breakdown

Parameter               Value
Base score              8.8 · High
Attack vector           Network
Attack complexity       Low
Privileges required     None
User interaction        Required
Scope                   Unchanged
Confidentiality impact  High
Integrity impact        High
Availability impact     High
Vector                  CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H