{"schema_version":"1.7.2","id":"OESA-2025-2371","modified":"2025-10-11T13:20:09Z","published":"2025-10-11T13:20:09Z","upstream":["CVE-2025-52566"],"summary":"llama.cpp security update","details":"Security Fix(es):\n\nllama.cpp is a C/C++ inference engine for several LLM models. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) caused an incorrect size comparison when copying tokens, allowing a heap overflow in the inference engine via carefully crafted text input during tokenization. This issue has been patched in version b5721. (CVE-2025-52566)","affected":[{"package":{"ecosystem":"openEuler:24.03-LTS-SP1","name":"llama.cpp","purl":"pkg:rpm/openEuler/llama.cpp?distro=openEuler-24.03-LTS-SP1"},"ranges":[{"type":"ECOSYSTEM","events":[{"introduced":"0"},{"fixed":"20230815-5.oe2403sp1"}]}],"ecosystem_specific":{"aarch64":["llama.cpp-20230815-5.oe2403sp1.aarch64.rpm"],"src":["llama.cpp-20230815-5.oe2403sp1.src.rpm"],"x86_64":["llama.cpp-20230815-5.oe2403sp1.x86_64.rpm"]}}],"references":[{"type":"ADVISORY","url":"https://www.openeuler.org/zh/security/security-bulletins/detail/?id=openEuler-SA-2025-2371"},{"type":"ADVISORY","url":"https://nvd.nist.gov/vuln/detail/CVE-2025-52566"}],"database_specific":{"severity":"High"}}
