| File | Last commit message | Last commit date |
| --- | --- | --- |
| ggml-alloc.h | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (llama/16653) | 2025-12-17 15:19:49 +02:00 |
| ggml-backend.h | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (llama/16653) | 2025-12-17 15:19:49 +02:00 |
| ggml-blas.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-cann.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-cpp.h | ggml : fix ggml_gallocr_ptr type (ggml/1205) | 2025-05-01 13:29:02 +03:00 |
| ggml-cpu.h | ggml-cpu : fix RISC-V Q4_0 repack select and RVV feature reporting (llama/17951) | 2025-12-17 15:19:47 +02:00 |
| ggml-cuda.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-hexagon.h | Add experimental ggml-hexagon backend for the Hexagon NPU (llama/16547) | 2025-11-09 23:38:03 +02:00 |
| ggml-metal.h | metal : refactor + optimize v2 (llama/15995) | 2025-09-20 13:46:10 +03:00 |
| ggml-opencl.h | Introducing experimental OpenCL backend with support for Qualcomm Adreno GPUs (llama/10693) | 2024-12-18 12:52:16 +02:00 |
| ggml-opt.h | finetune: SGD optimizer, more CLI args (llama/13873) | 2025-08-18 20:30:45 +03:00 |
| ggml-rpc.h | rpc : fix alloc size logic (llama/17116) | 2025-12-12 17:53:18 +02:00 |
| ggml-sycl.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-vulkan.h | vulkan: Make Vulkan optional at runtime (ggml/11493). (llama/11494) | 2025-02-27 08:55:36 +02:00 |
| ggml-webgpu.h | ggml: Add initial WebGPU backend (llama/14521) | 2025-07-20 00:23:50 +03:00 |
| ggml-zdnn.h | zdnn: refactor codebase + add docs (llama/16178) | 2025-09-29 15:18:09 +03:00 |
| ggml-zendnn.h | ggml-zendnn : add ZenDNN backend for AMD CPUs (llama/17690) | 2025-12-12 17:53:21 +02:00 |
| ggml.h | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (llama/16653) | 2025-12-17 15:19:49 +02:00 |
| gguf.h | GGUF: C++ refactor, backend support, misc fixes (skip) (llama/11030) | 2025-01-14 10:38:01 +02:00 |