whisper.cpp/ggml/src
Georgi Gerganov 945d3151d9 ggml : restore ggml_type_sizef() to avoid major version bump (ggml/1441) 2026-03-18 15:18:24 +02:00
ggml-blas ggml: update comments for backends which have no memory to report (llama/20157) 2026-03-16 13:10:15 +02:00
ggml-cann CANN: Remove unnecessary wrapper for `ggml_backend_buft_is_cann` (llama/18968) 2026-02-15 21:44:37 +02:00
ggml-cpu ggml : try fix arm build (#0) 2026-03-16 13:10:15 +02:00
ggml-cuda CUDA: limit number of FA stream-k CUDA blocks (llama/20586) 2026-03-16 13:10:15 +02:00
ggml-hexagon hexagon: Q4_0 and MXFP4 repack fixes (llama/20527) 2026-03-16 13:10:15 +02:00
ggml-hip hip: compile debug builds with -O2 on hip to avoid a compiler bug (llama/20392) 2026-03-16 13:10:15 +02:00
ggml-metal metal : add FA specialization for HSK = 320, HSV = 256 (llama/20549) 2026-03-16 13:10:15 +02:00
ggml-musa CUDA: faster tile FA, add oob checks, more HSs (llama/16492) 2025-10-15 09:29:17 +03:00
ggml-opencl opencl: fix l2_norm (llama/20480) 2026-03-16 13:10:15 +02:00
ggml-openvino ggml : add OpenVINO backend (llama/15307) 2026-03-16 13:10:15 +02:00
ggml-rpc rpc : use unordered_map::reserve and emplace (llama/18513) 2026-01-14 09:11:59 +02:00
ggml-sycl add op gated_delta_net (llama/20455) 2026-03-16 13:10:15 +02:00
ggml-virtgpu ggml-virtgpu: improve the reliability of the code (llama/19846) 2026-02-27 20:57:58 +02:00
ggml-vulkan vulkan: use graphics queue on AMD (llama/20551) 2026-03-16 13:10:15 +02:00
ggml-webgpu ggml-webgpu: Add supports for `GGML_OP_REPEAT` (llama/20230) 2026-03-16 13:10:15 +02:00
ggml-zdnn ggml-zdnn : mark zDNN buffers as non-host (llama/18967) 2026-01-30 15:56:40 +02:00
ggml-zendnn ggml-zendnn: update code for latest ZenDNN API (llama/19923) 2026-02-27 20:57:58 +02:00
CMakeLists.txt ggml : add OpenVINO backend (llama/15307) 2026-03-16 13:10:15 +02:00
ggml-alloc.c ggml : make `ggml_is_view` as API (llama/19539) 2026-02-27 20:57:58 +02:00
ggml-backend-dl.cpp hexagon: enable offloading to Hexagon on Windows on Snapdragon (llama/19150) 2026-01-30 15:56:40 +02:00
ggml-backend-dl.h hexagon: enable offloading to Hexagon on Windows on Snapdragon (llama/19150) 2026-01-30 15:56:40 +02:00
ggml-backend-impl.h llama: use host memory if device reports 0 memory (llama/18587) 2026-01-14 09:11:59 +02:00
ggml-backend-reg.cpp ggml : add OpenVINO backend (llama/15307) 2026-03-16 13:10:15 +02:00
ggml-backend.cpp llama : disable graph reuse with pipeline parallelism (llama/20463) 2026-03-16 13:10:15 +02:00
ggml-common.h ggml : add NVFP4 quantization type support (llama/19769) 2026-03-16 13:10:15 +02:00
ggml-impl.h ggml : add NVFP4 quantization type support (llama/19769) 2026-03-16 13:10:15 +02:00
ggml-opt.cpp finetune: SGD optimizer, more CLI args (llama/13873) 2025-08-18 20:30:45 +03:00
ggml-quants.c ggml : guard against sumq2 being 0 in IQ4_NL (llama/20460) 2026-03-16 13:10:15 +02:00
ggml-quants.h ggml : add NVFP4 quantization type support (llama/19769) 2026-03-16 13:10:15 +02:00
ggml-threading.cpp ggml : build backends as libraries (llama/10256) 2024-11-20 21:00:08 +02:00
ggml-threading.h remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797) 2024-12-18 12:52:16 +02:00
ggml.c ggml : restore ggml_type_sizef() to avoid major version bump (ggml/1441) 2026-03-18 15:18:24 +02:00
ggml.cpp ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) 2025-05-29 09:56:26 +03:00
gguf.cpp gguf : sync (ggml/0) 2026-02-27 20:57:58 +02:00