whisper.cpp/ggml/src
Latest commit: f8242ec483 · junchao-zhao · ggml : fix LoongArch compile error with 128-bit SIMD (llama/11701) · 2025-02-27 08:55:36 +02:00
| Name | Last commit message | Date |
|------|---------------------|------|
| ggml-amx/ | ggml : adapt AMX to tensor->grad removal (llama/0) | 2024-11-20 21:00:08 +02:00 |
| ggml-blas/ | ggml : add support for dynamic loading of backends (llama/10469) | 2024-12-08 20:14:35 +02:00 |
| ggml-cann/ | llama : add Qwen2VL support + multimodal RoPE (llama/10361) | 2024-12-18 12:52:16 +02:00 |
| ggml-cpu/ | ggml : fix LoongArch compile error with 128-bit SIMD (llama/11701) | 2025-02-27 08:55:36 +02:00 |
| ggml-cuda/ | CUDA: support for mat. mul. with ne03 != ne13 (llama/11656) | 2025-02-27 08:55:36 +02:00 |
| ggml-hip/ | HIP: force max threads per block to be 1024 (llama/11621) | 2025-02-27 08:55:36 +02:00 |
| ggml-kompute/ | llama : add Qwen2VL support + multimodal RoPE (llama/10361) | 2024-12-18 12:52:16 +02:00 |
| ggml-metal/ | metal : avoid breaking build when metal API predates TARGET_OS_VISION (llama/11690) | 2025-02-27 08:55:36 +02:00 |
| ggml-musa/ | CUDA: use mma PTX instructions for FlashAttention (llama/11583) | 2025-02-03 22:00:57 +02:00 |
| ggml-opencl/ | ggml : add opencl backend (skip) (llama/10693) | 2025-01-14 10:38:01 +02:00 |
| ggml-rpc/ | rpc: fix known RCE in rpc-server (ggml/1103) | 2025-02-27 08:55:36 +02:00 |
| ggml-sycl/ | SYCL : SOFTMAX F16 mask support and other fixes (llama/11261) | 2025-02-03 22:00:57 +02:00 |
| ggml-vulkan/ | vulkan: optimize coopmat2 iq2/iq3 callbacks (llama/11521) | 2025-02-27 08:55:36 +02:00 |
| CMakeLists.txt | `ci`: use sccache on windows instead of ccache (llama/11545) | 2025-02-03 22:00:57 +02:00 |
| ggml-alloc.c | vulkan: use smaller combined allocations to avoid fragmentation (llama/11551) | 2025-02-27 08:55:36 +02:00 |
| ggml-backend-impl.h | rpc : early register backend devices (llama/11262) | 2025-02-03 22:00:57 +02:00 |
| ggml-backend-reg.cpp | ggml : allow loading backend with env variable (ggml/1059) | 2025-01-14 10:38:01 +02:00 |
| ggml-backend.cpp | ggml-backend : only offload from host buffers (fix) (llama/11124) | 2025-01-14 10:38:01 +02:00 |
| ggml-common.h | CUDA: rename macros to avoid conflicts with WinAPI (llama/10736) | 2024-12-18 12:52:16 +02:00 |
| ggml-impl.h | GGUF: C++ refactor, backend support, misc fixes (llama/11030) | 2025-01-14 10:38:01 +02:00 |
| ggml-opt.cpp | ggml-opt: fix data corruption (ggml/1022) | 2024-12-08 20:14:35 +02:00 |
| ggml-quants.c | ggml : refactor online repacking (llama/10446) | 2024-12-18 12:52:16 +02:00 |
| ggml-quants.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-threading.cpp | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797) | 2024-12-18 12:52:16 +02:00 |
| ggml.c | CPU/CUDA: fix (GQA) mul mat back, add CUDA support (llama/11380) | 2025-02-03 22:00:57 +02:00 |
| gguf.cpp | cmake : add sanitizer flags for llama.cpp (llama/11279) | 2025-02-03 22:00:57 +02:00 |