whisper.cpp/ggml/src
Latest commit: Georgi Gerganov 1fbdfb1d36 files : remove old wkv6 (#0) (ggml-ci) 2025-03-27 11:06:03 +02:00
ggml-amx ggml : adapt AMX to tensor->grad removal (llama/0) 2024-11-20 21:00:08 +02:00
ggml-blas ggml : add support for dynamic loading of backends (llama/10469) 2024-12-08 20:14:35 +02:00
ggml-cann MUL_MAT optimization (llama/12382) 2025-03-27 11:06:03 +02:00
ggml-cpu ggml : sync/merge cmake,riscv,powerpc, add common.cmake (ggml/0) 2025-03-27 11:06:03 +02:00
ggml-cuda files : remove old wkv6 (#0) 2025-03-27 11:06:03 +02:00
ggml-hip HIP: implement FlashAttention via rocWMMA for CDNA and RDNA3+ (llama/12032) 2025-03-08 15:13:01 +02:00
ggml-kompute llama : add Qwen2VL support + multimodal RoPE (llama/10361) 2024-12-18 12:52:16 +02:00
ggml-metal metal : refactor mat-vec code (llama/12569) 2025-03-27 11:06:03 +02:00
ggml-musa cuda : enable CUDA Graph on CUDA Toolkit < 12.x (llama/12394) 2025-03-27 11:06:03 +02:00
ggml-opencl opencl: simplify kernel embedding logic in cmakefile (llama/12503) 2025-03-27 11:06:03 +02:00
ggml-rpc ggml : upgrade init_tensor API to return a ggml_status (llama/11854) 2025-03-08 15:13:01 +02:00
ggml-sycl files : remove old wkv6 (#0) 2025-03-27 11:06:03 +02:00
ggml-vulkan vulkan: fix mul_mat_vec failure in backend tests (llama/12529) 2025-03-27 11:06:03 +02:00
CMakeLists.txt Fix build on Windows when ccache enabled (ggml/9954) (llama/9976) 2025-03-27 11:06:03 +02:00
ggml-alloc.c ggml : upgrade init_tensor API to return a ggml_status (llama/11854) 2025-03-08 15:13:01 +02:00
ggml-backend-impl.h ggml : upgrade init_tensor API to return a ggml_status (llama/11854) 2025-03-08 15:13:01 +02:00
ggml-backend-reg.cpp ggml-backend : fix backend search path (llama/12330) 2025-03-27 11:06:03 +02:00
ggml-backend.cpp ggml : portability fixes for VS 2017 (llama/12150) 2025-03-08 15:13:01 +02:00
ggml-common.h CUDA: use arch list for compatibility check (llama/11775) 2025-02-27 08:55:36 +02:00
ggml-impl.h ggml : sync/merge cmake,riscv,powerpc, add common.cmake (ggml/0) 2025-03-27 11:06:03 +02:00
ggml-opt.cpp ggml-opt: fix data corruption (ggml/1022) 2024-12-08 20:14:35 +02:00
ggml-quants.c ggml : portability fixes for VS 2017 (llama/12150) 2025-03-08 15:13:01 +02:00
ggml-quants.h ggml : build backends as libraries (llama/10256) 2024-11-20 21:00:08 +02:00
ggml-threading.cpp ggml : build backends as libraries (llama/10256) 2024-11-20 21:00:08 +02:00
ggml-threading.h remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797) 2024-12-18 12:52:16 +02:00
ggml.c llama: Add support for RWKV v7 architecture (llama/12412) 2025-03-27 11:06:03 +02:00
gguf.cpp cmake : add sanitizer flags for llama.cpp (llama/11279) 2025-02-03 22:00:57 +02:00