File | Last commit | Date
---- | ----------- | ----
ggml-amx | ggml : adapt AMX to tensor->grad removal (llama/0) | 2024-11-20 21:00:08 +02:00
ggml-blas | ggml : add support for dynamic loading of backends (llama/10469) | 2024-12-08 20:14:35 +02:00
ggml-cann | ggml : upgrade init_tensor API to return a ggml_status (llama/11854) | 2025-03-08 15:13:01 +02:00
ggml-cpu | ggml-cpu: faster AVX2 variant for IQ1_M (llama/12216) | 2025-03-08 15:13:01 +02:00
ggml-cuda | CUDA: fix FA logic for PTX 7.0 and CC >= 7.5 (llama/12222) | 2025-03-08 15:13:01 +02:00
ggml-hip | HIP: implement FlashAttention via rocWMMA for CDNA and RDNA3+ (llama/12032) | 2025-03-08 15:13:01 +02:00
ggml-kompute | llama : add Qwen2VL support + multimodal RoPE (llama/10361) | 2024-12-18 12:52:16 +02:00
ggml-metal | metal : simplify kernel arguments using a struct (ggml/3229) (llama/12194) | 2025-03-08 15:13:01 +02:00
ggml-musa | CUDA: app option to compile without FlashAttention (llama/12025) | 2025-02-27 08:55:36 +02:00
ggml-opencl | opencl: Noncontiguous `norm`, `rms_norm`, disable `fp16` for some ops (llama/12217) | 2025-03-08 15:13:01 +02:00
ggml-rpc | ggml : upgrade init_tensor API to return a ggml_status (llama/11854) | 2025-03-08 15:13:01 +02:00
ggml-sycl | SYCL: Disable f16 Unary OPs as not supported by the kernels (llama/12201) | 2025-03-08 15:13:01 +02:00
ggml-vulkan | vulkan : sync (llama/0) | 2025-03-08 15:13:01 +02:00
CMakeLists.txt | cmake : fix undefined reference errors for std::filesystem in ggml (#12092) (llama/12094) | 2025-03-08 15:13:01 +02:00
ggml-alloc.c | ggml : upgrade init_tensor API to return a ggml_status (llama/11854) | 2025-03-08 15:13:01 +02:00
ggml-backend-impl.h | ggml : upgrade init_tensor API to return a ggml_status (llama/11854) | 2025-03-08 15:13:01 +02:00
ggml-backend-reg.cpp | ggml : portability fixes for VS 2017 (llama/12150) | 2025-03-08 15:13:01 +02:00
ggml-backend.cpp | ggml : portability fixes for VS 2017 (llama/12150) | 2025-03-08 15:13:01 +02:00
ggml-common.h | CUDA: use arch list for compatibility check (llama/11775) | 2025-02-27 08:55:36 +02:00
ggml-impl.h | MUSA: support ARM64 and enable dp4a .etc (llama/11843) | 2025-02-27 08:55:36 +02:00
ggml-opt.cpp | ggml-opt: fix data corruption (ggml/1022) | 2024-12-08 20:14:35 +02:00
ggml-quants.c | ggml : portability fixes for VS 2017 (llama/12150) | 2025-03-08 15:13:01 +02:00
ggml-quants.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00
ggml-threading.cpp | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00
ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797) | 2024-12-18 12:52:16 +02:00
ggml.c | ggml : ggml_compute_forward_concat() for arbitrary tensor type (ggml/1118) | 2025-03-08 15:13:01 +02:00
gguf.cpp | cmake : add sanitizer flags for llama.cpp (llama/11279) | 2025-02-03 22:00:57 +02:00