whisper.cpp/ggml/src

Latest commit: dc693ca8c9 by Georgi Gerganov, 2025-09-20 13:42:42 +03:00
metal : improve `MUL_MAT_ID` (llama/15541)

* metal : mul_mm_id remove hdst
* metal : remove mul_mm_id hsrc1
* metal : mul_mm_id simplify + add test
* metal : opt mul_mm_id map0
* metal : optimize mul_mm_id id gathering
* metal : mul/div opt
* metal : optimize mul_mm_id_map0

ggml-ci
Name                 | Last commit message                                                                                          | Date
ggml-amx             | ggml : adapt AMX to tensor->grad removal (llama/0)                                                           | 2024-11-20 21:00:08 +02:00
ggml-blas            | ggml : Fix MKL detection by quoting BLAS_INCLUDE_DIRS (#3426)                                                | 2025-09-19 05:33:53 +02:00
ggml-cann            | CANN: ROPE cache sin/cos repeat (llama/15501)                                                                | 2025-09-20 13:42:41 +03:00
ggml-cpu             | ggml: add `conv3d` op (llama/15182)                                                                          | 2025-09-20 13:42:39 +03:00
ggml-cuda            | Add a warning for special devices (llama/15563)                                                              | 2025-09-20 13:42:42 +03:00
ggml-hip             | HIP: bump requirement to rocm 6.1 (llama/15296)                                                              | 2025-08-18 20:30:45 +03:00
ggml-metal           | metal : improve `MUL_MAT_ID` (llama/15541)                                                                   | 2025-09-20 13:42:42 +03:00
ggml-musa            | CUDA: replace GGML_CUDA_F16 with CUDA arch checks (llama/15433)                                              | 2025-09-20 13:42:38 +03:00
ggml-opencl          | opencl: fix support ops condition for `rms_norm` (llama/15560)                                               | 2025-09-20 13:42:41 +03:00
ggml-rpc             | ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others) (llama/15188) | 2025-08-18 20:30:45 +03:00
ggml-sycl            | vulkan : support ggml_mean (llama/15393)                                                                     | 2025-09-20 13:42:40 +03:00
ggml-vulkan          | vulkan: Remove splitting for mul_mat_id (llama/15568)                                                        | 2025-09-20 13:42:42 +03:00
ggml-webgpu          | ggml WebGPU: add support for quantization types (llama/15440)                                                | 2025-09-20 13:42:39 +03:00
ggml-zdnn            | ggml : initial zDNN backend (llama/14975)                                                                    | 2025-08-18 20:30:45 +03:00
CMakeLists.txt       | ggml: initial IBM zDNN backend (llama/14975)                                                                 | 2025-08-18 20:30:45 +03:00
ggml-alloc.c         | llama : add gpt-oss (llama/15091)                                                                            | 2025-08-18 20:30:45 +03:00
ggml-backend-impl.h  | ggml : upgrade init_tensor API to return a ggml_status (llama/11854)                                         | 2025-03-08 15:13:01 +02:00
ggml-backend-reg.cpp | ggml: initial IBM zDNN backend (llama/14975)                                                                 | 2025-08-18 20:30:45 +03:00
ggml-backend.cpp     | sched : fix possible use of wrong ids tensor when offloading moe prompt processing (llama/15488)             | 2025-09-20 13:42:39 +03:00
ggml-common.h        | llama : add gpt-oss (llama/15091)                                                                            | 2025-08-18 20:30:45 +03:00
ggml-impl.h          | llama : add gpt-oss (llama/15091)                                                                            | 2025-08-18 20:30:45 +03:00
ggml-opt.cpp         | finetune: SGD optimizer, more CLI args (llama/13873)                                                         | 2025-08-18 20:30:45 +03:00
ggml-quants.c        | ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (llama/15379)                                 | 2025-08-18 20:30:45 +03:00
ggml-quants.h        | llama : add gpt-oss (llama/15091)                                                                            | 2025-08-18 20:30:45 +03:00
ggml-threading.cpp   | ggml : build backends as libraries (llama/10256)                                                             | 2024-11-20 21:00:08 +02:00
ggml-threading.h     | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797)                                                        | 2024-12-18 12:52:16 +02:00
ggml.c               | ggml: add `conv3d` op (llama/15182)                                                                          | 2025-09-20 13:42:39 +03:00
ggml.cpp             | ggml : Print backtrace on uncaught C++ exceptions (ggml/1232)                                                | 2025-05-29 09:56:26 +03:00
gguf.cpp             | ggml : prevent integer overflow in gguf tensor size calculation (llama/14595)                                | 2025-07-12 19:23:56 +03:00