whisper.cpp/ggml/include
Latest commit: 471df139fa by Acly, 2025-06-21 07:34:17 +03:00
Add `ggml_roll` (ggml/1274)
* ggml : add ggml_roll
* use set/get_op_params & std::min
(a usage sketch for the new `ggml_roll` op follows the file listing below)
File | Last commit | Date
ggml-alloc.h | ggml : upgrade init_tensor API to return a ggml_status (llama/11854) | 2025-03-08 15:13:01 +02:00
ggml-backend.h | Add `--no-op-offload` to improve `-ot` pp perf in MoE models like llama4 400B (llama/13386) | 2025-05-13 13:59:21 +03:00
ggml-blas.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00
ggml-cann.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00
ggml-cpp.h | ggml : fix ggml_gallocr_ptr type (ggml/1205) | 2025-05-01 13:29:02 +03:00
ggml-cpu.h | ggml: move fp16/bf16 conversion optimizations to CPU backend + export conversion APIs (llama/13107) | 2025-05-01 13:29:02 +03:00
ggml-cuda.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00
ggml-kompute.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00
ggml-metal.h | repo : update links to new url (llama/11886) | 2025-02-27 08:55:36 +02:00
ggml-opencl.h | Introducing experimental OpenCL backend with support for Qualcomm Adreno GPUs (llama/10693) | 2024-12-18 12:52:16 +02:00
ggml-opt.h | mnist: fix segmentation fault (ggml/1227) | 2025-05-19 14:58:39 +03:00
ggml-rpc.h | rpc : do not wait for response when sending RPC_CMD_SET_TENSOR (llama/12943) | 2025-05-01 13:29:02 +03:00
ggml-sycl.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00
ggml-vulkan.h | vulkan: Make Vulkan optional at runtime (ggml/11493). (llama/11494) | 2025-02-27 08:55:36 +02:00
ggml.h | Add `ggml_roll` (ggml/1274) | 2025-06-21 07:34:17 +03:00
gguf.h | GGUF: C++ refactor, backend support, misc fixes (skip) (llama/11030) | 2025-01-14 10:38:01 +02:00
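The most recent change to this directory adds a `ggml_roll` operator to `ggml.h`. Below is a minimal, untested sketch of how such an op could be exercised through the public ggml API on the CPU backend. The `ggml_roll(ctx, a, shift0, shift1, shift2, shift3)` signature, the numpy.roll-like wrap-around semantics, and the sample values are assumptions inferred from the commit title, not taken from the header itself; consult `ggml.h` for the authoritative declaration.

```c
// Sketch (untested): build and compute a small graph containing the new ggml_roll op.
// ASSUMPTION: ggml_roll(ctx, a, shift0, shift1, shift2, shift3) with wrap-around
// semantics similar to numpy.roll. Verify against the declaration in ggml.h.
#include "ggml.h"
#include "ggml-cpu.h"   // ggml_graph_compute_with_ctx lives in the CPU backend header
#include <stdio.h>

int main(void) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16*1024*1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,   // allocate tensor data inside the context buffer
    };
    struct ggml_context * ctx = ggml_init(params);

    // input: [0, 1, 2, 3, 4, 5]
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 6);
    for (int i = 0; i < 6; ++i) {
        ((float *) a->data)[i] = (float) i;
    }

    // circular shift by 2 along dim 0, other dims untouched;
    // expected [4, 5, 0, 1, 2, 3] assuming numpy.roll-like semantics
    struct ggml_tensor * r = ggml_roll(ctx, a, 2, 0, 0, 0);

    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, r);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads =*/ 1);

    for (int i = 0; i < 6; ++i) {
        printf("%s%.0f", i == 0 ? "" : " ", ((float *) r->data)[i]);
    }
    printf("\n");

    ggml_free(ctx);
    return 0;
}
```

Building this against whisper.cpp's bundled ggml would require linking the ggml core and CPU-backend libraries produced by the "build backends as libraries" refactor noted in the listing above.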