whisper.cpp/ggml/include

Latest commit: f499271c4e by Aaron Teo, "ggml-cpu: drop support for nnpa intrinsics (llama/15821)", 2025-09-20 13:42:50 +03:00
| File | Last commit | Date |
| --- | --- | --- |
| ggml-alloc.h | ggml : upgrade init_tensor API to return a ggml_status (llama/11854) | 2025-03-08 15:13:01 +02:00 |
| ggml-backend.h | llama : separate compute buffer reserve from fattn check (llama/15696) | 2025-09-20 13:42:45 +03:00 |
| ggml-blas.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-cann.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-cpp.h | ggml : fix ggml_gallocr_ptr type (ggml/1205) | 2025-05-01 13:29:02 +03:00 |
| ggml-cpu.h | ggml-cpu: drop support for nnpa intrinsics (llama/15821) | 2025-09-20 13:42:50 +03:00 |
| ggml-cuda.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-metal.h | repo : update links to new url (llama/11886) | 2025-02-27 08:55:36 +02:00 |
| ggml-opencl.h | Introducing experimental OpenCL backend with support for Qualcomm Adreno GPUs (llama/10693) | 2024-12-18 12:52:16 +02:00 |
| ggml-opt.h | finetune: SGD optimizer, more CLI args (llama/13873) | 2025-08-18 20:30:45 +03:00 |
| ggml-rpc.h | rpc : do not wait for response when sending RPC_CMD_SET_TENSOR (llama/12943) | 2025-05-01 13:29:02 +03:00 |
| ggml-sycl.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-vulkan.h | vulkan: Make Vulkan optional at runtime (ggml/11493). (llama/11494) | 2025-02-27 08:55:36 +02:00 |
| ggml-webgpu.h | ggml: Add initial WebGPU backend (llama/14521) | 2025-07-20 00:23:50 +03:00 |
| ggml-zdnn.h | ggml: initial IBM zDNN backend (llama/14975) | 2025-08-18 20:30:45 +03:00 |
| ggml.h | ggml: add ops for WAN video model (cuda && cpu) (llama/15669) | 2025-09-20 13:42:49 +03:00 |
| gguf.h | GGUF: C++ refactor, backend support, misc fixes (skip) (llama/11030) | 2025-01-14 10:38:01 +02:00 |
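These headers form the public API surface of the ggml library vendored into whisper.cpp: ggml.h declares the core tensor and graph types, ggml-backend.h the generic backend interface, and the per-backend headers (ggml-cuda.h, ggml-metal.h, ggml-vulkan.h, and so on) each correspond to a backend built as a separate library since llama/10256. As a rough orientation only, here is a minimal sketch (not taken from this repository; exact declarations can vary with the checked-out revision) that uses just ggml.h and ggml-cpu.h to build and run a tiny compute graph on the CPU:

```c
// Minimal sketch: one element-wise add, computed on the CPU backend.
// Assumes a recent ggml revision where ggml_graph_compute_with_ctx()
// is declared in ggml-cpu.h (CPU-specific API split out of ggml.h).
#include <stdio.h>

#include "ggml.h"
#include "ggml-cpu.h"

int main(void) {
    // Reserve a small arena for tensor metadata and data.
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // Two 1-D f32 tensors and an element-wise add node.
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * c = ggml_add(ctx, a, b);

    // Fill the inputs directly (data lives in the context arena
    // because no_alloc is false).
    float * a_data = ggml_get_data_f32(a);
    float * b_data = ggml_get_data_f32(b);
    for (int i = 0; i < 4; ++i) {
        a_data[i] = (float) i;
        b_data[i] = 10.0f;
    }

    // Build the forward graph ending at c and evaluate it on the CPU.
    struct ggml_cgraph * graph = ggml_new_graph(ctx);
    ggml_build_forward_expand(graph, c);
    ggml_graph_compute_with_ctx(ctx, graph, /*n_threads=*/ 1);

    const float * c_data = ggml_get_data_f32(c);
    for (int i = 0; i < 4; ++i) {
        printf("c[%d] = %.1f\n", i, c_data[i]); // expect 10.0, 11.0, 12.0, 13.0
    }

    ggml_free(ctx);
    return 0;
}
```

To target a GPU backend instead, the same graph would be scheduled through the device-agnostic interface in ggml-backend.h (plus the matching backend header, e.g. ggml-cuda.h), rather than calling the CPU compute entry point directly.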