whisper.cpp/ggml/src
Latest commit: b68222f92c by Aman Gupta, 2025-06-21 07:34:17 +03:00
CUDA: add conv_2d_transpose (llama/14287)

* CUDA: add conv_2d_transpose
* remove direct include of cuda_fp16
* Review: add brackets for readability, remove ggml_set_param and add asserts
Name / Last commit / Date
ggml-amx ggml : adapt AMX to tensor->grad removal (llama/0) 2024-11-20 21:00:08 +02:00
ggml-blas cmake : Fix broken CMake error messages (ggml/1252) 2025-06-01 15:14:44 +03:00
ggml-cann CANN: Simplify the environment variable setting(#13104) 2025-06-10 12:40:33 +03:00
ggml-cpu Implement GGML_CPU_ALL_VARIANTS for PowerPC (llama/14286) 2025-06-21 07:34:17 +03:00
ggml-cuda CUDA: add conv_2d_transpose (llama/14287) 2025-06-21 07:34:17 +03:00
ggml-hip HIP: disable rocwmma on gfx12 by default until rocm 7.0 (llama/14202) 2025-06-18 12:40:34 +03:00
ggml-kompute llama : add Qwen2VL support + multimodal RoPE (llama/10361) 2024-12-18 12:52:16 +02:00
ggml-metal metal : add mean kernel (llama/14267) 2025-06-21 07:34:17 +03:00
ggml-musa musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (llama/13647) 2025-05-27 18:03:00 +03:00
ggml-opencl opencl: add `mul_mv_id_q4_0_f32_8x_flat` (llama/14003) 2025-06-18 12:40:34 +03:00
ggml-rpc rpc : nicer error messages for RPC server crash (llama/14076) 2025-06-18 12:40:34 +03:00
ggml-sycl sycl: add usage of enqueue_functions extension (llama/14244) 2025-06-21 07:34:17 +03:00
ggml-vulkan Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (llama/14249) 2025-06-21 07:34:17 +03:00
CMakeLists.txt Implement GGML_CPU_ALL_VARIANTS for PowerPC (llama/14286) 2025-06-21 07:34:17 +03:00
ggml-alloc.c ggml: Don't assert fail when tensor data changes (llama/13222) 2025-05-07 15:39:32 +03:00
ggml-backend-impl.h ggml : upgrade init_tensor API to return a ggml_status (llama/11854) 2025-03-08 15:13:01 +02:00
ggml-backend-reg.cpp build : suppress gcc15 compile warnings (llama/14261) 2025-06-21 07:34:17 +03:00
ggml-backend.cpp sched : avoid changing cur_copy when a graph is already allocated (llama/13922) 2025-06-01 15:14:44 +03:00
ggml-common.h ggml-cpu : split arch-specific implementations (llama/13892) 2025-06-10 12:40:33 +03:00
ggml-impl.h ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) 2025-05-29 09:56:26 +03:00
ggml-opt.cpp mnist: fix segmentation fault (ggml/1227) 2025-05-19 14:58:39 +03:00
ggml-quants.c ggml-cpu : split arch-specific implementations (llama/13892) 2025-06-10 12:40:33 +03:00
ggml-quants.h ggml : build backends as libraries (llama/10256) 2024-11-20 21:00:08 +02:00
ggml-threading.cpp ggml : build backends as libraries (llama/10256) 2024-11-20 21:00:08 +02:00
ggml-threading.h remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797) 2024-12-18 12:52:16 +02:00
ggml.c Add `ggml_roll` (ggml/1274) 2025-06-21 07:34:17 +03:00
ggml.cpp ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) 2025-05-29 09:56:26 +03:00
gguf.cpp gguf: fix failure on version == 0 (llama/13956) 2025-06-10 12:40:33 +03:00