Commit Graph

4079 Commits

Author SHA1 Message Date
uvos b73f67d3f6 HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0 (llama/16221)
* HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0

rocwmma 2.0.0 includes a bug in the code faking fp16 accumulation on CDNA

* CUDA: Fix volta condition in ggml_cuda_should_use_wmma_fattn
2025-10-12 11:16:23 +03:00
Eve b0560310aa vulkan: make ggml_vk_default_dispatcher support older vulkan headers (llama/16345)
* make ggml_vk_default_dispatcher support older vulkan headers

* simplify with using
2025-10-12 11:16:23 +03:00
lhez 31bb869929 opencl: support pad_ext (llama/15888) 2025-10-12 11:16:23 +03:00
Reese Levine 8208cea829 ggml webgpu: support for rope,div,sub,glu,scale,cont operators (llama/16187)
* Work on rope

* Simplify inplace operation generation and combine mul/add generation

* Work on rope variants

* implement neox rope

* rope complete

* Add sub,div,glu operators

* implement scale op

* Update cpy shader to handle cont/more types

* formatting

* Update test vars printing for rope,rms_norm

* Avoid ROPE hardcoded constants

* Add TODO to change ROPE constants to enum

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* fix TODO comment

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-12 11:16:23 +03:00
lhez 199626d79e opencl: support ne3 in get_rows (llama/15866) 2025-10-12 11:16:23 +03:00
Ruben Ortlam c3b5c4d934
whisper : Support using devices of type iGPU (#3469) 2025-10-11 17:55:16 +03:00
Andreas Lubbe 85871a9469
whisper : add support for --carry-initial-prompt (#3395)
* Add support for --carry-initial-prompt

* PR fixes for ruby and go

* Refactoring for readability

* WIP 1

* WIP 2

* PR fixes

* More PR fixes

* PR fix

* Further simplification

* d'oh

* One more logic fix

* Update src/whisper.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Truncate prompt_past0 upon initialization

* Slight simplification

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-10 19:51:15 +03:00
Andreas Lubbe a0ca50f3b9
cli: Fix assignment for vad_min_silence_duration_ms (#3467)
* cli: Fix assignment for vad_min_silence_duration_ms

Found and fixed this simple copy/paste error

* server : fix vad_min_silence_duration_ms assignment

---------

Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2025-10-10 15:21:03 +02:00
Georgi Gerganov d3a29d7b88
minor : fix code style (#3463) 2025-10-10 11:33:01 +03:00
Silviu Caragea 85d1d3d3dc
vad : free vad_segments in whisper_vad (#3463)
This commit fixes multiple issues:

* memory leak because vad_segments is never released
* avoid a segmentation fault when whisper_vad_segments_from_samples returns nullptr
* avoid a potential segmentation fault when the app fails to allocate memory for the filtered samples: the vad context was released, but then released again within the state itself when whisper_free_state is called
2025-10-10 06:20:21 +02:00
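The two caller-side fixes above (always free the returned segments, and guard against a nullptr result) can be sketched with stand-in types. The names below are illustrative only, not the actual whisper.cpp API:

```cpp
#include <vector>

struct vad_segments { std::vector<float> t0, t1; };

// Stand-in for a segments-from-samples call that may fail and return nullptr.
vad_segments * vad_segments_from_samples(int n_samples) {
    if (n_samples <= 0) return nullptr; // allocation/processing failure
    return new vad_segments{{0.0f}, {1.0f}};
}

// Caller pattern after the fix: check for nullptr, and always free the result.
bool run_vad(int n_samples) {
    vad_segments * segs = vad_segments_from_samples(n_samples);
    if (segs == nullptr) {
        return false; // previously this case could dereference a null pointer
    }
    const bool ok = !segs->t0.empty();
    delete segs;      // previously the segments were leaked
    return ok;
}
```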
Georgi Gerganov 98930fded1
whisper : clean-up headers 2025-10-09 10:48:52 +03:00
KITAITI Makoto 8877dfc11a
[skip ci] Bump Ruby bindings' version to 1.3.4 (#3461) 2025-10-08 20:45:20 +09:00
Daniel Bevenius c8223a8548
vad : fix memory leaks in VAD implementation (#3453)
* vad : fix memory leak by storing ggml_context in vad context struct

This commit addresses a memory leak issue in the voice activity
detection (VAD) where the ggml_context is not stored within the vad
context structure.

The motivation for this change is that this causes the context memory
to stay allocated: the tensors still point to that memory, but the
memory is never freed.

* vad : free memory allocated for VAD hparams

This commit frees the model hyperparameters allocated for the VAD
context in the `whisper_vad_free` function. Specifically, it deletes the
`encoder_in_channels`, `encoder_out_channels`, and `kernel_sizes` arrays
allocated with `new[]` in the `whisper_vad_init` function.

The motivation for this is to prevent memory leaks when the VAD context is freed.

* vad: free ggml buffer in whisper_vad_free

This commit frees the ggml buffer in the whisper_vad_free function to
prevent memory leaks.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3452

* Revert "vad : fix memory leak by storing ggml_context in vad context struct"

This reverts commit aeafca437e.

* whisper : free ggml context in whisper_vad_init_context

This commit frees the ggml_context after initializing the VAD context in
the whisper_vad_init_context function.

The motivation for this is to prevent memory leaks.
2025-10-06 14:57:44 +02:00
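The hparams fix above is the classic `new[]`/`delete[]` pairing: arrays allocated at init time must be released in the matching free function. A minimal sketch under assumed struct and function names (the real fields live inside the whisper.cpp VAD context):

```cpp
struct vad_hparams {
    int * encoder_in_channels  = nullptr;
    int * encoder_out_channels = nullptr;
    int * kernel_sizes         = nullptr;
};

struct vad_context { vad_hparams hparams; };

// Init allocates per-layer arrays with new[].
vad_context * vad_init(int n_layers) {
    vad_context * ctx = new vad_context;
    ctx->hparams.encoder_in_channels  = new int[n_layers]();
    ctx->hparams.encoder_out_channels = new int[n_layers]();
    ctx->hparams.kernel_sizes         = new int[n_layers]();
    return ctx;
}

// After the fix: every new[] from init has a matching delete[] here.
void vad_free(vad_context * ctx) {
    if (!ctx) return;
    delete [] ctx->hparams.encoder_in_channels;
    delete [] ctx->hparams.encoder_out_channels;
    delete [] ctx->hparams.kernel_sizes;
    delete ctx;
}
```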
KITAITI Makoto 7849aff7a2
ruby : Loosen RegExp for test (#3448) 2025-10-01 15:33:11 +03:00
Daniel Bevenius 2a56869669
bindings-java : disable flash attention by default (#3445)
This commit disables flash-attention for the Java binding test so that
the testFullTranscribe test passes.

Without this change the test was failing because the expected output
no longer matches after the flash-attention change:
```console
<And so my fellow Americans ask not what your country can do for you ask what you can do for your country.>
but was:
<and so my fellow Americans ask not what your country can do for you ask what you can do for your country>
```

An alternative would have been to update the expected output, but it
felt better to keep the existing expected output and disable
flash-attention rather than change it to match the new behavior.
2025-10-01 09:13:34 +02:00
Georgi Gerganov 8c0855fd6b
bench : update [no ci] 2025-09-30 21:40:32 +03:00
Georgi Gerganov 47fcd7da8b
scripts : add -nfa option [no ci] 2025-09-30 21:37:00 +03:00
Georgi Gerganov 8a67c55c8a
wchess : fix link [no ci] 2025-09-30 21:28:03 +03:00
Georgi Gerganov 41fc9dea6a
release : v1.8.0 2025-09-30 21:25:36 +03:00
Daniel Bevenius 5904d00dbb
examples : add wchess.wasm to wasm examples build (#3443)
* examples : add wchess.wasm to wasm examples build

This commit adds the wchess.wasm example to the wasm examples that are
deployed to https://ggml.ai/whisper.cpp.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3434#issuecomment-3346980420
2025-09-30 16:23:01 +02:00
Georgi Gerganov 0b3587acdd
whisper : enable flash attention by default (#3441) 2025-09-30 15:47:20 +03:00
Georgi Gerganov 1e5ad50f8f
bench : add rtx 5090 [no ci] 2025-09-30 13:58:15 +03:00
Georgi Gerganov 527ff158d0 ggml : bump version to 0.9.4 (ggml/1363) 2025-09-30 13:54:08 +03:00
Georgi Gerganov e4bf87b0e9
bench : update [no ci] 2025-09-30 12:51:25 +03:00
Georgi Gerganov b57b9d3a27
sync : ggml 2025-09-30 12:31:08 +03:00
anavp-nvidia 62b3b86e3f
cuda : Enable CUDA Graph usage for Nemotron Nano v2 (NemotronH) (llama/16328)
* Fix Nemotron Nano v2 9B not executing as CUDA Graph on NVIDIA GPUs

* fix to ensure test-backend-ops check passes
2025-09-30 12:31:04 +03:00
Georgi Gerganov 78f85f2b92
metal : dynamic simdgroups for MV kernels (llama/16340)
* metal : dynamic simdgroups for MV kernels

* cont : minor
2025-09-30 12:31:04 +03:00
Charles Xu 01e86b69ab
kleidiai : fix work size and threads sync for fp16 (llama/16246) 2025-09-30 12:31:04 +03:00
alex-spacemit 35ebdf7304
ggml: riscv: add riscv spacemit backend (llama/15288)
* ggml: add spacemit backend

Change-Id: I249bdc043485d815a9c351867137bc1e27cc2e23

* add new line at end of file

Change-Id: I889ed1c85fb45e62350ecde0c06f70450cadfbe2

* add riscv zba extension limit

Change-Id: I321eb200f859751727afe5cae13074dfce2bb0ce

* fixed for review comments, file renamed and format

Change-Id: Ia20b6ec24a36638e62e0fe07cf100916a7cce3ce

* fixed for code format, after clang-format

Change-Id: I5dc33a0412da3d3f2d77075d8939185d3009eca2

* use _Float16 instead of __fp16

Change-Id: I039fb02bb95270e641bc4442204e658735859d43

* add ci for riscv64-spacemit-ime-native

Change-Id: I711c1033061df1a289ea77891b2997599dfe8279

* update debian-13-riscv64-spacemit-ime-native ci label

Change-Id: Ifb2b891e2fca57b5da604fce2ac255f27731179a

* remove license comment for spacemit ime

Change-Id: If0dc3ca30a958631ccca0a28b62e0b825f9fb0c3

* upgrade binutils for gcc ime

Change-Id: Ibf2fa74c1064408974cb5b45f044d40987e5fb45

* add spacemit ime cross jobs

Change-Id: I80d74909941d41cb9cd09e51d8baf01c985cbfc6

* remove native compile for riscv64-spacemit-ime

Change-Id: I01920afafdc73fa7424014fd648d243f8ec9e25e

* ci : add caching for spacemit ime cross toolchain

Change-Id: Ic54a192019a2fd982bbd58225ce3bbc38f4053de

* ci: bug fixed for cache path and env

Change-Id: I28c42e10b6fff053bb6580926ca2353448cb042a

* Update .github/workflows/build-linux-cross.yml for cache path

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* bugfix for build-linux-cross.yml syntax error

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: cailinxi <linxi.cai@spacemit.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-09-30 12:31:03 +03:00
Rafal Lewczuk 94fe9bbe2b
ggml-backend : add root cause in error message if loading backend library fails (llama/16172)
This PR adds additional information to the error message printed when loading a backend library via ld_load_library() fails. This helps spot why a backend library did not load (missing library, missing dependency, unresolved symbol, etc.).
2025-09-30 12:31:00 +03:00
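The pattern described, surfacing the loader's own diagnostic when a backend library fails to load, looks roughly like this with POSIX dlopen/dlerror; the function name and message format below are illustrative, not the actual ggml code path:

```cpp
#include <dlfcn.h>
#include <string>

// Try to load a backend library; on failure, include the loader's
// root-cause message (missing file, missing dependency, unresolved symbol).
std::string load_backend(const char * path) {
    void * handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    if (handle == nullptr) {
        const char * reason = dlerror();
        return std::string("failed to load backend '") + path + "': " +
               (reason ? reason : "unknown error");
    }
    dlclose(handle);
    return "ok";
}
```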
Georgi Gerganov 32be14f8eb
bench : update [no ci] (#3439) 2025-09-29 17:42:38 +03:00
Georgi Gerganov a77d11d91e
bench : warm-up all kernels (#3438) 2025-09-29 17:27:53 +03:00
Georgi Gerganov 22c12ee86d
ggml : remove obsolete files (#0) 2025-09-29 16:47:30 +03:00
Georgi Gerganov d8cdcce884
ci : add self-hosted workflows (#3437)
* ci : add self-hosted workflows

* cont : fail workflow if there is an error
2025-09-29 16:42:39 +03:00
Georgi Gerganov b4909a6c78
whisper : remove ggml_mul_mat padding (#3436) 2025-09-29 16:42:08 +03:00
Georgi Gerganov fcf0181ee2
talk-llama : sync llama.cpp 2025-09-29 15:18:41 +03:00
Georgi Gerganov 404a93114c
sync : ggml 2025-09-29 15:18:18 +03:00
Georgi Gerganov 3201382792
cmake : remove metal flag (llama/0) 2025-09-29 15:18:13 +03:00
Sigbjørn Skjæret 112e10f2e4
ggml : check cuda and metal argsort limits and add test (llama/16323)
* check cuda argsort limits and add test

* add metal check
2025-09-29 15:18:12 +03:00
Georgi Gerganov 7ce0a7bcd0
ggml : fix dependencies for ggml_set_rows (llama/16318) 2025-09-29 15:18:12 +03:00
Jeff Bolz a375e4c4d2
vulkan: Fix validation failure in quantized flash attention (llama/16292) 2025-09-29 15:18:12 +03:00
Sigbjørn Skjæret 5c6e795607
ggml : fix GGML_F32_VEC_FMA argument order in ggml_vec_mad1_f32 (llama/16307)
* fix GGML_F32_VEC_FMA argument order in ggml_vec_mad1_f32

* add test that fails on simd
2025-09-29 15:18:12 +03:00
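Fused multiply-add helpers are sensitive to argument order: an fma-style call computes a*b + c, so swapping the accumulator with a multiplicand silently changes the result. A scalar illustration of the bug class (the actual GGML_F32_VEC_FMA macro operates on SIMD vectors, and these function names are hypothetical):

```cpp
// fma-style helper: returns a*b + c (c is the accumulator).
static float f32_fma(float a, float b, float c) {
    return a*b + c;
}

// mad-style update in the spirit of ggml_vec_mad1_f32: accumulate x*s into y.
float mad1_fixed(float y, float x, float s) {
    return f32_fma(x, s, y);   // correct: x*s + y
}

float mad1_buggy(float y, float x, float s) {
    return f32_fma(y, x, s);   // wrong argument order: y*x + s
}
```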
Jeff Bolz 55d45edf6d
vulkan: 64-bit im2col (llama/16135)
* vulkan: 64-bit im2col

Add variants of the im2col shaders that use buffer_device_address/buffer_reference,
and use 64-bit address calculations. This is needed for large convolutions used in
stable-diffusion.cpp.

* fix validation error for large im2col
2025-09-29 15:18:12 +03:00
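The motivation for 64-bit addressing: with the large im2col matrices used in stable-diffusion.cpp, an element offset like row*cols can exceed 2^32 and wrap around if computed in 32-bit arithmetic. A minimal sketch of the overflow and the wider computation (not the shader code itself):

```cpp
#include <cstdint>

// 32-bit offset computation: wraps modulo 2^32 for large matrices.
uint32_t offset32(uint32_t row, uint32_t cols) {
    return row * cols;
}

// 64-bit computation, in the spirit of the buffer_device_address variants:
uint64_t offset64(uint32_t row, uint32_t cols) {
    return (uint64_t) row * (uint64_t) cols;
}
```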
Georgi Gerganov 0102733cca
metal : extend mat-mat multiplication support (llama/16225)
* metal : support mul_mm with src1->type == GGML_TYPE_F16

* metal : support mul_mm_id with src1->type == GGML_TYPE_F16

[no ci]

* metal : mul_mm support ne00 % 32 != 0

* metal : support mul_mm_id with ne00 % 32 != 0

* cont : remove unnecessary unrolls

* cont : simplify data loading

* metal : optimize mul_mm when output bounds checks are not needed
2025-09-29 15:18:12 +03:00
Georgi Gerganov 45976f2857
metal : fuse non-sequential nodes (llama/16102)
* metal : fuse non-sequential nodes

* cont : add comment

* cont : simplify bounds checks
2025-09-29 15:18:12 +03:00
Jeff Bolz 91ab93b756
vulkan: handle mat_mul with A matrix > 4GB (llama/16176)
* vulkan: handle mat_mul with A matrix > 4GB

This change splits mat_mul operations with huge A matrix into chunks in the M
dimension. This works well for stable-diffusion use cases where the im2col
matrix has very large M.

Fix the order of setting the stride in mul_mm_cm2 - setting the dimension
clobbers the stride, so stride should be set after.

* build fixes
2025-09-29 15:18:12 +03:00
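Splitting a huge A matrix into chunks along the M dimension, so that each slice stays under a size limit, can be sketched as below. The function name, the byte limit, and the chunking policy are illustrative assumptions, not the Vulkan backend's actual logic:

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Split M rows into (start_row, n_rows) chunks such that each chunk's
// A-slice (n_rows * K * elt_size bytes) stays within max_bytes.
std::vector<std::pair<int64_t, int64_t>> split_m(int64_t M, int64_t K,
                                                 int64_t elt_size,
                                                 int64_t max_bytes) {
    int64_t rows_per_chunk = max_bytes / (K * elt_size);
    if (rows_per_chunk < 1) rows_per_chunk = 1; // always make progress
    std::vector<std::pair<int64_t, int64_t>> chunks;
    for (int64_t m0 = 0; m0 < M; m0 += rows_per_chunk) {
        chunks.push_back({m0, std::min(rows_per_chunk, M - m0)});
    }
    return chunks;
}
```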
Jeff Bolz eb982dd786
vulkan: support arbitrary KV dimension in flash attention (llama/16160)
The "Clamp" spec constant is already based on whether KV is a multiple of Bc,
so use that to control whether bounds checking is performed. Add bounds checking
to the scalar and coopmat1 paths. Coopmat2 didn't need any changes (the K/V
tensors are already optionally clamped, nothing else needed to be changed).
2025-09-29 15:18:12 +03:00
Acly bc1ac13c2f
vulkan : make the vulkan.hpp dynamic dispatcher instance private (llama/16224)
* don't use VULKAN_HPP_DEFAULT_DISPATCH_LOADER_DYNAMIC_STORAGE which can cause conflicts if application or other libraries do the same
2025-09-29 15:18:12 +03:00
Aman Gupta 85e4455cd3
CUDA: mul_mat_id for mmf for bs <= 64 for f16 and bs <= 32 for f32 (llama/16277)
* CUDA: mul_mat_id for mmf for bs <= 64 for f16 and bs <= 32 for f32

This commit adds mul_mat_id support for ncols_dst >= 16. It does this by
packing ncols_dst tiles into the blockDim.y.

My tests on an RTX 3090 show that this is faster than the cuBLAS fallback
for f16 up to bs=64, and for f32 up to bs=32.

* Review: refactor if statement
2025-09-29 15:18:11 +03:00
Johannes Gäßler e856483cd6
CUDA: refactor and deduplicate vector FA kernels (llama/16208)
* CUDA: refactor and deduplicate vector FA kernels
2025-09-29 15:18:11 +03:00