Compare commits

...

2043 Commits

Author SHA1 Message Date
Georgi Gerganov 4bf733672b talk-llama : sync llama.cpp 2026-05-02 15:02:42 +03:00
Georgi Gerganov 18162bcf61 cmake : add FindNCCL.cmake (ggml/0) 2026-05-02 15:02:42 +03:00
Georgi Gerganov 8384aa8086 sync : ggml 2026-05-02 15:02:42 +03:00
Georgi Gerganov bbdaa21aa7 ggml : remove obsolete rms_norm.wgsl (ggml/0) 2026-05-02 15:02:42 +03:00
Georgi Gerganov a5a8496d31 ggml : remove obsolete wgsl templates (ggml/0) 2026-05-02 15:02:42 +03:00
Georgi Gerganov 28f8534532 ggml : bump version to 0.10.2 (ggml/1474) 2026-05-02 15:02:42 +03:00
Yiwei Shao 4861a3eeb5 hexagon: hmx flash attention (llama/22347)
* hmx: extract shared interleave headers and unify matmul batched

* hmx: add HMX-accelerated flash attention for prefill

* hmx: replace asm wrappers with Q6_ intrinsics in hmx-utils.h

Switches three single-instruction helpers from inline asm to the matching
Q6_ intrinsics, matching the style established by aizip f8737609a and used
by the upstream PR #21554 hmx-matmul-ops.c rewrite:

  hmx_set_output_scales       asm "bias=mxmem2"  -> Q6_bias_mxmem2_A
  hmx_load_tile_pair_fp16     asm packet         -> Q6_activation_hf_mxmem_RR
                                                    + Q6_weight_hf_mxmem_RR
  hmx_consume_accumulator_fp16 asm "mxmem=acc"   -> Q6_mxmem_AR_after_hf

hmx_load_tiles_fp16 stays on inline asm: it uses ":deep" activation
streaming, and the mixed Q6_activation_hf_mxmem_RR_deep + non-deep
Q6_weight_hf_mxmem_RR pair fails the HMX backend constraint check
("activate weight pair (1) exceeds limit (1)"). The asm bundle keeps
both halves in one VLIW packet and avoids the diagnostic.

Functionally equivalent — same instructions emitted; the Q6_ intrinsics
just give the compiler more visibility for scheduling.

* hmx: drop the duplicate interleave_fp16_weight_chunk_to_tiles

* hmx: apply upstream optimization to hmx-flash-attn-ops.c

Apply restrict, __builtin_assume, and pointer accumulation to the three HMX workers (qk_dot, o_update, o_norm) and the matching inline HMX loops in op_hmx_flash_attn_ext.

* hmx: unify interleave helper

* hmx: multi-thread Q load / O store and enable prefill FA dispatch

Extract inline Q-load and O-store loops into worker_pool-parallel helpers
(fa_phase_q_load, fa_phase_o_store) so HVX threads split the F32↔F16
conversion work across row ranges.  Also relax the softmax threading
gate from n_row_vec_cnt >= n_threads to >= 2, which was unnecessarily
forcing single-thread fallback when n_rows_g < 512.

On the dispatch side, remove the ne[2] != 1 guard that blocked multi-head
(prefill) FA from reaching the HTP backend — GQA is already handled
internally by both the HMX and HVX flash-attention paths.

* hmx: relax matmul pipeline gate to cover k > n shapes (e.g. FFN_down)

* hmx: optimize FA softmax mask phase (no-ALiBi fast path + GQA dedup)

* hmx: Add an asm memory clobber at the phase boundary to prevent reorder bug

* [experimental]: fp16 softmax (EXP2_HF) to accelerate fa

Bake log2(e) into qk_scale and use hvx_exp2_hf directly for P and m_diff
(base-2 consistent, matches htp-ops-lib). ~22 ALU ops for 64 lanes vs
~44 for the F32 round-trip path.

* hmx flash-attn: refine cost model coefficients based on profiling data

* hmx flash-attn: replace asm clobber with targeted volatile reads on vtcm_d_tiles

* hmx flash-attn: fix prefill correctness (dst indexing, softmax reduce, V stride)

* hmx flash-attn: fix p_tiles dual-tile OOB race; enable MT + pipeline

* hmx flash-attn: preserve additive mask bias in no-ALiBi fast path

The no-ALiBi fast path (max_bias==0) was skipping mask add entirely on
the assumption that mask values are only {0, -inf}.  This is wrong when
the mask carries additive positional bias — those terms were silently
dropped.  Keep the slope-mul skip (slope≡1.0) but add mask back so the
bias survives; vmux still clamps below -16 to -inf.

Also add HMX FA coverage to test-backend-ops: prefill shapes (nb=64,
nb=32) × {mask on/off} × {ALiBi on/off} × {softcap on/off}, F16 KV,
hs ∈ {64, 128}.

* hmx: fix softcap+EXP2_HF interaction, tighten matmul pipeline gate, add FA tests

- flash-attn: when EXP2_HF is on AND logit_softcap is active, fold
  log2(e) into the post-tanh multiplier (v_cap) instead of pre-baking
  it into qk_scale.  Pre-baking shifted the tanh knee from x≈c to
  x≈c/log2(e) and produced numerically wrong softcapped outputs
  whenever both knobs were enabled.
- flash-attn softmax (fa_softmax_thread): replace the union+memcpy
  scalar extract pattern with HVX vmux-based per-row accumulators on
  rowmax/rowsum.  Add hvx_vec_get_f16 helper in hvx-base.h.  Functional
  parity, less scalar code, clearer hf/qf16 lane-format contract.
- matmul (hmx_mat_mul_permuted_qk_0_d16a32): pick pipeline vs sequential
  layout based on whether the chunker actually yields >=2 n-chunks,
  instead of the static (m>=128 && n>=256) gate.  Avoids paying for
  output double-buffer + worker dispatch when there is no HMX/HVX
  overlap to gain (e.g. shapes that collapse to one n-chunk).
- tests: add HMX flash-attention coverage over the
  {mask, ALiBi (max_bias), logit_softcap} cross-product for the prefill
  path — head_dim 64/128, GQA 4×4, kv=512/nb=64 plus a kv=113/nb=32
  non-aligned case.

* [Help Wanted]: refactor D matrix computation into separate function for clarity and maintainability

* format code

* hexagon: looks like -O3 is causing issues with the large code base, switch to -O2 and -flto instead

* hexagon: use hex_ prefix for swap_ptr

* hexagon: move vtcm_seq_alloc into vtcm-utils.h

More vtcm allocator updates are coming, so it makes sense to start a separate header for it.

* hmx-utils: add hmx_prefix for layout converters

* hmx-mm: move main hmx_mm functions to the end, remove unused fwd decls, etc

* hmx-mm: remove unused qweight_fetch_task_state_t and minor alignment fixes

* hmx-fa: minor alignment fixes

* hmx-fa: move hmx_flash_atten into hmx-ops.h

* hmx-fa: remove redundant workpool pointer in the hmx_fa_ctx, plus minor alignment updates

* hmx-fa: minor alignment and simplifications

* hexagon: move FA_EXP_F16 option to hostside CMake file

* hmx-fa: use hvx_vec_splat_f16 instead of fp16_to_bits

* hmx-fa: add hvx_splat_u16/u8 and use that in the fa instead of the custom hvx_fill

* hmx-fa: some more alignment updates in the core fa function

* hmx-fa: keep slopes in vtcm in fp16

Saves malloc/free and removes the need for float -> fp16 downcast on every use.

* hexagon: consistent noinline usage (after static)

* hex-hmx: consistent use of FARF_HIGH to enable debug output

* hmx-utils: no need for always_inline attr

* hex-hmx: consistent noinline usage (static noinline ...)

* hex-hmx: simplify init_col_scales

* hexagon: fix editorconfig errors

* hmx-mm: minor alignment fixes

---------

Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
2026-05-02 15:02:42 +03:00
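A minimal scalar sketch of the base-2 softmax folding and the softcap interaction described in the commit above. This is not the HVX code; `qk_scale`, `logit_softcap`, and the fold-into-post-tanh-multiplier ordering come from the commit text, while the function and variable names here are illustrative only.

```cpp
#include <cmath>

constexpr float kLog2e = 1.4426950408889634f;

// No softcap: fold log2(e) into qk_scale once, then use exp2 per element.
// row_max is assumed to already be in the same base-2 units.
float p_no_softcap(float qk, float qk_scale, float row_max) {
    const float scale2 = qk_scale * kLog2e;      // pre-baked into the scale
    return std::exp2(qk * scale2 - row_max);     // == exp(qk*qk_scale - max)
}

// With softcap: softcap(x) = c * tanh(x / c) must see logits scaled only by
// qk_scale, otherwise the tanh knee shifts from x ≈ c to x ≈ c / log2(e).
// So keep qk_scale as-is and fold log2(e) into the post-tanh multiplier.
float p_softcap(float qk, float qk_scale, float logit_softcap, float row_max) {
    const float x      = qk * qk_scale;                           // knee stays at x ≈ c
    const float capped = logit_softcap * std::tanh(x / logit_softcap);
    return std::exp2(capped * kLog2e - row_max);                  // log2(e) applied after tanh
}
```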
Aparna M P f2ce24fa5c hexagon: enable non-contiguous row tensor support for unary ops (llama/22574) 2026-05-02 15:02:42 +03:00
Masashi Yoshimura 9623c1203b ggml-webgpu: Fix vectorized handling in mul-mat and mul-mat-id (llama/22578)
* Fix vectorized condition of mul-mat-fast pipeline and add vectorized variant to mul-mat-id

* Apply suggestion from @CISC

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-05-02 15:02:42 +03:00
Jeff Bolz 95053f68e4 vulkan: Support asymmetric FA in coopmat2 path (llama/21753)
* vulkan: Support asymmetric FA in coopmat2 path

There has been some recent interest/experimentation with mixed quantization
types for FA. I had originally designed the cm2 FA shader with this in mind
(because I didn't realize it wasn't supported at the time!), this change
adds the missing pieces and enables it.

Also support Q1_0 since people have been trying that out (seems crazy, but
who knows).

We should be able to do similar things in the coopmat1/scalar path, but
there's another change open against the scalar path and I don't want to
conflict.

* reorder cases
2026-05-02 15:02:42 +03:00
Georgi Gerganov 35cb684129
ggml : try fix win32 build (#0) 2026-05-01 18:53:30 +03:00
Georgi Gerganov e10025351c
sync : ggml 2026-05-01 13:08:32 +03:00
Chen Yuan ccd04522f9
ggml-webgpu: add the upscale shader (llama/22419)
* shader(upscale): add the upscale shader with nearest, bilinear and bicubic implementations

* shader(upscale): use macro
2026-05-01 13:07:36 +03:00
Masashi Yoshimura b34a9f3d83
ggml-webgpu: Improve performance of mat-vec and mat-mat for MUL_MAT_ID (llama/22464)
* Add mat-vec fast path of MUL_MAT_ID.

* Add shared accumulation vec logic and the other types supports.

* Add i-quant mat-mat for MUL_MAT_ID and fix some parts

* Remove n_experts from shader_lib_context.
2026-05-01 13:07:35 +03:00
Ruben Ortlam 0c7c3ba570
vulkan: add get/set tensor 2d functions (llama/22514)
* vulkan: add get/set_tensor_2d functions

* fix backend interface comments

* Update ggml/src/ggml-metal/ggml-metal.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-05-01 13:07:35 +03:00
Johannes Gäßler 582d2562a4
CUDA: fix tile FA kernel on Pascal (llama/22541) 2026-05-01 13:07:35 +03:00
Rithik Sharma d74c56862b
add fast matmul iquants (llama/22504) 2026-05-01 13:07:35 +03:00
Max Krasnyansky 66392cf1a2
hexagon: make vmem and buffer-size configurable (llama/22487)
* hexagon: allow host to set max vmem size

We use a sane default but it's helpful to allow for an override if needed.

* hexagon: add support for measuring vmem space and move pinned mmaping management to host

* hexagon: update vmem checks to use uint64

* hexagon: bump op buffers to 16 (matches max mmaps)

* hexagon: bump default vmem to 3.2GB

* hexagon: add support for autodetecting vmem space and some logging cleanup in that area

* hexagon: fix whitespace warnings

* Update scripts/snapdragon/adb/run-cli.sh

Co-authored-by: Pascal <admin@serveurperso.com>

* hex-adb: fix run-completion script

---------

Co-authored-by: Pascal <admin@serveurperso.com>
2026-05-01 13:07:35 +03:00
Anav Prasad aec8e69c2f
CUDA: fuse SSM_CONV + ADD(bias) + SILU (llama/22478) 2026-05-01 13:07:35 +03:00
shalinib-ibm 9f2cec1840
ggml-cpu : disable tiled matmul on AIX to fix page boundary segfault (llama/22293)
* ggml-cpu : disable tiled matmul on AIX to fix page boundary segfault

vec_xst operations in the tiled path crash on AIX when writing
near 4KB page boundaries due to strict memory protection. Fall
back to mnpack implementation on AIX for stable execution.

Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>

* Update ggml/src/ggml-cpu/llamafile/sgemm.cpp

Co-authored-by: Aaron Teo <taronaeo@gmail.com>

* Update sgemm.cpp

* Update sgemm.cpp

---------

Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>
Co-authored-by: Aaron Teo <taronaeo@gmail.com>
2026-05-01 13:07:34 +03:00
Georgi Gerganov c59a773605
examples : update to Q1_0 2026-05-01 13:07:33 +03:00
Georgi Gerganov 320c048724
sync : ggml 2026-04-30 21:44:28 +03:00
Georgi Gerganov ad670182d9
ggml : bump version to 0.10.1 (ggml/1469) 2026-04-30 11:29:23 +03:00
Aman Gupta 44e7803661
ggml-cuda: refactor fusion code (llama/22468)
* ggml-cuda: refactor fusion code

* apply formatting + make env variable truthy
2026-04-30 11:29:23 +03:00
qiurui144 6119537e9a
ggml-cpu: cmake: append xsmtvdotii march for SpacemiT IME (llama/22317)
* ggml-cpu: cmake: append xsmtvdotii march for SpacemiT IME

When GGML_CPU_RISCV64_SPACEMIT=ON is set, ime1_kernels.cpp contains
inline asm for the vmadot family, which requires the xsmtvdotii custom
extension (the problem is described in several blogs and was confirmed on the K3 platform).
The current CMakeLists does not include xsmtvdotii, so any toolchain
that honours the explicit -march (tested with SpacemiT GCC 15.2) fails
at the assembler stage:

  Error: unrecognized opcode `vmadot v16,v14,v0',
         extension `xsmtvdotii' required

Append _xsmtvdotii to MARCH_STR when GGML_CPU_RISCV64_SPACEMIT is
enabled so the IME path can actually build with a capable toolchain.
No effect on builds that leave GGML_CPU_RISCV64_SPACEMIT off.

toolchain from https://www.spacemit.com/community/resources-download/Tools

* Update ggml/src/ggml-cpu/CMakeLists.txt

Co-authored-by: alex-spacemit <jinghui.huang@spacemit.com>

---------

Co-authored-by: alex-spacemit <jinghui.huang@spacemit.com>
2026-04-30 11:29:23 +03:00
Reese Levine fa20229eeb
ggml-webgpu: Fix bug in FlashAttention support check (llama/22492)
* Fix flashattention support check for devices that don't support subgroups

* set path to none if kv_tile doesn't fit
2026-04-30 11:29:23 +03:00
hrushitfujitsu 3076725eb0
ggml : add sve tuned code for gemm_q8_0_4x8_q8_0() kernel (llama/21916)
* Added sve tuned code for gemm_q8_0_4x8_q8_0() kernel

* Change arrays to static const in repack.cpp

---------

Co-authored-by: Vithulep <prashant.vithule@fujitsu.com>
2026-04-30 11:29:23 +03:00
Johannes Gäßler 5301139374
TP: fix delayed AllReduce + zero-sized slices (llama/22489) 2026-04-30 11:29:23 +03:00
Michael Wand c200b588f8
ggml-cuda: Repost of 21896: Blackwell native NVFP4 support (llama/22196) 2026-04-30 11:29:22 +03:00
lnigam b553e17071
ggml-cuda: add flash-attn support for DKQ=320/DV=256 with ncols2=32 (… (#22286)
* ggml-cuda: add flash-attn support for DKQ=320/DV=256 with ncols2=32 (GQA=32)

Adds MMA-f16 and tile kernel configs, dispatch logic, template instances,
and tile .cu file for Mistral Small 4 (head sizes 320/256), restricting to
ncols2=32 to support GQA ratio 32 only.

* Adding check to return BEST_FATTN_KERNEL_NONE in case GQA!=32

* Apply suggestions from code review

Address review comments

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Address review comments and making kernel config default to DQK=512, DV=512 instead of DQK=256,DV=256

* Fixed a bug with sinks=1: with ncols=32 there are two warp groups, but the sinks index was the same (0,...,15) for both groups, so with sinks=1 the output did not match the CPU output. Added sink_base, the base index for each warp group (threadIdx.y / np)

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml/src/ggml-cuda/template-instances/generate_cu_files.py

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-04-30 11:29:22 +03:00
Matt Corallo e69c109aac
vulkan: Coalesce Q4_K/Q5_K scale loads (llama/21751)
Some SPIR-V compilers (notably mesa) don't handle the current
vulkan Q4_K/Q5_K scale load pattern in mul_mat particularly well.
While reading three `u8`s from the 12-byte scale array should (at
least on some hardware) result in loading the full 12 bytes in a
single LOAD followed by whatever extraction is needed, at least
the ANV Intel driver really can't practically perform this
optimization.

`mesa`'s unsigned upper bound logic doesn't handle tracking bounds
through ternary, resulting in the `(is < 4) ? ... : is - 4` having
an infinite upper bound (as it cannot prove `is - 4` doesn't
underflow). While this could still be rectified if mesa looked at
the array bounds, it currently doesn't and `glslc` currently emits
SPIR-V that doesn't allow for this optimization anyway (though
maybe it will at some point, see
https://github.com/KhronosGroup/glslang/issues/4206).

In mul_mat_vecq we took a different approach to loading the same
fields. We read the first two bytes we needed from `scale` then
took a branch before deciding whether we needed to read a third
byte. In mesa this did, indeed, lead to a top-level branch with
conditional loads. As such these loads ended up not being
coalesced either (at least in the ANV driver) resulting in
additional instructions in our hot loop.

Instead, here, we go ahead and force loading the full 12 bytes and
extract the bits we need from the packed-u32s instead. In mul_mat
there's a few less ternaries and only one extra shift, so even on
drivers that did optimize the previous loads properly the only
material change should be pulling a few extra bytes into registers
(which on most hardware won't cost anything anyway, though
ironically on Intel it theoretically could). In mul_mat_vecq this
requires a bit of extra math and may read bytes from the u32 that
weren't needed, but it seems likely avoiding the branch is a win
on most platforms.

On Intel Xe2/mesa 26.0.4 with the optimizations from
https://gitlab.freedesktop.org/mesa/mesa/-/work_items/15162,

for shader matmul_id_subgroup_q4_k_f32_f16acc_aligned_l:
 * Instruction Count: 2753 -> 2688
 * SEND Count: 269 -> 261
 * Cycle Count: 273976 -> 266138
 * Max live registers: 248 -> 246
 * Non SSA regs after NIR: 381 -> 382

for shader matmul_id_subgroup_q5_k_f32_f16acc_aligned_l:
 * Instruction Count: 2767 -> 2702
 * SEND Count: 271 -> 263
 * Cycle Count: 274140 -> 268144
 * Max live registers: 248 -> 246
 * Non SSA regs after NIR: 381 -> 382

for shader mul_mat_vec_id_q4_k_q8_1_f32:
 * Instruction Count: 1930 -> 1646
 * SEND Count: 116 -> 71
 * Cycle Count: 1348306 -> 843350
 * Max live registers: 78 -> 84
 * Non SSA regs after NIR: 300 -> 135

for shader mul_mat_vec_id_q5_k_q8_1_f32:
 * Instruction Count: 2207 -> 1922
 * SEND Count: 131 -> 86
 * Cycle Count: 1392012 -> 1037836
 * Max live registers: 90 -> 90
 * Non SSA regs after NIR: 300 -> 135

for shader mul_mat_vec_q4_k_q8_1_f32:
 * Instruction Count: 2029 -> 1749
 * SEND Count: 111 -> 66
 * Cycle Count: 1347278 -> 840118
 * Max live registers: 74 -> 80
 * Non SSA regs after NIR: 299 -> 134

for shader mul_mat_vec_q5_k_q8_1_f32:
 * Instruction Count: 2307 -> 2022
 * SEND Count: 126 -> 81
 * Cycle Count: 1379820 -> 954042
 * Max live registers: 86 -> 86
 * Non SSA regs after NIR: 299 -> 134

On one Arc Pro B60, unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL:
 * pp512: 907.34 ± 9.28 -> 941.94 ± 10.53 (+4%)
 * pp2048: 897.95 ± 1.82 -> 931.55 ± 1.79 (+4%)
 * tg128: 49.49 ± 0.02 -> 49.86 ± 0.05 (+ <1%)

On one Arc Pro B60, unsloth/Qwen3.5-27B-GGUF:Q4_K_S:
 * pp512: 324.13 ± 10.52 -> 354.33 ± 6.81 (+9%)
 * pp2048: 329.80 ± 0.25 -> 357.10 ± 0.06 (+8%)
 * tg128: 17.11 ± 0.01 -> 18.11 ± 0.01 (+6%)

On four Arc Pro B60s, unsloth/Qwen3.5-122B-A10B-GGUF:Q5_K_S with
-sm layer (note that -sm tensor improvements will naturally be
less):
 * pp512: 264.55 ± 2.81 -> 280.45 ± 3.94 (+6%)
 * pp2048: 319.32 ± 2.72 -> 335.70 ± 3.48 (+5%)
 * tg128: 26.39 ± 0.01 -> 26.67 ± 0.01 (+1%)
2026-04-30 11:29:22 +03:00
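A hedged C++ analogue of the scale-load change in the commit above (the real shaders are GLSL, and the exact Q4_K/Q5_K bit layout is intentionally not reproduced): one 12-byte load into registers plus shifts, instead of per-byte loads behind a data-dependent ternary.

```cpp
#include <cstdint>
#include <cstring>

// Old pattern: separate u8 loads whose index goes through a ternary the
// compiler may not be able to bound, so the loads are hard to coalesce.
uint8_t scale_bytewise(const uint8_t s[12], int is) {
    return (is < 4) ? s[is] : s[is + 4];               // field selection is illustrative only
}

// New pattern: force one 12-byte read into registers, then extract with shifts.
uint8_t scale_packed(const uint8_t s[12], int is) {
    uint32_t w[3];
    std::memcpy(w, s, sizeof(w));                      // single coalesced load
    const int idx = (is < 4) ? is : is + 4;            // same selection...
    return uint8_t(w[idx >> 2] >> ((idx & 3) * 8));    // ...but no further memory access
}
```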
Reese Levine 4ea5b6febc
ggml-webgpu: fix buffer aliasing for ssm_scan and refactor aliasing logic (llama/22456)
* Refactor buffer aliasing to be part of shader lib decisions

* cleanup

* formatting
2026-04-30 11:29:22 +03:00
Jeff Bolz 35fa508360
vulkan: add barrier after writetimestamp (llama/21865) 2026-04-30 11:29:22 +03:00
Emil Askerov 0fa31f9bb6
ggml: improve SPIR-V headers detection with __has_include (llama/21918)
* ggml: improve SPIR-V headers detection with __has_include while preserving original _WIN32 logic

* Address review comments: fix fallback logic and add FreeBSD support

* Remove spirv_cross fallback as per review

* Remove redundant __has_include check
2026-04-30 11:29:22 +03:00
Adrien Gallouët 6fceff2eb4
ggml : skip already registered backends and devices (llama/22296)
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-04-30 11:29:21 +03:00
Adrien Gallouët ca624d86ab
ggml : revert to -lm linking instead of find_library (llama/22355)
* ggml : revert to -lm linking instead of find_library

`find_library(MATH_LIBRARY m)` was introduced recently, but it breaks
CUDA compilation with GGML_STATIC. I could not find any valid use case
where we would prefer `find_library` over the standard `-lm` approach.

This commit is also meant to start a discussion if there is a valid
reason to keep `find_library(MATH_LIBRARY m)`, we should clarify what
problem it was solving and find an alternative fix that does not break
CUDA with GGML_STATIC.

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

* ggml : use MATH_LIBRARY only if defined

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

* ggml : fix initial broken condition

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

* ggml : always respect MATH_LIBRARY when defined

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

---------

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-04-30 11:29:21 +03:00
hipudding 70e4c0aec0
CANN: add new ops, optimize existing ops (llama/21204)
New operators:
- GGML_OP_SET: implement via aclnnInplaceCopy on target region
- GGML_OP_CUMSUM: implement via aclnnCumsum
- GGML_OP_FILL: implement via aclnnInplaceFillScalar
- GGML_OP_DIAG: implement via aclnnInplaceCopy on diagonal strides
- GGML_OP_TRI (lower/lower_diag/upper_diag/upper): implement via
  aclnnTril(-1/0) and aclnnTriu(0/1) with appropriate diagonal offsets
- GGML_OP_SOLVE_TRI: implement via aclnnTriangularSolve
- GGML_UNARY_OP_SOFTPLUS: implement via aclnnSoftplus

Optimizations:
- GLU (SwiGLU/GeGLU/GeGLU_ERF/GeGLU_QUICK): fuse with aclnnSwiGlu /
  aclnnGeGluV3 when applicable; fallback conditions now checked inside
  each function rather than at the call site
- CROSS_ENTROPY_LOSS: replace 5-kernel sequence (LogSoftmax→Mul→
  ReduceSum×2→Muls) with single aclnnSoftmaxCrossEntropyWithLogits call
- L2_NORM: fix in-place ClampMin on norm result (was clamping wrong
  tensor); add eps clamping before division to avoid divide-by-zero
- PAD_REFLECT_1D: eliminate per-ne[3] loop; assert contiguity and call
  ReflectionPad1d once on the full 4-D view; remove redundant nb copies
- GET_ROWS: replace IndexSelect with GatherV2 per batch slice; refactor
  helper into gather_batched lambda with batch loop inlined
- SET_ROWS: replace IndexCopy with InplaceIndexCopy per batch slice;
  refactor helper into scatter_batched lambda with batch loop inlined
- OUT_PROD: replace O(ne[3]*ne[2]*ne[1]) Ger+InplaceAdd loop with
  per-slice Matmul loop (src0 @ src1^T); handles strided-broadcast
  batch dims where ne02/ne03 may differ from ne2/ne3
- backend memset_tensor: implement via aclrtMemset (was NULL)

Bug fixes:
- COUNT_EQUAL: use non-inplace EqTensor into a same-type temporary
  buffer instead of InplaceEqTensor, avoiding corruption of src0
- ACL graph cache (USE_ACL_GRAPH): restore node_type and src_type[]
  fields in ggml_graph_node_properties; has_matching_properties() was
  missing type checks, causing F16 and BF16 tensors (same nb[0]=2) to
  incorrectly share cached graphs and produce wrong results (ERR≈679)
- graph cache op_params matching: compare full GGML_MAX_OP_PARAMS
  bytes so that ops differing only in parameters are not incorrectly
  replayed from cache
2026-04-30 11:29:21 +03:00
Rithik Sharma 9c233f11f0
ggml-webgpu: add Q1_0 support (llama/22374)
* add fast matmul matvec q1_0 kernel

* ggml-webgpu: drop redundant zero-fills in Q1_0 shmem init
2026-04-30 11:29:21 +03:00
Rithik Sharma f675a8c926
add fast mat-vec kernels for i-quants (llama/22344) 2026-04-30 11:29:21 +03:00
unraido c9ba41397c
fix: rpc-server cache may not work in Windows environments (llama/22394)
* fix: create directory and log cache file name.

* Remove GGML_LOG_INFO conditional compilation.

---------

Co-authored-by: kotaro <kotaro.kusunoki@gmail.com>
2026-04-30 11:29:21 +03:00
Adrien Gallouët f5c3ce17d5
ggml : use 64 bytes aligned tile buffers (llama/21058)
| Model                            | Test   |   t/s OLD |   t/s NEW |   Speedup |
|:---------------------------------|:-------|----------:|----------:|----------:|
| qwen35 0.8B BF16                 | pp512  |    584.59 |    595.41 |      1.02 |
| qwen35 0.8B BF16                 | tg128  |     52.23 |     52.82 |      1.01 |
| qwen35 0.8B IQ2_M - 2.7 bpw      | pp512  |    260.64 |    261.70 |      1.00 |
| qwen35 0.8B IQ2_M - 2.7 bpw      | tg128  |     81.17 |     80.89 |      1.00 |
| qwen35 0.8B IQ2_XXS - 2.0625 bpw | pp512  |    302.36 |    302.56 |      1.00 |
| qwen35 0.8B IQ2_XXS - 2.0625 bpw | tg128  |     84.93 |     85.12 |      1.00 |
| qwen35 0.8B IQ3_XXS - 3.0625 bpw | pp512  |    263.22 |    260.01 |      0.99 |
| qwen35 0.8B IQ3_XXS - 3.0625 bpw | tg128  |     80.29 |     78.94 |      0.98 |
| qwen35 0.8B IQ4_NL - 4.5 bpw     | pp512  |    728.65 |    742.09 |      1.02 |
| qwen35 0.8B IQ4_NL - 4.5 bpw     | tg128  |     82.39 |     84.46 |      1.03 |
| qwen35 0.8B IQ4_XS - 4.25 bpw    | pp512  |    681.33 |    677.06 |      0.99 |
| qwen35 0.8B IQ4_XS - 4.25 bpw    | tg128  |     80.18 |     79.28 |      0.99 |
| qwen35 0.8B Q2_K_M               | pp512  |    413.28 |    415.94 |      1.01 |
| qwen35 0.8B Q2_K_M               | tg128  |     81.90 |     82.78 |      1.01 |
| qwen35 0.8B Q3_K_M               | pp512  |    493.17 |    495.08 |      1.00 |
| qwen35 0.8B Q3_K_M               | tg128  |     82.75 |     83.23 |      1.01 |
| qwen35 0.8B Q3_K_S               | pp512  |    429.35 |    427.64 |      1.00 |
| qwen35 0.8B Q3_K_S               | tg128  |     86.69 |     87.02 |      1.00 |
| qwen35 0.8B Q4_0                 | pp512  |    783.46 |    782.32 |      1.00 |
| qwen35 0.8B Q4_0                 | tg128  |     88.23 |     87.90 |      1.00 |
| qwen35 0.8B Q4_1                 | pp512  |    741.71 |    729.76 |      0.98 |
| qwen35 0.8B Q4_1                 | tg128  |     85.44 |     86.01 |      1.01 |
| qwen35 0.8B Q4_K_M               | pp512  |    676.24 |    681.31 |      1.01 |
| qwen35 0.8B Q4_K_M               | tg128  |     76.59 |     77.06 |      1.01 |
| qwen35 0.8B Q4_K_S               | pp512  |    683.12 |    688.81 |      1.01 |
| qwen35 0.8B Q4_K_S               | tg128  |     80.50 |     81.19 |      1.01 |
| qwen35 0.8B Q5_K_M               | pp512  |    635.33 |    642.11 |      1.01 |
| qwen35 0.8B Q5_K_M               | tg128  |     72.07 |     72.49 |      1.01 |
| qwen35 0.8B Q5_K_S               | pp512  |    660.95 |    658.18 |      1.00 |
| qwen35 0.8B Q5_K_S               | tg128  |     72.19 |     72.95 |      1.01 |
| qwen35 0.8B Q6_K                 | pp512  |    647.97 |    638.84 |      0.99 |
| qwen35 0.8B Q6_K                 | tg128  |     72.83 |     72.49 |      1.00 |
| qwen35 0.8B Q8_0                 | pp512  |    805.01 |    785.49 |      0.98 |
| qwen35 0.8B Q8_0                 | tg128  |     70.10 |     70.13 |      1.00 |

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-04-30 11:29:20 +03:00
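A minimal illustration of what 64-byte aligned tile buffers look like in plain C++ (this is not the ggml allocator; the helper and struct names are made up): tiles start on a cache-line / widest-SIMD boundary.

```cpp
#include <cstdlib>

// Heap tile buffer: round the size up to a multiple of 64 (required by
// std::aligned_alloc) and request 64-byte alignment.
float * alloc_tile_f32(size_t n_elems) {
    size_t bytes = n_elems * sizeof(float);
    bytes = (bytes + 63) & ~size_t(63);
    return static_cast<float *>(std::aligned_alloc(64, bytes));
}

// Fixed-size tile with the same guarantee when embedded in a struct or on the stack.
struct alignas(64) tile_16x16 {
    float v[16 * 16];
};
```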
Rithik Sharma 1478450e61
add performance-portable tuning for register-tile and subgroup matmul (llama/22241) 2026-04-30 11:29:20 +03:00
Gaurav Garg 7296b9c7fa
Fix recurrent state serialization for partial reads and writes (llama/22362)
The previous code worked only for full tensor reads and writes and was hitting the `GGML_ASSERT(size == ggml_nbytes(tensor));` assert when tested with llama-server.
2026-04-30 11:29:20 +03:00
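A hedged sketch of the full-tensor-only path versus one that honors an (offset, size) window, as the fix above describes; the function names are illustrative and not the actual llama.cpp serialization code.

```cpp
#include <cassert>
#include <cstring>
#include <cstddef>

// Before: only the whole tensor could be written.
void write_full_only(void * dst_state, const void * src, size_t size, size_t nbytes) {
    assert(size == nbytes);                 // partial writes trip this assert
    std::memcpy(dst_state, src, size);
}

// After: any in-bounds (offset, size) window is accepted.
void write_partial(void * dst_state, const void * src,
                   size_t offset, size_t size, size_t nbytes) {
    assert(offset + size <= nbytes);        // only require the window to be in bounds
    std::memcpy(static_cast<char *>(dst_state) + offset, src, size);
}
```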
Oliver Simons 9bf6c3c860
CUDA: better coalesce data-access for contiguous concat (llama/22330)
Also, distribute all elements across CTAs evenly instead of launching
one CTA per dim
2026-04-30 11:29:20 +03:00
Sigbjørn Skjæret 2f3df42cdd
ggml-cpu : re-enable fast gelu_quick_f16 (llama/22339) 2026-04-30 11:29:20 +03:00
Eve 4e11277a19
ggml-cpu: optimize avx2 q6_k (llama/22345) 2026-04-30 11:29:20 +03:00
lhez 93a3f37642
opencl: add iq4_nl support (llama/22272)
* opencl: add general support for iq4_nl

* opencl: add iq4_nl gemm/gemv for adreno

* opencl: pack 2 lut entries into a uint
2026-04-30 11:29:19 +03:00
Trivikram Reddy 1be2adf7b3
hexagon: guard HMX clock request for v75+ platforms (llama/22377) 2026-04-30 11:29:19 +03:00
Johannes Gäßler da738a74f5
CUDA: reduce MMQ stream-k overhead (llama/22298)
* CUDA: reduce MMQ stream-k overhead

* use 32 bit integers for kbc
2026-04-30 11:29:19 +03:00
Developer-Ecosystem-Engineering 21da84303e
metal : optimize Metal Tensor API usage for GGML_OP_MUL_MAT (llama/20962)
* Optimize Metal Tensor API usage for matmul2d

Separates the Metal Tensor API (matmul2d) path in kernel_mul_mm into its own standalone kernel, gated by GGML_METAL_HAS_TENSOR.

The legacy simdgroup_matrix kernel is preserved under #else.

Previously both paths were interleaved via #ifdef blocks within a single kernel, forcing the tensor path to share the legacy kernel's data layout and threadgroup memory scheme. Splitting the kernel enabled memory and dispatch optimizations that weren't possible when the two paths shared code structure.

* cont : cleanup

* cont : cleanup

* cont : cleanup

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-04-30 11:29:19 +03:00
Neo Zhang 6296fd5a90
Optimize Q4_0 mul_mat for Arc770, add scripts (llama/22291)
* opt arc770 for Q4_0

* add for Q4_0

* update the script

* add help script for windows

* update guide

* fix format issue

* convert from dos to unix for format issue

* fix missed -sm parameter
2026-04-30 11:29:19 +03:00
Reese Levine c235b05d8a
ggml-webgpu: support for SSM_SCAN and disable set_rows error checking (llama/22327)
* Implement ssm_scan

* Remove blocking in graph_compute and check for set rows

* Fix bindings

* Update op support
2026-04-30 11:29:19 +03:00
Trivikram Reddy c546b0b1bc
Hexagon: Bump HMX Frequency to Max Corner (llama/22334)
* hexagon: bump HMX freq to max corner

* hex-mm: fix error in log msg
2026-04-30 11:29:18 +03:00
Zheyuan Chen 35d679a4f8
ggml-webgpu: enable FLASH_ATTN_EXT on browser without subgroup matrix (llama/22199)
* ggml-webgpu: add tile flash attention fallback

* ggml-webgpu: add new fields and discard usage of mnk for tile version

* ggml-webgpu: modify the vec path to discard the mnk parameter

* ggml-webgpu: enable flash attention vec and tile version for browser

* ggml-webgpu: staging KV for flash attention tile version

* formatting

* turn on subgroup uniformity check

* remove Q_TILE as it is always 1 for vec path

* make row_max and exp_sum to local register

* make different bindings with the same underlying buffer have the same usage flags

* move path selection into the shader library and have the host consume a single flash-attn decision object.

* turn off skip_validation and address buffer overlapping when nwg==1

* formatting

* merge binding when kv overlap
2026-04-30 11:29:18 +03:00
Mengsheng Wu 6576c4da90
hexagon: use DIRID 13 in libggml-htp.inf for modern InfVerif (llama/22306) 2026-04-30 11:29:18 +03:00
Georgi Gerganov 07d6db39e5
metal : print GPU description (llama/22318) 2026-04-30 11:29:18 +03:00
Georgi Gerganov dfb8b68799
ggml : minor coding style (llama/22308) 2026-04-30 11:29:18 +03:00
Mengsheng Wu 23921d5a69
hexagon: add SOLVE_TRI op (llama/21974)
* hexagon: add SOLVE_TRI op

* ggml: fix TODO description for solve_tri

* hexagon: rm unused variable/function warnings

* hexagon: chunk vs batch processing for better thread utilization

* hexagon: vectorize partial f32 loads

* hexagon: move HVX f32 add/sub/mul wrappers to hvx-base.h

---------

Co-authored-by: Todor Boinovski <todorb@qti.qualcomm.com>
2026-04-30 11:29:18 +03:00
Chen Yuan 641998f558
fix(shader): handle the buffer aliasing for rms fuse (llama/22266) 2026-04-30 11:29:17 +03:00
Max Krasnyansky 71b1ab3784
hexagon: add support for basic and extended Op profiling (llama/22269)
* hexagon: restore HTP_OPMASK_QUEUE

* hexagon: honor OPMASK_SKIP_COMPUTE in hmx-matmul

* hex-prof: restore op profiling

* hex-prof: enable PMU

* hexagon: simplify and improve op-queuing with full profiling support

Add separate profile descriptors.

* hexagon: remove opsync and rename opmask into opstage

opsync is no longer needed since the profiler is fully async now.
opmask name was confusing and opstage is more accurate.

* hexagon: refactor opbatch queue handling

* hexagon: add iface hooks for enabling profiler from the host

Also move all the PMU setup stuff out of the hex-utils since it's not intended for normal use.

* hexagon: make profiler mode configurable

On older devices getting PMU counters is expensive so it's now optional.

* hexagon: add support for setting profiler pmu events from env

* hexagon: simplify profiler output (no need to print buffs, etc)

* hexagon: simplify pmu counter formatting

* hexagon: add a simple profile post-proc tool

* hex-prof: add support for reading logs from stdin

* hexagon: document GGML_HEXAGON_PROFILE

* hex-prof: update default width for dims field

* hex-prof: fix linter warnings and errors

* Update ggml/src/ggml-hexagon/htp/htp-ops.h

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update scripts/snapdragon/ggml-hexagon-profile.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Trivikram Reddy <tamarnat@qti.qualcomm.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-04-30 11:29:17 +03:00
Georgi Gerganov 682ee99305
metal : fix event synchronization (llama/22260) 2026-04-30 11:29:17 +03:00
Georgi Gerganov 1aba061737
ggml-base: use MATH_LIBRARY variable instead of hardcoded 'm' (llama/22239)
Fixes #22237 — the find_library(MATH_LIBRARY m) result was being
discarded and the target linked against the literal 'm' string.

This prevents users from overriding the math library (e.g. for AMD AOCL)
via CMake variables. Now the discovered MATH_LIBRARY is used directly.
2026-04-30 11:29:17 +03:00
abotsis b938c5026c
sycl : fused MoE mul_mat_vec_q for TG (llama/21920)
* sycl : fused MoE mul_mat_vec_q for TG

Create an MMVQ kernel so ggml_sycl_mul_mat_id can consolidate
n_experts_used matmuls in a single kernel launch. The kernel
also reads expert IDs directly, removing a per-call host sync.

This is similar to the CUDA backend's ggml_cuda_mul_mat_vec_q*
paths.

All types supported in the current MMVQ are supported here as well:
Q2_K, Q3_K, Q4_K, Q5_K, Q6_K, Q4_0, Q4_1, Q5_0, Q5_1, Q8_0

It will fall back to the existing per-expert path when src0 has been rewritten
by opt_for_reorder(), and for any shape the fused path doesn't handle.

test-backend-ops passes for supported type/shape combos.

Benchmark: Qwen3-Next-35B-A3B Q4_K_M on Intel Arc B70 (SYCL0),
baseline 707c0b7a6, 16k context, -fa 0.

  build/bin/llama-bench -hf unsloth/Qwen3.5-35B-A3B-GGUF:Q4_K_M \
    -p 1024 -n 128 -d 16384 -ngl 99 -fa 0 -ub 2048 -r 2 -dev SYCL0

Before (3 runs on 707c0b7a6):

  | test            |            run 1 |            run 2 |            run 3 |
  | --------------- | ----------------:| ----------------:| ----------------:|
  | pp1024 @ d16384 |   533.26 ±  4.87 |   535.20 ±  2.78 |   524.27 ±  3.10 |
  | tg128  @ d16384 |    33.47 ±  0.02 |    33.31 ±  0.02 |    33.17 ±  0.05 |

After (3 runs on 707c0b7a6 + this patch):

  | test            |            run 1 |            run 2 |            run 3 |
  | --------------- | ----------------:| ----------------:| ----------------:|
  | pp1024 @ d16384 |   534.06 ±  0.97 |   531.95 ±  0.02 |   520.94 ± 20.10 |
  | tg128  @ d16384 |    45.85 ±  0.21 |    45.95 ±  0.45 |    46.22 ±  0.12 |

disclosure: Claude wrote it, but I reviewed and understand the implementation
(albeit my C is a little rusty).

* sycl: also support nvfp4 and mxfp4 expert types

* sycl: terser comments/nested dispatch in response to review

* sycl: more comment cleanup in mmvq.cpp/hpp

---------

Co-authored-by: Debian <aaron@openllmi.net.bots.is>
2026-04-30 11:29:17 +03:00
Chen Yuan df528c4f71
ggml-webgpu: add support for im2col (llama/22259)
* shader(im2col): implement the im2col shader

* shader(im2col): clean the formatting issues

* shader(im2col): clean the editorconfig checker warning

* fix(shader): address the workgroup issues of im2col and conv2d
2026-04-30 11:29:17 +03:00
Anav Prasad b6b547885c
CUDA: fuse relu + sqr (llama/22249) 2026-04-30 11:29:16 +03:00
uvos 393fdffe20
HIP: flip GGML_HIP_GRAPHS to default on (llama/22254)
In #11362 HIP graphs were disabled by default because, at the time, their performance impact was negative. Due to improvements in ROCm and in how we use and construct graphs, this is no longer true, so let's change the default.
2026-04-30 11:29:16 +03:00
Nikhil Jain d2a26dc8e2
Implement async tensor api and event api (llama/22099)
* Only run webgpu CI on my fork

* Implement set_tensor_async

* Implement synchronize api

* Implement event creation and deletion API

* Cleanup

* Cleanup

* Comment out jobs for local CI run

* Add webgpu only workflow

* Delete .github/workflows/build-webgpu.yml

* Cleanup

* Cleanup

* Update API with function handlers

* Run clang-format

* Replace one-shot buffer with a direct queue.WriteBuffer using the buffer context
2026-04-30 11:29:16 +03:00
Masashi Yoshimura 0fbe4c4ca7
ggml-webgpu: Add fused RMS_NORM + MUL (llama/21983)
* fused rms_norm_mul + mul

* Add GGML_WEBGPU_DISABLE_FUSION for being able to disable kernel fusion.

* Decouple num_fused_ops from webgpu_context; misc cleanup

* Fix eps handling and remove disable_fusion.

* Fix not to use c++20 initializers.
2026-04-30 11:29:16 +03:00
Akarshan Biswas c5bb7c0078
sycl: Improve mul_mat_id memory efficiency and add BF16 fast path (llama/22119)
* sycl: size mul_mat_id staging buffers by routed rows

Previously src1_contiguous/dst_contiguous in ggml_sycl_mul_mat_id were
sized to ggml_nelements(src1/dst), which over-allocates when ne12 > 1
and can fail with UR_RESULT_ERROR_OUT_OF_HOST_MEMORY on Level Zero for
MoE models (notably with --cpu-moe). Size them by the actual number of
routed rows (ids->ne[1] * n_ids) instead.

* sycl: add bf16 mul_mat fast path via DNNL

When src0 is BF16 (commonly the case for lm_head / output.weight), the
existing f16 path is skipped because bf16 isn't covered, and the f32
fallback dequantizes the entire src0 slab to f32 in a single pool alloc
(row_diff*ne00 floats). For large-vocab models this can reach several
GB and fail with UR_RESULT_ERROR_OUT_OF_HOST_MEMORY on Level Zero.

Add a bf16xbf16 -> f32 DNNL matmul fast path that uses the bf16 storage
in place and only materializes a small src1 bf16 conversion buffer. bf16
matmul accumulates in f32, so it's correct even when the op requests
GGML_PREC_F32 (as lm_head does).

- gemm.hpp: map bfloat16 to dnnl::memory::data_type::bf16.
- convert.{hpp,cpp}: expose ggml_get_to_bf16_sycl for f32/f16/bf16 -> bf16.
- ggml-sycl.cpp: take the bf16 path early in ggml_sycl_op_mul_mat_sycl
  when DNNL and GGML_SYCL_HAS_BF16 are both available.
2026-04-30 11:29:16 +03:00
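A rough scalar sketch of why a bf16 path avoids the large dequant buffer, as described above: bf16 is the upper 16 bits of an IEEE f32, so the stored weights can be consumed in place and only the accumulator needs to be f32. This is illustrative code, not the DNNL path the commit actually wires up.

```cpp
#include <cstdint>
#include <cstring>

static inline float bf16_to_f32(uint16_t h) {
    uint32_t u = uint32_t(h) << 16;          // bf16 = upper half of the f32 bit pattern
    float f;
    std::memcpy(&f, &u, sizeof(f));
    return f;
}

float dot_bf16(const uint16_t * a, const uint16_t * b, int n) {
    float acc = 0.0f;                        // accumulate in f32 (matches GGML_PREC_F32)
    for (int i = 0; i < n; ++i) {
        acc += bf16_to_f32(a[i]) * bf16_to_f32(b[i]);
    }
    return acc;
}
```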
Chen Yuan 447be522e9
ggml-webgpu(shader): support conv2d kernels. (llama/21964)
* ggml(webgpu): fix the busy-polls in Emscripten in the waitAny after #20618, and remove the busy webgpu log

* Merge with upstream

* Fix GET_ROWS packed integer NaN when using f16 as memory buffer in shader quants

* Update Unary wgsl EXP and EXPM1 for f16 stability

* Fix GET_ROWS IQ4_XS struct for NaN f16 canonicalization

* Fix numerical precision for unary sqrt when working with f16

* Fix NaN canonicalization for packed integers using f16

* Update err threshold for binary div ops when using f16

* backend: Keep one Dawn/WebGPU instance alive for the lifetime of the static backend

* clean: uncomment existing code logs

* clean: clean the unncessary debug info

* Refactor and generalize dequant helpers

* Remove deprecated quant structs

* Refactor shader defines to reduce repetition

* Remove error override for F16 type

* fix: fix the accidental removal of the proper initialization of ctx

* clean: clean legacy and format code

* fix: did not modify tests ops

* shader(conv2d): add conv2d shader kernels and pass f32 and f16 tests

* shader(conv2d): fix the out of bounds memory access in the weight indexing

* shader(conv2d): clean unused variables and optimize the computation

* merge: use the new entries function

* clean: address the formatting issues

* clean: address the warning issues

* clear: clean the shader editorconfig-checker issues

* clear: clean the shader editorconfig-checker with utf-8

---------

Co-authored-by: Jeremy J. Hartmann <jeremy@mtion.tv>
2026-04-30 11:29:16 +03:00
Aparna M P d6a417408c
hexagon: add support for FILL op (llama/22198)
Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
2026-04-30 11:29:15 +03:00
Masashi Yoshimura 2e5eb6e951
ggml-webgpu: reset CPU/GPU profiling time when freeing context (llama/22050)
* Reset the CPU/GPU profiling time when freeing context.

* move GPU profiling time from global context to webgpu_context.
2026-04-30 11:29:15 +03:00
Shreya Jain 84a6b5c039
Hexagon: DIAG op (llama/22195)
* hexagon: Add DIAG op

* hexagon: add HVX support and DMA double buffering

* hexagon: fix fatal error

* hexagon: remove as many pragma(s) as possible
2026-04-30 11:29:15 +03:00
Mengsheng Wu e2014d6959
hexagon: fix missing v79 entry in libggml-htp.inf (llama/22194) 2026-04-30 11:29:15 +03:00
Zijun Yu 3a73f9cf0b
openvino: driver setup, CI split, thread safety, and NPU optimizations (llama/21944)
* Thread safety per request only

* Fix ROPE yarn case

* Fix sticky stateful config

* Use i4/i8 directly for symmetric quant

* Use weightless caching

* Add WeightlessCacheAttribute to reduce NPU memory usage

* Gelu tanh support (llama/125)

* Imrope support (llama/126)

* fix(openvino): explicit ov::Tensor frees in ggml_backend_openvino_free

* add GPU,NPU support in OV Dockerfile

* add build-openvino.yml ci

* Fix sticky stateful config

* add concurrency to ov-gpu ci runs. Move OV CI to build-openvino.yml

* fix thread-safety of shared runtime context

* rope type abstraction for frontend translations

* fix editorconfig

---------

Co-authored-by: Mustafa Cavus <mustafa.cavus@intel.com>
Co-authored-by: Dan Hoffman <dhoff749@gmail.com>
Co-authored-by: Ravi Panchumarthy <ravi.panchumarthy@intel.com>
2026-04-30 11:29:15 +03:00
Georgi Gerganov 150cef5a5f
metal : workaround macOS GPU interactivity watchdog (llama/22216) 2026-04-30 11:29:15 +03:00
Jeff Bolz 85bbc82209
vulkan: Support F16 OP_FILL (llama/22177) 2026-04-30 11:29:14 +03:00
Georgi Gerganov e7cffdbd0b
ggml : bump version to 0.10.0 (ggml/1463) 2026-04-30 11:29:14 +03:00
leonardHONG b13deaabae
ggml-cuda: flush legacy pool on OOM and retry (llama/22155)
* ggml-cuda: flush legacy pool on OOM and retry

Signed-off-by: 梁厚宏 <2695316095@qq.com>

* Address review comments: add explicit sync, update destructor, clean up MUSA macros

Signed-off-by: 梁厚宏 <2695316095@qq.com>

---------

Signed-off-by: 梁厚宏 <2695316095@qq.com>
2026-04-30 11:29:14 +03:00
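A hedged sketch of the flush-on-OOM-and-retry idea from the commit above; the pool type here is a stand-in, not the ggml-cuda legacy pool, and `std::malloc`/`std::free` stand in for the device allocation calls.

```cpp
#include <cstdlib>
#include <vector>

// Illustrative-only pool: cached blocks are simply freed to simulate a flush.
struct legacy_pool {
    std::vector<void *> cached;                          // blocks kept around for reuse

    void * try_alloc(size_t size) { return std::malloc(size); } // stand-in for the device alloc
    void flush() {
        for (void * p : cached) std::free(p);            // return cached blocks to the driver
        cached.clear();
    }
};

void * alloc_with_retry(legacy_pool & pool, size_t size) {
    if (void * p = pool.try_alloc(size)) return p;       // fast path
    pool.flush();                                        // OOM: release everything cached first
    return pool.try_alloc(size);                         // retry once; nullptr = real OOM
}
```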
Gaurav Garg 239c5c86c3
Tensor-parallel: Fix delayed AllReduce on Gemma-4 MoE (llama/22129)
* Fix delayed AllReduce on Gemma-4 MoE

Skip forward past nodes that don't consume the current one, and allow a chain of MULs.

* Check for all sources before skipping nodes

* Address review comments
2026-04-30 11:29:14 +03:00
Johannes Gäßler 6429023e5f
TP: fix 0-sized tensor slices, AllReduce fallback (llama/21808)
* TP: fix 0-sized tensor slices, AllReduce fallback

* fix layer structure <-> GPU count aliasing

* add missing std::fill

* fix CUDA device set, max ggml ctx size
2026-04-30 11:29:14 +03:00
pl752 2b9fb0be77
ggml-cpu: Optimized x86 and generic cpu q1_0 dot (follow up) (llama/21636)
* Implemented optimized q1_0 dot for x86 and generic

* Removed redundant helper definition

* Removed two redundant instructions from AVX q1_0 dot

* Fixed inconsistency with fp16 conversion for generic q1_0 dot and deduplicated generic fallback

* Style cleanup around AVX q1_0 dot

* Replaced explicitly unrolled blocks with inner for loop for q1_0

* Replaced scalar ARM q1_0 impl with new generic one
2026-04-30 11:29:14 +03:00
neha-ha 5f21fdcbb9
ggml-webgpu: updated matrix-vector multiplication (llama/21738)
* merged properly, but slow q3_k and q5_k with u32 indexing

* Start on new mat-vec

* New format float paths working

* Working q4_0

* Work on remaining legacy q-types

* port k-quants to new matvec

* remove old shader

* Remove old constants, format

* remove accidental file

---------

Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>
Co-authored-by: Reese Levine <reeselevine1@gmail.com>
2026-04-30 11:29:13 +03:00
Katostrofik 931cf2f3a8
Fix reorder MMVQ assert on unaligned vocab sizes (llama/22035)
* [SYCL] Fix reorder MMVQ assert on unaligned vocab sizes

The reorder mul_mat_vec_q dispatchers for Q4_0, Q8_0, Q4_K, and Q6_K
asserted that block_num_y was a multiple of 16 subgroups. Models with
a vocab size not divisible by 16 (for example HY-MT at 120818) aborted
on model load when the output projection tripped the assert.

I replaced the assert with padding: block_num_y now rounds up to a
whole number of subgroup-sized workgroups. The kernel already has the
row bounds check (`if (row >= nrows) return;`) so the extra padded
threads early-exit cleanly. Row values are uniform across a subgroup
so the collective reduce stays safe.

For aligned vocab sizes the padded block_num_y equals the old value,
so the kernel launch is identical and there is no regression.

Thanks to @arthw for flagging the relationship to #21527.

Fixes #22020.

AI assisted coding, tested on Intel B70 hardware.

* sycl: use WARP_SIZE for num_subgroups in reorder MMVQ launches

Replaces the hardcoded 16 with WARP_SIZE in the four reorder_mul_mat_vec
launch helpers (Q4_0, Q8_0, Q4_K, Q6_K). Compile-time no-op on the Intel
target where WARP_SIZE is 16, but makes the relationship to subgroup
size explicit. Per review by @NeoZhangJianyu on #22035.

Assisted by Claude.
2026-04-30 11:29:13 +03:00
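Host-side arithmetic sketch of the assert-versus-padding change above (the real code is a SYCL kernel launch; `rows_per_wg` is an illustrative name): round the workgroup count up and let the kernel's existing `if (row >= nrows) return;` retire the padded threads.

```cpp
#include <cassert>
#include <cstdint>

// Old: abort unless the row count maps to a whole number of workgroups.
uint32_t block_num_y_strict(uint32_t nrows, uint32_t rows_per_wg) {
    assert(nrows % rows_per_wg == 0);          // unaligned vocab sizes tripped this
    return nrows / rows_per_wg;
}

// New: round up; padded threads early-exit in the kernel, and for aligned
// sizes the launch is identical to before (no regression).
uint32_t block_num_y_padded(uint32_t nrows, uint32_t rows_per_wg) {
    return (nrows + rows_per_wg - 1) / rows_per_wg;
}
```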
Johannes Gäßler b8f57c9c50
CUDA: refactor mma data loading for AMD (llama/22051)
* CUDA: refactor mma data loading for AMD

* fix CDNA MMQ occupancy

* fix CDNA3 mma

* fix RDNA3 compile
2026-04-30 11:29:13 +03:00
uvos 945746b40c
HIP: Remove unesscary NCCL_CHECK (llama/21914) 2026-04-30 11:29:13 +03:00
Gaurav Garg 671fd1527a
ggml : reduce CPU overhead in meta backend (llama/22041)
* cache subgraph splits when cgraph is unchanged

Skip per-call subgraph construction in ggml_backend_meta_graph_compute when the same ggml_cgraph is used consecutively.

Assign uid to every sub-graph so that CUDA's fast uid check path hits too.

* Address review comments

* Keep the scope as is

* Rename last_uid and last_n_subgraphs field. Remove last_max_tmp_size field. Refactor code.

* Address review comments

* Update ggml/src/ggml-backend-meta.cpp

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml/src/ggml-backend-meta.cpp

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-04-30 11:29:13 +03:00
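A hedged sketch of the uid-based reuse described in the commit above: rebuild the subgraph splits only when the incoming graph uid changes. The types here are placeholders, not the ggml meta-backend structures.

```cpp
#include <cstdint>

struct splits_t { /* per-backend subgraphs would live here */ };

struct split_cache {
    uint64_t last_uid = 0;        // 0 = nothing cached yet
    splits_t last_splits;

    const splits_t & get(uint64_t graph_uid, splits_t (*build)(uint64_t)) {
        if (graph_uid != last_uid) {          // new or changed graph: rebuild the splits
            last_splits = build(graph_uid);
            last_uid    = graph_uid;
        }
        return last_splits;                   // unchanged graph: skip per-call construction
    }
};
```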
texasich 171f037fba
cmake: remove CMP0194 policy to restore MSVC builds (llama/21934)
#21630 added the CMP0194 NEW policy to silence a CMake warning, but on Windows runners it caused CMake to prefer the MinGW toolchain for ASM and broke MSVC builds.

Reverting only that policy block restores the previous working behavior. The CMake 4.1+ warning comes back, but that is cosmetic and does not break any platform.

Reported-by: oobabooga

Refs: #21630

Co-authored-by: texasich <texasich@users.noreply.github.com>
2026-04-30 11:29:12 +03:00
Radoslav Gerganov 32789b9e07
rpc : refactor the RPC transport (llama/21998)
* rpc : refactor the RPC transport

Move all transport related code into a separate file and use the
socket_t interface to hide all transport implementation details.

* fix win32

* better socket_t construction
2026-04-30 11:29:12 +03:00
SamareshSingh a899e4bdcb
ggml-backend-meta: add multi-segment read support in get_tensor (llama/22063) 2026-04-30 11:29:12 +03:00
Reese Levine cbbe935765
ggml-webgpu: fix compiler warnings and refactor FlashAttention encoding (llama/21052)
* Update workflows to remove dependence on llvmpipe

* Try setting Dawn_DIR

* remove c++20 initializers

* Move to proper guid

* Try avoiding segfaults on vulkan backend process exit

* Remove compiler warnings on parameter casting

* Fix soft_max and update reg_tile accumulation to f32 for better precision

* Refactor flash_attn a bit

* remove c++20 initializers and format

* Increase div precision for NVIDIA

* revert div precision and comment out ggml-ci node for now

* Formatting

* Try debugging on a failing CI node

* Revert "Try debugging on a failing CI node"

This reverts commit 1971e33cba919915e12bcfd5828abfbd54ca942e.
2026-04-30 11:29:12 +03:00
Aman Gupta 918e0ad209
CUDA: use LRU based eviction for cuda graphs (llama/21611)
* CUDA: use a ring-buffer for cuda graphs

* bump limit to 128

* use LRU eviction

* better naming

* do periodic clean-up
2026-04-30 11:29:12 +03:00
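A generic LRU sketch of the eviction policy described in the commit above, built on std containers; it is not the ggml-cuda graph-cache code, and the key/value types are placeholders.

```cpp
#include <list>
#include <unordered_map>
#include <utility>

template <typename K, typename V>
struct lru_cache {
    size_t capacity;                                                 // e.g. 128, per the commit
    std::list<std::pair<K, V>> items;                                // front = most recently used
    std::unordered_map<K, typename std::list<std::pair<K, V>>::iterator> index;

    explicit lru_cache(size_t cap) : capacity(cap) {}

    V * get(const K & key) {
        auto it = index.find(key);
        if (it == index.end()) return nullptr;
        items.splice(items.begin(), items, it->second);              // mark as most recently used
        return &it->second->second;
    }

    void put(const K & key, V value) {
        if (V * v = get(key)) { *v = std::move(value); return; }     // update + refresh recency
        items.emplace_front(key, std::move(value));
        index[key] = items.begin();
        if (items.size() > capacity) {                               // evict least recently used
            index.erase(items.back().first);
            items.pop_back();
        }
    }
};
```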
lhez 77c0630ce6
opencl: refactor q8_0 set_tensor and mul_mat host side dispatch for Adreno (llama/21938)
* opencl: refactor q8_0 gemm/gemv Adreno dispatch

* opencl: refactor q8_0 set_tensor

* opencl: fix whitespace
2026-04-30 11:29:12 +03:00
nullname b25d5d050b
hexagon: optimize HMX matmul operations (llama/21071)
* optimize hmx_mat_mul functions by calculating row and column tiles upfront

* refactor core_dot_chunk_fp16 to use size_t for tile counts and improve readability

* wip

* set scale outside of loop

* wip

* refactor core_mma_chunk_fp16 and mat_mul_qk_0_d16a32 to use size_t for tile counts

* wip

* wip

* refactor transfer_output_chunk_fp16_to_fp32 to use size_t for dimensions

* refactor core_dot_chunk_fp16 to use size_t for tile row stride calculation

* wip

* refactor hmx_mat_mul functions to use hvx_vec_splat_f16 for column scales initialization

* refactor hmx_mat_mul_permuted_w16a32_batched to streamline scale setting and locking

* refactor core_dot_chunk_fp16 to improve tile stride calculations for output

* refactor hmx_mat_mul functions to use Q6_V_vsplat_R for column scales initialization

* fix compiling error

* wip

* optimize row and column tile indexing in core_mma_chunk_fp16 function

* wip

* Revert "wip"

This reverts commit cde679eff79c4a28dd2d89d32f710015e09592b6.

* Add size limit check for HAP_mmap in htp_iface_mmap and drop_mmap functions

* wip
2026-04-30 11:29:11 +03:00
shaofeiqi 57a48a4850
opencl: add q5_K gemm and gemv kernels for Adreno (llama/21595) 2026-04-30 11:29:11 +03:00
Aman Gupta 820438ae2c
ggml: add graph_reused (llama/21764)
* ggml: add graph_reused

* use versioning instead of reuse flag

* increment version with atomic

* use top bits for split numbering

* add assert

* move counter to ggml.c

* set uid in split_graph only

* fix windows

* address further review comments

* get next_uid rather than doing bit manipulation

* rename + add comment about uid
2026-04-30 11:29:11 +03:00
Kusha Gharahi 655c0750f5
metal: Implement ROLL op (llama/21946)
* nix: support unified apple-sdk

* Impl roll op for Metal

* Revert "nix: support unified apple-sdk"

This reverts commit abfa473360471532c547de8b202c780507924d4b.

* update ops.md

* update op docs
2026-04-30 11:29:11 +03:00
rehan-10xengineer 94d6d0b743
ggml-cpu: add 128-bit RVV implementation for Quantization Vector Dot (llama/20633)
* ggml-cpu: add 128-bit impls for i-quants, ternary quants

* ggml-cpu: add 128-bit impls for iq2_xs, iq3_s, iq3_xxs, tq2_0

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: refactor; add rvv checks

---------

Co-authored-by: taimur-10x <taimur.ahmad@10xengineers.ai>
Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>
2026-04-30 11:29:11 +03:00
rehan-10xengineer 07c181b57f
ggml : implemented simd_gemm kernel for riscv vector extension (llama/20627)
Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>
2026-04-30 11:29:11 +03:00
Reese Levine 092330b474
ggml-webgpu: compute pass batching and removing profiling overhead (llama/21873)
* Update register tiling matmul to use f32 accumulation

* fix profiling code

* Fix register tiling matmul for chrome, i'm blaming dawn

* Update batch tuning value for iOS

* compile fix

* Fix use of new load function

* Move to a single query set for GPU profiling

* Move to batching compute passes when not profiling

* Refactor build_multi

* remove iOS throttling now that we're batching compute passes
2026-04-30 11:29:10 +03:00
Katostrofik f62bb13320
Fix Q8_0 reorder: garbage on 2nd prompt + crash on full VRAM (llama/21638)
* [SYCL] Fix Q8_0 reorder: add missing dequantize path for GEMM

The Q8_0 reorder optimization (#21527) was missing a reorder-aware
dequantizer for the GEMM code path used during prompt processing.
After token generation reordered Q8_0 weights (via DMMV/MMVQ), the
next prompt processing pass would read them with the standard
dequantizer, producing garbage output.

Add dequantize_block_q8_0_reorder() and wire it into both
ggml_get_to_fp16_sycl() and ggml_get_to_fp32_sycl(), matching the
pattern already used by Q4_0, Q4_K, and Q6_K.

Fixes #21589

AI (Claude) was used to assist with root cause investigation and
writing the kernel code. All code was human-reviewed and tested
on real hardware.

* SYCL: fix reorder crash when device memory is full

The reorder optimization allocates a temporary buffer the full size of
the weight tensor on the device. When VRAM is nearly full (large models
on a single GPU), this allocation fails and the subsequent memcpy crashes
on a NULL pointer.

Fix: try device allocation first, fall back to host memory if device
memory is full. The reorder kernel still works correctly reading from
host memory over PCIe. This is slower for the one-time reorder (~21 t/s
vs ~38 t/s on Intel Arc Pro B70), but the optimization is preserved for
all subsequent inference. If both device and host allocation fail, skip
the reorder and fall back to the unoptimized kernel path.

Also fixes a bug where opt_for_reorder() marked tensors as reordered
even when the reorder was skipped due to allocation failure. This caused
DMMV/MMVQ kernels to read the original AoS data as if it were SoA,
producing garbage output or NaN results.

Tested on Intel Arc Pro B70 (32GB) with Q8_0, Q4_K_M models. Coding was
AI-assisted (Claude), reviewed and tested on hardware by a human.

Fixes #20478

* SYCL: add RAII temp buffer class + macro guard for host fallback

Replace sycl_ext_malloc_with_fallback/sycl_ext_free_fallback free
functions with sycl_reorder_temp_buffer RAII class. The host_fallback
bool is now a private member, and cleanup happens automatically at
scope exit.

Add GGML_SYCL_HOST_MEM_FALLBACK cmake option (default ON) to guard
the host memory fallback code path. Device access to host memory
requires Linux kernel 6.8+ (Ubuntu 26.04+); users on older kernels
can set -DGGML_SYCL_HOST_MEM_FALLBACK=OFF to disable it.

Addresses arthw's review on PR #21638.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* SYCL: document GGML_SYCL_HOST_MEM_FALLBACK build option in SYCL.md

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* SYCL: add reorder-aware DMMV dequantizers for Q4_K and Q6_K

Q4_K and Q6_K had reorder support for MMVQ and GEMM paths but not
DMMV. When the DMMV path encountered reordered data it would abort.

Add DMMV kernels that read from the SOA reorder layout for both
types. Same math as the non-reorder versions, different memory
access pattern.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-30 11:29:10 +03:00
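A generic RAII sketch of the allocation strategy described above (device first, host fallback, skip the reorder if both fail); the allocator hooks are stubs rather than the SYCL calls, and `HOST_MEM_FALLBACK` stands in for `GGML_SYCL_HOST_MEM_FALLBACK`.

```cpp
#include <cstdlib>

struct reorder_temp_buffer {
    void * ptr     = nullptr;
    bool   host_fb = false;                  // true if we fell back to host memory

    bool alloc(size_t size) {
        ptr = device_malloc(size);           // placeholder for the device allocator
        if (ptr) return true;
#ifdef HOST_MEM_FALLBACK
        ptr     = std::malloc(size);         // placeholder for a host allocation
        host_fb = ptr != nullptr;
        return host_fb;
#else
        return false;                        // caller skips the reorder entirely
#endif
    }

    ~reorder_temp_buffer() {                 // cleanup happens automatically at scope exit
        if (!ptr) return;
        if (host_fb) std::free(ptr);
        else         device_free(ptr);
    }

    static void * device_malloc(size_t) { return nullptr; } // illustrative stub
    static void   device_free(void *)   {}                  // illustrative stub
};
```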
Ruben Ortlam 7fe6b8e171
vulkan: optimize im2col (llama/21713)
* vulkan: improve im2col memory write layout

* cap workgroups

* minimal device tuning

* use vendor_id instead of subgroup size
2026-04-30 11:29:10 +03:00
Pasha Khosravi c6d1fbf31f
cuda: Q1_0 initial backend (llama/21629)
* [cuda] initial Q1_0 backend

* remove unused code, fix AMD MMA guard

* attempt to support dp4a

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-04-30 11:29:10 +03:00
Reese Levine 2a785c5969
ggml-webgpu: Fix dequantization helpers to not pass in pointers (llama/21872)
* Fix dequantization helpers to not pass in pointers

* Increase XIELU precision
2026-04-30 11:29:10 +03:00
Johannes Gäßler 9638e29657
CUDA: require explicit opt-in for P2P access (llama/21910) 2026-04-30 11:29:10 +03:00
Johannes Gäßler 7e57b20d53
CUDA: manage NCCL communicators in context (llama/21891)
* CUDA: manage NCCL communicators in context

* add check that all backends are CUDA

* remove unused vector, limit init to > 1 GPUs

* fix warnings

* fix cuda device, cache allreduce
2026-04-30 11:29:09 +03:00
Valeriy Dubov 182db04cb2
rpc : add native RDMA transport for RPC backend (RoCEv2) (llama/20590) 2026-04-30 11:29:09 +03:00
Xuan-Son Nguyen 86d94cd95b
docs: more extensive RoPE documentation [no ci] (llama/21953)
* more extensive ggml_rope documentation

* add more docs

* nits
2026-04-30 11:29:09 +03:00
Yiwei Shao 24cc89e477
hexagon: optimization for HMX mat_mul (llama/21554)
* hexagon: add async HMX worker

Introduce hmx-worker (dedicated thread for HMX compute) to overlap HMX
matmul with HVX dequant/DMA stages in the pipeline path, replacing the
previous synchronous HMX calls that blocked the main thread.

* hexagon: cost-based VTCM chunk search for out-stationary matmul

* hexagon: fix futex race in hmx_worker_drain
Store the boolean in a local variable to avoid loading the atomic twice

* hex-mm: hmx optimize scatter/transpose and use HMX intrinsics

* hex-vmem: drop vmem limit a touch under 3GB on v73

* hexagon: add fwd declaration of htp_context

* hex-hmx: replace hmx-worker with hmx-queue that mimics dma-queue interface

Simplifies the overall implementation and reduces thread wakeup roundtrips.

* hex-mm: add debug log to hmx work func called from hmx-queue

* Update hmx-queue.h

Co-authored-by: Max Krasnyansky <max.krasnyansky@gmail.com>

---------

Co-authored-by: Kim-Chyan Gan <kgan@qti.qualcomm.com>
Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
Co-authored-by: Max Krasnyansky <max.krasnyansky@gmail.com>
2026-04-30 11:29:09 +03:00
Xuan-Son Nguyen 44d86c4921
ggml : remove ggml-ext.h (llama/21869)
* ggml: correct placement of ggml-ext.h

* ggml : remove ggml-ext.h

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-04-30 11:29:09 +03:00
Georgi Gerganov 08e412c862
metal : fix FA support logic (llama/21898) 2026-04-30 11:29:09 +03:00
Jeff Bolz 45365fa111
vulkan: Programmatically add RoundingModeRTE to all shaders when the device supports it (llama/21572)
* vulkan: Programmatically add RoundingModeRTE to all shaders when the device supports it

* use FetchContent to get SPIRV-Headers

* Fetch spirv-headers unconditionally

* remove fetchcontent, rely on installed headers

* fix ubuntu job

* Update docs/build.md
2026-04-30 11:29:08 +03:00
Georgi Gerganov 7024f7e5c1
ci : re-enable mac workflows (llama/21894)
* ci : re-enable mac workflows

* vulkan : fix compile warning
2026-04-30 11:29:08 +03:00
Seyoung Jeong 691b1d0826
metal : add XIELU unary op (llama/20802) 2026-04-30 11:29:08 +03:00
Richard Davison 80f7be74bb
ggml : fix ARM NEON nvfp4 dot product on non-dotprod targets (llama/21559) 2026-04-30 11:29:08 +03:00
texasich bfdcd4a92c
cmake: fix CMP0194 warning on Windows with MSVC (llama/21630)
* cmake: fix CMP0194 warning on Windows with MSVC

Set CMP0194 policy to NEW before project() call in ggml/CMakeLists.txt to suppress the "MSVC is not an assembler for language ASM" warning introduced in CMake 4.1.

The ggml project enables ASM globally for Metal (macOS) and KleidiAI (ARM) backends. On Windows/MSVC, no assembler sources are used, but CMake 4.1+ warns because cl.exe is not a valid ASM compiler.

This follows the same pattern used in ggml-vulkan (CMP0114, CMP0147).

Closes ggml-org/llama.cpp#20311

* cmake: apply cisc's formatting suggestion

---------

Co-authored-by: texasich <texasich@users.noreply.github.com>
2026-04-30 11:29:08 +03:00
Reese Levine b732f4d9b5
ggml-webgpu: Update register tiling matmul to use f32 accumulation (llama/21644)
* Update register tiling matmul to use f32 accumulation

* fix profiling code

* Fix register tiling matmul for chrome, i'm blaming dawn

* Update batch tuning value for iOS

* compile fix

* Fix use of new load function
2026-04-30 11:29:07 +03:00
Jeff Bolz cdeaa34174
vulkan: Support GGML_TYPE_NVFP4 (llama/21455)
This adds nvfp4 support for get_rows, dequant, and mul_mat(_id). For
mul_mat, it does not add support for the dp4/q8_1 path, it's all via
fp16/fp32.
2026-04-30 11:29:07 +03:00
Ruben Ortlam 0f99a47177
vulkan: Flash Attention DP4A shader for quantized KV cache (llama/20797)
* use integer dot product for quantized KV flash attention

* small improvements

* fix SHMEM_STAGING indexing

* add missing KV type quants

* fixes

* add supported quants to FA tests

* readd fast paths for <8bit quants

* fix mmq gate and shmem checks
2026-04-30 11:29:07 +03:00
Oliver Simons d9ed371c2c
CUDA: Limit DeviceSegmentedSort to immediate mode (llama/21718)
* CUDA: Limit DeviceSegmentedSort to immediate mode

DeviceSegmentedSort is currently not capturable in a cuda graph. Hence,
we have to go for the slower DeviceSegmentedRadixSort in that case.

Perf numbers on RTX Pro 6000 Blackwell Max-Q:
DeviceSegmentedRadixSort in graph mode (i.e. CUDA Graphs)

  ARGSORT(type=f32,ne=[2048,512,1,1],order=1):                 12291 runs -   105.94 us/run -     8192 kB/run -   73.75 GB/s
  ARGSORT(type=f32,ne=[4096,512,1,1],order=1):                 10245 runs -   115.08 us/run -    16384 kB/run -  135.77 GB/s
  ARGSORT(type=f32,ne=[8192,512,1,1],order=1):                  5125 runs -   221.22 us/run -    32768 kB/run -  141.26 GB/s
  ARGSORT(type=f32,ne=[16384,512,1,1],order=1):                 2565 runs -   430.98 us/run -    65536 kB/run -  145.02 GB/s
  ARGSORT(type=f32,ne=[32768,512,1,1],order=1):                 1028 runs -  1185.83 us/run -   131072 kB/run -  105.41 GB/s
  ARGSORT(type=f32,ne=[65536,512,1,1],order=1):                  387 runs -  2748.62 us/run -   262144 kB/run -   90.95 GB/s

DeviceSegmentedSort in immediate mode

  ARGSORT(type=f32,ne=[2048,512,1,1],order=1):                 16388 runs -    71.17 us/run -     8192 kB/run -  109.78 GB/s
  ARGSORT(type=f32,ne=[4096,512,1,1],order=1):                 12294 runs -    81.38 us/run -    16384 kB/run -  192.00 GB/s
  ARGSORT(type=f32,ne=[8192,512,1,1],order=1):                  5125 runs -   240.81 us/run -    32768 kB/run -  129.77 GB/s
  ARGSORT(type=f32,ne=[16384,512,1,1],order=1):                 2565 runs -   406.60 us/run -    65536 kB/run -  153.71 GB/s
  ARGSORT(type=f32,ne=[32768,512,1,1],order=1):                 1285 runs -   873.23 us/run -   131072 kB/run -  143.15 GB/s
  ARGSORT(type=f32,ne=[65536,512,1,1],order=1):                  516 runs -  2288.46 us/run -   262144 kB/run -  109.24 GB/s

* Add test case for dispatch to DeviceSegmentedRadixSort

We currently lack a way to force graph mode in CUDA, so patch the callback to
invoke ggml_backend_compare_graph_backend twice, forcing each test to also
run in graph mode
2026-04-30 11:29:07 +03:00
Masashi Yoshimura 36b7bb3d95
Remove extra conditional check on debug mode. (llama/21798) 2026-04-30 11:29:07 +03:00
Akarshan Biswas 655072cd78
sycl: disable Q1_0 in backend and cleanup unused variables (llama/21807) 2026-04-30 11:29:07 +03:00
Stephen Cox b907207312
mtmd: add Gemma 4 audio conformer encoder support (llama/21421)
* mtmd: add Gemma 4 audio conformer encoder support

Add audio processing for Gemma 4 E2B/E4B via a USM-style Conformer.

Architecture:
- 12-layer Conformer: FFN → Self-Attention → Causal Conv1D → FFN → Norm
- Subsampling Conv Projection: 2x Conv2D(stride=2) with LayerNorm
- Full self-attention with sinusoidal RPE and sliding window mask (24)
- Logit softcapping at 50.0, ClippableLinear clamping
- Output: 1024 → 1536 → RMSNorm → multimodal embedder

Mel preprocessing (dedicated mtmd_audio_preprocessor_gemma4a):
- HTK mel scale, 128 bins, magnitude STFT, mel_floor=1e-3
- Standard periodic Hann window (320 samples), zero-padded to FFT size
- Semicausal left-padding (frame_length/2 samples)
- Frame count matched to PyTorch (unfold formula)
- No pre-emphasis, no Whisper-style normalization
- Mel cosine similarity vs PyTorch: 0.9998

Key fixes:
- Tensor loading dedup: prevent get_tensor() from creating duplicate
  entries in ctx_data. Fixed with std::set guard.
- ClippableLinear clamp_info loading moved after per-layer tensors.
- Sliding window mask (24 positions) matching PyTorch context_size.
- Skip Whisper normalization for Gemma4 mel output.

Tested on E2B and E4B with CPU and Vulkan backends.
Transcribes: "Glad to see things are going well and business is starting
to pick up" (matching ground truth).

Ref: #21325
2026-04-30 11:29:06 +03:00
Johannes Gäßler c0b46c2f8f
CUDA: skip compilation of superfluous FA kernels (llama/21768) 2026-04-30 11:29:06 +03:00
shaofeiqi e0c8e505e9
opencl: add basic support for q5_k (llama/21593)
* opencl: add general q5_k mv

* opencl: add flattened Q5_K mv and general Q5_K mm

* opencl: fix Q5_K unit tests
2026-04-30 11:29:06 +03:00
Sigbjørn Skjæret 34381b01c4
ggml : fix a few instances of missing GGML_TYPE_Q1_0 cases (llama/21716) 2026-04-30 11:29:06 +03:00
Aman Gupta 3af7c879bc
CUDA: also store node->src ne/nb for graph equality (llama/21736) 2026-04-30 11:29:06 +03:00
Max Krasnyansky 28ce072f59
hexagon: improved Op queuing, buffer and cache management (llama/21705)
* hexagon: introduce op request batching and rewrite buffer management

The host now prepares batches of requests and dispatches them via a single dspqueue message.

Buffers are mapped explicitly by NPU while processing batches.

* hex-dma: disable l2 bypass to work around a new issue caused by no flushes between Ops

* hex-utils: add explicit l2flush and l2clear helpers

* hex-opreq: use fine-grain per tensor l2 management

* hex-opreq: avoid redundant invalidates for tensors we already flushed

* hex-opreq: update debug messages

* htp-opreq: reuse ops_context

* hex-opreq: do not flush or invalidate cache lines beyond buffer boundary

* hex-opreq: fix errors in log message

* Revert "hex-opreq: do not flush or invalidate cache lines beyond buffer boundry"

This reverts commit 8b7f0a55a750a6430ce4eb1874c7feb3d720056d.

* hexagon: limit l2 flushes to 1MB which covers l2 cache

* hex-opreq: limit cache flush to 4MB

Looks like 4MB of contiguous virtual space should cover the 1MB cache.

* hexagon: drop cache flush size to 2MB

* hex-opreq: start reworking opreq packing

* hex-opreq: introduce new way of packing opbatch where tensors are stored separately

* hex-opreq: add a simple fastrpc call to force unmap all buffers

* hex-l2flush: 2MB somehow does not seem robust; also clean up the step size to use line-size

* hex-opreq: bump opreq batch size to 256

* hex-mm: place src1 spad at the top of vtcm for easy reuse

* hex-ops: introduce internal types and disable src1 reuse for now

Nothing new, just formalizing the repack / qyn.quant types we've been using.

* htp-opreq: use tensor pointers instead of copies

* hex-opreq: introduce more robust way for tracking vtcm/spad reuse

This removes the SKIP_QUANTIZE flag that became fragile with the addition of HMX and other ops.

* hex-cumsum: fix error post opreq merge

* hex-opreq: move request batch handling into the session

Prepping everything for using dspqueue buffers and doing that inside the session is much cleaner.

* hex-mm: yet another fix for src1 reuse when we're mixing hmx/hvx

* hex-bufs: introduce pinned mmappings and use non-pinned ones for model buffers

* hex-buf: add support for allocating shared/pinned buffer for opreqs

* hex-opbatch: make opbatches configurable

* hex-naming: better name for ggml_hexagon_shared_buffer

* hex-naming: add session->c_name() helper

* hex-opbatch: start using shm but still copy for now

* hex-opbatch: use shared buffer for packing opbatch

* hex-opbatch: better naming for opbatch-related classes and code

* hex-opbatch: reuse batched tensors with same data/dims/strides

* hex-opbatch: update logging

* hex-opbatch: add support for vmem limit for op batching

* hex-opbatch: update htp side to properly support dynamic mmap/unmap

* hex-opbatch: add OB and OQ params for run-completion script and fix the asserts in batch processing

* hex-opbatch: fixed src1 handling in act ops

* hex-act: fix empty src1 handling in swiglu and friends

Simplify preamble macro while at it

* hex-mm: minor fix vtcm and dma handling in matmul

cleaning up some left-overs from merges

* hex-opbatch: allocate extra 1KB for dspqueue overhead

* hexagon: fix softmax for non-aligned tensors and cleanup vtcm alloc

* hex-mm: properly handle hmx_disabled flag

* hex-ops: update comments

* hex-ops: add debug output for get/set-rows

* hex-mmap: optimize un/mapping of buffers

* hex-opreq: global cache flush and invalidate beyond 128KB threshold

* hex-ops: add super simple opfilter regex for debugging

If an Op matches the regex, the hex backend will reject it.

* hex-opbatch: wire up newer ops missed in the merge and update the main switch to detect this in the future

* hexagon: improved vtcm acquisition to remove inter-op overhead

Fully compatible with QNN-HTP coex

* hex-mm: fixed hvx fallback path

* hex-mm: lower the vmem threshold a bit further to ~3GB

* hexagon: update debug & error logs

This also fixes an issue with newer llvm merging repack and non-repack
functions. We use those pointers to distinguish between buffer types.

* hexagon: move ops context into main context

Just a cleanup. We don't need separate contexts at this point.

* hex-opbatch: cleanup naming and headers for opbatch and related descriptors

* hex-fa: it's now better to enable FA during TG to reduce graph splits

* hexagon: remove GGML_HEXAGON_EXPERIMENTAL env var

It's no longer useful. Please use the more flexible GGML_HEXAGON_OPFILTER to disable Ops
if needed for debugging or validation.

* hexagon: fixed editorconfig check

* Update ggml/src/ggml-hexagon/ggml-hexagon.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Trivikram Reddy <tamarnat@qti.qualcomm.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-04-30 11:29:06 +03:00
Rithik Sharma 2580cfc703
ggml-webgpu: support non-square subgroup matrix configs for Intel GPUs (llama/21669) 2026-04-30 11:29:05 +03:00
Chen Yuan 3fc738a8c2
ggml-webgpu: address quantization precision and backend lifecycle management (llama/21521)
* ggml(webgpu): fix the busy-polls in Emscripten in the waitAny after #20618, and remove the busy webgpu log

* Merge with upstream

* Fix GET_ROWS packed integer NaN when using f16 as memory buffer in shader quants

* Update Unary wgsl EXP and EXPM1 for f16 stability

* Fix GET_ROWS IQ4_XS struct for NaN f16 canonicalization

* Fix numerical precision for unary sqrt when working with f16

* Fix NaN canonicalization for packed integers using f16

* Update err threshold for binary div ops when using f16

* backend: Keep one Dawn/WebGPU instance alive for the lifetime of the static backend

* clean: uncomment existing code logs

* clean: clean the unnecessary debug info

* Refactor and generalize dequant helpers

* Remove deprecated quant structs

* Refactor shader defines to reduce repetition

* Remove error override for F16 type

* fix: fix the accidental removal of the proper initialization of ctx

* clean: clean legacy and format code

* fix: did not modify tests ops

---------

Co-authored-by: Jeremy J. Hartmann <jeremy@mtion.tv>
2026-04-30 11:29:05 +03:00
Jeff Bolz 458ad1d93e
vulkan: Support Q1_0 (llama/21539)
* vulkan: Support Q1_0

* use get_dm
2026-04-30 11:29:05 +03:00
Aman Gupta 28347201fc
CUDA: fuse muls (llama/21665) 2026-04-30 11:29:05 +03:00
andyluo7 c77a33df06
HIP: add CDNA4 (gfx950) architecture support for MI350X/MI355X (llama/21570)
Add AMD Instinct MI350X/MI355X (gfx950, CDNA4) support:

- vendors/hip.h: Add CDNA4 preprocessor define for __gfx950__
- common.cuh: Add GGML_CUDA_CC_CDNA4 and GGML_CUDA_CC_IS_CDNA4 macros
- mma.cuh: Route CDNA4 to compatible MFMA instructions:
  * f32 matmul: mfma_f32_16x16x4f32 (xf32 variant unavailable on gfx950)
  * bf16 matmul: mfma_f32_16x16x16bf16_1k (same as CDNA3)
  * int8 matmul: mfma_i32_16x16x32_i8/32x32x16 (same as CDNA3)
- mmq.cuh: Include CDNA4 in stream-k kernel dispatch

CDNA4 is largely compatible with CDNA3 except:
- No xf32 MFMA (mfma_f32_16x16x8_xf32) — routes to f32 path
- Different FP8 format (e4m3fn vs e4m3_fnuz) — not changed here

Tested on AMD Instinct MI355X (gfx950), ROCm 7.0.1:
- Build: compiles cleanly with -DAMDGPU_TARGETS=gfx950
- llama-bench (Qwen2.5-1.5B Q4_K_M, single GPU):
  * f16+FA: 40,013 tok/s prefill, 254 tok/s decode
  * q8_0+FA: functional
- Flash attention: works correctly
- MMQ: works correctly with stream-k dispatch

Co-authored-by: Andy Luo <andyluo7@users.noreply.github.com>
2026-04-30 11:29:05 +03:00
Johannes Gäßler bb895c843d
ggml: backend-agnostic tensor parallelism (experimental) (llama/19378)
* ggml: backend-agnostic tensor parallelism

* support for GPT-OSS, Qwen 3 MoE

* partial Vulkan fix

* add support for 4/8 GPUs

* unconditional peer access

* re-use buffers + ggml contexts

* fix output pattern

* NCCL support

* GGML: HIP: add RCCL support

* Remove shfl and AllReduce from backend interface

* move allocation workaround out of ggml-alloc.c

* 2d tensor set/get support

* Fix the seg fault without NCCL

* Apply suggestion from JohannesGaessler

* support for tensor dims % n_devs != 0

* fix view_offs scaling

* arbitrary num. of GPUs/tensor split

* fix compilation

* better granularity estimate

* Support device-specific host buffer types if all underlying backends expose the same type. This allows using pinned memory instead of pageable memory for CUDA.

Fix compilation errors.

* partial Qwen 3 Next support

* Fix qwen3 30b (llama/8)

* Fix crash with Qwen-30B-A3B Q4_0

Qwen-30B-A3B Q4_0 has an intermediate dimension of 768. Using a granularity of 256 forces an uneven split between GPUs, which is not supported by the current implementation.

* Decide block size based on tensor quantization type

* Fix crashes due to KV cache serialization (llama/9)

KV cache serialization requires non-zero offsets on the tensor. Add support in the meta backend to set/get a tensor with a non-zero offset.

* metal : fix build (llama/7)

* static memory allocations, fix usage count

* fix tensor granularity

* more even memory distribution

* use BF16 for allreduce

* rebase fixup

* better error message for unsupported architectures

* Fix device mismatch during scatter of allReduce. (llama/11)

There is a mismatch between the dst buffer device and the backend device, causing the use of sync copies

* Enable the previous allreduce implementation. It is better in both perf and stability (llama/12)

* delay AllReduce for MoE for less I/O

* build : clean-up compile warnings

* backend : move most of the meta backend API to ggml-backend-impl.h

* cont : hide unused public API in the implementation

* llama : use llama_device + remove ggml_backend_dev_is_meta()

* ggml-backend : remove unused alloc include

* minor : remove regex include

* ggml : introduce ggml-ext.h for staging new APIs

* rebase fixup

* fix tests

* llama : more robust logic for determining Meta devices (llama/16)

* llama : more robust logic for determining Meta devices

* cont : fix devs size check

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* cont : fix log type

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* disable roundtrip for meta backend

* fix arch selection

* Qwen 3.5 support

* fix Gemma 4 MoE

* fix OpenVino, SYCL

* fix test-llama-archs for CPU-only builds

* Fix Qwen 3.5 MoE

* disable meta backend tests for WebGPU

* tests : filter CPU-based devices from the Meta backend tests (llama/17)

* meta : formatting, naming, indentation (llama/18)

* formatting : llama-model.cpp

* formatting : ggml-ext.h

* formatting : ggml-backend-meta.cpp

* meta : add TODO

* add documentation

* better error messages

* fix GPT-OSS

---------

Co-authored-by: Carl Philipp Klemm <carl@uvos.xyz>
Co-authored-by: Gaurav Garg <gaugarg@nvidia.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-04-30 11:29:05 +03:00
fairydreaming c4c6e143a7
ggml : check return value of CUB calls used in argsort and top-k (they all return cudaError_t) (llama/21676)
Co-authored-by: Stanisław Szymczyk <sszymczy@gmail.com>
2026-04-30 11:29:04 +03:00
Georgi Gerganov f0ee409f7b
metal : add missing mm-id specializations for q1_0 (llama/21662) 2026-04-30 11:29:04 +03:00
Akarshan Biswas 4598eb080b
sycl : add flash-attn support for head size 512 (llama/21654)
* sycl : add flash-attn support for head size 512

This patch extends the SYCL Flash Attention implementation to support head sizes (DKQ/DV) of 512.

Changes:
- Added DKQ/DV 512 cases to both tile and vector Flash Attention kernels.
- Updated kernel selection logic to allow vector kernels for head sizes up to 512 (previously 256).
- Removed unused/redundant AMD and RDNA-specific configuration functions in `fattn-tile.hpp`.
- Refactored `ggml_backend_sycl_buffer_init_tensor` to use a switch statement for clearer tensor extra buffer initialization.
- Added necessary template instances for the new 512 head size across various quantization types.

* remove defunct mxfp4 reorder from setting buffer type
2026-04-30 11:29:04 +03:00
Ruben Ortlam 1d555510de
vulkan: unify type macros to use Vx instead of _VECx (llama/21605) 2026-04-30 11:29:04 +03:00
Aman Gupta 2c7472939f
CUDA: also store `node->src->data` ptrs for equality check (llama/21635)
* CUDA: also store node->src->data ptrs for equality check

* address review comments
2026-04-30 11:29:04 +03:00
RealOrko 16dd171620
fix: free ctx_copy in ggml_opt_free to plug per-training-session leak (llama/21592)
* fix: free ctx_copy in ggml_opt_free to plug per-training-session leak

ggml_opt_alloc populates opt_ctx->ctx_copy via a free+init pair every
time the allocated graph shape changes. The last ctx_copy from the
final ggml_opt_alloc call survives until ggml_opt_free is invoked,
but ggml_opt_free was only freeing ctx_static and ctx_cpu, never
ctx_copy. Each opt_ctx lifetime therefore leaks the final per-batch
context — ~900 KB for a typical GNN training session in
sindarin-pkg-tensor, surfaced via AddressSanitizer.

ctx_copy is nullptr-initialized and ggml_free() handles NULL safely,
so the new release is guard-free.

* Update ggml/src/ggml-opt.cpp

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: realorko <realorko@nowhere.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-04-30 11:29:03 +03:00
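A minimal, self-contained sketch of the release order the commit above describes, using mock types in place of the real ggml contexts; the actual fix lives in ggml/src/ggml-opt.cpp and the field names here only follow the commit message.

```cpp
#include <cstdlib>

// Mock context type standing in for ggml_context; free_ctx(NULL) is a safe no-op,
// mirroring the ggml_free() behavior the commit relies on.
struct mock_ctx { void * mem; };

static void free_ctx(mock_ctx * ctx) {
    if (ctx == nullptr) return;        // guard-free release: NULL is handled safely
    std::free(ctx->mem);
    std::free(ctx);
}

struct opt_context {
    mock_ctx * ctx_static = nullptr;
    mock_ctx * ctx_cpu    = nullptr;
    mock_ctx * ctx_copy   = nullptr;   // repopulated on each graph-shape change; the last one survives
};

static void opt_free(opt_context * opt) {
    if (opt == nullptr) return;
    free_ctx(opt->ctx_static);
    free_ctx(opt->ctx_cpu);
    free_ctx(opt->ctx_copy);           // the previously missing release that plugs the leak
    delete opt;
}

int main() {
    opt_free(new opt_context());       // ctx_copy may still be NULL here; that is fine
    return 0;
}
```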
Reese Levine e70c0d43f4
webgpu : Query for adapter support when registering WebGPU backend (llama/21579) 2026-04-30 11:29:03 +03:00
Pasha Khosravi 15deafa31e
metal: Q1_0 backend (llama/21528)
* initial Q1_0 Metal backend

* tuning q1_0 metal kernels

* add Q1_0 to test-backend-ops

* add Q1_0<->F32 copy test

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-04-30 11:29:03 +03:00
Aman Gupta fa2eaa433b
CUDA: make cuda graphs props check faster (llama/21472)
* CUDA: compute fast hash instead of expensive props check

* use seen node

* use memcpy
2026-04-30 11:29:03 +03:00
iacopPBK d91d1e8e6c
ggml-cuda: ds_read_b128 for q4_0 and q4_1 mmq kernels (llama/21168)
* ds_read_b128 for q4_0 and q4_1 mmq kernels

The current for loop generates ds_read_b32 instructions with the HIP compiler; the new solution generates ds_read_b128 instructions for the same operation, saving some LDS bandwidth. Tested on MI50 and RX6800XT, it is faster on both.

* Vectorized lds load update: used ggml_cuda_get_max_cpy_bytes and ggml_cuda_memcpy_1 functions for generic implementation

* Explicit for loop in mmq, renamed vec into tmp

* Fixed max_cpy usage in the loading loop

* Fixed typo in q4_1 kernel

* Update ggml/src/ggml-cuda/mmq.cuh

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml/src/ggml-cuda/mmq.cuh

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml/src/ggml-cuda/mmq.cuh

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Removed trailing white line 500

* Update mmq.cuh removed other whitelines

* Remove trailing whitespaces

---------

Co-authored-by: iacopPBK <iacopPBK@users.noreply.github.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: iacopPBK <iacop@deneb.com>
2026-04-30 11:29:03 +03:00
Reese Levine d1456437e1
ggml-webgpu: parameterize submission size and add iOS specific limits (llama/21533)
* Work towards removing bitcast

* Move rest of existing types over

* Add timeout back to wait and remove synchronous set_tensor/memset_tensor

* move to unpackf16 for wider compatibility

* cleanup

* Remove deadlock condition in free_bufs

* Start work on removing parameter buffer pools

* Simplify and optimize further

* simplify profile futures

* Fix stride

* Try using a single command buffer per batch

* formatting

* Add parameters for different browsers in-flight submissions

* Update handling of batch size too

* Throttle ios as much as possible

* Increase timeout for llvm-pipe testing
2026-04-30 11:29:03 +03:00
Aman Gupta 5ef7aafa06
CUDA: check for buffer overlap before fusing (llama/21566)
* CUDA: check for buffer overlap before fusing

* use ggml_cuda_check_fusion_memory_ranges
2026-04-30 11:29:02 +03:00
Georgi Gerganov f1d2b83db0
ggml : deprecate GGML_OP_ADD1 (llama/21363)
* ggml : deprecate GGML_OP_ADD1

* cont : remove tests

* cont : re-enable vulkan check
2026-04-30 11:29:02 +03:00
Tom Overlund 78b4fd85e1
ggml: Vulkan build, Linux -- output error string for errno on fork failure (#20868) (llama/20904) 2026-04-30 11:29:02 +03:00
mkoker 18c98ffaf7
vulkan: add FA dequant for q4_1, q5_0, q5_1, iq4_nl (llama/21029)
Add dequantize4() implementations for Q4_1, Q5_0, Q5_1, and IQ4_NL
in the flash attention base shader. Register them in the shader
generator, pipeline creation, and enable in the scalar/coopmat1 FA
support check.
2026-04-30 11:29:02 +03:00
Antoine Viallon a1f76fb4cf
ggml-cuda : fix CDNA2 compute capability constant for gfx90a (MI210) (llama/21519)
GGML_CUDA_CC_CDNA2 was set to 0x910
Fix by setting the constant to 0x90a to match the actual gfx90a ISA.
2026-04-30 11:29:02 +03:00
PMZFX 1ebf3cafa0
Add Q8_0 reorder optimization (~3x tg speedup on Intel Arc) (llama/21527)
Extend the existing reorder optimization to Q8_0. The reorder
separates scale factors from weight data for coalesced memory
access -- was implemented for Q4_0/Q4_K/Q6_K but Q8_0 was missing.

On Arc Pro B70 (Xe2), Q8_0 tg goes from 4.88 to 15.24 t/s (3.1x)
on Qwen3.5-27B. BW utilization: 21% -> 66%.

The key fix beyond the kernels: Q8_0 was missing from the type
check in ggml_backend_sycl_buffer_init_tensor() that allocates
the extra struct carrying the reorder flag -- so the optimization
was silently skipped.

AI (Claude) was used to assist with root cause investigation and
writing the kernel code. All code was human-reviewed and tested
on real hardware.

Fixes: #21517
2026-04-30 11:29:02 +03:00
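A small sketch of the type check the "key fix" above refers to; the real check sits inside ggml_backend_sycl_buffer_init_tensor, and the helper name here is illustrative.

```cpp
#include "ggml.h"

// Returns whether this tensor type gets the "extra" struct that carries the reorder flag.
// Illustrative helper; the real check lives inline in ggml_backend_sycl_buffer_init_tensor.
static bool needs_reorder_extra(const struct ggml_tensor * t) {
    switch (t->type) {
        case GGML_TYPE_Q4_0:
        case GGML_TYPE_Q4_K:
        case GGML_TYPE_Q6_K:
        case GGML_TYPE_Q8_0:   // previously missing, so the Q8_0 reorder was silently skipped
            return true;
        default:
            return false;
    }
}
```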
Masashi Yoshimura 9cbc4b3acb
ggml-webgpu: Add the support of `MUL_MAT_ID` (llama/21147)
* Add mul_mat_id support to WebGPU

* Apply suggestion from @reeselevine

---------

Co-authored-by: Reese Levine <reeselevine1@gmail.com>
2026-04-30 11:29:01 +03:00
Pasha Khosravi 0c2fbd4703
ggml: add Q1_0 1-bit quantization support (CPU) (llama/21273)
* ggml: add Q1_0 and Q1_0_g128 1-bit quantization support (CPU)

* add generic fallback for x86

* remove Q1_0 (group size 32)

* rename Q1_0_g128 => Q1_0

* fix Q1_0 LlamaFileType Enum

* Fix trailing spaces; add generic fallback for othre backends

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* fix /r/n spacing + arch-fallback

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-04-30 11:29:01 +03:00
Gaurav Garg 7b19b94c5d
Write an optimized flash_attn_stream_k_fixup kernel (llama/21159)
* Write an optimized flash_attn_stream_k_fixup kernel

Write a specialized and more optimized kernel for cases where nblocks_stream_k is a multiple of ntiles_dst.
Make nblocks_stream_k a multiple of ntiles_dst if nblocks_stream_k > 2 * ntiles_dst

* Use the new kernel only for nblocks_stream_k_raw > 4 * ntiles_dst to make sure we have enough concurrency on GPUs

* Address review comments

* Address review comments

* Revert variable names to original
2026-04-30 11:29:01 +03:00
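A tiny illustration of the rounding rule stated in the first bullet above; the variable names follow the commit message, the numbers are examples only, and whether the real kernel rounds up or down is not stated here, so this sketch simply rounds down.

```cpp
#include <cstdio>

int main() {
    int nblocks_stream_k = 137;                                  // example value
    const int ntiles_dst = 16;

    // Snap nblocks_stream_k to a multiple of ntiles_dst, but only when there is
    // enough slack (the commit gates this on nblocks_stream_k > 2 * ntiles_dst).
    if (nblocks_stream_k > 2 * ntiles_dst) {
        nblocks_stream_k -= nblocks_stream_k % ntiles_dst;
    }
    std::printf("nblocks_stream_k = %d\n", nblocks_stream_k);    // prints 128
    return 0;
}
```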
Neo Zhang 42e4a28865
sycl : handle other FA case (llama/21377) 2026-04-30 11:29:01 +03:00
Yarden Tal c031045531
hexagon: slight optimization for argsort output init (llama/21463) 2026-04-30 11:29:01 +03:00
Reese Levine d6cfdc669c
ggml-webgpu: move from parameter buffer pool to single buffer with offsets (llama/21278)
* Work towards removing bitcast

* Move rest of existing types over

* Add timeout back to wait and remove synchronous set_tensor/memset_tensor

* move to unpackf16 for wider compatibility

* cleanup

* Remove deadlock condition in free_bufs

* Start work on removing parameter buffer pools

* Simplify and optimize further

* simplify profile futures

* Fix stride

* Try using a single command buffer per batch

* formatting
2026-04-30 11:29:01 +03:00
Vishal Singh 3f5117610b
ggml-zendnn : add MUL_MAT_ID op support for MoE models (llama/21315)
* ggml-zendnn : add MUL_MAT_ID op support for MoE models
- Add MUL_MAT_ID op acceleration for Mixture-of-Experts models
- MUL_MAT_ID op fallback to CPU backend if total experts > 32
- Point ZenDNN lib to latest bits ZenDNN-2026-WW13

* ggml-zendnn : add braces to sgemm failure condition for consistency

Co-authored-by: Aaron Teo <taronaeo@gmail.com>

---------

Co-authored-by: Aaron Teo <taronaeo@gmail.com>
2026-04-30 11:29:00 +03:00
Radoslav Gerganov 321f628239
rpc : reuse compute graph buffers (llama/21299)
Reuse the buffer for the ggml context which is used for creating the
compute graph on the server side. This partially addresses a memory leak
created by the CUDA backend due to using buffer addresses as cache
keys.

ref: #21265
ref: #20315
2026-04-30 11:29:00 +03:00
Zheyuan Chen c5a5e6528e
ggml-webgpu: add vectorized flash attention (llama/20709)
* naive vectorized version

* add vectorized flash attention

* update vec version

* remove unused path and shader

* remove unused helper functions

* add comments

* remove pad path

* ggml-webgpu: fix flash-attn vec nwg=1 path and tighten vec specialization

* change back to vec4

* enable multi split

* enable vec path when:
- Q->ne[1] < 20
- Q->ne[0] % 32 == 0
- V->ne[0] % 4 == 0
- K->type == f16

* update flash_attn_vec_split.wgsl to reduce redundant workgroup barrier usage and use select

* enable vec path for q4 and q8

* flash-attn vec nwg=1 fast path (skip tmp/reduce staging)

* use packed f16 K loads in flash-attn vec split

* use packed f16 K loads in flash-attn vec split on host side

* tune flash-attn vec f16 VEC_NE by head dim

* cleanup

* cleanup

* keep host side clean

* cleanup host side

* change back to original host wait/submit behavior

* formatting

* reverted param-buffer pool refactor

* add helper functions

* ggml-webgpu: move flash-attn vec pipeline caching back into shader lib

* ggml-webgpu: remove duplicate functions

* ggml-webgpu: reserve flash-attn vec scratch in dst buffer allocation

* ggml-webgpu: revert unrelated change

* ggml-webgpu: revert deleted comment

* disable uniformity check

* remove unnecessary change

* Update ggml/src/ggml-webgpu/wgsl-shaders/flash_attn_vec_split.wgsl

* Update ggml/src/ggml-webgpu/ggml-webgpu.cpp

---------

Co-authored-by: Reese Levine <reeselevine1@gmail.com>
2026-04-30 11:29:00 +03:00
Neo Zhang 7f6c0ac20f
sycl : fix llama_kv_cache hang when kv_cache is huge: 5GB (llama/21283) 2026-04-30 11:29:00 +03:00
Georgi Gerganov 514eabc1e5
ggml : bump version to 0.9.11 (ggml/1456) 2026-04-30 11:29:00 +03:00
Todor Boinovski 444662bc83
hexagon : add cumsum op support (llama/21246)
* hexagon : add cumsum op support

* hexagon: enable dma for cumsum op

* Fix line-ending

---------

Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
2026-04-30 11:29:00 +03:00
lhez 08108512c7
opencl: fix leak in Adreno q8_0 path (llama/21212) 2026-04-30 11:28:59 +03:00
Johannes Gäßler 82bb26fba1
CUDA: fix FA kernel selection logic (llama/21271) 2026-04-30 11:28:59 +03:00
Aparna M P 9a40dd9365
hexagon: improve RMS_NORM and DIV accuracy (llama/21251)
* hexagon-rms_norm: fix RMS_NORM for non-aligned tensor sizes

Co-authored-by: Krishna Sridhar <srsr@qti.qualcomm.com>

* hexagon-div: perform DIV in fp16 domain for lower dsp archs

---------

Co-authored-by: Krishna Sridhar <srsr@qti.qualcomm.com>
2026-04-30 11:28:59 +03:00
Neo Zhang fab70d287e
sycl : support nvfp4 type in mul_mat (llama/21227) 2026-04-30 11:28:59 +03:00
Michael Wand 981195be5a
ggml-cuda: Add generic NVFP4 MMQ kernel (llama/21074)
* Introduced NVFP4 generic MMQ kernel

* Added extra FP8 guard, hope to solve ci HIP failure

* Rename tiles and use HIP_FP8_AVAILABLE

* Removed remaning FP8 straggler and added const int

* Const

* Removed DECL_MMQ_CASE artifact

* Removed newline

* Removed space after else

* Changed HIP FP8 NVFP4 conversion gate

* Added new line to bottom of mmq.cu 270

* Removed extra spaces

* Removed single space in front of else on line 814

* Added NVFP4 to generate cu script so HIP can see it, further tightened logic

* Include generated mmq-instance-nvfp4.cu

* Added NVFP4 mmq to HIP Check ignore list

* Update ggml/src/ggml-cuda/mmq.cuh

Changed to Q3_K tile to read MMQ_MMA_TILE_X_K_NVFP4

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml/src/ggml-cuda/mmq.cuh

Changed to Q3_K tile to read MMQ_MMA_TILE_X_K_NVFP4 in tile assert

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml/src/ggml-cuda/mmq.cuh

Added function name ending for end if

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Added function names to closing endif

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-04-30 11:28:59 +03:00
Georgi Gerganov ace95aac6b
ggml : bump version to 0.9.10 (ggml/1454) 2026-04-30 11:28:59 +03:00
uvos 1971a362dc
CUDA/HIP: Fix kernel selection for mmvq mmid kernel to align host selection with device launch bounds (llama/21238)
The conditions cc == GGML_CUDA_CC_VOLTA || cc >= GGML_CUDA_CC_ADA_LOVELACE and cc >= GGML_CUDA_CC_TURING also match all non-NVIDIA devices. This causes us to attempt to launch the kernel for batch sizes with larger configurations than our launch bounds on HIP devices. This PR fixes the conditionals in get_mmvq_mmid_max_batch.

Fixes #21191
2026-04-30 11:28:58 +03:00
Georgi Gerganov 5c5b88eb77
ggml : fix RWKV ops thread assignment (llama/21226) 2026-04-30 11:28:58 +03:00
Taimur Ahmad 1b95f84550
ggml-cpu: fix fallback for RVV kernels without zvfh (llama/21157)
* ggml-cpu: refactor sgemm; fix rvv checks

* ggml-cpu: refactor rvv kernels; set zvfbfwma default to off
2026-04-30 11:28:58 +03:00
Anav Prasad 933bd1f79c
CUDA: Add Flash Attention Support for Head Dimension 512 (llama/20998)
* flash attention support for head dimension 512 added

* FA D=512 - match 576 configs, limit ncols2, revert vec cap

* fix HIP tile kernel build for D=512

* fix HIP tile kernel occupancy for D=512 on AMD

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* fix tile FA compilation

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-04-30 11:28:58 +03:00
Reese Levine 78f54d15d8
ggml webgpu: quantized buffers to u32 + wider browser/device support (llama/21046)
* Work towards removing bitcast

* Move rest of existing types over

* Add timeout back to wait and remove synchronous set_tensor/memset_tensor

* move to unpackf16 for wider compatibility

* cleanup

* Remove deadlock condition in free_bufs
2026-04-30 11:28:58 +03:00
Abhijit Ramesh 21b9dd6789
ggml-webgpu: port all AOT operators to JIT (llama/20728)
* port cpy pipeline to shader lib with JIT compilation
* port glu pipeline to shader lib with JIT compilation
* port rope pipeline to shader lib with JIT compilation
* port soft_max pipeline to shader lib with JIT compilation
* removed unused functions from embed_wgsl.py which were used for
old AOT template expansion
2026-04-30 11:28:58 +03:00
hipudding 5ffe58838d
CANN: fix multi-thread set_tensor race conditions (llama/20151)
* CANN: fix multi-thread set_tensor race conditions

When ollama calls ggml_backend_tensor_set from multiple threads (each
writing a different chunk of the same tensor), the CANN backend had
three concurrency issues:

1. Quantized tensors (Q4_0/Q8_0) require a full-tensor format transform
   before uploading to device. Per-chunk transforms produced corrupt data.

2. ND-to-NZ weight conversion requires complete tensor data on device.
   Per-chunk conversion operated on incomplete data.

3. The global g_nz_workspaces array had unprotected concurrent access.

Fix by introducing a TensorSetTracker that accumulates write progress
per tensor. For quantized tensors, raw data is staged in a host buffer
and the transform + upload is deferred until all chunks arrive. For NZ
weights, chunks are uploaded directly but conversion is deferred. The
tracker and its staging buffer are released immediately after
post-processing completes.

Add per-device mutex to g_nz_workspaces to prevent data races.

* CANN: fix L2_NORM ignoring eps parameter

The L2_NORM implementation was not using the eps parameter from
op_params, causing incorrect results when eps is large (e.g. 10.0).
The CPU reference computes scale = 1/fmaxf(norm, eps), so add a
Clamp step to clamp the norm to at least eps before dividing.

* ggml/cann: compare op_params for POOL_2D in ACL graph cache matching

When ACL graph mode is enabled, the graph LRU cache checks whether a
cached graph matches the current computation graph. Previously,
GGML_OP_POOL_2D was not included in the op_params comparison, so two
POOL_2D nodes with different pooling parameters (kernel size, stride,
padding) but identical tensor shapes and addresses could incorrectly
reuse a cached graph, leading to wrong results or aclnn errors.

Add GGML_OP_POOL_2D to the list of ops that require op_params matching
in ggml_graph_node_properties::has_matching_properties().

* cann: fix ACL graph cache matching by adding tensor type and unconditional op_params comparison

The ACL graph LRU cache was incorrectly reusing cached graphs for
operations with different tensor types or op_params, causing test
failures for CPY (f16 vs bf16), POOL_2D, L2_NORM, NORM_MUL_ADD,
RMS_NORM_MUL_ADD, and ADD_RMS_NORM.

Changes:
- Add node_type and src_type[] fields to ggml_graph_node_properties
  so the cache can distinguish tensors with different types but
  identical ne/nb (e.g. f16 and bf16 both have 2-byte elements)
- Compare op_params unconditionally for all ops instead of only for
  SCALE/UNARY/GLU/ROPE/POOL_2D
2026-04-30 11:28:57 +03:00
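A compact sketch of the per-tensor tracking idea in the first bullet above, assuming a host staging buffer and a byte counter; the real TensorSetTracker (and its locking) lives in the CANN backend, and these names are illustrative.

```cpp
#include <cstdio>
#include <cstring>
#include <vector>

// Accumulates chunked writes for one tensor; the full-tensor transform and upload are
// deferred until every chunk has arrived. Synchronization is omitted here for brevity.
struct tensor_set_tracker {
    std::vector<char> staging;   // host staging buffer (quantized-tensor case)
    size_t            written = 0;

    explicit tensor_set_tracker(size_t nbytes) : staging(nbytes) {}

    bool add_chunk(const void * data, size_t offset, size_t size) {
        std::memcpy(staging.data() + offset, data, size);
        written += size;
        return written == staging.size();   // true once the tensor is complete
    }
};

int main() {
    tensor_set_tracker tracker(8);
    const char a[4] = {0, 1, 2, 3}, b[4] = {4, 5, 6, 7};
    std::printf("complete after chunk 1: %d\n", tracker.add_chunk(a, 0, 4) ? 1 : 0);  // 0
    std::printf("complete after chunk 2: %d\n", tracker.add_chunk(b, 4, 4) ? 1 : 0);  // 1 -> transform + upload now
    return 0;
}
```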
Neo Zhang 952c66237d
sycl : enhance fattn perf (llama/21185) 2026-04-30 11:28:57 +03:00
shaofeiqi 6ac5a50005
opencl: add q4_K gemm and gemv kernels for Adreno (llama/20919)
* opencl: add q4_K gemm and gemv kernels for Adreno

* opencl: fix whitespace

* opencl: add workarounds for compiler bugs on older devices

* opencl: handle fp16 denorm on X Elite

* opencl: fix kernel build error

* opencl: fix whitespace

* opencl: make q4_K cvt kernels signature consistent

---------

Co-authored-by: Li He <lih@qti.qualcomm.com>
2026-04-30 11:28:57 +03:00
Oliver Simons 75b9543856
CUDA : Fix CUB's argsort when nrows % block_size == 0 CCCL < 3.1 (llama/21181)
* CUDA: Fix CUB's argsort when nrows % block_size == 0 CCCL < 3.1

We wrongly calculated offset_grid as `ceildiv(nrows, block_size)`,
while it must be `ceildiv(nrows + 1, block_size)`. As a consequence, we
had uninitialized values in `offset_iterator[nrows]` for the case when
`nrows % block_size == 0`.

Fixes #21162

* Reduce nrows in test case to 256, don't need 768
2026-04-30 11:28:57 +03:00
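A short restatement of the off-by-one above in plain arithmetic; ceildiv and the variable names mirror the commit message, and the numbers are just an example.

```cpp
#include <cstdio>

static int ceildiv(int a, int b) { return (a + b - 1) / b; }

int main() {
    const int nrows      = 256;  // divisible by block_size: the problematic case
    const int block_size = 128;

    // The segment-offset array has nrows + 1 entries, so the grid writing it must be
    // sized for nrows + 1 elements; sizing it for nrows leaves offsets[nrows] uninitialized.
    const int wrong_grid = ceildiv(nrows,     block_size);  // 2 blocks
    const int fixed_grid = ceildiv(nrows + 1, block_size);  // 3 blocks

    std::printf("wrong grid: %d, fixed grid: %d\n", wrong_grid, fixed_grid);
    return 0;
}
```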
Radoslav Gerganov 40ddc5a5b9
rpc : fix misleading error log (llama/21184)
When RPC is running with a remote backend which doesn't have init_tensor
function (like CPU and Metal), the server log gets full with error
messages saying that init_tensor is being called with null buffer which
is incorrect. This patch fixes this.
2026-04-30 11:28:57 +03:00
Gaurav Garg 6b67c91879
Optimize MOE GEMV kernel for BS > 1. (llama/20905)
* Optimize MOE GEMV kernel for BS > 1.

The previous MOE kernel for BS > 1 had too many thread blocks (nrows_x, nchannels_dst, ncols_dst), with very little work per block: a block of (32, 4) was doing an inner dot product for a single row.

The new mul_mat_vec_q_moe kernel is dedicated to the MoE multi-token case, with grid (ceil(nrows_x/rpb), nchannels_dst) and block (warp_size, ncols_dst). Each warp handles two rows independently with warp-level reduction only (no shared memory sync).

This change doesn't increase any compilation time as a single template instance is needed per type. This also simplifies the original GEMV kernel and gets rid of `is_multi_token_id` specialization.

* Remove em-dashes

* Cherry-pick changes from @am17an PR https://github.com/ggml-org/llama.cpp/pull/20885 to enable small_k optimization only for cases where it benefits

Increase max batch size for MMVQ kernels for MUL_MAT_ID to 8

* Make the max batch size for MOE GEMV kernel configurable based on GPU arch and datatype

---------

Co-authored-by: Aman Gupta <amangupta052@gmail.com>
2026-04-30 11:28:57 +03:00
Max Krasnyansky 9e96d390f7
hexagon: dma optimizations (mostly fixing regressions) (llama/21137)
* hex-fa: add simple dma cache for Mask

I noticed that we were refetching the mask rows over and over.
This simple cache avoids that.

* hex-dma: unset in-order desc bit which caused significant perf regression

We don't rely on true in-order processing of the DMA descriptors anywhere.
Turns out this mode caused a significant regression of around 3-4 TPS during token gen.

* hex-rope: update comment to clarify that we don't need in-order DMA completions
2026-04-30 11:28:56 +03:00
Georgi Gerganov 763a454052
ggml : bump version to 0.9.9 (ggml/1449) 2026-04-30 11:28:52 +03:00
jinweihan fc674574ca
bench : sync submit-results URL to ggml-org (#3769)
The project moved from ggerganov/ to ggml-org/ and the README already
references the new URL in both places it mentions issue #89 (README.md
and examples/bench/README.md). Syncing the two remaining hardcoded URLs
in examples/bench/bench.cpp and examples/bench.wasm/emscripten.cpp.

The old URL still redirects, so this is cosmetic.
2026-04-20 07:12:57 +02:00
Daniel Worthington-Bodart 166c20b473
whisper : add stateless VAD detect + explicit state reset for streaming (#3677)
whisper_vad_detect_speech resets LSTM state on every call, which is
correct for batch processing but prevents temporal continuity when
calling per-chunk in a streaming loop.

Add whisper_vad_detect_speech_no_reset (skips buffer clear) and
whisper_vad_reset_state (explicit clear between utterances).
Existing whisper_vad_detect_speech is now a thin wrapper — zero
behavior change for current callers.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 13:36:27 +02:00
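A hedged usage sketch for the streaming flow described above. Only the two new function names come from the commit; the argument list is assumed to mirror whisper_vad_detect_speech(vctx, samples, n_samples) and is not verified against the header.

```cpp
#include "whisper.h"

// Per-chunk VAD in a streaming loop: keep LSTM state across chunks, then reset
// explicitly once the utterance ends. Signatures assumed to mirror
// whisper_vad_detect_speech; check whisper.h for the exact declarations.
static void vad_stream_chunk(struct whisper_vad_context * vctx,
                             const float * samples, int n_samples, bool utterance_done) {
    whisper_vad_detect_speech_no_reset(vctx, samples, n_samples);  // skips the state/buffer clear
    if (utterance_done) {
        whisper_vad_reset_state(vctx);                             // explicit clear between utterances
    }
}
```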
Georgi Gerganov 95ea8f9bfb sync : ggml 2026-03-29 15:04:36 +03:00
Ruben Ortlam 759f0084b4 vulkan: add noncontiguous GLU support (llama/21081)
* vulkan: add noncontiguous GLU support

* fix compile issue
2026-03-29 15:04:36 +03:00
Yiwei Shao 52699f6d19 hexagon: support for IQ4_NL and MXFP4 (llama/21018)
* ggml-hexagon: add IQ4_NL and MXFP4 HMX matmul support

- Add IQ4_NL quantization type support to Hexagon backend (buffer
  set/get tensor repack, mul_mat, mul_mat_id dispatch)
- Implement HVX IQ4_NL vec_dot kernels (1x1, 2x1, 2x2) with
  LUT-based 4-bit index to int8 kvalue dequantization
- Add MXFP4 HMX dequantization path with E8M0 scale conversion,
  including batch-4 fast path and single-tile fallback
- Unify quantized row size / scale offset logic to handle Q4_0,
  Q8_0, IQ4_NL, and MXFP4 in the DMA fetch path

* ggml-hexagon: fix SKIP_QUANTIZE src1 address mismatch in mixed-quant models

* Fix the pragma indent
2026-03-29 15:04:36 +03:00
Radoslav Gerganov 7f466e237b rpc : proper handling of data pointers to CPU buffers (llama/21030)
The compute graph may contain tensors pointing to CPU buffers. In these
cases the buffer address is serialized as 0 and sent over the wire.
However, the data pointer is serialized as-is and this prevents proper
validation on the server side. This patch fixes this by serializing
the data pointer as 0 for non-RPC buffers and doing proper validation on
the server side.

closes: #21006
2026-03-29 15:04:36 +03:00
ren b564a99ed6 metal : Fix dimension constraint violation in matmul2d descriptor (llama/21048)
Updates Metal tensor API test probe to fix the dimension constraint violation in the matmul2d descriptor (at least one value must be a multiple of 16).
2026-03-29 15:04:36 +03:00
uvos 45a7083431 hip: use fnuz fp8 for conversion on CDNA3 (llama/21040) 2026-03-29 15:04:36 +03:00
lhez 1848f994e3 opencl: allow large buffer for adreno (llama/20997) 2026-03-29 15:04:36 +03:00
ihb2032 07237ff99e fix(ggml): correct RISC-V ISA string canonical ordering for RVV in CMake (llama/20888)
Signed-off-by: ihb2032 <hebome@foxmail.com>
2026-03-29 15:04:36 +03:00
Michael Wand eb747f3def ggml-cuda: Add NVFP4 dp4a kernel (llama/20644)
Added check for dst_t to cuda_cast template for float
Restored ggml_cuda_ue4m3_to_fp32, changed vecdot ints to int32ts
Added CUDART/HIP Check and HIP/fp8 include
Added NVFP4 to Test-backend-ops
Added hip_fp8_e4m3 to __nv_fp8_e4m3 typedef

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-03-29 15:04:36 +03:00
Yihao Wang a050c7d1bf CUDA & CPU: support F32 kernel type for `CONV_TRANSPOSE_2D` (llama/17094)
* Refactor CUDA 2D transpose implementation to support multiple kernel types and improve parameter handling

- Introduced a `conv2d_transpose_params` struct for better parameter management.
- Updated `conv2d_transpose_kernel` to be templated for different kernel types (float and half).
- Modified `ggml_cuda_conv_2d_transpose_p0` to handle both F16 and F32 kernel types.
- Enhanced test cases to validate functionality for both kernel types.

* Refactor test cases for 2D convolution transpose to support dynamic kernel types

- Updated `test_conv_transpose_2d` structure to improve parameter handling by reordering constructor arguments.
- Enhanced test case generation to iterate over kernel types, allowing for flexible testing of different configurations.
- Removed hardcoded kernel type instances in favor of a loop for better maintainability and scalability.

* Refactor ggml_compute_forward_conv_transpose_2d to support both F16 and F32 tensor types.

* Refactor conv2d transpose kernel to use a template for kernel type, enhancing flexibility for different data types.
Update test cases to include both F16 and F32 tensor types for comprehensive coverage.

* Update ggml/src/ggml-cuda/conv2d-transpose.cu

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

* Update ggml/src/ggml-cpu/ggml-cpu.c

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

* Refactor conv2d transpose implementation by removing the conv2d_transpose_params struct and dispatching with direct kernel launch.

* Enhance cpu conv2d transpose implementation by introducing a templated kernel type for improved flexibility with F16 and F32 data types.

---------

Co-authored-by: Aman Gupta <amangupta052@gmail.com>
2026-03-29 15:04:36 +03:00
Saba Fallah 495b77aec2 mtmd: Add DeepSeekOCR Support (llama/17400)
* mtmd: llama.cpp DeepSeekOCR support
init commit

* loading sam tensors

* mtmd: fix vision model processing

* deepseek-ocr clip-vit model impl

* mtmd: add DeepSeek-OCR LM support with standard attention

* mtmd: successfully runs DeepSeek-OCR LM in llama-cli

* mtmd: Fix RoPE type for DeepSeek-OCR LM.

* loading LM
testing Vision model loading

* sam warmup working

* sam erroneous return corrected

* clip-vit:  corrected cls_embd concat

* clip-vit: model convert  qkv_proj split

* corrected combining of image encoders' results

* fix: update callback for ffn_moe_weighted and add callback for attn_out in deepseek2 model

* concat image_newline and image_seperator tokens

* visual_model warmup (technically) works

* window partitioning using standard ggml ops

* sam implementation without using CPU only ops

* clip: fixed warnings

* Merge branch 'sf/deepseek-ocr' of github.com:sfallah/llama.cpp into sf/deepseek-ocr

* mtmd: fix get_rel_pos

* mtmd: fixed the wrong scaler for get_rel_pos

* image encoding technically works but the output can't be checked since image decoding fails

* mtmd: minor changed

* mtmd: add native resolution support

* - image encoding debugged
- issues fixed, mainly related to wrong config like n_patches, etc.
- configs need to be corrected in the converter

* mtmd: correct token order

* - dynamic resizing
- changes concern PR https://github.com/sfallah/llama.cpp/pull/4

* mtmd: quick fix token order

* mtmd: fix dangling pointer

* mtmd: SAM numerically works

* mtmd: debug CLIP-L (vit_pre_ln)

* mtmd: debug CLIP-L & first working DeepSeek-OCR model

* mtmd : add --dsocr-mode CLI argument for DeepSeek-OCR resolution control & all native resolution modes work

* mtmd: simplify SAM patch embedding

* mtmd: adapt Pillow image resizing function

* mtmd:  simplify DeepSeek-OCR dynamic resolution preprocessing

* mtmd: remove --dsocr-mode argument

* mtmd: refactor code & remove unused helper functions

* mtmd: fix tensor names for image newlines and view separator

* clean up

* reverting automatically removed spaces

* reverting automatically removed spaces

* mtmd: fixed bad ocr check in Deepseek2 (LM)

* mtmd: support combined QKV projection in build_vit

* using common build_attn in sam

* corrected code branch when flash-attn is disabled,
enabling usage of the --flash-attn option

* mtmd: minor fix

* minor formatting and style

* fixed flake8 lint issues

* minor editorconfig-check fixes

* minor editorconfig-check fixes

* mtmd: simplify get_rel_pos

* mtmd: make sam hparams configurable

* mtmd: add detailed comments for resize_bicubic_pillow

* mtmd: fixed wrong input setting

* mtmd: convert model in FP16

* mtmd: minor fix

* mtmd: remove tweak to llama-mtmd-cli & deepseek-ocr template

* fix: test-1.jpg OCR issue with small (640) resolution;
set min resolution to base (1024) and max to large (1280) for dynamic resolution

* minor: editorconfig-check fix

* merge with changes from https://github.com/ggml-org/llama.cpp/pull/17909
added new opt to tests.sh to disable flash-attn

* minor: editorconfig-check fix

* testing deepseek-ocr
quick and dirty test script comparing results of Qwen2.5-VL vs DeepSeek-OCR

* quick and (potential) dirty merge with https://github.com/ggml-org/llama.cpp/pull/17909

* refactoring, one single builder function and static helpers

* added deepseek-ocr test to tests.sh

* minor formatting fixes

* check with fixed expected results

* minor formatting

* editorconfig-check fix

* merge with changes from https://github.com/ggml-org/llama.cpp/pull/18042

* minor
- added GLM-4.6V to big tests
- added missing deps for python test

* convert: minor fix

* mtmd: format code

* convert: quick fix

* convert: quick fix

* minor python formatting

* fixed merge build issue

* merge resolved
- fixed issues in convert
- tested several deepseek models

* minor fix

* minor

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* - removed clip_is_deepseekocr
- removed redundant RESIZE_ALGO_BICUBIC_PILLOW resize-algo
- simplified image-preprocessing
- removed/simplified debug functions

* - cleaning commented out code

* fixing instability issues by reintroducing resize_bicubic_pillow

* - use f16 model for deepseek-ocr test
- ignore llama-arch test for deepseek-ocr

* rename fc_w --> mm_fc_w

* add links to OCR discussion

* cleaner loading code

* add missing .weight to some tensors

* add default jinja template (to be used by server)

* move test model to ggml-org

* rolling back upscale change

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: bluebread <hotbread70127@gmail.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
2026-03-29 15:04:36 +03:00
Johannes Gäßler 3987857d2d llama: fix llama-model-saver (llama/20503)
* llama : add fd-based model loading via llama_model_load_from_fd

* llama : address review feedback for fd-based model loading

* llama : use FILE pointer instead of fd in public API

* llama : use FILE pointer consistently, address review feedback

* fixup

* fix tensor names

* fix llama-model-saver

* roundtrip tests

* fixup

* refactor tests

* fix prints

* fix model saving

* fix CI, disable Chameleon

* print seed

---------

Co-authored-by: Siddhesh2377 <siddheshsonar2377@gmail.com>
2026-03-29 15:04:36 +03:00
Neo Zhang f2a8e65ea7 sycl : fix wrong variable check by assert (llama/20903)
* fix wrong variable check by assert

* use GGML api
2026-03-29 15:04:36 +03:00
nuri 9e4e4c2401 metal : add FLOOR, CEIL, ROUND, TRUNC unary ops (llama/20930)
Co-authored-by: nryoo <nryoo@nryooui-MacBookPro.local>
2026-03-29 15:04:36 +03:00
Georgi Gerganov eef7422d4d metal : add FA instantiations for HSK=512, HSV=512 (llama/20902) 2026-03-29 15:04:36 +03:00
Max Krasnyansky 116a9f6ab7 hexagon: general DMA and Binary Op fixes for large strides (llama/20918)
* hex-dma: make chained dma the default to handle newer models

This also includes some new instrumentation that we can remove later.

* hexagon: add uint32 dump helper

* hexagon: use single-page VTCM allocation to avoid issues with large gather ops in ssm-conv

ssm-conv uses HVX gather instruction and that instruction cannot handle cases where the base+offset
spans page boundaries.

* hexagon: update ssm-conv to make base-addr compute a bit easier to read

* hex-dma: use 1d mode for reshaping, it supports sizes up to 24-bits (>16MB)

* hex-bin: fix incorrect stride logic

* hexagon: make sure repack buffs are dumped for verbose > 2

* hex-bin: consistently use dma_queue_push even for dummy dst transactions

* hex-dma: start using 2d-wide mode on v75 and up

This removes the need to deal with the 16-bit limitation for the strides.

* hex-bin: cleanup kernel selection logic

* hex-bin: cleanup binary op core and fix transposed tensor handling

* snapdragon: update run-bench to use larger ubatch and fa-on
2026-03-29 15:04:36 +03:00
lhez 624be93425 opencl: add q6_K gemm and gemv kernels for Adreno (llama/20089)
* opencl: add q6_K noshuffle kernels, initial q6_K gemv, some host code

* opencl: add q6_K transpose

* opencl: fix cvt kernel name

* opencl: add call to q6_K gemv

* opencl: fix q6_K scale transpose

* opencl: fix loading for gemv q6_K, refactor

* opencl: fix transpose_8_buf kernel assignment, refactor

* opencl: refactor q6_K transpose

* opencl: add gemm_noshuffle_q6_k_f32

* opencl: fix qh loading

* opencl: refactor q6_K gemv host side, release bufs and imgs

* opencl: refactor

* opencl: fix q6_K dequant and scale selection

* opencl: workaround compiler bug, fix dump_tensor

* opencl: refactor q6_K convert kernels

* opencl: unpack transformed q6_K in get_tensor

* opencl: refactor, handle non-uniform workgroups

* opencl: support non-vector subgroup bcast
2026-03-29 15:04:36 +03:00
las7 37c0a52c1b rpc : RCE patch (llama/20908) 2026-03-29 15:04:36 +03:00
Rashid Ul Islam c589dd77d4 metal: add CONV_3D (llama/19927)
* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* metal:add conv_3d backend

Rebased with master and resolved conflicts.

* Resolved issues related to changes in variable names

* kernel void kernel_upscale_bilinear_f32 was missing in my branch, added back, should pass all tests now

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-03-29 15:04:36 +03:00
Chenguang Li 54f5c02f29 CANN: add RoPE cache preload before ACL graph capture (llama/20747)
ACL graph capture disallows host-to-device memcpy and device memory
malloc/free on the captured stream. Pre-load the RoPE cache before
capture so that:
- Host-to-device copies and allocations run on the non-captured stream
- Cache metadata is populated and memory pool is warmed up
- During capture, only on-device computations are recorded; host-side
  and allocation branches are skipped
2026-03-29 15:04:36 +03:00
Dan Hoffman a0e41ec261 fix(openvino): explicit memset in buffer_context allocation (llama/20857)
* fix(openvino): explicit memset in buffer_context allocation

* minor

---------

Co-authored-by: Dan Hoffman <dhoffman@cyket.net>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-03-29 15:04:36 +03:00
shaofeiqi c976b22d7b opencl: add flattened Q4_K mv and general Q4_K mm (llama/20773) 2026-03-29 15:04:36 +03:00
Johannes Gäßler 607c92430f CUDA: fix BF16 FA compilation (llama/20865) 2026-03-29 15:04:36 +03:00
Neo Zhang 1d0f0285de support bf16 and quantized type (llama/20803) 2026-03-29 15:04:36 +03:00
Patrick Buckley 69f0d907ee ggml-cuda: native bf16 flash attention for vec kernel (llama/20525)
* ggml-cuda: native bf16 flash attention for vec and tile kernels

mma kernel still converts bf16 to fp16 before launch, native mma bf16 todo

* ggml-cuda: address code owner review feedback

reverted tile kernel changes to avoid larger refactor

* fix ci failures on turing and hip

* fix bf16 vec kernel compile on hip v_dot2 platforms

* add comments

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-03-29 15:04:36 +03:00
Gaurav Garg 77b635e9c4 Increase number of output elements per-thread block if the K-dimension is small (llama/20635)
* Increase per-thread work if the K-dimension is small

With tensor parallelism, the K-dimension of the FFN-down matrices is split, which makes it quite small, especially for MoEs. For example, Qwen3-30B-A3B has a K-dimension of 768, and Qwen3-235B-A22B has a K-dimension of 1536.
The current heuristic uses a group of 4 warps irrespective of the K-dimension size, leaving some of the threads idle and resulting in poor performance for these matrices.

This change increases the number of output elements per block for such cases; a sketch of the idea follows below.

* Limit this change to ncols_dst = 1

* tab to space
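
A minimal sketch of the kind of heuristic described above, with illustrative thresholds (the real kernel-selection logic lives in the CUDA MMVQ launcher and differs in detail):

```cpp
#include <cstdint>

// Pick how many output rows a thread block should produce for ncols_dst == 1.
// Small K (e.g. 768 or 1536 after tensor-parallel splitting) would otherwise
// leave most of a 4-warp group idle, so give each block more rows to compute.
static int mmvq_rows_per_block(int64_t k, int ncols_dst) {
    if (ncols_dst != 1) return 1;   // the change is limited to single-column dst
    if (k >= 4096)      return 1;   // large K: the default grouping keeps warps busy
    if (k >= 2048)      return 2;   // illustrative threshold
    return 4;                        // small K: more output elements per block
}
```
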
2026-03-29 15:04:36 +03:00
y198 5f3428219a fix(rpc): prevent division by zero in deserialize_tensor (llama/20712)
rpc : prevent division by zero in deserialize_tensor

When receiving an RPC message with a deprecated tensor type (e.g., type 4 or 5 where `blck_size == 0`), `ggml_row_size()` will trigger a division by zero (SIGFPE) and crash the rpc-server.

This patch adds a simple validation check in `deserialize_tensor` to return `nullptr` if the requested tensor type has a block size of 0.

(Note: This was originally reported via Security Advisory and maintainer suggested dropping a patch here).
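
A minimal sketch of the guard described above, using ggml's public helpers (the exact placement inside deserialize_tensor differs):

```cpp
#include "ggml.h"

// Reject tensor types whose block size is 0 (deprecated/removed type ids)
// before ggml_row_size() divides by the block size.
static bool rpc_tensor_type_is_valid(enum ggml_type type) {
    return (int) type >= 0 && type < GGML_TYPE_COUNT && ggml_blck_size(type) != 0;
}
```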

* style: remove trailing whitespace
2026-03-29 15:04:36 +03:00
Matt Corallo 22710fdb82 Add shader count for Intel Arc Pro B60 (llama/20818) 2026-03-29 15:04:36 +03:00
shalinib-ibm ca5d565dcd ggml-cpu: add always_inline to tinyBLAS_PPC accumulator saves (llama/20791)
Explicitly mark save_acc and add_save_Acc with always_inline
in tinyBLAS_PPC. This ensures the compiler keeps the MMA accumulator
disassembly within the kernel's register context, preventing unnecessary
stack spills.
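
A hedged illustration of the attribute (not the tinyBLAS_PPC helpers themselves): forcing inlining keeps a small hot helper's temporaries in the caller's register context.

```cpp
// Illustrative only: a tiny accumulator-save helper marked always_inline so the
// compiler cannot emit an out-of-line call that would spill registers to the stack.
__attribute__((always_inline)) static inline void save_acc4(float * dst, const float acc[4]) {
    dst[0] = acc[0]; dst[1] = acc[1]; dst[2] = acc[2]; dst[3] = acc[3];
}
```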

Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>
2026-03-29 15:04:36 +03:00
Jeff Bolz 49b505bcc5 vulkan: change gated_delta_net to shard a column across a subgroup (llama/20662)
* vulkan: change gated_delta_net to shard a column across a subgroup

This is based on https://github.com/ggml-org/llama.cpp/pull/20391, I used an
LLM to port the CUDA code to Vulkan, and guided to it to make various fixes to
work with Vulkan (e.g. handling different subgroup sizes, unknown mapping of
subgroup to invocation id, using subgroupAdd optionally, etc.).

This fixes a perf regression from the transposing of the values in memory
(!20443).

* vulkan: Spread columns across fewer lanes to reduce the number of workgroups
2026-03-29 15:04:36 +03:00
hipudding 46dcb35aa3 CANN: add BF16 support for core operators (llama/20152)
* CANN: add BF16 support for core operators

Add BF16 (bfloat16) type support to the CANN backend for the following
operators: MUL_MAT, MUL_MAT_ID, GET_ROWS, SET_ROWS, CPY, CONT, and
OUT_PROD. This enables BF16 models to run on Ascend NPUs.

* CANN: skip NZ weight format for BF16 and add 310P compile guards

NZ weight format conversion does not support BF16 tensors, skip it
in set_tensor, get_alloc_size and mul_mat. Remove BF16 from MUL_MAT_ID
and OUT_PROD as there are no BF16 use cases. Add #ifndef ASCEND_310P
guards for all BF16 operator support since 310P does not support BF16.
2026-03-29 15:04:36 +03:00
Sundaram krishnan 65d820a44a ggml: guard KleidiAI DOWNLOAD_EXTRACT_TIMESTAMP for cmake < 3.24 (llama/20767) 2026-03-29 15:04:36 +03:00
Rail Chabdarov e1cdce46c5 hip: Avoid compiler bug in RDNA code generation during debug builds on Windows (llama/20655) 2026-03-29 15:04:36 +03:00
Yiwei Shao 15f6b6ad76 hexagon: add Matrix Extensions (HMX) for Hexagon NPU backend (llama/20693)
* migrate(vtcm): unify VTCM management for HMX merge

- Add HMX fields to htp_context (#ifdef HTP_HAS_HMX): hmx_enabled,
  hmx_dma, vtcm_scratch_size, exp2_table
- Add HTP_VTCM_SESSION_HOLD CMake option (default ON): hold VTCM for
  entire session instead of per-op acquire/release
- Add vtcm_op_acquire/vtcm_op_release inline wrappers: no-op in
  session-hold mode, delegate in per-op mode
- Add VTCM tail reservation for precompute tables (256KB, 64KB aligned)
  in htp_iface_start under HTP_HAS_HMX
- Add HMX init/cleanup hooks in htp_iface_start/stop
- Add precompute table recovery in vtcm_acquire after VTCM preemption
- Do NOT migrate vtcm_mgr from htp-ops-lib (replaced by tail reservation)

* migrate(repack): replace x4x2 with HMX tile-permuted super-block format

- Add hmx_block_q4_0/q8_0 struct definitions (scales-first + sequential quants)
- Implement forward repack: repack_q4_0_to_hmx_superblock, repack_q8_0_to_hmx_superblock, repack_f16_to_tile_permuted
- Implement inverse repack for get_tensor debug verification
- Route set_tensor/get_tensor via opt_arch >= 73 to HMX path, else existing HVX x4x2
- MXFP4 on v73+ falls back to HVX x4x2 repack (not memcpy)
- Extend supports_op: add IQ4_NL for v73+, F16 tile alignment checks
- Tail blocks (K not multiple of 256): repack to x4x2 via pad-repack-truncate
- Add CMake GGML_HEXAGON_HMX_TAIL_HVX option (default ON); OFF rejects non-256-aligned K in supports_op

* migrate(dma): add dma_queue_push_1d() convenience wrapper for HMX ops

Add 1D linear DMA transfer helper to hex-dma.h for upcoming HMX op
migration. Reuses existing dma_queue_flush() for sync points instead
of adding redundant dma_queue_drain().

* migrate(hmx): reorganize HMX files into htp/hmx/ and simplify HMX locking

Move all 14 HMX-related files from htp/ to htp/hmx/ subdirectory for
cleaner separation between HVX and HMX code. Simplify HMX hardware
locking by replacing the two-level lock design (SHARED HAP lock +
custom asm spin-lock) with direct HAP_compute_res_hmx_lock/unlock
on the existing vtcm_rctx, which already has HMX capability.

Key changes:
- Create htp/hmx/ subdirectory with all HMX infrastructure and ops
- Replace hmx_mgr_ctx_id + spin-lock with HAP_compute_res_hmx_lock(vtcm_rctx)
- Remove hmx_manager_enable/disable_execution() (SHARED lock no longer needed)
- Add hmx_set_vtcm_state() call in main.c (was missing, caused null globals)
- Update main.c includes to use hmx/ prefix
- Clean up duplicate declarations from hmx-worker-pool.h

* migrate(hmx-infra): consolidate HMX infrastructure into htp_context

- Remove hmx-mgr.c/h: eliminate global HMX state singleton, thread htp_context through all HMX ops
- Remove hmx-worker-pool.c/h: replace separate HMX worker pool with main worker_pool API (worker_pool_run_func)
- Replace hmx_unit_acquire/release with direct HAP_compute_res_hmx_lock/unlock on ctx->vtcm_rctx
- Remove HTP_VTCM_SESSION_HOLD compile option: always use per-op vtcm_acquire/release
- Remove hmx_dma from htp_context: HMX ops use ctx->dma[0] instead of separate DMA queue
- Simplify main.c init/cleanup: remove hmx_manager_setup/reset and vtcm_op_acquire/release wrappers
- Delete upstream llama.cpp AGENTS.md (not applicable to fork)

* migrate(flash-attn): remove HTP_EXP2_TABLE_COPIES, use single exp2 table

- Remove HTP_EXP2_TABLE_COPIES compile definition and CMake cache variable
- Remove table duplication loop in precompute-table.c
- Remove worker_index % N sub-table indexing in hmx-flash-attn-ops.c
- Fix table_size to 65536 (single 64 KB copy) in main.c

The exp2 lookup table is read-only; concurrent VTCM reads do not cause
bank conflicts, so duplicating the table wastes 192 KB of VTCM for no
benefit.

* migrate(dsp-main): add HMX priority dispatch in packet_callback

- Add proc_hmx_matmul_req() wrapper for HMX mat_mul (F16 and quantized types)
- Add proc_hmx_flash_attn_req() wrapper for HMX simple_flash_attn (FP16 only, falls back to HVX for non-FP16)
- Add proc_hmx_rms_norm_req() wrapper using hvx_rms_norm_f32
- Route MUL_MAT, FLASH_ATTN_EXT, RMS_NORM through HMX path when ctx->hmx_enabled
- Split RMS_NORM and SCALE into separate case blocks for independent dispatch
- All HMX wrappers guarded by #ifdef HTP_HAS_HMX

* migrate(cmake-dsp): add HMX source files and -mhmx for v73+ skels

Add HTP_VTCM_SESSION_HOLD option (default ON) and v73+ HMX build
integration: compile hmx-matmul-ops, hmx-flash-attn-ops,
hmx-rms-norm-ops and precompute-table into v73/v75/v79/v81 skels
with -mhmx flag and HTP_HAS_HMX=1 definition. v68/v69 skels remain
unchanged.

* migrate(hmx-ops): fix compile errors in HMX ops for ggml struct compatibility

- hmx-matmul-ops.c: include ggml-common.h for block_q4_0/block_q8_0 definitions
- hmx-matmul-ops.c: rename quants->qs, scale->d to match upstream ggml field names
- hmx-flash-attn-ops.c: suppress -Wunused-function/-Wunused-variable warnings
- hmx-flash-attn-ops.c: inline ctx->n_threads, remove unused n_workers variable

* hmx: set Q/O element type to fp16 for flash attention

The llama.cpp integration passes fp16 Q/O tensors, so qo_fp32_element
should be false to match the actual data layout.

* hexagon: unify HMX weight format to x4x2, add IQ4_NL and DSP-side fallback

Remove the v73+ HMX-specific super-block/tile-permuted weight format
and unify all architectures on the HVX x4x2 packed format. The DSP now
decides at runtime whether to use the HMX or HVX matmul path based on
dimension constraints (M%32, N%32, K%256 alignment), rather than the
host rejecting ops in supports_op. This simplifies the host repack
logic, eliminates ~400 lines of HMX super-block code, and adds IQ4_NL
quantization support across host and DSP.

Key changes:
- Remove hmx_block_q4_0/q8_0 types, repack functions, and F16 tile
  permutation (ggml-hexagon.cpp, hmx-quants.h)
- Simplify set_tensor/get_tensor to always use x4x2 repack, add IQ4_NL
- Force is_host=false so tensor copies go through format conversion
- Add HTP_TYPE_IQ4_NL to DSP message protocol (htp-msg.h)
- Rewrite DSP dequantizers to work directly on x4x2 layout
  (hmx-matmul-ops.c)
- Fix mxclracc.hf placement: clear per output tile, not once globally
- Move HMX eligibility checks to DSP proc_hmx_matmul_req (main.c)
- Remove dma_queue_push_1d wrapper, use 2D DMA for weight sub-blocks
- Add VTCM allocation overflow asserts
- Remove GGML_HEXAGON_HMX_TAIL_HVX build option (CMakeLists.txt)

* Enhance HMX debugging capabilities with new tile dumping functions

- Introduced hmx_dump_tile_mem and hmx_dump_fp32_tile_region for improved memory layout visualization of tile data.
- Updated hmx_dump_tile_rows to provide raw memory output for debugging.
- Added debug logging for activation and weight tile pairs during processing to facilitate troubleshooting.
- Refined existing macros for dumping HVX vector values to streamline debugging output.

These changes aim to enhance the debugging experience for HMX matmul operations, ensuring better visibility into data handling and transformations.

* OK for small mat mul

* hexagon: fix UDMA roiwidth 16-bit overflow in HMX matmul DMA transfers

The UDMA descriptor roiwidth field is 16-bit (max 65535), but large matrix
DMA transfers (e.g. 32×2304 = 73728 bytes) exceeded this limit, causing
truncated transfers and NaN results. Fix by using 2D DMA (per-row stride ×
n_rows) instead of 1D (total_size × 1) for all 4 DMA push calls in both
x4x2 and fp16 weight paths.

Also includes:
- Use standard vlut16 instead of _nomatch variant for dequantization
- Add per-tile vscatter drain barrier for correctness
- Add compile-time HMX_DEBUG_TRACE_VALUES instrumentation (disabled by default)

* hexagon: remove HMX RMS norm fallback and re-enable matmul pipeline

Remove hmx-rms-norm-ops.c as the HVX RMS norm offers no benefit over
the generic unary path. Re-enable DMA pipeline mode for QK matmul.

* hexagon: guard all HMX matmul DMA transfers against UDMA 16-bit field overflow

All UDMA type1 descriptor fields (roiwidth, roiheight, srcstride, dststride)
are 16-bit (max 65535). Commit 40d2a9cc fixed roiwidth overflow in the
non-pipeline path by switching from 1D to 2D DMA, but the pipeline path
(3 call sites) was left unchanged and still used 1D DMA with
chunk_size = n_cols * row_stride as roiwidth, which overflows for any
practical matrix size when the pipeline is active.

Add a local hmx_dma_push_safe() helper that transparently handles overflow:
- Fast path (zero overhead): all params fit in 16 bits -> direct call.
- Contiguous block: reshapes into a single 2D descriptor with sub_width
  that fits in 16 bits, preserving async DMA behavior.
- Stride overflow: row-by-row fallback for future large-k models where
  per-row stride itself exceeds 65535.

Convert all 8 external dma_queue_push calls in hmx-matmul-ops.c to use
the safe helper, including the 3 pipeline sites (1D -> 2D fix), the
FP16 and x4x2 weight paths, qweight_fetch sub-block DMA, and the
output-stationary activation fetch.
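
A hedged sketch of the reshaping logic only (the real helper wraps the repo's dma_queue_push and its pipeline bookkeeping, which are not reproduced here):

```cpp
#include <cstddef>
#include <cstdint>

// Given a contiguous transfer of total_bytes, pick a 2D (width x rows) shape whose
// fields all fit the 16-bit UDMA type1 descriptor limit; any remainder is pushed as
// a small tail transfer. The 128-byte alignment choice is illustrative.
struct dma_shape { uint32_t width, rows, tail; };

static dma_shape dma_shape_for_contiguous(size_t total_bytes) {
    const uint32_t lim = 65535;
    if (total_bytes <= lim) {
        return { (uint32_t) total_bytes, 1u, 0u };           // fast path: plain 1D transfer
    }
    const uint32_t width = lim & ~127u;                       // largest 128-byte-aligned row width
    const uint32_t rows  = (uint32_t) (total_bytes / width);
    const uint32_t tail  = (uint32_t) (total_bytes % width);  // leftover bytes, if any
    return { width, rows, tail };
}
```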

* hexagon: multithread activation/output transfer and add HMX matmul fallback

- Replace single-threaded transfer_activation_chunk_fp32_to_fp16 with
  transfer_activation_chunk_multithread across all HMX matmul paths
- Add multi-threaded transfer_output_chunk_multithread for FP16-to-FP32
  output store, following the same worker pool pattern
- Rename transfer_activation_chunk_no_prefetch back to
  transfer_activation_chunk_fp32_to_fp16 and clean up stale comments
- Add HVX fallback in proc_hmx_matmul_req when HMX matmul returns error

* [todo]: dynamic VTCM allocation causes a prefill regression.

* hexagon: constrain HMX mxmem tile load region to avoid VTCM bank boundary faults

Set activation/weight mxmem Rt to 2047 for single-tile loads and document the 4MB VTCM bank boundary constraint, preventing precise bus errors when dynamic VTCM allocation places tiles near bank edges.

* hexagon: split unaligned-M HMX matmul into HMX+HVX phases

- keep HMX for the 32-aligned head rows and process tail rows with HVX
- force re-quantization for HVX tail after HMX phase to avoid stale VTCM state
- preserve fallback behavior when N is unaligned or no aligned M rows exist

* hexagon: batch-4 Q4_0 dequantize fast path and remove debug traces

Add dequantize_x4x2_q4_0_x4groups_hvx() that processes 4 contiguous
K-tiles with a single vmemu + vlut16 per row, reducing per-tile overhead.
The dequantize loop now takes the batch-4 path when 4 aligned K-tiles
are available within the same column tile, falling back to the original
single-tile path otherwise.

Also removes HMX_DEBUG_TRACE_VALUES instrumentation blocks that are no
longer needed.

* hexagon: abort on DSP error and fix HMX-to-HVX fallback quantize flag

Promote DSP response error from log to GGML_ABORT for fail-fast
behavior. Clear SKIP_QUANTIZE flag when falling back from HMX to HVX
matmul so the HVX path correctly re-quantizes activations.

* hexagon: support batch matmul. This fixes a perplexity issue.
The problem comes from Grouped-Query Attention (GQA): strides between batches were not respected.
TODO: optimize batch matmul to reuse weights between batches.

* hexagon: reuse weights in fp16 batch matmul

* hexagon: remove unused HMX flash attention operations and precomputation table, remove the log system for test

* hexagon: remove unused HVX math helpers, debug infrastructure, and stale build options

* hexagon: fix HMX not enabled due to missing force_hvx parameter in IDL

* hexagon: remove the unnecessary changes not related to HMX

* hexagon: bypass HMX by default

* hexagon: add upstream repo link to htp-ops-lib ported file headers

* hexagon: restore host buffer support

* hexagon: add HMX=1 option for the adb scripts

* hex-hmx: improve DMA pipelining

* hex-hmx: further improvements to dma pipelining

* hex-hmx: minor cleanup

* hex-hmx: move hmx lock out of inner loops/calls

* hex-hmx: remove unnecessary state and wrappers

* hex-hmx: remove hmx dir and unify f32 to f16 conversions

* hex-hmx: further unify hvx conversions

* hex-hmx: revert f16 converter to the original for now

* hex-hmx: minor cleanup for f16 to f32 converter

* hex-mm: replace incorrect fp16-to-fp32 hmx converter and reformatted related code

* hex-dma: move chained dma push into hex-dma.h header and update hmx-mm

* hex-mm: use hex_is_aligned instead of a duplicated hmx_is_aligned

* hex-mm: use hvx_vec_splat_f16 in the hmx code

* hex-mm: use VLEN and HTP types in hmx-code

* hex-mm: remove duplicate QK and defs

* hexagon: pre-shuffle quants before vlut16

* hexagon: enable HMX by default

* hex-mm: code indent fixes for hmx-matmul

* hexagon: update hex-utils to include align/smin/etc helpers and use that in hmx mm

* hex-mm: more formatting fixes

* hex-mm: minor naming updates in hmx code

* hex-mm: remove leftover from rebase conflict

* Fix the incorrect indents

---------

Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
2026-03-29 15:04:36 +03:00
uvos 081dc773a5 ci : add hip quality check (llama/20430)
* CI: add hip quality check

* Update scripts/hip/gcn-cdna-vgpr-check.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update .github/workflows/hip-quality-check.yml

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update .github/workflows/hip-quality-check.yml

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update .github/workflows/hip-quality-check.yml

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update scripts/hip/gcn-cdna-vgpr-check.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update scripts/hip/gcn-cdna-vgpr-check.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update scripts/hip/gcn-cdna-vgpr-check.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update scripts/hip/gcn-cdna-vgpr-check.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Revert "Update .github/workflows/hip-quality-check.yml"

This reverts commit efa0bfcdb01dfac0feee674987a0482d50f46145.

* scripts: gcn-cdna-vgpr-check.py: enforce int type for total_vgprs

* scripts: gcn-cdna-vgpr-check.py: add flash attention instances to ignore list

* Bump ccache version

* Add missing separators to list

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-03-29 15:04:36 +03:00
Reese Levine 551bb82960 ggml webgpu: ops support for qwen3.5 (SET, TRI_SOLVE, SSM_CONV, GATED_DELTA_NET) + GET_ROWS optimization (llama/20687)
* Implement l2_norm, set, tri

* Add DIAG/SOLVE_TRI

* Add SSM_CONV

* Better get_rows and gated_delta_net to support qwen3.5

* Clean up, update ops.md

* Fix binding_index type for wasm

* Fix read write annotations

* cleanups
2026-03-29 15:04:36 +03:00
Eve 43c7c0f86c vulkan: dequantize iq4_xs 4 at a time (llama/20657) 2026-03-29 15:04:36 +03:00
Charles Xu fea629d00f cmake : fix build warning when kleidiai is enabled (llama/20457)
* cmake : fix build warning when kleidiai is enabled

* remove LLAMA_ARG_THREADS from KleidiAI backend
2026-03-29 15:04:36 +03:00
Chenguang Li 2a6de29364 CANN: handle in-place ROPE on non-contiguous f32 tensors (llama/20274)
RotaryPositionEmbedding on CANN fails when src and dst share the same
non-contiguous buffer (inplace + view), because the operator overwrites
source data before it is fully read.

Add a branch that detects this case and uses contiguous temporary
buffers: copy src to temp, run ROPE into another temp, then copy back
to the non-contiguous dst. Fixes 20 failing ROPE tests (f32, v=1,
inplace=1).

Signed-off-by: noemotiovon <757486878@qq.com>
2026-03-29 15:04:36 +03:00
Masashi Yoshimura 3d004fbf0a ggml-webgpu: Update the `RMS_NORM` preprocessor and add `L2_NORM` (llama/20665)
* Update the preprocessor of RMS_NORM and add L2_NORM.

* Fix the name of rms_norm to row_norm.
2026-03-29 15:04:36 +03:00
Masashi Yoshimura 12015a2174 ggml-webgpu: Add supports for `DIAG` and `TRI` (llama/20664)
* Add supports for DIAG and TRI.

* Remove extra ttype and add a comment for TRI op.
2026-03-29 15:04:36 +03:00
Chenguang Li dfba84cb47 CANN: support flash attention for head dim not multiple of 16, fix ALiBi slope offset (llama/20031)
- Allow FLASH_ATTN_EXT when head dimension D is not a multiple of 16 by
  padding Q/K/V to D_padded = GGML_PAD(D, 16), running FusedInferAttentionScoreV2,
  then slicing the output back to D (ggml-cann.cpp + aclnn_ops.cpp).
- Fix aclnn_get_slope second-part offset: use ggml_type_size(dtype) instead of
  sizeof(float) so ALiBi slopes are correct when dtype is F16 (e.g. GQA with
  48 heads); fixes buffer overflow and large numerical errors in those cases.
2026-03-29 15:04:36 +03:00
Reese Levine d6a0f0d075 Move to no timeout for WaitAny in graph submission to avoid deadlocks in some cases on llvm-pipe backends (llama/20618) 2026-03-29 15:04:36 +03:00
Shaw Nguyen 14caedfa18 ggml-cpu/x86: fix unused changemask warning in repack (llama/20692) 2026-03-29 15:04:36 +03:00
uvos 61c7cd024d HIP : ignore return of hipMemAdvise [no ci] (llama/20696) 2026-03-29 15:04:36 +03:00
Krishna Sridhar e222814fc4 hexagon: add neg, exp, sigmoid, softplus ops, cont, repeat ops (llama/20701)
Add element-wise unary ops needed by Qwen 3.5's DeltaNet linear
attention layers. These ops follow the existing unary-ops pattern
with VTCM DMA double-buffering.

- neg: negate via scale by -1.0
- exp: uses existing hvx_exp_f32 HVX intrinsics
- sigmoid: uses existing hvx_sigmoid_f32_aa HVX intrinsics
- softplus: log(1 + exp(x)) scalar fallback (see the sketch after this list)
- CONT reuses the existing CPY infrastructure since making a tensor
  contiguous is equivalent to a same-type copy.
- REPEAT implements tiled memory copy with multi-threaded execution via
  the worker pool, supporting f32 and f16 types. The kernel parallelizes
  across output rows and uses memcpy for each tile.
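
A minimal sketch of the scalar softplus fallback mentioned in the list above (the HVX-accelerated ops use the existing intrinsics); the large-x cutoff is an illustrative guard against overflow:

```cpp
#include <cmath>

// softplus(x) = log(1 + exp(x)); log1p keeps precision when exp(x) is small,
// and for large x the result is effectively x itself.
static inline float softplus_f32(float x) {
    return x > 20.0f ? x : std::log1p(std::exp(x));
}
```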

Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
2026-03-29 15:04:36 +03:00
Ruben Ortlam 16ca5e6fb1 vulkan: disable mmvq on Intel Windows driver (llama/20672)
* vulkan: disable mmvq on Intel Windows driver

* improve comment
2026-03-29 15:04:36 +03:00
Kevin Hannon 906aef3da8 ggml-blas: set mkl threads from thread context (llama/20602)
* ggml blas: set mkl threads from thread context

* add code to run blas locally
2026-03-29 15:04:36 +03:00
Taimur Ahmad c890a9d9b4 ggml-cpu: fix RVV checks in quants and repacking (llama/20682)
* ggml-cpu: refactor quants.c; add rvv check

* ggml-cpu: refactor; disable generic fallback
2026-03-29 15:04:36 +03:00
Ruben Ortlam 0ad6ceef59 vulkan: async and event fixes (llama/20518)
* vulkan: fix event wait submission, event command buffer reset

* fix event command buffer reset validation error

* also reset command buffers before reuse

* use timeline semaphores instead of fences for event_synchronize

* don't use initializer list for semaphore wait info

* use multiple events to avoid reset issues

* fix event reuse issue with multiple vectors

* add semaphore wait condition also if compute_ctx already exists

* remove event pending stage
2026-03-29 15:04:36 +03:00
Justin Bradford ab7d305b75 kleidiai : fix MUL_MAT support for batched (3D) inputs (llama/20620)
* kleidiai : fix MUL_MAT support for batched (3D) inputs

The supports_op() check incorrectly rejected MUL_MAT operations with 3D
inputs (ne[2] > 1), but the actual compute_forward_qx() implementation
handles batched inputs correctly via a loop over ne12.

This caused models with Q4_0/Q8_0 weights to crash during graph scheduling
when n_seq_max > 1, because weights were placed in KLEIDIAI buffers during
loading (tested with 2D inputs) but the runtime used 3D inputs.

Also relax the buffer check to allow supports_op() to be called during
weight loading when src[0]->buffer is NULL.

Fixes #20608

* Kleidiai support_ops should only return true for 3D inputs, not also 4D
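
A hedged sketch of the corrected dimensionality check (field names follow ggml's ggml_tensor; the actual supports_op contains additional type and buffer checks):

```cpp
#include "ggml.h"

// Accept 2D and batched 3D MUL_MAT inputs (the compute path loops over ne12),
// but keep rejecting 4D inputs, which the kernels do not handle.
static bool kleidiai_mul_mat_dims_ok(const ggml_tensor * src0, const ggml_tensor * src1) {
    return src0->ne[3] == 1 && src1->ne[3] == 1;
}
```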
2026-03-29 15:04:36 +03:00
Ruben Ortlam 49adc8b470 vulkan: allow graphics queue only through env var (llama/20599)
* vulkan: avoid graphics queue on non-RADV AMD drivers

* avoid graphics queues on small GPUs

* change to only use graphics queue if overridden with env var GGML_VK_ALLOW_GRAPHICS_QUEUE

* reenable transfer queue if graphics queue is not used
2026-03-29 15:04:36 +03:00
Neo Zhang 6494251197 enhance UPSCALE to support all UT cases (llama/20637)
* [SYCL] enhance UPSCALE to support more cases

* rm test case result of SYCL1
2026-03-29 15:04:36 +03:00
Martin Klacer 9232af59ba kleidiai: add data type check to get_tensor_traits (llama/20639)
* kleidiai: add data type check to get_tensor_traits

 * Added a check for the F16 data type in the get_tensor_traits path when the input data
   is not in ggml_backend_cpu_kleidiai_buffer_type format (unsupported for Q4/8)

Signed-off-by: Martin Klacer <martin.klacer@arm.com>
Change-Id: I9aca4b9b8d669d35db6f1dbcc4e080b1919b1de7

* updated ggml/src/ggml-cpu/kleidiai/kleidiai.cpp

updated kleidiai.cpp file as per suggestion

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Signed-off-by: Martin Klacer <martin.klacer@arm.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-03-29 15:04:36 +03:00
Ruben Ortlam 724ea71cf9 vulkan: fix flash attention dot product precision (llama/20589) 2026-03-29 15:04:36 +03:00
Aman Gupta dae7781052 CUDA: GDN hide memory latency (llama/20537) 2026-03-29 15:04:36 +03:00
Sigbjørn Skjæret 1335dfa785 sycl : fix for untransposed GDA recurrent state (llama/20583) 2026-03-29 15:04:36 +03:00
KITAITI Makoto 76684141a5
ruby : fix dangling pointers, memory leak, and SEGV on parallel transcription (#3715)
* Prevent dangling pointers

* Use proper free function

* Free callback containers

* Set default log callback when nil is passed to log_set

* Raise error if callbacks set when parallel transcription

* Bump version to 1.3.7

* Make tests follow spec change

* Add note on parallel transcription and callbacks

* Update signature of Whisper.log_set [skip ci]
2026-03-22 02:03:00 +09:00
Georgi Gerganov 9386f23940
release : v1.8.4 2026-03-19 10:40:13 +02:00
Georgi Gerganov ef3463bb29
ci : update workflows 2026-03-18 22:43:38 +02:00
Georgi Gerganov 4bbce1e5b2
benches : update 2026-03-18 22:34:51 +02:00
Georgi Gerganov f5b477ab09 sync : ggml 2026-03-18 15:18:24 +02:00
Georgi Gerganov b2be16208d ggml : bump version to 0.9.8 (ggml/1442) 2026-03-18 15:18:24 +02:00
Georgi Gerganov 945d3151d9 ggml : restore ggml_type_sizef() to avoid major version bump (ggml/1441) 2026-03-18 15:18:24 +02:00
lohopupa dc96116622
fix: VAD time mapping timestamp drift caused by overlap samples (#3711)
* whisper : fix VAD segment overlap boundary handling

 - Use original segment length (pre-overlap) for vad_end in the time
   mapping table, so segment boundaries are preserved accurately

Claude Sonnet 4.6 (Low)

* whisper : remove intermediate VAD time mapping points

Now that segment boundaries are mapped accurately, the intermediate
point interpolation is no longer necessary.

---------

Co-authored-by: Lohopupa <lohopupa@gmail.com>
2026-03-17 07:19:08 +01:00
Alan 79218f51d0
go : handle EOF correctly in model download (#3671) 2026-03-16 13:44:18 +02:00
Aiudadadadf 975b979834
py : replace deprecated openvino-dev with openvino>=2023.3.0 (#3678)
* models: replace deprecated openvino-dev with openvino>=2023.3.0 for Python 3.12+ compat

* models: remove unused openvino.tools.mo import from convert-whisper-to-openvino.py
2026-03-16 13:41:54 +02:00
Gaël James 21665eab4c
examples : Allow max_len to be used for any output format (#3679) 2026-03-16 13:33:56 +02:00
Igor Loskutov 136dc2eb12
server: return proper HTTP status codes for error responses (#3707)
Several error paths in the /inference and /load endpoints returned
HTTP 200 with a JSON error body, making it impossible for clients
to distinguish errors from successful responses by status code.

Set 400 for client errors (missing file field, unreadable audio,
missing/invalid model) and 500 for server errors (ffmpeg conversion
failure). The two existing status-code sites (499 for client
disconnect, 500 for processing failure) are unchanged.
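
A minimal sketch of the convention, assuming the server's cpp-httplib response object (the exact handler structure differs):

```cpp
#include <string>
#include "httplib.h"

// 400: client errors (missing file field, unreadable audio, missing/invalid model)
// 500: server errors (ffmpeg conversion failure, processing failure)
static void reply_error(httplib::Response & res, int status, const std::string & msg) {
    res.status = status;
    res.set_content("{\"error\":\"" + msg + "\"}", "application/json");
}
```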
2026-03-16 13:33:06 +02:00
Georgi Gerganov 27fa20774a ggml : try fix arm build (#0) 2026-03-16 13:10:15 +02:00
Georgi Gerganov 2bc630f197 talk-llama : sync llama.cpp 2026-03-16 13:10:15 +02:00
Georgi Gerganov ab1252c19e sync : ggml 2026-03-16 13:10:15 +02:00
David366AI d4bc312169 ggml : extend im2col f16 (ggml/1434)
* examples/yolo: fix load_model memory leak

* fix/issue-1433 ggml_compute_forward_im2col_f16 assert error

* fix/issue-1433
2026-03-16 13:10:15 +02:00
Georgi Gerganov 81ea958719 common : add nvfp4 (ggml/0) 2026-03-16 13:10:15 +02:00
Johannes Gäßler d7926e62d4 CUDA: limit number of FA stream-k CUDA blocks (llama/20586) 2026-03-16 13:10:15 +02:00
Pascal 2fb6aea8ad ggml: avoid creating CUDA context during device init (llama/20595) 2026-03-16 13:10:15 +02:00
MoonShadow b327a321a2 ggml/hip: fix APU compatibility - soft error handling for hipMemAdviseSetCoarseGrain (llama/20536)
* ggml/hip: fix APU compatibility - soft error handling for hipMemAdviseSetCoarseGrain

On AMD APU/iGPU devices (unified memory architecture), hipMemAdviseSetCoarseGrain
returns hipErrorInvalidValue because the hint is not applicable to UMA systems.
The previous CUDA_CHECK() call treated this as a fatal error, causing crashes on
APU systems such as AMD Strix Halo (gfx1151).

Fix: treat hipMemAdviseSetCoarseGrain as an optional performance hint - call it
without error checking and clear any resulting error with hipGetLastError().
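
A hedged sketch of the soft-error pattern using standard HIP runtime calls (the surrounding allocation code is omitted):

```cpp
#include <hip/hip_runtime.h>

// Treat the coarse-grain advise as a best-effort hint: on UMA/APU devices it can
// return hipErrorInvalidValue, so do not wrap it in a fatal error check and clear
// any sticky error state afterwards.
static void advise_coarse_grain(void * ptr, size_t size, int device) {
    (void) hipMemAdvise(ptr, size, hipMemAdviseSetCoarseGrain, device);
    (void) hipGetLastError();  // discard a possible hipErrorInvalidValue
}
```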

Also add pre-allocation debug logging (GGML_LOG_DEBUG) to help diagnose memory
issues on APU systems, and store totalGlobalMem in device info.

Context: AMD APUs on Windows are affected by a ROCm runtime bug that limits
hipMallocManaged to ~64GB regardless of available system RAM. A fix has been
submitted upstream: https://github.com/ROCm/rocm-systems/pull/4077

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* ggml/hip: remove unrelated changes, keep only hipMemAdviseSetCoarseGrain fix

---------

Co-authored-by: moonshadow-25 <moonshadow-25@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 13:10:15 +02:00
Bartowski 6770239830 ggml : guard against sumq2 being 0 in IQ4_NL (llama/20460) 2026-03-16 13:10:15 +02:00
PikaPikachu 55c66106af cuda : add RDNA4-specific MMVQ parameter table for bs=1 decode (llama/19478)
* mmvq: add RDNA3/RDNA4-specific parameter table (nwarps=8, rows=1)

* mmvq: add dedicated RDNA3 parameter table

* mmvq: exclude RDNA3.5 (gfx1150/1151) from RDNA3 table
2026-03-16 13:10:15 +02:00
Ruben Ortlam cd02195b8f vulkan: use graphics queue on AMD (llama/20551)
* vulkan: use graphics queue on AMD for slightly better performance

* disable async transfer queue on AMD
2026-03-16 13:10:15 +02:00
Georgi Gerganov b312018435 metal : add FA specialization for HSK = 320, HSV = 256 (llama/20549) 2026-03-16 13:10:15 +02:00
Max Krasnyansky 55f8cfdaed hexagon: Q4_0 and MXFP4 repack fixes (llama/20527)
* hexagon: fix tail corruption with rows sizes not multiple of 256

* hexagon: use different stride for repacking partial blocks

* hex-mm: update repack and kernels to avoid shuffles for full 256-element blocks

Previous commit changed the repacking to use even:odd (0:1,2:3,..) packing
instead of the original (0:128,1:129,...) packing in order to fix tail corruption.
Since the mm kernels already deal with partial tails we can use even:odd
packing only for the last block.
This avoids the performance penalty of having to shuffle to zip the elements
in the common case.
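
A hedged sketch of the two pairings for one 256-element block (indices only; the real repack operates on quantized nibbles):

```cpp
#include <utility>

// Original packing pairs element i with element i + 128: (0,128), (1,129), ...
static std::pair<int, int> pack_pair_split(int i)    { return { i, i + 128 }; }

// Even:odd packing pairs adjacent elements: (0,1), (2,3), ... which stays valid
// for a partial tail block, at the cost of a shuffle in the full-block case.
static std::pair<int, int> pack_pair_even_odd(int i) { return { 2 * i, 2 * i + 1 }; }
```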

* hex-mm: update rmpy x8 for better optimizations

* hex-mm: tighten supported MUL_MAT checks to avoid spurious failures

* hex-mm: use vzero to init accumulators

* hex-mm: properly call partial rmpy_x8
2026-03-16 13:10:15 +02:00
Neo Zhang c5f9a49b51 add op gated_delta_net (llama/20455) 2026-03-16 13:10:15 +02:00
Adrien Gallouët 93d09fdb23 ggml : add native AVX512-FP16 support for F16 operations (llama/20529)
The overall benchmark speed remains almost the same because the CPU is
now calculating faster than the RAM can deliver the data. (See perf stat
results below showing 2.7 billion fewer instructions).

Also note that this path will be only enabled for native build or with
custom flags.

now:
```
 Performance counter stats for 'build/bin/llama-bench -m Qwen3-0.6B-f16.gguf -p 512 -n 128':

        189,073.52 msec task-clock                       #   14.658 CPUs utilized
               404      context-switches                 #    2.137 /sec
                19      cpu-migrations                   #    0.100 /sec
           372,390      page-faults                      #    1.970 K/sec
   310,877,195,595      instructions                     #    0.54  insn per cycle
   581,071,530,602      cycles                           #    3.073 GHz
    19,352,107,994      branches                         #  102.352 M/sec
        48,304,438      branch-misses                    #    0.25% of all branches
    84,998,431,152      L1-dcache-loads                  #  449.552 M/sec
    12,186,410,279      L1-dcache-load-misses            #   14.34% of all L1-dcache accesses

      12.899358742 seconds time elapsed

     187.823044000 seconds user
       1.253416000 seconds sys
```

before:
```
 Performance counter stats for 'build/bin/llama-bench -m Qwen3-0.6B-f16.gguf -p 512 -n 128':

        190,594.56 msec task-clock                       #   14.652 CPUs utilized
               436      context-switches                 #    2.288 /sec
                22      cpu-migrations                   #    0.115 /sec
           372,782      page-faults                      #    1.956 K/sec
   313,574,921,966      instructions                     #    0.54  insn per cycle
   586,064,970,425      cycles                           #    3.075 GHz
    19,585,778,563      branches                         #  102.761 M/sec
        48,437,488      branch-misses                    #    0.25% of all branches
    86,219,336,628      L1-dcache-loads                  #  452.370 M/sec
    12,232,085,771      L1-dcache-load-misses            #   14.19% of all L1-dcache accesses

      13.007923164 seconds time elapsed

     189.395316000 seconds user
       1.202612000 seconds sys
```

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-03-16 13:10:15 +02:00
Wallentri 8ad5cb1e9d Use fp32 in cuBLAS V100 to avoid overflows, env variables to override cuBLAS compute type (llama/19959)
* Update ggml-cuda.cu

* Update ggml-cuda.cu

* Update build.md

* Update build.md

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml-cuda.cu

* Update build.md

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update build.md

* Update ggml-cuda.cu

* Update ggml-cuda.cu

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-03-16 13:10:15 +02:00
Zijun Yu 96b163e874 ggml : add OpenVINO backend (llama/15307)
* Update build doc

* Add cgraph tensor output name to OV op name

* Update openvino build instructions

* Add initial NPU support

* draft NPU support version 2: prefill + kvcache

* NPU support version 2: prefill + kvcache

* Change due to ggml cgraph changes, not correct yet

* Change due to ggml cgraph changes, llama-3.2 CPU work

* Add AMD64 to CMakeLists

* Change due to ggml cgraph changes, all device work

* Refactor: clean, fix warning

* Update clang-format

* Statful transformation for CPU GPU

* Add SwiGLU

* Fuse to SDPA

* Replace Concat with Broadcast in MulMat for GQA

* Pull out indices creation for kv cache update

* Refactor: remove past_token_len from extra_inputs

* Fix Phi3 SwiGLU and SoftMax

* Pull out sin cos from rope

* Reduce memory: free ov weights node after graph conversion

* Fix CPY due to cgraph change

* Added OpenVINO CI/CD. Updated docs

* Fix llama-cli

* Fix Phi3 ROPE; Add test-backend-ops

* Fix NPU

* Fix llama-bench; Clang-format

* Fix llama-perplexity

* temp. changes for mark decomp

* matmul in fp32

* mulmat input conversion fix

* mulmat type conversion update

* add mark decomp pass

* Revert changes in fuse_to_sdpa

* Update build.md

* Fix test-backend-ops

* Skip test-thread-safety; Run ctest only in ci/run.sh

* Use CiD for NPU

* Optimize tensor conversion, improve TTFT

* Support op SET_ROWS

* Fix NPU

* Remove CPY

* Fix test-backend-ops

* Minor updates for raising PR

* Perf: RMS fused to OV internal RMS op

* Fix after rebasing

- Layout of cache k and cache v are unified: [seq, n_head, head_size]
- Add CPY and FLASH_ATTN_EXT, flash attn is not used yet
- Skip test-backend-ops due to flash attn test crash
- Add mutex around graph conversion to avoid test-thread-safety fail in the future
- Update NPU config
- Update GPU config to disable SDPA opt to make phi-3 run

* Change openvino device_type to GPU; Enable flash_attn

* Update supports_buft and supports_op for quantized models

* Add quant weight conversion functions from genai gguf reader

* Quant models run with accuracy issue

* Fix accuracy: disable cpu_repack

* Fix CI; Disable test-backend-ops

* Fix Q4_1

* Fix test-backend-ops: Treat quantized tensors as weights

* Add NPU Q4_0 support

* NPU perf: eliminate zp

* Dequantize q4_1 q4_k q6_k for NPU

* Add custom quant type: q8_1_c, q4_0_128

* Set m_is_static=false as default in decoder

* Simpilfy translation of get_rows

* Fix after rebasing

* Improve debug util; Eliminate nop ReshapeReshape

* STYLE: make get_types_to_requant a function

* Support BF16 model

* Fix NPU compile

* WA for npu 1st token acc issue

* Apply EliminateZP only for npu

* Add GeGLU

* Fix Hunyuan

* Support iSWA

* Fix NPU accuracy

* Fix ROPE accuracy when freq_scale != 1

* Minor: not add attention_size_swa for non-swa model

* Minor refactor

* Add Q5_K to support phi-3-q4_k_m

* Requantize Q6_K (gs16) to gs32 on GPU

* Fix after rebasing

* Always apply Eliminate_ZP to fix GPU compile issue on some platforms

* kvcachefusion support

* env variable GGML_OPENVINO_DISABLE_SDPA_OPTIMIZATION added

* Fix for Phi3

* Fix llama-cli (need to run with --no-warmup)

* Fix add_sliced_mask; Revert mulmat, softmax; Remove input attention_size, iSWA model not working

* fix after rebasing

* Fix llama-3-8b and phi3-mini q4_0 NPU

* Update to OV-2025.3 and CMakeLists.txt

* Add OV CI cache

* Apply CISC review and update CI to OV2025.3

* Update CI to run OV dep install before build

* Update OV dockerfile to use OV2025.3 and update build docs

* Style: use switch in supports_ops

* Style: middle ptr and ref align, omit optional struct keyword

* NPU Unify PD (llama/14)

* Stateless. Fix llama-cli llama-server

* Simplify broadcast op in attention

* Replace get_output_tensor+memcpy with set_output_tensor

* NPU unify PD. Unify dynamic and static dims

* Clean placeholders in ggml-openvino.cpp

* NPU unify PD (handled internally)

* change graph to 4d, support multi sequences

* Fix llama-bench

* Fix NPU

* Update ggml-decoder.cpp

Hitting error while compiling on windows:

error C3861: 'unsetenv': identifier not found

Reason: unsetenv() is a POSIX function; it doesn’t exist on Windows. Visual Studio (MSVC) won’t recognize it.

Proposed fix: Use _putenv_s() (Windows equivalent)
This is supported by MSVC and achieves the same effect: it removes the environment variable from the process environment.

This keeps cross-platform compatibility.
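
A minimal cross-platform sketch along the lines proposed above (the helper name is illustrative):

```cpp
#include <cstdlib>

static void env_unset(const char * name) {
#ifdef _WIN32
    _putenv_s(name, "");   // MSVC has no unsetenv(); an empty value removes the variable
#else
    unsetenv(name);        // POSIX
#endif
}
```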

* Update ggml-decoder.cpp

* Update ggml-decoder.cpp

* Update ggml-decoder.cpp

* Update ggml-decoder.cpp

* Update ggml-decoder.cpp

* Remove the second decoder for node. Moving the function into the model decoder

* Fix error for naive

* NPU prefill chunking

* NPU fix llama-bench

* fallback naive run with accuracy issue

* NPU support llama-perplexity -b 512 --no-warmup

* Refactor: split ov_graph_compute for dynamic and static

* remove unused API GgmlOvDecoder::get_output_stride(const std::string & name)

* minor update due to ov 2025.4

* remove unused API GgmlOvDecoder::get_output_names()

* remove unused API get_output_shape(const std::string & name)

* Modified API GgmlOvDecoder::get_output_type(const std::string & name)

* Removed API GgmlOvDecoder::get_output_op_params(const std::string & name)

* Removed API get_output_ggml_tensor(const std::string & name)

* Removed API m_outputs

* Removed m_output_names

* Removed API GgmlOvDecoder::get_input_names()

* Removed API GgmlOvDecoder::get_input_stride(const std::string& name)

* Removed API get_input_type

* Removed API get_input_type

* Removed API GgmlOvDecoder::get_input_shape(const std::string & name)

* Removed API GgmlOvDecoder::get_input_op_params(const std::string & name)

* Fix error for decoder cache

* Reuse cached decoder

* GPU remove Q6_K requantization

* NPU fix wrong model output shape

* NPU fix q4 perf regression

* Remove unused variable nodes

* Fix decoder can_reuse for llama-bench

* Update build.md for Windows

* backend buffer: allocate on host

* Use shared_buffer for GPU NPU; Refactor

* Add ov_backend_host_buffer; Use cached remote context

* Put kvcache on GPU

* Use ggml_aligned_malloc

* only use remote tensor for kvcache

* only use remote tensor for kvcache for GPU

* FIX: use remote tensor from singleton

* Update build.md to include OpenCL

* NPU always requant to q4_0_128

* Optimize symmetric quant weight extraction: use single zp

* Use Q8_0_C in token embd, lm_head, and for 5 and 6 bits quant

* Update build.md

* Support -ctk f32

* Initial stateful graph support

* Update ggml/src/ggml-openvino/ggml-decoder.cpp

Co-authored-by: Yamini Nimmagadda <yamini.nimmagadda@intel.com>

* code cleanup

* npu perf fix

* requant to f16 for Q6 embed on NPU

* Update ggml/src/ggml-openvino/ggml-decoder.cpp

* Update ggml/src/ggml-openvino/ggml-openvino-extra.cpp

* Create OPENVINO.md in llama.cpp backend docs

* Update OPENVINO.md

* Update OPENVINO.md

* Update OPENVINO.md

* Update build.md

* Update OPENVINO.md

* Update OPENVINO.md

* Update OPENVINO.md

* kq_mask naming fix

* Syntax correction for workflows build file

* Change ov backend buffer is_host to false

* Fix llama-bench -p -n where p<=256

* Fix --direct-io 0

* Don't put kvcache on GPU in stateful mode

* Remove hardcode names

* Fix stateful shapes

* Simplification for stateful and update output shape processing

* Remove hardcode names

* Avoid re-compilation in llama-bench

* Extract zp directly instead of bias

* Refactor weight tensor processing

* create_weight_node accept non-ov backend buffer

* remove changes in llama-graph.cpp

* stateful masking fix (llama/38)

Fix for stateful accuracy issues and cl_out_of_resources error in stateful GPU with larger context sizes.

* Fix test-backend-ops crash glu, get_rows, scale, rms_norm, add

* hardcoded name handling for rope_freqs.weight

* Suppress logging and add error handling to allow test-backend-ops to complete

* Fix MUL_MAT with broadcast; Add unsupported MUL_MAT FLASH_ATTN cases

* Use bias instead of zp in test-backend-ops

* Update OV in CI, Add OV CI Tests in GH Actions

* Temp fix for multithreading bug

* Update OV CI, fix review suggestions.

* fix editorconfig-checker, update docs

* Fix tabs to spaces for editorconfig-checker

* fix editorconfig-checker

* Update docs

* updated model link to be GGUF model links

* Remove GGML_CPU_REPACK=OFF

* Skip permuted ADD and MUL

* Removed static variables from utils.cpp

* Removed initializing non-existing variable

* Remove unused structs

* Fix test-backend-ops for OV GPU

* unify api calling

* Update utils.cpp

* When the dim is dynamic, throw an error; it needs to be static first

* Add interface compute_model_outputs(), which gets the model outputs by computing node use counts and statuses in the cgraph, avoiding the flag-based approach

* No need to return

* Fix test-backend-ops for OV GPU LNL

* Fix test-thread-safety

* use the shape from the infer request when creating the output tensor to avoid the issue

* fix dynamic output shape  issue

* fix issue for the unused node in tests

* Remove unused lock

* Add comment

* Update openvino docs

* update to OV release version 2026.0

* add ci ov-gpu self hosted runner

* fix editorconfig

* Fix perplexity

* Rewrite the model inputs finding mechanism  (llama/54)

* Rewrite the model inputs finding logic

* Put stateful shape handle in get input shape

* Put the iteration logic in a func

* Added ggml-ci-intel-openvino-gpu and doc update

* .hpp files converted to .h

* fix ggml-ci-x64-intel-openvino-gpu

* Fix for stateful execution bug in llama-bench

* Minor updates after stateful llama-bench fix

* Update ggml/src/ggml-openvino/utils.cpp

Co-authored-by: Yamini Nimmagadda <yamini.nimmagadda@intel.com>

* Remove multiple get_shape calls

* Bring back mutex into compute

* Fix VIEW op, which slice the input node

* Added token_len_per_seq existence check before slicing masks and moved node retrieval inside guarded block to prevent missing-key access

* Temp. fix for test requant errors

* Update to OV ggml-ci to low-perf

* ci : temporary disable "test-llama-archs"

* ci : cache v4 -> v5, checkout v4 -> v6, fix runner tag

* docs : update url

* Fix OV link in docker and Update docs

---------

Co-authored-by: Ravi Panchumarthy <ravi.panchumarthy@intel.com>
Co-authored-by: Cavus Mustafa <mustafa.cavus@intel.com>
Co-authored-by: Arshath <arshath.ramzan@intel.com>
Co-authored-by: XuejunZhai <Xuejun.Zhai@intel.com>
Co-authored-by: Yamini Nimmagadda <yamini.nimmagadda@intel.com>
Co-authored-by: Xuejun Zhai <Xuejun.Zhai@intel>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-03-16 13:10:15 +02:00
Rail Chabdarov 46aad766f5 Fix data race in CUDA's "cpy" kernel (influences GGML's DUP, CONT operations). (llama/20507)
* Fix datarace in CUDA's "cpy" kernel.

* Remove extra barrier by using more of shared memory.
2026-03-16 13:10:15 +02:00
lhez a31600d8e3 opencl: fix l2_norm (llama/20480) 2026-03-16 13:10:15 +02:00
Georgi Gerganov c7abcd577b graph : remove redundant GDN state transposes (llama/20443)
* ggml : transpose fused GDN state access for coalesced memory reads (llama/20436)

The fused Gated Delta Net kernel accessed the [S_v, S_v] state matrix
column-wise on row-major storage, causing strided reads (stride S_v =
128 floats = 512 bytes) that waste GPU cache bandwidth. This produced a
39% regression on Qwen3.5-9B (Metal, M4 Max) compared to the unfused
path.

Transpose the state indexing so threads read contiguously:
- Metal: s_ptr[is*S_v] -> s_ptr[is] (stride 1 vs S_v)
- CUDA:  curr_state[i*S_v+col] -> curr_state[col*S_v+i] (coalesced)
- CPU:   restructured loops for row-wise transposed access

Also add --fused-gdn [on|off|auto] CLI flag (mirrors --flash-attn) so
users can control fused GDN independently of auto-detection.

All GATED_DELTA_NET backend-ops tests pass.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* ggml : use SIMD dot products in CPU GDN kernel, couple AR/chunked fused flags

- Replace scalar inner loops with ggml_vec_dot_f32 for SIMD-optimized
  dot products in the CPU fused GDN kernel (delta and attention output)
- Couple fused_gdn_ar and fused_gdn_ch flags in auto-detection: if one
  path lacks device support, disable both to prevent state layout mismatch
  between transposed (fused) and non-transposed (unfused) formats

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* llama : revert fgdn argument changes

* graph : remove GDN state transposes

* vulkan : adapt

* cuda : remove obsolete smem code

---------

Co-authored-by: Paul Flynn <paul@arkavo.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Oliver Simons <osimons@nvidia.com>
2026-03-16 13:10:15 +02:00
rehan-10xengineer 5905e8708f ggml-cpu: add RVV vec dot kernels for quantization types (llama/18859)
* ggml-cpu: add rvv quantize_row_q8_K kernel

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: add rvv vec_dot for iq4_nl, mxfp4, iq2_xxs

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: add rvv vec_dot for iq4_xs, refactor

* ggml-cpu: remove ifunc for rvv vec dot

* ggml-cpu: add vec_dot for iq2_xs, iq3_xxs

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: refactor quants.c

---------

Co-authored-by: taimur-10x <taimur.ahmad@10xengineers.ai>
Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>
Co-authored-by: Rehan Qasim <rehanbhatti0317@gmail.com>
2026-03-16 13:10:15 +02:00
Adrien Gallouët 9bfa81d262 ggml : fix typo gmml (llama/20512)
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-03-16 13:10:15 +02:00
Georgi Gerganov f1f5f43d69 metal : fix l2 norm scale (llama/20493) 2026-03-16 13:10:15 +02:00
Georgi Gerganov 2ed6dc0222 llama : disable graph reuse with pipeline parallelism (llama/20463) 2026-03-16 13:10:15 +02:00
ProgenyAlpha 2450919665 vulkan: add GATED_DELTA_NET op support (llama/20334)
* vulkan: add GATED_DELTA_NET op support

Implements the fused gated delta net recurrence as a Vulkan compute
shader with full support for scalar gate, KDA vector gate, GQA
broadcast, multi-token sequences, and permuted (non-contiguous) q/k
inputs. Specialization constants select head size (32/64/128) and
KDA mode at pipeline creation time.

Passes all 13 test-backend-ops cases on AMD Radeon 890M (RADV GFX1150).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* vulkan: optimize GATED_DELTA_NET shader (Phase 1)

- vec4 dot products on all inner loops (dp4 hardware intrinsic)
- Cache exp(g) in shared memory for KDA path, eliminating ~32K
  redundant global reads and ~16K redundant exp() calls per token
- vec4 fused decay + rank-1 update (3 vec4 ops vs 12 scalar ops)
- Add perf benchmark cases for GATED_DELTA_NET to test-backend-ops

KDA TG: +5.4% throughput. Non-KDA: no regressions.
13/13 test-backend-ops passing on AMD Radeon 890M (RADV GFX1150).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* vulkan: address review feedback for GATED_DELTA_NET

Pipeline array refactor [3][2], A_TYPE/D_TYPE/FLOAT_TYPE shader macros,
scale in push constants, supports_op fix, dispatch restructuring.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* vulkan: use FLOAT_TYPE for buffer/shared declarations, align formatting

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* vulkan: add explicit FLOAT_TYPE casts for buffer loads

Wrap data_q, data_k, and data_g buffer reads with FLOAT_TYPE() casts
to ensure correct behavior across all Vulkan configurations.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* vulkan: fix Q/K broadcast for interleaved head layout

Adapt to the interleaved broadcast convention from #20340:
head_id / rq1 → head_id % neq1

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Progeny Alpha <ProgenyAlpha@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 13:10:15 +02:00
ProgenyAlpha 44c12c642e vulkan: fix SSM_CONV PP scaling with large ubatch sizes (llama/20379)
* vulkan: optimize SSM_CONV workgroup dispatch for large ubatch

Tile tokens into 2D workgroups (32x16) to reduce workgroup launch
overhead at large ubatch sizes. Add vec4 fast path for nc=4 (common
d_conv size). Fixes PP performance degradation with ubatch > 512.

Ref: ggml-org/llama.cpp#18725

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* vulkan: remove unused shared memory declaration in SSM_CONV

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Progeny Alpha <ProgenyAlpha@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 13:10:15 +02:00
Georgi Gerganov 7e816a99d2 sync : ggml 2026-03-16 13:10:15 +02:00
Georgi Gerganov b48ffe28fc metal : avoid divisions in bin kernel (llama/20426) 2026-03-16 13:10:15 +02:00
Georgi Gerganov 7ccebd5264 sync : ggml 2026-03-16 13:10:15 +02:00
Jeff Bolz 86e312d61d vulkan: fix l2_norm epsilon handling (llama/20350) 2026-03-16 13:10:15 +02:00
Jeff Bolz 6c5e3aac3e vulkan: fix OOB check in flash_attn_mask_opt (llama/20296) 2026-03-16 13:10:15 +02:00
Masato Nakasaka 26ee4f7362 vulkan: Fix ErrorOutOfHostMemory on Intel GPU when loading large models with --no-mmap (llama/20059)
* Changed to reuse command buffers to fix crashing on Intel GPU

* Removed unused parameter

* Fixed compile error and minor mistake

* Fix logging

* Changing to use usage flag per command buffer

* fixed style

* added buffer reset

* Removed cmd_buffer_idx for reuse consistency

* Fixed style
2026-03-16 13:10:15 +02:00
lhez d5772cf7b2 opencl: use larger workgroup size for get_rows (llama/20316) 2026-03-16 13:10:15 +02:00
shaofeiqi 193781cf0e opencl: add cumsum op (llama/18981)
* OpenCL: add CUMSUM op support

* remove unused argument

* opencl: refactor cumsum

* opencl: refactor

* opencl: refactor tmp buffer

* opencl: adjust max number of subgroups

* opencl: fix whitespace

* opencl: fix global size when cumsum the tmp buffer

---------

Co-authored-by: Li He <lih@qti.qualcomm.com>
2026-03-16 13:10:15 +02:00
uvos f5ba865378 hip: compile debug builds with -O2 on hip to avoid a compiler bug (llama/20392) 2026-03-16 13:10:15 +02:00
Masashi Yoshimura 5267523829 ggml-webgpu: Add supports for `GGML_OP_REPEAT` (llama/20230)
* Add GGML_OP_REPEAT to webgpu backend.

* Add i16 support for GGML_OP_REPEAT.
2026-03-16 13:10:15 +02:00
Georgi Gerganov d73fe25267 llama : enable chunked fused GDN path (llama/20340)
* llama : enable chunked fused GDN path

* models : avoid Q and K repeats when using fused GDA

* cont : fix comment

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

* cont : fix the fix

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

* cont : fix

* metal : add GDN kernel (llama/20361)

* metal : add Metal backend for GGML_OP_GATED_DELTA_NET

Add a fused Metal kernel for the gated delta net recurrence op
(#19504), enabling GPU-accelerated inference for DeltaNet-based
models (Qwen3.5, etc.) on Apple Silicon.

Supports both GDA (scalar gate) and KDA (per-row gate) modes
with head_size 64 and 128. Unsupported configurations (head_size
32, non-contiguous tensors) gracefully fall back to CPU.

Performance: Qwen3.5-0.8B Q4_K_M on M4 Max
  tg128: 170 -> 213 t/s (+25%)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* metal : validate contiguity of all input tensors in supports_op

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* metal : add algorithm equivalence comment for GDA decay path

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* cont : unslop + optimize

* cont : clean-up

---------

Co-authored-by: Paul Flynn <paul@arkavo.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* CUDA: AR gated delta net improvements (llama/20391)

* Add FastDiv to gated_delta_net_cuda

* Shard columns across warps

This reduces register pressure (avoids spill for S_v = 128) and gives
the warp-scheduler more CTAs to schedule (thus hiding data-access
latencies).

* Remove unneded include in gated_delta_net.cu

* Improve comments

* Apply code-formating

* Make sharding HIP-compatible

1. Use ggml_cuda_get_physical_warp_size() to determine warp size flexibly
2. Add test with partial warp to test sum reduction on CUDA

* Remove fastdiv_s64, as we can treat neqk1 and rq3 as uint32_t

* Rename variables

* Enable GDN also for prefill, move TODO for chunked_GDN

* Actually remove the TODO from 206890897546bd16602c3b79394fd5ea09ef199f

* Get warp size at runtime

warp_size is not known at compile time in hip host code.

* Don't expose ggml_cuda_get_physical_warp_size on host

---------

Co-authored-by: uvos <devnull@uvos.xyz>

* llama : refactor llm_build_delta_net_base API

---------

Co-authored-by: Aman Gupta <amangupta052@gmail.com>
Co-authored-by: Paul Flynn <paul@arkavo.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Oliver Simons <osimons@nvidia.com>
Co-authored-by: uvos <devnull@uvos.xyz>
2026-03-16 13:10:15 +02:00
Richard Davison e4021d4071 ggml : add NVFP4 quantization type support (llama/19769)
* WIP: add NVFP4 quantization support

* tests

* improve NVFP4 dot product implementation performance and fix bad super call

* typo

* Use nvfp4 kvalues

* vulkan : fix NVFP4 shader compilation by including kvalues_mxfp4 lookup table

* vulkan and perf fixes

* wip

* Fix metal

* fix vulkan

* Rename threshold & fix wrong scale

* Fix MOE

* Shelf backend implementations (CUDA, Metal, Vulkan, arch-specific SIMD)

Remove NVFP4 support from GPU backends and architecture-specific
optimized dot products. These should be added in separate PRs so
backend specialists can review them independently.

Reverted files:
- ggml-cuda: common.cuh, convert.cu, mmq.cu/cuh, mmvq.cu, vecdotq.cuh,
  quantize.cu/cuh, mma.cuh, ggml-cuda.cu, fattn-tile.cuh
- ggml-metal: ggml-metal.metal, ggml-metal-device.cpp, ggml-metal-impl.h,
  ggml-metal-ops.cpp
- ggml-vulkan: ggml-vulkan.cpp, all vulkan-shaders/*
- ggml-cpu arch: arm/quants.c, x86/quants.c, powerpc/quants.c, s390/quants.c

Core NVFP4 support (type definition, CPU fallback dot product,
quantization, dequantization, conversion) is retained.

* Fix arch-fallback.h: add NVFP4 generic fallback for all platforms

After shelving backend-specific SIMD implementations, the generic
CPU dot product needs to be aliased on ARM, x86, PowerPC, and s390
platforms that previously relied on arch-specific versions.

* quantize: add NVFP4 as a quantization type option

* Fix ggml_fp32_to_ue4m3: handle subnormal values (see the sketch after this entry)

Previously, values with ue4m3_exp <= 0 were clamped to 0, causing
all small scales to underflow. This made NVFP4 quantization via
llama-quantize produce garbage (PPL = 5.8M) since typical transformer
weights have amax/6.0 in the range 0.001-0.01, which falls in the
UE4M3 subnormal range.

Now subnormals are properly encoded as man * 2^-9 (exp=0, man=1..7),
matching the decode path in ggml_ue4m3_to_fp32.

Result: NVFP4 requantization now produces PPL = 15.25 (vs F16 = 14.33),
comparable to Q4_1 (PPL = 15.81) at slightly lower BPW (4.70 vs 5.15).

* Restore ARM NEON NVFP4 dot product implementation

Restores the optimized ggml_vec_dot_nvfp4_q8_0 for ARM NEON using
vqtbl1q_s8 lookup and ggml_vdotq_s32 dot products.

tg128 performance: 4.37 t/s (generic) -> 13.66 t/s (NEON) = 3.1x speedup

* Optimize ARM NEON NVFP4 dot product: LUT + vpaddq + vfmaq

- Add ue4m3_scale_lut[128] to ggml-common.h replacing branch-heavy
  ggml_ue4m3_to_fp32() in the hot loop
- Use vpaddq_s32 for pairwise int32 reduction instead of vaddvq_s32
- Accumulate with vfmaq_f32 into float32x4_t vector accumulators

tg128: 8.1 -> 31.0 t/s (3.8x speedup, 77% of Q4_1 speed)

* ARM NEON NVFP4: rearrange q8 to match nibble layout

Alternative approach: rearrange q8 data to match the NVFP4 lo/hi
nibble layout instead of rearranging the looked-up NVFP4 values.
Eliminates vcombine_s8(vget_low, vget_low) shuffles.

Performance is equivalent (~18.5 t/s) - the bottleneck is the 2x
block overhead from QK=16 vs QK=32, not the shuffle instructions.

* CPU only backend 64 super-block layout

* cleanup

* Remove unused LUT

* int

* exclude NVFP4 from unsupported ops in metal build

* remove quantization for now

* store scales as native UE4M3, preserve original model bits when possible

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* correct comment

* format

* reduce duplication and cleanup

* Address comments

* move detection to prepare_tensors

* Use math instead of const

* Move

* fix comment

* Shelf quantize tests

* Rebase and move check

* cleanup

* lint

* Update gguf-py/gguf/scripts/gguf_convert_endian.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Use fallback quant config

* Simplify

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* organize

* Refactor

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* add quantize_nvfp4 (required for test_quants.py)

* add quantize_nvfp4 (required for test_quants.py)

* add quantize_nvfp4 (required for test_quants.py)

* fix return type

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-03-16 13:10:15 +02:00
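A note on the UE4M3 subnormal fix above: the sketch below shows encode/decode
consistent with the stated decode value of man * 2^-9. The exponent bias of 7
and the bit layout (exponent in bits 3..6, mantissa in bits 0..2) are
assumptions for illustration; this is not the actual ggml code, and max/NaN
handling is not modelled.

```
#include <cmath>
#include <cstdint>
#include <cstdio>

// UE4M3: unsigned FP8 with 4 exponent bits and 3 mantissa bits (assumed bias 7).
static float ue4m3_to_fp32(uint8_t v) {
    const int exp = (v >> 3) & 0x0F;
    const int man =  v       & 0x07;
    if (exp == 0) {
        return man * std::ldexp(1.0f, -9);                  // subnormal: man * 2^-9
    }
    return (1.0f + man / 8.0f) * std::ldexp(1.0f, exp - 7); // normal
}

static uint8_t fp32_to_ue4m3(float x) {
    if (x <= 0.0f) {
        return 0;
    }
    int e;
    const float m = std::frexp(x, &e);   // x = m * 2^e with m in [0.5, 1)
    int ue_exp = e + 6;                  // biased exponent for the 1.mmm form
    if (ue_exp <= 0) {
        // Subnormal range: encode as round(x / 2^-9) instead of clamping to 0,
        // which is the fix described above for small scales (~0.001-0.01).
        int man = (int) std::lround(x * 512.0f); // 512 = 2^9
        if (man > 7) man = 7;            // (a fuller encoder would round up to the smallest normal)
        return (uint8_t) man;
    }
    if (ue_exp > 15) ue_exp = 15;        // saturate at the top of the range (sketch only)
    int man = (int) std::lround((m * 2.0f - 1.0f) * 8.0f);
    if (man == 8) { man = 0; if (ue_exp < 15) ++ue_exp; }   // carry mantissa rounding
    return (uint8_t) ((ue_exp << 3) | man);
}

int main() {
    const float samples[] = {0.004f, 0.01f, 0.5f, 1.0f};    // typical amax/6 scale values
    for (float s : samples) {
        const uint8_t q = fp32_to_ue4m3(s);
        std::printf("%.6f -> 0x%02x -> %.6f\n", s, q, ue4m3_to_fp32(q));
    }
    return 0;
}
```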
Daniel Bevenius 5d3a5447c8 llama : add support for Nemotron 3 Super (llama/20411)
* llama : add support for Nemotron 3 Super

This commit adds support for the Nemotron 3 Super model (120B.A12B)
enabling this model to be converted to GGUF format and run in llama.cpp.

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Matt Clayton <156335168+mattjcly@users.noreply.github.com>
2026-03-16 13:10:15 +02:00
Georgi Gerganov e2aa5c73f3 metal : fix capture_compute counter logic (llama/20410) 2026-03-16 13:10:15 +02:00
Georgi Gerganov 0e1e76f93b metal : fix q5_k mul_mv register spill (llama/20399) 2026-03-16 13:10:15 +02:00
Georgi Gerganov c2e384f21e metal : add env var to trigger graph capture (llama/20398) 2026-03-16 13:10:15 +02:00
uvos 8b335550cf ggml-cuda: gdn use shared mem for HIP (llama/20366)
Suggested-by: Aman Gupta <amangupta052@gmail.com>
2026-03-16 13:10:15 +02:00
uvos 7c9a16c565 cuda/hip: fix loop unrolling in ssm-conv (llama/20369) 2026-03-16 13:10:15 +02:00
Neo Zhang 286387ef0a fix op rope, add rope_back (llama/20293) 2026-03-16 13:10:15 +02:00
Neo Zhang 72c7a2532d fix for failed UT cases: ACC, L2_NORM, UPSCALE, fused_glu, unary (llama/20283) 2026-03-16 13:10:15 +02:00
Georgi Gerganov 1e05b10d67 ggml : bump RPC version (llama/20330) 2026-03-16 13:10:15 +02:00
Reese Levine fddedc5cbc ggml webgpu: faster normal quant and some k-quant matrix operations, better shader parameter handling (llama/20173)
* K quant speedup (llama/20)

* Basic JIT compilation for mul_mat, get_rows, and scale (llama/17)

* scale jit working

* preliminary working jit for getrows and mulmat, needs refining

* simplified mul_mat preprocessing switch statement

* get_rows fixes, mul_mat refinement

* formatted + last edits

* removed some extraneous prints

* fixed get_rows, fixed workgroup dispatch in mul_mat. no gibberish

* small fix

* some changes, working

* get_rows and mul_mat jit fixed and working

* Update formatting

* formatting

* Add header

---------

Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>
Co-authored-by: Reese Levine <reeselevine1@gmail.com>

* Start work on all-encompassing shader library

* refactor argmax, set_rows

* Refactor all but flashattention, mat mul

* no gibberish, all k quants added, merged

* vec memory fix

* q6_k matching metal on my machine, tests passing

* Set tile size for q6_k separately

* Separate out fast shaders

---------

Co-authored-by: neha-ha <137219201+neha-ha@users.noreply.github.com>

* Move towards writeBuffer for params

* Move away from multiple buffers for set_rows errors, remove host buffer for parameter buffers, minor cleanups

* Remove extra file

* Formatting

---------

Co-authored-by: neha-ha <137219201+neha-ha@users.noreply.github.com>
2026-03-16 13:10:15 +02:00
Charles Xu dfa6858d02 kleidiai : support for concurrent sme and neon kernel execution (llama/20070) 2026-03-16 13:10:15 +02:00
Taimur Ahmad bd64b8af4d ggml-cpu: add RVV repack GEMM and GEMV for quantization types (llama/19121)
* ggml-cpu: add rvv ggml_quantize_mat_4x8 for q8_0

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: add rvv repacking for iq4_nl

* ggml-cpu: add generic impl for iq4_nl gemm/gemv

* ggml-cpu: add rvv repacking for q8_0

* ggml-cpu: refactor; add rvv repacking for q4_0, q4_K

* ggml-cpu: refactor; add rvv repacking for q2_K

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: refactor rvv repack

---------

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>
2026-03-16 13:10:15 +02:00
Julian Pscheid cabe3d95f4 metal: handle command buffer failures gracefully in synchronize (llama/20306)
Replace GGML_ABORT("fatal error") in ggml_metal_synchronize() with
error flag + return. This aligns synchronize error handling with
graph_compute, which already returns GGML_STATUS_FAILED for the same
condition.

When a command buffer fails (e.g., iOS GPU access revocation during
backgrounding, macOS eGPU disconnect, OOM), the backend enters an
error state instead of killing the host process. Subsequent
graph_compute calls return GGML_STATUS_FAILED immediately. Recovery
requires recreating the backend.

Failed extra command buffers are properly released on the error path
to avoid Metal object leaks.
2026-03-16 13:10:15 +02:00
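The error-handling pattern described above (trade an abort for a sticky error
flag that later calls check) can be sketched in a few lines. This is
illustrative C++ with stand-in names; compute_status / cb_error here stand in
for ggml's GGML_STATUS_* values and the backend's internal flag, not the actual
ggml-metal code.

```
#include <cstdio>

enum compute_status { COMPUTE_OK = 0, COMPUTE_FAILED = 1 }; // stand-in names

struct backend_ctx {
    bool cb_error = false; // sticky: set once a command buffer fails
};

// Instead of aborting on a failed command buffer, record the error and return.
static compute_status backend_synchronize(backend_ctx & ctx, bool cmd_buffer_failed) {
    if (cmd_buffer_failed) {
        ctx.cb_error = true;   // e.g. GPU access revoked, eGPU unplugged, OOM
        return COMPUTE_FAILED; // caller decides what to do; process stays alive
    }
    return COMPUTE_OK;
}

// Later graph computations bail out immediately while the flag is set;
// recovery requires recreating the backend (and thus a fresh context).
static compute_status backend_graph_compute(backend_ctx & ctx) {
    if (ctx.cb_error) {
        return COMPUTE_FAILED;
    }
    // ... encode and run the graph ...
    return COMPUTE_OK;
}

int main() {
    backend_ctx ctx;
    backend_synchronize(ctx, /*cmd_buffer_failed=*/true);
    std::printf("graph_compute after failure: %d\n", backend_graph_compute(ctx));
    return 0;
}
```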
Paul Flynn ae21974f4f metal : extend mul_mv_ext to BF16, Q2_K, Q3_K (llama/20250)
Enable mul_mv_ext small-batch kernels (BS 2-8) for BF16, Q2_K,
and Q3_K quantization types. These types previously fell through
to the slower single-row mul_mv path.

BF16 uses the float4 dequantize path (like F16). Q2_K and Q3_K
use the float4x4 K-quant path (like Q4_K/Q5_K/Q6_K).

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 13:10:15 +02:00
Georgi Gerganov d19c65e9da metal : add upscale (llama/20284) 2026-03-16 13:10:15 +02:00
Aman Gupta 3984ae384d ggml-cuda: disable gdn for musa (llama/20278) 2026-03-16 13:10:15 +02:00
Bertay Eren 65dbf3c31a ggml-vulkan: add SGN operator, auto-generate Vulkan.csv and ops.md (llama/20219) 2026-03-16 13:10:15 +02:00
Ruben Ortlam 890c047e30 vulkan: skip zero size tensors in backend copies (llama/20233) 2026-03-16 13:10:15 +02:00
Michael Huang f099ed27b8 cuda : display total and free VRAM capacity during device initialization (llama/20185) 2026-03-16 13:10:15 +02:00
GiantPrince 8d97f59639 ggml-vulkan: Add ELU op support (llama/20183)
* ggml-Vulkan: add ELU support

* ggml-Vulkan: remove extra spaces and variables

* ggml-Vulkan: fix format issue

* ggml-Vulkan: fix format issue

* fix whitespace issue

* Update Vulkan.csv and ops.md
2026-03-16 13:10:15 +02:00
Jeff Bolz 4b0653a792 vulkan: Fix data races in coopmat1 mul_mat(_id) (llama/20084)
* vulkan: Fix data races in coopmat1 mul_mat(_id)

Add barriers between coopmat store and regular loads. We sort of got away with
this because it was the same subgroup accessing the values, but it's still a
race and may not work.

* switch to subgroup control barriers
2026-03-16 13:10:15 +02:00
Neo Zhang 8a9b0ba1df support Flash Attention for fp32/fp16/Q4/Q5/Q8 (llama/20190)
* support flash-attention for fp32/fp16/Q4/Q5/Q8

* rm warning

* update for JIT
2026-03-16 13:10:15 +02:00
Aman Gupta 49489bfbd1 ggml: add GATED_DELTA_NET op (llama/19504)
* ggml: add GATED_DELTA_NET op

* remove the transpose

* add KDA

* add qwen35 dense

* llama : check for fused gated delta net backend support

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-03-16 13:10:15 +02:00
lhez 910034df28 opencl: add l2_norm (llama/20160) 2026-03-16 13:10:15 +02:00
Bartowski 6e063fae5a quants : Add memsets and other fixes for IQ quants (llama/19861)
* Add memsets and other fixes for IQ quants

* Make memset unconditional, change Laux back to L

* Move another memset
2026-03-16 13:10:15 +02:00
Todor Boinovski 78b3801d54 hexagon: add f32 ssm_conv op (llama/20122)
* hexagon: add ssm_conv op

* hexagon: hvx kernel is functional

* hexagon: improvements to ssm-conv hvx kernel

* hexagon: added dma to ssm-conv hvx kernel

* hexagon: ssm-conv dynamically compute gather scratchpad

* hex-ssm-conv: add local context and fix various issues (spad indexing, etc)

---------

Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
2026-03-16 13:10:15 +02:00
Max Krasnyansky 247ec204d8 cpu: skip redundant ROPE cache updates (llama/20149) 2026-03-16 13:10:15 +02:00
Aman Gupta d658720fa5 ggml-cuda: add mem check for fusion (llama/19916)
* ggml-cuda: add mem check for fusion

* Replace NaNs with -FLT_MAX

* fix typo

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-03-16 13:10:15 +02:00
Aaron Teo 5d9b73dc06 ggml: update comments for backends which have no memory to report (llama/20157)
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2026-03-16 13:10:15 +02:00
shalinib-ibm 548f2e5190 ggml-cpu: Fix gcc 15 ICE on ppc64le (ggml/20083) (llama/20130)
This patch addresses an Internal Compiler Error (Segmentation fault)
observed with GCC 15 by replacing the intrinsic + cast with a cast on
the data first, followed by the intrinsic call. This bypasses the
buggy compiler path while maintaining identical instruction selection.

Performance Verification:
Assembly analysis on RHEL 9 (GCC 15.1.1) confirms that both the original
code and this fix generate the identical Power10 prefixed load instruction:
    `plxv 40, 2(14)`

This ensures zero performance regression while unblocking builds on
newer toolchains.

Reproduced on:
- Alpine Linux + GCC 15.2.0-r2
- RHEL 9  + GCC 15.1.1 (gcc-toolset-15)

Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>
2026-03-16 13:10:15 +02:00
Aman Gupta d2d235f467 CUDA: use shared mem for ssm_conv (llama/20128)
* CUDA: use shared mem for ssm_conv

* fuse silu + ssm_conv

* fuse unary + mul

* enable for fp16

* formatting

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-03-16 13:10:15 +02:00
Johannes Gäßler 596b655dbd ggml-cpu: fix data race for debug asserts (llama/20148) 2026-03-16 13:10:15 +02:00
lhez 1d94b0be4f opencl: add neg, exp and diag (llama/20127)
* opencl: add `neg`

* opencl: add `exp`

* opencl: add `diag`
2026-03-16 13:10:15 +02:00
YardenTal44 f56fb1be3b hexagon: add fp16 support for binary ops: add,sub,mul,div (llama/20139)
* hexagon: add fp16 support for binary ops: add,sub,mul,div

* hexagon: fix test-backend-ops failures for fp16 binary ops on older arches (<v79)

* hexagon: decide on n_threads (aka n_jobs) early to avoid overallocating scratchpad

* snapdragon: fix readme link

---------

Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
2026-03-16 13:10:15 +02:00
Andreas Kieslinger 51f397c1af CUDA: Improve performance via less synchronizations between token (llama/17795)
* Adds CPU-to-CUDA copy capability to
ggml_backend_cuda_cpy_tensor_async()

* Adds function to relax sync requirements between input copies on
supported backends (CUDA for now)

* Exchanges synchronous copy with async copy function.

* Adds macro guards to allow compilation in non-CUDA builds

* Reworked backend detection in ggml-backend.cpp to avoid linking
conflicts

* Relax requirement of checks in async CUDA copies from backend and buffer type to just buffer type, to avoid linking issues

* Minor cleanup

* Makes opt-in to relax use of explicit syncs more general. Backends like
vulkan which require a synchronization between HtoD copies and graph
execution could also adopt this change now.

* Reintroduces stricter check for CPU->CUDA backend async copy via
GGML_DEVICE_TYPE_CPU.

* Corrects initialization of ggml_backend_sync_mode in
ggml_backend_sched_split initialization

* Simplifies synchronizations to adhere to `saaasg` pattern.

* Apply suggestion from @ggerganov (src->buffer to buf_src)

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Apply suggestion from @ggerganov (src->buffer to buf_src) v2

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-03-16 13:10:15 +02:00
Marcel Petrick 67abc63e9d chore : correct typos [no ci] (llama/20041)
* fix(docs): correct typos found during code review

Non-functional changes only:
- Fixed minor spelling mistakes in comments
- Corrected typos in user-facing strings
- No variables, logic, or functional code was modified.

Signed-off-by: Marcel Petrick <mail@marcelpetrick.it>

* Update docs/backend/CANN.md

Co-authored-by: Aaron Teo <taronaeo@gmail.com>

* Revert "Auxiliary commit to revert individual files from 846d1c301281178efbc6ce6060ad34c1ebe45af8"

This reverts commit 02fcf0c7db661d5ff3eff96b2b2db9fdb7213256.

* Update tests/test-backend-ops.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update tests/test-backend-ops.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Signed-off-by: Marcel Petrick <mail@marcelpetrick.it>
Co-authored-by: Aaron Teo <taronaeo@gmail.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-03-16 13:10:15 +02:00
Max Krasnyansky 2e79b85f66 hexagon: Flash Attention optimizations (dma, mpyacc, multi-row) and MatMul updates (llama/20118)
* ggml-hexagon: enhance hvx_dot_f16_f16_aa_rx4 for improved performance by expanding vector handling and optimizing accumulation

# Conflicts:
#	ggml/src/ggml-hexagon/htp/flash-attn-ops.c

* ggml-hexagon: optimize hvx_dot_f16_f16_aa_rx4 and enhance hvx_vec_reduce_sum_f32x4 for improved performance and reduced complexity

* ggml-hexagon: add hvx_dot_f16_f16_aa_rx32 for enhanced vector processing in flash attention

# Conflicts:
#	ggml/src/ggml-hexagon/htp/flash-attn-ops.c

* optimize hvx_dot_f16_f16_aa_rx4 and hvx_dot_f16_f16_aa_rx32 by removing unused scale parameter and improving vector accumulation

# Conflicts:
#	ggml/src/ggml-hexagon/htp/flash-attn-ops.c

* ggml-hexagon: refactor hvx_dot_f16_f16_aa_rx4 for improved readability and return HVX_Vector for better integration

# Conflicts:
#	ggml/src/ggml-hexagon/htp/flash-attn-ops.c

* ggml-hexagon: initialize sums variable in hvx_dot_f16_f16_aa_rx32 for clarity

* ggml-hexagon: fix compiling error

* fix hvx_dot_f16_f16_aa_rx4 to handle leftover elements correctly using masking

* refactor hvx_dot_f16_f16_aa_rx4 to accept vector and leftover element counts as parameters for improved clarity and flexibility

* wip

* fa: instrumentation and dma reordering

* hex-fa: use block-size 64 to improve DMA pipelining

* hex-fa: optimize vec-dot for v79 and above

* hex-fa: use block size 64

* hex-fa: avoid scalar fp32->fp16 conversions

* hex-fa: simplify dot_f16 functions using optimized vec_mpyacc

* hex-fa: rewrite mad_f32_f16 using hvx_vec_mpyacc

* hex-mm: use mpyacc in matmul dot functions

---------

Co-authored-by: chraac <chraac@gmail.com>
2026-03-16 13:10:15 +02:00
lhez 2c50962528 opencl: add `SET`, support i32 for `CPY`, minor refactor for cpy (llama/20101) 2026-03-16 13:10:15 +02:00
Nikhil Jain 4834971a4f Fix wait logic for inflight jobs (llama/20096)
* Enable tmate debugging for investigating thread safety issue

* Refactor wait and submit to operate on vector<wgpu::FutureWaitInfo>, and fix wait to delete only the future that is completed.

* Cleanup

* Remove clear change and run clang-format

* Cleanup
2026-03-16 13:10:15 +02:00
Masashi Yoshimura 8d78d40946 Add concat op to webgpu. (llama/20068) 2026-03-16 13:10:15 +02:00
Johannes Gäßler 5d25427e58 ggml: fix ggml_is_contiguous_n for ne == 1 (llama/20092) 2026-03-16 13:10:15 +02:00
Adrien Gallouët b1b018dfd1 ggml : use a simple std::thread in AMX without OpenMP (llama/20074)
Disabling OpenMP generally provides better inference performance (at
least in my testing), but loading becomes slightly slower (see the sketch after this entry).

Benchmark results for `convert_B_packed_format()`:

Before this commit:

         N      K |  No OpenMP     OpenMP |    Diff |  Speedup
    ------------------------------------------------------------
       512   2880 |    640.9us    263.5us |  -58.9% |    0.41x
      2880   4096 |     2.55ms    261.7us |  -89.8% |    0.10x
    201088   2880 |   256.44ms    21.61ms |  -91.6% |    0.08x
    ------------------------------------------------------------

    Total: 325.43ms vs 31.05ms

After:

         N      K |  No OpenMP     OpenMP |    Diff |  Speedup
    ------------------------------------------------------------
       512   2880 |     1.49ms    263.5us |  -82.3% |    0.18x
      2880   4096 |     1.55ms    261.7us |  -83.1% |    0.17x
    201088   2880 |    24.03ms    21.61ms |  -10.1% |    0.90x
    ------------------------------------------------------------

    Total: 78.97ms vs 31.05ms

Tested with unsloth/gpt-oss-20b-GGUF:Q4_K_M.

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-03-16 13:10:15 +02:00
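A minimal sketch of the "plain std::thread instead of OpenMP" pattern used
above (illustrative only; convert_rows is a stand-in for the real per-row AMX
packing work): split the row range evenly across hardware threads and join.

```
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

// Stand-in for the per-row packing/conversion work.
static void convert_rows(int64_t first, int64_t last, std::vector<float> & dst) {
    for (int64_t i = first; i < last; ++i) {
        dst[(size_t) i] = (float) i * 0.5f;
    }
}

// Split [0, n) evenly across std::thread workers instead of an OpenMP parallel-for.
static void parallel_convert(std::vector<float> & dst) {
    const int64_t  n   = (int64_t) dst.size();
    const unsigned nth = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nth; ++t) {
        const int64_t first = n * (int64_t) t       / nth;
        const int64_t last  = n * (int64_t) (t + 1) / nth;
        workers.emplace_back(convert_rows, first, last, std::ref(dst));
    }
    for (std::thread & w : workers) {
        w.join();
    }
}

int main() {
    std::vector<float> dst(1 << 20);
    parallel_convert(dst);
    std::printf("dst[12345] = %f\n", (double) dst[12345]);
    return 0;
}
```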
Charles Xu 169d723fa0 kleidiai : add sme fp16 compute path for q4_0 gemm on aarch64 (llama/20043) 2026-03-16 13:10:15 +02:00
shaofeiqi 3a96680718 opencl: add optimized q4_1 mm kernel for adreno (llama/19840)
* Add Q4_1 OpenCL Kernels

* opencl: refactor transpose

* opencl: format

* opencl: refactor q4_1 unpack

* opencl: move `ggml_cl_mul_mat_q4_1_f32_adreno`

* opencl: refactor `ggml_cl_mul_mat_q4_1_f32_adreno` and kernels

* opencl: rename kernel files and kernels

* opencl: fix build for non adreno

* opencl: move code around and format

---------

Co-authored-by: Li He <lih@qti.qualcomm.com>
2026-03-16 13:10:15 +02:00
Abhijit Ramesh 3145384715 ggml webgpu: fix workgroup dispatch limit for large batch sizes (llama/19965)
* ggml-webgpu: fix workgroup dispatch limit for large batch sizes

WebGPU limits the workgroup count to 65535 per dispatch dimension. Large MUL_MAT
operations with batch sizes exceeding this limit would fail (see the sketch after this entry).

* add compute_2d_workgroups() helper to split total workgroup ID across
X/Y dimensions

* update mul_mat_reg_tile.wgsl to reconstruct linear workgroup ID from 2D
   dispatch

* update mul_mat_subgroup_matrix.wgsl to reconstruct linear workgroup ID
  from 2D dispatch

* update mul_mat.wgsl to compute global index from 2D workgroup
  coordinates

* refactor all three mul_mat dispatch paths to use the shared helper

* ggml-webgpu: add bounds checking for over-dispatched workgroups

2D workgroup dispatch can over-dispatch when total workgroups don't
divide evenly into the 65535 per-dimension limit. Extra workgroups
would compute invalid batch indices, causing memory corruption.

* add batch_idx bound check to mul_mat_reg_tile.wgsl and
mul_mat_subgroup_matrix.wgsl to prevent over-dispatched workgroups
from accessing invalid memory

* fixes test failures with large batch sizes (e.g., bs=[128, 1024])

* ggml-webgpu: add back TODO for splitting large sizes into batches

* Optimize 2d workgroup provisioning

* Set some parameters that increase speed

---------

Co-authored-by: Reese Levine <reeselevine1@gmail.com>
2026-03-16 13:10:15 +02:00
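The dispatch-splitting arithmetic described above is easy to sketch on the host
side. This is illustrative C++ (not the actual WebGPU backend code; the helper
name mirrors the compute_2d_workgroups() mentioned above): split a total
workgroup count that exceeds the 65535 per-dimension limit across X and Y, then
have each "shader invocation" rebuild the linear ID and bounds-check it.

```
#include <cstdint>
#include <cstdio>

// Per-dimension dispatch limit in WebGPU (maxComputeWorkgroupsPerDimension).
static constexpr uint32_t WG_LIMIT = 65535;

struct dispatch_2d { uint32_t x, y; };

// Split `total` workgroups into an x*y grid where both dimensions stay under
// the limit. x*y may exceed `total`, so the shader side must bounds-check.
static dispatch_2d compute_2d_workgroups(uint32_t total) {
    if (total <= WG_LIMIT) return { total, 1 };
    const uint32_t y = (total + WG_LIMIT - 1) / WG_LIMIT; // ceil(total / limit)
    const uint32_t x = (total + y - 1) / y;               // ceil(total / y), <= limit
    return { x, y };
}

int main() {
    const uint32_t total = 128 * 1024;              // e.g. a large batched mul_mat
    const dispatch_2d d  = compute_2d_workgroups(total);
    std::printf("dispatch %u x %u (= %u >= %u)\n", d.x, d.y, d.x * d.y, total);

    // What the shader side would do with its 2D workgroup id:
    uint32_t covered = 0;
    for (uint32_t wy = 0; wy < d.y; ++wy) {
        for (uint32_t wx = 0; wx < d.x; ++wx) {
            const uint32_t linear = wy * d.x + wx;  // reconstruct linear workgroup id
            if (linear >= total) continue;          // over-dispatched -> skip
            ++covered;
        }
    }
    std::printf("covered %u of %u workgroups exactly once\n", covered, total);
    return 0;
}
```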
Nikhil Jain 22034a5f6f ggml webgpu: Clean up per-thread parameter buffer pool and job submission logic (llama/19772)
* Allow webgpu_buf_pool to resize if needed, remove inflight_threads, and use num_kernels for submission instead

* Run clang-format

* Keep track of num batched kernels that have not been submitted yet

* Run clang-format

* Increase buf pool max size

* Increase param buf pool init size

* Remove webgpu buf pool resizing

* Merge with master

* Add buffer pool growth

* Move buffer pool growth outside of lock

* Reduce max pool size to 32

* Run clang-format

* Only resize param buf pool
2026-03-16 13:10:15 +02:00
Masashi Yoshimura de686fafad ggml-webgpu: Support non-contiguous `src0` and overlapping `src0/src1` in binary ops (llama/19850)
* ggml-webgpu: Add binary op support for overlapping and non-contiguous.

* Add newline to binary.wgsl

* Append a binary op test for overlapping src to test_bin_bcast.

* Remove unnecessary newline.
2026-03-16 13:10:15 +02:00
Ruben Ortlam 923a292429 vulkan: tune MMVQ for Intel Windows (llama/19988) 2026-03-16 13:10:15 +02:00
Aaron Teo e2be9edd5a ggml-cpu: optimise s390x multiply extend instructions (llama/20032) 2026-03-16 13:10:15 +02:00
Ruben Ortlam 2a9649c420 vulkan: improve partial offloading performance on AMD (llama/19976)
* vulkan: fix and enable cpy_tensor_async function

* use transfer_queue for async transfers on AMD, synchronize with timeline semaphore

* update offload_op logic

* fix missing transfer submission

* disable async transfer queue on AMD GCN

* revert op batch size change

* fix cpy_tensor_async checks
2026-03-16 13:10:15 +02:00
oobabooga ca3f6bbd3c cuda: cap grid.y at 65535 in non-contiguous dequantize/convert kernels (llama/19999) 2026-03-16 13:10:15 +02:00
Jayant Lohia 699eaf3a10 CUDA: add CDNA3 MFMA support for flash attention MMA kernel (llama/19806)
* CUDA: add CDNA3 MFMA support for flash attention MMA kernel

Add MI300X (gfx942) MFMA tensor core flash attention using
v_mfma_f32_16x16x16_f16 (FP16 in, FP32 accumulate).

- Add FATTN_WARP_SIZE=64 for CDNA wavefront64
- Add CDNA config for head sizes 64, 80, 96, 112, 128
- Add FP16 MFMA intrinsic path in mma.cuh
- Add manual V transpose load for MFMA register layout
- Route CDNA to MMA for prompt processing, VEC for token generation
- Fix Q loading and combine stride granularity for non-power-of-2 heads

Benchmarks (Qwen2.5-1.5B Q4_K_M, MI300X):
  pp512  +7%,  pp1024 +13%,  pp2048 +23%,  pp4096 +39%
  tg128  -10% (FA overhead, VEC used for both)

All 2480 flash attention tests pass.

Ref: https://github.com/ggml-org/llama.cpp/issues/17917

* address review: replace FATTN_WARP_SIZE with constexpr, improve dispatch

- Replace #define FATTN_WARP_SIZE with constexpr int warp_size =
  ggml_cuda_get_physical_warp_size() in each device function
- Use ne[1]*gqa_ratio threshold for MMA vs tile dispatch. Benchmarked
  crossover on MI300X @ d32768 with power-of-2 GQA models:
    hsk=64  (Llama 1B, gqa=4): MMA wins at eff >= 128 (+11%)
    hsk=128 (Llama 3B, gqa=4): MMA wins at eff >= 128 (+4%)
  Unified threshold: eff_nq >= 128 for all head sizes.
- Remove VEC fallback; small batches fall through to tile kernel

* Update ggml/src/ggml-cuda/fattn.cu

* use ggml_cuda_info().devices warp_size instead of hardcoded check

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-03-16 13:10:15 +02:00
Aman Gupta b524b5a1f0 ggml-cpu: add repack for mxfp4 (llama/19738) 2026-03-16 13:10:15 +02:00
KITAITI Makoto 30c5194c96
ruby : null-check (#3689)
* Introduce null-check to prevent SEGV

* Fix error message
2026-03-05 14:36:42 +09:00
Georgi Gerganov 9453b4b9be gguf : sync (ggml/0) 2026-02-27 20:57:58 +02:00
Georgi Gerganov aaf8bdf3b8 scripts : sync gguf 2026-02-27 20:57:58 +02:00
Georgi Gerganov 84f8db71d8 talk-llama : sync llama.cpp 2026-02-27 20:57:58 +02:00
Georgi Gerganov 4734056067 sync : ggml 2026-02-27 20:57:58 +02:00
Neo Zhang 64f48603e6 replace the magic number 768 with max work group size to support iGPU (llama/19920)
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2026-02-27 20:57:58 +02:00
Vishal Singh 9c1fd5cc6e ggml-zendnn: update code for latest ZenDNN API (llama/19923)
- adapt ggml-zendnn.cpp to the new lowoha::matmul interface
- update the ZenDNN git tag in CMake to the latest release (ZenDNN‑2026‑WW08)
- add static lib support in CMake
2026-02-27 20:57:58 +02:00
Adrien Gallouët 316d921c1a ggml : fix AMX and add batched support (llama/19925)
llama-perplexity -hf ggml-org/Qwen3-0.6B-GGUF:Q4_0 -f wikitext-2-raw/wiki.test.raw -c 2048 -b 2048 --chunks 2

before this commit:

```
perplexity: calculating perplexity over 2 chunks, n_ctx=2048, batch_size=2048, n_seq=1
perplexity: 2.31 seconds per pass - ETA 0.07 minutes
[1]17.3868,[2]22.2199,
Final estimate: PPL = 22.2199 +/- 1.59692

llama_perf_context_print:        load time =     878.56 ms
llama_perf_context_print: prompt eval time =    2037.82 ms /  4096 tokens (    0.50 ms per token,  2009.99 tokens per second)
llama_perf_context_print:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_perf_context_print:       total time =    6403.17 ms /  4097 tokens
llama_perf_context_print:    graphs reused =          0
llama_memory_breakdown_print: | memory breakdown [MiB] | total   free    self   model   context   compute    unaccounted |
llama_memory_breakdown_print: |   - Host               |                  845 =   318 +     224 +     302                |
llama_memory_breakdown_print: |   - CPU_REPACK         |                  288 =   288 +       0 +       0                |
llama_memory_breakdown_print: |   - AMX                |                   31 =    31 +       0 +       0                |
```

after this commit:

```
perplexity: calculating perplexity over 2 chunks, n_ctx=2048, batch_size=2048, n_seq=1
perplexity: 1.98 seconds per pass - ETA 0.05 minutes
[1]17.2005,[2]21.8220,
Final estimate: PPL = 21.8220 +/- 1.56485

llama_perf_context_print:        load time =     719.23 ms
llama_perf_context_print: prompt eval time =    1676.23 ms /  4096 tokens (    0.41 ms per token,  2443.58 tokens per second)
llama_perf_context_print:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_perf_context_print:       total time =    4258.74 ms /  4097 tokens
llama_perf_context_print:    graphs reused =          0
llama_memory_breakdown_print: | memory breakdown [MiB] | total   free    self   model   context   compute    unaccounted |
llama_memory_breakdown_print: |   - Host               |                  845 =   318 +     224 +     302                |
llama_memory_breakdown_print: |   - AMX                |                  319 =   319 +       0 +       0                |
```
(no more CPU_REPACK)

after this commit, disabling amx:

```
perplexity: calculating perplexity over 2 chunks, n_ctx=2048, batch_size=2048, n_seq=1
perplexity: 2.34 seconds per pass - ETA 0.07 minutes
[1]17.2005,[2]21.8220,
Final estimate: PPL = 21.8220 +/- 1.56485

llama_perf_context_print:        load time =     841.91 ms
llama_perf_context_print: prompt eval time =    2057.28 ms /  4096 tokens (    0.50 ms per token,  1990.98 tokens per second)
llama_perf_context_print:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_perf_context_print:       total time =    6454.51 ms /  4097 tokens
llama_perf_context_print:    graphs reused =          0
llama_memory_breakdown_print: | memory breakdown [MiB] | total   free    self   model   context   compute    unaccounted |
llama_memory_breakdown_print: |   - Host               |                  845 =   318 +     224 +     302                |
llama_memory_breakdown_print: |   - CPU_REPACK         |                  319 =   319 +       0 +       0                |
```
=> same perplexity.

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-02-27 20:57:58 +02:00
Ruben Ortlam e722ee1bf5 vulkan: fix fp16 Flash Attention on Windows AMD RDNA2 and below (llama/19921) 2026-02-27 20:57:58 +02:00
Kevin Pouget f877e1b202 ggml-virtgpu: improve the reliability of the code (llama/19846)
* ggml-virtgpu-backend: validate the consistency of the received objects

This patch adds consistency checks in the
ggml-virtgpu-backend (running on the host side) to ensure that the
data received from the guest is consistent (valid pointers, valid
sizes and offsets).

* ggml-virtgpu-backend: add fallback/skips for optional ggml backend methods

```
  1. bck->iface.synchronize(bck)
  2. buft->iface.get_alloc_size(buft, op)
  3. buft->iface.get_max_size(buft)
```

these three methods are optional in the GGML interface. `get_max_size`
was already properly defaulted, but `backend synchronize` and `buft
get_alloc_size` would have segfaulted the backend if not implemented (see the sketch after this entry).

* ggml-virtgpu-backend: fix log format missing argument

* ggml-virtgpu-backend: improve the abort message

* ggml-virtgpu-backend: more safety checks

* ggml-virtgpu-backend: new error code

* ggml-virtgpu-backend: initialize all the error codes

* ggml-virtgpu: add a missing comment generated by the code generator

* ggml-virtgpu: add the '[virtgpu]' prefix to the device/buffer names

* ggml-virtgpu: apir_device_buffer_from_ptr: improve the error message

* ggml-virtgpu: shared: make it match the latest api_remoting.h of Virglrenderer APIR

(still unmerged)

* ggml-virtgpu: update the code generator to have dispatch_command_name in a host/guest shared file

* ggml-virtgpu: REMOTE_CALL: fail if the backend returns an error

* docs/backend/VirtGPU.md: indicate that the RAM+VRAM size is limited to 64 GB with libkrun

* ggml-virtgpu: turn off clang-format header ordering for some of the files

Compilation breaks when ordered alphabetically.

* ggml-virtgpu: clang-format

* ggml-virtgpu/backend/shared/api_remoting: better comments for the APIR return codes
2026-02-27 20:57:58 +02:00
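The "optional method" hardening described above amounts to a null check before
each interface call, with a sensible default when the backend does not
implement it. A hedged sketch follows (hypothetical, heavily simplified types;
not the actual ggml interface definitions).

```
#include <cstdint>
#include <cstdio>

// Hypothetical, simplified view of a backend interface where some methods
// may be left unimplemented (null). The host must not call them blindly.
struct buffer_type_iface {
    size_t (*get_alloc_size)(size_t requested); // optional
    size_t (*get_max_size)();                   // optional
};

struct backend_iface {
    void (*synchronize)();                      // optional
};

static void host_synchronize(const backend_iface & b) {
    if (b.synchronize) {
        b.synchronize();        // only call if implemented
    }                           // otherwise: nothing to wait for
}

static size_t host_get_alloc_size(const buffer_type_iface & bt, size_t requested) {
    return bt.get_alloc_size ? bt.get_alloc_size(requested) : requested; // default: identity
}

static size_t host_get_max_size(const buffer_type_iface & bt) {
    return bt.get_max_size ? bt.get_max_size() : SIZE_MAX; // default: no limit
}

int main() {
    const buffer_type_iface bt = { nullptr, nullptr };
    const backend_iface     be = { nullptr };
    host_synchronize(be);
    std::printf("alloc size: %zu, max size: %zu\n",
                host_get_alloc_size(bt, 4096), host_get_max_size(bt));
    return 0;
}
```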
Neo Zhang 4cac408c60 support permuted, remove check s0/s10 (llama/19889)
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2026-02-27 20:57:58 +02:00
Jeff Bolz fb55b2654b vulkan: check for memory overlap before doing fusion (llama/19768)
* vulkan: check for memory overlap before doing fusion

* Update ggml/src/ggml-vulkan/ggml-vulkan.cpp

* address feedback
2026-02-27 20:57:58 +02:00
Georgi Gerganov 279be33a83 ggml/gguf : prevent integer overflows (llama/19856)
* gguf : prevent integer overflow for ggml_context mem size (see the sketch after this entry)

* ggml : fix int overflows in ggml_new_object()

* gguf : prevent string exhaustion

* gguf : prevent array elements exhaustion

* ggml : fix negative tensor type oob

* py : assert that alignment is non-zero power of 2

* ggml : check int overflow in ggml_new_tensor_impl and ggml_new_object

* gguf-py : error on duplicate keys when reading

* py : restore tensor_fields

* enforce proper alignment in add_custom_alignment

* gguf : better name

* gguf : fix ctx size for no_alloc == true

* gguf : minor print fix

* ggml : print values when overflow

* ggml : remove deprecated ggml_type_sizef()

* ggml : relax ggml_type asserts to debug-only

* gguf : add mem_size overflow test

* gguf : add file size check for arrays

* ggml : relax asserts for ggml_get_type_traits()

* flake8 fix

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-02-27 20:57:58 +02:00
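Most of the fixes above boil down to checked arithmetic when sizing allocations
from untrusted GGUF metadata. A small sketch of that pattern (generic C++ with
illustrative names such as safe_total_size; not the actual ggml/gguf code):

```
#include <cstdint>
#include <cstdio>
#include <limits>

// Multiply two sizes, reporting failure instead of silently wrapping around.
static bool checked_mul(uint64_t a, uint64_t b, uint64_t & out) {
    if (b != 0 && a > std::numeric_limits<uint64_t>::max() / b) {
        return false; // would overflow
    }
    out = a * b;
    return true;
}

// Example: total bytes for n_elements of element_size bytes plus a header,
// as one would do before reserving a context or reading an array from a file.
static bool safe_total_size(uint64_t n_elements, uint64_t element_size,
                            uint64_t header, uint64_t & total) {
    uint64_t data = 0;
    if (!checked_mul(n_elements, element_size, data)) return false;
    if (data > std::numeric_limits<uint64_t>::max() - header) return false; // addition would overflow
    total = data + header;
    return true;
}

int main() {
    uint64_t total = 0;
    // A hostile file could claim ~2^62 elements; the checked path rejects it.
    const bool ok = safe_total_size(uint64_t{1} << 62, 8, 1024, total);
    std::printf("ok=%d total=%llu\n", ok, (unsigned long long) total);
    return 0;
}
```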
Ruben Ortlam 90800b5aa5 Vulkan Scalar Flash Attention Refactor (llama/19625)
* vulkan: allow using fp16 in scalar flash attention shader

* split rows inside of subgroups for faster synchronization

* use row_split when Br >= 4, change reductions to use shared memory if row_split == 1

* use f32 scalar FA if f16 is not supported by device

* fix amd workgroup size issue

* optimize masksh use

* add medium rows FA shader Br size

* fixes

* add padding to mask shmem buffer

* cache q values into registers for KQ

* fuse lf accumulation, pf and v accumulation into a loop

* stage K loads through shmem

* stage V loads through shmem

* only stage through shmem on Nvidia

* default to Bc 32

* also stage V through shmem when this is done for K

* dynamic subgroups for intel

* use vectorized stores

* use float_type for dequantize4 functions

* use smaller scalar rows size for smaller rows count

* relax flash attention split_k condition to allow non-gqa use

* use minimal subgroup size on Intel

* fix shmem support function

* fix rebase issues

* fixes

* Bc 4 for scalar FA is not a valid configuration

* Use wave32 on AMD RDNA for scalar FA

* add Intel shader core count lookup-table

* fix regressions

* device tuning

* tmpsh size fix

* fix editorconfig

* refactor fa tuning logic into a single place

* fix gqa opt logic

* fix block_rows with small n_rows

* amd tuning

* fix hsk=72/80 issue

* tuning

* allow condition skipping for column check

* use float16 for Of if available

* address feedback

* fix bad RDNA performance on head size <= 128 by limiting occupancy

* allow printing pipeline stats

* cleanup and fixes

* limit occupancy for GCN for small batch FA with large HSK

* disable f16 FA for GCN AMD GPUs on the proprietary driver
2026-02-27 20:57:58 +02:00
Jeff Bolz dcc877688d vulkan: fix coopmat1 without bf16 support (llama/19793) 2026-02-27 20:57:58 +02:00
Jeff Bolz 344eae3d22 vulkan: fix data race in mul_mat_id shader (llama/19790) 2026-02-27 20:57:58 +02:00
Max Krasnyansky 53b571a47e hexagon refactor all Ops to use local context struct (llama/19819)
* hexagon: refactor set/get/sum-rows ops to use local context

* hexagon: refactor ROPE and Softmax Ops to use local context

Improves performance a bit by precomputing things and saving in the context.

* hexagon: refactor activation ops to use local context struct

* hexagon: refactor unary ops to use local context struct and DMA/VTCM

* hexagon: use aligned hvx_scale function

* hexagon: remove unused fields from op_context

* hexagon: rewrite ROPE to use DMA and VTCM scratchpad

* hex-rope: keep N rows in scratchpad (instead of just two)

* hex-rope: introduce rowidx cache

* hex-rope: remove unused fields

* hex-rope: rewrite dma prefetch logic to allow for multi-row fetch/compute

also removes the need for fastdiv.

* hex-rope: minor formatting

* hex-rope: use indices and unroll the loops

* hex-rope: more updates to cleanup rope-block handling

* hexagon: cleanup supported type/dims checks

* hexagon: all reduce funcs replicated across lanes

There is no need to explicitly replicate the first value.

* snapdragon: update adb and windows scripts to use ubatch-size 256

Updated Ops support handles larger ubatches.
2026-02-27 20:57:58 +02:00
Alberto Cabrera Pérez 06fbd9c5f2 ggml-cpu: arm64: q5_K repack gemm and gemv (and generic) implementations (dotprod) (llama/19356)
* Generic GEMV and boilerplate for q5_K dotprod
* Generic GEMM and boilerplate for q5_K dotprod
* ARM64 q5_K dotprod GEMM
* ARM64 q5_K dotprod GEMV
2026-02-27 20:57:58 +02:00
Gaurav Garg 98915f889a Improve CUDA graph capture (llama/19754)
* Improve CUDA graph capture

Currently, CUDA graphs are eagerly enabled on the first call to ggml_backend_cuda_graph_compute. If the graph properties keep changing (4+ consecutive updates), the graph is permanently disabled. This is suboptimal because:

- The first call always incurs CUDA graph capture overhead even if the graph is unstable
- Once permanently disabled, CUDA graphs never re-enable even after the graph stabilizes (e.g., switching from prompt processing to decode)

The new approach delays CUDA graph activation until warmup completes: the same cgraph must be called at least twice with matching properties before CUDA graph capture begins. This avoids wasted capture overhead on volatile graphs and allows graphs to become eligible once they stabilize (see the sketch after this entry).
This also fixes issues such as https://github.com/ggml-org/llama.cpp/discussions/19708

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Remove EM dashes

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: Aman Gupta <amangupta052@gmail.com>
2026-02-27 20:57:58 +02:00
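The delayed-activation policy described above can be captured by a small state
machine: only start capturing once the same graph shape has been seen twice in
a row, and re-arm instead of disabling forever when it changes. A host-side
sketch (illustrative C++; graph_gate and the "properties hash" are stand-ins,
not the actual ggml-cuda logic):

```
#include <cstdint>
#include <cstdio>

// Stand-in for "the properties of the incoming cgraph" (node count, shapes, ...),
// reduced to a single hash for the sketch.
struct graph_gate {
    uint64_t last_props_hash = 0;
    int      stable_count    = 0;   // consecutive calls with matching properties

    // Returns true when it is worth (re)capturing/replaying a CUDA graph.
    bool should_use_graph(uint64_t props_hash) {
        if (props_hash == last_props_hash) {
            ++stable_count;
        } else {
            last_props_hash = props_hash;
            stable_count    = 1;    // changed: restart warmup, do not disable forever
        }
        // The first call with a new shape pays no capture cost; from the second
        // matching call onwards the graph is considered stable.
        return stable_count >= 2;
    }
};

int main() {
    graph_gate gate;
    const uint64_t prompt = 0x1111, decode = 0x2222;
    std::printf("%d %d %d %d %d\n",
        gate.should_use_graph(prompt),   // 0: warmup
        gate.should_use_graph(decode),   // 0: shape changed (prompt -> decode)
        gate.should_use_graph(decode),   // 1: stable, capture now
        gate.should_use_graph(decode),   // 1: replay
        gate.should_use_graph(prompt));  // 0: changed again, warm up again
    return 0;
}
```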
Taimur Ahmad 0c10a15447 ggml-cpu: add RVV vec dot kernels for quantization types (llama/18784)
* ggml-cpu: add rvv vec_dot for iq2_s

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: add rvv vec_dot for iq3_s

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: add rvv vec_dot for tq1_0, tq2_0

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

ggml-cpu: add rvv vec_dot for tq1_0, tq2_0

* ggml-cpu: add rvv vec_dot for iq1_s, iq1_m

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: add vlen switch for rvv vec_dot

---------

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>
2026-02-27 20:57:58 +02:00
Masashi Yoshimura 0158795ebc ggml-webgpu: Add unary op (SQR, SQRT, SIN, COS) support. (llama/19700)
* ggml-webgpu: Add unary op (SQR, SQRT, SIN, COS) support.

* Fix to cast the src value to f32 before sin/cos computing.
2026-02-27 20:57:58 +02:00
Ruben Ortlam 3f68f30907 vulkan: fix MMQ shader push constants and multi-dispatch (llama/19732) 2026-02-27 20:57:58 +02:00
Johannes Gäßler ade724fced CUDA: fix kernel selection logic for tile FA (llama/19686)
* CUDA: fix kernel selection logic for tile FA

* add comment
2026-02-27 20:57:58 +02:00
shalinib-ibm cc9e5cf89d llamafile: powerpc: add FP16 MMA path for Q4/Q8 matmul (llama/19709)
Avoid xvi8ger4pp signed→unsigned bias correction by dequantizing Q4/Q8
inputs to FP16 and using FP16×FP16→FP32 MMA. This removes
post-processing overhead and improves performance.

Performance Impact:
1.5 ~ 2x improvement in PP_Speed for Q4 and Q8 Models,
measured with llama-bench and llama-batched-bench.
Q8 Model: granite-4.0-h-micro-Q8_0.gguf (from huggingface)
Q4 Model: Meta-Llama3-8b Q4 model (generated with llama-quantize from
f32 model)

llama-bench Q8 Model Results:
 model                          	       size 	     params 	 backend    	 threads 	            test 	Base t/s	Patch t/s
 granitehybrid 3B Q8_0          	   3.16 GiB 	     3.19 B 	 CPU        	      10 	             pp8 	         64.48 ± 4.72 	         73.99 ± 0.27
 granitehybrid 3B Q8_0          	   3.16 GiB 	     3.19 B 	 CPU        	      10 	            pp16 	         80.11 ± 0.32 	        112.53 ± 0.40
 granitehybrid 3B Q8_0          	   3.16 GiB 	     3.19 B 	 CPU        	      10 	            pp32 	         89.10 ± 0.27 	        152.95 ± 0.68
 granitehybrid 3B Q8_0          	   3.16 GiB 	     3.19 B 	 CPU        	      10 	            pp64 	         93.65 ± 0.25 	        187.83 ± 0.83
 granitehybrid 3B Q8_0          	   3.16 GiB 	     3.19 B 	 CPU        	      10 	           pp128 	         99.93 ± 0.02 	        201.32 ± 0.11
 granitehybrid 3B Q8_0          	   3.16 GiB 	     3.19 B 	 CPU        	      10 	           pp256 	        102.32 ± 0.40 	        208.32 ± 0.41
 granitehybrid 3B Q8_0          	   3.16 GiB 	     3.19 B 	 CPU        	      10 	           pp512 	        103.42 ± 0.40 	        209.98 ± 0.14
 granitehybrid 3B Q8_0          	   3.16 GiB 	     3.19 B 	 CPU        	      10 	           tg128 	         20.35 ± 0.01 	         19.57 ± 0.01

llama-bench Q4 Model Results:
 model                          	       size 	     params 	 backend    	 threads 	            test 	              Base    t/s 	               Patch   t/s
 llama 8B Q4_0                  	   4.33 GiB 	     8.03 B 	 CPU        	      10 	             pp8 	         34.77 ± 0.10 	         41.23 ± 0.08
 llama 8B Q4_0                  	   4.33 GiB 	     8.03 B 	 CPU        	      10 	            pp16 	         40.81 ± 0.04 	         64.55 ± 0.15
 llama 8B Q4_0                  	   4.33 GiB 	     8.03 B 	 CPU        	      10 	            pp32 	         44.65 ± 0.05 	         90.84 ± 0.22
 llama 8B Q4_0                  	   4.33 GiB 	     8.03 B 	 CPU        	      10 	            pp64 	         47.49 ± 0.03 	        114.39 ± 0.11
 llama 8B Q4_0                  	   4.33 GiB 	     8.03 B 	 CPU        	      10 	           pp128 	         49.29 ± 0.24 	        120.13 ± 0.19
 llama 8B Q4_0                  	   4.33 GiB 	     8.03 B 	 CPU        	      10 	           pp256 	         49.77 ± 0.23 	        121.51 ± 0.11
 llama 8B Q4_0                  	   4.33 GiB 	     8.03 B 	 CPU        	      10 	           pp512 	         49.89 ± 0.23 	        117.52 ± 0.10
 llama 8B Q4_0                  	   4.33 GiB 	     8.03 B 	 CPU        	      10 	           tg128 	         13.40 ± 0.01 	         13.37 ± 0.00

Llama perplexity Results:

Model	                    Base Final PPL Estimate	Patch Final PPL Estimate
granite-4.0-h-micro-Q8_0    1.3862 +/- 0.04424	        1.3868 +/- 0.04432
Meta-Llama3-8b Q4	    1.3801 +/- 0.04116	        1.3803 +/- 0.04116

Signed-off-by: Shalini.Salomi.Bodapati <Shalini.Salomi.Bodapati@ibm.com>
2026-02-27 20:57:58 +02:00
Reese Levine 8b3a52ba87 ggml webgpu: Fix bug in dispatching large matrix-vector multiplication (llama/19535)
* Fix bug in dispatching large matrix-vector multiplication
2026-02-27 20:57:58 +02:00
Reese Levine fc7a78f4d8 ggml webgpu: shader library organization (llama/19530)
* Basic JIT compilation for mul_mat, get_rows, and scale (ggml/17)

* scale jit working

* preliminary working jit for getrows and mulmat, needs refining

* simplified mul_mat preprocessing switch statement

* get_rows fixes, mul_mat refinement

* formatted + last edits

* removed some extraneous prints

* fixed get_rows, fixed workgroup dispatch in mul_mat. no gibberish

* small fix

* some changes, working

* get_rows and mul_mat jit fixed and working

* Update formatting

* formatting

* Add header

---------

Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>
Co-authored-by: Reese Levine <reeselevine1@gmail.com>

* Start work on all-encompassing shader library

* refactor argmax, set_rows

* Refactor all but flashattention, mat mul

* flashattention and matrix multiplication moved to new format

* clean up preprocessing

* Formatting

* remove duplicate constants

* Split large shaders into multiple static strings

---------

Co-authored-by: neha-ha <137219201+neha-ha@users.noreply.github.com>
2026-02-27 20:57:58 +02:00
Jeff Bolz f1da0a26f5 vulkan: split mul_mat into multiple dispatches to avoid overflow (llama/19509)
* vulkan: split mul_mat into multiple dispatches to avoid overflow

The batch dimensions can be greater than the max workgroup count limit,
in which case we need to split into multiple dispatches and pass the base
index through a push constant.

Fall back for the less common p021 and nc variants.

* address feedback
2026-02-27 20:57:58 +02:00
shaofeiqi 51ce7de94c opencl: refactor expm1 and softplus (llama/19404)
* opencl: refactor expm1

* opencl: refactor softplus

* opencl: use h for half literals

---------

Co-authored-by: Li He <lih@qti.qualcomm.com>
2026-02-27 20:57:58 +02:00
shaofeiqi 6fadc749a9 opencl: optimize mean and sum_row kernels (llama/19614)
* opencl: optimize mean and sum_row kernels

* opencl: add comment for max subgroups

* opencl: format

---------

Co-authored-by: Li He <lih@qti.qualcomm.com>
2026-02-27 20:57:58 +02:00
Talha Can Havadar 58855d08c2 ggml: ggml-cpu: force-no-lto-for-cpu-feats (llama/19609)
When LTO is enabled in the build environment, it forces all builds to use LTO.
But the feature detection logic is fragile and was causing Illegal
instruction errors with LTO. This disables LTO for the feature
detection code to prevent cross-module optimization from inlining
architecture-specific instructions into the score function. Without this,
LTO can cause SIGILL when loading backends on older CPUs (e.g., loading
power10 backend on power9 crashes before feature check runs).
2026-02-27 20:57:58 +02:00
Georgi Gerganov cf4bd07028 cuda : enable CUDA graphs for MMID 1 <= BS <= 4 (llama/19645)
* cuda : enable CUDA graphs for MMID BS <= 4

* cont : add stream capture check

Co-authored-by: Oliver Simons <osimons@nvidia.com>

* cont : add MMVQ_MMID_MAX_BATCH_SIZE

---------

Co-authored-by: Oliver Simons <osimons@nvidia.com>
2026-02-27 20:57:58 +02:00
Judd 5ee5748722 ggml : make `ggml_is_view` as API (llama/19539)
* make `ggml_is_view` as API

* introduce `ggml_aux_is_view` as inline version for internal use.

* change `ggml_aux_is_view` to  `ggml_impl_is_view`
2026-02-27 20:57:58 +02:00
Mario Limonciello 5d9d72ec12 Adjust workaround for ROCWMMA_FATTN/GFX9 to only newer ROCm versions (llama/19591)
Avoids issues with ROCm 6.4.4.

Closes: https://github.com/ggml-org/llama.cpp/issues/19580
Fixes: 6845f7f87 ("Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (#19461)")

Signed-off-by: Mario Limonciello (AMD) <superm1@kernel.org>
2026-02-27 20:57:58 +02:00
abhijain1204fujitsu f8f7c1d891 ggml: aarch64: Implement SVE in Gemm q4_k 8x8 q8_k Kernel (llama/19132)
* Updated repack.cpp

* Updated repack.cpp

* Updated repack.cpp

* Added if condition to support only vector length 256.

* Changed the format removed comments and duplicate variable

* If SVE 256 was not present, the generic function was used for compute, which hurt performance.

So code was added so that, if SVE 256 is not present, the NEON path is used instead.

* Code format change suggestion

---------

Co-authored-by: Vithule, Prashant <Prashant.Vithule@fujitsu.com>
2026-02-27 20:57:58 +02:00
David Friehs 02a9f660b8 cuda: optimize iq2xxs/iq2xs/iq3xxs dequantization (llama/19624)
* cuda: optimize iq2xxs/iq2xs/iq3xxs dequantization

- load all 8 int8 for a grid position in one load
- calculate signs via popcnt instead of fetching from ksigns table
- broadcast signs to drop individual shift/mask

* cuda: iq2xxs: simplify sum scaling

express `(sum * scale + sum / 2) / 4` as `(sum * (scale * 2 + 1)) / 8`
express `((aux32 >> 28) * 2 + 1)` as `(aux32 >> 27 | 1)`

saves 3 registers for mul_mat_vec_q (152 -> 149) according to nsight
AFAICT no overflow can occur here as iq2xxs values are far too small

* uint -> uint32_t

error: identifier "uint" is undefined
2026-02-27 20:57:58 +02:00
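The two rewrites in the entry above are pure integer identities. A quick
brute-force check (standalone C++, independent of the CUDA kernel) confirms
they hold over the sampled ranges:

```
#include <cstdint>
#include <cstdio>

int main() {
    // Identity 1: (sum * scale + sum / 2) / 4  ==  (sum * (scale * 2 + 1)) / 8
    // for a 4-bit scale (0..15) and the small signed sums iq2_xxs can produce.
    for (int scale = 0; scale <= 15; ++scale) {
        for (int sum = -4096; sum <= 4096; ++sum) {
            const int a = (sum * scale + sum / 2) / 4;
            const int b = (sum * (scale * 2 + 1)) / 8;
            if (a != b) {
                std::printf("identity 1 fails: sum=%d scale=%d\n", sum, scale);
                return 1;
            }
        }
    }
    // Identity 2: ((aux32 >> 28) * 2 + 1)  ==  (aux32 >> 27 | 1) for uint32 aux32.
    for (uint64_t aux = 0; aux <= 0xFFFFFFFFull; aux += 0x10007) { // stride keeps it quick
        const uint32_t aux32 = (uint32_t) aux;
        if (((aux32 >> 28) * 2 + 1) != ((aux32 >> 27) | 1)) {
            std::printf("identity 2 fails: aux32=%u\n", aux32);
            return 1;
        }
    }
    std::printf("both rewrites are exact over the checked ranges\n");
    return 0;
}
```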
Daniel Bevenius df2f8d3bc4 cmake : check if KleidiAI API has been fetched (llama/19640)
This commit addresses a build issue with the KleidiAI backend when
building multiple CPU backends. Commit
3a00c98584e42a20675b6569d81beadb282b0952 ("cmake : fix KleidiAI install
target failure with EXCLUDE_FROM_ALL") introduced a change where
FetchContent_Populate is called instead of FetchContent_MakeAvailable,
where the latter does handle this case (it is idempotent but
FetchContent_Populate is not).

I missed this during my review and I should not have committed without
verifying the CI failure, sorry about that.
2026-02-27 20:57:58 +02:00
Georgi Gerganov 22f0861efc ggml : avoid UB in gemm ukernel (llama/19642) 2026-02-27 20:57:58 +02:00
Aaron Teo 7b5a1ebaa6 ggml-cpu: optimize ggml_vec_dot_bf16 for s390x (llama/19399) 2026-02-27 20:57:58 +02:00
Aman Gupta 76f769d06f ggml-cpu: FA add GEMM microkernel (llama/19422)
* ggml-cpu: FA add GEMM microkernel

* add guard for sizeless vector types

* fix case where DV % GGML_F32_EPR !=0

* move memset out of the loop

* move another memset out of the loop

* use RM=4 for arm

* simd_gemm: convert everything to int

* convert everything to size_t to avoid warnings

* fixup

* add pragma for ignoring aggressive loop optimizations
2026-02-27 20:57:58 +02:00
SamareshSingh 7ee772ab2b cmake : fix KleidiAI install target failure with EXCLUDE_FROM_ALL (llama/19581)
* cmake: fix KleidiAI install target failure with EXCLUDE_FROM_ALL

Fix for the bug #19501 by adding EXCLUDE_FROM_ALL to FetchContent_Declare. This properly excludes KleidiAI from both build and install targets, preventing install failures when GGML_CPU_KLEIDIAI=ON is used.

The KleidiAI source files are still compiled into libggml-cpu.so, preserving all functionality.

* addressed code review comments
2026-02-27 20:57:58 +02:00
Georgi Gerganov 4bea3cd329 ggml : bump version to 0.9.7 (ggml/1425) 2026-02-27 20:57:58 +02:00
Dmitry Atamanov cec1dd9d12
examples : update miniaudio library to 0.11.24 (#3672) 2026-02-27 11:15:15 +01:00
Maxime Grenu 21411d81ea
docs : fix duplicate word typo in VAD section (#3670)
The VAD section contained a spurious 'the' at the end of a sentence,
creating the run-on 'Using this information the / only the speech
segments...'. Replace the orphaned 'the' with a comma so the sentence
reads correctly: 'Using this information, only the speech segments...'.
2026-02-19 16:18:42 +01:00
Georgi Gerganov 364c77f4ca talk-llama : sync llama.cpp 2026-02-15 21:44:37 +02:00
Georgi Gerganov 83f2ed19e1 sync : ggml 2026-02-15 21:44:37 +02:00
Georgi Gerganov 4ac70ce791 models : optimize qwen3next graph (llama/19375)
* models : optimizing qwen3next graph

* cont

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* cont : remove redundant q, g chunking

* minor

* minor

* avoid passing masks around

* avoid concats during chunking

* naming + shapes

* update names and use prefix to disable CUDA graphs
2026-02-15 21:44:37 +02:00
Adrien Gallouët 226e8c041c ggml : fix GGML_DEBUG with OpenMP (llama/19599)
last_graph is only available without OpenMP, but
ggml_graph_compute_thread() is called in both cases.

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-02-15 21:44:37 +02:00
Georgi Gerganov fbdac5119c metal : fix ACC op (llama/19427) 2026-02-15 21:44:37 +02:00
Jeff Bolz cc448def01 vulkan: support L2_NORM with contiguous rows (llama/19604) 2026-02-15 21:44:37 +02:00
Jeff Bolz 197e9ab6eb vulkan: support GGML_OP_SET (llama/19584) 2026-02-15 21:44:37 +02:00
Sophon fc6bbab817 vulkan: Add vendor id for Qualcomm drivers (llama/19569)
This commit allows Qualcomm native vulkan driver to be used on Windows
instead of Mesa Dozen.
2026-02-15 21:44:37 +02:00
Max Krasnyansky e6476d4c12 hexagon: further optimizations and refactoring for flash attention (llama/19583)
* ggml-hexagon: fa improvements

ggml-hexagon: optimize flash attention calculations with improved variable handling

ggml-hexagon: streamline flash attention operations by removing redundant checks for FP32

ggml-hexagon: optimize hvx_dot_f16_f16_aa_rx2 by simplifying variable handling for unused elements

ggml-hexagon: optimize flash attention by changing slope vector type to F16

* hexfa: fixed test-backend-ops failures due to leftover element handling

* hexagon: refactor and optimize fa to use local context struct

* ggml-hexagon: optimize flash-attention using hvx_vec_expf

Use HVX for online softmax.

---------

Co-authored-by: chraac <chraac@gmail.com>
2026-02-15 21:44:37 +02:00
Jeff Bolz ec57bf407c vulkan: restore -inf check in FA shaders (llama/19582) 2026-02-15 21:44:37 +02:00
Alberto Cabrera Pérez e8a25654b2 Fix wrong memcpy length for block_interleave == 4 (llama/19575) 2026-02-15 21:44:37 +02:00
ymcki 628b545b7e fix vulkan ggml_acc only works in 3d but not 4d (llama/19426)
* fix vulkan ggml_acc only works in 3d but not 4d

* removed clamp in test_acc_block

* use the correct stride and its test case

* cuda : fix "supports op" condition

* change src0 to src1 in ggml_vk_acc. Update acc.comp with jeffbolznv's suggestion except to keep the boundary check

* version without boundary check

* revert back to boundary check version

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-02-15 21:44:37 +02:00
Aman Gupta 58e3d5a42d CUDA: loop over ne2*ne3 in case it overflows (llama/19538)
* CUDA: loop over ne2*ne3 in case it overflows

* use fastdiv
2026-02-15 21:44:37 +02:00
Oliver Simons 3eb4905af1 CUDA: Do not mutate cgraph for fused ADDs (llama/19566)
* Do not mutate cgraph for fused ADDs

1. We should try to minimize in-place changes to the incoming
   ggml_cgraph where possible (those should happen in graph_optimize)
2. Modifying in-place leads to an additional, unnecessary graph capture
   step as we store the properties before modifying the graph in-place
   in the cuda-backend

* Assert ggml_tensor is trivially copyable

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

---------

Co-authored-by: Aman Gupta <amangupta052@gmail.com>
2026-02-15 21:44:37 +02:00
Georgi Gerganov 0e94faa19c metal : improve concurrency (llama/19555) 2026-02-15 21:44:37 +02:00
Georgi Gerganov c5325e50fc metal : support GGML_OP_SET (llama/19548) 2026-02-15 21:44:37 +02:00
Shupei Fan 195af60a8b hexagon: fix typo in vtcm_needs_release (llama/19545) 2026-02-15 21:44:37 +02:00
lhez 9f87eeccdf opencl: add basic support for q4_1 (llama/19534)
* opencl: add q4_1 mv

* opencl: clean up

* opencl: add flattened q4_1 mv

* opencl: clean up

* opencl: add basic q4_1 mm

* opencl: fix whitespace

* opencl: add general q4_0 mm
2026-02-15 21:44:37 +02:00
Georgi Gerganov d8e3e2ef08 metal : update sum_rows kernel to support float4 (llama/19524) 2026-02-15 21:44:37 +02:00
Mario Limonciello 39b5f414a3 Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (llama/19461)
There is an upstream problem [1] with AMD's LLVM 22 fork and
rocWMMA 2.2.0 causing compilation issues on devices without
native fp16 support (CDNA devices).

The specialized types aren't resolved properly:
```
/opt/rocm/include/rocwmma/internal/mfma_impl.hpp:2549:37: error: ambiguous partial specializations of 'amdgcn_mfma<__half, __half, __half, 16, 16, 16>'
 2549 |             using ARegsT = typename Impl::ARegsT;
```

Add a workaround to explicitly declare the types and cast when
compiling with HIP and ROCWMMA_FATTN [2].  When this is actually
fixed upstream some guards can be used to detect and wrap the
version that has the fix to only apply when necessary.

Link: https://github.com/ROCm/rocm-libraries/issues/4398 [1]
Link: https://github.com/ggml-org/llama.cpp/issues/19269 [2]

Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
2026-02-15 21:44:37 +02:00
Max Krasnyansky 304205679c hexagon: further optimization and tuning of matmul and dot kernels (llama/19407)
* ggml-hexagon: implement 2x2 matmul kernel

* hexmm: implement vec_dot_rx2x2 for Q8_0 and MXFP4

* hexagon: fix editor config failures

* hexagon: refactor matmul ops to use context struct and remove wrappers

Also implement vec_dot_f16 2x2

* hexagon: refactor dyn quantizers to use mmctx

* hexagon: remove mm fastdiv from op_ctx

* hexagon: refactor matmul entry point to reduce code duplication

---------

Co-authored-by: Trivikram Reddy <tamarnat@qti.qualcomm.com>
2026-02-15 21:44:37 +02:00
lhez 0326fd37dd opencl: add general Q6_K mm and Q4_K mv (llama/19347)
* opencl: add general q6_k mm

* opencl: refine condition for q6_K mm

* opencl: add general q4_K mv

* opencl: fix whitespace
2026-02-15 21:44:37 +02:00
Georgi Gerganov f3e78985be ggml : unary ops support non-cont src0 + metal F16 unary ops (llama/19511)
* ggml : unary ops support non-cont src0

* metal : support F16 unary ops + fix ELU
2026-02-15 21:44:37 +02:00
Georgi Gerganov 3ffa1fd84e metal : extend l2_norm support for non-cont src0 (llama/19502) 2026-02-15 21:44:37 +02:00
Max Krasnyansky 09587ceb12 hexagon: Add ARGSORT, DIV, SQR, SQRT, SUM_ROWS, GEGLU (llama/19406)
* hexagon: add ARGSORT op

Co-authored-by: Yarden Tal <yardent@qti.qualcomm.com>

* hexagon: argsort reject tensors with huge rows for now

* Adding support for DIV,SQR,SQRT,SUM_ROWS ops in hexagon backend

* hexagon : Add GEGLU op

* hexagon: fix editor config check

* hexagon: rewrite and optimize binary ops ADD/SUB/MUL/DIV/ADD_ID to use DMA

---------

Co-authored-by: Yarden Tal <yardent@qti.qualcomm.com>
Co-authored-by: Manohara Hosakoppa Krishnamurthy <mhosakop@qti.qualcomm.com>
2026-02-15 21:44:37 +02:00
Georgi Gerganov 3504358056 ggml : extend bin bcast for permuted src1 (llama/19484)
* tests : extend bin bcast for permuted src1

* cont : extend bin support

* cont : s0 is always 1

* tests : simplify
2026-02-15 21:44:37 +02:00
Georgi Gerganov de949fb1db metal : consolidate unary ops (llama/19490) 2026-02-15 21:44:37 +02:00
Oliver Simons 57c620b4b1 CUDA : Update CCCL-tag for 3.2 to final release from RC (llama/19486)
CCCL 3.2 has been released since it was added to llama.cpp as part of
the backend-sampling PR, so it makes sense to update from the RC to the
final released version.

https://github.com/NVIDIA/cccl/releases/tag/v3.2.0
2026-02-15 21:44:37 +02:00
Nikhil Jain 562255fd77 Plug memory leaks and free resources on shutdown (llama/19315)
* Fix memory leaks in shader lib, backend, backend_context, buffer_context, and webgpu_buf_pool

* Free pools

* Cleanup

* More cleanup

* Run clang-format

* Fix arg-parser and tokenizer test errors that free an unallocated buffer

* Fix device lost callback to not print on device teardown

* Fix include and run clang-format

* remove unused unused

* Update binary ops

---------

Co-authored-by: Reese Levine <reeselevine1@gmail.com>
2026-02-15 21:44:37 +02:00
Alberto Cabrera Pérez d77265c818 ggml-cpu: arm64: q6_K repack gemm and gemv (and generic) implementations (dotprod) (llama/19360)
* First working version of GEMM and GEMV

* interleave loads and compute

* Clang-format

* Added missing fallback. Removed tested TODO.

* Swap M and N to be consistent with the repack template convention
2026-02-15 21:44:37 +02:00
k4ss4n b0fe2e84fa ggml : use noexcept overload for is_regular_file in backend registration (llama/19452)
Using the noexcept std::filesystem::directory_entry::is_regular_file
overload prevents abnormal termination when an error would otherwise be
thrown (as caused by symlinks to non-existent folders on Linux).

Resolves: #18560
2026-02-15 21:44:37 +02:00
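A minimal sketch of the pattern from the entry above, assuming a directory-scanning loop like the one used for backend discovery (the function name and loop body are illustrative): the error_code overloads of std::filesystem are noexcept, so a symlink to a missing target is skipped instead of throwing and terminating the process.

```
#include <filesystem>
#include <system_error>

namespace fs = std::filesystem;

// Illustrative backend-discovery loop: is_regular_file(std::error_code&) is
// noexcept, so a dangling symlink is skipped instead of throwing.
static void scan_for_backends(const fs::path & dir) {
    std::error_code dir_ec;
    for (const auto & entry : fs::directory_iterator(dir, dir_ec)) {
        std::error_code ec;
        if (!entry.is_regular_file(ec) || ec) {
            continue; // broken symlink or unreadable entry: just skip it
        }
        // ... check entry.path() for a loadable ggml backend library ...
    }
}
```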
Raul Torres 2de2fc9270 CANN: Remove unnecessary wrapper for `ggml_backend_buft_is_cann` (llama/18968) 2026-02-15 21:44:37 +02:00
hipudding 6a74f56212 CANN: implement quantized MUL_MAT_ID for MoE models (llama/19228)
Implement ggml_cann_mul_mat_id_quant function to support quantized matrix
multiplication for Mixture of Experts (MoE) architectures on CANN backend.

Key features:
- Support Q4_0 and Q8_0 quantized weight formats
- Use IndexSelect to dynamically route expert-specific weights based on indices
- Leverage WeightQuantBatchMatmulV2 for efficient quantized computation
- Handle automatic F16 type conversion for hardware compatibility
- Support both per-expert and broadcast input modes

Implementation details:
- Extract expert weights and scales using CANN IndexSelect operation
- Process each batch and expert combination independently
- Create proper tensor views with correct stride for matmul operations
- Automatic input/output type casting to/from F16 as needed

Testing: All test cases passed for supported types (F32, F16, Q4_0, Q8_0).
2026-02-15 21:44:37 +02:00
Georgi Gerganov a36210c836 cuda : extend GGML_OP_PAD to work with non-cont src0 (llama/19429)
* cuda : extend GGML_OP_PAD to work with non-cont src0

* tests : add permuted pad
2026-02-15 21:44:37 +02:00
Oliver Simons 808904277e CUDA: Fix non-contig rope (llama/19338)
* Rename variables + fix rope_neox

It seems the memory layout is shared with Vulkan, so we can port the fix from
https://github.com/ggml-org/llama.cpp/pull/19299

* Fix rope_multi

* Fix rope_vision

* Fix rope_norm

* Rename ne* to ne0* for consistent variable naming

* cont : consistent stride names

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-02-15 21:44:37 +02:00
Nuno 764482c317
ci: add vulkan docker image (#3644)
Signed-off-by: rare-magma <rare-magma@posteo.eu>
2026-02-09 12:33:06 +02:00
Pádraic Slattery 052066c4f7
chore: Update outdated GitHub Actions versions (#3646) 2026-02-09 12:32:46 +02:00
Christian Kastner 525be69a66
cmake: Drop obsolete build-time configuration of backends (#3649)
The backend configuration now happens in ggml.

This updated configuration mirrors that of llama.cpp.
2026-02-09 12:32:18 +02:00
Sid Mohan eb27fa2252
server : fix hardcoded /inference path in default HTML page (#3639)
Closes #3596
2026-02-09 10:10:13 +02:00
Georgi Gerganov 193f7cdaaf
ci : try fix mirrors (#3655) 2026-02-09 09:59:22 +02:00
Georgi Gerganov 4b23ff249e talk-llama : sync llama.cpp 2026-02-08 09:29:10 +02:00
Georgi Gerganov b0e81c1a2e sync : ggml 2026-02-08 09:29:10 +02:00
Georgi Gerganov 55d7cb2e93 metal : consolidate bin kernels (llama/19390)
* metal : refactor bin kernels

* cont

* cont : fix cv
2026-02-08 09:29:10 +02:00
Georgi Gerganov a9a0a51fba metal : fix event synchronization in cpy_tensor_async (llama/19402) 2026-02-08 09:29:10 +02:00
Abhijit Ramesh 1739af663a ggml-webgpu: JIT compile binary operators and handle binding overlaps (llama/19310)
* ggml webgpu: port binary operators to use pre-wgsl

* Add binary.wgsl: unified shader with conditionals for all 4 ops

* Add gen_binary_shaders.cpp: build tool for using pre_wgsl preprocessor

* Remove bin_op.tmpl.wgsl and binary.wgsl (Python template)

* Update CMake to generate binary operator shaders at build time

* ggml-webgpu: migrate binary ops to JIT compilation with overlap handling

* port binary operators from AOT to pre-wgsl JIT compilation

* add src1=dst overlap handling for binary ops

* use compile-time workgroup size defines instead of runtime overrides

* ggml-webgpu: complete overlap handling for binary ops

* add support for inplace & overlap case in binding setup

* restructure conditional logic to handle all overlap cases

* ensure all buffer bindings are correctly assigned for edge cases

* ggml-webgpu: remove unused binary overlap cases

Remove src0==src1 binary overlap case that never occurs in practice.

* keep INPLACE (src0==dst), OVERLAP (src1==dst), DEFAULT

* remove unused src0==src1 and all-same variant

* refactor wgsl to eliminate duplication
2026-02-08 09:29:10 +02:00
Nechama Krashinski f2f7320817 sycl: add F16 support for GGML_OP_CEIL (llama/19306)
* Fix SYCL CEIL operator

* sycl: implement GGML_OP_CEIL
2026-02-08 09:29:10 +02:00
Jeff Bolz cea22b3075 vulkan: For coopmat2 FA, use fp16 accumulators for the final result (llama/19376)
The CPU and CUDA backends use fp16 for the VKQ accumulator type; this change
does the same for Vulkan. This helps particularly with large head sizes, which
are very register-limited.

I tried this for the coopmat1 path and it slowed down a bit. I didn't try for
scalar.

I applied the softmax bias that the cuda backend uses to avoid overflow,
although I was not able to reproduce the original bug without it.
2026-02-08 09:29:10 +02:00
Jeff Bolz c1b63354bb vulkan: make FA mask/softcap enables spec constants (llama/19309)
* vulkan: make FA mask/softcap enables spec constants

* don't specialize for sinks

* bump timeout a little bit
2026-02-08 09:29:10 +02:00
Georgi Gerganov 776cf61857 metal : skip loading all-zero mask (llama/19337)
* metal : skip loading all-zero mask

* cont : minor
2026-02-08 09:29:10 +02:00
Georgi Gerganov 2a7d5490f1 cuda : cuda graphs now compare all node params (llama/19383) 2026-02-08 09:29:10 +02:00
Georgi Gerganov 34d332aca5 metal : adaptive CPU/GPU interleave based on number of nodes (llama/19369) 2026-02-08 09:29:10 +02:00
Jeff Bolz a567c140a3 vulkan: Preprocess FA mask to detect all-neg-inf and all-zero. (llama/19281)
Write out a 2-bit code per block and avoid loading the mask when it
matches these two common cases.

Apply this optimization when the mask is relatively large (i.e. prompt
processing).
2026-02-08 09:29:10 +02:00
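An illustrative host-side view of the encoding described in the entry above (the actual shader-side layout may differ): two bits per mask block tell the flash-attention kernel whether it can skip loading that block, and the preprocessing pass is only worthwhile when the mask is large, i.e. during prompt processing.

```
#include <cstdint>

// Two-bit classification per mask block (values are illustrative).
enum mask_block_code : uint32_t {
    MASK_BLOCK_MIXED       = 0, // values vary: the mask block must be loaded
    MASK_BLOCK_ALL_ZERO    = 1, // no-op mask: skip the load, keep the KV block
    MASK_BLOCK_ALL_NEG_INF = 2, // fully masked: the whole KV block can be skipped
};
```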
Georgi Gerganov 0781df2518 metal : add diag (llama/19330) 2026-02-08 09:29:10 +02:00
Oleksandr Kuvshynov 932def3198 vulkan: fix GPU deduplication logic. (llama/19222)
* vulkan: fix GPU deduplication logic.

As reported in https://github.com/ggml-org/llama.cpp/issues/19221, the
(same UUID, same driver) logic is problematic for Windows + Intel iGPU.

Let's just avoid filtering for MoltenVK, which is Apple-specific, and
keep the logic the same as before 88d23ad5 - just dedup based on UUID.

Verified that macOS + 4x Vega still reports 4 GPUs with this version.

* vulkan: only skip dedup when both drivers are moltenVk
2026-02-08 09:29:10 +02:00
Jeff Bolz 5a786f7648 vulkan: Set k_load_shmem to false when K is too large (llama/19301) 2026-02-08 09:29:10 +02:00
Jeff Bolz e0a3f393ad vulkan: fix non-contig rope (llama/19299) 2026-02-08 09:29:10 +02:00
will-lms eecc9bfa69 metal : add missing includes (llama/19348) 2026-02-08 09:29:10 +02:00
Kevin Pouget 2763054f99 ggml-virtgpu: make the code thread safe (llama/19204)
* ggml-virtgpu: regenerate_remoting.py: add the ability to deprecate a function

* ggml-virtgpu: deprecate buffer_type is_host remoting

not necessary

* ggml-virtgpu: stop using static vars as cache

The static init isn't thread safe.

* ggml-virtgpu: protect the use of the shared memory to transfer data

* ggml-virtgpu: make the remote calls thread-safe

* ggml-virtgpu: backend: don't continue if couldn't allocate the tensor memory

* ggml-virtgpu: add a cleanup function for consistency

* ggml-virtgpu: backend: don't crash if buft->iface.get_max_size is missing

* fix style and ordering

* Remove the static variable in apir_device_get_count

* ggml-virtgpu: improve the logging

* fix review minor formatting changes
2026-02-08 09:29:10 +02:00
Aman Gupta 4685ec9555 ggml-cpu: use LUT for converting e8->f32 scales on x86 (llama/19288)
* ggml-cpu: use LUT for converting e8->f32 scales on x86

* add dispatch based on macro
2026-02-08 09:29:10 +02:00
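A sketch of the lookup-table idea from the entry above, assuming the "e8" scales are OCP e8m0 exponents decoded as 2^(e − 127); that decoding and the helper names here are assumptions, not the ggml-cpu symbols.

```
#include <cmath>
#include <cstdint>

// 256-entry table, filled once, assuming e8m0 decoding: value = 2^(e - 127)
// (the 0xFF NaN encoding is ignored in this sketch).
static float e8m0_to_f32_lut[256];

static void init_e8m0_lut() {
    for (int e = 0; e < 256; ++e) {
        e8m0_to_f32_lut[e] = std::ldexp(1.0f, e - 127);
    }
}

// Each scale conversion becomes a single indexed load instead of
// per-element exponent bit manipulation.
static inline float e8m0_to_f32(uint8_t e) {
    return e8m0_to_f32_lut[e];
}
```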
Georgi Gerganov 5dda94dd2e metal : add solve_tri (llama/19302) 2026-02-08 09:29:10 +02:00
Ruben Ortlam aa34558b6f vulkan: disable coopmat1 fa on Nvidia Turing (llama/19290) 2026-02-08 09:29:10 +02:00
Aman Gupta 8eede801e3 CUDA: use mmvq for mul-mat-id for small batch sizes (llama/18958)
* CUDA: use mmvq for mul-mat-id for small batch sizes

* add mmvq too

* Fix perf issue on ampere. Use mmvf mm-id only for non-nvidia GPUs

* templatize multi_token_path
2026-02-08 09:29:10 +02:00
Georgi Gerganov ce8a2da620 metal : minor cleanup (llama/19251) 2026-02-08 09:29:10 +02:00
Oliver Simons 698265d754 CUDA: Fix loop unrolling for BW in mul_mat_q_stream_k_fixup (llama/19053)
By providing the stride_* variables as size_t (i.e., 64-bit), the compiler can
correctly unroll the [two for-loops](557515be1e/ggml/src/ggml-cuda/mmq.cuh (L3789-L3816))
on BW. This gives a small perf improvement for the prefill/pp phase on BW, while
not affecting other SMs:

| GPU                                                     | Model                 | Test   |   t/s master |   t/s osimons/fix_bw_mmq_fixup_kernel |   Speedup |
|:--------------------------------------------------------|:----------------------|:-------|-------------:|--------------------------------------:|----------:|
| NVIDIA RTX 6000 Ada Generation                          | gpt-oss 20B MXFP4 MoE | pp8096 |      8404.05 |                               8375.79 |      1.00 |
| NVIDIA RTX 6000 Ada Generation                          | llama 3B Q4_K_M       | pp8096 |     16148.93 |                              16019.60 |      0.99 |
| NVIDIA RTX 6000 Ada Generation                          | llama 8B Q4_0         | pp8096 |      8008.29 |                               7978.80 |      1.00 |
| NVIDIA RTX 6000 Ada Generation                          | nemotron_h 9B BF16    | pp8096 |      4263.16 |                               4248.53 |      1.00 |
| NVIDIA RTX 6000 Ada Generation                          | nemotron_h 9B Q4_K_M  | pp8096 |      5165.11 |                               5157.43 |      1.00 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | gpt-oss 20B MXFP4 MoE | pp8096 |     12582.80 |                              12758.37 |      1.01 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | llama 3B Q4_K_M       | pp8096 |     16879.10 |                              17619.47 |      1.04 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | llama 8B Q4_0         | pp8096 |     10649.90 |                              10982.65 |      1.03 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | nemotron_h 9B BF16    | pp8096 |      7717.73 |                               7716.22 |      1.00 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | nemotron_h 9B Q4_K_M  | pp8096 |      7301.90 |                               7370.38 |      1.01 |
2026-02-08 09:29:10 +02:00
George 57107b2bf8 ggml: added cleanups in ggml_quantize_free (llama/19278)
Add missing cleanup calls for IQ2_S, IQ1_M quantization types and IQ3XS with 512 blocks during quantization cleanup.
2026-02-08 09:29:10 +02:00
Gaurav Garg 6ec362d2e0 cuda : revert CUDA_SCALE_LAUNCH_QUEUES override until investigated (llama/19227)
Hangs were reported on Jetson Orin AGX when CUDA_SCALE_LAUNCH_QUEUES=4x was set. Revert the previous PR (#19042) and update the documentation to suggest setting CUDA_SCALE_LAUNCH_QUEUES=4x for higher throughput on multi-GPU systems.
2026-02-08 09:29:10 +02:00
lhez 591072fcc8 opencl: refactor some ops, concat, repeat, tanh and scale (llama/19226)
* opencl: refactor concat

* opencl: refactor repeat

* opencl: refactor tanh

* opencl: enable fp16 for tanh

* opencl: refactor scale

* opencl: fix unused variables
2026-02-08 09:29:10 +02:00
Aman Gupta 871063016d ggml-cpu: FA split across kv for faster TG (llama/19209)
* ggml-cpu: split across kv for faster TG

* simplify sinks application

* add ref impl
2026-02-08 09:29:10 +02:00
Neo Zhang c4003da2b8 Remove support for Nvidia & AMD GPUs, because the oneAPI plugin for Nvidia & AMD GPUs is unavailable: its download/installation channels no longer work (llama/19246)
Users can't build the software for Nvidia & AMD GPUs.
Remove oneMath since it is only used in the NV and AMD code paths.
2026-02-08 09:29:10 +02:00
Tamar 74353e90a1 sycl: implement GGML_OP_TOP_K (llama/19242) 2026-02-08 09:29:10 +02:00
Georgi Gerganov 73e04555eb metal : support virtual devices (llama/18919)
* metal : support virtual devices

* cont : manage buffer type context memory

* metal : add events

* cont : implement cpy_tensor_async
2026-02-08 09:29:10 +02:00
Johannes Gäßler 625c8d863e ggml-backend: fix async set/get fallback sync (llama/19179) 2026-02-08 09:29:10 +02:00
Christian Kastner 0e219ebf89 docs : Minor cleanups (llama/19252)
* Update old URLs to github.com/ggml-org/

* Bump copyrights
2026-02-08 09:29:10 +02:00
Nikhil Jain a0256b8159 Remove pipeline cache mutexes (llama/19195)
* Remove mutex for pipeline caches, since they are now per-thread.

* Add comment

* Run clang-format

* Cleanup

* Run CI again

* Run CI once more

* Run clang-format
2026-02-08 09:29:10 +02:00
Max Krasnyansky aca5953d8d Bump cmake max version (needed for Windows on Snapdragon builds) (llama/19188)
* Bump max cmake version (needed for Windows on Snapdragon builds)

* cmake: move max version setting into ggml/CMakeLists
2026-02-08 09:29:10 +02:00
nullname 9b927dd849 ggml-hexagon: flash-attention and reduce-sum optimizations (llama/19141)
* wip

* ggml-hexagon: add vectorized dot product function for FP32 and FP16 accumulation

* ggml-hexagon: optimize dot product functions for FP16 and FP32 with new vectorized implementations

* wip

* ggml-hexagon: optimize hvx_vec_dump_f32_n and hvx_vec_reduce_sum_qf32x2 functions for improved performance

* ggml-hexagon: refactor dot product functions to use a common loading function for improved readability

* optimize vector dot product functions to use unified reduction for improved performance

* hexagon: optimize reduce-sum for v75+

* hexagon: always keep row_sums in sf/fp32

* ggml-hexagon: enhance directory checks for HEXAGON_SDK_ROOT and HEXAGON_TOOLS_ROOT

* fix compiling error after rebase

---------

Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
2026-02-08 09:29:10 +02:00
shaofeiqi db9c88744d opencl: add optimized q8_0 mm kernel for adreno (llama/18871)
* Add Q8_0 OpenCL kernel

Co-authored-by: yunjie <yunjie@qti.qualcomm.com>

* opencl: fix build for non-adreno

* opencl: refactor q8_0

* opencl: enforce subgroup size of 64 for adreno for q8_0

* For A750 and older generations, subgroup size can be 64 or 128.
  This kernel assumes subgroup size 64.

* opencl: suppress warning when adreno kernels are disabled

---------

Co-authored-by: yunjie <yunjie@qti.qualcomm.com>
Co-authored-by: Li He <lih@qti.qualcomm.com>
2026-02-08 09:29:10 +02:00
Simon Redman efd6344939 Correctly fetch q8_1 quantize pipeline in test as needed by 8a3519b (llama/19194) 2026-02-08 09:29:10 +02:00
Georgi Gerganov 06e3750407 ggml : bump version to 0.9.6 (ggml/1423) 2026-02-08 09:29:10 +02:00
Georgi Gerganov fc1a3e579e cmake : remove unused file (ggml/1419) 2026-02-08 09:29:10 +02:00
KITAITI Makoto 941bdabbe4
ruby : add `Whisper::Context::Params`, fix token memory management (#3647)
* Don't convert to temporary VALUE

* Define Whisper::Context::Params

* Add test for Whisper::Context::Params

* Implement Whisper::Context::Params

* Add tests for Context::Params

* Fix Whisper::Token memory management

* Add test for token_timestamps

* Make Context accept Context::Params

* Make Context::Params.new accept keyword args

* Add test for Context::Params.new with keyword args

* Add signature of Context::Params

* Add example for Whisper::Token

* Fix typos

* Revert "Don't convert to temporary VALUE"

This reverts commit dee66e7384.

* Hold Token#text as Ruby object

* Don't use pointer for ruby_whisper_context_params.params

* Use RUBY_DEFAULT_FREE instead of custom function

* Update bindings/ruby/README.md

Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>

* Add document for Whisper::Context::Params

---------

Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2026-02-04 20:33:09 +09:00
KITAITI Makoto aa1bc0d1a6
ruby : add `VAD::Context#segments_from_samples`, allow Pathname, etc. (#3633)
* ruby : Bump version to 1.3.6

* Fix code in example

* Add sample code to transcribe from MemoryView

* Define GetVADContext macro

* Use GetVADContext

* Extract parse_full_args function

* Use parse_full_args in ruby_whisper_full_parallel

* Free samples after use

* Check return value of parse_full_args()

* Define GetVADParams macro

* Add VAD::Context#segments_from_samples

* Add tests for VAD::Context#segments_from_samples

* Add signature for VAD::Context#segments_from_samples

* Add sample code for VAD::Context#segments_from_samples

* Add test for Whisper::Context#transcribe with Pathname

* Make Whisper::Context#transcribe and Whisper::VAD::Context#detect accept Pathname

* Update signature of Whisper::Context#transcribe

* Fix variable name

* Don't free memory view

* Make parse_full_args return struct

* Fallback when failed to get MemoryView

* Add num of samples when too long

* Check members of MemoryView

* Fix a typo

* Remove unnecessary include

* Fix a typo

* Fix a typo

* Handle the case where the MemoryView doesn't fit the spec

* Add TODO comment

* Add optimization option to compiler flags

* Use ALLOC_N instead of malloc

* Add description to sample code

* Rename and change args: parse_full_args -> parse_samples

* Free samples when exception raised

* Assign type check result to a variable

* Define wrapper function of whisper_full

* Change signature of parse_samples for rb_ensure

* Ensure release MemoryView

* Extract fill_samples function

* Free samples memory when filling it failed

* Free samples memory when transcription failed

* Prepare transcription in wrapper function

* Change function name

* Simplify function boundary
2026-01-30 22:59:36 +09:00
Frieder Bluemle bf422cb704
scripts : Fix dSYMs path case for macOS xcframework build (#3630)
The script creates dSYMs/ but references dSYMS/ for macOS, causing
build failures on case-sensitive filesystems.
2026-01-30 15:57:26 +02:00
Georgi Gerganov acbace0571 cuda : fix compile warnings (#0) 2026-01-30 15:56:40 +02:00
Georgi Gerganov 953e503fd9 talk-llama : sync llama.cpp 2026-01-30 15:56:40 +02:00
Georgi Gerganov b529c0610f sync : ggml 2026-01-30 15:56:40 +02:00
bssrdf 5dca0db99c add tensor type checking as part of cuda graph properties (llama/19186) 2026-01-30 15:56:40 +02:00
s8322 2a16e7a67f sycl: implement GGML_UNARY_OP_SOFTPLUS (llama/19114)
* sycl: add softplus unary op implementation

* sycl: add softplus unary op implementation

* docs(ops): mark SYCL SOFTPLUS as supported

* docs: update SYCL status for SOFTPLUS
2026-01-30 15:56:40 +02:00
RachelMantel 1b3c27efae sycl: implement GGML_OP_TRI (llama/19089)
* sycl: implement GGML_OP_TRI

* docs: update ops.md for SYCL TRI

* docs: regenerate ops.md

* docs: update SYCL support for GGML_OP_TRI
2026-01-30 15:56:40 +02:00
Zheyuan Chen 829e70044b ggml-webgpu: improve flash attention performance by software pipelining (llama/19151)
* webgpu : pipeline flash_attn Q/K loads in WGSL

* ggml-webgpu: unroll Q*K accumulation inner loop

* ggml-webgpu: vectorization

* ggml-webgpu: unrolling

* ggml-webgpu: remove redundant unrolling

* ggml-webgpu: restore the config

* ggml-webgpu: remove redundant comments

* ggml-webgpu: formatting

* ggml-webgpu: formatting and remove vectorization

* ggml-webgpu: remove unnecessary constants

* ggml-webgpu: change QKV buffer to read_write to pass validation

* ggml-webgpu: add explanation for the additional bracket around Q K accumulate

* Indentation and for -> if for tail

* Kick off CI on wgsl only commits

---------

Co-authored-by: Reese Levine <reeselevine1@gmail.com>
2026-01-30 15:56:40 +02:00
Todor Boinovski 2a89a3f35c hexagon: enable offloading to Hexagon on Windows on Snapdragon (llama/19150)
* hexagon: updates to enable offloading to HTP on WoS

* Update windows.md

* Update windows.md

* hexagon: enable -O3 optimizations

* hexagon: move all _WINDOWS conditional compilation to _WIN32

* hexagon: updates to enable offloading to HTP on WoS

* hexagon: use run-time vs load-time dynamic linking for cdsp driver interface

* refactor htp-drv

* hexagon: add run-bench.ps1 script

* hexagon: htdrv refactor

* hexagon: unify Android and Windows build readmes

* hexagon: update README.md

* hexagon: refactor htpdrv

* hexagon: drv refactor

* hexagon: more drv refactor

* hexagon: fixes for android builds

* hexagon: factor out dl into ggml-backend-dl

* hexagon: add run-tool.ps1 script

* hexagon: merge htp-utils in htp-drv and remove unused code

* wos: no need for getopt_custom.h

* wos: add missing CR in htpdrv

* hexagon: ndev enforcement applies only to Android devices

* hexagon: add support for generating and signing .cat file

* hexagon: add .inf file

* hexagon: working auto-signing and improved windows builds

* hexagon: further improve skel build

* hexagon: add rough WoS guide

* hexagon: updated windows guide

* hexagon: improve cmake handling of certs and logging

* hexagon: improve windows setup/build doc

* hexagon: more windows readme updates

* hexagon: windows readme updates

* hexagon: windows readme updates

* hexagon: windows readme updates

* hexagon: windows readme updates

* Update windows.md

* Update windows.md

* snapdragon: rename docs/backend/hexagon to docs/backends/snapdragon

Also added a PowerShell script to simplify build env setup.

* hexagon: remove trailing whitespace and move cmake requirement to user-presets

* hexagon: fix CMakeUserPresets path in workflow yaml

* hexagon: introduce local version of libdl.h

* hexagon: fix src1 reuse logic

gpt-oss needs a bigger lookahead window.
The check for src[1] itself being quantized was wrong.

---------

Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
2026-01-30 15:56:40 +02:00
Georgi Gerganov b997e690ef cuda : fix nkvo, offload and cuda graph node properties matching (llama/19165)
* cuda : fix nkvo

* cont : more robust cuda graph node property matching

* cont : restore pre-leafs implementation

* cont : comments + static_assert
2026-01-30 15:56:40 +02:00
yulo 34a3e28a08 HIP: add mmf for CDNA (llama/18896)
* refactor mmf rows_per_block

* speed up compile

* pass cdna compile

* fix cuda error

* clean up mmf

* f32 mmf

* clean float mma

* fix mmf error

* faster mmf

* extend tile k

* fix compile error

* Revert "extend tile k"

This reverts commit 4d2ef3d483932659801a59a5af0b6b48f6ffd5c7.

* fix smem overflow

* speed up compiling mmf

* speed up compile for hip

* 512 block for cdna

* config pad size

* fix as comment

* update select logic

* move some code to cuh

* fix as comment

* correct cdna3 config

---------

Co-authored-by: zhang hui <you@example.com>
2026-01-30 15:56:40 +02:00
Vishal Singh e0a2182970 ggml-zendnn : resolve ZenDNN backend cross-module symbol dependency (llama/19159) 2026-01-30 15:56:40 +02:00
Aman Gupta 62ba8b537f CUDA: refactor topk-moe to enable more models (GLM 4.7, Nemotron etc.) (llama/19126) 2026-01-30 15:56:40 +02:00
Neo Zhang f0e85bb142 sycl: fix norm kernels (l2_norm, group_norm, rms_norm) by removing asserts to support more cases (llama/19154)
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2026-01-30 15:56:40 +02:00
Ruben Ortlam 33148bb523 Vulkan Flash Attention Coopmat1 Refactor (llama/19075)
* vulkan: use coopmat for flash attention p*v matrix multiplication

* fix P loading issue

* fix barrier position

* remove reduction that is no longer needed

* move max thread reduction into loop

* remove osh padding

* add bounds checks and padding

* remove unused code

* fix shmem sizes, loop duration and accesses

* don't overwrite Qf, add new shared psh buffer instead

* add missing bounds checks

* use subgroup reductions

* optimize

* move bounds check, reduce barriers

* support other Bc values and other subgroup sizes

* remove D_split

* replace Of register array with shared memory Ofsh array

* parallelize HSV across the rowgroups

* go back to Of in registers, not shmem

* vectorize sfsh

* don't store entire K tile in shmem

* fixes

* load large k tiles to shmem on Nvidia

* adapt shared memory host check function to shader changes

* remove Bc 32 case

* remove unused variable

* fix missing mask reduction tmspsh barrier

* fix mask bounds check

* fix rowmax f16 under/overflow to inf

* fix flash_attn_cm2 BLOCK_SIZE preprocessor directives
2026-01-30 15:56:40 +02:00
Patryk Kaminski cc0c103b5d ggml-sycl: remove unused syclcompat header (llama/19140)
The syclcompat/math.hpp header is not used anymore. The change that introduced it was successfully reverted (https://github.com/ggml-org/llama.cpp/pull/17826).
This include path will become obsolete and be dropped in oneAPI 2026.0, effectively breaking ggml-sycl builds.
2026-01-30 15:56:40 +02:00
Oleksandr Kuvshynov dda7d9cd1c vulkan: handle device dedup on MacOS + Vega II Duo cards (llama/19058)
Deduplication here relied on the fact that Vulkan would return a unique
UUID for each physical GPU. At the moment, that is not always the case.
On a Mac Pro 2019 running macOS with 2 Vega II Duo cards (so 4 GPUs total),
MoltenVK would assign the same UUID to pairs of GPUs unless they
are connected with Infinity Fabric.

See more details here: KhronosGroup/MoltenVK#2683.

The right way is to fix that in MoltenVK, but until it is fixed,
llama.cpp would only recognize 2 of the 4 GPUs in such a configuration.

The deduplication logic here is changed to only filter GPUs if UUID is
same but driver is different.
2026-01-30 15:56:40 +02:00
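A sketch of the adjusted predicate described in the entry above; the struct and field names are illustrative, not the actual ggml-vulkan types.

```
#include <array>
#include <cstdint>

// Illustrative device record.
struct vk_dev_info {
    std::array<uint8_t, 16> uuid;      // e.g. VkPhysicalDeviceIDProperties::deviceUUID
    uint32_t                driver_id; // e.g. VkPhysicalDeviceDriverProperties::driverID
};

// A device is dropped as a duplicate only when another device reports the
// same UUID through a different driver (the same card exposed via two ICDs).
static bool is_duplicate(const vk_dev_info & a, const vk_dev_info & b) {
    return a.uuid == b.uuid && a.driver_id != b.driver_id;
}
```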
Kevin Pouget 531d7b6781 ggml: new backend for Virglrenderer API Remoting acceleration (v2) (llama/18718) 2026-01-30 15:56:40 +02:00
Alberto Cabrera Pérez 3701413a71 ggml-cpu: arm64: Q4_K scale unroll and vectorization (llama/19108) 2026-01-30 15:56:40 +02:00
Georgi Gerganov 7fb0f823de cuda : fix "V is K view" check for non-unified KV cache (llama/19145) 2026-01-30 15:56:40 +02:00
Georgi Gerganov f28a733025 CUDA: tune GLM 4.7 Flash FA kernel selection logic (DGX Spark) (llama/19142) 2026-01-30 15:56:40 +02:00
Nikhil Jain dfdd2fee83 ggml webgpu: Split shared state (webgpu_context) into global state and per-thread state (llama/18976)
* Squashed commit of the following:

commit b3c6bf4b0450d8d452b934df27a0fb7cb53cd755
Author: Abhijit Ramesh <abhijitramesh2k@gmail.com>
Date:   Mon Dec 1 18:29:00 2025 -0800

    ggml webgpu: fix xielu parameter passing (llama/11)

    The XIELU operation was incorrectly using static_cast to convert
    float parameters to uint32_t, which converted numeric values instead
    of preserving IEEE 754 bit patterns. This caused incorrect values
    to be interpreted by the GPU shader.

    * Use reinterpret_cast to preserve float bit patterns when passing
      through uint32_t params buffer
    * Update WGSL shader parameter types from u32 to f32
    * Re-enable XIELU support (was disabled due to numerical issues)

    Fixes NMSE test failures for XIELU operation on WebGPU backend.

commit 5ca9b5e49ea7cddc9ab7c8b43a11a9c76a4dff4a
Author: neha-ha <137219201+neha-ha@users.noreply.github.com>
Date:   Tue Nov 18 12:17:00 2025 -0800

    Refactored pipelines and workgroup calculations (llama/10)

    * refactored pipelines

    * refactored workgroup calculation

    * removed commented out block of prior maps

    * Clean up ceiling division pattern

    ---------

    Co-authored-by: Neha Abbas <nehaabbas@eduroam-169-233-141-223.ucsc.edu>
    Co-authored-by: Reese Levine <reeselevine1@gmail.com>

Author: James Contini <jamescontini@gmail.com>
Date:   Wed Oct 29 23:13:06 2025 -0700

    formatted embed wgsl and ggml-webgpu.cpp

commit e1f6baea31645e5d96ad53664acae856f74b96f4
Author: James Contini <jamescontini@gmail.com>
Date:   Wed Oct 29 23:08:37 2025 -0700

    implemented REPL_Template support and removed bug in unary operators kernel

commit 8c70b8fece445cdc9a8c660dbddbf201e52da2bb
Author: James Contini <jamescontini@gmail.com>
Date:   Wed Oct 15 16:14:20 2025 -0700

    responded and dealt with PR comments

commit f9282c660c10dec4487d434549bdb707a9cd9f37
Author: James Contini <jamescontini@gmail.com>
Date:   Sun Oct 12 13:41:41 2025 -0700

    removed unnecessary checking if node->src[1] exists for unary operators

commit 4cf28d7dec41c29186d66152735b244c5699f9dc
Author: James Contini <jamescontini@gmail.com>
Date:   Sun Oct 12 13:32:45 2025 -0700

    All operators (including xielu) working

commit 74c6add1761a59d2c2ff60b60e8ad3c8300f6d3e
Author: James Contini <jamescontini@gmail.com>
Date:   Fri Oct 10 13:16:48 2025 -0700

    fixed autoconfig

commit 362749910be4f0120c8ffb21ceddeb7d2c088e51
Author: James Contini <jamescontini@gmail.com>
Date:   Fri Oct 10 13:10:46 2025 -0700

    removed vestigial files

commit cb0858333785757804c5104e59c4981843207c16
Author: James Contini <jamescontini@gmail.com>
Date:   Fri Oct 10 12:59:32 2025 -0700

    abides by editor-config

commit 5360e2852a4b51197d7d67d0a5d42e908b02d7ed
Author: James Contini <jamescontini@gmail.com>
Date:   Fri Oct 10 12:45:57 2025 -0700

    rms_norm double declaration bug atoned

commit 7b09baa4aa53711be5a126043670cc182c78bfcd
Merge: 8a6ec843 74b8fc17
Author: James Contini <jamescontini@gmail.com>
Date:   Fri Oct 10 11:50:03 2025 -0700

    resolving merge conflicts

commit 8a6ec843a50ab82f8cef59b4558eb63f318ba02d
Author: James Contini <jamescontini@gmail.com>
Date:   Wed Oct 8 18:06:47 2025 -0700

    unary operators pass ggml tests

commit c3ae38278a2db236adc5912c9140e4f0d63f2c19
Author: James Contini <jamescontini@gmail.com>
Date:   Wed Oct 1 16:22:40 2025 -0700

    neg passes backend test

commit aa1c9b2f8877a405470ca56709c42a1fd43713de
Author: James Contini <jamescontini@gmail.com>
Date:   Tue Sep 30 23:55:27 2025 -0700

    neg f16xf32xip builds and runs; haven't actually run a model that uses the neg kernel yet, though

Co-authored-by: James Contini <jamescontini@gmail.com>
Co-authored-by: Neha Abbas <neabbas@ucsc.edu>
Co-authored-by: Abhijit Ramesh <abhijitramesh2k@gmail.com>

* Remove extra code and format

* Add ops documentation (finally)

* ggml webgpu: add SOFTPLUS unary operator

Implements SOFTPLUS (log(1 + exp(x))) with f16/f32 support. Uses f32
precision for intermediate calculations to prevent f16 overflow.

* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support
* Follow Vulkan backend numerical stability pattern

* ggml webgpu: add EXPM1 unary operator

Implements EXPM1 (exp(x) - 1) with f16/f32 support.

* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support

* ggml webgpu: add FLOOR unary operator

Implements FLOOR (rounds down to nearest integer) with f16/f32 support.

* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support

* ggml webgpu: add CEIL unary operator

Implements CEIL (rounds up to nearest integer) with f16/f32 support.

* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support

* ggml webgpu: add ROUND unary operator

Implements ROUND (rounds to nearest integer) with f16/f32 support.

* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support

* ggml webgpu: add TRUNC unary operator

Implements TRUNC (truncates towards zero) with f16/f32 support.

* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support

* docs : update WebGPU support for unary operators (FLOOR, CEIL, ROUND, TRUNC, EXPM1, SOFTPLUS)

* Updates to webgpu get_memory

* Move shared state (webgpu_context) and device creation out of registration context, device context, and buffer context, and move into backend context

* Small cleanup

* Move Instance, Device, Adapter, Device creation, and capabilities to global state while moving Queue, pipelines, and buffers to per-thread state.

* Cleanups

* More cleanup

* Move staging_buf mutex to global context

* Resolve merge

* Resolve merge

* Resolve merge

* Clean up merge errors, delete forward declaration, and run clang-format

* Rename device_init to backend_init

* Move webgpu_context to backend_context

* Move buffer context members into global context and refactor function calls

* Run clang-format

* Remove comments

* Move parameter buffers to per-thread, add single memset_tensor param buf

* Fix CI compilation issue

* Fix builds for emscripten not supporting subgroups

* cleanup

* cleanup

---------

Co-authored-by: Reese Levine <reeselevine1@gmail.com>
2026-01-30 15:56:40 +02:00
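One of the squashed commits above (the XIELU parameter fix) replaces a value-converting cast with a bit-preserving one when floats travel through the uint32_t params buffer. A portable sketch of the difference, with std::memcpy standing in for the reinterpret_cast mentioned in the commit:

```
#include <cstdint>
#include <cstring>

// Bit-preserving float -> u32: the buffer ends up holding the IEEE 754
// bit pattern, which the shader can read back as f32 unchanged.
static uint32_t f32_bits(float x) {
    uint32_t u;
    std::memcpy(&u, &x, sizeof(u));
    return u;
}

// The buggy version used static_cast, which converts the numeric value
// (static_cast<uint32_t>(0.5f) == 0), so the parameter payload was lost
// before it ever reached the GPU.
```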
Vishal Singh 9c75c793a6 ggml-zendnn : update ZenDNN git tag to main branch (llama/19133) 2026-01-30 15:56:40 +02:00
Johannes Gäßler 9d94d0f782 CUDA: tune GLM 4.7 Flash FA kernel selection logic (llama/19097) 2026-01-30 15:56:40 +02:00
Alberto Cabrera Pérez 00885e08e2 ggml-cpu: aarch64: q6_K repack gemm and gemv (and generic) implementations (i8mm) #18860 (llama/18888)
* Boilerplate for q6_K repack

* q6_K repack to q6_Kx8 implementation

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* q6_K generic gemv and gemm

* wip, gemm_q6_K 8x8

* Still WIP: loading of q8s, q6h and q6l

* first working version of q6_K gemm

* Moved q6 loads outside of sb block, Unrolled inner loop

* Replaced modulo with mask

* First implementation of GEMV

* ggml_vdotq_s32 -> vdotq_s32

* Reduce width of accumulators in q6_K gemv

* Bsums instead of calc bias. Preload scales to use vget_lane. Unroll.

* Reuse scales in GEMM (same GEMV opt)

* Added todos for bsum and different qh repack

* Arch fallback

* VSLIQ for merging qh and ql

* Removed TODO, already tested

* Apply suggestions

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Removed unused import

---------

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-01-30 15:56:40 +02:00
Gaurav Garg 5fcbbdc0dd Reduce CPU-side stalls due to the CUDA command buffer being full (llama/19042)
* [CUDA] Reduce CPU-side stalls due to the CUDA command buffer being full

With pipeline parallelism, during prompt processing, the CPU-side CUDA command buffer gets full, stalling the CPU. As a result, not enough work gets submitted to the GPU, causing bubbles in the GPU timeline.
Fix this by setting the CUDA environment variable CUDA_SCALE_LAUNCH_QUEUES to 4x to increase the command buffer size.

* Set the env variable in the CUDA backend registry allocation

* Add link to PR in code comment

* Remove warning logs and update documentation
2026-01-30 15:56:40 +02:00
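A minimal sketch of the mechanism from the entry above, assuming a POSIX setenv call (the in-tree registry code and the Windows path differ): set the scale only when the user has not already chosen a value, so it stays overridable. As noted earlier in this log, this default was later reverted after hangs were reported on Jetson Orin AGX.

```
#include <cstdlib>

// Raise the CUDA launch-queue scale only if the variable is unset,
// keeping it user-overridable from the environment.
static void set_launch_queue_scale_default() {
    if (std::getenv("CUDA_SCALE_LAUNCH_QUEUES") == nullptr) {
        setenv("CUDA_SCALE_LAUNCH_QUEUES", "4x", /*overwrite=*/0);
    }
}
```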
shalinib-ibm b2e2032856 ggml-cpu: Enable FP16 MMA kernels on PPC (llama/19060) 2026-01-30 15:56:40 +02:00
lhez 56f82a9f33 opencl: add flattened q6_K mv (llama/19054) 2026-01-30 15:56:40 +02:00
Johannes Gäßler 41d5d7bb0e CUDA: fix padding of GQA to power of 2 in FA (llama/19115) 2026-01-30 15:56:40 +02:00
Johannes Gäßler f63848eada CUDA: faster FA for GQA > 1 but not power of 2 (llama/19092) 2026-01-30 15:56:40 +02:00
ccbinn 4372b87b8e metal : fix recommendedMaxWorkingSetSize availability on legacy iOS/macOS (llama/19088)
Co-authored-by: chenbin11 <chenbin11@kuaishou.com>
2026-01-30 15:56:40 +02:00
Aman Gupta 1642a4fb60 ggml-cpu: Use tiled FA for prompt-processing (llama/19012)
* ggml-cpu: Use tiled FA for prompt-processing

FA performance on CPU suffers on long contexts because it essentially uses a vector kernel. This PR adds a tiled FA for PP. Perf tuning for tile sizes was done on an AMD EPYC single-socket 64-core machine.

* fix out of bounds for mask

* skip rows where there are all masks

* skip tile if mask is inf

* store mask in worksize

* check inf tile earlier
2026-01-30 15:56:40 +02:00
Georgi Gerganov d2b51404e4 kv-cache : support V-less cache (llama/19067)
* kv-cache : support V-less cache

* cuda : better check for V_is_K_view

* cuda : improve V_is_K_view check

* graph : add comments

* hparams : refactor
2026-01-30 15:56:40 +02:00
Johannes Gäßler f53eafd745 CUDA: re-use MLA K data for V in MMA FA (llama/19057) 2026-01-30 15:56:40 +02:00
Aman Gupta 13577a6ce4 ggml-cuda: enable cuda-graphs for `n-cpu-moe` (llama/18934)
* ggml-cuda: add split-wise cuda graph

* add n-cpu-moe compare_llama_bench.py

* fix hip/musa builds
2026-01-30 15:56:40 +02:00
nullname 79f1bb3d35 ggml-hexagon: flash-attn opt (llama/19025)
* optimize flash attention kernel by improving score computation and online softmax update

* wip

* Refactor online softmax update in flash attention kernel for improved performance

* Optimize flash attention kernel by replacing float array with HVX_Vector for score computation

* wip
2026-01-30 15:56:40 +02:00
Neo Zhang 0d9dda5a99 use malloc to support both iGPU and dGPU at the same time (llama/18992)
* use malloc to support both iGPU and dGPU at the same time

* support windows

---------

Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2026-01-30 15:56:40 +02:00
Alberto Cabrera Pérez e090d91f5e ggml-cpu: aarch64: q5_K repack gemm and gemv (and generic) implementations (i8mm) (llama/18860)
* Boilerplate for q5_Kx8 REPACK on ARM and fallback

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* Implements make_block_q5_Kx8 by extending make_block_q4_Kx8

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* q5_K repack gemm and gemv generics

* Gemm and Gemv ARM implementations (i8mm)

* Improved qh manipulation looking at non-repack vec_dot implementation

* Full unroll

* Apply Q5_K Gemv vand and vshl optimizations to gemm. Improve comments.

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* Fix wrong fallback definitions of Q5_K

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* Fixed comments. Reverted unnecessary formatting

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* Fixed typo in generic definitions

* Switching AND + Shift with Shift Insert. Better op interleaving.

* Vectorize + unroll the block scales

* Apply gemm optimizations to gemv

* Improve bias calculation

---------

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>
2026-01-30 15:56:40 +02:00
Georgi Gerganov 3f96a1da0e mla : make the V tensor a view of K (llama/18986)
* mla : pass V as a view of K to the FA op

* cuda : adjust mla logic to new layout

* kv-cache : fix rope shift

* tests : remove comment

* cuda : fix reusable_cutoff

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-01-30 15:56:40 +02:00
Johannes Gäßler f21d0cbb1a CUDA: fix alignment check for FA (llama/19023) 2026-01-30 15:56:40 +02:00
lhez 0e030b852a opencl: enable the general fp mm for non-cont input and as a fallback for specialized kqv kernel for adreno (llama/18970)
* opencl: add `copy_to_contiguous` and utilize mm kernels

* opencl: only copy to cont for f32 and f16 tensors

* opencl: use cont mm for fallback when dst is large

* opencl: use nb local to copy-to-cont

* opencl: use local offset as well
2026-01-30 15:56:40 +02:00
Aman Gupta d4fafcfc6f CUDA: add gqa_ratio 4 for GLM 4.7 flash (llama/18953) 2026-01-30 15:56:40 +02:00
shaofeiqi 167fec69d5 opencl: add TRI op support (llama/18979) 2026-01-30 15:56:40 +02:00
Aleksei Nikiforov 55927d42ef ggml-zdnn : mark zDNN buffers as non-host (llama/18967)
While buffers reside in host memory,
additional transformation is needed to use buffers with zDNN.

Fixes #18848
2026-01-30 15:56:40 +02:00
Jeff Bolz b7e323f40b vulkan: Remove transfer_ctx, do everything in compute_ctx. (llama/18945)
* vulkan: Remove transfer_ctx, do everything in compute_ctx.

We had a bug where a set_tensor_async (using transfer_ctx) didn't get
submitted before the graph_compute (using compute_ctx) that came after
it. To avoid this sort of issue, just do everything in compute_ctx.

Remove transfer_cmd_pool, which was already unused.

* fix crash with perf logger
2026-01-30 15:56:40 +02:00
Jeff Bolz b2bc4d810b vulkan: support flash attention GQA/split_k with small batches (llama/18938) 2026-01-30 15:56:40 +02:00
Masato Nakasaka 3bbf4ced47 Revert "vulkan: force full subgroups for flash attention to fix intel subgroup crash (#17356)" (llama/18831)
This reverts commit 980b7cd17e055c8c587f79ffda7eb4fddf405566.
2026-01-30 15:56:40 +02:00
Jeff Bolz 660d943ff8 vulkan: Use mul_mat_vec_id for small values of n (llama/18918)
Change ggml_vk_mul_mat_vec_id_q_f16 to loop over the batch dimension and
update the indexing calculations in get_offsets.

Mat-vec is faster than mat-mat for small values of n. We don't get the same
reuse of the weights as in the non-ID path, but with this the cost is linear
in n rather than n>1 being far slower than n==1.
2026-01-30 15:56:40 +02:00
Oliver Simons 924a9e292c CUDA: Fix builds for older CCCL versions by ifdefing strided_iterator (llama/18964)
* CUDA: Fix builds for older CCCL versions by ifdefing strided_iterator

Strided iterator was added in [CCCL
3.1](https://github.com/NVIDIA/cccl/releases/tag/v3.1.0), which is packaged into
[CTK
13.1](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#id5)

* Unindent as per code review request
2026-01-30 15:56:40 +02:00
Oliver Simons fdc83ee3c0 CUDA: Replace init_offsets kernel with iterators in cub-based argsort (llama/18930)
* CUDA: Replace `init_offsets` with iterators in argsort

This is a QOL improvement, saving us the cost of materializing the
offsets buffer

* Remove unnecessary include from top-k.cu
2026-01-30 15:56:40 +02:00
Adrien Gallouët bf71ffa6b3 ggml : cleanup path_str() (llama/18928)
- Remove pragmas as `std::codecvt_utf8` is not used.
- Avoid implicit `strlen()`.

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-01-30 15:56:40 +02:00
Georgi Gerganov b0517d6912 metal : enable FA for MLA heads (llama/18950) 2026-01-30 15:56:40 +02:00
Georgi Gerganov 47f3e3b927 ggml : add ggml_build_forward_select (llama/18550)
* ggml : add ggml_build_forward_select

* cuda : adapt CUDA graph compat to new feature

* vulkan : update logic to handle command buffer closing

* ggml : check compute for fusion

* ggml : add comment
2026-01-30 15:56:40 +02:00
lhez 62a09b106d opencl: fix q6_K mv for m=1 (llama/18893) 2026-01-30 15:56:40 +02:00
Reese Levine 389dafc7c2 ggml webgpu: support for backend sampling (llama/18880) 2026-01-30 15:56:40 +02:00
Thore Koritzius 511ca7a1f4 ggml : extend ggml_pool_1d + metal (llama/16429)
* chore: resolve conflicts

* feat: ggml metal impl

* fix: ggml_metal_kargs_pool_1d struct

* fix: require contiguous input

* chore: test pool_1d

* chore: limit pool1d test cases to p0=0 and s0=k0 to conform with asserts

* chore: add p0 and s0 to testing

* fix: allow padding for cpu and metal

* Update ggml/src/ggml-metal/ggml-metal.metal

* fix: correct single-threaded loop

* ggml : cleanup

* tests : add ne[1] != 1 tests

* fix: ne[1] handling in np

* cont : fixes

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-01-30 15:56:40 +02:00
Perry Naseck ecb4b80c35 ggml-blas: hide warnings from included BLAS headers (llama/18818)
* fix compile def openblas, blis for compat libs, nvpl compile def, warn if no blas vendor set

* ggml-blas: hide warnings from included BLAS headers
2026-01-30 15:56:40 +02:00
Raul Torres 42960b6073 CANN: Remove unused `ggml_cann_get_device` function (llama/18625) 2026-01-30 15:56:40 +02:00
Chenguang Li 2fceb5a80f CANN: fix an issue where get_env was not fully renamed (llama/18796)
* CANN: fix an issue where get_env was not fully renamed

* ci: add cann with acl group

* ci: define use_acl_graph using GitHub Action

* ci: update cann dockerfile with acl graph
2026-01-30 15:56:40 +02:00
hipudding 854274a297 CANN: support gated linear attn (llama/18653)
* CANN: support gated linear attn

This change adds support for the GGML_OP_GATED_LINEAR_ATTN operator.
The feature was implemented by YushengZhao. Because the previous
submission was based on an outdated codebase, this PR was rebased
before merging.

Co-authored-by: YushengZhao <yusheng.chao@outlook.com>
Co-authored-by: hipudding <huafengchun@gmail.com>

* CANN: optimize OP gla

Optimize gla for high performance

* Remove unused comments

---------

Co-authored-by: 赵禹昇 <2501112001@cninfer02.localdomain>
Co-authored-by: YushengZhao <yusheng.chao@outlook.com>
2026-01-30 15:56:40 +02:00
shaofeiqi ed6004d051 OpenCL: add SOLVE_TRI op support (llama/18846) 2026-01-30 15:56:40 +02:00
Georgi Gerganov 290ff3d28d cuda : print less debug logs when disabling cuda graphs (llama/18868) 2026-01-30 15:56:40 +02:00
Johannes Gäßler f2f0ba0384 CUDA: fix alignment on register spill for FA (llama/18815) 2026-01-30 15:56:40 +02:00
shalinib-ibm 78a23d4830 ggml-cpu: optimize ggml_vec_dot_bf16 for Power9 (llama/18837) 2026-01-30 15:56:40 +02:00
Max Krasnyansky 50b7ab3d46 hexagon: support for OP_CPY, host buffers now optional (llama/18822) 2026-01-30 15:56:40 +02:00
Oliver Simons bc09047405 CUDA: Factor out and re-use `block_reduce` function (llama/18785)
* CUDA: Refactor and expose two_stage_warp_reduce_* function

* Use `two_stage_warp_reduce` also in softmax kernel, move smem out of it

Moving smem out of the `__device__` function into the `__global__` function
allows for explicit smem reuse, as neither the compiler nor the CUDA runtime
seems to free it afterwards (`cudaFuncSetAttribute` fails when not accounting
for it once for each call to two_stage_warp_reduce)

* Update ggml/src/ggml-cuda/common.cuh

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

* Use two_stage_warp_reduce in group_norm_f32

* Use two_stage_warp_reduce in rms_norm_f32

* Fix smem calculation which expects bytes

* Make `two_stage_warp_reduce` accept all values warp_reduce accepts

Also integrate it into norm_f32 function

* Use two_stage_warp_reduce in l2_norm_f32

* Use type traits for block reduction for better legibility

Also address other requests by @am17an, such as variable renaming

* Make norm tests cover all cuda paths

* Mark columns % WARP_SIZE !=0 as supported for RMS_NORM_BACK

Unit-tests passed locally, let's see if they pass in the CI as well

* Use `enum class` for `block_reduce_method`

This is more type-safe than plain enum

* Rename variables as suggested in code review by @am17an

* Rename two_stage_warp_reduce -> block_reduce

* Fix trailing whitespace in common.cuh

* Make condition of static_assert type-dependent

This delays evaluation until the template is actually instantiated.
Otherwise, some compilers may evaluate the assert when parsing the
template, resulting in build errors as observed here:

https://github.com/ggml-org/llama.cpp/actions/runs/20960323123/job/60235530068?pr=18785

* Inline definitions

---------

Co-authored-by: Aman Gupta <amangupta052@gmail.com>
2026-01-30 15:56:40 +02:00
Jeff Bolz 4b155e9bfb vulkan: Check maxStorageBufferRange in supports_op (llama/18709)
* vulkan: Check maxStorageBufferRange in supports_op

* skip maxStorageBufferRange check when shader64BitIndexing is enabled
2026-01-30 15:56:40 +02:00
Daniel Bevenius 25aeb66a4a CUDA : fix typo in clang pragma comment [no ci] (llama/18830) 2026-01-30 15:56:40 +02:00
Ruben Ortlam 49762e8fb3 vulkan: work around Intel fp16 bug in mmq (llama/18814) 2026-01-30 15:56:40 +02:00
Perry Naseck 17656e56dc ggml-metal: do not copy headers for embedded, use current binary dir for embedded (llama/18705) 2026-01-30 15:56:40 +02:00
yulo c6a495ae5d HIP: add fattn-mma-f16 for RDNA4 (llama/18481)
* finish VQ mma

* flash_attn_ext_f16_iter

* KQ_rowsum

* correct exp

* fix scale error

* fix softmax scale

* fix softmax scale

* enable fattn on cpu side

* fix random error

* disable fattn-mma-f16 on rdna3

* fix wrong col for rdna

* use identity mat to transpose

* resolve conflicts

* basic tuning for DeepSeek-R1-Distill-Qwen-1.5B

* fix volta compile error

* align rdna4 policy for fattn

* adjust fattn policy

* adjust kernel selection logic

* update as the review comments

* keep fattn-wmma logic

* adjust kernel selection logic

---------

Co-authored-by: zhang hui <you@example.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-01-30 15:56:40 +02:00
Bráulio Oliveira 7aa8818647
examples : use -dev/--device and WHISPER_ARG_DEVICE (#3557)
Align device selection naming with llama.cpp.
2026-01-21 08:40:30 +01:00
Yshtola f53dc74843
whisper : Fix UTF-8 character boundary issue in segment wrapping (max_len) (#3592)
The current implementation in `whisper_wrap_segment()` uses `strlen()` to count bytes, not UTF-8 characters. When splitting segments at `max_len`, this can break multi-byte UTF-8 characters, resulting in invalid sequences displayed as `�` (U+FFFD replacement character).
2026-01-16 14:16:05 +02:00
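A minimal sketch of boundary-aware handling (not the project's exact fix): count UTF-8 code points rather than bytes, and back a split position up to the nearest lead byte so a wrap can never land inside a multi-byte character.

```
#include <cstddef>
#include <string>

// Count UTF-8 code points: every byte except continuation bytes (0b10xxxxxx)
// starts a new code point.
static size_t utf8_length(const std::string & s) {
    size_t n = 0;
    for (unsigned char c : s) {
        if ((c & 0xC0) != 0x80) {
            n++;
        }
    }
    return n;
}

// Move a byte index back to the nearest code-point boundary before splitting.
static size_t utf8_prev_boundary(const std::string & s, size_t i) {
    while (i > 0 && (static_cast<unsigned char>(s[i]) & 0xC0) == 0x80) {
        i--;
    }
    return i;
}
```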
Georgi Gerganov 2eeeba56e9
release : v1.8.3 2026-01-15 11:54:31 +02:00
Georgi Gerganov 21c1765fcb
benches : update 2026-01-15 11:53:09 +02:00
Georgi Gerganov 47af2fb70f sync : ggml 2026-01-14 09:11:59 +02:00
Georgi Gerganov 6ee0eaf531 CUDA : fix unused argument when USE_CUDA_GRAPH=OFF (llama/18800) 2026-01-14 09:11:59 +02:00
Jeff Bolz ab1828dc1c vulkan: change memory_logger to be controlled by an env var (llama/18769) 2026-01-14 09:11:59 +02:00
Jeff Bolz aedf332ec5 vulkan: Use VK_EXT_shader_64bit_indexing to handle large mat_mul(_id) (llama/18678)
This fixes incoherent output in Llama-4-Maverick-17B-128E-PAB-Q8_0, which
has a mul_mat_id with an A matrix that's Q8_0 8192 x 5120 x 128.

This should work when the number of blocks in the A matrix is less than 2^32
(for mul_mat_vec or mul_mm_cm2), or for mul_mm I think the limit is like
2^32*LOAD_VEC_A elements.

- Divide batch_stride by QUANT_K earlier, so the block index calculation works in 32b.
- Each vk_pipeline_struct has a linked list of pipelines that will allow it to handle
variants. So far this change just adds a single use case for this, compiling with the
e64BitIndexingEXT flag.
- Use the 64b indexing variant when the A matrix is larger than maxStorageBufferRange.

64-bit indexing has some cost - around 3-5% in MoE models, so it's worth the effort
to avoid enabling it unconditionally.
2026-01-14 09:11:59 +02:00
Ruben Ortlam 716d68aca9 vulkan: Disable large coopmat matmul configuration on proprietary AMD driver (llama/18763)
* vulkan: Disable large coopmat matmul configuration on proprietary AMD driver

* Also disable the large tile size
2026-01-14 09:11:59 +02:00
Ruben Ortlam c0433783c3 Vulkan: Optimize Matmul parameters for AMD GPUs with Coopmat support (llama/18749)
* vulkan: Enable and optimize large matmul parameter combination for AMD

* limit tuning to AMD GPUs with coopmat support

* use tx_m values instead of _l
2026-01-14 09:11:59 +02:00
Georgi Gerganov ecfcc65fbf talk-llama : sync llama.cpp 2026-01-14 09:11:59 +02:00
Georgi Gerganov 13dc9a912b sync : ggml 2026-01-14 09:11:59 +02:00
shaofeiqi d4ce2e554f opencl: add SOFTPLUS op support (llama/18726) 2026-01-14 09:11:59 +02:00
Johannes Gäßler 3a1ea96373 HIP: adjust RDNA3.5 MMQ kernel selction logic (llama/18666) 2026-01-14 09:11:59 +02:00
Perry Naseck 484b17053a cmake : update blas logic (llama/18205) 2026-01-14 09:11:59 +02:00
Michael Wand 45be2cd27a Corrected: changed s13 = src1->nb[3] instead of nb[2] (llama/18724) 2026-01-14 09:11:59 +02:00
shaofeiqi 4af27bf2da opencl: add EXPM1 op (llama/18704) 2026-01-14 09:11:59 +02:00
Reese Levine 4ac8c3b478 Updates to webgpu get_memory (llama/18707) 2026-01-14 09:11:59 +02:00
Aaron Teo fff3ebd93d llama: use host memory if device reports 0 memory (llama/18587) 2026-01-14 09:11:59 +02:00
Masashi Yoshimura a71127dfd8 ggml-webgpu: Fix GGML_MEM_ALIGN to 8 for emscripten. (llama/18628)
* Fix GGML_MEM_ALIGN to 8 for emscripten.

* Add a comment explaining the need for GGML_MEM_ALIGN == 8 in 64-bit wasm with emscripten
2026-01-14 09:11:59 +02:00
Reese Levine 1bb903f599 ggml webgpu: initial flashattention implementation (llama/18610)
* FlashAttention (llama/13)

* Add inplace softmax

* Move rms_norm to split row approach

* Update debug for supports_op

* clean up debug statements

* neg f16xf32xip builds and runs; haven't actually run a model that uses the neg kernel yet, though

* neg passes backend test

* unary operators pass ggml tests

* rms_norm double declaration bug atoned

* abides by editor-config

* removed vestigial files

* fixed autoconfig

* All operators (including xielu) working

* removed unnecessary checking if node->src[1] exists for unary operators

* responded and dealt with PR comments

* implemented REPL_Template support and removed bug in unary operators kernel

* formatted embed wgsl and ggml-webgpu.cpp

* Faster tensors (llama/8)

Add fast matrix and matrix/vector multiplication.

* Use map for shader replacements instead of pair of strings

* Wasm (llama/9)

* webgpu : fix build on emscripten

* more debugging stuff

* test-backend-ops: force single thread on wasm

* fix single-thread case for init_tensor_uniform

* use jspi

* add pthread

* test: remember to set n_thread for cpu backend

* Add buffer label and enable dawn-specific toggles to turn off some checks

* Intermediate state

* Fast working f16/f32 vec4

* Working float fast mul mat

* Clean up naming of mul_mat to match logical model, start work on q mul_mat

* Setup for subgroup matrix mat mul

* Basic working subgroup matrix

* Working subgroup matrix tiling

* Handle weirder sg matrix sizes (but still % sg matrix size)

* Working start to gemv

* working f16 accumulation with shared memory staging

* Print out available subgroup matrix configurations

* Vectorize dst stores for sg matrix shader

* Gemv working scalar

* Minor set_rows optimization (llama/4)

* updated optimization, fixed errors

* non vectorized version now dispatches one thread per element

* Simplify

* Change logic for set_rows pipelines

---------

Co-authored-by: Neha Abbas <nehaabbas@macbookpro.lan>
Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>
Co-authored-by: Reese Levine <reeselevine1@gmail.com>

* Comment on dawn toggles

* Working subgroup matrix code for (semi)generic sizes

* Remove some comments

* Cleanup code

* Update dawn version and move to portable subgroup size

* Try to fix new dawn release

* Update subgroup size comment

* Only check for subgroup matrix configs if they are supported

* Add toggles for subgroup matrix/f16 support on nvidia+vulkan

* Make row/col naming consistent

* Refactor shared memory loading

* Move sg matrix stores to correct file

* Working q4_0

* Formatting

* Work with emscripten builds

* Fix test-backend-ops emscripten for f16/quantized types

* Use emscripten memory64 to support get_memory

* Add build flags and try ci

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>

* Remove extra whitespace

* Move wasm single-thread logic out of test-backend-ops for cpu backend

* Disable multiple threads for emscripten single-thread builds in ggml_graph_plan

* Refactored pipelines and workgroup calculations (llama/10)

* refactored pipelines

* refactored workgroup calculation

* removed commented out block of prior maps

* Clean up ceiling division pattern

---------

Co-authored-by: Neha Abbas <nehaabbas@eduroam-169-233-141-223.ucsc.edu>
Co-authored-by: Reese Levine <reeselevine1@gmail.com>

* Start work on flash attention

* Shader structure set up (many bugs still)

* debugging

* Working first test

* Working with head grouping, head sizes to 128, logit softcap, mask/sinks enabled, f32

* Generalize softmax to work with multiple subgroups, f16 accumulation, mask shared memory tiling

* Start work on integrating pre-wgsl

* Separate structs/initial shader compilation library into separate files

* Work on compilation choices for flashattention

* Work on subgroup matrix/tile size portability

* subgroup size agnostic online softmax

* Cleanups, quantization types

* more cleanup

* fix wasm build

* Refactor flashattention to increase parallelism, use direct loads for KV in some cases

* Checkpoint

* formatting

* Update to account for default kv cache padding

* formatting shader

* Add workflow for ggml-ci webgpu

* Try passing absolute path to dawn in ggml-ci

* Avoid error on device destruction, add todos for proper cleanup

* Fix unused warning

* Forgot one parameter unused

* Move some flashattn computation to f32 for correctness
2026-01-14 09:11:59 +02:00
Jeff Bolz 0bc0e5616e vulkan: fix push constant size for quantize_q8_1 (llama/18687)
I added an assert to catch further mismatches, and it found several.
Fix those, too.
2026-01-14 09:11:59 +02:00
Jeff Bolz 678c660e62 vulkan: optimize ssm_scan (llama/18630)
* vulkan: optimize ssm_scan

* fix warp vs subgroup naming
2026-01-14 09:11:59 +02:00
도로로도로또 f2d8588229 metal : add MoE kernel specialization for ne20=5 (llama/18667)
Add template specialization for kernel_mul_mm_id_map0 with ne20=5
to support models using 5 active experts (e.g., VAETKI).
2026-01-14 09:11:59 +02:00
Doctor Shotgun b9965c89a1 ggml: add env var GGML_OP_OFFLOAD_MIN_BATCH (llama/18535)
* ggml: add env var GGML_OP_OFFLOAD_MIN_BATCH
* makes the min_batch_size for triggering op offload configurable via env var, defaulting to the prior hardcoded value of 32

* ggml: read GGML_OP_OFFLOAD_MIN_BATCH once and store to dev ctx

* cann: forward declaration of device context struct

* cann: move offload op check after device context declaration

* cuda: fix whitespace

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

---------

Co-authored-by: Aman Gupta <amangupta052@gmail.com>
2026-01-14 09:11:59 +02:00
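The GGML_OP_OFFLOAD_MIN_BATCH entry above makes the offload threshold configurable and reads the variable only once, caching it in the device context. A minimal sketch of that pattern, assuming hypothetical names (read_min_batch_env, dev_ctx) rather than the actual llama.cpp symbols:

```
#include <algorithm>
#include <cstdlib>
#include <string>

// Hypothetical helper: parse GGML_OP_OFFLOAD_MIN_BATCH once, falling back to
// the previously hardcoded threshold of 32 when the variable is unset or invalid.
static int read_min_batch_env() {
    const char * val = std::getenv("GGML_OP_OFFLOAD_MIN_BATCH");
    if (val == nullptr) {
        return 32; // prior hardcoded default
    }
    try {
        return std::max(1, std::stoi(val));
    } catch (...) {
        return 32;
    }
}

// Hypothetical device context: the parsed value is stored once so the offload
// decision does not have to call getenv on every graph evaluation.
struct dev_ctx {
    int op_offload_min_batch = read_min_batch_env();

    bool should_offload(int batch_size) const {
        return batch_size >= op_offload_min_batch;
    }
};
```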
shaofeiqi 85a329cb08 opencl: add FILL op support (llama/18682) 2026-01-14 09:11:59 +02:00
Oliver Walsh 4f2ca7c163 cuda : fix build on cuda 12.8 (llama/18672)
compute121 requires 12.9

Signed-off-by: Oliver Walsh <owalsh@redhat.com>
2026-01-14 09:11:59 +02:00
Jeff Bolz a91ab72bd9 vulkan: reject ops when a tensor is too large to allocate (llama/18646) 2026-01-14 09:11:59 +02:00
virajwad 096e7e911a vulkan: Warptile tuning for Intel Xe2/Xe3 (llama/18178)
* modify warptile tuning for xe3

* intel vendor check w/ coopmat support

* fix back formatting

* fix formatting change 2

* move intel check to chip specific tuning part

* Change to support both windows and linux

* modify m_warptile to l_warptile for intel

* modify warptile tuning for bf16 matmuls to fix regression (m_warptile to l_warptile)

* Code style changes

* Code style changes (2)

* Code style changes (3)
2026-01-14 09:11:59 +02:00
Eve a576ed944a vulkan: more mul mat optimizations (llama/18533)
* q4_k

* q5_k

* q2_k

* q4_1

* q5_1

* better buf index
2026-01-14 09:11:59 +02:00
hipudding 5c583f3c02 CANN: Fix rename for get_env (llama/18652)
In #18624, get_env in ggml-cann was renamed to get_env_as_lowercase
to accurately reflect the function’s behavior and reduce the chance
of misuse. However, the update missed renaming call sites in other
files. This commit fixes that oversight.
2026-01-14 09:11:59 +02:00
Raul Torres 47671c81db CANN: Rename `get_env` to `get_env_as_lowercase` (llama/18624) 2026-01-14 09:11:59 +02:00
Max Krasnyansky a5f51ac75b Hexagon add support for f16/f32 flash attention, scale, set-rows and improve f16/32 matmul (llama/18611)
* hexagon: improve fp16 matmul and add fp32/fp16 flash-attention

* hexagon: add support for set-rows fp32 -> fp16 with i32/i64 row-idx

* hexagon: add support for SCALE fp32

* hexagon: replace scalar fp32 -> fp16 copy with HVX

* hexagon: optimize flash_atten_ext with aligned VTCM buffers and DMA

- Implements double-buffered DMA prefetching for K, V, and Mask tensors.
- Ensures K and V rows in VTCM are padded to 128 bytes to support aligned HVX operations.
- Correctly synchronizes DMA transfers to prevent race conditions.
- Uses `FLASH_ATTN_BLOCK_SIZE` of 128 for efficient chunking.

* hexagon: use aligned mad_f16

* hexagon: flash_atten more aligned ops

* hexagon: optimize scale_f32 hvx helpers

* hexagon: unroll fa loops

* hexagon: remove unused set-rows log

* hexagon: flash_attn_ext add support for DMAing Q

- Update `op_flash_attn_ext` to include Q row size in scratchpad allocation.
- Pad Q row size to 128 bytes for alignment.
- Implement DMA transfer for Q tensor in `flash_attn_ext_f16_thread`.
- Update dot product computations to use VTCM-buffered Q data.

* hexagon: fix handling of NANs hvx dotproducts

* hexagon: cleanup spad allocation in flash-atten

* hexagon: improve fp16/fp32 matmul

- Introduced `vec_dot_f16_f16` and `vec_dot_f16_f16_rx2` kernels using efficient HVX dot product intrinsics.
- Added `quantize_fp32_f16` to copy/convert weights from DDR to VTCM
- Updated `op_matmul` to use the optimized path when VTCM capacity allows and broadcasting requirements are compatible.
- Implemented fallback logic to the original implementation for complex broadcasting scenarios.

* hexagon: fix HVX_ARCH check

* hexagon: matmul cleanup and fp16 fixes

Use aligned vec_dot_f16 for 2d matmuls and unaligned version for 4d.

* hexagon: fix fp16 x fp16 matmuls and some minor refactoring

* hexagon: add support for GET_ROWS f32 -> f32

Also optimize SET_ROWS threading a bit when we have just a few rows to process.

* hexagon: optimize set-rows threading

* hexagon: update adb/run-bench.sh to properly support experimental and verbose options

* hexagon: flash_atten use aligned vectors for dot products
2026-01-14 09:11:59 +02:00
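The double-buffered DMA prefetching described in the entry above follows the usual ping-pong scheme: while block N is being processed out of VTCM, the transfer of block N+1 into the other buffer is already in flight. A schematic sketch under that assumption, with dma_start/dma_wait as stand-ins for the real Hexagon user-DMA API (implemented here as plain memcpy so the example is self-contained):

```
#include <cstddef>
#include <cstdint>
#include <cstring>

// Stand-ins for the asynchronous DMA API: a real implementation would queue a
// descriptor in dma_start and block on completion in dma_wait.
static void dma_start(void * dst_vtcm, const void * src_ddr, size_t nbytes) {
    std::memcpy(dst_vtcm, src_ddr, nbytes);
}
static void dma_wait() { /* wait for the outstanding transfer */ }

// Process n_blocks blocks of K/V data, overlapping the DMA of block i+1 with
// the compute on block i (ping-pong between two VTCM buffers).
static void process_blocks(const uint8_t * src_ddr, size_t block_bytes, int n_blocks,
                           uint8_t * vtcm_buf0, uint8_t * vtcm_buf1,
                           void (*compute_block)(const uint8_t *, size_t)) {
    uint8_t * bufs[2] = { vtcm_buf0, vtcm_buf1 };

    dma_start(bufs[0], src_ddr, block_bytes); // prime the pipeline with block 0

    for (int i = 0; i < n_blocks; ++i) {
        dma_wait(); // block i is now resident in VTCM

        if (i + 1 < n_blocks) {
            // start fetching block i+1 into the other buffer before computing on block i
            dma_start(bufs[(i + 1) % 2], src_ddr + (size_t)(i + 1) * block_bytes, block_bytes);
        }

        compute_block(bufs[i % 2], block_bytes);
    }
}
```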
Aadeshveer Singh 436f30d05f ggml : optimize cuda ssm_scan using warp-level reduction (llama/18505)
* ggml : optimize cuda ssm_scan using warp-level reduction

* ggml : apply code review suggestions (style, const, constexpr)

* ggml : add TODO regarding stride consistency
2026-01-14 09:11:59 +02:00
Jeff Bolz dbec71f6cf vulkan: support buffer_from_host_ptr (llama/18467)
* vulkan: support buffer_from_host_ptr

* hacky use of buffer_from_host_ptr for directio

* disable buffer_from_host_ptr cap

* use external memory for ggml_vk_host_malloc, revert model loader changes

* disable external_memory_host for MoltenVK

* take buffer memory types into account

* don't use external_memory_host for ggml_vk_host_malloc
2026-01-14 09:11:59 +02:00
Aman Gupta 575d894603 ggml-cuda: refactor cuda graph usage (llama/18637)
* ggml-cuda: refactor cuda graph usage

* use is_enabled() instead of enabled
2026-01-14 09:11:59 +02:00
Beinsezii ed674cfc10 mmq.cu: tune mmq/rocblas switching for RDNA (llama/18537)
* Patch perf regression for mmq kernels in ROCm

Recovers the performance lost in https://github.com/ggml-org/llama.cpp/issues/17917

* add n_experts branch like the cdna path

* mmq.cu: tune mmq/wmma switching for RDNA

* mmq.cu: move amd wmma mmq/wmma switching behind IS_RDNA3

* Update ggml/src/ggml-cuda/mmq.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Jiacheng (Jason) Chen <76919340+jiachengjason@users.noreply.github.com>
Co-authored-by: jiachengjason <jasonchen.jiacheng@gmail.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-01-14 09:11:59 +02:00
Adrien Gallouët 5520f27363 ggml : fix avx512bf16 build (llama/18623)
- include `immintrin.h` when required
- remove unused m512bh

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-01-14 09:11:59 +02:00
Raul Torres 9a1a6685ba CANN: Make `valid_values` variable `static const` (llama/18627) 2026-01-14 09:11:59 +02:00
nwyin e563e239a7 ggml webgpu: add CEIL operation support (llama/18605)
* ggml-webgpu: add CEIL operation support

Add support for the CEIL unary operation in the WebGPU backend:
- Add CEIL_FUNC shader template in unary_op.wgsl
- Add 4 shader variants (f32, f16, inplace versions)
- Initialize CEIL pipelines in ggml-webgpu.cpp
- Register CEIL in supports_op function

* docs: update WebGPU ops support for CEIL
2026-01-14 09:11:59 +02:00
Johannes Gäßler 9956333361 CUDA: fix FA FP16 accumulator overflow for Granite (llama/18614) 2026-01-14 09:11:59 +02:00
Aman Gupta 804f545454 ggml-cuda: check for srcs outside the cgraph (llama/18583)
* ggml-cuda: check for srcs outside the cgraph

* review: use leafs instead
2026-01-14 09:11:59 +02:00
Jeff Bolz 52ba45e2b8 vulkan: fix topk_moe_sigmoid_norm_bias failures in GLM-4.6 (llama/18582) 2026-01-14 09:11:59 +02:00
Jeff Bolz 0a99b4c377 vulkan: handle quantize_q8_1 overflowing the max workgroup count (llama/18515)
* vulkan: handle quantize_q8_1 overflowing the max workgroup count

* vulkan: Fix small tile size matmul on lavapipe

* fix mul_mat_id failures
2026-01-14 09:11:59 +02:00
Chenguang Li 1d657effe3 CANN: add operator fusion support for ADD + RMS_NORM (llama/17512)
This commit implements operator fusion for ADD + RMS_NORM operations
in the CANN backend to reduce memory access overhead and improve
performance. The fusion is controlled by the GGML_CANN_OPERATOR_FUSION
environment variable (default: false).

Changes:
- Implement ggml_cann_op_add_rms_norm_fused() using ACLNN AddRmsNorm
- Add ggml_cann_can_fuse() to check fusion eligibility
- Integrate fusion logic into computation graph evaluation
- Add test cases for ADD + RMS_NORM fusion
- Update documentation with new environment variable

The fusion combines ADD and RMS_NORM into a single kernel call,
which is more efficient than executing them separately.
2026-01-14 09:11:59 +02:00
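The fusion above hinges on an eligibility check: the ADD output must feed only the immediately following RMS_NORM, otherwise replacing the pair with one fused kernel would drop an intermediate that another node still needs. A hedged sketch of such a check against ggml's public graph accessors (the real ggml_cann_can_fuse may differ and also applies the GGML_CANN_OPERATOR_FUSION gate plus type/layout constraints):

```
#include "ggml.h"

// Sketch: nodes[i] is ADD, nodes[i+1] is RMS_NORM taking the ADD result, and no
// other node in the graph consumes that ADD result.
static bool can_fuse_add_rms_norm(struct ggml_cgraph * gf, int i) {
    const int n = ggml_graph_n_nodes(gf);
    if (i + 1 >= n) {
        return false;
    }
    struct ggml_tensor * add  = ggml_graph_node(gf, i);
    struct ggml_tensor * norm = ggml_graph_node(gf, i + 1);

    if (add->op != GGML_OP_ADD || norm->op != GGML_OP_RMS_NORM || norm->src[0] != add) {
        return false;
    }

    // The ADD result must have no other consumers in the graph.
    for (int j = 0; j < n; ++j) {
        struct ggml_tensor * node = ggml_graph_node(gf, j);
        if (node == add || node == norm) {
            continue;
        }
        for (int s = 0; s < GGML_MAX_SRC; ++s) {
            if (node->src[s] == add) {
                return false;
            }
        }
    }
    return true;
}
```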
Daniel Bevenius 4d6a3fb00d sampling : add support for backend sampling (llama/17004)
* sampling : add support for backend sampling

This commit adds support for performing sampling operations on the
backend (e.g. GPU) as part of the model computation graph.

The motivation for this feature is to enable sampling to be performed
directly on the backend as part of the computation graph being executed,
allowing for some or all of the sampling to be done on the backend.

For example, the backend sampler chain might select/sample a token
directly in which case only the sampled token needs to be transferred
from device memory to host memory.

It is also possible for the backend samplers to perform filtering of
the logits, or compute and filter the probability distribution, in
which case only the filtered logits or probabilities need to be
transferred back to system memory for further processing by CPU
samplers.

Currently the backend sampling works in a similar manner to how
pooling works: it is a function that is called by build_graph, and the
sampler operations become part of the model's computation graph.

* llama-cli : add backend sampler configuration

* server : add backend sampling options/configuration

* webui : add backend sampling options

* ggml : add initial cumsum implementation for CUDA

* sampling : enable all backend sampler tests

This commit enables all existing backend sampler tests in the
test-backend-sampler. Previously, some tests were disabled because
there were missing ggml operation implementations.

* graph : do not include llama-model.h

* sampling : always expose sampled_ids

This commit precomputes and caches the full-vocab token id list in
llama_context's constructor, so llama_get_backend_sampled_token_ids_ith
always returns a valid pointer.

The motivation for this is that it enables both common/sampling.cpp
and src/llama-sampling.cpp to simplify their logic.

Not all backend samplers that process logits need to set the
sampled_tokens_id, as they may not change the order of the logits; for
example, the temperature sampler only scales the logits but does not
change their order. Similarly, the logit bias sampler only adds bias to
specific token ids but does not change the order of the logits. In
these cases there will not be a device-to-host copy of the sampled
token ids, and this is the use case where having this precomputed
list is useful.

* sampling : ensure at most one output token per seq

This commit adds a check in the batch allocator to ensure that when
backend sampling is enabled, at most one output token is specified per
sequence.

* CUDA: Optimize argsort for gpu-based token sampling

Argsort is currently used for top-k. We optimize argsort in two ways:

1. Use `DeviceRadixSort` for single-row/sequence to parallelize it
   across our SMs
2. Use `DeviceSegmentedSort` for multi-row/sequence as this is the
   correct entrypoint (the function chooses different execution paths,
   it contains `DeviceSegmentedRadixSort` as one of the paths and will
   choose the best one according to heuristics).
   https://nvidia.github.io/cccl/cub/api/structcub_1_1DeviceSegmentedSort.html#overview

Some perf numbers for a RTX PRO 6000:

On the kernel level, tested with
`GGML_CUDA_DISABLE_GRAPHS=1 ./test-backend-ops -o ARGSORT perf`
Before:
```
  ARGSORT(type=f32,ne=[65000,16,1,1],order=0):                  4130 runs -   359.24 us/run
  ARGSORT(type=f32,ne=[200000,1,1,1],order=0):                  8192 runs -   861.34 us/run
  ARGSORT(type=f32,ne=[200000,16,1,1],order=0):                 1343 runs -  1020.01 us/run
```

After:
```
  ARGSORT(type=f32,ne=[65000,16,1,1],order=0):                  4130 runs -   312.41 us/run
  ARGSORT(type=f32,ne=[200000,1,1,1],order=0):                 16384 runs -    63.48 us/run
  ARGSORT(type=f32,ne=[200000,16,1,1],order=0):                 1343 runs -   874.36 us/run
```
2026-01-14 09:11:59 +02:00
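The single-row argsort path described above maps onto CUB's two-phase DeviceRadixSort API: one call with a null workspace pointer to query the temporary-storage size, then the actual key/value sort with the original token indices as values. A standalone sketch of that pattern (not the llama.cpp kernel itself), sorting one row of logits in descending order:

```
#include <cub/cub.cuh>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 8;
    std::vector<float> h_logits = {0.1f, 2.5f, -1.0f, 3.3f, 0.7f, 2.6f, -0.2f, 1.9f};
    std::vector<int>   h_idx(n);
    for (int i = 0; i < n; ++i) h_idx[i] = i;

    float *d_keys_in, *d_keys_out;
    int   *d_vals_in, *d_vals_out;
    cudaMalloc(&d_keys_in,  n * sizeof(float));
    cudaMalloc(&d_keys_out, n * sizeof(float));
    cudaMalloc(&d_vals_in,  n * sizeof(int));
    cudaMalloc(&d_vals_out, n * sizeof(int));
    cudaMemcpy(d_keys_in, h_logits.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_vals_in, h_idx.data(),    n * sizeof(int),   cudaMemcpyHostToDevice);

    // Phase 1: query the required temporary storage size.
    void * d_temp = nullptr;
    size_t temp_bytes = 0;
    cub::DeviceRadixSort::SortPairsDescending(d_temp, temp_bytes,
                                              d_keys_in, d_keys_out,
                                              d_vals_in, d_vals_out, n);
    cudaMalloc(&d_temp, temp_bytes);

    // Phase 2: sort logits descending, carrying the original token indices along.
    cub::DeviceRadixSort::SortPairsDescending(d_temp, temp_bytes,
                                              d_keys_in, d_keys_out,
                                              d_vals_in, d_vals_out, n);

    cudaMemcpy(h_idx.data(), d_vals_out, n * sizeof(int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("%d ", h_idx[i]); // token ids from highest to lowest logit
    printf("\n");

    cudaFree(d_keys_in); cudaFree(d_keys_out);
    cudaFree(d_vals_in); cudaFree(d_vals_out);
    cudaFree(d_temp);
    return 0;
}
```

For the multi-row case, the entry above instead hands per-row offsets to cub::DeviceSegmentedSort, which chooses an execution path per segment.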
Aman Gupta f0bf5b8cc3 CUDA: disable cuda graph when using n-cpu-moe (llama/18593)
* CUDA: disable cuda graph when using n-cpu-moe

* call ggml_cuda_set_device
2026-01-14 09:11:59 +02:00
Aman Gupta 88f5765c82 ggml-cuda: remove unused params in ggml_cuda_graph (llama/18579) 2026-01-14 09:11:59 +02:00
Aman Gupta 1e725546b0 ggml-cuda: fixes for concurrent streams (llama/18496) 2026-01-14 09:11:59 +02:00
Johannes Gäßler 60d178cee9 CUDA: only allocate FA tmp buffer if needed (llama/18564) 2026-01-14 09:11:59 +02:00
pl752 304e780e5f (Bugfix, ggml-cuda) Pool alloc count fix + small size computation type adjustment (llama/18559)
* CUDA: Fixed obj byte size instead of obj count being passed to pool alloc (fattn-common, dst_tmp_meta)

* CUDA: Explicitly casted some of the int alloc counts before multiplication in argsort

---------

Co-authored-by: pl752 <maximpl752@gmail.com>
2026-01-14 09:11:59 +02:00
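The pool-alloc fix above is the classic count-vs-bytes confusion: a typed pool allocator already multiplies by the element size, so passing a byte count where an element count is expected over-allocates by a factor of sizeof(T) (and risks overflow for large tensors). A minimal illustration with a hypothetical pool_alloc template, not the actual ggml_cuda_pool API:

```
#include <cstddef>
#include <vector>

// Hypothetical typed pool allocator: the caller passes an *element count*,
// the pool computes the byte size itself.
template <typename T>
T * pool_alloc(std::vector<char> & pool, size_t count) {
    pool.resize(count * sizeof(T));
    return reinterpret_cast<T *>(pool.data());
}

struct dst_meta { int row; int col; };

int main() {
    std::vector<char> pool_a, pool_b;
    const size_t n_meta = 1024;

    // Bug pattern: passing the byte size where an element count is expected
    // allocates sizeof(dst_meta) times more memory than needed.
    dst_meta * wrong = pool_alloc<dst_meta>(pool_a, n_meta * sizeof(dst_meta));

    // Fixed: pass the element count and let the pool scale by sizeof(dst_meta).
    dst_meta * right = pool_alloc<dst_meta>(pool_b, n_meta);

    (void) wrong;
    (void) right;
    return 0;
}
```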
Shouyu c9e9f083c2 ggml-hexagon: optimize activation function (llama/18393)
* refactor: refactor silu

* refactor: optimize swiglu

* refactor: remove unnecessary if in swiglu

* refactor: refactor swiglu_oai

* chore: fix formatting issue
2026-01-14 09:11:59 +02:00
Jeff Bolz 9d83865607 vulkan: Optimize GGML_OP_CUMSUM (llama/18417)
* vulkan: Optimize GGML_OP_CUMSUM

There are two paths: The preexisting one that does a whole row per workgroup
in a single shader, and one that splits each row into multiple blocks and does
two passes. The first pass computes partials within a block, the second adds
the block partials to compute the final result. The multipass shader is used
when there are a small number of large rows.

In the whole-row shader, handle multiple elements per invocation.

* use 2 ELEM_PER_THREAD for AMD/Intel

* address feedback
2026-01-14 09:11:59 +02:00
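The multipass cumsum above is a standard two-pass scan: pass one produces in-block prefix sums plus one partial (block total) per block, pass two adds the running sum of the preceding block partials to every element of its block. A serial C++ sketch of the same data flow, with each loop body standing in for one workgroup dispatch:

```
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

// Two-pass cumulative sum over one row split into fixed-size blocks.
static std::vector<float> cumsum_two_pass(const std::vector<float> & x, size_t block) {
    const size_t n        = x.size();
    const size_t n_blocks = (n + block - 1) / block;

    std::vector<float> y(n);
    std::vector<float> partials(n_blocks, 0.0f);

    // Pass 1: in-block inclusive prefix sums + one partial per block.
    for (size_t b = 0; b < n_blocks; ++b) {
        float acc = 0.0f;
        for (size_t i = b * block; i < std::min(n, (b + 1) * block); ++i) {
            acc += x[i];
            y[i] = acc;
        }
        partials[b] = acc;
    }

    // Pass 2: add the sum of all preceding block partials to each block.
    float carry = 0.0f;
    for (size_t b = 0; b < n_blocks; ++b) {
        for (size_t i = b * block; i < std::min(n, (b + 1) * block); ++i) {
            y[i] += carry;
        }
        carry += partials[b];
    }
    return y;
}

int main() {
    const std::vector<float> x = {1, 2, 3, 4, 5, 6, 7};
    for (float v : cumsum_two_pass(x, 3)) printf("%g ", v); // 1 3 6 10 15 21 28
    printf("\n");
    return 0;
}
```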
Jeff Bolz b7ff521e71 vulkan: Implement mmvq for iq1_s/iq1_m (llama/18450) 2026-01-14 09:11:59 +02:00
Georgi Gerganov b99c911c49 metal : adjust extra size for FA buffer to avoid reallocations (llama/18545) 2026-01-14 09:11:59 +02:00
Chris Rohlf f328b13d5c rpc : use unordered_map::reserve and emplace (llama/18513) 2026-01-14 09:11:59 +02:00
MeeMin fbde389665 cuda : fix copy of large tensors (ggml_nbytes <= INT_MAX assertion) (llama/18433)
* ggml-cuda: fixed assertion in ggml_cuda_cpy (llama/18140)

* ggml-cuda: changes in data types to int64_t

* ggml-cuda: added asserts for CUDA block numbers

* ggml-cuda: changed the condition for y and z dimension
2026-01-14 09:11:59 +02:00
Aman Gupta f22c1ccbe4 ggml-cuda: remove unnecessary prints on ggml_cuda_init (llama/18502) 2026-01-14 09:11:59 +02:00
Jeff Bolz b1f65a4a7e vulkan: extend topk_moe to handle sigmoid w/exp_probs_b for nemotron (llama/18295)
* vulkan: extend topk_moe to handle sigmoid w/exp_probs_b for nemotron

Also handle GGML_OP_SCALE at the end (nemotron, deepseek2).

Fewer pipeline variants and spec constants, just use push constants.

In test_topk_moe, change exp_probs_b to be 1D, matching real networks.

Update test-backend-ops and ggml-backend to allow verifying multiple outputs
in a fusion test (topk_moe has two outputs). Previously only the final node
was verified.

* change test_topk_moe to allow results in arbitrary order

* disable sigmoid fusion for moltenvk
2026-01-14 09:11:59 +02:00
Peter A. a96310871a
examples : fix executable example targets (#3600)
* cmake:
    - added `whisper-` prefix to unprefixed targets: `quantize`, `lsp`,
      `vad-speech-segments`
    - added `install(TARGETS ${TARGET} RUNTIME)` where it was missing

Signed-off-by: Peter A. <ink.splatters@pm.me>

* .github/workflows/build.yml: quantize -> whisper-quantize

Signed-off-by: Peter A. <ink.splatters@pm.me>

---------

Signed-off-by: Peter A. <ink.splatters@pm.me>
2026-01-13 08:08:18 +01:00
KITAITI Makoto 679bdb53db
ruby : fix segmentation fault (#3591)
* Mark long live variable

* Fix test for Whisper::Token#deconstruct_keys(nil)

* Don't use long live variable

* Fix indentation
2026-01-05 17:41:22 +09:00
Georgi Gerganov e9898ddfb9 sync : ggml 2025-12-31 18:27:20 +02:00
Georgi Gerganov ce03f8e759 ggml : bump version to 0.9.5 (ggml/1410) 2025-12-31 18:27:20 +02:00
Georgi Gerganov 7359ac94d5 talk-llama : sync llama.cpp 2025-12-31 17:52:09 +02:00
Georgi Gerganov 54fa8216ea sync : ggml 2025-12-31 17:52:09 +02:00
gatbontonpc 8189f2cb65 metal : add count_equal op (llama/18314)
* add count equal for metal

* remove trailing whitespace

* updated doc ops table

* changed shmem to i32

* added multi tg and templating

* removed BLAS support from Metal docs

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* add memset to set dst to 0

* metal : cleanup

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-12-31 17:52:09 +02:00
Johannes Gäßler 2d250f8049 CUDA: fix KQ max calculation (llama/18487) 2025-12-31 17:52:09 +02:00
Georgi Gerganov 5deaf8f2a3 metal : remove BF16 x F16 kernels (llama/18456) 2025-12-31 17:52:09 +02:00
Aman Gupta 467933199a sycl: add newline at the end of CMakeLists.txt (llama/18503) 2025-12-31 17:52:09 +02:00
Rahul Sathe a3635494da Work around broken IntelSYCLConfig.cmake in Intel oneAPI 2025.x (llama/18345)
* cmake: work around broken IntelSYCLConfig.cmake in oneAPI 2025.x

* [AI] sycl: auto-detect and skip incompatible IntelSYCL package

Automatically detect compiler versions with incompatible IntelSYCL
CMake configuration files and fall back to manual SYCL flags instead
of requiring users to set options manually.

Fixes build failures with oneAPI 2025.x where IntelSYCLConfig.cmake
has SYCL_FEATURE_TEST_EXTRACT invocation errors.

* refactor: improve SYCL provider handling and error messages in CMake configuration

* refactor: enhance SYCL provider validation and error handling in CMake configuration

* ggml-sycl: wrap find_package(IntelSYCL) to prevent build crashes
2025-12-31 17:52:09 +02:00
Charles Xu c9955367d4 kleidiai: add and integrate SVE 256-bit vector-length kernel (llama/18458)
* kleidiai: add and integrate SVE 256-bit vector-length kernel

* updated for review comments
2025-12-31 17:52:09 +02:00
Aman Gupta 6d4aa96bfa CUDA: add log line when mxfp4 acceleration is used (llama/18483)
* CUDA: add log line when mxfp4 acceleration is used

* add in backend_get_features
2025-12-31 17:52:09 +02:00
Johannes Gäßler 5765c5b04e CUDA: fix replacement of bad archs in CMake (llama/18457) 2025-12-31 17:52:09 +02:00
Johannes Gäßler d6cb2407b7 CUDA: Blackwell features for non-native builds (llama/18436) 2025-12-31 17:52:09 +02:00
Aman Gupta e49e88b2d8 cuda: fix race condition in cumsum (llama/18448)
* ggml-cuda: fix race condition in cumsum

* remove unnecessary sync_threads
2025-12-31 17:52:09 +02:00
uvos 20f5729921 HIP: Use mmq on MFMA devices for MUL_MAT_ID in cases where a lot of splits would be generated (llama/18202) 2025-12-31 17:52:09 +02:00
Aman Gupta b8d209f55c Revert "ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when GGML_NATIVE=ON (#18413)" (llama/18426) 2025-12-31 17:52:09 +02:00
o7si 54fe9a645d rpc: fix segfault on invalid endpoint format (llama/18387)
* rpc: fix segfault on invalid endpoint format

* rpc: add error log for failed endpoint connection
2025-12-31 17:52:09 +02:00
Boian Berberov b3788ef729 cmake: Added more x86_64 CPU backends when building with `GGML_CPU_ALL_VARIANTS=On` (llama/18186)
* minor: Consolidated `#include <immintrin.h>` under `ggml-cpu-impl.h`

* cmake: Added more x86-64 CPU backends when building with `GGML_CPU_ALL_VARIANTS=On`

- `ivybridge`
- `piledriver`
- `cannonlake`
- `cascadelake`
- `cooperlake`
- `zen4`

Resolves: #17966
2025-12-31 17:52:09 +02:00
QDelta 31fc2c37c8 ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when GGML_NATIVE=ON (llama/18413) 2025-12-31 17:52:09 +02:00
lhez a800a3acd1 opencl: allow resizing transpose buffers (llama/18384)
* opencl: allow resizing transpose buffers instead of using fixed sizes

* opencl: remove commented code
2025-12-31 17:52:09 +02:00
Aman Gupta 29f8155445 ggml-cuda: Use same regex for GGML_NATIVE=OFF (llama/18407) 2025-12-31 17:52:09 +02:00
Jeff Bolz 015b618d96 vulkan: preprocess mul_mat_id experts and discard workgroups more quickly (llama/18352)
Run a preprocess to count how many times each expert is used, and use this to
quickly discard workgroups that aren't needed.
2025-12-31 17:52:09 +02:00
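The preprocess above amounts to a small histogram: count how many rows top-k routing assigned to each expert, so a workgroup dedicated to an unused expert can exit immediately instead of scanning the row ids itself. A schematic sketch with assumed names (expert_ids, n_expert), not the shader code:

```
#include <cstdint>
#include <vector>

// expert_ids: for every (token, slot) pair, the expert selected by top-k routing.
// Returns a per-expert usage count so later dispatches can skip unused experts.
static std::vector<uint32_t> count_expert_use(const std::vector<uint32_t> & expert_ids,
                                              uint32_t n_expert) {
    std::vector<uint32_t> counts(n_expert, 0);
    for (uint32_t id : expert_ids) {
        counts[id]++;
    }
    return counts;
}

// Dispatch-side check: a workgroup assigned to expert e returns immediately
// if no rows were routed to it.
static bool expert_has_work(const std::vector<uint32_t> & counts, uint32_t e) {
    return counts[e] > 0;
}
```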
Jeff Bolz e37c8ed94e vulkan: optimize decodeFuncB in coopmat2 mul_mat_id shader (llama/18349)
* vulkan: Use BK=32 for coopmat2 mul_mat_id

* vulkan: optimize decodeFuncB in coopmat2 mul_mat_id shader

Disable robustness, remove the OOB check in decodeFuncB, and initialize the
row_ids to zero to avoid OOB access.

Don't slice/offset the B matrix to ic * BN, only to adjust the coord back down
to the range [0, BN) in decodeFuncB. Instead just slice with a row offset of
zero and remove the '& (BN - 1)'. This allows the compiler to common some of
the shared memory loads.
2025-12-31 17:52:09 +02:00
Jeff Bolz 331c6ccd31 vulkan: Use BK=32 for coopmat2 mul_mat_id (llama/18332) 2025-12-31 17:52:09 +02:00
Eve 35cb4abb67 vulkan: small dequantization improvements (llama/18380)
* iq4_xs

* quants
2025-12-31 17:52:09 +02:00
Jeff Bolz 181e36f194 vulkan: Support UPSCALE w/antialias (llama/18327) 2025-12-31 17:52:09 +02:00
Jeff Bolz 67473fef57 vulkan: handle rope with large number of rows (llama/18306) 2025-12-31 17:52:09 +02:00
0Marble 33f75a88ac CANN: implement the SSM_CONV operator (llama/17737)
* CANN: implement SSM_CONV operator

Co-authored-by: Aleksei Lobanov, <zeromarblectm@gmail.com>
Co-authored-by: Sujin Kang, <waterjin326@gmail.com>

* CANN: remove custom error limit for SSM_CONV

* CANN: merge SSM_CONV tensor shape/strides into one line

---------

Co-authored-by: Sujin Kang, <waterjin326@gmail.com>
2025-12-31 17:52:09 +02:00
Aman Gupta 51778354ce ggml-cuda: fix regex for arch list (llama/18371)
* ggml-cuda: fix regex for arch list

* make regex exact
2025-12-31 17:52:09 +02:00
Aman Gupta 8e02f0919d cuda: optimize cumsum cub path (llama/18362)
* cuda: optimize cumsum cub path

* remove heavy perf test
2025-12-31 17:52:09 +02:00
Aman Gupta ea07c5d3b7 ggml-cuda: fix blackwell native builds (llama/18361)
* ggml-cuda: fix blackwell native builds

Replace 12x in native architectures by 12xa

* replace for GGML_NATIVE=OFF too

* only replace for native

* remove 120f-virtual for default compilation

---------

Co-authored-by: Aman Gupta <aman>
2025-12-31 17:52:09 +02:00
Penglin Cai 5f0488f012 CANN: Add support for CONV_TRANSPOSE_1D when kernel size > 255 (llama/17934)
* CONV_TRANSPOSE_1D kernel_size>255

* remove condition check

* fix the bug of type conversion

* removing trailing whitespaces

* fix: return true in the switch case
2025-12-31 17:52:09 +02:00
Aadeshveer Singh db75fff539 ggml : optimize cuda cumsum fallback kernel (llama/18343) 2025-12-31 17:52:09 +02:00
Aman Gupta 41e578ec8a CUDA: experimental native mxfp4 support for blackwell (llama/17906)
* CUDA: experimental native mxfp4 support for blackwell

* optimize load_tiles

* optimize quantize_mxfp4

* cleanup

* first pass review: formatting

* use interleaved layout for mma

* mmq: add assert for size

* use __nv_fp4x4_e2m1

* use iter_k as 512, cleanup

* Use 1200 as blackwell instead of 1000

* address review comments

* mmq: fix stride

* quantize.cu: use reference impl of e8m0 scale

* address review comments

* add 120f-virtual + minor fixes

---------

Co-authored-by: Aman Gupta <aman>
2025-12-31 17:52:09 +02:00
Jeff Bolz f863735caa vulkan: fix command buffer corruption in ggml_backend_vk_event_wait (llama/18302) 2025-12-31 17:52:09 +02:00
Wang Weixuan bab2c02da5 CANN : refactor ACL graph cache (llama/17752)
Move the graph property checking code into methods of LRU cache.

Signed-off-by: Wang Weixuan <wangweixvan@gmail.com>
2025-12-31 17:52:09 +02:00
Ruben Ortlam 1356600679 vulkan: use fewer FA rows for small cache runs (llama/18280) 2025-12-31 17:52:09 +02:00
TianHao324 ec9239d3b7 CANN: Uses yarn_ramp cache in ROPE (llama/17725) 2025-12-31 17:52:09 +02:00
Chris Rohlf 9bdd4658f4 rpc : add check for rpc buffer type (llama/18242) 2025-12-31 17:52:09 +02:00
nullname e4c89612cd ggml-hexagon: create generalized functions for cpu side op (llama/17500)
* refactor: replace ggml_hexagon_mul_mat with template-based binary operation for improved flexibility

* refactor: replace ggml_hexagon_mul_mat_id with template-based binary operation for improved flexibility

* refactor: initialize buffer types and streamline dspqueue_buffers_init calls for clarity

* add comment

* refactor: remove redundant buffer checks in hexagon supported operations

* wip

* add missing include to fix weak symbol warning

* add ggml_hexagon_op_generic

* refactor: simplify tensor operation initialization and buffer management in hexagon implementation

* refactor: streamline hexagon operation initialization and buffer management

* refactor: update function signatures and streamline request handling in hexagon operations

* wip

* ggml-hexagon: clean up code formatting and improve unary operation handling

* wip

* rename

* fix: add support for permuted F16 tensors and enhance quantization checks in matrix operations

* refactor: replace ggml_hexagon_mul_mat with template-based binary operation for improved flexibility

refactor: replace ggml_hexagon_mul_mat_id with template-based binary operation for improved flexibility

refactor: initialize buffer types and streamline dspqueue_buffers_init calls for clarity

refactor: remove redundant buffer checks in hexagon supported operations

add missing include to fix weak symbol warning

add ggml_hexagon_op_generic

refactor: simplify tensor operation initialization and buffer management in hexagon implementation

refactor: streamline hexagon operation initialization and buffer management

refactor: update function signatures and streamline request handling in hexagon operations

ggml-hexagon: clean up code formatting and improve unary operation handling

fix: add support for permuted F16 tensors and enhance quantization checks in matrix operations

# Conflicts:
#	ggml/src/ggml-hexagon/ggml-hexagon.cpp

* hexagon: fix merge conflicts

* hexagon: minor cleanup for buffer support checks

* hexagon: factor out op_desc and the overall op logging

* hexagon: further simplify and cleanup op dispatch logic

* snapdragon: update adb scripts to use llama-cli and llama-completion

* fix pipeline failure

---------

Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
2025-12-31 17:52:09 +02:00
Shouyu 2f33395197 ggml-hexagon: gelu optimization (llama/18151)
* feat: working gelu with src0 put on vtcm

* feat: gelu ping-pong for both in and out

* fix: fix compile error

* break: distinguish dma ddr->vtcm and vtcm->ddr operation

* fix: fix dma queue size

* break: update dma api to either pop src or dst ptr

* fix: fix activation vtcm allocation issue for src1 when swapped

* refactor: ping-pong gelu logic to avoid unnecessary if else

* dma: improved queue interface and prefetch handling

* gelu: fix N+2 block prefetch

---------

Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
2025-12-31 17:52:09 +02:00
Taimur Ahmad 5b0c1c1580 llamafile: add rvv support for sgemm kernels (llama/18199)
Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>
2025-12-31 17:52:09 +02:00
lhez f2fe1e5baf opencl: unpack q4_0 for adreno in get_tensor (llama/18278) 2025-12-31 17:52:09 +02:00
Jeff Bolz dbbe6c11b5 vulkan: Extend rope fusions to allow mrope (llama/18264)
Extend the test-backend-ops tests as well.
2025-12-31 17:52:09 +02:00
Jeff Bolz 98e59a43d1 vulkan: Implement set_tensor_async and the event interfaces (llama/18047)
The goal is to enable the async loading code paths in
llama_model_loader::load_all_data, originally from #7896. This works and the
loads themselves are faster, but with host visible vidmem I think the cost of
allocating/mapping vidmem moves and becomes more expensive, and I don't see a
benefit by default. But with GGML_VK_DISABLE_HOST_VISIBLE_VIDMEM=1 I do see a
significant improvement in model loading time.
2025-12-31 17:52:09 +02:00
Johannes Gäßler b68b12f2d5 llama: fix RPC for -fit on (llama/18233) 2025-12-31 17:52:09 +02:00
Jeff Bolz b893e0813a vulkan: fix im2col overflowing maxworkgroupcount (llama/18180) 2025-12-31 17:52:09 +02:00
Jeff Bolz f407c5e562 vulkan/cuda: fix topk_moe with exp_probs_b (llama/18071)
I updated test_topk_moe to more closely match llm_graph_context::build_moe_ffn
and added coverage for exp_probs_b and some other missing combinations. This
exposed a bug in both CUDA and Vulkan backends where they were assuming the
input to argsort and the input to get_rows are the same. I'd like to optimize
this graph in another change, but for now just get it functional.

CUDA also had a bug where it got n_experts from the wrong place, leading to
GGML_ASSERT failures in some of the new tests.
2025-12-31 17:52:09 +02:00
Jeff Bolz ad6ee3865d vulkan: support GGML_UNARY_OP_XIELU (llama/18062) 2025-12-31 17:52:09 +02:00
Jeff Bolz 3cd141f1a9 vulkan: in graph_optimize, try to group ADD operations (llama/18060)
I saw the adds not staying together in the new nemotron 3 nano model.
2025-12-31 17:52:09 +02:00
lovedheart 449fc7c024 Vulkan: some improvement on mul_mat_iq2_xs (llama/18031)
* Some improvement on mul_mat_iq2_xs

Refactor calculations for db values and grid data to optimize performance and reduce redundancy.

* Fix trailing whitespace
2025-12-31 17:52:09 +02:00
Aadeshveer Singh 0983985f06 Added comments explaining thread block size selection logic based on row count and column size, derived from historical commit context (llama/18212) 2025-12-31 17:52:09 +02:00
Alfred 17a4cb15b8 ggml-hexagon: Implement true Q8_0 quantization on Hexagon NPU for more accurate mixed-precision matmul operations (llama/17977)
* feat: implement real Q8_0

* feat: adding cmake option for configuring FP32 quantize group size

* typo: set() shall be used

---------

Co-authored-by: ngdxzy <zhenyu_xu@uri.edu>
2025-12-31 17:52:09 +02:00
Jeff Bolz 195d8d0c65 vulkan: Add perf logger mode with concurrency (llama/17944)
This implements a variation of the perf logger where rather than timing each
operation individually with effectively a barrier in between, we put the
timing boundaries where we already synchronize and time the groups of work
that normally overlap. This can be useful to help understand whether
individual operations need to be optimized, or if the group is already running
efficiently.

GGML_VK_PERF_LOGGER_CONCURRENT=1 enables the new mode (when
GGML_VK_PERF_LOGGER is also set).

GGML_VK_SYNC_LOGGER=1 replaces the ENABLE_SYNC_LOGGING compile time switch.
2025-12-31 17:52:09 +02:00
Xuan-Son Nguyen fea481f412 model : add ASR support for LFM2-Audio-1.5B (conformer) (llama/18106)
* ASR with LFM2-Audio-1.5B

* Set rope_theta

* Fix comment

* Remove rope_theta setting

* Address PR feedback

* rename functions to conformer

* remove some redundant ggml_cont

* fix missing tensor

* add prefix "a." for conv tensors

* remove redundant reshape

* clean up

* add test model

---------

Co-authored-by: Tarek Dakhran <tarek@liquid.ai>
2025-12-31 17:52:09 +02:00
Taimur Ahmad 956fac433b ggml-cpu: extend support for RVV floating-point kernels (llama/17318)
* cmake: add BF16 RVV flag for ggml-cpu

* ggml-cpu: add floating-point conversion kernels

* ggml: add floating-point kernels

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: fix lmul in vec_dot_bf16

* ggml-cpu: change redsum to lmul 4, fix leftover

---------

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>
2025-12-31 17:52:09 +02:00
yulo 325a9b739c remove i_major_dual (llama/18157)
Co-authored-by: zhang hui <you@example.com>
2025-12-31 17:52:09 +02:00
Shouyu c3a16089e3 ggml-hexagon: swiglu_oai operation (llama/18114)
* snapshot: debug ggml-hexagon swiglu-oai

* fix: fix hvx_min_scalar_f32

* feat: working swiglu-oai

* chore: fix formatting issue
2025-12-31 17:52:09 +02:00
Shouyu c7ccedb5ba ggml-hexagon: gelu operation (llama/17921)
* feat: initial support for gelu using sigmoid approximation

* snapshot: faster gelu using polynomial approximation

* test: disable l2-block prefetch in polynomial approximation

* Revert "test: disable l2-block prefetch in polynomial approximation"

This reverts commit 72339994d45b2bed887e79994403c378d90b62b5.

* Revert "snapshot: faster gelu using polynomial approximation"

This reverts commit 2a787a61d11f9e63e5943a2e6d134b2f0c402ace.

* debug: temporarily disable unnecessary log message for debug purpose

* Feat: optimized unaligned sigmoid_f32

* Feat: larger l2prefetch block

* feat: apply unaligned-load optimization on mul and mul_scalar

* Revert "debug: temporarily disable unnecessary log message for debug purpose"

This reverts commit 84f2f23aa9f17e2fa826db969cd825d0ab192995.

* refactor: cleanup commented unused code

* chore: reformat code with clang-formatter to pass cli test

* Revert "chore: reformat code with clang-formatter to pass cli test"

This reverts commit 952877ec24732b12010c7fa7ed3fc8de4b74e718.

* fix: fix loop overflow

* chore: fix formatting CI error
2025-12-31 17:52:09 +02:00
Alberto Cabrera Pérez 1f72f00542 ggml-cpu: ARM64: repack version of q8_0 (dotprod and i8mm) (llama/18096)
* wip: skeleton for q8_0 repack

* q8_0 repack GEMV implementations

* GEMM implementations

* Formatting

* Fixed format consistency of repack gemm and gemv declarations

* gemv and gemm generic location consistent with declarations

* Removed incorrect unused-variable statements

* Cleanup, consistent style

* Missing generic fallbacks for x86 and powerpc
2025-12-31 17:52:09 +02:00
yulo 9118c05dc4 HIP: Refactor mma for RDNA and CDNA (llama/17990)
* mma.cuh for rdna4

* mma for rdna3

* mmq for rdna4

* mmq for rdna3

* align i-major and j-major

* cdna

* fix cuda error

* add missing tile of mfma

* fix j-major wrong ne on CDNA

* fix grammar and empty spaces

---------

Co-authored-by: zhang hui <you@example.com>
2025-12-31 17:52:09 +02:00
KITAITI Makoto 6114e69213
ruby : add Whisper::Token, fix model URI (#3575)
* Define and use macro to get context safely

* Add test to check SEGV

* Move installation guide after usage

* [skip ci]Change doc slightly

* [skip ci]Fix a typo in README

* [skip ci]Add carry_initial_prompt option in README

* Define GetVADSegments and use it

* Use GetContext

* Fix download URI of small.en-tdrz

* Fix URI of CoreML models corresponding to quantized models

* Cache computed string

* Remove unused argument

* Add Whisper::Token

* Add document comments

* Rename function: rb_whisper_token_s_new -> ruby_whisper_token_s_init

* Fix size of token

* Insert _get into function names

* Add Whisper::Token#text

* Add test for Whisper::Token#text

* Declare static if possible

* Change method names

* Add Whisper::Token#deconstruct_keys

* Add tests for Whisper::Token#deconstruct_keys

* Add signatures for Whisper::Token

* Complete signature

* [skip ci]Add n_tokens to document of Segment
2025-12-24 16:52:16 +09:00
Georgi Gerganov 6c22e792cb talk-llama : sync llama.cpp 2025-12-18 08:20:56 +02:00
Georgi Gerganov 698348aadc sync : ggml 2025-12-18 08:20:56 +02:00
Naco Siren 00108bb713 llama.android : Rewrite Android binding (w/o cpu_features dep) (llama/17413)
* UI: implement basic UI components

* util: implement performance monitor; wrap it with a viewmodel

* util: implement user preferences utility

* UI: implement core flow's screens

* UI: add a new MainActivity; update manifest

* [WIP] DI: implement simple local vm factory provider

* UI: disable triggering drawer via gesture; enable alert dialog on back navigation inside conversation and benchmark

* UI: allow drawer's gesture control only on Home and Settings screens; enable alert dialog on back navigation inside conversation and benchmark

* UI: split a nested parent settings screen into separate child settings screens

* UI: polish system prompt setup UI

* Deps: bump Kotlin plugin; introduce KSP; apply in :app subproject

* DB: setup Room database

* data: introduce repo for System Prompt; flow data from Room to VM

* bugfix: properly handle user's quitting conversation screen while tokens are being generated

* UI: rename `ModeSelection` to `ModelLoading` for better clarity

* UI: update app name to be more Arm

* UI: polish conversation screen

* data: code polish

* UI: code polish

* bugfix: handle user quitting on model loading

* UI: locks user in alert dialog when model is unloading

* vm: replace token metrics stubs with actual implementation

* UI: refactor top app bars

* nit: combine temperatureMetrics and useFahrenheit

* DI: introduce Hilt plugin + processor + lib dependencies

* DI: make app Hilt injectable

* DI: make viewmodels Hilt injectable

* DI: replace manual DI with Hilt DI

* UI: optimize AppContent's composing

* bugfix: wait for model to load before navigating to benchmark screen; use NavigationActions instead of raw navController

* UI: navigation with more natural animated transitions

* DI: Optimize AppModule

* Feature: Introduce ModelRepository and ModelsManagementViewModel; update AppModule

* UI: polish UI for ModelsManagementScreen; inject ModelsManagementViewModel

* DI: abstract the protocol of SystemPromptRepository; update AppModule

* data: [WIP] prepare for ModelRepository refactor & impl

* data: introduce Model entity and DAO; update DI module

* UI: replace Models Management screen's stubbing with instrumentation

* UI: polish sort order menu

* data: import local model with file picker

* bugfix: use List instead of Collection for ModelDao's deletion

* data: add a util file for extracting file name & size and model metadata

* UI: enrich ModelManagementState; extract filename to show correct importing UI

* UI: implement multiple models deletion; update Models Management screen

* UI: handle back navigation when user is in multi-selection mode

* util: extract file size formatting into ModelUtils

* UI: add a confirmation step when user picks a file; refactor model import overlay into AlertDialog

* UI: extract a shared ModelCard component

* UI: replace model selection screen's data stubbing; add empty view

* nit: tidy SystemPromptViewModel

* Util: split FileUtils from ModelUtils; extract copy methods into FileUtils

* data: pass through getModelById from ModelDao into ModelRepository

* core: extract conversation and benchmark logics into InferenceManager; add logs and missing state updates in stub InferenceEngine

* vm: split mono MainViewModel into separate individual ViewModels

* vm: merge SystemPromptViewModel into ModelLoadingViewModel

* core: break down InferenceManager due to Interface Segregation Principle

* UI: show model card in Model Loading screen

* UI: show model card in Conversation screen

* UI: unify Model Card components

* core: swap in LLamaAndroid and mark stub engine for testing only

* data: allow canceling the ongoing model import

* UI: update UI ongoing model import's cancellation

* LLama: update engine state after handling the cancellation of sendUserPrompt

* VM: handle the cancellation of ongoing token generation

* LLama: refactor loadModel by splitting the system prompt setting into a separate method

* feature: check for available space before copying local model

* UI: centralize the AppScaffold and modularize its configs

* UI: refactor BottomBarConfig.ModelsManagement APIs

* UI: combine TopBarConfig and BottomBarConfig into each route's ScaffoldConfig

* UI: replace ugly optional as casts in AppScaffold with extension functions

* UI: fix the typo `totalGb` in `StorageMetrics`

* UI: remove code duplication in sort menu

* LLama: add ModelUnloadingState to engine State; add missing state checks in stub engine; fix instrumentation engine's error messages

* UI: refactor back handling by removing centralized BackHandlerSetup and UnloadModelConfirmationDialog from AppContent

* UI: implement BenchmarkScreen's individual back handling

* LLama: add a new Initializing state; add two extension properties; rename LibraryLoaded state to Initialized

* UI: Introduce an abstract ViewModel to handle additional model unloading logics

* UI: expose a single facade ModelUnloadDialogHandler; move UnloadModelState into ModelUnloadingViewModel.kt

* UI: migrate ModelLoadingScreen onto ModelLoadingViewModel; update & refine ModelLoadingScreen

* UI: migrate ConversationViewModel onto ModelLoadingViewModel; update & refine ConversationScreen

* nit: extract app name into a constant value; remove unused onBackPressed callbacks

* UI: update AppContent to pass in correct navigation callbacks

* nit: polish ModelLoadingScreen UI

* core: throw Exception instead of returning null if model fails to load

* navigation: sink model loading state management from AppContent down into ModelLoadingScreen; pass ModelLoadingMetrics to Benchmark and Conversation screens

* gguf: add GGUF metadata data holder and its corresponding extractor implementation

* DB: introduce Kotlin serialization extension's library and plugin; add Room runtime library

* GGUF: make GgufMetadata serializable in order to be compatible with Room

* nit: refactor data.local package structure

* nit: rename lastUsed field to dateLastUsed; add dateAdded field

* UI: refactor ModelCard UI to show GGUF metadata

* UI: update ModelSelectionScreen with a preselect mechanism

* UI: polish model card

* nit: allow deselect model on Model Selection screen

* nit: revert accidental committing of debug code

* UI: polish ModelLoading screen

* util: extract formatting helper functions from FileUtils into a new FormatUtils

* UI: polish model cards on Benchmark and Conversation screens to show model loading metrics

* UI: show a Snack bar to warn user that system prompt is not always supported

* UI: handle back press on Model Selection screen

* UI: finally support theme modes; remove hardcoded color schemes, default to dynamic color scheme implementation

* feature: support searching on Model Selection screen

* nit: move scaffold related UI components into a separate package

* UI: extract InfoView out into a separate file for reusability

* data: move Model related actions (query, filter, sort) into ModelInfo file

* UI: animate FAB on model preselection states

* feature: support filtering in Model Management screen

* ui: show empty models info in Model Management screen

* ui: add filter off icon to "Clear filters" menu item

* [WIP] ui: polish Benchmark screen; implement its bottom app bar

* ui: polish Benchmark screen; implement its bottom app bar's rerun and share

* nit: disable mode selection's radio buttons when loading model

* feature: implement Conversation screen's bottom app bar

* pkg: restructure BottomAppBars into separate files in a child package

* pkg: restructure TopBarApps into separate files in a child package

* pkg: restructure system metrics into a separate file

* UI: polish Conversation screen

* data: update system prompt presets

* UI: allow hide or show model card on Conversation & Benchmark screens; fix message arrangement

* data: update & enhance system prompt presets

* deps: introduce Retrofit2

* data: implement HuggingFace data model, data source with Retrofit API

* data: update Model data repository to support fetching HuggingFace models

* [WIP] UI: replace the HuggingFace stub in Model Management screen with actual API call

* UI: map language codes into country Emojis

* ui: add "clear results" action to Benchmark screen

* nit: print current pp & tg in llama-bench

* UI: disable landscape mode; prevent duplicated benchmark running

* llama: migrate C/CXX flags into CMakeList

* [WIP] llama: ABI split builds five .so artifacts.

However, all .so are performing on SVE level

* [WIP] llama: ABI split where five tiers are built sequentially.

* [WIP] llama: disable OpenMP in ABI split since most SoCs are big.LITTLE

* [WIP] llama: enable KleidiAI and disable tier 4 due to `+sve+sve2` bug caused by `ggml_add_cpu_backend_variant_impl` as explained below

```CMake
if (NOT SME_ENABLED MATCHES -1)
...
    set(PRIVATE_ARCH_FLAGS "-fno-tree-vectorize;${PRIVATE_ARCH_FLAGS}+sve+sve2")
...
```

* core: add Google's cpu_features as a submodule

* core: implement cpu_detector native lib

* core: swap out hardcoded LlamaAndroid library loading

* core: add back OpenMP due to huge perf loss on TG128

* misc: reorg the pkg structure

* misc: rename LlamaAndroid related class to InferenceEngine prefixes

* [WIP] lib: move GgufMetadata into the lib submodule

* lib: expose GgufMetadataReader as interface only

* lib: replace the naive & plain SharedPreferences with DataStore implementation

* lib: hide the internal implementations, only expose a facade and interfaces

* lib: expose Arm features

* di: add a stub TierDetection; provide both actual impl and stub in AppModule

* UI: add visualizer UI for Arm features

* misc: UI polish

* lib: refactored InferenceEngineLoader; added a `NONE` Llama Tier

* UI: support `NONE` Llama Tier in general settings

* lib: optimize engine loader; always perform a fresh detection when cache is null

* remote: add HuggingFaceModelDetails data class

* remote: refine HuggingFaceModel data class

* nit: remove `trendingScore` field from HuggingFace model entities, weird...

* remote: refactor HuggingFaceApiService; implement download feature in HuggingFaceRemoteDataSource

* remote: fix the incorrect parse of HuggingFace's inconsistent & weird JSON response

* UI: scaffold Models Management screen and view model

* UI: implement a dialog UI to show fetched HuggingFace models.

* UI: use a broadcast receiver to listen for download complete events and show local import dialog.

* data: handle network exceptions elegantly

* pkg: restructure `data`'s packages

* data: extract local file info, copy and cleanup logics into LocalFileDataSource

* nit: minor UI patch; add missing comments

* bugfix: tapping "Home" in navigation drawer should simply close it without any navigation action.

* UI: improve autoscroll during token generation

* lib: tested on JFrog Artifactory for Maven publishing

* UI: show RAM warning if model too large

* UI: polish model management screen's error dialog

* util: add more items into the mapping table of ISO 639-1 language code to ISO 3166-1 country code

* llm: properly propagate error to UI upon failing to load selected model

* UI: avoid duplicated calculation of token metrics

* lib: read & validate the magic number from the picked source file before executing the import

* UI: add "Learn More" hyperlinks to Error dialog upon model import failures

* lib: refactor the GgufMetadataReader to take InputStream instead of absolute path as argument

* lib: fix the `SIMD` typo in Tier description

* core: verify model file path is readable

* lib: add UnsupportedArchitectureException for triaged error message

* util: split FormatUtils into multiple utils for better readability

* UI: change benchmark screen from raw markdown to table view

* bugfix: reset preselection upon running the preselected model

* misc: linter issue

* bugfix: fix the malfunctioning monitoring switch

* UI: update Arm features indicator; fix the broken hyperlinks

* UI: add quick action buttons to benchmark screen's result card

* UI: hide share fab after clearing all benchmark results

* UI: fix the model unload dialog message; elevate the model card and hide it by default on Conversation screen;

* UI: hide the stubbing actions in Conversation screen

* UI: add show/hide stats control to conversation screen's assistant message bubble; fix placeholder

* UI: add a info button to explain token metrics

* misc: remove the redundant `Companion` added due to refactoring

* UI: show corresponding system metrics detailed info upon tapping RAM / storage / temperature indicator

* UI: add info button to System Prompt switch; expand the model card by default

* UI: disable tag & language chips; add section headers to explain what they are

* misc: replace top bar indicator's spacer with padding

* UI: merge the Model Selection and Model Management into a unified Models screen

* UI: split the ModelsManagementViewModel from a unified ModelsViewModel due to huge complexity

* UI: add model loading in progress view; polish the empty model info view

* UI: polish the bottom bars and info view when no models found; show loading in progress while fetching models

* build: [BREAKING] bump the versions of libraries and plugins

* UI: fix the breaking build

* UI: add Tooltip on Import FAB for user onboarding

* UI: adds AppPreferences to track user onboarding status

* UI: tracks user's first success on importing a model

* data: add hand crafted rules to filter the models fetched from HuggingFace API

* UI: update app name & about; polish top bars' indicators & buttons

* UI: polish Hugging Face download dialog UI

* UX: implement onboarding tooltips for model import and onboarding

* misc: use sentence case for CTA button labels

* [WIP] UI: add Arm color palette from Philip.Watson3

* UI: address Rojin's UX feedbacks

* UI: address Rojin's UX feedbacks - part 2

* UI: update Arm color palette from Philip.Watson3

* data: make sure preselected models are fetched in the same order as their IDs

* UI: fix UI issues in the generic settings screen and navigation drawer

* nit: address Rojin's feedbacks on model import message again

* nit: append `®` to all `Arm` labels

* UI: extract a reusable InfoAlertDialog

* core: support GGML_CPU_ALL_VARIANTS on Android!

* core: restructure Kleidi-Llama library

* core: organizing cmake arguments

* data: sort preselected models according to device's available RAM

* app: update adaptive + themed + legacy icons and app name

* UI: fix the font size auto scaling for ArmFeaturesVisualizer

* core: further improve the performance on native methods

* UI: minor color palette changes; emphasize the bottom bar FABs; fix Settings Screen menu item label

* UI: make more room for assistant message bubble's width

* UI: better usage of tertiary colors to highlight model cards but not for warnings

* UI: fix the layout issue on large font sizes

* lib: support x86-64 by dynamically setting Arm related definitions

* lib: replace the factory pattern for deprecated tiered lib loading with single instance pattern

* llama: update the library name in JNI and CMake project

* llama: update the library's package name and namespace

* llama: update the app's package name and namespace

* app: bump ksp version

* app: remove deprecated SystemUIController from accompanist by migrating to EdgeToEdge

* app: extract AppContent from MainActivity to a separate file in ui package

* lib: add File version for GGUF Magic number verification

* lib: perform engine state check inclusively instead of exclusively

* lib: change `LlamaTier` to `ArmCpuTier`

* lib: remove kleidi-llama related namings

* cleanup: remove Arm AI Chat/Playground app source code; replace with the basic sample app from https://github.com/hanyin-arm/Arm-AI-Chat-Sample

Note: the full Google Play version of the AI Chat app will be open sourced in another repo soon, therefore I didn't go through the trouble of pruning the history using `git filter-repo` here.

* [WIP] doc: update main and Android README docs; add self to code owners

* lib: revert System.load back to System.loadLibrary

* jni: introduce a logging util to filter different logging levels on different build types

* lib: enable app optimization

* doc: replace stub Google Play app URL with the actual link; add screenshots; add my GitHub ID to maintainer list

* Remove cpu_features

* Fix linters issues in editorconfig-checker job

https://github.com/ggml-org/llama.cpp/actions/runs/19548770247/job/55974800633?pr=17413

* Remove unnecessary Android CMake flag

* purge include/cpu_features directory

---------

Co-authored-by: Han Yin <han.yin@arm.com>
2025-12-18 08:20:56 +02:00
Aadeshveer Singh 41a95b8ba7 ggml : use WARP_SIZE/2 for argmax reduction offset (llama/18092) 2025-12-18 08:20:56 +02:00
Shouyu 8dd70bdc85 ggml-hexagon: mm for mtmd (llama/17894)
* feat: add run_mtmd script for hexagon

* fix: fix issue in fp16xfp32 mm

* fix: remove opt_experiment for fp16xfp32 mm

* fix: ggml-hexagon: matmul fp16xfp32 support non-contiguous src0

* fix: fix syntax check for run-mtmd.sh for cli
2025-12-18 08:20:56 +02:00
Jeremy Demeule b90ec07aba metal: use shared buffers on eGPU (llama/17866)
* metal: use shared buffers on eGPU

With #15906, I noticed an important regression when using the Metal backend on an eGPU.
This commit restores the previous behavior and adds an option to force its activation.

* metal: use shared buffers on eGPU

* metal: use shared buffers on eGPU
2025-12-18 08:20:56 +02:00
Johannes Gäßler aaf3f39b4a llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (llama/16653)
* llama: automatically fit args to free memory

llama-fit-params tool

* fix CI

* hints for bug reports, ensure no reallocation

* fix segfault with Vulkan

* add llama-fit-params to CI

* fix CI

* fix CI

* fix CI

* minor adjustments

* fix assignment of 1 dense layer

* fix logger not being reset on model load failure

* remove --n-gpu-layer hint on model load failure

* fix llama-fit-params verbosity

* fix edge case

* fix typo [no ci]
2025-12-18 08:20:56 +02:00
Neo Zhang Jianyu b5e352a52f Support gpt-oss by OPs add-id, mul_mat for mxfp4, swiglu_oai (llama/17826)
* support gpt-oss GPU by OP add-id, mul_mat for mxfp4, swiglu_oai, fix warning

* fix fault ut case, update ops.md

* rebase, fix format issue
2025-12-18 08:20:56 +02:00
Ruben Ortlam 3bb4e1e0ac vulkan: fix mul_mat_vec_iq1_s formatting (llama/18026) 2025-12-18 08:20:56 +02:00
Jeff Bolz af2c8cba6f vulkan: Fix data race/hang in scalar/cm1 flash attention (llama/17887) 2025-12-18 08:20:56 +02:00
lovedheart 7e5df2975e vulkan: improve mul_mat_vec_iq1_s speed (llama/17874) 2025-12-18 08:20:56 +02:00
Eve cdadfc3b72 vulkan: faster q6_k matmul (llama/17813)
* q6_k faster mul mat

* 8 values

* fix comment

* switch to two at a time

* start ci for .glsl files
2025-12-18 08:20:56 +02:00
Georgi Gerganov b62ef9af7a ggml : arm repack fix build (llama/0) 2025-12-18 08:20:56 +02:00
Jeff Bolz b901ebe4a3 vulkan: support get_rows for i32 (llama/17941) 2025-12-18 08:20:56 +02:00
Jeff Bolz f33446643e vulkan: support GGML_OP_DIAG (llama/17893) 2025-12-18 08:20:56 +02:00
Jeff Bolz 939d3085e9 vulkan: Multi-pass softmax for large number of cols (llama/17892)
When the number of cols is large, split each row across multiple workgroups.
There are three phases that communicate partial results through temp buffers:
(1) compute max partials
(2) take max of partials, compute sum(exp(x-max)) partials
(3) sum partials, compute scaled result
2025-12-18 08:20:56 +02:00
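The three-phase decomposition described in the multi-pass softmax commit above can be illustrated on the CPU. This is a minimal, single-threaded sketch assuming a simple fixed chunk size; names and the chunking scheme are illustrative only, not the Vulkan shader code.

```c++
// Illustrative CPU sketch of the three-phase split-row softmax:
// (1) per-chunk max partials, (2) global max + per-chunk sum(exp(x - max)) partials,
// (3) sum the partials and write the scaled result.
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<float> softmax_multipass(const std::vector<float> & x, size_t chunk) {
    const size_t n_chunks = (x.size() + chunk - 1) / chunk;

    // phase 1: compute max partials
    std::vector<float> max_part(n_chunks, -INFINITY);
    for (size_t c = 0; c < n_chunks; ++c)
        for (size_t i = c*chunk; i < std::min(x.size(), (c + 1)*chunk); ++i)
            max_part[c] = std::max(max_part[c], x[i]);

    // phase 2: take max of partials, compute sum(exp(x - max)) partials
    const float m = *std::max_element(max_part.begin(), max_part.end());
    std::vector<float> sum_part(n_chunks, 0.0f);
    for (size_t c = 0; c < n_chunks; ++c)
        for (size_t i = c*chunk; i < std::min(x.size(), (c + 1)*chunk); ++i)
            sum_part[c] += std::exp(x[i] - m);

    // phase 3: sum partials, compute scaled result
    float sum = 0.0f;
    for (float s : sum_part) sum += s;
    std::vector<float> y(x.size());
    for (size_t i = 0; i < x.size(); ++i) y[i] = std::exp(x[i] - m) / sum;
    return y;
}
```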
Jeff Bolz 13bb296dbf vulkan: Allow non-pow2 n_experts in topk_moe (llama/17872) 2025-12-18 08:20:56 +02:00
Johannes Gäßler feb856d4a1 CUDA: fix overflow in MMA kernel without stream-k (llama/17939) 2025-12-18 08:20:56 +02:00
Sigbjørn Skjæret db1fcd958f cann : fix ops broken by circular padding guard (llama/17825) 2025-12-18 08:20:56 +02:00
ixgbe 2c782ec325 ggml-cpu : fix RISC-V Q4_0 repack select and RVV feature reporting (llama/17951)
* ggml-cpu:fix RISC-V Q4_0 repack select and RVV feature reporting

Signed-off-by: Wang Yang <yangwang@iscas.ac.cn>

* using the name VLEN instead of CNT

* Update ggml/include/ggml-cpu.h

---------

Signed-off-by: Wang Yang <yangwang@iscas.ac.cn>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-12-18 08:20:56 +02:00
yulo 25d99e9135 HIP: enable mmf for RDNA3 (llama/17879)
* enable mmf for RDNA3

* disable mmf for some shape

* move some mmvf to mmf

* more mmvf to mmf

* 3 is good in mmvf

---------

Co-authored-by: zhang hui <you@example.com>
2025-12-18 08:20:56 +02:00
Piotr Wilkin (ilintar) e0af519a61 SOLVE_TRI extension to more dimensions (llama/17793)
* Extended TRI

* Fix whitespace

* chore: update webui build output

* Just use cuBLAS for everything...

* Merge both versions

* Remove incorrect imports causing failures for CI

* Still failing... remove all direct cublas imports and rely on common imports from "common.cuh"

* Defines for hipBlas

* Aaaand MUSA defines...

* I hate this job...

* Stupid typo...

* Update ggml/src/ggml-cuda/solve_tri.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-12-18 08:20:56 +02:00
Russ 3e79e73eee
build: link whisper target against Threads::Threads for FreeBSD support (#3568) 2025-12-17 11:13:38 +02:00
Marcos Del Sol Vives 2551e4ce98
server: allow custom temp directory for ffmpeg (#3564) 2025-12-13 09:37:44 +02:00
Georgi Gerganov f0c9017a2f
ggml : arm repack fix build (#0) 2025-12-13 08:04:09 +02:00
Georgi Gerganov 179d8b1c9c
talk-llama : sync llama.cpp 2025-12-12 18:15:27 +02:00
Georgi Gerganov 48cdc06e91
sync : ggml 2025-12-12 17:55:11 +02:00
Georgi Gerganov 72714d169c
whisper : adjust to ggml changes (#0) 2025-12-12 17:54:58 +02:00
Congcong Cai 324dd21d3c
cmake : set `CMAKE_RUNTIME_OUTPUT_DIRECTORY` for non standalone build (ggml/1394)
Some backends depend on CMAKE_RUNTIME_OUTPUT_DIRECTORY to create temporary files, e.g. the Metal backend.
A missing CMAKE_RUNTIME_OUTPUT_DIRECTORY can cause cmake errors such as permission denied (trying to copy a file to root).
This PR sets up a default path for CMAKE_RUNTIME_OUTPUT_DIRECTORY when it is not already defined.
2025-12-12 17:53:24 +02:00
Georgi Gerganov 1da1a6865c
ggml-alloc : fix reuse-parent logic for misaligned sizes (llama/17884) 2025-12-12 17:53:24 +02:00
nullname 0c88de5c69
ggml-hexagon: fix `rope` failure at `test-backend-ops` (llama/17565)
* fix test failure

* fix: correct scaling calculations in rope_cache_init

* fix: optimize element copying in rope_hex_f32 using memcpy

* fix: optimize loop boundaries in rope_hex_f32 for better performance

* feat: add profiling macros for performance measurement in operations
2025-12-12 17:53:24 +02:00
Max Krasnyansky a2886fba48
Fix race conditions in threadpool when dealing with dynamic/frequent n_threads changes (llama/17748)
* tests: update barrier test to check for race condition in active threads

* cpu: combine n_graph and n_threads into a single atomic update

* tests: add multi-graph test for test_barrier
2025-12-12 17:53:24 +02:00
Georgi Gerganov cd9b8c6d18
ggml : remove GGML_KQ_MASK_PAD constant (llama/17910)
* ggml : remove GGML_KQ_MASK_PAD constant

* cont : remove comment
2025-12-12 17:53:24 +02:00
Sigbjørn Skjæret ca8ea18d06
cuda : add missing support check for xielu (llama/17895) 2025-12-12 17:53:23 +02:00
Johannes Gäßler ea1829134f
CUDA: fix unpadded strides in MMA FA kernel (llama/17891) 2025-12-12 17:53:23 +02:00
Neo Zhang Jianyu c10b4f9a01
fix softmax for iGPU (llama/17838) 2025-12-12 17:53:23 +02:00
Gabe Goodhart 307dc525bb
metal: SSM kernel improvements (llama/17876)
* feat: Add a batched version of ssm_conv

This was done using Claude Code. It found a number of optimizations around
how the threads were organized, resulting in a huge performance boost!

Branch: Mamba2SSD

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Optimized SSM_SCAN kernel for metal

This used Claude Code and resulted in a modest performance improvement
while maintaining correctness.

Branch: Mamba2SSD

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* test: Add test-backend-ops perf tests for SSM_CONV

Branch: SSMKernelImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* test: Real representative tests for SSM_CONV

Branch: SSMKernelImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* refactor: Use function constant for ssm_conv batch size

Branch: SSMKernelImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* test: backend op tests for ssm_scan from granite4 1b-h

Branch: SSMKernelImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* style: remove commented out templates

Branch: SSMKernelImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: float4 version of ssm_conv_batched

Branch: SSMKernelImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Add missing ggml_metal_cv_free

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-12-12 17:53:23 +02:00
Piotr Wilkin (ilintar) 2817582be2
Add DIAG for CUDA (llama/17873)
* Add DIAG for CUDA

* Refactor parameters
2025-12-12 17:53:23 +02:00
Gabe Goodhart 41bbc034f0
ggml : Provide macos-specific backtrace printing to avoid terminal death (llama/17869)
* fix: Provide macos-specific backtrace printing to avoid terminal death

Branch: MacOSSafeBacktrace

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Add GGML_BACKTRACE_LLDB env var to enable using lldb for backtrace

Branch: MacOSSafeBacktrace

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

---------

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-12-12 17:53:22 +02:00
Georgi Gerganov b6ae0b29d1
metal : print node names for debugging (llama/17882) 2025-12-12 17:53:22 +02:00
Sigbjørn Skjæret ba463fb577
ggml : allow fill node alloc inplace (llama/17870) 2025-12-12 17:53:22 +02:00
Chenguang Li 79d86a5c2c
CANN: add support for partial RoPE and Vision mode (llama/17543)
* cann: add support for partial RoPE and Vision mode

Add support for two important RoPE variants: partial rotation (rope_dims < ne0)
and Vision mode rotation.

1. Support for partial RoPE (rope_dims < ne0):
   - Split tensor into head (first rope_dims dimensions) and tail portions
   - Apply rotation only to head portion using RotaryPositionEmbedding operator
   - Copy unrotated tail portion directly from source to destination
   - Handle both contiguous and non-contiguous tensor layouts

2. Support for Vision mode (GGML_ROPE_TYPE_VISION):
   - Set rope_dims = ne0 for Vision mode to rotate entire tensor
   - Vision mode pairs dimension i with dimension i+n_dims (where n_dims = ne0/2)
   - No tail handling needed since entire tensor is rotated

Implementation details:
   - Use has_tail flag to determine execution path: head/tail splitting when
     rope_dims < ne0, or full tensor rotation when rope_dims == ne0
   - Support both F32 and F16 data types with intermediate F32 conversion
   - Copy non-contiguous tensors to contiguous buffers before calling
     RotaryPositionEmbedding operator for compatibility
   - Improve cache invalidation logic to include rope_dims and indep_sects
     parameters

These enhancements enable CANN backend to handle various RoPE configurations
used in modern vision-language models and models with partial rotation.

* cann: fix review comment
2025-12-12 17:53:22 +02:00
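The head/tail split in the CANN partial-RoPE commit above can be sketched in scalar C++. This is purely illustrative and not the CANN kernel; the cos/sin caches and the "neox"-style pairing (dimension i with i + rope_dims/2) are assumptions for the sketch, and the commit notes that Vision mode instead pairs i with i + n_dims where n_dims = ne0/2.

```c++
// Illustrative sketch of partial RoPE for one row: rotate only the first
// rope_dims values, copy the unrotated tail straight through.
#include <cstddef>

void rope_partial_row(const float * src, float * dst, size_t ne0, size_t rope_dims,
                      const float * cos_cache, const float * sin_cache) {
    // head: rotate the first rope_dims values, pairing i with i + rope_dims/2
    for (size_t i = 0; i < rope_dims/2; ++i) {
        const float c  = cos_cache[i];
        const float s  = sin_cache[i];
        const float x0 = src[i];
        const float x1 = src[i + rope_dims/2];
        dst[i]               = x0*c - x1*s;
        dst[i + rope_dims/2] = x0*s + x1*c;
    }
    // tail: dimensions beyond rope_dims are copied unrotated
    for (size_t i = rope_dims; i < ne0; ++i) {
        dst[i] = src[i];
    }
}
```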
Johannes Gäßler bef1f5a57e
CUDA: fix FP16 overflow in tile FA kernel (llama/17875) 2025-12-12 17:53:22 +02:00
Jay Zenith 821c2071ab
cuda : add FILL op support (llama/17851)
* cuda : add FILL op support

* cuda : add missing FILL op files
2025-12-12 17:53:22 +02:00
wsbagnsv1 e1562e85fc
cuda: optimize SOLVE_TRI using registers and FMAF (llama/17703)
* ggml-cuda: optimize solve_tri_f32_fast and fix stride handling

- Switch from using shared memory for the RHS/solution matrix to a register-based approach (x_low, x_high), reducing shared memory pressure and bank conflicts.
- Implement explicit `fmaf` instructions for the reduction loop.
- Update kernel arguments to pass strides in bytes rather than elements to align with standard ggml tensor arithmetic (casting to `char *` before addition).
- Remove unused `MAX_K_FAST` definition.

* Small cleanup

* Remove comments in solve_tri.cu

* Update ggml/src/ggml-cuda/solve_tri.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml/src/ggml-cuda/solve_tri.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml/src/ggml-cuda/solve_tri.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Use const for variables in solve_tri.cu

* Replace fmaf with more readable code

* remove last fmaf

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-12-12 17:53:21 +02:00
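The "strides in bytes, cast to `char *` before addition" convention mentioned in the SOLVE_TRI commit above is the standard ggml tensor-addressing pattern. A minimal sketch with a hypothetical 2D layout:

```c++
// Byte-stride addressing as used throughout ggml: strides (nb*) are in bytes,
// so the base pointer is cast to char * before the offset is added.
#include <cstddef>

static inline float * elem_f32(void * data, size_t i0, size_t i1,
                               size_t nb0, size_t nb1) {
    // nb0/nb1 are byte strides for dims 0 and 1 (nb0 == sizeof(float) when contiguous)
    return (float *)((char *) data + i0*nb0 + i1*nb1);
}
```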
ixgbe c8d0ee2f9f
ggml-cpu: add ggml_thread_cpu_relax with Zihintpause support (llama/17784)
* ggml-cpu: add ggml_thread_cpu_relax with Zihintpause support

Signed-off-by: Wang Yang <yangwang@iscas.ac.cn>

* cmake: enable RISC-V zihintpause extension for Spacemit builds

* readme : add ZIHINTPAUSE support for RISC-V

---------

Signed-off-by: Wang Yang <yangwang@iscas.ac.cn>
2025-12-12 17:53:21 +02:00
lovedheart d6d44fac69
Vulkan: improve mul_mat_vec_iq1_m (llama/16907)
* Optimize Vulkan shader for matrix-vector multiplication

* Revert changes on compute_outputs and main

Refactor compute_outputs to handle remaining rows correctly.

* Fix trailing whitespace
2025-12-12 17:53:21 +02:00
Law Po Ying 447ef8633b
sycl: add missing BF16 conversion support for Intel oneAPI (llama/17780)
* sycl: add missing BF16 conversion support for Intel oneAPI

* Fix Line 645: Trailing whitespace
2025-12-12 17:53:21 +02:00
Jeff Bolz 898f876fe2
vulkan: perf_logger improvements (llama/17672)
* vulkan: perf_logger improvements

- Move perf_logger from device to ctx.
- Add an env var to control the frequency we dump the stats. If you set a very
large value, it just dumps when the ctx is destroyed.
- Add a fusion info string to the tracking, only log one item per fused op.
- Fix MUL_MAT_ID flops calculation.

* fix vector sizes
2025-12-12 17:53:21 +02:00
Vishal Singh ebff8f9db9
ggml-zendnn : add ZenDNN backend for AMD CPUs (llama/17690)
* ggml-zennn: add ZenDNN backend support

* ggml-zendnn : address ZenDNN backend review fixes and suggestions

* docs : apply blockquote syntax to ZenDNN docs

---------

Co-authored-by: Manoj Kumar <mkumar@zettabolt.com>
2025-12-12 17:53:21 +02:00
Phylliida Dev c5e1807071
ggml : add circular tiling support to pad, for Vulkan, CUDA, and CPU (used for making seamless textures) (llama/16985)
* Feat: Added vulkan circular tiling support

* Feat: Added cpu circular

* Feat: Added cuda kernels

* Added tests

* Added tests

* Removed non-pad operations

* Removed unneded changes

* removed backend non pad tests

* Update test-backend-ops.cpp

* Fixed comment on pad test

* removed trailing whitespace

* Removed unneded test in test-backend-ops

* Removed removed test from calls

* Update ggml/src/ggml-vulkan/vulkan-shaders/pad.comp

Co-authored-by: Ruben Ortlam <picard12@live.de>

* Fixed alignment

* Formatting

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

* Format pad

* Format

* Clang format

* format

* format

* don't change so much stuff

* clang format and update to bool

* fix duplicates

* don't need to fix the padding

* make circular bool

* duplicate again

* rename vulkan to wrap around

* Don't need indent

* moved to const expr

* removed unneded extra line break

* More readable method calls

* Minor wording changes

* Added final newline

* Update ggml/include/ggml.h

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/include/ggml.h

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Added circular pad ext tests

* Gate non circular pad devices

* Cleaned gating of non-circular pad devices

---------

Co-authored-by: Phylliida <phylliidadev@gmail.com>
Co-authored-by: Ruben Ortlam <picard12@live.de>
Co-authored-by: Aman Gupta <amangupta052@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-12-12 17:53:20 +02:00
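The circular (wrap-around) padding added in the commit above maps out-of-range destination indices back into the source modulo its size, which is what makes the result tile seamlessly. A minimal 1D CPU sketch, assuming hypothetical helper names:

```c++
// Minimal CPU sketch of circular ("wrap-around") padding for one dimension.
#include <cstddef>
#include <vector>

static inline size_t wrap(long long i, long long n) {
    return (size_t)(((i % n) + n) % n);   // handles negative offsets too
}

std::vector<float> pad_circular_1d(const std::vector<float> & src,
                                   long long pad_left, long long pad_right) {
    const long long n = (long long) src.size();
    std::vector<float> dst((size_t)(n + pad_left + pad_right));
    for (long long j = 0; j < (long long) dst.size(); ++j) {
        dst[(size_t) j] = src[wrap(j - pad_left, n)];   // wrap back into the source
    }
    return dst;
}
```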
Johannes Gäßler 94be71911f
HIP: fix RDNA3 FP16/BF16 matrix multiplication (llama/17817) 2025-12-12 17:53:20 +02:00
Sky b67e3abdb2
ggml : improve error handling for search path existence checks (llama/17653)
* Improve error handling for search path existence checks

Refactor existence checks for search paths using std::error_code to handle potential errors.

* Improve cache file existence check with error code

Update fs::exists to use std::error_code for error handling.

* Simplify existence check for search paths

Simplify existence check for search paths

* Fix logging path in error message for posix_stat

* Update ggml/src/ggml-backend-reg.cpp

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

* Adapt to the coding standard

---------

Co-authored-by: Aman Gupta <amangupta052@gmail.com>
2025-12-12 17:53:20 +02:00
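The error-handling change above uses the non-throwing `std::filesystem::exists` overload, so a failing stat on a search path (e.g. a permission problem) is reported through a `std::error_code` and the path is simply skipped. A hedged sketch, not the actual ggml-backend-reg code:

```c++
// Sketch: collect only the search paths whose existence can be verified,
// using the error_code overload instead of the throwing one.
#include <filesystem>
#include <string>
#include <system_error>
#include <vector>

namespace fs = std::filesystem;

std::vector<fs::path> existing_paths(const std::vector<std::string> & search_paths) {
    std::vector<fs::path> out;
    for (const auto & p : search_paths) {
        std::error_code ec;
        if (fs::exists(p, ec) && !ec) {   // no exception on stat failure
            out.push_back(p);
        }
    }
    return out;
}
```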
Jeff Bolz c66c71e9f4
vulkan: Use one row per workgroup for f32 mmv (llama/17711)
The MoE models have a mul_mat_vec with very small m (32, 64, 128) right before
the topk_moe selection. Running multiple rows per wg doesn't utilize the SMs
well. I think even for larger m, f32 is so bandwidth-limited that running
multiple rows doesn't help.
2025-12-12 17:53:20 +02:00
Jeff Bolz 875d861473
vulkan: support solve_tri with larger N/K values (llama/17781)
Split N into chunks to fit into shared memory.
If K > 128, use a larger workgroup with enough invocations.
Add perf tests matching qwen3next.
2025-12-12 17:53:20 +02:00
Georgi Gerganov 41cf229d72
metal : fix build(#17799)
* metal : fix build

* tests : fix context destruction
2025-12-12 17:53:20 +02:00
Masato Nakasaka a8d02735f7
vulkan: Replace deprecated VK_EXT_validation_features (llama/17637)
* replaced deprecated VK_EXT_validation_features

* forgot to remove old code
2025-12-12 17:53:19 +02:00
Masato Nakasaka 191e5f46a2
vulkan: Fix mismatch in TOPK_MOE unit test (llama/17541)
* Fix shader to support 2D workgroup mapping to a single subgroup

* Set required_subgroup_size

topk_moe shader requires static WARP_SIZE and actual subgroup size to match
2025-12-12 17:53:19 +02:00
Jeff Bolz 64a3f573e0
vulkan: add more num_blocks instantiations in rms_norm (llama/17701) 2025-12-12 17:53:19 +02:00
Jeff Bolz 0484147ab2
vulkan: fix top_k bug when there are ties in the input (llama/17659)
* vulkan: Reduce temporary memory usage for TOP_K

- Compute row size for the temp buffer based on the output of the first pass.
- Update shader addressing math to use the output row size
- Pass the output row size as "ncols_output", what used to be "ncols_output" is now "k"

For the common case of K=40 and src0=(200000,1,1,1), this reduces the temporary buffer
from about 3.2MB to 500KB.

* vulkan: fix top_k bug when there are ties in the input

I noticed by inspection a bug in the vulkan top_k shader where if the least
value in the top_k appears multiple times we could end up writing those extra
copies out rather than some larger values (if the larger values are on higher
numbered threads).

I rewrote the test verification to handle this case, where the final index set
is not necessarily the same.

* Update tests/test-backend-ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-12-12 17:53:19 +02:00
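The tie case described above means the top-k index set is not unique, so a robust test compares the selected values rather than the exact indices. A hedged sketch of such an order-insensitive check (illustrative only, not the test-backend-ops code):

```c++
// Compare two top-k results by the values they select, sorted, so that
// different but equally valid index sets (due to ties) still match.
#include <algorithm>
#include <cstdint>
#include <vector>

bool topk_values_match(const std::vector<float> & data,
                       const std::vector<int64_t> & idx_ref,
                       const std::vector<int64_t> & idx_out) {
    auto values_sorted = [&](const std::vector<int64_t> & idx) {
        std::vector<float> v;
        for (int64_t i : idx) v.push_back(data[(size_t) i]);
        std::sort(v.begin(), v.end());
        return v;
    };
    return values_sorted(idx_ref) == values_sorted(idx_out);
}
```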
Acly 0b53759b29
vulkan : support conv-2d with large output size (llama/17685) 2025-12-12 17:53:19 +02:00
Reese Levine 23984be4da
ggml webgpu: unary op support, code refactoring, ops support (llama/17764)
* Squashed commit of the following:

commit b3c6bf4b0450d8d452b934df27a0fb7cb53cd755
Author: Abhijit Ramesh <abhijitramesh2k@gmail.com>
Date:   Mon Dec 1 18:29:00 2025 -0800

    ggml webgpu: fix xielu parameter passing (llama/11)

    The XIELU operation was incorrectly using static_cast to convert
    float parameters to uint32_t, which converted numeric values instead
    of preserving IEEE 754 bit patterns. This caused incorrect values
    to be interpreted by the GPU shader.

    * Use reinterpret_cast to preserve float bit patterns when passing
      through uint32_t params buffer
    * Update WGSL shader parameter types from u32 to f32
    * Re-enable XIELU support (was disabled due to numerical issues)

    Fixes NMSE test failures for XIELU operation on WebGPU backend.

commit 5ca9b5e49ea7cddc9ab7c8b43a11a9c76a4dff4a
Author: neha-ha <137219201+neha-ha@users.noreply.github.com>
Date:   Tue Nov 18 12:17:00 2025 -0800

    Refactored pipelines and workgroup calculations (llama/10)

    * refactored pipelines

    * refactored workgroup calculation

    * removed commented out block of prior maps

    * Clean up ceiling division pattern

    ---------

    Co-authored-by: Neha Abbas <nehaabbas@eduroam-169-233-141-223.ucsc.edu>
    Co-authored-by: Reese Levine <reeselevine1@gmail.com>

Author: James Contini <jamescontini@gmail.com>
Date:   Wed Oct 29 23:13:06 2025 -0700

    formatted embed wgsl and ggml-webgpu.cpp

commit e1f6baea31645e5d96ad53664acae856f74b96f4
Author: James Contini <jamescontini@gmail.com>
Date:   Wed Oct 29 23:08:37 2025 -0700

    implemented REPL_Template support and removed bug in unary operators kernel

commit 8c70b8fece445cdc9a8c660dbddbf201e52da2bb
Author: James Contini <jamescontini@gmail.com>
Date:   Wed Oct 15 16:14:20 2025 -0700

    responded and dealt with PR comments

commit f9282c660c10dec4487d434549bdb707a9cd9f37
Author: James Contini <jamescontini@gmail.com>
Date:   Sun Oct 12 13:41:41 2025 -0700

    removed unnecessary checking if node->src[1] exists for unary operators

commit 4cf28d7dec41c29186d66152735b244c5699f9dc
Author: James Contini <jamescontini@gmail.com>
Date:   Sun Oct 12 13:32:45 2025 -0700

    All operators (including xielu) working

commit 74c6add1761a59d2c2ff60b60e8ad3c8300f6d3e
Author: James Contini <jamescontini@gmail.com>
Date:   Fri Oct 10 13:16:48 2025 -0700

    fixed autoconfig

commit 362749910be4f0120c8ffb21ceddeb7d2c088e51
Author: James Contini <jamescontini@gmail.com>
Date:   Fri Oct 10 13:10:46 2025 -0700

    removed vestigial files

commit cb0858333785757804c5104e59c4981843207c16
Author: James Contini <jamescontini@gmail.com>
Date:   Fri Oct 10 12:59:32 2025 -0700

    abides by editor-config

commit 5360e2852a4b51197d7d67d0a5d42e908b02d7ed
Author: James Contini <jamescontini@gmail.com>
Date:   Fri Oct 10 12:45:57 2025 -0700

    rms_norm double declaration bug atoned

commit 7b09baa4aa53711be5a126043670cc182c78bfcd
Merge: 8a6ec843 74b8fc17
Author: James Contini <jamescontini@gmail.com>
Date:   Fri Oct 10 11:50:03 2025 -0700

    resolving merge conflicts

commit 8a6ec843a50ab82f8cef59b4558eb63f318ba02d
Author: James Contini <jamescontini@gmail.com>
Date:   Wed Oct 8 18:06:47 2025 -0700

    unary operators pass ggml tests

commit c3ae38278a2db236adc5912c9140e4f0d63f2c19
Author: James Contini <jamescontini@gmail.com>
Date:   Wed Oct 1 16:22:40 2025 -0700

    neg passes backend test

commit aa1c9b2f8877a405470ca56709c42a1fd43713de
Author: James Contini <jamescontini@gmail.com>
Date:   Tue Sep 30 23:55:27 2025 -0700

    neg f16xf32xip builds and runs, haven't actually run a model that uses the neg kernel yet though

Co-authored-by: James Contini <jamescontini@gmail.com>
Co-authored-by: Neha Abbas <neabbas@ucsc.edu>
Co-authored-by: Abhijit Ramesh <abhijitramesh2k@gmail.com>

* Remove extra code and format

* Add ops documentation (finally)

* Update ggml/src/ggml-webgpu/wgsl-shaders/embed_wgsl.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: James Contini <jamescontini@gmail.com>
Co-authored-by: Neha Abbas <neabbas@ucsc.edu>
Co-authored-by: Abhijit Ramesh <abhijitramesh2k@gmail.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-12-12 17:53:18 +02:00
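The XIELU parameter-passing fix in the squashed WebGPU commit above hinges on preserving the IEEE 754 bit pattern when a float parameter travels through a uint32_t params buffer. A hedged sketch of the difference; the backend may use a reinterpret_cast/memcpy equivalent rather than `std::bit_cast` (C++20):

```c++
// Bit-pattern vs numeric conversion when packing a float into a u32 params buffer.
#include <bit>
#include <cstdint>

uint32_t pack_f32_param(float x) {
    return std::bit_cast<uint32_t>(x);   // bit pattern preserved; the shader reads it back as f32
}

uint32_t pack_f32_param_wrong(float x) {
    return (uint32_t) x;                 // numeric conversion: 0.5f becomes 0 — the original bug
}
```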
Jeff Bolz 7e97d3b069
vulkan: enable mmvq for q2_k on NVIDIA (llama/17675) 2025-12-12 17:53:18 +02:00
Jeff Bolz 32ba1ec8e0
vulkan: set all memory allocations to high priority (llama/17624)
* vulkan: set all memory allocations to high priority

* gate by env var
2025-12-12 17:53:18 +02:00
Georgi Gerganov aefcd75f4f
rpc : fix alloc size logic (llama/17116)
* rpc : fix alloc size logic

* rpc : bump version
2025-12-12 17:53:18 +02:00
Georgi Gerganov 322903fa67
metal : add residency sets keep-alive heartbeat (llama/17766)
* examples : add idle

* metal : attach residency sets to queue

* idle : add link

* idle : adjust intervals

* metal : add residency sets keep-alive heartbeat

* cont : adjust default keep-alive time
2025-12-12 17:53:18 +02:00
Johannes Gäßler 4170159dcd
HIP : fix RDNA4 build (llama/17792) 2025-12-12 17:53:18 +02:00
shalinib-ibm d30b744047
Q4/Q8 Tiled Gemm Optimization. (llama/16999) 2025-12-12 17:53:17 +02:00
Johannes Gäßler 14502d6561
CUDA: fix FA VKQ accumulator overflow (llama/17746) 2025-12-12 17:53:17 +02:00
Jiacheng (Jason) Chen e3f3c6ead1
HIP: enable WMMA-MMQ INT kernels for RDNA 3 (llama/17576)
* enabled wmma instructions for most quantizations other than q2k

* fixed the last q2_k test case failure

* address comments: fix out of bound write for RDNA4, add comments after #endif

* clean up rebase: fix ne error in half2

* fix the EditorConfig CI
2025-12-12 17:53:17 +02:00
Piotr Wilkin (ilintar) 8d44d6181a
Add support for CUMSUM and TRI for CUDA. (llama/17584)
* Add support for CUMSUM and TRI for CUDA.

* Minor optimizations.

* Correct warp_prefix_inclusive_sum in float2 variant to return float2

* Optimize TRI

* Whitespace

* Fix strides.

* Implement double loop

* Whitespace

* Fix HIP compilation bugs

* Optimizations + big case performance tests

* Implement using CUB with fallback to custom kernel

* Remove error message.

* Fixes from code review

* Comment out CPU-unsupported F16/BF16 cases to fix CI

* Fine, you win :P

* Fix last cast, use NO_DEVICE_CODE and GGML_UNUSED_VARS

* Vary warp-size based on physical warp size

* Add GGML_UNUSED_VARS in tri as well

* Use constexpr and call prefix_inclusive with warp_size template param

* Update ggml/src/ggml-cuda/cumsum.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Change to tid % warp_size

* Fix strides; hardcode mask; add ggml_lane_mask_t

* Missing renames, remove unused get_warp_mask(), explicit calls to ggml_cuda_info()

* Too hasty...

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-12-12 17:53:17 +02:00
Gabe Goodhart 8902c9d976
metal: TRI, FILL, EXPM1, SOFTPLUS (llama/16623)
* feat(wip): Port initial TRI impl from pervious work

The kernel does not work and is not optimized, but the
code compiles and runs, so this will be the starting point
now that the core op has been merged.

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Remove argument for constant val override

This was added in the original draft, but later removed. With this, the
kernel now passes tests.

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Move the ttype conditional to templating to avoid conditional in kernel

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Type fixes

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* feat: Add softplus for metal

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Add EXPM1 for metal

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Add FILL for metal

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* refactor: Branchless version of tri using _ggml_vec_tri_cmp as a mask

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Remove unused arguments

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* refactor: Use select instead of branch for softplus non-vec

Branch: ggml-cumsum-tri

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

---------

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-12-12 17:53:17 +02:00
Alberto Cabrera Pérez f96ebc92d2
ggml-cpu : remove asserts always evaluating to false (llama/17728) 2025-12-12 17:53:17 +02:00
Georgi Gerganov 194d016456
metal : use params per pipeline instance (llama/17739) 2025-12-12 17:53:16 +02:00
Adrien Gallouët 92e50155c9
build : move _WIN32_WINNT definition to headers (llama/17736)
Previously, cmake was forcing `_WIN32_WINNT=0x0A00` for MinGW builds,
which caused "macro redefined" warnings with toolchains that already define the version.

This also removes the `GGML_WIN_VER` variable as it is no longer needed.

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2025-12-12 17:53:16 +02:00
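Moving the definition into headers typically means a guarded default, so toolchains that already define the macro no longer see a redefinition. A sketch of the usual pattern, not the exact ggml header:

```c++
// Guarded Windows version default: only defined if the toolchain didn't set it.
#if defined(_WIN32) && !defined(_WIN32_WINNT)
#   define _WIN32_WINNT 0x0A00   // Windows 10
#endif
```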
Herman Semenoff 3794a0d3b6
ggml-cpu: remove duplicate conditional check 'iid' (llama/17650) 2025-12-12 17:53:16 +02:00
Johannes Gäßler 7adbcafb6c
CUDA: generalized (mma) FA, add Volta support (llama/17505)
* CUDA: generalized (mma) FA, add Volta support

* use struct for MMA FA kernel config

---------

Co-authored-by: Aman Gupta <aman>
2025-12-12 17:53:16 +02:00
Georgi Gerganov 4a00f2e3a4
metal : fix data race in pipeline library (llama/17731) 2025-12-12 17:53:16 +02:00
Reese Levine d263bdbfb6
ggml webgpu: add support for emscripten builds (llama/17184)
* Faster tensors (llama/8)

Add fast matrix and matrix/vector multiplication.

* Use map for shader replacements instead of pair of strings

* Wasm (llama/9)

* webgpu : fix build on emscripten

* more debugging stuff

* test-backend-ops: force single thread on wasm

* fix single-thread case for init_tensor_uniform

* use jspi

* add pthread

* test: remember to set n_thread for cpu backend

* Add buffer label and enable dawn-specific toggles to turn off some checks

* Intermediate state

* Fast working f16/f32 vec4

* Working float fast mul mat

* Clean up naming of mul_mat to match logical model, start work on q mul_mat

* Setup for subgroup matrix mat mul

* Basic working subgroup matrix

* Working subgroup matrix tiling

* Handle weirder sg matrix sizes (but still % sg matrix size)

* Working start to gemv

* working f16 accumulation with shared memory staging

* Print out available subgroup matrix configurations

* Vectorize dst stores for sg matrix shader

* Gemv working scalar

* Minor set_rows optimization (llama/4)

* updated optimization, fixed errors

* non vectorized version now dispatches one thread per element

* Simplify

* Change logic for set_rows pipelines

---------

Co-authored-by: Neha Abbas <nehaabbas@macbookpro.lan>
Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>
Co-authored-by: Reese Levine <reeselevine1@gmail.com>

* Comment on dawn toggles

* Working subgroup matrix code for (semi)generic sizes

* Remove some comments

* Cleanup code

* Update dawn version and move to portable subgroup size

* Try to fix new dawn release

* Update subgroup size comment

* Only check for subgroup matrix configs if they are supported

* Add toggles for subgroup matrix/f16 support on nvidia+vulkan

* Make row/col naming consistent

* Refactor shared memory loading

* Move sg matrix stores to correct file

* Working q4_0

* Formatting

* Work with emscripten builds

* Fix test-backend-ops emscripten for f16/quantized types

* Use emscripten memory64 to support get_memory

* Add build flags and try ci

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>

* Remove extra whitespace

* Move wasm single-thread logic out of test-backend-ops for cpu backend

* Disable multiple threads for emscripten single-thread builds in ggml_graph_plan

* Fix .gitignore

* Add memory64 option and remove unneeded macros for setting threads to 1

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2025-12-12 17:53:16 +02:00
Jeff Bolz 86cb5ab93f
vulkan: Reduce temporary memory usage for TOP_K (llama/17623)
- Compute row size for the temp buffer based on the output of the first pass.
- Update shader addressing math to use the output row size
- Pass the output row size as "ncols_output", what used to be "ncols_output" is now "k"

For the common case of K=40 and src0=(200000,1,1,1), this reduces the temporary buffer
from about 3.2MB to 500KB.
2025-12-12 17:53:15 +02:00
xiaobing318 fffdf679d4
cmake : add utf8 compilation options for msvc (llama/17682) 2025-12-12 17:53:15 +02:00
Adrien Gallouët 16688c6d2c
ggml : use svcntb() for SVE vector length detection (llama/17474)
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2025-12-12 17:53:15 +02:00
TianHao324 a64d46a529
CANN: Disable Ger operator of OUT_PROD on 310p device (llama/17563) 2025-12-12 17:53:15 +02:00
Daniel Bevenius 201b910743
ggml : remove redundant n_copies check when setting input/output (llama/17612)
This commit removes a redundant check for sched->n_copies > 1 when
setting input and output flags on tensor copies in
ggml_backend_sched_split_graph.

The motivation for this change is to clarify the code as the outer if
statement already performs this check.
2025-12-12 17:53:15 +02:00
Adrien Gallouët e2537b4af3
ggml : add fallback definition for HWCAP2_SVE2 (llama/17683)
This aligns with the other HWCAP2 feature flags.

See #17528

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2025-12-12 17:53:15 +02:00
Aman Gupta 4c89232b5c
ggml-cuda: reorder only relevant nodes (llama/17639) 2025-12-12 17:53:14 +02:00
Neo Zhang Jianyu 26732d28c4
enhance argsort for UT (llama/17573)
Co-authored-by: Neo Zhang <zhang.jianyu@outlook.com>
2025-12-12 17:53:14 +02:00
Georgi Gerganov 32090930f7
metal : add FA head size 48 (llama/17619) 2025-12-12 17:53:14 +02:00
Georgi Gerganov 7cd3de89bf
ggml : extend the GGML_SCHED_NO_REALLOC debug logic of the scheduler (llama/17617) 2025-12-12 17:53:14 +02:00
Aman Gupta 6cc2d0534f
llama-graph: avoid expand_forward for fusion (llama/17633) 2025-12-12 17:53:14 +02:00
Tarek Dakhran 0defeee679
model: LFM2-VL fixes (llama/17577)
* Adjust to pytorch

* Add antialiasing upscale

* Increase number of patches to 1024

* Handle default marker insertion for LFM2

* Switch to flag

* Reformat

* Cuda implementation of antialias kernel

* Change placement in ops.cpp

* consistent float literals

* Pad only for LFM2

* Address PR feedback

* Rollback default marker placement changes

* Fallback to CPU implementation for antialias implementation of upscale
2025-12-12 17:53:14 +02:00
Gilad S. 706647202e
ggml: fix: macOS build with `-DGGML_BACKEND_DL=ON` (llama/17581) 2025-12-12 17:53:13 +02:00
Aman Gupta e68ee6e281
CUDA: add stream-based concurrency (llama/16991)
* CUDA: add stream-based concurrency

* HIP: fix hipStreamWaitEvent define and nodiscard warnings

* ggml-cuda: fix fusion inside stream

* ggml-cuda: fix bug w.r.t first stream launch

* ggml-cuda: format

* ggml-cuda: improve assert message

* ggml-cuda: use lambda instead of duplicating code

* ggml-cuda: add some more comments

* ggml-cuda: add more detailed comments about concurrency

* ggml-cuda: rename + remove unused var

* ggml-cuda: fix condition for stream launch

* ggml-cuda: address review comments, add destructor

* common.cuh: add is_valid for concurrent events

* common.cuh: make comment better

* update comment

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* update comment

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* common.cuh: fix lower_bound condition + remove join_node data from write_ranges

* ggml-cuda: fix overlap condition + shadowing parameter

---------

Co-authored-by: Carl Philipp Klemm <carl@uvos.xyz>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-12-12 17:53:13 +02:00
Mahekk Shaikh 2e4a7a21fa
cuda : add error checking for cudaMemcpyAsync in argsort (llama/17599)
* cuda : add error checking for cudaMemcpyAsync in argsort (llama/12836)

* fix indentation
2025-12-12 17:53:13 +02:00
Acly 2258930c2e
vulkan : fix FA mask load with bounds check (coopmat2) (llama/17606) 2025-12-12 17:53:13 +02:00
Neo Zhang a3459484bf
sycl : support to malloc memory on device more than 4GB, update the doc and script (llama/17566)
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2025-12-12 17:53:13 +02:00
ixgbe 28dff06555
ggml: replace hwcap with riscv_hwprobe for RVV detection (llama/17567)
Signed-off-by: Wang Yang <yangwang@iscas.ac.cn>
2025-12-12 17:53:12 +02:00
Ruben Ortlam 2fcc0a3a9f
Vulkan: MMVQ Integer Dot K-Quant and MUL_MAT_ID support (llama/16900)
* vulkan: split mul_mmq_funcs for mul_mat_vecq use

* add mxfp4 mmvq

* add q2_k mmvq

* add q3_k mmvq

* add q4_k and q5_k mmvq

* add q6_k mmvq

* handle 4x4 quants per mmvq thread

* enable MUL_MAT_ID mmvq support

* enable subgroup optimizations for mul_mat_vec_id shaders

* device tuning

* request prealloc_y sync after quantization

* fix indentation

* fix llvmpipe test failures

* fix mul_mat_id mmvq condition

* fix unused variable warning
2025-12-12 17:53:12 +02:00
Jeff Bolz dbf8766ffa
vulkan: improve topk perf for large k, fix overflow in unit tests (llama/17582) 2025-12-12 17:53:12 +02:00
Diego Devesa 463003e76c
ggml : add GGML_SCHED_NO_REALLOC option to disable reallocations in ggml_backend_sched (llama/17276)
* ggml : add GGML_SCHED_NO_REALLOC option to disable reallocations in ggml_backend_sched
Enabled in ggml-ci for testing.

* llama : update worst-case graph for unified cache

* ci : disable op offload in some tests

* fix spelling

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-12-12 17:53:12 +02:00
R0CKSTAR c372bdbb3c
enable fp16/fast_fp16/bf16_mma on PH1 (llama/17551)
* [MUSA] enable fp16/fast_fp16/bf16_mma on PH1

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Update ggml/src/ggml-cuda/fattn-vec.cuh

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml/src/ggml-cuda/fattn-vec.cuh

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml/src/ggml-cuda/fattn-tile.cuh

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Address review comments

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-12-12 17:53:12 +02:00
Aman Gupta 90ca4e0a07
ggml-cuda: add stricter checking for fusion (llama/17568)
* ggml-cuda: make conditions for fusion more explicit

* ggml-cuda: remove size check as std::equal already does it
2025-12-12 17:53:12 +02:00
Piotr Wilkin (ilintar) 43441ff58a
model : Qwen3 Next (llama/16095)
* Qwen3 Next - cleaned up version

* Whitespaces and stuff

* Correct minor errors

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Misc. fixes.

* Clean up code, add missing hybrid qualifier

* Did someone transpose the SOLVE_TRI result matrix? Perhaps...

* Whitespace

* Proper tensors for cb calls

* Use llama-graph.h vertical alignment

* BROKEN: chunking

* Set new tensors as inputs.

* Proper chunk logic

* It's the circle of life...

* More shenanigans for n_seq > 1

* Nail in the coffin?

* Fix Windows build

* Eh, one fails on Windows, the other fails on Mac... just use general capture.

* quant : cleanup

* model : cleanup

* qwen3 : cleanup

* cont : cleanup

* cont : cleanup

* ggml : revert change

* qwen3 : cleanup

* cont : cleanup

* Readd cmath

* qwen3 : fix typo

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Usual suspects

* fix my bad suggestion

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-12-12 17:53:11 +02:00
Johannes Gäßler 37e4c2ed3a
CUDA: no FP16 arithmetic for vector FA kernel (llama/17558) 2025-12-12 17:53:11 +02:00
Jeff Bolz 7a20963140
vulkan: Implement GGML_OP_TRI (llama/17503)
* vulkan: Implement GGML_OP_TRI

* check types match
2025-12-12 17:53:11 +02:00
Radoslav Gerganov d26d1c8b85
rpc : cache and reuse compute graphs (llama/15405)
Store the last computed graph and reuse it when possible.
Also do not return a response from GRAPH_COMPUTE and assume it always
completes successfully. If this is not the case, the server closes
the connection. This saves us a network round trip to the server.
2025-12-12 17:53:11 +02:00
yulo f92d542d4d
HIP: enable mul_mat_f for RDNA4 (llama/17437)
* enable mmf for rdna4

* move some mmvf to mmf

* revert lds128 for wmma loading

* Revert "revert lds128 for wmma loading"

This reverts commit db9ae8b6b4738a5def5b393caa1611d52133e9b5.

* Revert "enable mmf for rdna4"

This reverts commit 698c9f24187b990e35c3b73a8067e5387e6ddbd4.

* Revert "move some mmvf to mmf"

This reverts commit 99b92bd6653cc8593607f641e44606391691792f.

* enable mul_mat for rdna4

---------

Co-authored-by: zhang hui <you@example.com>
2025-12-12 17:53:11 +02:00
Piotr Wilkin (ilintar) 51e842d106
SOLVE_TRI CUDA kernel for small matrices (llama/17457) 2025-12-12 17:53:11 +02:00
Neo Zhang Jianyu 93bc8dc5a8
refactor pad_reflect_1d to make the UT case pass (llama/17204)
Co-authored-by: Zhang Jianyu <zhang.jianyu@outlook.com>
2025-12-12 17:53:10 +02:00
Jeff Bolz 3727a36c48
vulkan: Implement SOLVE_TRI (llama/17486)
* vulkan: Implement SOLVE_TRI

* load B matrix through shared memory

* use FLOAT_TYPE
2025-12-12 17:53:10 +02:00
matt23654 e682af7886
cuda : fix UMA detection on discrete GPUs. (llama/17537) 2025-12-12 17:53:10 +02:00
Alberto Cabrera Pérez 93f6cdb9c0
ggml-cpu: aarm64: q4_K repack gemm and gemv implementations (dotprod only) (llama/17494)
* Enabled q4_K_4x8 path

* Fixed generic Q4_K 8x4 implementation

* wip: dotprod gemm

* Working arm q4_K dotprod gemm

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* Undo acc rename

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* Q4_K arm dotprod gemm

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* Fix: q4_qs reinterpret from uint to int

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* Removed comments

* Fixed macro guards

* Fixed unused vars in generic implementation

* Fixed unused vars in 8x4 repack

* Fixed unused vars in generic implementation, unneeded comment

* Missing arch fallback for x86

* minor : style

---------

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-12-12 17:53:10 +02:00
Acly ac92424b59
vulkan : move contiguous checks to device_supports_op (llama/17490)
* vulkan : remove op_supports_incontiguous and add missing constraints in device_supports_op

* im2col: remove contraints on src0 (kernel input)
2025-12-12 17:53:10 +02:00
Jeff Bolz 310db24fca
vulkan: use a fixed 1KB buffer for the add_rms_fusion opt (llama/17514) 2025-12-12 17:53:10 +02:00
lhez 74ef5dd1a9
opencl: add sqr, sqrt, mean and ssm_conv (llama/17476)
* opencl: add sqr

* opencl: add sqrt

* opencl: add mean

* opencl: add ssm_conv

* opencl: add missing cl_khr_fp16

* opencl: do sqrt in f32 then convert to f16 for better precision
2025-12-12 17:53:09 +02:00
Alberto Cabrera Pérez 3de4372465
Fix chunks being too small with small matrix sizes (llama/17526) 2025-12-12 17:53:09 +02:00
Jeff Bolz c8050e5fdc
vulkan: allow graph_optimize for prompt processing workloads (llama/17475) 2025-12-12 17:53:09 +02:00
Jeff Bolz d8b61e05f8
vulkan: Implement top-k (llama/17418)
* vulkan: Implement top-k

Each pass launches workgroups that each sort 2^N elements (where N is usually 7-10)
and discards all but the top K. Repeat until only K are left. And there's a fast
path when K==1 to just find the max value rather than sorting.

* fix pipeline selection

* vulkan: Add N-ary search algorithm for topk

* microoptimizations
2025-12-12 17:53:09 +02:00
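The pass structure described in the top-k commit above (each pass sorts a fixed-size block and discards all but its top K, repeating until only K remain, with a max-scan fast path for K==1) can be sketched on the CPU. Illustrative only; block sizes, data layout, and the N-ary search refinement are not shown:

```c++
// CPU sketch of iterative block-wise top-k reduction. Assumes block > k so
// that each pass strictly shrinks the candidate set (K==1 degenerates to a max scan).
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

std::vector<float> topk_passes(std::vector<float> cand, size_t k, size_t block) {
    while (cand.size() > k) {
        std::vector<float> next;
        for (size_t b = 0; b < cand.size(); b += block) {
            auto first = cand.begin() + (ptrdiff_t) b;
            auto last  = cand.begin() + (ptrdiff_t) std::min(cand.size(), b + block);
            const size_t keep = std::min(k, (size_t)(last - first));
            // keep only the `keep` largest values of this block
            std::partial_sort(first, first + (ptrdiff_t) keep, last, std::greater<float>());
            next.insert(next.end(), first, first + (ptrdiff_t) keep);
        }
        cand.swap(next);
    }
    std::sort(cand.begin(), cand.end(), std::greater<float>());
    return cand;   // the k largest values
}
```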
xctan fb31a19797
ggml-cpu : add RISC-V Zvfh impl for ggml_vec_mad_f16 (llama/17448)
* ggml-cpu : add RISC-V Zvfh impl for ggml_vec_mad_f16

* ggml-cpu : dedup scalar impl

* Update ggml/src/ggml-cpu/vec.h

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-12-12 17:53:09 +02:00
Adrien Gallouët 8e3560c7ce
ggml : fix ARM feature verification (llama/17519)
On arm64 with `cmake` version 3.31.6, the final feature verification fails:

    -- ARM detected flags: -mcpu=neoverse-v2+crc+sve2-aes+sve2-sha3+nossbs
    -- Performing Test GGML_MACHINE_SUPPORTS_dotprod
    -- Performing Test GGML_MACHINE_SUPPORTS_dotprod - Success
    -- Performing Test GGML_MACHINE_SUPPORTS_i8mm
    -- Performing Test GGML_MACHINE_SUPPORTS_i8mm - Success
    -- Performing Test GGML_MACHINE_SUPPORTS_sve
    -- Performing Test GGML_MACHINE_SUPPORTS_sve - Success
    -- Performing Test GGML_MACHINE_SUPPORTS_sme
    -- Performing Test GGML_MACHINE_SUPPORTS_sme - Failed
    -- Performing Test GGML_MACHINE_SUPPORTS_nosme
    -- Performing Test GGML_MACHINE_SUPPORTS_nosme - Success
    -- Checking for ARM features using flags:
    --   -U__ARM_FEATURE_SME
    --   -mcpu=neoverse-v2+crc+sve2-aes+sve2-sha3+nossbs+dotprod+i8mm+sve+nosme
    -- Performing Test HAVE_DOTPROD
    -- Performing Test HAVE_DOTPROD - Failed
    -- Performing Test HAVE_SVE
    -- Performing Test HAVE_SVE - Failed
    -- Performing Test HAVE_MATMUL_INT8
    -- Performing Test HAVE_MATMUL_INT8 - Failed
    -- Performing Test HAVE_FMA
    -- Performing Test HAVE_FMA - Success
    -- Performing Test HAVE_FP16_VECTOR_ARITHMETIC
    -- Performing Test HAVE_FP16_VECTOR_ARITHMETIC - Failed
    -- Performing Test HAVE_SME
    -- Performing Test HAVE_SME - Failed
    -- Adding CPU backend variant ggml-cpu: -U__ARM_FEATURE_SME;-mcpu=neoverse-v2+crc+sve2-aes+sve2-sha3+nossbs+dotprod+i8mm+sve+nosme

We need to explicitly replace `;` with spaces in the list to make
`CMAKE_REQUIRED_FLAGS` work correctly...

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2025-12-12 17:53:08 +02:00
Jiacheng (Jason) Chen bb7223da8a
HIP: Patch failed testcase in WMMA-MMQ kernels for RDNA 4 (llama/17502)
* patch failed test case MUL_MAT(type_a=q4_0,type_b=f32,m=576,n=512,k=576,bs=[1,1],nr=[1,1],per=[0,1,2,3],k_v=0,o=1) for enabling WMMA on RDNA4

* Quick clean up on mma.cuh to add ggml_cuda_memcpy_1 back in for half2 and bfloat162
2025-12-12 17:53:08 +02:00
hipudding f0c54d47e1
CANN: Add MROPE and IMROPE support (llama/17401)
* CANN: ROPE supports both MROPE and IMROPE.

1. Optimize the caching logic of rope_cache_init.
2. Add support for mRoPE and i-mRoPE.

Note that on Ascend 910B devices, it is necessary to disable FA
in CLIP and disable NZ-format conversion. These two issues are
still under investigation.

* Resolve review comments
2025-12-12 17:53:08 +02:00
Jeff Bolz 208450048c
vulkan: Implement GGML_OP_CUMSUM (llama/17479) 2025-12-12 17:53:08 +02:00
Georgi Gerganov 968db8bcfa
ggml : add ggml_top_k (llama/17365)
* ggml : add ggml_top_k

* cont : add ggml_argsort_top_k

* metal : add top_k support

* ggml : cleanup

* tests : add virtual err() function for test_case

* ggml : add comments
2025-12-12 17:53:08 +02:00
TianHao324 e00bb753d6
CANN: supports out_prod operator for F32 and F16 (llama/17406)
Co-authored-by: tianhao <tianhao42@huawei.com>
2025-12-12 17:53:08 +02:00
Jeff Bolz 273e4fe7ae
vulkan: Use fewer rows for scalar FA when HS is not a multiple of 16 (llama/17455) 2025-12-12 17:53:07 +02:00
Jeff Bolz 553d57a4e7
vulkan: more FA details in vk_perf_logger (llama/17443) 2025-12-12 17:53:07 +02:00
Jiacheng (Jason) Chen 371a21865a
HIP: WMMA-MMQ kernels for RDNA 4 (llama/17156)
* first commit naive test to enable mmq for RDNA4

* adding appropriate WMMA instructions

* git rebase on top of master: fixing the correctness of the mat mul operations, updating layout mappings for RDNA4

* clean up merge conflicts

* add comments and code clean up

* PR clean up, addressed comments

* enable MMQ fallback on RDNA4

* addressed comments: add guards in load generic, separate wmma branch for use_mmq function

* Revert build-xcframework.sh

* Formating: remove trailing whitespace

* revert CMake files

* clean up after rebase: remove duplicated change, revert cmake files

* clean up after rebase: revert changes from build-xcframework.sh

* clean up: remove extra space line in mma.cuh

* Revert "clean up: remove extra space line in mma.cuh"

This reverts commit b39ed57c4529906466bd0bc7c2a86e08fc2f8bee.
2025-12-12 17:53:07 +02:00
Alberto Cabrera Pérez f4ede89d24
ggml-cpu: arm64: q4_K repack gemm and gemv implementations (i8mm) (llama/16739)
* Enabled q4_K_8x8_q8_K path on ARM

* wip: I8mm qs multiplication, pending bias

* cpu : arm : REPACK gemm q4_K8x8 implementation

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* Guard gemm with proper features, improved superblock scale and min calc

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* cpu: arm: Implemented REPACK gemv for Q4_K

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* Removed completed TODO

* Fixed missing guards when selecting optimal repack type for Q4_K

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* Fixed macro guard for gemv

* Fixed wrong comment in GEMV

* Fixed warning for unused variable

* vdotq_s32 -> ggml_vdotq_s32

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>

* Clang-format issues

* Apply suggestions from code review

Co-authored-by: Diego Devesa <slarengh@gmail.com>

* Removed unnecessary GGML_UNUSED

* Fixed guards in q4_k gemm and gemv (repack)

---------

Signed-off-by: Alberto Cabrera <alberto.cabrera@liquid.ai>
Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-12-12 17:53:07 +02:00
ixgbe faf37ffe76
ggml: add RISC-V cpu-feats (llama/17461)
* ggml: add RISC-V cpu-feats

Signed-off-by: Wang Yang <yangwang@iscas.ac.cn>

* fix comment[1]

---------

Signed-off-by: Wang Yang <yangwang@iscas.ac.cn>
2025-12-12 17:53:07 +02:00
Max Krasnyansky 77d874b1c3
hexagon: add support for ROPE_NEOX (llama/17458) 2025-12-12 17:53:07 +02:00
Raul Torres 5ed0ddc458
CANN: Define `cann_graph_update_required` before macro (llama/17434)
**Description of the problem**

`cann_graph_update_required` is redundantly defined and
initialized as `false` inside two mutually exclusive macro branches.

**Proposed solution**

Define it right before the macro so that it could serve both
branches.
2025-12-12 17:53:06 +02:00
M. Mediouni 75cea7f8be
ggml-hexagon: Initial Hexagon v68/v69 support (llama/17394)
* ggml-hexagon: fix build error with GCC

Add stdexcept include to fix GCC build errors

Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>

* ggml-hexagon: check VTCM acquire failures

Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>

* ggml-hexagon: disable destination bypass on older than v73

v68 errors out if bypass is enabled when the VTCM is the destination.

At least on v68 this made things actually work... not a proper fix though, so something to look at later...

Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>

* ggml-hexagon: add initial v68/v69 support

v68 is the Hexagon revision notably used on the Snapdragon 8cx
Gen 3 and the QCM6490.

Also add support for v69.

8MB isn't a supported page size, so relax the requested page size constraint
for HAP_compute_res_attr_set_vtcm_param_v2 to optimal.

Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>

---------

Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
2025-12-12 17:53:06 +02:00
nullname 621cb871b3
ggml-hexagon: add `hex_supported_buffer` for better buffer supported check (llama/17212)
* hexagon: add buffer support checks for hexagon sessions

* refactor: simplify buffer support checks in hexagon operations

* hexagon: update buffer support checks to use tensor structure

* refactor: streamline buffer initialization for DSP queue in hexagon operations

* refactor: simplify buffer initialization in DSP queue for hexagon operations

* refactor: optimize hex_supported_buffer function by fold expression

* wip

* refactor: simplify dspqueue_buffers_init function and its usage in hexagon operations

* fix: improve nan handling at hvx_vec_fast_sigmoid_fp32_guard

* refactor: optimize hvx_vec_inverse_fp32_guard for better nan handling

* refactor: update hvx_vec_fast_sigmoid_fp32_guard to use adjusted exponent limits

* refactor: modify hvx_vec_fast_sigmoid_fp32_guard to accept parameters for improved flexibility

* refactor: update hvx_vec_exp_fp32_guard to accept max_exp and inf parameters to save some instructions

* refactor: move hvx_vec_inverse_fp32_guard implementation to hvx-inverse.c for better perf
2025-12-12 17:53:06 +02:00
Sigbjørn Skjæret 61e0b7ed48
cuda : support non-contiguous i32 to i32 copy (llama/17326)
* support non-contiguous i32 to i32 copy

* add tests

* rename cpy_flt to cpy_scalar and reindent params
2025-12-12 17:53:06 +02:00
Jeff Bolz deb4958add
vulkan: remove a couple unnecessary switches (llama/17419) 2025-12-12 17:53:06 +02:00
yulo fc6eae781d
HIP: RDNA4 tensor core support for MMF (llama/17077)
* mmf for rdna4

* align the padding for rdna4

* forbid mul_mat_f for rdna4

* fix as comment

* remove device kernels

* add constexpr for early return

* update based on review comment

* change based on the review comment

* pass compile error

* keep code consistency

---------

Co-authored-by: zhang hui <you@example.com>
2025-12-12 17:53:06 +02:00
lhez 5c0e4a9cc5
opencl: refine condition for kqv mm (llama/17392) 2025-12-12 17:53:05 +02:00
Jeff Bolz cdc1a776be
vulkan: disable async for older Intel devices (llama/17369)
* vulkan: disable async for older Intel devices

* update detection logic

* use name string for detection
2025-12-12 17:53:05 +02:00
Raul Torres a009dc172c
CANN: Refactor `evaluate_and_capture_cann_graph` (llama/17333)
* CANN: Refactor `evaluate_and_capture_cann_graph`

**Description of the problem**

* `matched_graph` is obtained even if graph mode is disabled.
* End of graph capture and graph replay are unnecessarily placed in different `if` blocks.

**Proposed solution**

* Obtain `matched_graph` only if graph mode is enabled.
* Place end of graph capture and graph replay inside the same `if` block.
* Unify graph related comments.

* Remove trailing whitespace
2025-12-12 17:53:05 +02:00
nullname cb3ee1b098
ggml-hexagon: fix swiglu failure at `test-backend-ops` (llama/17344)
* refactor: use hvx_vec_exp_fp32_guard_inf for overflow handling in hvx_exp_f32

* feat: add fast sigmoid function with overflow guard for fp32

* refactor: replace hvx_vec_inverse_fp32 with hvx_vec_inverse_fp32_guard_inf for improved overflow handling

* feat: enhance hvx_add_scalar_f32 with overflow handling using infinity guard

* wip

* add HVX_Vector_Alias

wip

* wip

* fix: improve handling of src1 tensor in glu_swiglu_fp32_per_thread function

* fix nc

* wip

* wip

* handle nan at inverse

* wip

* fix neg

* wip

* rename

* fix hvx_vec_inverse_fp32_guard_inf to handle infinity and NaN cases correctly

* wip

* fix hvx_vec_inverse_fp32_guard_inf to handle NaN cases correctly

* wip

* wip

* wip

* fix output sign
2025-12-12 17:53:05 +02:00
Piotr Wilkin (ilintar) 46f893c2fa
ggml : Fix transposed SOLVE_TRI result (llama/17323)
* Did someone transpose the SOLVE_TRI result matrix? Perhaps...

* Update ggml/src/ggml-cpu/ops.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update ggml/src/ggml-cpu/ops.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-12-12 17:53:05 +02:00
Scott Fudally 510805e6c1
DGX Spark: UMA support (llama/17368)
* DGX Spark: UMA support

* Updates from PR feedback

* More PR feedback cleanup

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Remove trailing whitespace

* Update ggml/src/ggml-cuda/ggml-cuda.cu

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-12-12 17:53:05 +02:00
Adrien Gallouët 2f20938b58
ggml : remove useless and error-prone variadic macros (llama/17399)
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2025-12-12 17:53:04 +02:00
sudhiarm 51f5438089
kleidiai: fix zero-size array declaration (llama/17240) 2025-12-12 17:53:04 +02:00
ixgbe 1d3a525001
ggml-cpu:add RISC-V RVV (Zvfh) optimization for FP16 vector scaling (llama/17314)
* ggml-cpu:add RISC-V RVV (Zvfh) optimization for FP16 vector scaling

Signed-off-by: Wang Yang <yangwang@iscas.ac.cn>

* fix comment

* fix comment 2

---------

Signed-off-by: Wang Yang <yangwang@iscas.ac.cn>
2025-12-12 17:53:04 +02:00
Giuseppe Scrivano 24b14cad87
vulkan: implement ADD1, ARANGE, FILL, SOFTPLUS, STEP, ROUND, CEIL, FLOOR, TRUNC (llama/17319)
* vulkan: initialize array

* vulkan: implement ADD1

* vulkan: implement ARANGE

* vulkan: implement FILL

* vulkan: implement SOFTPLUS

* vulkan: implement STEP

* vulkan: implement ROUND

* vulkan: implement CEIL

* vulkan: implement FLOOR

* vulkan: implement TRUNC

* docs: update Vulkan ops

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2025-12-12 17:53:04 +02:00
Jeff Bolz 95d0b0b0cf
vulkan: support larger argsort (llama/17313)
* vulkan: support larger argsort

This is an extension of the original bitonic sorting shader that puts the
temporary values in global memory; when more than 1024 threads are needed,
it runs multiple workgroups and synchronizes through a pipeline barrier.

To improve the memory access pattern, a copy of the float value is kept with
the index value. I've applied this same change to the original shared memory
version of the shader, which is still used when ncols <= 1024.

* Reduce the number of shader variants. Use smaller workgroups when doing a single pass, for a modest perf boost

* reduce loop overhead

* run multiple cols per invocation, to reduce barrier overhead
2025-12-12 17:53:04 +02:00
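The "keep a copy of the float value with the index" change in the argsort commit above can be sketched on the CPU: sorting (value, index) pairs keeps comparisons reading contiguous memory instead of chasing indices back into the source row. A hedged, scalar illustration of the idea, not the shader:

```c++
// Descending argsort over one row using (value, index) pairs.
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

std::vector<int32_t> argsort_desc(const std::vector<float> & row) {
    std::vector<std::pair<float, int32_t>> kv(row.size());
    for (size_t i = 0; i < row.size(); ++i) kv[i] = { row[i], (int32_t) i };

    // comparisons read the packed value, no indirect loads into `row`
    std::sort(kv.begin(), kv.end(),
              [](const auto & a, const auto & b) { return a.first > b.first; });

    std::vector<int32_t> idx(row.size());
    for (size_t i = 0; i < row.size(); ++i) idx[i] = kv[i].second;
    return idx;
}
```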
Jeff Bolz ae8865c6e6
vulkan: Add copy_transpose shader (llama/17371) 2025-12-12 17:53:04 +02:00
Aman Gupta 73d396826b
cuda: fix rope fusion for gemma3 (llama/17378) 2025-12-12 17:53:03 +02:00
Piotr Wilkin (ilintar) 746cbed20a
Fix too relaxed check on CUDA "fast copy" (can_be_transposed) condition (llama/17332)
* Fix too relaxed check on CUDA "fast copy" (can_be_transposed) condition

* Argh.

* Making CISC happy ;)

* Integrate CONT tests

* Use loopy loop

* Skip new tests for (B)F16 for now.
2025-12-12 17:53:03 +02:00
Ruben Ortlam 2097a9c1bd
vulkan: force full subgroups for flash attention to fix intel subgroup crash (llama/17356) 2025-12-12 17:53:03 +02:00
Jeremy Rand 27c69271c5
ggml-cpu: Don't pass -mpowerpc64 when -mcpu already implies it (llama/17308) 2025-12-12 17:53:03 +02:00
Chenguang Li c137d11b81
CANN: fix acl_tensor_ptr usage in ASCEND_310P ROPE (llama/17347)
* cann: fix acl_tensor_ptr usage in ASCEND_310P ROPE implementation

Fix compilation errors in the ASCEND_310P-specific ROPE operation code
by adding .get() calls when passing acl_tensor_ptr smart pointers to
functions expecting raw aclTensor* pointers.

This fixes the code that was missed in the previous refactoring commit
(8981848) which changed ggml_cann_create_tensor() return type from
aclTensor* to acl_tensor_ptr.

* cann: format code
2025-12-12 17:53:03 +02:00
Jeff Bolz 24b981eff7
vulkan: support noncontig i32 copy (llama/17328) 2025-12-12 17:53:03 +02:00
Ruben Ortlam b7dfced37f
vulkan: add log RTE support to fix Nvidia CI (llama/17320)
* vulkan: add log RTE support to fix Nvidia CI

* actually use the rte shader
2025-12-12 17:53:02 +02:00
Adrien Gallouët 9e429c47e1
cmake : fix ARM feature verification (llama/17170)
* cmake : fix ARM feature verification

Use check_cxx_source_compiles to prevent conflicts with
the existing GGML_NATIVE detection code.

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

* cmake : unset __ARM_FEATURE when feature is disabled

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

* cmake : fix scope, this is really a macro

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

* arm_neon.h is useless

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

---------

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2025-12-12 17:53:02 +02:00
Adrien Gallouët bb88c2545f
ggml : add missing AVX512 feature checks (llama/17270)
_mm512_cvtepu8_epi16        requires  __AVX512BW__
_mm512_srli_epi16           requires  __AVX512BW__
__builtin_ia32_inserti32x8  requires  __AVX512DQ__

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2025-12-12 17:53:02 +02:00
Daniel Bevenius 418314941e
ggml : remove dirty flag from version string (ggml/1391)
This commit removes the "-dirty" suffix from the GGML version string.

The motivation for this change is to ensure that the version string
works with different ways of checking out ggml and using it in projects.
By removing the dirty flag from the version string, we avoid potential
artifacts like shared libraries getting a -dirty suffix in their names.

Instead, if the project is built from a dirty git state, the dirty flag
will be appended to the commit hash in the GGML_BUILD_COMMIT variable.
This will enable users to still identify that the build was made
from a modified/dirty state even though the version might match a "real"
version.

For example, the commit can be printed as follows:
```c++
    printf("commit: %s\n", ggml_commit());
```
Which would print the following for a dirty build:
```console
commit: 781baf2a-dirty
```

Refs: https://github.com/ggml-org/ggml/pull/1363#issuecomment-3569691546
2025-12-12 17:53:00 +02:00
Josh Montoya 9f5ed26e43
go : Enable VAD for Go bindings (#3563)
* reset context.n so that NextSegment can be called for multiple Process calls

* enable VAD params
2025-12-10 13:31:36 +02:00
Josh Montoya a8f45ab11d
go : reset context.n in Process() (#3503) 2025-12-08 18:33:07 +02:00
Joseph Sellers a88b93f85f
vad : fix buffer overflow in sample reduction loop (#3558)
The buffer size calculation loop (line ~6661) uses `n_samples - 1` as
the upper bound for segment_end_samples, but the copy loop (line 6696)
uses `n_samples`. This inconsistency allows the copy loop to compute
segment_length values up to 1 sample larger per segment than what was
allocated, causing heap corruption.

Symptom: `malloc(): corrupted top size` or `malloc(): invalid size
(unsorted)` crashes after VAD completes sample reduction.

Fix: Use consistent bounds (`n_samples - 1`) in both loops.

Fixes #3403
2025-12-06 12:28:32 +01:00
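A minimal, self-contained illustration of the bug class described in the fix above, with hypothetical names (not the actual whisper.cpp code): the sizing pass and the copy pass must clamp segment ends with the same bound, otherwise the copy can write one element past the allocation per segment.

```c++
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

// Sketch: both passes clamp segment ends with the same bound (n_samples - 1).
static std::vector<float> reduce_samples(const std::vector<float> & samples,
                                         const std::vector<std::pair<int, int>> & segments) {
    const int n_samples = (int) samples.size();

    int total = 0;
    for (const auto & s : segments) {
        const int end = std::min(s.second, n_samples - 1);   // bound used for sizing
        total += end - s.first + 1;
    }

    std::vector<float> out;
    out.reserve(total);
    for (const auto & s : segments) {
        const int end = std::min(s.second, n_samples - 1);   // same bound -> stays within 'total'
        for (int i = s.first; i <= end; ++i) {
            out.push_back(samples[i]);
        }
    }
    return out;
}

int main() {
    const std::vector<float> samples(100, 1.0f);
    const auto out = reduce_samples(samples, { { 10, 20 }, { 90, 120 } });   // second segment clamped
    std::printf("reduced to %zu samples\n", out.size());   // 11 + 10 = 21
}
```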
Daniel Bevenius d566358a1d
tests : update VAD tests to use Silero V6.2.0 (#3534)
* tests : update VAD tests to use Silero V6.2.0

This commit updates the VAD tests to use Silero V6.2.0 instead of
V5.1.2. I was not sure if we needed to keep testing both versions,
but opted to just update to the latest version for simplicity.

* wasm : use C++17 for emscripten builds

This commit updates the CMakeLists.txt file to explicitly set the C++
standard to C++17 when building with Emscripten.

The motivation for this change is that building with Emscripten
will currently fail locally and on CI with the following error:
```console
[ 75%] Building CXX object examples/CMakeFiles/common.dir/common-ggml.cpp.o
In file included from /home/danbev/work/ai/whisper.cpp/examples/stream.wasm/emscripten.cpp:5:
/home/danbev/work/utils/emsdk/upstream/emscripten/cache/sysroot/include/emscripten/bind.h:11:2: error:
      "embind requires -std=c++17 or newer"
   11 | #error "embind requires -std=c++17 or newer"
      |  ^
In file included from /home/danbev/work/ai/whisper.cpp/examples/whisper.wasm/emscripten.cpp:4:
/home/danbev/work/utils/emsdk/upstream/emscripten/cache/sysroot/include/emscripten/bind.h:11:2: error:
      "embind requires -std=c++17 or newer"
   11 | #error "embind requires -std=c++17 or newer"
      |  ^
```
2025-12-06 10:58:58 +01:00
Daniel Bevenius 19ceec8eac
examples : fix typo in vad-speech-segments command [no ci] (#3535)
This commit corrects a typo in the command-line argument for specifying the
VAD model in the vad-speech-segments example.
2025-11-20 13:35:11 +01:00
gzq 40e788a5d1
readme : minor (#3516) 2025-11-20 13:57:55 +02:00
YangLe 961aec7384
metal : fix compile on macos 11 (#3533) 2025-11-20 13:54:54 +02:00
Georgi Gerganov b12abefa9b sync : llama.cpp 2025-11-17 21:05:46 +02:00
Georgi Gerganov 0e5deca8e2 sync : ggml 2025-11-17 21:05:46 +02:00
Georgi Gerganov 661567357c metal : support I32 -> I32 copy (llama/17317) 2025-11-17 21:05:46 +02:00
Georgi Gerganov 74bb8a8b23 metal : faster argsort (llama/17315)
* metal : faster argsort

* cont : keep data in registers
2025-11-17 21:05:46 +02:00
Georgi Gerganov 57c0e6f8b6 metal : add cumsum (llama/17305) 2025-11-17 21:05:46 +02:00
hipudding d3f5487464 CANN: Use smart pointers to manage ACL objects (llama/17238)
* CANN: Use smart pointers to manage ACL objects

Previously, ACL objects were managed via manual destruction, which
led to multiple memory-leak issues during runtime. This patch replaces
manual memory management with smart pointers so that ACL objects
are properly released and ownership is clearly defined.

Note that the ownership of an ACL object belongs to the function
that creates it. Other internal functions should operate on these ACL
objects using raw pointers to avoid unintended ownership transfers.

Additionally, since aclTensorList automatically frees its contained
aclTensor objects, any aclTensor added to a tensor list must release
ownership to avoid double free operations.

This PR also removes the asynchronous task submission mechanism.
Due to changes in recent CANN versions, tiling time has significantly
decreased. Even with a dual-thread submission model, the dispatch
overhead still falls on the critical path, making async submission
less beneficial. Moreover, aclGraph support provides a much better
path to reducing operator dispatch latency.

* CANN: resolve review comments
2025-11-17 21:05:46 +02:00
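A generic sketch of the ownership rules described in the commit above, using std::unique_ptr with a custom deleter. The handle type and destroy function are placeholders, not the actual ACL API; the same pattern applies to aclTensor and aclTensorList.

```c++
#include <cstdio>
#include <memory>
#include <vector>

// Placeholder C-style handle standing in for aclTensor; the real type and
// destroy call live in the CANN headers.
struct acl_handle { int id; };
static void destroy_handle(acl_handle * h) { delete h; }

struct handle_deleter {
    void operator()(acl_handle * h) const { destroy_handle(h); }
};
using handle_ptr = std::unique_ptr<acl_handle, handle_deleter>;

// The creating function owns the object via the smart pointer...
static handle_ptr create_handle(int id) { return handle_ptr(new acl_handle{ id }); }

// ...while internal helpers take raw pointers and never take ownership.
static void use_handle(const acl_handle * h) { std::printf("using %d\n", h->id); }

int main() {
    handle_ptr t = create_handle(42);
    use_handle(t.get());              // raw pointer, no ownership transfer

    // A container that frees its elements itself (like aclTensorList) must
    // receive released pointers, otherwise the handle would be freed twice.
    std::vector<acl_handle *> list;
    list.push_back(t.release());
    for (acl_handle * h : list) destroy_handle(h);
}
```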
Pavels Zaicenkovs 9d95d9a1ee vulkan: add LOG operation support for F32 and F16 (llama/17183)
* vulkan: add LOG operation support for F32 and F16

Part of #14909.

* vulkan: Fix LOG operation types

* docs: Update operation support documentation for Vulkan LOG operation

* vulkan: fix log_f16 shader

* docs: restore missing LOG test cases and regenerate ops.md
2025-11-17 21:05:46 +02:00
Ruben Ortlam f571655e8e vulkan: fix MMQ quantize_y condition (llama/17301) 2025-11-17 21:05:46 +02:00
Georgi Gerganov 9549cc1051 metal : remove obsolete asserts (llama/17295) 2025-11-17 21:05:46 +02:00
lhez a75525cad0 opencl: fix rms_norm_mul (llama/17250)
* opencl: use subgroup reduce for reduction in rms_norm_mul

* opencl: add comment about workgroup size
2025-11-17 21:05:46 +02:00
shaofeiqi c78845bfa9 opencl: add kernel to handle mat mul in attention to improve encoding speed (llama/17181)
* Add mul_mm_f16_f32_kq_kqv kernel

* Add ggml_cl_mul_mat_kq_kqv_adreno func

* fix whitespace

* remove unused variable

* remove redundant

* refactor and clean up

* remove trailing whitespace
2025-11-17 21:05:46 +02:00
shani-f 1fd63da9f2 sycl : unify unary kernels with a generic implementation and enable wide operator support (llama/17213)
* SYCL: add generic unary op implementation for multiple ops (ABS/SGN/…); unify non-contiguous access

* SYCL: update documentation and sycl.csv to reflect new unary op support

* update ops.md after syncing SYCL.csv changes

* Fix SYCL.csv merge conflict

* Update ops.md after fixing SYCL.csv conflicts

* Fix SYCL.csv tail after merge conflict and regenerate ops.md

* Fix line endings and final newline in SYCL.csv

* Remove TOPK_MOE entries from SYCL.csv as requested

* Update ops.md after removing TOPK_MOE from SYCL.csv

* Regenerated SYCL.csv and synced ops.md with upstream

* Update ops.md using create_ops_docs.py
2025-11-17 21:05:46 +02:00
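The unification above boils down to one generic element-wise driver parameterized by the scalar operation. A plain C++ sketch of that pattern (not the SYCL kernel itself):

```c++
#include <cmath>
#include <cstdio>

// One templated loop covers every unary op; each op only supplies its scalar body.
template <typename Op>
void unary_op_f32(const float * src, float * dst, int n, Op op) {
    for (int i = 0; i < n; ++i) {
        dst[i] = op(src[i]);
    }
}

int main() {
    const float src[4] = { -2.0f, -0.5f, 0.0f, 3.0f };
    float dst[4];

    unary_op_f32(src, dst, 4, [](float x) { return std::fabs(x); });                          // ABS
    unary_op_f32(src, dst, 4, [](float x) { return x < 0.f ? -1.0f : (x > 0.f ? 1.0f : 0.0f); }); // SGN
    unary_op_f32(src, dst, 4, [](float x) { return std::trunc(x); });                         // TRUNC

    std::printf("%f %f %f %f\n", dst[0], dst[1], dst[2], dst[3]);
}
```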
Jeff Bolz ea3ebd8b0d vulkan: Fuse mul_mat_id+add_id+mul and mul_mat+add+add. (llama/17287)
These both show up in gpt-oss. Also, clean up the mul_mat_vec fusion code a bit.
2025-11-17 21:05:46 +02:00
Ruben Ortlam 7caea54450 vulkan: Replace 16-bit unpack8 calls to work around legacy Windows AMD driver bug (llama/17285) 2025-11-17 21:05:46 +02:00
Giuseppe Scrivano 4c4e663da0 vulkan: implement ABS and NEG (llama/17245)
* docs: update Vulkan ops

* vulkan: add NEG op

* vulkan: add ABS op

---------

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2025-11-17 21:05:46 +02:00
Jeff Bolz e1846fc599 vulkan: Use ggml_vk_tensor_subbuffer in mul_mat_vec(id) paths (llama/17244)
* vulkan: Use ggml_vk_tensor_subbuffer in mul_mat_vec(id) paths

* set allow_misalign
2025-11-17 21:05:46 +02:00
Jeff Bolz 9614a56314 vulkan: skip all-negative-inf blocks in FA (llama/17186) 2025-11-17 21:05:46 +02:00
Jeff Bolz 37d4bba152 vulkan: change graph_compute to be async and enable get_tensor_async (llama/17158)
* vulkan: change graph_compute to be async and enable get_tensor_async

This allows some additional CPU/GPU overlap for large pp workloads. Also seems
to help a bit for token gen, maybe getting rid of a small bubble between
graph_compute and get_tensor.

Async set and copy functions seem to be very rarely used, so I didn't enable
them because I didn't have a good way to test them.

The async commands need to be ordered against each other, so put them all on
the compute queue. The non-async commands still use the transfer queue.

The fence for graph_compute/get_tensor_async is submitted and waited on in
ggml_vk_synchronize.

* fix thread safety errors

* teardown context cleanly

* Handle async read to non-pinned dst
2025-11-17 21:05:46 +02:00
Georgi Gerganov 523a6c27ea metal : support argsort for ne00 > 1024 (llama/17247)
* metal : refactor argsort

* cont : sort chunks

* cont : merge sorted buckets

* cont : cleanup
2025-11-17 21:05:46 +02:00
Georgi Gerganov b4d7df3ba2 metal : make the FA extra sizes consistent (llama/17143) 2025-11-17 21:05:46 +02:00
Alberto Cabrera Pérez a81fbfc78e ggml-cpu: handle 3d tensors in repack mat_mul (llama/17241)
* ggml-cpu: handle 3d tensors in repack mul_mat

* Removed unnecessary branch, removed need for <algorithm>

* Fixed dst_ptr pointer in chunk + clang_format

* GGML_ASSERT to check wdata within bounds

* Accidental ggml.h inclusion

* Improved GGML_ASSERT on wdata boundaries

* Address performance regression in Qwen and llama.cpp due to chunking
2025-11-17 21:05:46 +02:00
Piotr Wilkin (ilintar) 3e684f26c1 ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (llama/17063)
* Add ops needed for new hybrid models: SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM

* Update ggml/include/ggml.h

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update tests/test-backend-ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Code review

* Whitespace

* Update tests/test-backend-ops.cpp

Co-authored-by: Diego Devesa <slarengh@gmail.com>

* This is actually sigmoid, duh.

* Add CONST, remove TRI_KEEP, other changes from review

* Update tests/test-backend-ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml.c

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml.c

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml-cuda/unary.cu

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

* Remove extra script

* Update ggml/src/ggml.c

Co-authored-by: Diego Devesa <slarengh@gmail.com>

* Update tests/test-backend-ops.cpp

Co-authored-by: Diego Devesa <slarengh@gmail.com>

* moving changes from laptop [no ci]

* pre-rebase

* Update tests/test-backend-ops.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update tests/test-backend-ops.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Refactor tests

* ggml : cleanup

* cont : fix ggml_fill srcs

* tests : add note

* ggml : add ggml_fill_inplace

* ggml : add asserts

* ggml : fix ggml_fill constant cast

* cont : ggml_tri minor

* Use TENSOR_LOCALS

* Fix regression from #14596, regenerate

* Don't make commits at night...

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Diego Devesa <slarengh@gmail.com>
Co-authored-by: Aman Gupta <amangupta052@gmail.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-17 21:05:46 +02:00
Ruben Ortlam e8e0004fe5 vulkan: remove shell call from vulkan-shaders-gen tool, revert file check (llama/17219)
* vulkan: remove shell call from vulkan-shaders-gen tool

* use string vector for command execution

* Fix condition

* use string, remove const_cast

* Fix dependency file quotation on Windows

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-11-17 21:05:46 +02:00
Diego Devesa 210f0f860b sched : fix reserve ignoring user tensor assignments (llama/17232) 2025-11-17 21:05:46 +02:00
ixgbe 91fa5b5cac ggml-cpu : add RISC-V vector intrinsic support for silu and cvar operations (llama/17227)
Signed-off-by: Wang Yang <yangwang@iscas.ac.cn>
2025-11-17 21:05:46 +02:00
bagheera 265d326fa8 metal: accelerated conv2d (llama/17175)
* metal: accelerated conv2d

* cont : cleanup

---------

Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-11-17 21:05:46 +02:00
Georgi Gerganov 6a1d830dfd Revert "ggml-cpu: handle 3d tensors in repack mat_mul (llama/17030)" (llama/17233)
This reverts commit 1c398dc9eca9c366ce98deb0e6f3538e444ebc8a.
2025-11-17 21:05:46 +02:00
Diego Devesa 6a91780c3b ggml-cpu : use template for argsort (llama/17222) 2025-11-17 21:05:46 +02:00
TecJesh 726912d1cb CANN: Add cross_entropy_loss op support (llama/16886)
* update L2_NORM op support

* update L2_NORM op support

* remove extra whitespace

* cann: update cross_entropy_loss op support

* remove trailing whitespaces

* rebase the latest code in the main repository and remove the l2_norm operator that already exists in another pull request.

* undo the l2_norm operator deletion
2025-11-17 21:05:46 +02:00
Aman Gupta 84275fc493 CUDA: fuse rope + set_rows (llama/16884)
* CUDA: add fused rope

* move k forward_expand up

* create helper function instead of re-using params

* make assert statement more in line with comment

* rope_norm: coalesced writes to global mem
2025-11-17 21:05:46 +02:00
Johannes Gäßler 566c4c4469 CUDA: static assert to prevent misuse of memcpy_1 (llama/17198) 2025-11-17 21:05:46 +02:00
Georgi Gerganov 3810a6180b ggml : use std::sort in ggml_argsort CPU implementation (llama/17211)
* ggml : use std::sort in ggml_argsort CPU implementation

* cont : add missing header
2025-11-17 21:05:46 +02:00
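For reference, the core of an argsort built on std::sort is just an index array sorted with a comparator that reads the key data. A minimal sketch of the idea (not the exact ggml code):

```c++
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <vector>

// Sort indices ascending by the values they point to.
static std::vector<int32_t> argsort_f32(const float * data, int64_t n) {
    std::vector<int32_t> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),
              [data](int32_t a, int32_t b) { return data[a] < data[b]; });
    return idx;
}

int main() {
    const float v[5] = { 0.3f, -1.0f, 2.5f, 0.0f, 0.3f };
    for (int32_t i : argsort_f32(v, 5)) std::printf("%d ", i);
    std::printf("\n");   // 1 3 0 4 2 (ties in no particular order; std::sort is not stable)
}
```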
Alberto Cabrera Pérez 7df8515824 ggml-cpu: handle 3d tensors in repack mat_mul (llama/17030)
* ggml-cpu: handle 3d tensors in repack mul_mat

* Removed unnecessary branch, removed need for <algorithm>

* Fixed dst_ptr pointer in chunk + clang_format

* GGML_ASSERT to check wdata within bounds

* Accidental ggml.h inclusion

* Improved GGML_ASSERT on wdata boundaries
2025-11-17 21:05:46 +02:00
TecJesh e8b66d9f94 CANN: Add L2_NORM op support (llama/16856)
* update L2_NORM op support

* update L2_NORM op support

* remove extra whitespace
2025-11-17 21:05:46 +02:00
Neo Zhang Jianyu 8388350c66 fix ci crash about SSM_CONV (llama/17169)
* fix ci crash

* Update ggml-sycl.cpp

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Zhang Jianyu <zhang.jianyu@outlook.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-17 21:05:46 +02:00
Max Krasnyansky 6748d27f55 hexagon: various Op fixes (llama/17135)
* hexagon: explicitly check for ops with zero nrows

llm_graph_context::build_inp_out_ids() can generate tensors with zero nrows.
Somehow other backends seem to handle this without obvious explicit checks.
In the hexagon case we need to check explicitly and skip them.

* hexagon: introduce fastdiv, fix test-backend-ops for ADD/SUB/MUL

Co-authored-by: chraac <chraac@gmail.com>

* hexagon: use fastdiv in ADD_ID

* hexagon: use ggml_op_is_empty and ggml_is_empty to check for NOPs

---------

Co-authored-by: chraac <chraac@gmail.com>
2025-11-17 21:05:46 +02:00
Eve 559091005a disable rms norm mul rope for chips with no fp16 rte (llama/17134) 2025-11-17 21:05:46 +02:00
ixgbe cd8f64d1b5 ggml-cpu : add RISC-V RVV (Zvfh) optimization for FP16 to FP32 conversion (llama/17161)
Signed-off-by: Wang Yang <yangwang@iscas.ac.cn>
2025-11-17 21:05:46 +02:00
duduta 1cefb03571 ggml-cpu: templateify ggml_compute_forward_rope_f32 and _f16 (llama/16805)
* extract rotate_pairs logic from ggml_compute_forward_rope_f32

* templateify ggml_compute_forward_rope_f32 and _f16

* abort when rope type not supported, remove GLM from test-rope

* add imrope branch to switch

* add rope tests for perf

* Update ggml/src/ggml-cpu/ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml-cpu/ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-11-17 21:05:46 +02:00
Charles Xu 3920ecce3a kleidiai: add optimized per-channel kernels for Q8_0 (llama/16993) 2025-11-17 21:05:46 +02:00
Mike Abbott c01bf73dd1 cmake : add version to all shared object files (llama/17091)
When compiling llama.cpp in Yocto, it fails QA checks because the generated .so files aren't versioned. This applies a version to all generated .so files, allowing the package to build without errors.
2025-11-17 21:05:46 +02:00
lhez 46615d74d3 opencl: add fastdiv and use it in set_rows, ported from cuda (llama/17090)
* opencl: add fastdiv for mm q8_0

* opencl: use uint4 for fastdiv vals

* opencl: use fastdiv for set_rows

* opencl: do not use fastdiv for q8_0 mm
2025-11-17 21:05:46 +02:00
Max Krasnyansky ccf525baf0 cpu: skip NOPs to avoid barriers (llama/17133)
* cpu: skip NOPs to avoid barriers

* cpu: use ggml_op_is_empty
2025-11-17 21:05:46 +02:00
Georgi Gerganov 40aebfe8bf metal : cap threadgroups size of set_rows (llama/17146) 2025-11-17 21:05:46 +02:00
Adrien Gallouët 86be60093e ggml-cpu : inspect -march and -mcpu to find the CPU (llama/16333)
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2025-11-17 21:05:46 +02:00
Ruben Ortlam ef71d83b76 vulkan: check glslc executable string (llama/17144) 2025-11-17 21:05:46 +02:00
Ruben Ortlam 43f2c1ff54 vulkan: fix validation issue introduced by #16868 (llama/17145) 2025-11-17 21:05:46 +02:00
Georgi Gerganov bb92c79f56 metal : enable tensor API for A19 (llama/17087) 2025-11-17 21:05:46 +02:00
fj-y-saito 4fea91f06e arm64: add i8mm route with SVE ggml_vec_dot_q4_K_q8_K and ggml_vec_dot_q6_K_… (#15277)
* add i8mm route with SVE ggml_vec_dot_q4_K_q8_K and ggml_vec_dot_q6_K_q8_K

* Surround SVE function with compiler directive

* fix compile switch

* fix coding style

* ggml : fix indent

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-11-17 21:05:46 +02:00
Acly 58a97d988f cuda/vulkan : bicubic interpolation (llama/17022)
* vulkan : implement upscale with bicubic interpolation

* cuda : implement upscale with bicubic interpolation

* tests : add ggml_interpolate with GGML_SCALE_MODE_BICUBIC to backend tests

* adapt OpenCL backend to not support the OP in that case so tests don't fail

* print scale mode & flags in test-backend-ops
2025-11-17 21:05:46 +02:00
Ruben Ortlam 2e04e7a906 vulkan: fix memory allocations (llama/17122) 2025-11-17 21:05:46 +02:00
KITAITI Makoto 27f485a14c
vad : Silero VAD v6.2.0 (#3524)
* Add ggml-silero-v6.2.0 to download candidates

* Make default VAD model ggml-silero-v6.2.0

* Make VAD model in documentations ggml-silero-v6.2.0
2025-11-17 22:26:17 +09:00
KITAITI Makoto d9b7613b34
ruby : VAD separately from ASR (#3518)
* Add Whisper::VAD::Context

* Add test for Whisper::VAD::Context

* Add Whisper::VAD::Segment

* Add Whisper::VAD::Segments

* Add Whisper::VAD::Context#detect

* Define Whisper::VAD::Segments#each

* Define Whisper::VAD::Segment#start_time and #end_time

* Define Whisper::VAD::Segment#deconstruct_keys

* Add tests for Whisper::VAD family

* Add signatures for VAD family

* Add document on VAD in README

* Define Whisper::VAD::Segments#length

* Add test for Whisper::VAD::Segments#length

* Add signature of Segments#length

* Make vad_segments responsible to initialize VAD::Segments

* Remove meaningless argument check

* Check NULL of segments member

* Add tests for Whisper::VAD::Segments

* Initialize Whisper::VAD::Segment on .allocate

* Add tests for Whisper::VAD::Segment

* Check NULL of context member

* Add test for Whisper::VAD::Context.allocate
2025-11-13 10:15:26 +09:00
Georgi Gerganov a1867e0dad sync : llama.cpp 2025-11-09 23:38:03 +02:00
Georgi Gerganov e67dfbc51b sync : ggml 2025-11-09 23:38:03 +02:00
Ruben Ortlam 1993e397bb vulkan: iGPU memory reporting fix (llama/17110)
* vulkan: use all device-local heaps for memory availability reporting

Co-authored-by: Giuseppe Scrivano <gscrivan@redhat.com>

* use all available heaps for iGPU memory reporting

* Allow multiple memory types per buffer request for devices with split heaps

---------

Co-authored-by: Giuseppe Scrivano <gscrivan@redhat.com>
2025-11-09 23:38:03 +02:00
Ruben Ortlam ee8349cf10 vulkan: fix mmq out of bounds reads (llama/17108)
* vulkan: fix mmq out of bounds reads, streamline outdated matmul host code

* fix mul_mat_id quantization call

* Fix compiler warnings
2025-11-09 23:38:03 +02:00
Jeff Bolz db98e8c5b4 vulkan: fuse mul_mat_id + mul (llama/17095)
* vulkan: fuse mul_mat_id + mul

This comes up in qwen3 moe.

* split mul_mat_id fusion tests into a separate class
2025-11-09 23:38:03 +02:00
Georgi Gerganov a4339e2ea7 metal : retain src and dst buffers during async ops (llama/17101) 2025-11-09 23:38:03 +02:00
Jeff Bolz 6de3404773 vulkan: Use spec constants for conv2d s/d/p and kernel W/H (llama/16978)
* vulkan: Use spec constants for conv2d s/d/p and kernel W/H

Also add some additional unroll hints, which seems to help.

* lock around map lookup
2025-11-09 23:38:03 +02:00
Aman Gupta 8967c9ad9b Revert "CUDA: add expert reduce kernel (ggml/16857)" (llama/17100) 2025-11-09 23:38:03 +02:00
Aman Gupta 522b9bce33 CUDA: skip fusion for repeating adds in bias (llama/17080) 2025-11-09 23:38:03 +02:00
SavicStefan 0caa32c772 vulkan: Increase BK to 32; use BK/4 for non-CM mul_mm.comp (llama/16636)
Signed-off-by: Stefan Savic <stefan.savic@huawei.com>
Co-authored-by: Stefan Savic <stefan.savic@huawei.com>
2025-11-09 23:38:03 +02:00
Aleksei Nikiforov 3c975ad523 ggml: disable vxe for cross-compilation by default (llama/16966)
Otherwise compilation will fail due to enabling -mvx -mzvector
and not setting corresponding -march options.
2025-11-09 23:38:03 +02:00
Jeff Bolz 257ce2f5c0 vulkan: fuse rms_norm + mul + rope (+ view + set_rows) (llama/16977)
This change combines the rms_norm+mul and rope+view+set_rows fusions to
allow fusing the whole sequence together. This comes up in Qwen3, Bailing,
and some other models.
2025-11-09 23:38:03 +02:00
Jeff Bolz 4eef518167 vulkan: Fix test-thread-safety crashes (llama/17024)
The std::map pipeline_flash_attn_f32_f16 could be searched and inserted at the
same time, which needs to hold the lock. To be safe, hold the lock for all of
ggml_vk_load_shaders.
2025-11-09 23:38:03 +02:00
Johannes Gäßler 358f77aca7 CUDA: fix MMQ stream-k fixup ne1 indices (llama/17089) 2025-11-09 23:38:03 +02:00
Reese Levine 78ea6c5b67 ggml webgpu: faster matrix multiplication/matrix-vector multiplication (llama/17031)
* Faster tensors (llama/8)

Add fast matrix and matrix/vector multiplication.

* Use map for shader replacements instead of pair of strings
2025-11-09 23:38:03 +02:00
bssrdf 547724b0a5 CUDA: properly handle nb00=nb02 case for cpy (llama/17081) 2025-11-09 23:38:03 +02:00
Acly 11543bf446 vulkan : refactor buffer handling in vk_op_f32 (llama/16840)
* vulkan : refactor/simplify buffer handling in vk_op_* functions

* Combine UMA handling into ggml_vk_tensor_subbuffer
2025-11-09 23:38:03 +02:00
Johannes Gäßler af8a88792f CUDA: fix should_use_mmvf for ne11 == 1 (llama/17085)
* CUDA: fix should_use_mmvf for ne11 == 1

* Apply suggestion from @am17an

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

---------

Co-authored-by: Aman Gupta <amangupta052@gmail.com>
2025-11-09 23:38:03 +02:00
Adrien Gallouët a1746097bc Revert "ggml-cpu: detect correct cpu flags for arm64 (llama/16229) (#16239)" (llama/17084)
This reverts commit 7c23f3f0d4b9f5d6ea140756eb694b562d5acebb.
2025-11-09 23:38:03 +02:00
iron 512592513c ggml-cpu: detect correct cpu flags for arm64 (ggml/16229) (llama/16239)
When using GCC 9 and GCC 12 on the arm64 platform of Ubuntu 20.04,
the command "gcc -mcpu=native -E -v -" fails to detect the correct CPU flags,
which results in compilation failures for certain extended instructions,
but the correct CPU flags can be obtained by using gcc -march.

Signed-off-by: lizhenneng <lizhenneng@kylinos.cn>
Co-authored-by: lizhenneng <lizhenneng@kylinos.cn>
2025-11-09 23:38:03 +02:00
xctan 5bce732795 ggml-cpu : optimize RVV q2_k and q3_k kernels (llama/16887) 2025-11-09 23:38:03 +02:00
Johannes Gäßler b5d6fa438f CUDA: fix crash on uneven context without FA (llama/16988) 2025-11-09 23:38:03 +02:00
Georgi Gerganov 32ed574370 metal : initial Metal4 tensor API support (llama/16634)
* metal : rework mat-mat multiplication

* metal : initial Metal4 support

* cont

* metal : detect tensor support

* cont : better ifdefs

* metal : support tensors in mul_mm_id

* metal : add env for disabling tensor API

* tests : restore

* metal : remove unused constants

* metal : fix check for bfloat tensor support

* cont : handle API incompatibilities

* cont : handle even more incompatibilities

* metal : use tensor API only on M5 and later
2025-11-09 23:38:03 +02:00
YehuditE 45588b272e sycl: add CONCAT operator support (llama/16047)
* sycl: add CONCAT operator support

* cleanup: remove stray lines added by mistake

* fix: code format issues in concat.cpp and tests/test-backend-ops.cpp

* chore: fix editorconfig violations

* cleanup: drop unnecessary i16 type support

* docs: update sycl-csv and regenerate ops.md

* update docs/ops.md

* fix: adapt to upstream master changes after rebase

* fix: remove empty files

* fix: drop whitespace

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-09 23:38:03 +02:00
l3utterfly b3324ae7d1 ggml-hexagon: graceful fallback for older socs where rpcmem_alloc2 and FASTRPC_GET_URI is unsupported (llama/16987)
* support older socs where FASTRPC_GET_URI is unsupported

* added graceful fallback when FASTRPC_GET_URI call fails

* use weak symbols instead of loading libcdsprpc.so dynamically

* Add weak pragma for rpcmem_alloc2

* Remove weak declaration for rpcmem_alloc2 in ggml-hexagon.cpp

Removed weak declaration for rpcmem_alloc2.

* Enforce ndev to 1 for archs below v75

Force ndev to 1 for SoCs architectures lower than v75.
2025-11-09 23:38:03 +02:00
bssrdf 13cd906501 improve CUDA cpy memory bandwidth when copying transposed tensor (llama/16841)
* WIP

* added a cpy kernel specific to transposed tensors which uses smem to avoid uncoalesced access; test cases also added showing improved memory bandwidth

* added BF16 support

* more strict check to make sure src0 is a transpose

* reformulated to handle more complicated transpose cases

* bring back 2D transpose for higher performance

* allow build on windows

* transpose copy more shapes

* minor tweak

* final clean up

* restore some test cases

* keep only the kernel for true transposed case; updated with review suggestions

* make CI happy

* remove headers not needed

* reduced bank conflicts for fp16 and bf16

* add missing const*

* now bank conflicts free

* use padding instead of swizzling

---------

Co-authored-by: bssrdf <bssrdf@gmail.com>
2025-11-09 23:38:03 +02:00
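The shared-memory trick described in the commit above is the classic tiled transpose: stage a tile in fast local storage so that both the read and the write walk consecutive addresses. A plain host-side C++ sketch of the access pattern (the real code is a CUDA kernel staging the tile in smem):

```c++
#include <algorithm>
#include <cstdio>
#include <vector>

// Copy a rows x cols matrix into its transpose through a small square tile,
// so both the source reads and the destination writes are contiguous runs.
static void transpose_tiled(const float * src, float * dst, int rows, int cols) {
    constexpr int TILE = 32;
    float tile[TILE][TILE];

    for (int r0 = 0; r0 < rows; r0 += TILE) {
        for (int c0 = 0; c0 < cols; c0 += TILE) {
            const int rmax = std::min(r0 + TILE, rows);
            const int cmax = std::min(c0 + TILE, cols);
            for (int r = r0; r < rmax; ++r)             // contiguous reads from src
                for (int c = c0; c < cmax; ++c)
                    tile[r - r0][c - c0] = src[r * cols + c];
            for (int c = c0; c < cmax; ++c)             // contiguous writes to dst
                for (int r = r0; r < rmax; ++r)
                    dst[c * rows + r] = tile[r - r0][c - c0];
        }
    }
}

int main() {
    const int rows = 3, cols = 5;
    std::vector<float> a(rows * cols), b(cols * rows);
    for (int i = 0; i < rows * cols; ++i) a[i] = (float) i;
    transpose_tiled(a.data(), b.data(), rows, cols);
    std::printf("b[0..2] = %g %g %g\n", b[0], b[1], b[2]);   // 0 5 10
}
```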
Jeff Bolz 558a04c9c7 vulkan: Fix GGML_VULKAN_CHECK_RESULTS to better handle fusion (llama/16919) 2025-11-09 23:38:03 +02:00
Reese Levine e734b5d6ef ggml webgpu: minor set rows optimization (llama/16810)
* Add buffer label and enable dawn-specific toggles to turn off some checks

* Minor set_rows optimization (ggml/4)

* updated optimization, fixed errors

* non vectorized version now dispatches one thread per element

* Simplify

* Change logic for set_rows pipelines

---------

Co-authored-by: Neha Abbas <nehaabbas@macbookpro.lan>
Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>
Co-authored-by: Reese Levine <reeselevine1@gmail.com>

* Comment on dawn toggles

* Remove some comments

* Implement overlap binary operators

* Revert "Implement overlap binary operators"

This reverts commit ed710b36f51ab3f53fa13db15c1685dc8678a32a.

* Disable support for non-contiguous binary_op tensors and leave note for future support

---------

Co-authored-by: neha-ha <137219201+neha-ha@users.noreply.github.com>
Co-authored-by: Neha Abbas <nehaabbas@macbookpro.lan>
Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>
2025-11-09 23:38:03 +02:00
nullname 44e77ccee6 refactor: replace sprintf with snprintf for safer string handling in dump functions (llama/16913) 2025-11-09 23:38:03 +02:00
Jeff Bolz 1672d41ab0 vulkan: remove the need for the dryrun (llama/16826)
* vulkan: remove the need for the dryrun

Allocate pipelines and descriptor sets when requested.

Reallocate the prealloc buffers when needed, and flush any pending work
before reallocating.

For rms_partials and total_mul_mat_bytes, use the sizes computed the last time
the graph was executed.

* remove dryrun parameters
2025-11-09 23:38:03 +02:00
Acly 997fdde0c4 ggml-cpu : bicubic interpolation (llama/16891) 2025-11-09 23:38:03 +02:00
Noah 52e43a2fa5 Fix garbled output with REPACK at high thread counts (llama/16956)
* Fix garbled output with REPACK at high thread counts

Fixed a race condition in the REPACK matrix multiplication code that caused garbled output when using 26+ threads (model-dependent threshold). The issue occurred because with high thread counts, the code forced chunk count to equal thread count, creating many small chunks. After aligning these chunks to NB_COLS boundaries, adjacent chunks could overlap, causing data corruption and race conditions. The fix enforces minimum chunk sizes based on NB_COLS and caps maximum chunk count to prevent creating too many tiny chunks, ensuring proper alignment without overlaps.

* Update ggml/src/ggml-cpu/repack.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml-cpu/repack.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-11-09 23:38:03 +02:00
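A sketch of the chunking rule the fix above describes, with hypothetical parameter names (not the actual repack.cpp constants): chunk sizes are at least one block wide and rounded up to the NB_COLS block boundary, so adjacent chunks can no longer overlap after alignment.

```c++
#include <algorithm>
#include <cstdint>
#include <cstdio>

// Split n_rows into chunks for n_threads workers such that every chunk
// boundary is a multiple of nb_cols (the repack block width).
static int64_t compute_chunk_size(int64_t n_rows, int n_threads, int64_t nb_cols) {
    // naive per-thread split ...
    int64_t chunk = (n_rows + n_threads - 1) / n_threads;
    // ... but never smaller than one block, and rounded up to a block boundary
    chunk = std::max<int64_t>(chunk, nb_cols);
    chunk = ((chunk + nb_cols - 1) / nb_cols) * nb_cols;
    return chunk;
}

int main() {
    const int64_t n_rows = 1000, nb_cols = 8;
    const int64_t chunk = compute_chunk_size(n_rows, 32, nb_cols);
    // Chunk starts are block-aligned, so adjacent ranges cannot overlap after
    // alignment, which was the root cause of the original race.
    for (int64_t start = 0; start < n_rows; start += chunk) {
        const int64_t end = std::min(start + chunk, n_rows);
        std::printf("[%lld, %lld)\n", (long long) start, (long long) end);
    }
}
```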
Aman Gupta e51a2f90fe CUDA: avoid mul + bias fusion when doing fusion (llama/16935) 2025-11-09 23:38:03 +02:00
lhez f856023f46 opencl: support imrope (llama/16914)
* opencl: support imrope

* opencl: fix whitespace
2025-11-09 23:38:03 +02:00
theo77186 82ede64cd0 ggml: CUDA: add head size 72 for flash-attn (llama/16962) 2025-11-09 23:38:03 +02:00
Jinyang He 79801188f7 ggml : LoongArch fixes (llama/16958)
* Fix test-quantize-fns f16 and q4_0 failed when use LSX

* Fix LoongArch set float intrinsic when use LSX/LASX
2025-11-09 23:38:03 +02:00
shani-f f1da026bb8 SYCL: optimized repeat_back kernel (3× fewer asm instructions, 2× faster)Feature/sycl repeat back opt (#16869)
* SYCL repeat_back v1 — add core op + switch case

* Implement repeat_back SYCL operation and minor fixes

* SYCL: optimize repeat_back kernel

* Remove Hebrew comment from repeat_back.cpp

* Remove comments for code clarity

Removed comments to clean up the code.

* Fix formatting in ggml-sycl.cpp

* Formatted lambda according to legacy style. No logic changes

* Remove blank line in repeat_back.cpp

Remove unnecessary blank line before assigning acc to dst_dd.
2025-11-09 23:38:03 +02:00
Georgi Gerganov 39834fde1b clip : use FA (llama/16837)
* clip : use FA

* cont : add warning about unsupported ops

* implement "auto" mode for clip flash attn

* clip : print more detailed op support info during warmup

* cont : remove obsolete comment [no ci]

* improve debugging message

* trailing space

* metal : remove stray return

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2025-11-09 23:38:03 +02:00
mnehete32 5ed97df483 CUDA: add FLOOR, CEIL, ROUND, TRUNC unary ops (llama/16917) 2025-11-09 23:38:03 +02:00
Aaron Teo 84854d246a ggml: add s390x cpu-feats (llama/16774) 2025-11-09 23:38:03 +02:00
Jeff Bolz 2001457367 vulkan: Fix multi_add invalid descriptor usage (llama/16899) 2025-11-09 23:38:03 +02:00
Jeff Bolz 90be9c9de1 vulkan: fuse mul_mat+add and mul_mat_id+add_id (llama/16868)
* vulkan: fuse mul_mat+add and mul_mat_id+add_id

The fusion is only applied for the mat-vec mul paths.

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* fix 32b build

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-09 23:38:03 +02:00
Oliver Simons 7d55fba06f CUDA: Remove unneeded bias/gate dims in fused mmvq (llama/16858)
* CUDA: Remove unneeded bias/gate dims in fused mmvq

Pointed out
[here](https://github.com/ggml-org/llama.cpp/pull/16847#discussion_r2476798989)
that only a single value is needed per target col per thread

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Fix "Error 991-D: extra braces are nonstandard" during compilation

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-11-09 23:38:03 +02:00
Johannes Gäßler 52e1bbb554 CUDA: Volta tensor core support for MMF (llama/16843)
* CUDA: Volta tensor core support for MMF

* more generic checks for hardware support

* Update ggml/src/ggml-cuda/mmf.cuh

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

---------

Co-authored-by: Aman Gupta <amangupta052@gmail.com>
2025-11-09 23:38:03 +02:00
Georgi Gerganov addda802dd ggml : fix conv2d_dw SVE path (ggml/1380)
* Fix test-conv2d-dw failure on ARM SVE by using runtime vector length

The ggml_compute_forward_conv_2d_dw_cwhn function was using a hardcoded GGML_F32_EPR (8) for SIMD vectorization, but on ARM SVE the actual vector length varies by hardware. This caused incorrect computation when processing CWHN layout tensors on ARM machines.

Fix by using svcntw() to get the runtime SVE vector length instead of the compile-time constant.

Co-authored-by: ggerganov <1991296+ggerganov@users.noreply.github.com>

* ci : reduce sam score threshold

* ci : update bbox checks for sam test

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ggerganov <1991296+ggerganov@users.noreply.github.com>
2025-11-09 23:38:03 +02:00
Aman Gupta 7d60b431a5 CUDA: add expert reduce kernel (llama/16857)
* CUDA: add expert reduce kernel

* contiguous checks, better formatting, use std::vector instead of array

* use vector empty instead of size

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-11-09 23:38:03 +02:00
Jeff Bolz a9ba988e56 vulkan: disable spirv-opt for rope shaders (llama/16872) 2025-11-09 23:38:03 +02:00
Masato Nakasaka e2b3eca0dc vulkan: Fix crash when FP16 mul_mat accumulation is not supported (llama/16796)
* Experimenting crash fix

* added assert for aborting and fixed comment

* changed to check if a pipeline is empty or not

* Moved function in class definition

* replaced with is_empty

* Modified is_empty to check only unaligned pipelines
2025-11-09 23:38:03 +02:00
Ruben Ortlam 7ed570ee94 vulkan: fix shmem overrun in mmq id shader (llama/16873)
* vulkan: fix shmem overrun in mmq id shader

* metal : fix mul_mm_id

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-11-09 23:38:03 +02:00
l3utterfly 486d39c2cb ggml-hexagon: respect input size when getting/setting tensor data (llama/16836)
* respect input size when getting/setting tensor data

allows partial repacking/copying when the get/set tensor size is smaller than the actual tensor

* Removed duplicate repack_mxfp4_mxfp4x4x2 function
2025-11-09 23:38:03 +02:00
lhez 7fdd53ac0d opencl: fix boundary handling for mul_mm (llama/16875) 2025-11-09 23:38:03 +02:00
Max Krasnyansky ffe1c832bd cpu: introduce chunking for repack matmuls and enable matmul-id chunking on ARM64 (llama/16833)
Very similar implementation to the flash-attention chunking, with similar benefits.
2025-11-09 23:38:03 +02:00
JJJYmmm e1780b209d model: add support for qwen3vl series (llama/16780)
* support qwen3vl series.

Co-authored-by: Thireus ☠ <Thireus@users.noreply.github.com>
Co-authored-by: yairpatch <yairpatch@users.noreply.github.com>
Co-authored-by: LETS-BEE <LETS-BEE@users.noreply.github.com>

* bugfix: fix the arch check for qwen3vl-moe.

* use build_ffn

* optimize deepstack structure

* optimize deepstack feature saving

* Revert "optimize deepstack feature saving" for temporal fix

This reverts commit f321b9fdf13e59527408152e73b1071e19a87e71.

* code clean

* use fused qkv in clip

* clean up / rm is_deepstack_layers for simplification

* add test model

* move test model to "big" section

* fix imrope check

* remove trailing whitespace

* fix rope fail

* metal : add imrope support

* add imrope support for sycl

* vulkan: add imrope w/o check

* fix vulkan

* webgpu: add imrope w/o check

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* fix tensor mapping

---------

Co-authored-by: Thireus ☠ <Thireus@users.noreply.github.com>
Co-authored-by: yairpatch <yairpatch@users.noreply.github.com>
Co-authored-by: LETS-BEE <LETS-BEE@users.noreply.github.com>
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-09 23:38:03 +02:00
Max Krasnyansky f1fdb91e95 cpu: introduce chunking for flash attention (llama/16829)
Factor out the core FA loop into flash_atten_f16_one_chunk and add an outer loop
on top that handles the chunks.
2025-11-09 23:38:03 +02:00
Sigbjørn Skjæret f7dfa39104 cuda : fix argsort with 64k+ rows (llama/16849) 2025-11-09 23:38:03 +02:00
Jeff Bolz 887d984558 vulkan: Handle argsort with a large number of rows (llama/16851) 2025-11-09 23:38:03 +02:00
Oliver Simons 41f4daca57 Hide latency of bias and gate-loading (llama/16847)
This is realised by loading them into registers before computation of
the dot-product, effectively batching them together with said
dot-product. As a lot of threads are alive here, the warp scheduler has
enough threads available to effectively hide the cost of additionally
loading those two floats.
2025-11-09 23:38:03 +02:00
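A scalar C++ sketch of the latency-hiding idea described above (illustrative only, not the actual fused mmvq kernel; names and shapes are made up): the per-column bias and gate values are read into locals before the dot-product loop, so the loads are issued early and overlap with the arithmetic.

```c++
#include <cstdio>
#include <vector>

static float fused_dot_bias_gate(const float * x, const float * w,
                                 const float * bias, const float * gate,
                                 int n, int col) {
    // issue these loads before the long dependency chain of the dot product
    const float b = bias[col];
    const float g = gate[col];

    float acc = 0.0f;
    for (int i = 0; i < n; ++i) {
        acc += x[i] * w[col * n + i];
    }
    // by the time the dot product finishes, the bias/gate values are long available
    return (acc + b) * g;
}

int main() {
    std::vector<float> x(64, 1.0f), w(64, 0.5f), bias(1, 2.0f), gate(1, 3.0f);
    std::printf("%f\n", fused_dot_bias_gate(x.data(), w.data(), bias.data(), gate.data(), 64, 0));   // 102
}
```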
Jeff Bolz efe8099268 vulkan: Fuse rope+set_rows (llama/16769)
This pattern appears in a lot of models: the rope operation is applied right
before storing into the KV cache (usually on the K tensor).

Add a path to some of the rope shaders that computes the destination address
based on the set_rows tensor. Compile variants of the shader with D_TYPE of
f16 (the usual KV cache type).

Add a src3 operand to ggml_vk_op_f32 - sometimes rope uses three srcs and needs
the fourth for the row indices.

Add fused_ops_write_mask to indicate which intermediate tensors need to write
their results to memory. Skipping writing the roped K value helps to allow more
nodes to run concurrently.

Add logic to ggml_vk_graph_optimize to make ROPE+VIEW+SET_ROWS consecutive. It
rarely starts out that way in the graph.

Add new backend tests.
2025-11-09 23:38:03 +02:00
Jeff Bolz 35a3fda240 vulkan: Update topk_moe fusion to handle gpt's late softmax (llama/16656)
* vulkan: Update topk_moe fusion to handle gpt's late softmax

Based on #16649.

* Add ggml_check_edges

* Add sync logging to show fusion effects

* handle clamp added in #16655

* Update ggml/src/ggml-impl.h

Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-11-09 23:38:03 +02:00
Ruben Ortlam bc944bddc8 Vulkan MMQ Integer Dot Refactor and K-Quant support (llama/16536)
* vulkan: add mmq q2_k integer dot support

* Refactor mmq caching

* Reduce mmq register use

* Load 4 quant blocks into shared memory in one step

* Pack q2_k blocks into caches of 32

* Use 32-bit accumulators for integer dot matmul

* Add q4_k mmq

* Add q3_k mmq

* Add q5_k mmq

* Add q6_k mmq

* Add mxfp4 mmq, enable MMQ MUL_MAT_ID

* Fix mmv dm loads
2025-11-09 23:38:03 +02:00
Max Krasnyansky 4d74160c9a Hexagon Op queue & dispatch optimizations (llama/16820)
* hexagon: remove dspqueue callbacks and do all read processing inplace

* hexagon: there is no need to ref/deref the buffers at this point

We're not going to release the buffers without flushing the session queue.
So there is no need to inc/dec the refcounts for every request.
We also don't need to include those bufs in the response.

* hexagon: bump the thread count in the adb wrapper scripts

We can use more CPU cores now that the dedicated dspqueue polling threads are not used (i.e. no contention).
Also enable more aggressive polling for now, since we still map Flash Attention (and a few other kernels) to
the CPU and those dspqueue threads were keeping the CPU cores at higher clock freqs.

* hexagon: add lhez as the second code owner
2025-11-09 23:38:03 +02:00
Aman Gupta 6051c704a0 CUDA: use fastdiv in set-rows (llama/16834)
* CUDA: use fastdiv in set-rows

* add assert about value fitting in u32
2025-11-09 23:38:03 +02:00
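For context, the fastdiv trick replaces a hardware integer division by a multiply-high and a shift using a precomputed "magic" constant. A hedged host-side C++ sketch of one common variant (not the exact CUDA helpers in the tree, which use __umulhi on the device):

```c++
#include <cassert>
#include <cstdint>
#include <cstdio>

struct fastdiv_t {
    uint32_t mp;    // magic multiplier
    uint32_t l;     // ceil(log2(d))
};

// Precompute once per divisor d (d >= 1).
static fastdiv_t fastdiv_init(uint32_t d) {
    uint32_t l = 0;
    while ((uint64_t{ 1 } << l) < d) ++l;
    const uint32_t mp = (uint32_t) (((uint64_t{ 1 } << 32) * ((uint64_t{ 1 } << l) - d)) / d + 1);
    return { mp, l };
}

// n / d without a divide: multiply-high, add, shift (64-bit here for clarity).
static uint32_t fastdiv(uint32_t n, fastdiv_t f) {
    const uint32_t hi = (uint32_t) (((uint64_t) n * f.mp) >> 32);
    return (uint32_t) (((uint64_t) hi + n) >> f.l);
}

int main() {
    for (uint32_t d : { 1u, 3u, 7u, 640u, 4096u }) {
        const fastdiv_t f = fastdiv_init(d);
        for (uint32_t n : { 0u, 1u, 5u, 12345u, 4294967295u }) {
            assert(fastdiv(n, f) == n / d);
        }
    }
    std::printf("fastdiv ok\n");
}
```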
Jeff Bolz 82a23ca9c4 vulkan: Call ggml_vk_buffer_write_2d from ggml_vk_buffer_copy (llama/16793)
This lets the copy to the destination device use the host-visible
vidmem optimization.
2025-11-09 23:38:03 +02:00
Aman Gupta 5c316c48f7 CUDA: Fix bug in topk-moe for gpt-oss (llama/16821)
* CUDA: Fix bug in topk-moe for gpt-oss

When using ggml_can_fuse_subgraph, the output nodes which are passed are wrong. This causes `test-backend-ops` to still fuse nodes (because the nodes are not used elsewhere in the graph),
but it doesn't actually fuse in the real gpt-oss graph.

* fix for qwen3 too

* change ifndef to ifdef
2025-11-09 23:38:03 +02:00
YaelLogic 5850c952e5 sycl: add RMS_NORM_BACK operation support (llama/16808)
* sycl: add RMS_NORM_BACK operation support

* sycl: rms_norm_back: add dual reduction paths (FP64 and FP32) and savepoint before further changes

* sycl: add RMS_NORM_BACK support

Implement RMS_NORM_BACK for the SYCL backend using FP32 compensated parallel reduction. Minimal docs updates (ops.md / SYCL.csv).

* revert: restore .gitignore and tools/run/CMakeLists.txt to upstream

* revert: restore tests/CMakeLists.txt to upstream

* sycl: optimize rms_norm_back

* fix: restore SYCL.csv to correct state with RMS_NORM_BACK support

* Update ggml/src/ggml-sycl/norm.cpp

Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>

* fix: remove trailing whitespace and add missing newline (EditorConfig)

---------

Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2025-11-09 23:38:03 +02:00
YaelGitAccount a983c9219d cuda: add SET operation support (llama/16804)
* feat(cuda): add GGML_OP_SET support

Implement CUDA kernel for SET operation with f32 support.

All tests passing (14598/14598).

* cuda(set): add I32 support; keep F32

* refactor(cuda): use ggml_cuda_cpy to unify SET operator logic and remove code duplication

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update ggml/src/ggml-cuda/set.cu

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-09 23:38:03 +02:00
l3utterfly f863a42d97 initialise buffer.device in ggml_hexagon_session (llama/16816) 2025-11-09 23:38:03 +02:00
Chenguang Li cb39359e7f CANN: Improve device ID handling and aclnnArange checks (llama/16752)
* cann: improve device ID handling and aclnnArange checks

- Stop relying on CANN's internal device ID retrieval; use a global variable instead.
- Enforce stricter dimension validation in aclnnArange for better compatibility across CANN versions.

* cann: use thread local var
2025-11-09 23:38:03 +02:00
Aman Gupta 0c8ff48103 CUDA: add unused vars to mmvf and mmvq (llama/16807) 2025-11-09 23:38:03 +02:00
tamarPal 9664420a54 sycl: add SSM_CONV operation support (llama/16800)
* feat: Add SYCL backend support for SSM_CONV operator

* Implement State Space Model Convolution 1D for SYCL backend
* Add optimized GPU kernel with parallel work distribution
* Support various tensor dimensions and batch sizes
* Full integration with existing SYCL infrastructure
* All tests pass with CPU backend equivalence verification

* feat: Implement SYCL backend support for SSM_CONV operation

- Add ggml-sycl/ssm_conv.cpp and ssm_conv.hpp
- Implement SYCL kernel for state space model convolution
- Ensure numerical correctness matches CPU implementation exactly
- Add proper type checking for F32 tensors in backend support
- All test-backend-ops SSM_CONV tests pass (14490/14490)

* Perfect SSM_CONV SYCL implementation - 100% CPU parity

- Flawless numerical accuracy - matches CPU bit-for-bit
- Optimal SYCL kernel design - efficient parallel execution
- Complete tensor layout compatibility - handles all strides correctly
- Robust error handling - comprehensive assertions and validation
- All official tests pass - 14,490/14,490 backend operations verified
- Production-ready code - clean, documented, maintainable

Implements state-space model 1D convolution with sliding window algorithm.
Eliminates blocking queue.wait() for better async performance.

* Clean SSM_CONV code - remove all comments for production

Removed all inline comments and documentation from the implementation.
Clean, minimal code ready for production merge.

* fix: Final formatting corrections for CI compliance

- Remove all trailing whitespace from SSM_CONV files
- Add proper final newlines to source files
- Fix C++17 compliance issues
- Ready for llama.cpp CI validation

* sycl: fix trailing whitespace and minor safety casts in ssm_conv

* fix: Clean up duplicated content in ssm_conv.hpp header file

---------

Co-authored-by: tamarPal <tamarPal@example.com>
2025-11-09 23:38:03 +02:00
Acly bcda7c3e58 ggml : fix interpolate with align-corners and ne=1 (llama/16700)
* ggml : fix interpolate with align-corners and ne=1

* avoid division by zero if one of the spatial dimensions is 1
* cpu, cuda, opencl returned correct result anyway due to clamp
* vulkan didn't clamp for align-corners so results were broken

* fix clang warning
2025-11-09 23:38:03 +02:00
Johannes Gäßler 1471b1fda7 HIP: fix AMDGPU_TARGETS, update documentation (llama/16803) 2025-11-09 23:38:03 +02:00
tamarPal 0e1b6c5fc4 sycl: add ROLL operation support (llama/16665)
* sycl: add ROLL operation support

- Implement ggml_sycl_roll function for F32 tensors
- Add multi-axis roll operation with SYCL kernel
- Support all 4 tensor dimensions with proper shift normalization
- Add roll.cpp and roll.hpp to SYCL backend
- Update backend dispatch and supports_op for GGML_OP_ROLL
- Tests: 17662/17662 pass with identical CPU reference results

* fix: remove trailing whitespace from roll.cpp

- Fix EditorConfig violations in ggml/src/ggml-sycl/roll.cpp
- Remove trailing spaces from lines 6, 11, 28, 47, 58, 60

* ci: retrigger

* sycl: remove wait() calls from ROLL operation

* fix: editorconfig — LF endings + final newline for roll.hpp

---------

Co-authored-by: tamarPal <tamarPal@example.com>
2025-11-09 23:38:03 +02:00
shani-f 543221d824 sycl: add REPEAT_BACK operation support (llama/16734)
* SYCL repeat_back v1 — add core op + switch case

* Implement repeat_back SYCL operation and minor fixes

* Update ggml/src/ggml-sycl/repeat_back.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update ggml/src/ggml-sycl/repeat_back.hpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-09 23:38:03 +02:00
Aman Gupta 97c3285cc4 CUDA: support for weight clamp in top-k norm (llama/16702) 2025-11-09 23:38:03 +02:00
Acly bd8734c050 ggml-alloc : make gallocr prefer chunks that allow memory reuse (llama/16788) 2025-11-09 23:38:03 +02:00
Sigbjørn Skjæret e6ff2bceed cuda : use fast copy when src and dst are of different type and contiguous (llama/16789)
* use fast copy when src and dst are contiguous and same shape

* use int64_t ne and ignore shape
2025-11-09 23:38:03 +02:00
leejet 4f4246dcb4 ggml: fix cuda kernel launch configuration for k_compute_batched_ptrs to support large batch (llama/16744)
* fix k_compute_batched_ptrs

* add backend ops test

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* reduce the batch size

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-11-09 23:38:03 +02:00
Aman Gupta 9f75cc7eef CUDA: General GEMV fusion (llama/16715) 2025-11-09 23:38:03 +02:00
Gilad S c00ab7e5e6 vulkan: deduplicate Microsoft Direct3D12 devices (llama/16689)
* fix: deduplicate and deprioritize Microsoft Direct3D12 vulkan devices from the `vulkan-dozen` driver

* style: indent

* fix: decrease priority

* fix: switch to `||`
2025-11-09 23:38:03 +02:00
Giuseppe Scrivano d0b544da70 vulkan: delete dead code (llama/16732)
ggml_vk_create_buffer_temp is not used anywhere, and it is the only
caller for ggml_vk_pool_malloc.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2025-11-09 23:38:03 +02:00
Jeff Bolz 070b24f65c vulkan: Optimize SSM_SCAN (llama/16645) 2025-11-09 23:38:03 +02:00
leejet 5166efa7f0 ggml: fix CUDA grid launch condition for large block_nums.y in binbcast (llama/16742)
* Fix CUDA grid launch condition for large block_nums.y

* add backend ops test

* reduce test  repetitions
2025-11-09 23:38:03 +02:00
Aman Gupta 524046d4d1 CUDA: use CUB for arbitary size argsort (llama/16754) 2025-11-09 23:38:03 +02:00
Aman Gupta 47efc4f115 ggml-cuda: use passed ops instead of hardcoded ops (llama/16712) 2025-11-09 23:38:03 +02:00
Matthew Michel 0a5b4c2e9b sycl: use async memory allocation to fix crashes during graph recording (llama/16644)
* sycl: use async memory allocation to fix graph recording failures

GGML_SYCL_DISABLE_GRAPHS=0 causes crashes because:
  - Host waits are currently unsupported in graph recording mode.
  - SYCL malloc / free calls are unsupported in graph recording mode.

The following changes are made to fix SYCL graph functionality:
  - When graphs are enabled, use the SYCL async memory extension for temp
    buffers which is supported with SYCL graphs.
  - For compiler versions that do not support this extension, skip
    graphs with the affected op.
  - Switch from USM shared to device memory as the async extension
    currently just supports device allocations.

* Address reviewer feedback

* Use global async variable to decide path in sycl_ext_[malloc_device|free]
2025-11-09 23:38:03 +02:00
Max Krasnyansky 8bb12395fe Add experimental ggml-hexagon backend for the Hexagon NPU (llama/16547)
* model: add support for extra bufs for all devices

* hexagon: add experimental ggml-hexagon backend for the Hexagon NPU

This commit introduces a new experimental backend `ggml-hexagon` with support for the Hexagon NPU.

Highlights:
- Supports Hexagon versions: v73, v75, v79, and v81
- Targets Android devices based on Snapdragon SoCs: Gen3, 8-Elite, and 8-Elite Gen5
- Supports Q4_0, Q8_0, MXFP4, and FP32 data types
- Implements core LLM ops: MUL_MAT/MUL_MAT_ID, ADD/SUB/MUL/ADD_ID, RMS_NORM, ROPE, GLU/SWIGLU, SOFTMAX

**Note:** This backend is experimental and may exhibit instability or limited performance across supported devices.
It is intended for early testing and feedback from llama.cpp/ggml developer and user community.

Co-Authored-By: Rajdeep Ganguly <rganguly@qti.qualcomm.com>
Co-Authored-By: Todor Boinovski <todorb@qti.qualcomm.com>

* hexagon: fix format checker errors

* hexagon: update readme and cmake presets

* ci: add android-ndk-build jobs that build plain ARM64 and Snapdragon versions

* hexagon: add simple graph optimizer for stacking MUL_MAT ops with the same input

* hexagon: move ADB helper scripts into scripts/snapdragon/adb

* hexagon: replace all f/printfs with GGML_LOG_...

* readme: add hexagon to the list supported backends

* hexagon: stack matmuls with quantized inputs only

* hexagon: add TODO for fixing issues in hexagon_graph_optimize

* hexagon: update to hex-sdk 6.4.0 and add scripts for running on QDC

* scripts: fix lint errors

* scripts: update qdc pytest script to make linter happy

* hexagon: add reduce sum in fp32

* hexagon: reduce number of vector stores in matmul output

* hexagon: remove the need for vdelta in reduce-multiply-x8

* hexagon: consistent use of reduce_sum_fp32 for row_sums

* hexagon: some more matmul optimizations and comments

Optimize cases where tensor dims are not a multiple of 1024 (e.g. in Qwen models).
We've handled those cases already but at a higher overhead.

* hexagon: update cmake presets

* hexagon: add OPMASK support for run-bench.sh wrapper

* hexagon: update to use GGML_BACKEND_API

* hexagon: remove unused logic for setting tensor flags for the views

* hexagon: add asserts to set/get_tensor to make sure we handle complete tensors

Same asserts as the CPU backend.

* hexagon: use cpy_tensor slow path for non-host buffers

* hexagon: error checks in the buffer allocator

* cmake: move include(extProj) under ggml-hexagon

* hexagon: don't forget to delete the backend on free

* hexagon: set/get_tensor size assert apply only to quantized tensors

* hexagon: reintroduce HEX_VERBOSE wrapper for GGML_LOG_DEBUG for now

GGML_LOG_DEBUG is always enabled for test-backend-ops and the output gets in the way.
Ideally we need a bit more finer log levels.

* docs: typos in hexagon developer docs (libggm-...)

* hexagon: overhaul error handling in the session/device allocation

this should handle all failure paths in the session allocation.

* hexagon: update cmake presets to enable fp16 vectors

* hexagon: remove unused time_usec function

* hexagon: don't forget to release buffer contexts

* hexagon: fixed indents in hvx-utils (missed clang-format auto-format failure)

* hexagon: remove custom can_repeat function and use ggml_can_repeat

---------

Co-authored-by: Rajdeep Ganguly <rganguly@qti.qualcomm.com>
Co-authored-by: Todor Boinovski <todorb@qti.qualcomm.com>
2025-11-09 23:38:03 +02:00
Diego Devesa a2130ac501 Revert "ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_v…" (#16723)
This reverts commit 19a5a3edfd306516cc419679d69d6435943b6816.
2025-11-09 23:38:03 +02:00
sirus20x6 773041e336 ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_vec_set_f32 for faster fills (llama/16522)
* Leverage the existing GGML_F32_VEC helpers to broadcast the fill value across SIMD registers and store in vector-sized chunks, while retaining the scalar tail for leftover elements and non-SIMD builds.

* Vectorize additional f32 helper loops

* Normalize f32 helper tails for ggml vec ops

---------

Co-authored-by: Aaron <shelhamer.aaron@gmail.com>
2025-11-09 23:38:03 +02:00
Aman Gupta 431aaf56f0 CUDA: fix bug in topk-moe softmax (llama/16711) 2025-11-09 23:38:03 +02:00
Aman Gupta ba41a6ca6a CUDA: topk-moe: add optional parameter for gpt-oss (llama/16649) 2025-11-09 23:38:03 +02:00
Johannes Gäßler 99cea274e5 CUDA: better error for FA kernel with 0 occupancy (llama/16643) 2025-11-09 23:38:03 +02:00
Oleg Orlov 999a7e0cbf
whisper : enable IGPU (#3492)
Co-authored-by: Oleg Orlov <vk.orelsokolov@yandex.by>
2025-11-01 13:38:28 +01:00
KITAITI Makoto c62adfbd1e
ruby : tiny bug fix (#3490)
* Remove build-xcframework.sh from package

* Remove unused variable

* Bump version to 1.3.5

* Don't use variable before declaration
2025-10-29 03:50:44 +09:00
Orel-A f16c12f3f5
wasm : fix Hebrew ID (#3487)
whisper_lang_id: unknown language 'iw'
2025-10-27 08:49:32 +02:00
Georgi Gerganov 322c2adb75 talk-llama : sync llama.cpp 2025-10-22 12:58:11 +03:00
Georgi Gerganov 35ea5ced60 sync : ggml 2025-10-22 12:58:11 +03:00
Aman Gupta 9a8cfb040c ggml: add ggml_can_fuse_subgraph (llama/16662)
* ggml: add ggml_can_fuse_subgraph

* ggml-cuda: use ggml_can_fuse_subgraph for topk-moe

* format

* 1. remove inputs from signature as they are transient nodes
2. add check for views: view_src should be part of the subgraph

* - combine check into one loop
- check all view_src parents
- other minor review comments

* remove redundant if test

* - rename and other minor review comments

* add assert about count < 32
2025-10-22 12:58:11 +03:00
lhez 5c4c477d00 opencl: fix warnings and clean up profiling (llama/16688)
* opencl: remove unused headers, fix warnings

* opencl: clean up profiling, only keep kernel time
2025-10-22 12:58:11 +03:00
Jeff Bolz 7f16c71068 vulkan: Handle FA with all -inf mask values (llama/16447) 2025-10-22 12:58:11 +03:00
YehuditE 55cf00c20a sycl : add PAD_REFLECT_D1 operator support (llama/16145)
* sycl: add PAD_REFLECT_D1 operator support

* docs(ops): regenerate docs/ops.md

* remove trailing whitespaces

* style: fix editorconfig issues — trim trailing spaces and normalize EOLs

* fix: move PAD_REFLECT_1D case outside of fall-through block
2025-10-22 12:58:11 +03:00
Diego Devesa 70b4d22f01 ggml-alloc : fix leak when reusing a tensor with a larger size (llama/16679) 2025-10-22 12:58:11 +03:00
safranowith bb76672081 SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators (llama/16613)
* SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

Clean up unrelated changes from previous commit

* Chore: remove empty lines and fix indentation

* Clean up: remove leftover blank lines and fix spacing

* chore: fix trailing whitespace and ensure final newline

* Cleanup: remove redundant declarations already defined in header

* Sync docs/ops.md with updated backend operation support

* docs: update ops.md after rebase

* docs: update ops.md - Vulkan supports SSM_CONV and SSM_SCAN
2025-10-22 12:58:11 +03:00
Aaron Teo 82bdf31267 ci : fix binaries release failure for s390x (binaries may not work yet) (llama/16664)
* devops: initial patch

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: forgot the z15 suffix

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: attempt at impl GGML_CPU_ALL_VARIANTS for s390x

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: rm baseline version

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-10-22 12:58:11 +03:00
Johannes Gäßler 72d98011db HIP: fix GPU_TARGETS (llama/16642) 2025-10-22 12:58:11 +03:00
Jeff Bolz 414901a42c vulkan: Implement topk_moe fused shader, ported from CUDA (llama/16641)
This is similar to the CUDA shader from #16130, but doesn't use shared memory
and handles different subgroup sizes.
2025-10-22 12:58:11 +03:00
Aman Gupta 08345f15ec CUDA: use registers instead of smem in topk-moe (llama/16647)
Uses the technique used in the vulkan PR #16641. Neat trick!
2025-10-22 12:58:11 +03:00
Shawn Gu 8ffdf4bd96 opencl: transposed gemm/gemv moe kernel with mxfp4,f32 (llama/16602)
* opencl: transposed gemm/gemv moe kernel with mxfp4,f32

* add restore kernel for moe transpose

* fix trailing whitespaces

* resolve compilation warnings
2025-10-22 12:58:11 +03:00
Radoslav Gerganov 6aa18cccd8 rpc : report actual free memory (llama/16616)
* rpc : report actual free memory

Start reporting the free memory on every device instead of using
fixed values. Now llama-cli users can get a nice memory breakdown
when using RPC devices.

* drop --mem in rpc-server
2025-10-22 12:58:11 +03:00
Giuseppe Scrivano d22008b631 vulkan: Add State Space Model (SSM) Operations Support (llama/16463)
* vulkan: implement SSM scan operation

Add State Space Model scan operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

* vulkan: implement SSM conv operation

Add State Space Model conv operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

---------

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2025-10-22 12:58:11 +03:00
muggle-stack 328263f8fd ggml : fix SpaceMit IME array out-of-bounds in task assignment (llama/16629)
Fix incorrect task-to-batch index calculation in the quantization phase.

The bug caused out-of-bounds access to qnbitgemm_args array when
compute_idx exceeded per_gemm_block_count_m, leading to invalid
pointer dereferences and SIGBUS errors.

Correctly map tasks to batches by dividing compute_idx by
per_gemm_block_count_m instead of block_size_m.

Example:
  batch_feature=1, gemm_m=30, block_size_m=4
  per_gemm_block_count_m = 8, task_count = 8

  Old: gemm_idx = 4/4 = 1 (out of bounds)   New: gemm_idx = 4/8 = 0 (correct)

Tested on SpaceMit K1 RISC-V64 with qwen2.5:0.5b model.

Co-authored-by: muggle <mingjun.rong@spacemit.com>
2025-10-22 12:58:11 +03:00
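A small sketch of the corrected task-to-batch mapping described in the commit above; the names are paraphrased from the commit message, not taken from the actual SpaceMit IME source:

```c
#include <stddef.h>

// Map a flat task index to (batch GEMM, M-block within that GEMM).
// per_gemm_block_count_m = number of M-blocks per GEMM.
static void map_task(size_t compute_idx, size_t per_gemm_block_count_m,
                     size_t * gemm_idx, size_t * block_idx) {
    // Buggy version divided by block_size_m, which can exceed the batch count:
    //     *gemm_idx = compute_idx / block_size_m;
    *gemm_idx  = compute_idx / per_gemm_block_count_m;  // which GEMM in the batch
    *block_idx = compute_idx % per_gemm_block_count_m;  // which M-block inside it
}

// With the example above (gemm_m=30, block_size_m=4 -> per_gemm_block_count_m=8),
// compute_idx=4 maps to gemm_idx=0 instead of the out-of-bounds gemm_idx=1.
```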
Jeff Bolz 4a384826a8 vulkan: fix debug build (add_rms_len/data not found) (llama/16624) 2025-10-22 12:58:11 +03:00
Ilia Ilmer 0ae492641c metal : add `CONV_TRANSPOSE_2D` (llama/16542)
* initial: headers and metal-device.cpp updates

* adding conv_transpose_2d

* fix type

* fix type: int32->int64

* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* add checks for src[0] and src[1]; add type checks

* Update ggml-metal.metal

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* add more tests, add optimization to threading

* add dynamic memory allocation in metal

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-22 12:58:11 +03:00
GittyBurstein 82332cea27 SYCL SET operator optimized for F32 tensors (llama/16350)
* SYCL/SET: implement operator + wire-up; docs/ops updates; element_wise & ggml-sycl changes

* sycl(SET): re-apply post-rebase; revert manual docs/ops.md; style cleanups

* move SET op to standalone file, GPU-only implementation

* Update SYCL SET operator for F32

* ci: fix editorconfig issues (LF endings, trailing spaces, final newline)

* fixed ggml-sycl.cpp

---------

Co-authored-by: Gitty Burstein <gitty@example.com>
2025-10-22 12:58:11 +03:00
GittyBurstein 7bb53032b3 sycl : add ARANGE operator (llama/16362)
* SYCL: update element-wise ops and presets

* clean arange

* Re-trigger CI

---------

Co-authored-by: Gitty Burstein <gitty@example.com>
2025-10-22 12:58:11 +03:00
Chenguang Li fe965613c0 CANN: format code using .clang-format (llama/15863)
This commit applies .clang-format rules to all source files under the
ggml-cann directory to ensure consistent coding style and readability.
The .clang-format option `SortIncludes: false` has been set to disable
automatic reordering of include directives.
No functional changes are introduced.

Co-authored-by: hipudding <huafengchun@gmail.com>
2025-10-22 12:58:11 +03:00
takuya kodama 3c136d699a ggml-cpu: replace putenv with setenv for const-correctness (llama/16573)
## Why it failed

When compiling with strict compiler flags (-Wwrite-strings -Werror=discarded-qualifiers),
the build fails with the following error:

```
cmake \
  -S . \
  -B ../llama.cpp.build \
  --preset=x64-linux-gcc-debug \
  -DCMAKE_INSTALL_PREFIX=/tmp/local \
  -DCMAKE_C_FLAGS="-Wwrite-strings -Werror=discarded-qualifiers" && \
cmake --build ../llama.cpp.build/
...
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c: In function ‘ggml_cpu_init’:
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:3572:24: error: passing argument 1 of ‘putenv’ discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
 3572 |                 putenv("KMP_BLOCKTIME=200"); // 200ms
      |                        ^~~~~~~~~~~~~~~~~~~
In file included from /home/otegami/work/cpp/llama.cpp/ggml/src/./ggml-impl.h:10,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/traits.h:3,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:6:
/usr/include/stdlib.h:786:26: note: expected ‘char *’ but argument is of type ‘const char *’
  786 | extern int putenv (char *__string) __THROW __nonnull ((1));
      |                    ~~~~~~^~~~~~~~
cc1: some warnings being treated as errors
ninja: build stopped: subcommand failed.
```

The issue is that putenv() expects a non-const char * but receives a string literal (const char *).

## How to fix

This PR replaces putenv("KMP_BLOCKTIME=200") with setenv("KMP_BLOCKTIME", "200", 0).

Benefits of setenv():
- Accepts const char * parameters (no qualifier warnings)
- Makes copies of the strings (safer memory handling)
- The third parameter (0) ensures we don't overwrite if already set
2025-10-22 12:58:11 +03:00
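For reference, a minimal sketch of the replacement described above (POSIX `setenv`; the surrounding code is hypothetical, not the actual ggml-cpu.c context):

```c
#define _POSIX_C_SOURCE 200112L  // for setenv()
#include <stdlib.h>

int main(void) {
    // putenv() takes a non-const char * and keeps a pointer to the string,
    // so passing a string literal discards const and is fragile:
    //     putenv("KMP_BLOCKTIME=200");   // -Werror=discarded-qualifiers
    //
    // setenv() accepts const char *, copies the strings, and the final
    // argument 0 means "do not overwrite if the variable is already set".
    setenv("KMP_BLOCKTIME", "200", 0);
    return 0;
}
```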
yael-works f7b5ecf195 SYCL: Add GGML_OP_MEAN operator support (llama/16009)
* SYCL: Add GGML_OP_MEAN operator support

* SYCL: Fix formatting for GGML_OP_MEAN case

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-10-22 12:58:11 +03:00
safranowith 757d51d21d cpu : add FLOOR, CEIL, ROUND and TRUNC unary operators (llama/16083)
* CPU: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

- Added the operators to unary op enum
- Implemented API functions
- Implemented forward and unary-op logic in CPU backend
- Updated ggml_get_n_tasks
- Updated operators names array and static_assert
- Updated docs and enabled automatic tests

* docs: add documentation for ggml_trunc and ggml_trunc_inplace in ggml.h

* chore: remove trailing whitespace from ggml.h

* Remove unresolved merge markers

* Apply review suggestions: cleanup formatting, enum order and leftover artifacts

* Regenerate ops.md using create_ops_docs.py
2025-10-22 12:58:11 +03:00
lhez bef9f74553 opencl: add q8_0 mm support (llama/16469)
* opencl: add mm_q8_0_f32

* opencl: fix data loading for incomplete tile

* opencl: use q8_0 mm for larger matrix

* opencl: add some tests to cover the path
2025-10-22 12:58:11 +03:00
lhez 16dab3d122 opencl: fix FA for f32 (llama/16584) 2025-10-22 12:58:11 +03:00
Sam/Samuel d8a146b0f9 metal: optimise `GGML_OP_SUM` (llama/16559)
* optimise GGML_OP_SUM

* add non-contiguous tests by permuting the input

* change tests to require full contiguity of OP_SUM

* cuda : add check GGML_OP_SUM

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-22 12:58:11 +03:00
Julius Tischbein 0c9d49927c CUDA: Changing the CUDA scheduling strategy to spin (llama/16585)
* CUDA set scheduling strategy to spinning for cc121

* Using prop.major and prop.minor, include HIP and MUSA

* Exclude HIP and MUSA

* Remove trailing whitespace

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Remove empty line

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-10-22 12:58:11 +03:00
Georgi Gerganov 8ed913da0e metal : avoid using Metal's gpuAddress property (llama/16576)
* metal : avoid using Metal's gpuAddress property

* metal : fix rope kernels buffer check
2025-10-22 12:58:11 +03:00
Georgi Gerganov 23c19308d8
server : set no_context == true (#3482) 2025-10-20 15:39:48 +03:00
Georgi Gerganov 4979e04f5d
release : v1.8.2 2025-10-15 10:29:42 +03:00
Georgi Gerganov 8ba3c13b0c talk-llama : sync llama.cpp 2025-10-15 09:29:17 +03:00
Georgi Gerganov ff2253b08a sync : ggml 2025-10-15 09:29:17 +03:00
SavicStefan 499f183e75 vulkan: Add ACC_TYPE_VEC2 implementation (llama/16203)
Signed-off-by: Stefan Savic <stefan.savic@huawei.com>
Co-authored-by: Stefan Savic <stefan.savic@huawei.com>
2025-10-15 09:29:17 +03:00
Aman Gupta 2eb9119754 CUDA + openCL: fix bug in accessing rms_norm->src while doing fusion (llama/16577) 2025-10-15 09:29:17 +03:00
Jeff Bolz 393fbbc80b vulkan: Support FA with K/V in F32 (llama/16543) 2025-10-15 09:29:17 +03:00
Jeff Bolz 73e200ee85 vulkan: Improve build time for MSVC (llama/16545)
Enable CMP0147 so custom build steps (invoking vulkan-shader-gen) are run in parallel.

Enable /MP so source files are compiled in parallel.
2025-10-15 09:29:17 +03:00
Johannes Gäßler 1bdd746bc8 CUDA: enable FA for FP32 KV cache (llama/16546) 2025-10-15 09:29:17 +03:00
Aman Gupta f2075667fa CUDA: use fastdiv + ggml_cuda_mad for mmvf (llama/16557)
* CUDA: use fastdiv + ggml_cuda_mad for mmvf

* use bf16 directly + fix formatting

* Add exception for HIP code
2025-10-15 09:29:17 +03:00
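The mmvf change above relies on replacing runtime integer division with a precomputed "fastdiv" multiply. The sketch below shows one textbook variant (64-bit reciprocal plus a 128-bit multiply, GCC/Clang extension), not the actual ggml_cuda helper, which packs a 32-bit magic and a shift:

```c
#include <stdint.h>
#include <assert.h>

// Precomputed reciprocal for dividing 32-bit values by a fixed divisor d > 0.
typedef struct { uint64_t magic; uint32_t d; } fastdiv_t;

static fastdiv_t fastdiv_init(uint32_t d) {
    // floor(2^64 / d) + 1 for non-powers of two; exactly 2^64 / d otherwise.
    fastdiv_t f = { UINT64_MAX / d + 1, d };
    return f;
}

static uint32_t fastdiv(uint32_t n, fastdiv_t f) {
    // The high 64 bits of n * magic equal n / d for all 32-bit n.
    return (uint32_t)(((unsigned __int128)n * f.magic) >> 64);
}

int main(void) {
    fastdiv_t f = fastdiv_init(7);
    for (uint32_t n = 0; n < 1000000; ++n) {
        assert(fastdiv(n, f) == n / 7);
    }
    return 0;
}
```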
Aman Gupta b4c5c6f71f CUDA: add fp kernel for larger batch size MoE (llama/16512)
* CUDA: kernel for larger batch sizes for MoE

* WIP

* WIP

* WIP

* WIP

* WIP

* WIP

* fixup

* tests

* Move mmq_ids_helper to mmid

* cleanup

* Remove redundant checks
2025-10-15 09:29:17 +03:00
Anav Prasad a12848e8e9 cuda : remove legacy copy-op pointer indirection code (llama/16485)
* remove legacy copy-op pointer indirection code

* further removal of copy-op indirection code

* renamed check_node_graph_compatibility_and_refresh_copy_ops function
2025-10-15 09:29:17 +03:00
Georgi Gerganov 25ac94a6cb metal : FA support F32 K and V and head size = 32 (llama/16531)
* metal : FA support F32 K and V and head size = 32

* graph : remove obsolete comment [no ci]
2025-10-15 09:29:17 +03:00
lhez 66b0fc2fb7 opencl: fix build targeting CL 2 (llama/16554) 2025-10-15 09:29:17 +03:00
Johannes Gäßler 77272fe0df CUDA: fix numerical issues in tile FA kernel (llama/16540) 2025-10-15 09:29:17 +03:00
Jie Fu (傅杰) 8a9c2ba6a1 ggml : fix build broken with -march=armv9-a on MacOS (llama/16520)
* ggml : fix build broken with -march=armv9-a on MacOS

Signed-off-by: Jie Fu <jiefu@tencent.com>

* Add #pragma message

Signed-off-by: Jie Fu <jiefu@tencent.com>

* Address review comment.

Signed-off-by: Jie Fu <jiefu@tencent.com>

* Update ggml/src/ggml-cpu/ggml-cpu.c

---------

Signed-off-by: Jie Fu <jiefu@tencent.com>
Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-10-15 09:29:17 +03:00
Chenguang Li 417ecdddc5 CANN: fix CPU memory leak in CANN backend (llama/16549)
This commit fixes a CPU-side memory leak issue in the CANN backend,
which occurred when intermediate aclTensorList objects were not properly
released after operator execution. The leak happened during repeated
invocations of CANN ops (e.g., FlashAttention), leading to increasing
host memory usage over time.

Proper resource cleanup (aclDestroyTensorList and related release logic)
has been added to ensure that all temporary tensors are correctly freed.
2025-10-15 09:29:17 +03:00
Sam/Samuel bfd88b8b6e metal: add support for opt_step_sgd (llama/16539)
* metal: add support for opt_step_sgd

* add newline to pass EditorConfig check
2025-10-15 09:29:17 +03:00
Georgi Gerganov ccac1b4772 ggml : fix scalar path for computing norm (llama/16558) 2025-10-15 09:29:17 +03:00
hipudding 53e21364a6 CANN: Update several operators to support FP16 data format (llama/16251)
Many Ascend operators internally use FP16 precision for computation.
If input data is in FP32, it must first be cast to FP16 before
computation, and then cast back to FP32 after computation, which
introduces unnecessary cast operations. Moreover, FP16 computation
requires significantly less workload compared to FP32, leading to
noticeable efficiency improvements.

In this change, `get_rows`, `rms_norm`, and `flash_attn_ext` are extended
to support multiple data types. Validation on the Qwen2 0.5b model shows
correct accuracy and about 10% performance gain in concurrent scenarios.

Co-authored-by: noemotiovon <757486878@qq.com>
2025-10-15 09:29:17 +03:00
Sam/Samuel 7f22fe5d8f metal : add opt_step_adamw and op_sum (llama/16529)
* scaffold to support opt step adamw on metal (not written so far)

* add opt-step-adamw kernel for metal

* pass op->src[4] as a separate buffer to the pipeline

* add bounds check to opt-step-adamw kernel

* complete scaffold for GGML_OP_SUM

* naive GGML_OP_SUM kernel

* remove unwanted comment

* change OP_SUM capability gate

* Add has_simdgroup_reduction to both ops to pass CI
2025-10-15 09:29:17 +03:00
Neo Zhang Jianyu be778c992f fix UT fault cases: count-equal, argsort, pad OPs (llama/16521)
* fix/refactor OP argsort, pad

* fix count-equal op

* update SYCL OP list

* fix format issue

---------

Co-authored-by: Zhang Jianyu <zhang.jianyu@outlook.com>
2025-10-15 09:29:17 +03:00
sirus20x6 70eb30f28e ggml : Fix FP16 ELU positive branch (llama/16519)
Co-authored-by: Aaron <shelhamer.aaron@gmail.com>
2025-10-15 09:29:17 +03:00
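For context, ELU's positive branch should pass the input through unchanged; only the negative branch uses the exponential. A scalar reference (the fixed code path in the commit above is the FP16 SIMD variant, which is not shown here):

```c
#include <math.h>

// Reference ELU with alpha = 1: the positive branch must return x unchanged.
static float elu_ref(float x) {
    return x > 0.0f ? x : expm1f(x);  // expm1f(x) == expf(x) - 1
}
```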
sirus20x6 53721d6309 ggml: Correct SVE implementation in ggml_vec_dot_f16_unroll (llama/16518)
The previous SVE implementation for `ggml_vec_dot_f16_unroll` contained a bug due to a copy-paste error. The wrong variable was used in an FMA instruction, leading to incorrect results. This commit corrects the variable usage and improves the clarity of the code by renaming variables to avoid confusion.

Co-authored-by: Aaron <shelhamer.aaron@gmail.com>
2025-10-15 09:29:17 +03:00
Johannes Gäßler b5fb9b9f58 CUDA: faster tile FA, add oob checks, more HSs (llama/16492) 2025-10-15 09:29:17 +03:00
Georgi Gerganov a91dd3be72
release : v1.8.1 2025-10-12 11:17:59 +03:00
Georgi Gerganov ea174c62bc bench : update [no ci] 2025-10-12 11:16:23 +03:00
Georgi Gerganov ff4c1a5a53 talk-llama : sync llama.cpp 2025-10-12 11:16:23 +03:00
Georgi Gerganov ed6a3063ec sync : ggml 2025-10-12 11:16:23 +03:00
Georgi Gerganov d201705e71 metal : fix mul-mm condition + fix mul-mv permuted kernels (llama/16494) 2025-10-12 11:16:23 +03:00
Diego Devesa 1cc342427b cuda : avoid initializing unused devices (llama/16510) 2025-10-12 11:16:23 +03:00
Prajwal B Mehendarkar d8f1aa4e1d cmake : Dont define XOPENSOURCE on AIX (llama/16481) 2025-10-12 11:16:23 +03:00
duduta d83fef35df cpu : optimize the ggml NORM operation (llama/15953)
* ggml-cpu: optimize norm operation to use intrinsics or Accelerate

* rename function

* add endif macro comment

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Aaron Teo <taronaeo@gmail.com>

* implement s390x SIMD suggested by @taronaeo

* add TODO comment

* tidy up spaces

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Aaron Teo <taronaeo@gmail.com>
2025-10-12 11:16:23 +03:00
Chenguang Li b9eac9419c CANN: Improve ACL graph matching (llama/16166)
* CANN: improve ACL graph matching

Record `ne` and `nb` information for src tensors and include them in the
graph matching check. This enhances the robustness of ACL graph matching
by preventing incorrect matches when src tensors share the same data
address but differ in shape or stride.

* CANN: add op_params match
2025-10-12 11:16:23 +03:00
Charles Xu c8b2c56fd2 kleidiai: kernel interface refactoring (llama/16460) 2025-10-12 11:16:23 +03:00
Neo Zhang Jianyu 7df6766b63 refactor soft_max, add soft_max_back (llama/16472)
* refactor to support soft_max_ext

* fix error and support soft_max_back

* rm unused functions

* fix format issue

---------

Co-authored-by: Zhang Jianyu <zhang.jianyu@outlook.com>
2025-10-12 11:16:23 +03:00
ai-fonsi 21e6e72a2f Disable CUDA host buffers on integrated GPUs (llama/16308) 2025-10-12 11:16:23 +03:00
Georgi Gerganov 7ef78a72e1 metal : mark FA blocks (llama/16372)
* metal : better unroll in the FA kernels

* metal : index FA blocks

* tests : restore [no ci]

* metal : prevent division by zero in FA kernels

* metal : fix -INF detection logic
2025-10-12 11:16:23 +03:00
Reese Levine 4eea3efc49 ggml webgpu: profiling, CI updates, reworking of command submission (llama/16452)
* Add profiling

* More detailed profiling

* Rework command submission to avoid global locks

* Update wait handling

* try new method of waiting on futures

* Add serializing of command submission in some cases

* Add new pool for timestamp queries and clean up logging

* Serialize command submission in CI and leave a TODO note

* Update webgpu CI

* Add myself as WebGPU codeowner

* Deadlock avoidance

* Leave WebGPU/Vulkan CI serialized

* Fix divide by 0

* Fix logic in division by inflight_threads

* Update CODEOWNERS and remove serialize submit option
2025-10-12 11:16:23 +03:00
Georgi Gerganov 4bce4fa5e9 metal : add support for non-padded FA KV (llama/16148)
* metal : pad K, V and Mask when needed

* cont : simplify

* cuda : add TODO about KV padding requirement

* metal : add comments

* metal : remove mask padding requirement
2025-10-12 11:16:23 +03:00
Georgi Gerganov 6cf0c21b09 tests : add -INF blocks to the KQ mask in the FA tests (llama/16380)
* tests : add -INF blocks to the KQ mask in the FA tests

* cont : bump -INF block size to 64

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>

* ggml : prevent division by zero in FA CPU op

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-10-12 11:16:23 +03:00
Georgi Gerganov 1a4116f942 metal : various optimizations + refactoring (llama/16446)
* metal : ssm_scan minor opts

* metal : get_rows optimize

* metal : cpy optimize

* metal : ssm_conv opt

* metal : ssm_scan simplify

* metal : ssm_Scan opt
2025-10-12 11:16:23 +03:00
Georgi Gerganov 0e431b3cea ggml : fix unaligned access in AMX code (llama/16315) 2025-10-12 11:16:23 +03:00
Daniel Bevenius 0f29d7c3fa ggml-cpu : fix leftover handling in ggml_vec_scale_f32 for SVE (llama/16443)
This commit updates the leftover handling in ggml_vec_scale_f32.

The motivation for this is that the code currently incorrectly assumes
there would be fewer than ggml_f32_epr leftover elements. However,
since the main loop processes 2*ggml_f32_epr elements per iteration,
there can be up to (2*ggml_f32_epr - 1) leftover elements.

The original single-pass leftover code could only process ggml_f32_epr
elements, leaving some elements unscaled.

Example scenario with 256-bit SVE:
```
ggml_f32_epr  = 8 (elements per register)
ggml_f32_step = 16 (two registers per iteration)
n             = 25
np            = 16
leftovers     = 9 elements (16-24)

Original    : processes only elements 16-23, misses element 24
This commit : loop processes elements 16-23, then element 24
```

Refs: https://github.com/ggml-org/llama.cpp/actions/runs/18070620247/job/51419855630
2025-10-12 11:16:23 +03:00
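A scalar sketch of the leftover structure described in the commit above, with placeholder widths standing in for ggml_f32_epr/ggml_f32_step; the point is that a main loop stepping by 2*EPR can leave up to 2*EPR - 1 elements, so a single EPR-wide tail pass is not enough:

```c
#include <stddef.h>

enum { EPR = 8, STEP = 2 * EPR };  // stand-ins for ggml_f32_epr / ggml_f32_step

static void vec_scale_f32_sketch(size_t n, float * y, float s) {
    const size_t np = n & ~(size_t)(STEP - 1);  // elements handled by the main loop

    // Main loop: two "registers" (2*EPR elements) per iteration.
    for (size_t i = 0; i < np; i += STEP) {
        for (size_t j = 0; j < STEP; ++j) {
            y[i + j] *= s;
        }
    }

    // Leftovers: up to STEP - 1 = 2*EPR - 1 elements, e.g. 9 when n = 25 and EPR = 8.
    // A single EPR-wide pass would miss the final n - np - EPR elements.
    for (size_t i = np; i < n; ++i) {
        y[i] *= s;
    }
}
```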
Reese Levine b8bdf06182 ggml webgpu: actually add softmax, fix rms_norm offset (llama/16400)
* implement soft_max

* Fix soft_max data race

* Temporary fix, wait on each submit
2025-10-12 11:16:23 +03:00
Eve 2ca8fa37fa vulkan: use a more appropriate amount of threads when generating shaders (llama/16418)
* use a more flexible amount of threads

* fix windows compile and 0 thread case

* nominmax
2025-10-12 11:16:23 +03:00
Radoslav Gerganov 93882335a8 rpc : check src buffer when copying tensor (llama/16421)
Only dst buffer is guaranteed to be an RPC buffer. Add check for the src
one.
2025-10-12 11:16:23 +03:00
Radoslav Gerganov af51bbab88 rpc : add support for multiple devices (llama/16276)
* rpc : add support for multiple devices

Allow rpc-server to expose multiple devices from a single endpoint.
Change RPC protocol to include device identifier where needed.

closes: #15210

* fixes

* use ggml_backend_reg_t

* address review comments

* fix llama-bench backend report

* address review comments, change device naming

* fix cmd order
2025-10-12 11:16:23 +03:00
Acly 49e0a426f3 vulkan : incremental shader builds (llama/16341)
* vulkan (DRAFT): split shader generation by GLSL source file, to improve incremental build times

* support dep-files so shaders are recompiled if their included files change

* rename shader files which are used as "headers" to use .glsl extension
* move glslc extension detection shaders to separate folders
* the above is to prevent them from getting glob'd with the actual compute shaders that need to be compiled

* vulkan : only write embedded shader .hpp/.cpp when they change

* avoid recompiling ggml-vulkan.cpp when editing shaders
* pass single --source argument instead of --input-dir & --filter to shader gen
* check for source file match earlier

* fix hang in vulkan-shaders-gen when there are compilation errors

* early out did not decrement compile_count

* clean up

* fix glslc integer dot product test

* unconditionally write the embedded shader cpp output

* replace output filepath in generated dep-files to match output in CMakeLists

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-10-12 11:16:23 +03:00
Georgi Gerganov 93c1305565 metal : fix loop bound in ggml_mem_ranges (llama/16412) 2025-10-12 11:16:23 +03:00
Acly a70144a873 ggml : fix graph reallocation with multiple chunks (llama/16396)
reallocation is needed if a single chunk grows in size,
even if total allocation size stays the same or is lower
2025-10-12 11:16:23 +03:00
Jeff Bolz 2e6888089f vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE (llama/16354)
* vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE

Replace maxMemoryAllocationSize check with maxBufferSize when creating buffers.
The maxMemoryAllocationSize limit is a "soft" limit and allocations can succeed
beyond that limit. This allows > 4GB buffers to be allocated on some
implementations (e.g. NVIDIA) and tensors this large can be used for im2col
and mul_mat.

For temporary buffers (prealloc_x/y/etc) check against maxStorageBufferRange.
I'm not sure this check is ideal, but we always use these buffers as a single
full size binding and the limit may be smaller than maxMemoryAllocationSize
or maxBufferSize, so I think this is reasonable.

Replace descriptor range uses of VK_WHOLE_SIZE with a manually computed range.
The maxStorageBufferRange may be smaller than the maxBufferSize or
maxMemoryAllocationSize (and the Vulkan spec warns about this in a note) and
it's invalid usage if VK_WHOLE_SIZE computes a range larger than
maxStorageBufferRange.

With this change, it should be possible to generate videos using wan networks
in stable-diffusion.cpp.

* vulkan: Add env var GGML_VK_FORCE_MAX_BUFFER_SIZE and use stoull
2025-10-12 11:16:23 +03:00
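A tiny sketch of the range computation implied above: instead of relying on VK_WHOLE_SIZE, compute the bound range explicitly so it can be checked against maxStorageBufferRange (names hypothetical, not the actual ggml-vulkan code):

```c
#include <stdint.h>

// Range for a binding starting at `offset` into a buffer of `buffer_size` bytes.
static uint64_t storage_binding_range(uint64_t buffer_size, uint64_t offset, uint64_t needed) {
    uint64_t whole = buffer_size - offset;      // what VK_WHOLE_SIZE would resolve to
    // Bind only what the shader actually touches; the caller must still verify
    // that the result does not exceed the device's maxStorageBufferRange.
    return needed < whole ? needed : whole;
}
```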
Jeff Bolz 90bdcf2ef6 vulkan: Fix FA coopmat1 invalid array indexing (llama/16365)
When computing sinks, the cm1 shader was looping r from 0 to Br rather than
to rows_per_thread. I must have copied this from the scalar path (where it is
correct), and somehow it wasn't causing failures on current drivers.
2025-10-12 11:16:23 +03:00
Jeff Bolz fd11cd97ab vulkan: in flash attention, bounds check against nem1 (don't rely on GGML_KQ_MASK_PAD) (llama/16316) 2025-10-12 11:16:23 +03:00
Reese Levine 27ebde6afd ggml webgpu: add support for soft_max, optimize rms_norm (llama/16357)
* Add inplace softmax

* Move rms_norm to split row approach

* Update debug for supports_op

* clean up debug statements

* Update tests/test-backend-ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-12 11:16:23 +03:00
Piotr Wilkin (ilintar) 33ca8355c4 model : Apertus model implementation (llama/15852)
* First attempt

* No permute during convert (fixes qk tensors), proper norm application.

* RoPE = NeoX

* Coherence!

* Migrate xielu params from tensors to hyperparameters

* Simple CUDA kernel

* Revert stupid LLM refactorings

* Chat template support

* configchecker / flake8 errors

* Reorder unary.cu

* I do conclude that LLMs are, in fact, stupid.

* Fix after merge

* Final newline

* Make xIELU an UNARY_OP

* Final newline

* Correctly account for parameter shift

* Argh.

* Update ggml/src/ggml-cpu/unary-ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Refactor: remove unused methods, inline and factorize softplus, add const modifiers

* Revert CUDA changes, implement xIELU as a separate OP

* Pesky newline

* Add float2half / half2float for F16 inputs/outputs

* CUDA variants, attempt 2

* Actually, attempt 3

* Update ggml/src/ggml-cuda/unary.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Missing convert header

* Proper formula and reference for xIELU in the comments.

* Modify unary-ops.cpp to add the functor-based logic besides the template system to retain optimizations

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Add tensor mappings for Apertus to global list instead

* Fix lazy on scalars

* Update ggml/src/ggml-cuda/unary.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Add comment about the constraints on positive/negative alpha

* Change `softplus` to `ggml_softplus`

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-10-12 11:16:23 +03:00
R0CKSTAR e29508be8b musa: update compile flags (llama/16265)
Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>
2025-10-12 11:16:23 +03:00
uvos b73f67d3f6 HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0 (llama/16221)
* HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0

rocwmma 2.0.0 includes a bug in the code faking fp16 accumulation on CDNA

* CUDA: Fix volta condition in ggml_cuda_should_use_wmma_fattn
2025-10-12 11:16:23 +03:00
Eve b0560310aa vulkan: make ggml_vk_default_dispatcher support older vulkan headers (llama/16345)
* make ggml_vk_default_dispatcher support older vulkan headers

* simplify with using
2025-10-12 11:16:23 +03:00
lhez 31bb869929 opencl: support pad_ext (llama/15888) 2025-10-12 11:16:23 +03:00
Reese Levine 8208cea829 ggml webgpu: support for rope,div,sub,glu,scale,cont operators (llama/16187)
* Work on rope

* Simplify inplace operation generation and combine mul/add generation

* Work on rope variants

* implement neox rope

* rope complete

* Add sub,div,glu operators

* implement scale op

* Update cpy shader to handle cont/more types

* formatting

* Update test vars printing for rope,rms_norm

* Avoid ROPE hardcoded constants

* Add TODO to change ROPE constants to enum

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* fix TODO comment

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-12 11:16:23 +03:00
lhez 199626d79e opencl: support ne3 in get_rows (llama/15866) 2025-10-12 11:16:23 +03:00
Ruben Ortlam c3b5c4d934
whisper : Support using devices of type iGPU (#3469) 2025-10-11 17:55:16 +03:00
Andreas Lubbe 85871a9469
whisper : add support for --carry-initial-prompt (#3395)
* Add support for --carry-initial-prompt

* PR fixes for ruby and go

* Refactoring for readability

* WIP 1

* WIP 2

* PR fixes

* More PR fixes

* PR fix

* Further simplification

* d'oh

* One more logic fix

* Update src/whisper.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Truncate prompt_past0 upon initialization

* Slight simplification

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-10 19:51:15 +03:00
Andreas Lubbe a0ca50f3b9
cli: Fix assignment for vad_min_silence_duration_ms (#3467)
* cli: Fix assignment for vad_min_silence_duration_ms

Found and fixed this simple copy/paste error

* server : fix vad_min_silence_duration_ms assignment

---------

Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2025-10-10 15:21:03 +02:00
Georgi Gerganov d3a29d7b88
minor : fix code style (#3463) 2025-10-10 11:33:01 +03:00
Silviu Caragea 85d1d3d3dc
vad : free vad_segments in whisper_vad (#3463)
This commit fixes multiple issues:

* memory leak because vad_segments is never released
* avoid segmentation fault when whisper_vad_segments_from_samples returns nullptr.
* avoid a potential segmentation fault when the app fails to allocate memory for filtered samples: the vad context is released on that path, but it also gets released within the state itself when whisper_free_state is called
2025-10-10 06:20:21 +02:00
Georgi Gerganov 98930fded1
whisper : clean-up headers 2025-10-09 10:48:52 +03:00
KITAITI Makoto 8877dfc11a
[skip ci]Bump Ruby bindings' version to 1.3.4 (#3461) 2025-10-08 20:45:20 +09:00
Daniel Bevenius c8223a8548
vad : fix memory leaks in VAD implementation (#3453)
* vad : fix memory leak by storing ggml_context in vad context struct

This commit addresses a memory leak issue in the voice activity
detection (VAD) where the ggml_context is not stored within the vad
context structure.

The motivation for this change is that the context memory otherwise stays
allocated and the tensors still point to that memory, but it is never freed.

* vad : free memory allocated for VAD hparams

This commit frees the model hyperparameters allocated for the VAD
context in the `whisper_vad_free` function. Specifically, it deletes the
`encoder_in_channels`, `encoder_out_channels`, and `kernel_sizes` arrays
allocated with `new[]` in the `whisper_vad_init` function.

The motivation for this is to prevent memory leaks when the VAD context is freed.

* vad: free ggml buffer in whisper_vad_free

This commit frees the ggml buffer in the whisper_vad_free function to
prevent memory leaks.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3452

* Revert "vad : fix memory leak by storing ggml_context in vad context struct"

This reverts commit aeafca437e.

* whisper : free ggml context in whisper_vad_init_context

This commit frees the ggml_context after initializing the VAD context in
the whisper_vad_init_context function.

The motivation for this is to prevent memory leaks.
2025-10-06 14:57:44 +02:00
KITAITI Makoto 7849aff7a2
ruby : Loose RegExp for test (#3448) 2025-10-01 15:33:11 +03:00
Daniel Bevenius 2a56869669
bindings-java : disable flash attention by default (#3445)
This commit disables flash-attention for the Java binding test so that
the testFullTranscribe test passes.

Without this change the test was failing because the expected output
mismatches after the flash-attention change:
```console
<And so my fellow Americans ask not what your country can do for you ask what you can do for your country.>
but was:
<and so my fellow Americans ask not what your country can do for you ask what you can do for your country>
```

An alternative would be to update the expected output, but it felt better
to keep the expected output the same and disable flash-attention rather
than change it to match the new behavior.
2025-10-01 09:13:34 +02:00
Georgi Gerganov 8c0855fd6b
bench : update [no ci] 2025-09-30 21:40:32 +03:00
Georgi Gerganov 47fcd7da8b
scripts : add -nfa option [no ci] 2025-09-30 21:37:00 +03:00
Georgi Gerganov 8a67c55c8a
wchess : fix link [no ci] 2025-09-30 21:28:03 +03:00
Georgi Gerganov 41fc9dea6a
release : v1.8.0 2025-09-30 21:25:36 +03:00
Daniel Bevenius 5904d00dbb
examples : add wchess.wasm to wasm examples build (#3443)
* examples : add wchess.wasm to wasm examples build

This commit adds the wchess.wasm example to the wasm examples that are
deployed to https://ggml.ai/whisper.cpp.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3434#issuecomment-3346980420
2025-09-30 16:23:01 +02:00
Georgi Gerganov 0b3587acdd
whisper : enable flash attention by default (#3441) 2025-09-30 15:47:20 +03:00
Georgi Gerganov 1e5ad50f8f
bench : add rtx 5090 [no ci] 2025-09-30 13:58:15 +03:00
Georgi Gerganov 527ff158d0 ggml : bump version to 0.9.4 (ggml/1363) 2025-09-30 13:54:08 +03:00
Georgi Gerganov e4bf87b0e9
bench : update [no ci] 2025-09-30 12:51:25 +03:00
Georgi Gerganov b57b9d3a27
sync : ggml 2025-09-30 12:31:08 +03:00
anavp-nvidia 62b3b86e3f
cuda : Enable CUDA Graph usage for Nemotron Nano v2 (NemotronH) (llama/16328)
* Fix Nemotron Nano v2 9B not executing as CUDA Graph on NVIDIA GPUs

* fix to ensure test-backend-ops check passes
2025-09-30 12:31:04 +03:00
Georgi Gerganov 78f85f2b92
metal : dynamic simdgroups for MV kernels (llama/16340)
* metal : dynamic simdgroups for MV kernels

* cont : minor
2025-09-30 12:31:04 +03:00
Charles Xu 01e86b69ab
kleidiai : fix work size and threads sync for fp16 (llama/16246) 2025-09-30 12:31:04 +03:00
alex-spacemit 35ebdf7304
ggml: riscv: add riscv spacemit backend (llama/15288)
* ggml: add spacemit backend

Change-Id: I249bdc043485d815a9c351867137bc1e27cc2e23

* add new line at end of file

Change-Id: I889ed1c85fb45e62350ecde0c06f70450cadfbe2

* add riscv zba extension limit

Change-Id: I321eb200f859751727afe5cae13074dfce2bb0ce

* fixed for review comments, file renamed and format

Change-Id: Ia20b6ec24a36638e62e0fe07cf100916a7cce3ce

* fixed for code format, after clang-format

Change-Id: I5dc33a0412da3d3f2d77075d8939185d3009eca2

* use _Float16 instead of __fp16

Change-Id: I039fb02bb95270e641bc4442204e658735859d43

* add ci for riscv64-spacemit-ime-native

Change-Id: I711c1033061df1a289ea77891b2997599dfe8279

* update debian-13-riscv64-spacemit-ime-native ci label

Change-Id: Ifb2b891e2fca57b5da604fce2ac255f27731179a

* remove license comment for spacemit ime

Change-Id: If0dc3ca30a958631ccca0a28b62e0b825f9fb0c3

* upgrade binutils for gcc ime

Change-Id: Ibf2fa74c1064408974cb5b45f044d40987e5fb45

* add spacemit ime cross jobs

Change-Id: I80d74909941d41cb9cd09e51d8baf01c985cbfc6

* remove native compile for riscv64-spacemit-ime

Change-Id: I01920afafdc73fa7424014fd648d243f8ec9e25e

* ci : add caching for spacemit ime cross toolchain

Change-Id: Ic54a192019a2fd982bbd58225ce3bbc38f4053de

* ci: bug fixed for cache path and env

Change-Id: I28c42e10b6fff053bb6580926ca2353448cb042a

* Update .github/workflows/build-linux-cross.yml for cache path

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* bugfix for build-linux-cross.yml syntax error

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: cailinxi <linxi.cai@spacemit.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-09-30 12:31:03 +03:00
Rafal Lewczuk 94fe9bbe2b
ggml-backend : add root cause in error message if loading backend library fails (llama/16172)
This PR adds additional information to the error message when loading a backend library via ld_load_library() fails. This helps spot why the backend library did not load (missing library, missing dependency, unresolved symbol, etc.).
2025-09-30 12:31:00 +03:00
Georgi Gerganov 32be14f8eb
bench : update [no ci] (#3439) 2025-09-29 17:42:38 +03:00
Georgi Gerganov a77d11d91e
bench : warm-up all kernels (#3438) 2025-09-29 17:27:53 +03:00
Georgi Gerganov 22c12ee86d
ggml : remove obsolete files (#0) 2025-09-29 16:47:30 +03:00
Georgi Gerganov d8cdcce884
ci : add self-hosted workflows (#3437)
* ci : add self-hosted workflows

* cont : fail workflow if there is an error
2025-09-29 16:42:39 +03:00
Georgi Gerganov b4909a6c78
whisper : remove ggml_mul_mat padding (#3436) 2025-09-29 16:42:08 +03:00
Georgi Gerganov fcf0181ee2
talk-llama : sync llama.cpp 2025-09-29 15:18:41 +03:00
Georgi Gerganov 404a93114c
sync : ggml 2025-09-29 15:18:18 +03:00
Georgi Gerganov 3201382792
cmake : remove metal flag (llama/0) 2025-09-29 15:18:13 +03:00
Sigbjørn Skjæret 112e10f2e4
ggml : check cuda and metal argsort limits and add test (llama/16323)
* check cuda argsort limits and add test

* add metal check
2025-09-29 15:18:12 +03:00
Georgi Gerganov 7ce0a7bcd0
ggml : fix dependencies for ggml_set_rows (llama/16318) 2025-09-29 15:18:12 +03:00
Jeff Bolz a375e4c4d2
vulkan: Fix validation failure in quantized flash attention (llama/16292) 2025-09-29 15:18:12 +03:00
Sigbjørn Skjæret 5c6e795607
ggml : fix GGML_F32_VEC_FMA argument order in ggml_vec_mad1_f32 (llama/16307)
* fix GGML_F32_VEC_FMA argument order in ggml_vec_mad1_f32

* add test that fails on simd
2025-09-29 15:18:12 +03:00
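For context, fused multiply-add helpers are order-sensitive: fma(a, b, c) computes a*b + c, so swapping the addend with a factor silently changes the result. A scalar sketch of a mad1-style loop, assuming it computes y[i] = x[i]*s + b as the name suggests (not the actual ggml code):

```c
#include <math.h>
#include <stddef.h>

// Assumed semantics: y[i] = x[i]*s + b.
static void vec_mad1_f32_sketch(size_t n, float * y, const float * x, float s, float b) {
    for (size_t i = 0; i < n; ++i) {
        y[i] = fmaf(x[i], s, b);  // correct: multiply x[i] by s, then add b
        // A swapped call such as fmaf(x[i], b, s) would compute x[i]*b + s,
        // which is the kind of argument-order bug the commit above fixes.
    }
}
```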
Jeff Bolz 55d45edf6d
vulkan: 64-bit im2col (llama/16135)
* vulkan: 64-bit im2col

Add variants of the im2col shaders that use buffer_device_address/buffer_reference,
and use 64-bit address calculations. This is needed for large convolutions used in
stable-diffusion.cpp.

* fix validation error for large im2col
2025-09-29 15:18:12 +03:00
Georgi Gerganov 0102733cca
metal : extend mat-mat multiplication support (llama/16225)
* metal : support mul_mm with src1->type == GGML_TYPE_F16

* metal : support mul_mm_id with src1->type == GGML_TYPE_F16

[no ci]

* metal : mul_mm support ne00 % 32 != 0

* metal : support mul_mm_id with ne00 % 32 != 0

* cont : remove unnecessary unrolls

* cont : simplify data loading

* metal : optimize mul_mm when output bounds checks are not needed
2025-09-29 15:18:12 +03:00
Georgi Gerganov 45976f2857
metal : fuse non-sequential nodes (llama/16102)
* metal : fuse non-sequential nodes

* cont : add comment

* cont : simplify bounds checks
2025-09-29 15:18:12 +03:00
Jeff Bolz 91ab93b756
vulkan: handle mat_mul with A matrix > 4GB (llama/16176)
* vulkan: handle mat_mul with A matrix > 4GB

This change splits mat_mul operations with huge A matrix into chunks in the M
dimension. This works well for stable-diffusion use cases where the im2col
matrix has very large M.

Fix the order of setting the stride in mul_mm_cm2 - setting the dimension
clobbers the stride, so stride should be set after.

* build fixes
2025-09-29 15:18:12 +03:00
Jeff Bolz eb982dd786
vulkan: support arbitrary KV dimension in flash attention (llama/16160)
The "Clamp" spec constant is already based on whether KV is a multiple of Bc,
so use that to control whether bounds checking is performed. Add bounds checking
to the scalar and coopmat1 paths. Coopmat2 didn't need any changes (the K/V
tensors are already optionally clamped, nothing else needed to be changed).
2025-09-29 15:18:12 +03:00
Acly bc1ac13c2f
vulkan : make the vulkan.hpp dynamic dispatcher instance private (llama/16224)
* don't use VULKAN_HPP_DEFAULT_DISPATCH_LOADER_DYNAMIC_STORAGE which can cause conflicts if the application or other libraries do the same
2025-09-29 15:18:12 +03:00
Aman Gupta 85e4455cd3
CUDA: mul_mat_id for mmf for bs <= 64 for f16 and bs <= 32 for f32 (llama/16277)
* CUDA: mul_mat_id for mmf for bs <= 64 for f16 and bs <= 32 for f32

This commit adds mul_mat_id support for ncols_dst >= 16. It does this by
packing ncols_dst tiles into the blockDim.y.

My tests on a RTX 3090 show that this is faster than the cuBLAS fallback
for f16 till bs=64, and for f32 till bs=32

* Review: refactor if statement
2025-09-29 15:18:11 +03:00
Johannes Gäßler e856483cd6
CUDA: refactor and deduplicate vector FA kernels (llama/16208)
* CUDA: refactor and deduplicate vector FA kernels
2025-09-29 15:18:11 +03:00
Dmytro Minochkin 88dd9e0d45
vulkan: throw system error instead of SIGABRT during init on older devices (llama/16156)
* Throw system error on old Vulkan driver rather than SIGABRT

* Optionally handle any potential error in vulkan init
2025-09-29 15:18:11 +03:00
Jeff Bolz 97bd65f90f
vulkan: support GET_ROWS for k-quants (llama/16235)
The dequantize functions are copy/pasted from mul_mm_funcs.comp with very few
changes - add a_offset and divide iqs by 2. It's probably possible to call
these functions from mul_mm_funcs and avoid the duplication, but I didn't go
that far in this change.
2025-09-29 15:18:11 +03:00
Aaron Teo 23b3598952
devops: add s390x & ppc64le CI (llama/15925)
* devops: move s390x and ppc64le ci build

we have access to ubuntu-24.04-s390x and ppc64le images now

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: disable ppc64le for now since they have compiler errors

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: stop warnings as errors

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: switch to non-macro flag

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: going the llama macro route

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: add big-endian gguf test models

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: disable ppc64le to test s390x, check test build

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: dup .gguf.inp files for big-endian tests

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: dup .gguf.out files for big-endian too

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: add python setup and endian byteswap

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: poor thing does not have s390x python3

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: add missing rust compiler for s390x

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: try rust actions runner

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "devops: try rust actions runner"

This reverts commit 3f8db04356033d6c1d7eccc75ca396bc5298250c.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: try a different path for rust

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: dump home directory and user info

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: install gguf-py only

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: missed relative path

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: remove big-endian files since local swapping is working

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: revert test-tokenizer-0 cmakelists

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Fix unicode flags conversion from and to uint16_t

Bitfields are allocated in different order on s390x

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Simplify byteswap command

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Add byteswapping and git-lfs for test-tokenizers-ggml-vocabs

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Fix endianness detection in vocab loader

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Disable test-thread-safety on s390x

In this test a model is downloaded,
then immediately loaded to check if more downloads are needed,
and then used for the test.

There is no clean way to separate all those steps
to add byteswapping between them, so just skip this test.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Fix q8_0 test in test-quantize-fns

vec_signed uses unexpected rounding mode.
Explicitly use different rounding function.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: add big-endian stories260K

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: add s390x test-eval-callback

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: fix test does not exist

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: fix model not found llama-eval-callback

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Fix q3_K dot product error in test-quantize-fns on s390x

Array q8bytes had only 4 elements allocated, but 8 elements were accessed.
This led to out-of-bounds writes, later out-of-bounds reads of the overwritten
values, and incorrect results.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: re-enable ppc64le for testing

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: activate test-thread-safety for s390x

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: disable ppc64le tests

for some reason it keeps failing the test-thread-safety tests and I do not
have a machine that is able to reproduce the failures.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: LLAMA_FATAL_WARNINGS=ON

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Correct repository URL for s390x for test-thread-safety model

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Fix fs_get_cache_directory

Ensure it works even if both XDG_CACHE_HOME and HOME are unset.
This might happen in containers.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Re-enable CI for ppc64le

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Fortify ggml_rope_impl

Only memcpy data from sections argument if it's non-NULL.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Add TODO in struct unicode_cpt_flags to reimplement it in endian-independent way

* Update URL for big-endian model

* Update .github/workflows/build.yml

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update remaining mentions of BE models to ggml-org/models repo

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@linux.ibm.com>
Co-authored-by: Aleksei Nikiforov <103434461+AlekseiNikiforovIBM@users.noreply.github.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-09-29 15:18:11 +03:00
Georgi Gerganov 670d54ef5d
metal : report OOM errors (llama/16274) 2025-09-29 15:18:11 +03:00
Adrien Gallouët 9823c5cc51
common : use cpp-httplib as a cURL alternative for downloads (llama/16185)
* vendor : update httplib

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

* common : use cpp-httplib as a cURL alternative for downloads

The existing cURL implementation is intentionally left untouched to
prevent any regressions and to allow for safe, side-by-side testing by
toggling the `LLAMA_CURL` CMake option.

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

* ggml : Bump to Windows 10

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

---------

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2025-09-29 15:18:11 +03:00
Aaron Teo 89a7b4d22c
ggml-cpu: implement MXFP4 SIMD for s390x (llama/16193)
* ggml-cpu: impl mxfp4 s390x

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: missing s = sumf

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix incorrect kval_mxfp4 type

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: rework mxfp4

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: missing delta calc

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix typo

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix typo for vec_splats

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: expand to 2 blocks per loop

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add unroll to boost perf

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: back to 1 block per loop to test perf

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml-cpu: back to 1 block per loop to test perf"

This reverts commit 1fe55724e2dc295701101bf838bdd4a512237492.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: rm unroll from single block

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-09-29 15:18:11 +03:00
R0CKSTAR 98ac209ae1
musa: fix build warnings (llama/15611)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-09-29 15:18:10 +03:00
Aman Gupta d9bf63cfb8
CUDA: add a fused top-K MoE kernel (llama/16130)
* CUDA: add a fused top-K MoE kernel

This kernel does the following:
1. softmax over the logits per token [n_experts, n_tokens]
2. argmax reduce over the top-k (n_experts_used) logits
3. write weights + ids to global memory

It is intended as fusion of softmax->top-k->get_rows pipeline for MoE models

* Refactor into ggml_cuda_should_use_topk_moe

* Review: Use better coalescing pattern, use WARP_SIZE, store logits into registers before

* Review: format + micro-optimizations

* Fix bug: fix tie breakers

* Add optional norm + clean-up code

* Use smem for final write

* Add bounds check

* Use better memory pattern for writeback
2025-09-29 15:18:10 +03:00
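A plain C reference of the three fused steps listed above (softmax over per-token expert logits, top-k selection, writing weights and ids). This is the unfused semantics that the CUDA kernel reproduces, not the kernel itself; the fixed-size probability buffer is an assumption for the sketch:

```c
#include <math.h>

// For one token: softmax over n_experts logits, then emit the n_used largest
// probabilities together with their expert ids.
static void topk_moe_ref(const float * logits, int n_experts, int n_used,
                         float * weights, int * ids) {
    float maxl = logits[0];
    for (int e = 1; e < n_experts; ++e) {
        if (logits[e] > maxl) maxl = logits[e];
    }

    float prob[256];  // assumes n_experts <= 256 for this sketch
    float sum = 0.0f;
    for (int e = 0; e < n_experts; ++e) {
        prob[e] = expf(logits[e] - maxl);
        sum += prob[e];
    }
    for (int e = 0; e < n_experts; ++e) {
        prob[e] /= sum;
    }

    // Repeated argmax (fine for small n_experts); ties go to the lowest id.
    for (int k = 0; k < n_used; ++k) {
        int best = 0;
        for (int e = 1; e < n_experts; ++e) {
            if (prob[e] > prob[best]) best = e;
        }
        weights[k] = prob[best];
        ids[k]     = best;
        prob[best] = -1.0f;  // exclude from the next round
    }
}
```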
junchao-zhao 24ea5476de
ggml : fix loongarch lsx compilation error (llama/15864) 2025-09-29 15:18:10 +03:00
Daniel Bevenius 611ff19f20
ggml : remove -dev suffix from release version (ggml/1355)
This commit removes the `-dev` suffix from the version string in
CMakeLists.txt and the release script. The version will now just be
formatted as `MAJOR.MINOR.PATCH`.
2025-09-29 15:18:10 +03:00
Daniel Bevenius 06d7b3d124
ggml : bump version to 0.9.3 (ggml/1353) 2025-09-29 15:18:10 +03:00
Georgi Gerganov ac678efb35
metal : fuse NORM + MUL + ADD, support non-multiples of 4 (llama/16220)
* metal : fuse NORM + MUL + ADD

* metal : support norms of non-multiple of 4

* cont : fix comment [no ci]
2025-09-29 15:18:10 +03:00
Georgi Gerganov 268f1c961b
metal : relax reorder conditions (llama/16216) 2025-09-29 15:18:10 +03:00
Georgi Gerganov 0a5b811f2e
metal : restore im2col perf (llama/16219) 2025-09-29 15:18:10 +03:00
Radoslav Gerganov 0946619662
rpc : use ggml logging facilities
Use RPC_DEBUG environment variable to enable debug messages.
Add helper macro LOG_DBG() which does an early
check of the env var before calling GGML_LOG_DEBUG().
Make sure we log a debug message for every server function.
2025-09-29 15:18:10 +03:00
Johannes Gäßler cd431223e0
llama: print memory breakdown on exit (llama/15860)
* llama: print memory breakdown on exit
2025-09-29 15:18:10 +03:00
Acly 5069c08034
ggml : split graph allocations according to backend max buffer size (llama/15815)
* ggml : make gallocr respect the backend's max buffer size

* if the graph requires more memory than can fit into a single allocation, split it into multiple backend buffers
* vulkan: report the actual max allocation size in buffer type interface

* fix missing newline, apple-clang warning

* track size of individual chunks in ggml_dyn_tallocr and raise max chunks.
revert to use suballocation_block_size as max chunk size for vulkan.

* track (chunk, offset) pairs instead of "global" offsets through gallocr.

* simpler, don't need loops to map between local/global offsets
* touches more code

* fix dyn_tallocr_max_size and initialization

* fix memory leak when buffers are reused due to same buffer type appearing multiple times

* make vbuffer allocation follow the same logic as backend_buffer did before

* continue to use leftover unallocated space of previous chunks after a new one has been created

* treat free blocks of each chunk as separate list
* they're still allocated together, but start/end of each chunk is tracked, and allocate/free iterate over sub-ranges
* exhaust freed blocks of all chunks before considering their last blocks with unallocated space
* start with 0 chunks/blocks and create chunks as needed
* allow the last chunk to grow beyond max size

* refactor: move adding new free block and new chunk into separate functions

* allocate chunks individually with a separate free-blocks list for each one

* needs a bit more memory/allocations/indirections, but code is simpler

* fix warnings (missing static) & debug checks
2025-09-29 15:18:09 +03:00
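A compact sketch of the (chunk, offset) bookkeeping described above, with simplified bump allocation instead of per-chunk free-block lists; the names are illustrative, not the real ggml-alloc structures:

```c
#include <stddef.h>

#define MAX_CHUNKS 16

// Each chunk maps to one backend buffer; an address is a (chunk, offset) pair.
struct tallocr_chunk { size_t used; size_t cap; };

struct tallocr {
    struct tallocr_chunk chunks[MAX_CHUNKS];
    int    n_chunks;
    size_t max_chunk_size;  // backend's max buffer size
};

struct talloc_addr { int chunk; size_t offset; };

// Simplified: bump-allocate from the last chunk and open a new one when the
// request does not fit (the real allocator also reuses freed blocks and
// leftover space in earlier chunks).
static int tallocr_alloc(struct tallocr * a, size_t size, struct talloc_addr * out) {
    struct tallocr_chunk * last = a->n_chunks ? &a->chunks[a->n_chunks - 1] : NULL;
    if (last == NULL || last->used + size > last->cap) {
        if (a->n_chunks == MAX_CHUNKS) return -1;
        last = &a->chunks[a->n_chunks++];
        last->used = 0;
        last->cap  = size > a->max_chunk_size ? size : a->max_chunk_size;  // oversized requests get their own chunk
    }
    out->chunk  = a->n_chunks - 1;
    out->offset = last->used;
    last->used += size;
    return 0;
}
```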
Xiangyan Sun 41245891c1
ggml-cpu: Respect cpumask settings (llama/16164) 2025-09-29 15:18:09 +03:00
Sigbjørn Skjæret 73e8f3acb8
ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (llama/15928)
* fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl

* change initialization to true
2025-09-29 15:18:09 +03:00
Aaron Teo c706a50746
zdnn: refactor codebase + add docs (llama/16178)
* zdnn: initial matmul refactor

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: rm static from funcs

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: update ggml-zdnn.h

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: change header files to hpp

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: switch to common.hpp

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: move mulmat forward around

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: rm inline from utils

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: code cleanup

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* docs: add zDNN docs

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-09-29 15:18:09 +03:00
Daniel Bevenius d8d31e3638
ggml-cpu : fix typo in gemm comments [no ci] (llama/16189) 2025-09-29 15:18:09 +03:00
Sigbjørn Skjæret 4e32ee733b
ggml : implement set_rows with i32 index (llama/16159)
* implement set_rows with i32 index

* template fix

* test quantized path

warnings--

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* forgotten name change

* deduplicate cuda/sycl and test-fix

* indent++

* vulkan: support set_rows with i32 index type (llama/16162)

* disable i32 index for webgpu for now

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-09-29 15:18:09 +03:00
Georgi Gerganov df672c6372
ggml : extend ggml_can_fuse to work with non-sequential nodes (llama/16123)
* ggml : extend ggml_can_fuse to work with non-sequential nodes in the graph

* cont : fix wrong bounds check condition

* cont : remove unnecessary overload
2025-09-29 15:18:09 +03:00
Georgi Gerganov 973054a8cd
ggml : add ggml_op_is_empty (llama/16122)
* ggml : add ggml_op_is_empty

* ggml : move to ggml-impl.h
2025-09-29 15:18:09 +03:00
Shin-myoung-serp 9f673df08d
Vulkan: add conv_transpose_2d operation (llama/16022)
* Vulkan: add conv_transpose_2d operation

* Vulkan: fix typo in conv_transpose_2d shader(s0mp, s0L, s1mp, s1L)

* Vulkan: fix incorrect indentation in conv_transpose_2d shader

* Vulkan: add checking the push constants size limit and reuse conv2d_mm.comp for conv_transpose_2d operation

* Vulkan: revert the order of the index calculation and bound check in conv_2d shader

* Vulkan: explicitly check push constants limit in supports_op() for conv_transpose_2d operation.

* Vulkan: remove unnecessary lower bound checks for H/W_idx in the conv_2d shader.
2025-09-29 15:18:09 +03:00
Jeff Bolz 14723f25a1
vulkan: add RTE variants of exp shader (llama/16165)
This fixes some failures on Turing where "round to zero" rounds to the max f16
value but the CPU reference value is infinite.
2025-09-29 15:18:08 +03:00
Ruben Ortlam 95b29fab78
vulkan: vec dot matrix multiplication fix (llama/16151)
* vulkan: fix matrix multiplication index calculation for odd m/n and odd k in combination with batching

* add odd m/n + odd k test with batching
2025-09-29 15:18:08 +03:00
lhez 4b7f09ac0b
opencl: fix concat crash on win arm64 with Adreno (llama/15944) 2025-09-29 15:18:08 +03:00
lhez 0a7096f4f3
opencl: initial `q8_0` mv support (llama/15732) 2025-09-29 15:18:08 +03:00
Giuseppe Scrivano eae2be0ca2
vulkan: optimize UMA buffer operations and fix driver hangs (llama/16059)
* vulkan: optimize UMA buffer operations and fix driver hangs

The previous implementation was blocking the GPU for extended periods,
causing the i915 driver to reset the context due to the hangcheck
protection.

[32628.443070] i915 0000:00:02.0: [drm] GPU HANG: ecode 12:1:85dffffb, in llama-server [194114]
[32628.443091] i915 0000:00:02.0: [drm] llama-server[194114] context reset due to GPU hang

* vulkan: implement deferred_memset on UMA

---------

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2025-09-29 15:18:08 +03:00
Jeff Bolz 9a6c2036a9
vulkan: fix validation error about VK_PIPELINE_CREATE_CAPTURE_STATISTICS_BIT_KHR (llama/16086) 2025-09-29 15:18:08 +03:00
Georgi Gerganov 8d10ded025
ggml : prepare for development of 0.9.2-dev 2025-09-29 15:18:08 +03:00
Georgi Gerganov d89164a08d
ggml : bump version to 0.9.1 2025-09-29 15:18:05 +03:00
Georgi Gerganov 36778bd8b8
talk-llama : sync llama.cpp 2025-09-20 13:58:28 +03:00
Georgi Gerganov 66ad624d5b
sync : ggml 2025-09-20 13:46:41 +03:00
Ruben Ortlam 76d0934287
vulkan: use vec dot for matrix matrix multiplications (llama/16056)
* vulkan: Change the mul_mm shared memory and register caching system to use vec2 instead of scalars, to enable using dot2 instructions

* use fma instead of dot to fix Nvidia and Apple performance issues
2025-09-20 13:46:39 +03:00
Xuan-Son Nguyen 2ad00d5586
ggml : refactor forward_dup for cpu backend (llama/16062)
* ggml : refactor forward_dup for cpu backend

* clean up a bit

* add quant/dequant perf test
2025-09-20 13:46:39 +03:00
Adrien Gallouët 4d8cd07825
ggml-amx : fix ggml_amx_init() on generic Linux (llama/16049)
Generalize Linux check to `__linux__` to support non-glibc systems (like musl).
Also, return `false` on unknown/untested OS.

Without this commit, the code compiles (with warnings) but fails:

    register_backend: registered backend CPU (1 devices)
    register_device: registered device CPU (Intel(R) Xeon(R) Platinum 8488C)
    build: 6487 (51c4cac6) with x86_64-linux-musl-gcc (GCC) 15.1.0 for x86_64-linux-musl (debug)
    system info: n_threads = 8, n_threads_batch = 8, total_threads = 16
    ....
    print_info: n_ctx_orig_yarn  = 262144
    print_info: rope_finetuned   = unknown
    print_info: model type       = 4B
    Illegal instruction (core dumped)

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2025-09-20 13:46:39 +03:00
Adrien Gallouët 4575f96873
cmake : fix static linking for OpenMP on Unix-like systems (llama/16031)
When compiling with GGML_STATIC=ON, the build process would produce a
binary that was still dynamically linked to OpenMP. This defeats the
purpose of a static build:

    $ cmake -B build \
            -DBUILD_SHARED_LIBS=OFF \
            -DLLAMA_CURL=OFF \
            -DGGML_CCACHE=OFF \
            -DGGML_NATIVE=OFF \
            -DGGML_STATIC=ON

    $ ldd llama-server
            linux-vdso.so.1 (0x0000e1a434e3b000)
            libgomp.so.1 => /lib/aarch64-linux-gnu/libgomp.so.1 (0x0000e1a4345a0000)
            libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000e1a434300000)
            libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000e1a434240000)
            libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000e1a434200000)
            libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000e1a434030000)
            /lib/ld-linux-aarch64.so.1 (0x0000e1a434df0000)

This commit resolves the issue by modifying `CMAKE_FIND_LIBRARY_SUFFIXES`
to prioritize `.a` files, forcing CMake to link the static version of
the library.

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2025-09-20 13:46:39 +03:00
Shawn Gu f4a225cea6
opencl: optimize mxfp4 kernels (llama/16037)
- flatten mxfp4 and packed fp4->fp16 bit-wise convert function (replace lut)
- MoE kernel optimizations

---------

Co-authored-by: Li He <lih@qti.qualcomm.com>
2025-09-20 13:46:39 +03:00
Jeff Bolz 7fcb7e83ec
rename optimize_graph to graph_optimize (llama/16082) 2025-09-20 13:46:39 +03:00
Bowen Han fce6354e0f
CUDA: Optimize PAD_REFLECT_1D (llama/15957)
* CUDA: Optimize PAD_REFLECT_1D
feat: add more test cases for PAD_REFLECT_1D

* use fast_div to improve performance

* Apply suggestion from JohannesGaessler

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Apply suggestion from JohannesGaessler

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* optimize

* use a concise expression to further speedup the cuda kernel

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-09-20 13:46:38 +03:00
Johannes Gäßler 05bdfd4380
CUDA: fix compilation on CC 6.0 (llama/16091) 2025-09-20 13:46:38 +03:00
Georgi Gerganov 960aaa9904
metal : use function constants for mul_mv_ext kernels (llama/16074)
* metal : use function constants for mul_mv_ext kernels

ggml-ci

* metal : remove NW template argument

ggml-ci

* metal : adjust constants

ggml-ci
2025-09-20 13:46:38 +03:00
Sigbjørn Skjæret 225d7c1d5a
cuda : add missing F32<->I32 entries in ggml_cuda_cpy_fn (llama/16060) 2025-09-20 13:46:38 +03:00
Georgi Gerganov d37f590a77
metal : improve F32, F16 and BF16 mat-vec multiplication (llama/16057)
* metal : improve F32, F16 and BF16 mat-vec multiplication

ggml-ci

* metal : make the NSG a function constant in mul_mv kernels

ggml-ci
2025-09-20 13:46:38 +03:00
Jhen-Jie Hong 32b6d9c134
metal : avoid call free for non-owned buffer (llama/16067) 2025-09-20 13:46:38 +03:00
Georgi Gerganov 1f24b1df4d
metal : handle nil cv during pipeline creation (llama/16065)
ggml-ci
2025-09-20 13:46:38 +03:00
Chenguang Li c46adc0817
CANN: Remove print (llama/16044)
Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:46:38 +03:00
Reese Levine 1361f679cc
GGML WebGPU: Support for ADD, MUL, RMS_NORM, GET_ROWS operators (llama/16018)
* Add parameter buffer pool, batching of submissions, refactor command building/submission

* Add header for linux builds

* Free staged parameter buffers at once

* Format with clang-format

* Fix thread-safe implementation

* Use device implicit synchronization

* Update workflow to use custom release

* Remove testing branch workflow

* some f32 tests passing

* Disable set_rows until it's implemented

* f32 add all tests passing

* Begin work on set_rows

* Work on set rows

* Add error buffers for reporting unsupported SET_ROWS indices

* Remove extra comments

* Add templated addition, clean up code

* Get addition and multiplication working

* Implement rms_norm

* Add get_rows implementation

* Add new get_rows files

* Refactor use of wg size entry

* Fix compilation

* Try manually unrolled q4_0 quant

* Revert "Try manually unrolled q4_0 quant"

This reverts commit 77f8b96515f7e640ae4b0e44f066321fbc4a6166.

* Move to constant max wg size

* Check for tensor size in supports_op

* Vectorize f32 and change default workgroup size

* Move f32 get_rows from < 4 to % 4 != 0

* fix linter errors

* Add in-place tests

---------

Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>
2025-09-20 13:46:37 +03:00
Georgi Gerganov eb2c01f92e
metal : refactor + optimize v2 (llama/15995) 2025-09-20 13:46:10 +03:00
Georgi Gerganov 6458bac4c1
sync : ggml 2025-09-20 13:45:32 +03:00
Johannes Gäßler d452f0cf8c
CUDA: fix FA occupancy, optimize tile kernel (llama/15982) 2025-09-20 13:45:30 +03:00
Eve e96b285011
vulkan: automatically remove unsupported devices (llama/15976)
* remove unsupported vulkan devices

* make this happen during selection instead

* pass by reference
2025-09-20 13:45:30 +03:00
Chenguang Li e32c3b0fd3
CANN: Optimize ggml_cann_set_device (llama/15935)
* CANN: Fix ggml_cann_set_device to avoid redundant device switches

- Added a check to skip aclrtSetDevice if the current device is already set.
- Prevents unnecessary context switches while keeping thread/device consistency.

* CANN: add device default id
2025-09-20 13:45:30 +03:00
Daniel Bevenius 5c524bb879
ggml : fix padding in timestep embedding kernels (llama/15932)
* ggml : remove adding extra dim timestep embedding

This commit updates the ggml_timestep_embedding function to no longer
add an extra dimension when the specified dimension is odd.

The motivation for this change is that the extra dimension added for odd
sizes is unnecessary and caused an issue in the kernels, which were not
expecting it; this resulted in uninitialized memory for the
second-to-last dimension.

* ggml-cuda : fix padding in timestep embedding kernel

This commit removes the zeroing out of the last dimension now that we
are not adding the extra padding dimension.

* ggml-metal : fix padding in timestep embedding kernel

This commit fixes the zero padding for odd dimensions in
the timestep embedding kernel

* ggml-opencl : fix padding in timestep embedding kernel

This commit fixes the zero padding for odd dimensions in
the timestep embedding kernel.

* ggml-sycl : fix padding in timestep embedding kernel

This commit fixes the zero padding for odd dimensions in
the timestep embedding kernel.

* ggml-vulkan : fix padding in timestep embedding kernel

This commit fixes the zero padding for odd dimensions in
the timestep embedding kernel.

* ggml-cpu : fix padding in timestep embedding function

This commit removes the zeroing out of the last dimension now that we
are not adding the extra padding dimension.
2025-09-20 13:45:30 +03:00
Jake Karnes f72ec185fb
CUDA: fix im2col_3d to respect non-contiguous inputs (views) (llama/15956)
* fix im2col_3d to respect non-contiguous inputs (views)

The CUDA 3D im2col kernel computed source addresses assuming compact layout (products of dims), ignoring nb[] strides.

This patch switches im2col_3d source indexing to use true strides derived from src1->nb[] (in elements), mirroring the approach used in the 2D CUDA im2col path. Destination indexing is unchanged.

* use ggml_element_size() for src strides

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-09-20 13:45:30 +03:00
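The indexing change described above boils down to addressing source elements through the view's byte strides instead of assuming a packed layout. A minimal sketch of the idea, loosely following ggml's ne/nb naming convention (illustrative C++, not the actual CUDA kernel):

```cpp
#include <cstddef>
#include <cstdint>

// A tensor view: ne[] holds element counts, nb[] holds byte strides per dimension.
// For a non-contiguous view, nb[] is not simply the product of the lower dimensions.
struct tensor_view {
    const char * data;
    int64_t ne[4];
    size_t  nb[4];
};

// Wrong for views: assumes a packed layout and multiplies dimension sizes.
static const float * addr_packed(const tensor_view & t, int64_t i0, int64_t i1, int64_t i2, int64_t i3) {
    const int64_t idx = ((i3 * t.ne[2] + i2) * t.ne[1] + i1) * t.ne[0] + i0;
    return reinterpret_cast<const float *>(t.data) + idx;
}

// Correct: derive the address from the actual strides of the view.
static const float * addr_strided(const tensor_view & t, int64_t i0, int64_t i1, int64_t i2, int64_t i3) {
    return reinterpret_cast<const float *>(
        t.data + i0 * t.nb[0] + i1 * t.nb[1] + i2 * t.nb[2] + i3 * t.nb[3]);
}
```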
yael-works a642b533a4
SYCL: Add COUNT_EQUAL operator support (llama/15991)
* SYCL: Add COUNT_EQUAL operator support (rebased on master)

* SYCL: remove duplicate op_count_equal definition

* tests: remove test_count_equal_typed and use test_count_equal for all cases

* tests: keep only I32 case for COUNT_EQUAL as suggested

* tests: keep only I32 case for COUNT_EQUAL as requested
2025-09-20 13:45:30 +03:00
Aman Gupta 10bd5d3626
CUDA: some micro-optimizations in mmf.cuh for mul_mat_id (llama/15926) 2025-09-20 13:45:30 +03:00
Georgi Gerganov 82a8c141ea
metal : remove memory pools (llama/15966)
* metal : remove mem pool usage

ggml-ci

* metal : remove mem pool implementation

ggml-ci

* metal : take into account the actual allocated memory of the tensor

ggml-ci

* cont : use ggml_backend_buft_get_alloc_size

ggml-ci

* cont : improve, comments

ggml-ci

* cont : add functions for the extra tensor sizes

* metal : add comments

ggml-ci

* metal : implement .get_alloc_size for the rest of the buffer types

ggml-ci

* metal : remove ggml_metal_heap

ggml-ci
2025-09-20 13:45:29 +03:00
Ruben Ortlam c36358cb3c
Vulkan: Clean up mul_mm shader (llama/15987)
* vulkan: move mul_mm dequantization steps into a separate file and functions

* improve mul_mm vector load code

* fix debug mode issues and warnings
2025-09-20 13:45:29 +03:00
Georgi Gerganov 2d3f15607f
metal : fix kernel requirements (llama/15983)
* metal : fix kernel requirements

ggml-ci

* cont : fix supports_op

* cont : fix supports_op for ARGMAX
2025-09-20 13:45:29 +03:00
Aaron Teo 7dca05ca77
ggml-zdnn: rm user mapped buffers (llama/15965)
* ggml-zdnn: rm user mapped buffers

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: rm dead code

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: attempt to fix missing extra data buffer free

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-09-20 13:45:29 +03:00
Jeff Bolz 1789ed3f2c
vulkan: fix failing dequant shaders (llama/15862)
* vulkan: fix failing dequant shaders

* add missing const
2025-09-20 13:45:29 +03:00
Jeff Bolz a3defb0a3b
vulkan: initialize vulkan-hpp to allow using extension function pointers (llama/15705)
Use this to query register count for shader compiles on NVIDIA. Currently
this is only for performance debug, but it could eventually be used in some
heuristics like split_k.
2025-09-20 13:45:29 +03:00
Georgi Gerganov 2caf15d68a
metal : refactor kernel loading (llama/15964)
* metal : refactor bin kernels loading

ggml-ci

* metal : refactor rms kernel loading

ggml-ci

* ci : try to add memory leaks check

ggml-ci

* ci : try to enable memory leak detection for Mac

* cont : seems to be working
2025-09-20 13:45:29 +03:00
Georgi Gerganov 0d36ba9e1a
metal : allow ops to run concurrently (llama/15929)
* metal : run graphs ops concurrently

ggml-ci

* cont : add flags for debugging and disabling concurrency

ggml-ci

* cont : refactor and handle fusing

ggml-ci

* cont : simplify - no need to use GPU address

ggml-ci

* cont : prepare mem ranges for reuse + add ggml-metal-common.cpp

ggml-ci

* cont : avoid redundant keywords in cpp [no ci]

* metal : reorder graph for better concurrency

ggml-ci

* metal : fix race on mem pool buffers

ggml-ci

* cont : add env GGML_METAL_GRAPH_OPTIMIZE_DISABLE

ggml-ci

* cont : refactor, optimize, add comments

ggml-ci

* cont : refactor ggml-metal.m

ggml-ci

* minor : update logs [no ci]
2025-09-20 13:45:29 +03:00
Georgi Gerganov 20a930ec94
metal : fix memory leaks (llama/15962)
ggml-ci
2025-09-20 13:45:28 +03:00
Aaron Teo e902731ccc
ggml-zdnn: fix #15414, activate FP16 and BF16 acceleration and incorrect zTensor free (llama/15839) 2025-09-20 13:45:28 +03:00
Ruben Ortlam 424c85f22a
Vulkan iGPU device selection overhaul and PCI ID API support (llama/15947)
* vulkan: implement ggml igpu device type, implement pci id support

* fix compiler warning

* prevent printf overflow warning
2025-09-20 13:45:28 +03:00
Mathieu Baudier 5a752bab84
vulkan: Make device memory check more portable (llama/15939) 2025-09-20 13:45:28 +03:00
Neo Zhang Jianyu cd764eaf2b
Revert "sycl: add usage of enqueue_functions extension (llama/14244)" (llama/15910)
* Revert "sycl: add usage of enqueue_functions extension (#14244)"

This reverts commit 8308f98c7fb778e54bf75538f5234d8bd20915e9.

* fix missed revert code, format the code
2025-09-20 13:45:28 +03:00
Diego Devesa 555dcb3e01
ggml-backend : add GGML_BACKEND_DEVICE_TYPE_IGPU device type (llama/15797)
* ggml-backend : add GGML_BACKEND_DEVICE_TYPE_IGPU device type

ggml-backend : add device id to device props

llama : only use iGPU devices if there are no GPU devices

llama : do not use multiple devices from different backends with the same device id
2025-09-20 13:45:28 +03:00
Johannes Gäßler f0768eb575
CUDA: larger SRAM reads for tile FA, AMD FP16 dot (llama/15927)
* CUDA: larger SRAM reads for tile FA, AMD FP16 dot

* fix logic for availability of v_dot2_f32_f16
2025-09-20 13:45:28 +03:00
Daniel Bevenius 020eb19eb3
ggml-cpu : add check for ARM MATMUL_INT8/i8mm support (llama/15922)
This commit adds a check for GGML_MACHINE_SUPPORTS_i8mm when enabling
MATMUL_INT8 features, ensuring that i8mm intrinsics are only used when
the target hardware actually supports them.

The motivation for this is to fix ggml CI build failures where the
feature detection correctly identifies that i8mm is not supported,
adding the +noi8mm flag, but MATMUL_INT8 preprocessor definitions are
still enabled, causing the compiler to attempt to use vmmlaq_s32
intrinsics without i8mm support.

Refs: https://github.com/ggml-org/ggml/actions/runs/17525174120/job/49909199499
2025-09-20 13:45:28 +03:00
Charles Xu b079d9c8b0
kleidiai: fix GGML_ASSERT(*cur_backend_id != -1) failed (llama/15614)
* kleidiai: fix GGML_ASSERT(*cur_backend_id != -1) failed

* removes the Whisper-specific check for GET_ROWS support
2025-09-20 13:45:27 +03:00
hipudding dadf73665a
CANN: Disable acl_graph for prefill stage (llama/15933)
Since the prefill length is not fixed, graphs constructed for the
prefill stage cannot be reused. For this reason, ACL graph
execution is disabled by default during prefill.
2025-09-20 13:45:27 +03:00
Oliver Simons f5ef0e25e2
CUDA: Add `fastdiv` to `k_bin_bcast*`, giving 1-3% E2E performance (llama/15872)
* Add fastdiv and fastmodulo to k_bin_bcast kernel

* Address review comments

* `prod_` instead of `prod` suffix

* Add test case for `k_bin_bcast_unravel` in CUDA backend
2025-09-20 13:45:27 +03:00
Daniel Bevenius 3617008c37
ggml-cpu : fix padding in ggml_timestep_embedding (llama/15917)
This commit fixes the zero padding for odd dimensions in
ggml_compute_forward_timestep_embedding_f32.
The motivation for this is that currently if an odd dimension is used,
the padding check incorrectly uses the dimension value for indexing.
For example, with dim=15:

Elements 0-6 are set to cosine values
Elements 7-13 are set to sine values
Element 14 is left uninitialized (contains garbage)
Element 15 is correctly set to zero

This fix changes embed_data[dim] to embed_data[2 * half] so that
element 14 (the first unused element) is properly set to zero as well
as the last element.

Resolves: https://github.com/ggml-org/ggml/issues/1324
2025-09-20 13:45:27 +03:00
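A small standalone sketch of the indexing issue this commit describes, assuming a simplified scalar version of the embedding with dim = 15 and half = 7; the function name and signature are illustrative, not the actual ggml code:

```cpp
#include <cmath>
#include <cstdio>

// Fills one row of a timestep embedding: first `half` cosines, then `half` sines,
// then zero-padding for an odd dim. Indexing the pad with `dim` would skip
// element 2*half (14 for dim = 15) and leave it uninitialized; `2*half` fixes it.
static void timestep_embedding_row(float * embed_data, int dim, float timestep, int max_period) {
    const int half = dim / 2; // 7 for dim = 15
    for (int j = 0; j < half; j++) {
        const float freq = std::exp(-std::log((float) max_period) * (float) j / (float) half);
        const float arg  = timestep * freq;
        embed_data[j]        = std::cos(arg); // elements 0 .. half-1
        embed_data[j + half] = std::sin(arg); // elements half .. 2*half-1
    }
    if (dim % 2 != 0) {
        embed_data[2 * half] = 0.0f; // element 14 is now deterministically zero
    }
}

int main() {
    float row[15];
    timestep_embedding_row(row, 15, 42.0f, 10000);
    std::printf("row[14] = %f\n", row[14]);
    return 0;
}
```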
Georgi Gerganov c974f63057
sync : ggml 2025-09-20 13:44:48 +03:00
Georgi Gerganov 7eae055e61
metal : make the backend async (llama/15906) 2025-09-20 13:44:27 +03:00
Georgi Gerganov e2c7f1cccd
sync : ggml 2025-09-20 13:43:01 +03:00
Chenguang Li 4d453b14a9
CANN: Add ROPE sin/cos cache for reuse (llama/15912)
* CANN: Add ROPE sin/cos cache for reuse

Introduce sin/cos caching mechanism in ROPE to avoid redundant
computation across layers. The cache is built on the first layer
per device and reused by subsequent layers if parameters match.

- Added sin_cache / cos_cache pointers and position_length tracking
- Introduced cache validity flags and properties:
  (ext_factor, theta_scale, freq_scale, attn_factor, is_neox)
- Accelerates ROPE by eliminating repeated sin/cos generation

This change reduces overhead in multi-layer scenarios while
preserving correctness by verifying parameter consistency.

Co-authored-by: hipudding <huafengchun@gmail.com>

* fix typo

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
Co-authored-by: hipudding <huafengchun@gmail.com>
2025-09-20 13:42:53 +03:00
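The reuse condition described above amounts to keying the cached sin/cos tables on the RoPE parameters and invalidating the cache when any of them change. A rough, hypothetical C++ sketch of that check (the field names mirror the ones listed in the commit message; the real CANN code is organized differently):

```cpp
#include <cstdint>

// Parameters that must match for a cached sin/cos table to be reusable.
struct rope_cache_key {
    float   ext_factor;
    float   theta_scale;
    float   freq_scale;
    float   attn_factor;
    bool    is_neox;
    int64_t position_length;
};

struct rope_cache {
    bool           valid     = false;
    void *         sin_cache = nullptr; // device buffers, allocated elsewhere
    void *         cos_cache = nullptr;
    rope_cache_key key{};

    bool usable(const rope_cache_key & k) const {
        return valid &&
               key.ext_factor      == k.ext_factor  &&
               key.theta_scale     == k.theta_scale &&
               key.freq_scale      == k.freq_scale  &&
               key.attn_factor     == k.attn_factor &&
               key.is_neox         == k.is_neox     &&
               key.position_length == k.position_length;
    }
};

// Per layer: rebuild the sin/cos tables only when the parameters differ from
// the cached ones, then mark the cache valid and reuse it on later layers.
```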
Chenguang Li 9b773acac0
CANN: implement LRU cache for ACL graphs (llama/15814)
* CANN: implement LRU cache for ACL graphs in CANN backend

- Introduce ggml_cann_graph_lru_cache to store multiple ggml_cann_graph objects.
- Graphs are loaded on demand and evicted using LRU policy when capacity is exceeded.
- Updated push, move_to_front, and clear methods to manage cached graphs efficiently.
- Ensures reuse of graphs, reducing graph reconstruction overhead in CANN backend.

* fix typo

* The LRU cache capacity can be configured via an env variable

Signed-off-by: noemotiovon <757486878@qq.com>

* refactor acl graph

* refactor && fix review comments

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:42:53 +03:00
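An LRU cache of the kind described here is typically a list of entries with move-to-front on access and eviction from the tail when capacity is exceeded. A generic sketch under those assumptions (the key type and entry contents are placeholders, not the actual ggml_cann_graph structures):

```cpp
#include <cstddef>
#include <list>
#include <memory>
#include <utility>

// Minimal LRU cache: the most recently used entries live at the front of the
// list; when capacity is exceeded the least recently used entry is evicted.
template <typename Key, typename Graph>
class graph_lru_cache {
public:
    explicit graph_lru_cache(size_t capacity) : capacity_(capacity) {}

    // Returns the cached graph for `key`, moving it to the front, or nullptr.
    // A linear scan is fine for the small capacities such a cache typically has.
    Graph * lookup(const Key & key) {
        for (auto it = entries_.begin(); it != entries_.end(); ++it) {
            if (it->first == key) {
                entries_.splice(entries_.begin(), entries_, it); // move_to_front
                return it->second.get();
            }
        }
        return nullptr;
    }

    // Inserts a newly built graph and evicts the oldest entry if over capacity.
    void push(Key key, std::unique_ptr<Graph> graph) {
        entries_.emplace_front(std::move(key), std::move(graph));
        if (entries_.size() > capacity_) {
            entries_.pop_back();
        }
    }

    void clear() { entries_.clear(); }

private:
    size_t capacity_;
    std::list<std::pair<Key, std::unique_ptr<Graph>>> entries_;
};
```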
Ruben Ortlam 7abe187860
vulkan: throw the oom error instead of no memory type found (llama/15905) 2025-09-20 13:42:53 +03:00
Jeff Bolz d0e98656c3
vulkan: Fix OOB accesses in soft_max_back (llama/15861) 2025-09-20 13:42:52 +03:00
Johannes Gäßler e35d1375ee
HIP: use v_dot2_f32_f16 instruction for FA (llama/15884) 2025-09-20 13:42:52 +03:00
lksj92hs 7fbbb67b47
Workaround for subgroup arithmetic failing on MoltenVK with AMD GPUs (issue 15846) (llama/15886) 2025-09-20 13:42:52 +03:00
Aman Gupta 621764b1a5
CUDA: Add mul_mat_id support for the mmf kernel (llama/15767)
* CUDA: Add mul_mat_id support to the mmf

Add support for mul_mat_id for bs < 16

* Review: use warp_size, fix should_use_mmf condition

* Launch one block per expert, stride along n_expert_used

* templatize mul_mat_id

* Pad shmem to 16 bytes, add helper function mul_mat_f_switch_ids

* Reduce compile times by dividing mmf into f16, bf16 and f32 variants

* Divide mmf by ncols_dst

* Add missing files

* Fix MUSA/HIP builds
2025-09-20 13:42:52 +03:00
Johannes Gäßler 260982232c
CUDA: fix GET_ROWS for large tensors (llama/15882) 2025-09-20 13:42:52 +03:00
Jeff Bolz c29cd54818
vulkan: sort graph to allow more parallel execution (llama/15850)
* vulkan: sort graph to allow more parallel execution

Add a backend proc to allow the backend to modify the graph. The
vulkan implementation looks at which nodes depend on each other
and greedily reorders them to group together nodes that don't
depend on each other. It only reorders the nodes, doesn't change
the contents of any of them.

With #15489, this reduces the number of synchronizations needed.

* call optimize_graph per-split
2025-09-20 13:42:52 +03:00
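The reordering described above can be approximated by a pass that emits nodes in "waves" of mutually independent work, so nodes that do not depend on each other end up adjacent while dependency order is preserved. A simplified host-side sketch of that idea (the node and dependency representation is hypothetical, not the actual Vulkan backend pass):

```cpp
#include <cstddef>
#include <vector>

struct node {
    int              id;
    std::vector<int> deps; // indices of nodes this node reads from
};

// Groups nodes into waves of independent nodes: a node is emitted only after
// all of its inputs were emitted in an earlier wave. Only the order changes,
// never the contents of the nodes.
static std::vector<node> reorder_for_overlap(const std::vector<node> & nodes) {
    std::vector<node> out;
    std::vector<bool> emitted(nodes.size(), false);
    while (out.size() < nodes.size()) {
        std::vector<size_t> wave;
        for (size_t i = 0; i < nodes.size(); ++i) {
            if (emitted[i]) continue;
            bool ready = true;
            for (int d : nodes[i].deps) {
                if (!emitted[(size_t) d]) { ready = false; break; }
            }
            if (ready) wave.push_back(i);
        }
        if (wave.empty()) {
            // dependency cycle (should not happen in a valid graph): keep the rest as-is
            for (size_t i = 0; i < nodes.size(); ++i) {
                if (!emitted[i]) out.push_back(nodes[i]);
            }
            break;
        }
        for (size_t i : wave) {
            emitted[i] = true;
            out.push_back(nodes[i]);
        }
    }
    return out;
}
```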
Aman Gupta 70ee808f3d
CUDA: generate_cu_files.py - add missing mxfp4 (llama/15880) 2025-09-20 13:42:52 +03:00
Georgi Gerganov ae6cc6a386
cuda : fix supports_op condition for get_rows when number of blocks is too large (llama/15868)
* cuda : fix supports_op condition for get_rows when src1->ne2 > 1

ggml-ci

* ggml : add comment about ggml_get_rows

ggml-ci

* cuda : add FIXME [no ci]

* cuda : update support condition

ggml-ci
2025-09-20 13:42:52 +03:00
Georgi Gerganov e9cb59e970
metal : refactor + optimize (llama/15857) 2025-09-20 13:42:51 +03:00
Xuan-Son Nguyen 40bcd1a469
ggml: allow casting between f32 and i32 (llama/15783)
* ggml: allow casting between f32 and i32

* fix cuda

* add vulkan

* fix CPU non-cont

* add non-cont test case

* add note

* extend test number range

* correct note

* add cont version for vulkan
2025-09-20 13:42:51 +03:00
Sigbjørn Skjæret 0175a1df8d
CUDA: non-contiguous src0 not supported for PAD (llama/15869) 2025-09-20 13:42:51 +03:00
Chenguang Li d9c0ead2ab
CANN: Stream sync between devices for acl_graph (llama/15809)
* CANN: Switch to stream synchronization

Switch to stream synchronization because events are not effective.

Co-authored-by: hipudding <huafengchun@gmail.com>

* CANN: add Comments

---------

Co-authored-by: hipudding <huafengchun@gmail.com>
2025-09-20 13:42:51 +03:00
Jeff Bolz dfa7722e2e
vulkan: support im2col_3d (llama/15795) 2025-09-20 13:42:51 +03:00
Aaron Teo db4f504b69
ggml-cpu: clean up s390x SIMD (llama/15855)
* ggml-cpu: clean up s390x simd

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 0da4b6aa07d96b758812d17b2c82267632fa4ba5)
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix hsum data types

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-09-20 13:42:51 +03:00
Jeff Bolz 9523fd8de6
vulkan: Support pad_ext (llama/15794) 2025-09-20 13:42:51 +03:00
Jeff Bolz 647e2d7de5
vulkan: Use larger loads in scalar/coopmat1 matmul (llama/15729)
I think glslang will translate an access like x[i][1].z to
OpAccessChain ... x, i, 1, 2
OpLoad float16_t ...

rather than loading all of x[i] in a single OpLoad. Change the
code to explicitly load the vector/matrix.
2025-09-20 13:42:51 +03:00
Daniel Bevenius cda7d4e5ac
ggml WebGPU: remove userdata from request adapter callback (llama/15527)
* ggml WebGPU: remove userdata from request adapter callback

This commit removes the `userdata` parameter from the WebGPU request
adapter callback in `ggml-webgpu.cpp`. Instead, the lambda function
captures the `webgpu_context` directly.

The motivation for this change is to simplify the code and improve
readability.

* inline the callback lambda into the RequestAdapter call

This commit removes the callback lambda variable and inlines it directly
into the RequestAdapter call.
2025-09-20 13:42:50 +03:00
Johannes Gäßler cd70d89628
CUDA: faster tile FA (Pascal/AMD), headsize 256 (llama/15769) 2025-09-20 13:42:50 +03:00
Charles Xu be2676bb1c
kleidiai: generalize compute_forward_kv_cache to compute_forward_fp16 (llama/15817) 2025-09-20 13:42:50 +03:00
Johannes Gäßler 69400f16f1
ggml-cpu: document use of "free" memory [no ci] (llama/15834) 2025-09-20 13:42:50 +03:00
Aaron Teo f499271c4e
ggml-cpu: drop support for nnpa intrinsics (llama/15821) 2025-09-20 13:42:50 +03:00
Johannes Gäßler 6ff468cfaa
CUDA: fastdiv, launch bounds for mmvq + q8_1 quant (llama/15802)
* CUDA: fastdiv, launch bounds for mmvq + q8_1 quant
2025-09-20 13:42:50 +03:00
Daniel Bevenius 4d6e1144b1
ggml : introduce semantic versioning (ggml/1336)
* ggml : introduce semantic versioning

This commit introduces semantic versioning for the GGML library.

The motivation for this is that the current versioning, using build
numbers, makes it difficult to track changes and releases for projects
that use ggml.

The release steps are the following:
1. Sync the changes from llama.cpp using sync-llama-am.sh and after the
   PR has been approved and merged move to step 2.
2. Run scripts/release.sh and specify the type of release, major, minor,
   or patch. This script will handle incrementing the version
   (major|minor|patch), create a new commit with the version change,
   create a tag for the version, and prepare for the next development
   iteration.
3. Inspect the commits/tag and push to master. This will trigger the
   github release workflow which is triggered for new tags which will
   then publish a new release on github.

Example usage:
```console
$ ./scripts/release.sh major --dry-run
[dry-run] - No changes will be made

Step 1: Reading current version...
Current version: 0.9.0-dev
New release version: 1.0.0

Step 2: Updating version in CMakeLists.txt...
  [dry-run] Would update GGML_VERSION_MAJOR to 1
  [dry-run] Would update GGML_VERSION_MINOR to 0
  [dry-run] Would update GGML_VERSION_PATCH to 0
  [dry-run] Would remove -dev suffix

Step 3: Committing version bump...
  [dry-run] Would commit: 'ggml : bump version to 1.0.0'

Step 4: Creating git tag...
  [dry-run] Would create tag: v1.0.0 with message 'Release version 1.0.0'

Step 5: Preparing for next development cycle...
  [dry-run] Would update GGML_VERSION_MINOR to 1
  [dry-run] Would add -dev suffix back

Step 6: Committing development version...
  [dry-run] Would commit: 'ggml : prepare for development of 1.1.0-dev'

[dry-run] Summary (no changes were made):
  • Would have released version: 1.0.0
  • Would have created tag: v1.0.0
  • Would have set next development version: 1.1.0-dev
```

Refs: https://github.com/ggml-org/ggml/issues/1333

* ggml: create branch for release candidate and check master

* ggml : sign the git tag
2025-09-20 13:42:50 +03:00
Gregor Jasny c80f78cc7b
CUDA : conditionally add cuda architectures (ggml/1341) 2025-09-20 13:42:50 +03:00
Gabe Goodhart ffe560cbb1
metal : Add template specialization for mul_mm_id w/ ne20 == 10 (llama/15799)
Branch: GGMLMetalNE20

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-09-20 13:42:49 +03:00
Chenguang Li 3780a3c917
CANN: Refactor ND to NZ workspace to be per-device (llama/15763)
* CANN: Refactor ND to NZ workspace to be per-device in Ascend backend

- Replaced the previous single global ND→NZ workspace with a per-device
  cache using unordered_map keyed by device ID.
- Functions `release_nz_workspace`, `relloc_nz_workspace`, and
  `get_nz_workspace` now manage workspace independently for each device,
  preventing memory conflicts in multi-device / pipeline parallel scenarios.
- This change fixes potential precision issues caused by workspace
  overwrites when multiple devices perform ND→NZ conversions concurrently.

Co-authored-by: hipudding <huafengchun@gmail.com>

* refactor

Signed-off-by: noemotiovon <757486878@qq.com>

* rename

Signed-off-by: noemotiovon <757486878@qq.com>

* fix review comments

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
Co-authored-by: hipudding <huafengchun@gmail.com>
2025-09-20 13:42:49 +03:00
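The per-device workspace described in this commit is essentially a map from device ID to an independently sized scratch buffer, so concurrent ND→NZ conversions on different devices never overwrite each other. A hedged sketch of that structure, with malloc/free standing in for the real ACL/CANN allocator calls:

```cpp
#include <cstddef>
#include <cstdlib>
#include <unordered_map>

// One ND->NZ scratch buffer per device.
struct nz_workspace {
    void * ptr  = nullptr;
    size_t size = 0;
};

static std::unordered_map<int, nz_workspace> g_nz_workspaces; // keyed by device ID

// Grow the workspace of one device if the requested size is larger than the cached one.
static void * get_nz_workspace(int device, size_t required) {
    nz_workspace & ws = g_nz_workspaces[device];
    if (ws.size < required) {
        std::free(ws.ptr);                 // stand-in for the device allocator
        ws.ptr  = std::malloc(required);
        ws.size = ws.ptr ? required : 0;
    }
    return ws.ptr;
}

// Release the workspace of one device, e.g. at the end of a graph evaluation.
static void release_nz_workspace(int device) {
    auto it = g_nz_workspaces.find(device);
    if (it != g_nz_workspaces.end()) {
        std::free(it->second.ptr);
        it->second = {};
    }
}
```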
leejet 2228462b19
ggml: add ops for WAN video model (cuda && cpu) (llama/15669)
* add conv3d support

* add ggml_pad_ext for cpu & cuda backend

* cuda/cpu: add im2col_3d support

* cuda: make im2col a little faster

* fix cuda pad/scale/im2col3d

* make im2col_3d faster

* gguf: support loading tensors which n_dims > GGML_MAX_DIMS

* fix cuda get_rows

* avoid ggml_conv_3d conflict

* correct GGML_OP_COUNT assertion

* avoid build failure

* avoid build failure on MacOS

* cuda: remove unnecessary MIN define

* fix cpu im2col_3d

* adjust the code style

* cuda: use simpler loop in get_rows

* add test_im2col_3d to test-backend-ops

* test-backend-ops.cpp: remove trailing whitespace

* cpu: im2col_3d support non continuous src

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>

* fix test_im2col_3d

* remove unused variables

* cuda: get_rows: dfloat2 -> float2

* add test_pad_ext to test-backend-ops.cpp

* add gguf_init_from_file_ext impl

* Revert "gguf: support loading tensors which n_dims > GGML_MAX_DIMS"

This reverts commit d8377a0a37f314bd3713fe043b4333ad661610c1.

* Revert "add gguf_init_from_file_ext impl"

This reverts commit d9f1d13208c68ef83b3538201ac7f31614fb1994.

* update ggml_backend_vk_device_supports_op

* fix ggml_backend_vk_device_supports_op

* update other backend supports op for ggml_pad_ext

* metal/opencl/sycl/vulkan: fix GGML_OP_PAD check in supports_op

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-09-20 13:42:49 +03:00
hipudding 96efb472b4
CANN: Fix precision issue on 310I DUO multi-devices (llama/15784) 2025-09-20 13:42:49 +03:00
rmatif 1569daf524
opencl: add hs=40 to FA (llama/15758) 2025-09-20 13:42:49 +03:00
Chenguang Li 5c860e94c6
CANN: fix acl_rstd allocation size in ggml_cann_rms_norm (llama/15760)
Fixes #15330

Adjust the allocation size of acl_rstd. The parameter `dims` is set to 3 according to the CANN documentation.

Co-authored-by: Yuchuan <yuchuan-cao@users.noreply.github.com>
2025-09-20 13:42:49 +03:00
Ruben Ortlam 719a05c665
vulkan: fix mmv subgroup16 selection (llama/15775) 2025-09-20 13:42:49 +03:00
Jeff Bolz 4a702a867c
vulkan: don't use std::string in load_shaders, to improve compile time (llama/15724)
* vulkan: don't use std::string in load_shaders, to improve compile time

* keep the string version for those calls that use it
2025-09-20 13:42:49 +03:00
Daniel Bevenius 4144ae10e9
vulkan : update ggml_vk_instance_validation_ext_available (llama/15666)
* vulkan : update ggml_vk_instance_validation_ext_available

This commit updates ggml_vk_instance_validation_ext_available() to
check for VK_EXT_validation_features instead of
VK_KHR_portability_enumeration.

Based on how the returned boolean is used later in the code (to enable
both the validation layer and the VK_EXT_validation_features extension),
it appears the function may have been intended to check for the
validation layer features extension.

* remove try/catch

This was a leftover from a previous iteration where I was explicitly
querying for a specific validation layer first, which would throw.

* update warning message about validation layers
2025-09-20 13:42:48 +03:00
Shin-myoung-serp 85c7aa3750
ggml vulkan: add hardsigmoid and hardswish operations (llama/15762) 2025-09-20 13:42:48 +03:00
Oliver Simons 9eef377330
CUDA: Optimize `rms_norm_f32` kernel and its fused variants, giving 1-6% perf E2E (llama/15715)
* Add fastdiv, use it in modulo and use modulo in rms_norm_f32

Fastdiv is much faster way to do integer division, which was identified
as bottleneck in rms_norm_f32

* Support more `block_size` values in `rms_norm_f32`

This makes us more flexible in selecting the optimal number of threads w.r.t.
parallelizing across a column vs. the launch overhead of threads and MIO
throttles

* Update ggml/src/ggml-cuda/common.cuh

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Replace modulo with fastmodulo in `rms_norm_f32`

* Use `BinPackArguments=true` for formatting function calls

Will file a separate PR to adjust .clang-format file

* Update ggml/src/ggml-cuda/common.cuh

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Use uint3 for both `fastdiv` and `fastmodulo`

The compiler seems to reliably optimize away the unused .z component in
the fastdiv use-case, see https://godbolt.org/z/rx8KPrKr3

* More constrained type declarations

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Rename fastdiv and fastmodulo variables to shared variable name

As suggested by JohannesGaessler, this increases clarity of the intended
use

* Pack fastdiv/fastmodulo constants into uint2/uint3 objects

By packing constants to be used together into a struct, we are less
likely to make errors.

* Rename function parameter of fastmodulo

`modulo_consts` is more fitting/descriptive

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-09-20 13:42:48 +03:00
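The fastdiv/fastmodulo idea referenced throughout this change replaces integer division by a runtime-invariant divisor with a precomputed multiply-and-shift. A standalone sketch of the technique follows; the names, the constant packing, and the host-side precomputation are illustrative, and the actual CUDA helpers in common.cuh differ in detail:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>

struct fastdiv_vals {
    uint32_t mul;   // magic multiplier
    uint32_t shift; // post-shift, ceil(log2(d))
    uint32_t d;     // original divisor, kept for the modulo path
};

// Precompute once per divisor d >= 1, e.g. on the host before the kernel launch.
static fastdiv_vals make_fastdiv(uint32_t d) {
    uint32_t L = 0;
    while ((uint64_t{1} << L) < d) ++L;
    const uint64_t m = ((uint64_t{1} << 32) * ((uint64_t{1} << L) - d)) / d + 1;
    return {static_cast<uint32_t>(m), L, d};
}

// n / d with one 32x32->64 multiply, an add and a shift (no hardware divide).
static uint32_t fastdiv(uint32_t n, fastdiv_vals f) {
    const uint64_t hi = (uint64_t(n) * f.mul) >> 32;
    return static_cast<uint32_t>((hi + n) >> f.shift);
}

// n % d via the quotient: one extra multiply and subtract.
static uint32_t fastmodulo(uint32_t n, fastdiv_vals f) {
    return n - fastdiv(n, f) * f.d;
}

int main() {
    const fastdiv_vals f = make_fastdiv(48); // e.g. a column count fixed per launch
    for (uint32_t n = 0; n < (1u << 20); ++n) {
        assert(fastdiv(n, f)    == n / 48);
        assert(fastmodulo(n, f) == n % 48);
    }
    std::printf("fastdiv sketch ok\n");
    return 0;
}
```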
hipudding 51bc843f3a
CANN: Add RoPE contiguous check for 310I DUP device (llama/15735) 2025-09-20 13:42:48 +03:00
xctan 75f739c7c8
ggml-cpu : optimize RVV kernels (llama/15720)
* ggml-cpu : optimize rvv ggml_vec_dot_f32

* ggml-cpu : optimize 128-bit rvv ggml_vec_dot_q4_K_q8_K

* ggml-cpu : fix riscv arch flags

* ggml-cpu : add more rvv ops

* ggml-cpu : optimize rvv ggml_vec_dot_q4_K_q8_K

* ggml-cpu : optimize rvv ggml_vec_dot_q6_K_q8_K

* ggml-cpu : minor rvv adjustments

* ggml-cpu : fix riscv include
2025-09-20 13:42:48 +03:00
hipudding 91e9e72ecd
CANN: Mask unsupported TRANSPOSE_1D operator (llama/15733)
CANN currently does not support kernels larger than 255.
This change disables such cases.
2025-09-20 13:42:48 +03:00
Chenguang Li d84b96d9d0
CANN: Fix type float_t to float (llama/15736)
Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:42:48 +03:00
Ruben Ortlam e584edb5ba
vulkan: fix shaders gen when no integer dot is available (llama/15740) 2025-09-20 13:42:48 +03:00
hipudding 5aee53c40f
CANN: Resolve soft_max precision issue (llama/15730)
Previously, the slope tensor was set to fp16 to improve efficiency.
While this worked correctly in FA, it caused precision issues in soft_max.
This change applies different data types for different operators
to balance both accuracy and performance.
2025-09-20 13:42:47 +03:00
Jeff Bolz 1e03aa66f7
vulkan: Fix macro parameter order for f32 matmul shaders (llama/15716) 2025-09-20 13:42:47 +03:00
rmatif fb37f91163
opencl: add attn sinks support for FA kernels (llama/15706) 2025-09-20 13:42:47 +03:00
Chenguang Li 3db49c1c26
CANN: Support eager execution mode under ACL graph compilation (llama/15712)
* [CANN] Support eager execution mode under ACL graph compilation

Add support for running operators in eager mode while ACL graph
compilation is enabled. This allows bypassing graph execution
and directly submitting ops, which is useful for debugging and
reducing graph build overhead in certain scenarios.

Signed-off-by: noemotiovon <757486878@qq.com>

* fix typo

Signed-off-by: noemotiovon <757486878@qq.com>

* rename to acl_graph_mode

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:42:47 +03:00
hipudding 13d3963f71
CANN: Support ext_factor in rope (llama/15710) 2025-09-20 13:42:47 +03:00
Johannes Gäßler f20a7b0e99
ggml-backend: raise GGML_MAX_SPLIT_INPUTS (llama/15722) 2025-09-20 13:42:47 +03:00
Gilad S 9e3600e569
vulkan: use memory budget extension to read memory usage (llama/15545)
* vulkan: use memory budget extension to read memory usage

* fix: formatting and names

* formatting

* fix: detect and cache memory budget extension availability on init

* fix: read `budgetprops.heapBudget` instead of `heap.size` when memory budget extension is available

* style: lints
2025-09-20 13:42:47 +03:00
Jeff Bolz 7a5e7368a3
vulkan: add missing clamps in new mul_mat_id paths (llama/15702)
This is a missing interaction between #15546 and #15652
2025-09-20 13:42:46 +03:00
Ruben Ortlam d5f80a2982
vulkan: disable large mmv subgroups on older Nvidia GPUs (llama/15717) 2025-09-20 13:42:46 +03:00
s-goto-11 8218dc609c
ggml: SVE support for exponential functions (llama/15145)
* SVE support for exponential functions

Add const notation to variable pg

* Update ggml/src/ggml-cpu/vec.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Add const

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-09-20 13:42:46 +03:00
Prashant Vithule 31840a3a56
ggml: aarch64: Implement SVE F16 kernels for vector functions (llama/15115)
* Added sve implementation for vec_dot_fp16 Kernel

* removed white spaces

* Added comment

* removed white spaces

* changed GGML_F16x_VEC_FMA for code consistency

* Update vec.h

---------

Co-authored-by: vithulep <p.m.vithule1517@gmail.com>
2025-09-20 13:42:46 +03:00
Ruben Ortlam 5e70d901b0
Vulkan: Add Integer Dot Product mul_mat_vec shader for legacy quants (llama/14903)
* vulkan: Add Integer Dot Product mul_mat_vec shader for legacy quants

* vulkan: use subgroup operations for quantize_q8_1 shader

* vulkan: add q8_1_x4 type with 128-bit alignment, use in mul_mat_vecq shader

* vulkan: use q8_1_x4 blocks in mul_mmq shader

* vulkan: do 8 calculations per invocation instead of 32 in mul_mat_vecq, similar to mul_mat_vec

* vulkan: tune mul_mat_vecq performance for Intel

* vulkan: fix quantizing issue when tensor is not divisible by 128

* vulkan: adapt integer dot mmv to mmv small m optimization (llama/15355)

* vulkan: allow all subgroup modes for mmv and mmvq

* vulkan: use prealloc intermediate reuse for mmvq path

* vulkan: tune mmvq for Intel, AMD GCN and Nvidia RTX 3090

* vulkan: adapt mmv quantize_y path to conditional sync logic

* vulkan: disable q8_0 mmvq on Nvidia

* vulkan: enable q8_0 on Nvidia pre-turing

* fix prealloc sync condition

* fix llvmpipe subgroup 8 issue
2025-09-20 13:42:46 +03:00
Daniel Bevenius c5f511e697
ggml : WebGPU add TRANSPOSE and RESHAPE to supported ops (llama/15695)
* ggml : WebGPU add TRANSPOSE and RESHAPE to supported ops

This commit adds support for the TRANSPOSE and RESHAPE operations in the
ggml webgpu backend.

Co-authored-by: Diego Devesa <slarengh@gmail.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-09-20 13:42:46 +03:00
Akarshan Biswas 2ba5e0cb47
CUDA: fix build error from ambiguous __half conversions in conv2d (llama/15690)
* CUDA: fix build error from ambiguous __half conversions in conv2d

Building conv2d with half precision failed because `__half` defines
multiple implicit conversion operators (to float, int, short, etc.),
causing ambiguous overload resolution when multiplying with float.

Introduce a templated `to_float` helper that explicitly converts
`__half` via `__half2float`, while passing through float unchanged.
Use this helper in conv2d accumulation to ensure unambiguous and
correct promotion to float.

Fixes some build errors with half-precision kernels on CUDA.

ggml-ci

* CUDA: Replace custom to_float helper with unified ggml_cuda_cast and add half‑>float conversion

* CUDA: Add missing convert.cuh header

* CUDA: remove unnecessary extension in ggml_cuda_cast

* CUDA: Address review comment, remove second type template argument
2025-09-20 13:42:46 +03:00
hipudding bb5f844ec7
CANN: Optimize MUL_MAT_ID (llama/15658) 2025-09-20 13:42:46 +03:00
hipudding ed7ebdc757
CANN: fix RoPE cache issue on multi-device (llama/15629)
* CANN: fix RoPE cache issue on multi-device

RoPE cache only needs to be computed once per token.
However, in multi-device scenarios, not every device starts
computation from layer 0, which may lead to unallocated memory
issues and precision errors.

This commit records the first layer of each device to avoid
the above issues.

* CANN: Optimize first-layer detection method

* CANN: Remove trailing whitespace

* CANN: Only cache the data that can be determined as unchanged through the parameters.

* CANN: Update function comment
2025-09-20 13:42:45 +03:00
Georgi Gerganov 3d470687de
metal : fix checks for available FA kernels (llama/15700)
* metal : fix checks for available FA kernels

ggml-ci

* cont : fix comment [no ci]
2025-09-20 13:42:45 +03:00
Diego Devesa b11c972b88
llama : separate compute buffer reserve from fattn check (llama/15696)
Exposes ggml_backend_sched_split_graph() to allow splitting the graph without allocating compute buffers and uses it to split the graph for the automatic Flash Attention check.
2025-09-20 13:42:45 +03:00
Jeff Bolz db7ecfb61d
vulkan: handle large sizes for get_rows (llama/15686) 2025-09-20 13:42:45 +03:00
Jeff Bolz 191def71ce
vulkan: mul_mat_id coopmat2 optimizations (llama/15546)
* vulkan: mul_mat_id coopmat2 optimizations

Add a path for when the tile fits in BN/2, similar to what we have for mul_mat.

Only call fetch_scales/store_scales once per QUANT_K block, and once at the
beginning in case start_k is not aligned.

* Also add a path for BN/4 - worth a couple more percent
2025-09-20 13:42:45 +03:00
Daniel Bevenius b092e95aaa
vulkan : remove unused portability_enumeration_ext variable (llama/15679)
This commit removes the portability_enumeration_ext variable from the
ggml_vk_instance_portability_enumeration_ext_available function as it
is initialized to false but never modified, making it redundant.
2025-09-20 13:42:45 +03:00
Jeff Bolz 20ce6fcf6a
vulkan: Allow fallback to sysmem memory when vidmem is full (llama/15649)
* vulkan: Allow fallback to sysmem memory when vidmem is full

* vulkan: Add env var GGML_VK_ALLOW_SYSMEM_FALLBACK
2025-09-20 13:42:45 +03:00
Jeff Bolz 71f0ee70bf
vulkan: clamp matmul and FA results to the max finite value (llama/15652)
* vulkan: clamp matmul and FA results to the max finite value

* only clamp for fp16
2025-09-20 13:42:45 +03:00
Charles Xu 74583845b6
ggml: update kleidiai to v1.13.0 (llama/15663) 2025-09-20 13:42:44 +03:00
Johannes Gäßler f6ba3949b6
llama: use FA + max. GPU layers by default (llama/15434)
* llama: use max. GPU layers by default, auto -fa

* ggml-backend: abort instead of segfault
2025-09-20 13:42:44 +03:00
Johannes Gäßler b7809c401b
CUDA: use FP32 arithmetic for conv2d (llama/15683) 2025-09-20 13:42:44 +03:00
Jeff Bolz a6dec4f49d
vulkan: Skip syncing for prealloc_y when it is reused (llama/15544) 2025-09-20 13:42:44 +03:00
Chenguang Li d629af157e
CANN: FIx compiler warnings (llama/15661)
Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:42:44 +03:00
Aman Gupta 82ce91e7d2
CUDA: fix bug in rms_norm fusion (llama/15660)
* CUDA: fix bug in rms_norm fusion

* Fix bug for OP_REPEAT

* Fix index for add
2025-09-20 13:42:44 +03:00
Aman Gupta 6d7ddaf793
CUDA: fuse adds, fuse add with rms norm (llama/15631)
* CUDA: fused add with rms_norm_mul

* Non-broadcast fuse works

* Add fused adds

* format

* Remove n_fuse from template params

* Address review comments

* Move template inside binbcast
2025-09-20 13:42:44 +03:00
mnehete32 dc9f55bbb0
CUDA: add conv2d (llama/15635)
* CUDA: add conv2d

* CUDA: conv2d - correct formatting and added const
2025-09-20 13:42:44 +03:00
Aaron Teo 6287027a2c
ggml-cpu: fix invalid hsum build in debug s390x (llama/15634)
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-09-20 13:42:43 +03:00
compilade 6dffbaa0cb
ggml : fix SSM_SCAN for n_groups > 1 (llama/15625) 2025-09-20 13:42:43 +03:00
Georgi Gerganov cac6253744
kv-cache : remove LLAMA_SET_ROWS checks (llama/15505)
ggml-ci
2025-09-20 13:42:43 +03:00
matiaslin 88c0582b61
cuda: Add cublasLt_static linking when GGML_STATIC is enabled (llama/15622)
Prior to this change, we faced undefined cublasLt references when
attempting to compile 'llama-cli' with GGML_STATIC=ON on Linux.

We add linking with CUDA::cublasLt_static when CUDA version is greater
than 10.1.
2025-09-20 13:42:43 +03:00
uvos 65fa2c0c1a
HIP: Enable support for ggml_backend_cuda_register_host_buffer (llama/15615) 2025-09-20 13:42:43 +03:00
Chenguang Li 02e8b23137
CANN: refactor mask handling and improve performance in FA (llama/15561)
* CANN(flash-attn): refactor mask handling and improve performance

1. Refactored the mask computation in Flash Attention, unified the logic without separating prefill and decode.
2. Optimized performance in non-alibi scenarios by reducing one repeat operation.
3. Updated operator management to explicitly mark unsupported cases on 310P devices and when dim is not divisible by 16.

Signed-off-by: noemotiovon <757486878@qq.com>

* [CANN]: fix review

Signed-off-by: noemotiovon <757486878@qq.com>

* [CANN]: Optimization FA BNSD to BSND

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:42:43 +03:00
xctan ece1bdfe7e
ggml-cpu : add basic RVV support for vector f32 ops (llama/15057)
* ggml-cpu : add basic RVV support for vector f32 ops

* ggml-cpu : add RVV support for f32 softmax
2025-09-20 13:42:43 +03:00
rmatif a6ec224efa
OpenCL: add fused group_norm/norm, mul, add (llama/15314)
* add fused group_norm/norm, mul, add

* fix spacing

* revert rms_norm logic

* fix trailing whitespace
2025-09-20 13:42:43 +03:00
Akarshan Biswas 94fa9f63b3
SYCL: fix rms_norm_mul_add for tensor dim not a multiple of sg_size (llama/15592)
The original implementation unconditionally returned true for this operation, leading to a failure when the tensor's first dimension (ne[0]) was not a multiple of WARP_SIZE. This caused a GGML_ASSERT(ncols % WARP_SIZE == 0) failure in ggml-sycl/norm.cpp.

This change updates the ggml_backend_sycl_device_supports_op check to correctly return true for GGML_OP_RMS_NORM only when the first dimension of the tensor is a multiple of WARP_SIZE, ensuring the operation can be performed without error.
2025-09-20 13:42:42 +03:00
shalinib-ibm 31c7784e09
llamafile: PowerPC Sgemm Optimization (llama/15558)
This patch improves GEMM for FP32 Data Type on PowerPC

Implements GEMM on large blocks with configurable block size mc, nc, kc
(default: 256, 256, 256).
Packing Function optimized to access blocks as per memory layout.
GEMM Optimized to work on larger blocks.
Isolated Packing from GEMM Operations for better MMA utilization.

Verified functionality and correctness using llama-cli and a standalone
test case (performs matmul and compares the final matrix C result with the base).

Minor code refactoring changes:
Replace macro with inline function
Code Indent made consistent with 4 spaces

Performance Testing:

Observed 50% ~ 70% improvement in Prompt Processing Speed measured using
llama-bench with Meta-Llama3-8B FP32 Model. Similar gains observed with
Mistral-7b-Instruct-v0.3 Model.

model                   Size                Params     Backend       Threads   Test    Patch   Base
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp512   98.58   60.3
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp1024  95.88   57.36
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp2048  85.46   53.26
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp4096  68.66   45.78
llama 8B all F32        29.92 GiB           8.03 B      CPU           20       pp6144  57.35   40.44

25 ~ 30% improvement in llama-batched-bench with Meta-Llama3-8B in
Prompt Processing Speed for large prompts (256, 512, 1024, 2048, 4096 tokens)
with various batch sizes (1, 2, 4, 8, 16)

Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>
2025-09-20 13:42:42 +03:00
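The blocking scheme described above (mc x nc x kc tiles, default 256) is the classic cache-blocked GEMM loop nest, where each tile of A and B stays resident in cache while it is reused. A generic, unoptimized sketch of the loop structure only; it omits the packing step and the MMA/VSX micro-kernels that provide the actual speedup:

```cpp
#include <algorithm>
#include <cstddef>

// C[m x n] += A[m x k] * B[k x n], all row-major, blocked into mc x nc x kc tiles.
// The caller is expected to have initialized C (e.g. to zero).
static void gemm_blocked(const float * A, const float * B, float * C,
                         size_t m, size_t n, size_t k,
                         size_t mc = 256, size_t nc = 256, size_t kc = 256) {
    for (size_t jc = 0; jc < n; jc += nc) {
        const size_t nb = std::min(nc, n - jc);
        for (size_t pc = 0; pc < k; pc += kc) {
            const size_t kb = std::min(kc, k - pc);
            for (size_t ic = 0; ic < m; ic += mc) {
                const size_t mb = std::min(mc, m - ic);
                // Inner kernel on one mb x nb tile; a real implementation packs the
                // A/B tiles for contiguous access and calls an MMA micro-kernel here.
                for (size_t i = 0; i < mb; ++i) {
                    for (size_t p = 0; p < kb; ++p) {
                        const float a = A[(ic + i) * k + (pc + p)];
                        for (size_t j = 0; j < nb; ++j) {
                            C[(ic + i) * n + (jc + j)] += a * B[(pc + p) * n + (jc + j)];
                        }
                    }
                }
            }
        }
    }
}
```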
Johannes Gäßler 53010199a1
CUDA: return -1 for nonexistent compiled arch (llama/15587) 2025-09-20 13:42:42 +03:00
Georgi Gerganov 1c21a850be
metal : optimize FA vec for large sequences and BS <= 8 (llama/15566)
* metal : optimize FA vec for large heads and sequences

* metal : adjust small-batch mul mv kernels

ggml-ci

* batched-bench : fix total speed computation

ggml-ci

* cont : add comments

ggml-ci
2025-09-20 13:42:42 +03:00
Georgi Gerganov dc693ca8c9
metal : improve `MUL_MAT_ID` (llama/15541)
* metal : mul_mm_id remove hdst

* metal : remove mul_mm_id hsrc1

* metal : mul_mm_id simplify + add test

* metal : opt mul_mm_id map0

* metal : optimize mul_mm_id id gathering

* metal : mul/div opt

* metal : optimize mul_mm_id_map0

ggml-ci
2025-09-20 13:42:42 +03:00
Sigbjørn Skjæret 3bb52acb46
metal : remove contiguous assertion for src0 in IM2COL (llama/15577)
* remove contiguous assertion for src0 in IM2COL

* add contiguous check in supports_op
2025-09-20 13:42:42 +03:00
Yoshi_likes_e4 9828caafb5
Add a warning for special devices (llama/15563)
* Add warning

* Print the devices names

* Add newlines

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Fix vector names

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-09-20 13:42:42 +03:00
Jeff Bolz 79e2bd5ea8
vulkan: Remove splitting for mul_mat_id (llama/15568)
row_ids only needs to hold the BN rows for the current tile.
2025-09-20 13:42:42 +03:00
Qeeweew 2468074e91
CUDA: Accelerate MXFP4 table lookup using `__byte_perm` (llama/15451)
* CUDA: optimize get_int_from_table_16

* CUDA: use v_perm_b32 to replace byte_perm on AMD GPUs

* revise documentation

---------

Co-authored-by: xix <xiapc@outlook.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-09-20 13:42:41 +03:00
lhez 582ef379ab
opencl: fix support ops condition for `rms_norm` (llama/15560) 2025-09-20 13:42:41 +03:00
Ruben Ortlam 335d2a5405
vulkan: fix min subgroup 16 condition for mmid subgroup optimization (llama/15565) 2025-09-20 13:42:41 +03:00
Ihar Hrachyshka 8851ef5463
metal: fix regression when no metal devices are present (llama/15531) 2025-09-20 13:42:41 +03:00
Johannes Gäßler 1e856b2919
CUDA: MoE helper in device code, better tile sizes (llama/15525)
* CUDA: MoE helper in device code, better tile sizes

* reduce superfluous CUDA blocks
2025-09-20 13:42:41 +03:00
Georgi Gerganov 54be54f4ce
metal : add FA kernels for HS=40 (llama/15559)
ggml-ci
2025-09-20 13:42:41 +03:00
Chenguang Li 86331f74e0
CANN: ROPE cache sin/cos repeat (llama/15501)
Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:42:41 +03:00
Ruben Ortlam ee11ed42a9
vulkan: apply MUL_MAT_ID subgroup optimization to non-coopmat devices (llama/15524)
* vulkan: use subgroup function for mul_mat_id shader even without coopmat

* vulkan: fix compile warnings

* vulkan: properly check for subgroup size control and require full subgroups for subgroup mul_mat_id

* vulkan: disable subgroup mul_mat_id on devices with subgroups < 16
2025-09-20 13:42:41 +03:00
Jeff Bolz 85d4d2c875
vulkan: Support FA with any multiple of 8 head sizes (llama/15537)
The scalar FA shader already handled multiples of 8. The coopmat1 FA
shader assumed 16x16x16 and the shared memory allocations need the HSK
dimensions padded to a multiple of 16. NVIDIA's coopmat2 implementation
requires multiples of 16 for N and K, and needs the matrix dimensions
padded and loads clamped.

Store the FA pipelines in a map, indexed by the pipeline state.
2025-09-20 13:42:40 +03:00
Ruben Ortlam 8c7872d6ed
vulkan: enable Conv2D for Apple after MoltenVK fixed the bug (llama/15526) 2025-09-20 13:42:40 +03:00
Jeff Bolz 27817867cc
vulkan: workaround MoltenVK compile failure in multi_add (llama/15506)
* vulkan: workaround MoltenVK compile failure in multi_add

* Update ggml/src/ggml-vulkan/vulkan-shaders/multi_add.comp

Co-authored-by: 0cc4m <picard12@live.de>
2025-09-20 13:42:40 +03:00
Johannes Gäßler b0d15e1eb6
CUDA: fix half2 -> half conversion for HIP (llama/15529) 2025-09-20 13:42:40 +03:00
Jeff Bolz 2f6288c33c
vulkan: optimize rms_norm, and allow the work to spread across multiple SMs (llama/15281)
* vulkan: optimize rms_norm, and allow the work to spread across multiple SMs

There are really two parts to this change:
(1) Some optimizations similar to what we have in soft_max, to unroll with
different numbers of iterations.
(2) A fusion optimization where we detect add followed by rms_norm, and make
the add shader atomically accumulate the values^2 into memory. Then the
rms_norm shader can just load that sum. This allows the rms_norm to be
parallelized across multiple workgroups, it just becomes a simple per-element
multiply.

The fusion optimization is currently only applied when the rms_norm is on a
single vector. This previously always ran on a single SM. It could apply more
broadly, but when there are other dimensions the work can already spread across
SMs, and there would be some complexity to tracking multiple atomic sums.

* Change add+rms_norm optimization to write out an array of partial sums
rather than using atomic add, to make it deterministic. The rms_norm
shader fetches a subgroup's worth in parallel and uses subgroupAdd to
add them up.

* complete rebase against fused adds - multi_add shader can also compute partial sums

* fix validation errors

* disable add_rms_fusion for Intel due to possible driver bug

* resolve against #15489, sync after clearing partial sums
2025-09-20 13:42:40 +03:00
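The deterministic variant described in the last bullets works in two passes: the fused add writes per-workgroup partial sums of the squared values to an array, and the rms_norm pass reduces those partials before scaling. A CPU-side sketch of the data flow only; the real work happens in Vulkan shaders with subgroupAdd, and the structure below just illustrates the two-pass idea:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Pass 1: fused elementwise add that also emits partial sums of squares,
// one partial per "workgroup" of block_size elements.
static void fused_add_with_partials(const float * a, const float * b, float * out,
                                    size_t n, size_t block_size,
                                    std::vector<float> & partials) {
    partials.assign((n + block_size - 1) / block_size, 0.0f);
    for (size_t i = 0; i < n; ++i) {
        out[i] = a[i] + b[i];
        partials[i / block_size] += out[i] * out[i];
    }
}

// Pass 2: rms_norm no longer has to re-read and square the whole vector in one
// workgroup; it reduces the partial sums and applies a per-element scale, which
// lets the elementwise part spread across many workgroups.
static void rms_norm_from_partials(float * x, size_t n,
                                   const std::vector<float> & partials, float eps) {
    float sum_sq = 0.0f;
    for (float p : partials) {
        sum_sq += p;
    }
    const float scale = 1.0f / std::sqrt(sum_sq / (float) n + eps);
    for (size_t i = 0; i < n; ++i) {
        x[i] *= scale;
    }
}
```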
Jeff Bolz d8eb9f7d67
vulkan: Rewrite synchronization to allow some overlap between nodes (llama/15489)
Track a list of nodes that need synchronization, and only sync if the new node
depends on them (or overwrites them). This allows some overlap which can
improve performance, and centralizes a big chunk of the synchronization logic.

The remaining synchronization logic involves writes to memory other than the
nodes, e.g. for dequantization or split_k. Each of these allocations has a bool
indicating whether they were in use and need to be synced. This should be
checked before they are written to, and set to true after they are done being
consumed.
2025-09-20 13:42:40 +03:00
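The scheme described above keeps a running set of nodes whose results have not yet been synchronized and only inserts a barrier when the next node actually reads or overwrites one of them. A minimal host-side sketch of that decision (the node and buffer representation is hypothetical):

```cpp
#include <unordered_set>
#include <vector>

struct gpu_node {
    int              id;
    std::vector<int> reads;  // ids of nodes whose outputs this node consumes
    std::vector<int> writes; // ids of nodes whose outputs this node overwrites
};

// Tracks nodes submitted since the last barrier and decides, per node,
// whether a synchronization is required before it can run.
struct sync_tracker {
    std::unordered_set<int> pending; // submitted but not yet synchronized

    bool needs_sync(const gpu_node & n) const {
        for (int r : n.reads)  if (pending.count(r)) return true;
        for (int w : n.writes) if (pending.count(w)) return true;
        return false;
    }

    void on_submit(const gpu_node & n) {
        if (needs_sync(n)) {
            // the real backend records a barrier here
            pending.clear();
        }
        pending.insert(n.id);
    }
};
```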
Acly 5094171c37
vulkan : support ggml_mean (llama/15393)
* vulkan : support ggml_mean

* vulkan : support sum, sum_rows and mean with non-contiguous tensors

* vulkan : fix subbuffer size not accounting for misalign offset

* tests : add backend-op tests for non-contiguous sum_rows

* cuda : require contiguous src for SUM_ROWS, MEAN support
* sycl : require contiguous src for SUM, SUM_ROWS, ARGSORT support

* require ggml_contiguous_rows in supports_op and expect nb00=1 in the shader
2025-09-20 13:42:40 +03:00
Jeff Bolz 485c5c3b3b
vulkan: optimize mul_mat_id loading row ids into shared memory (llama/15427)
- Spread the work across the whole workgroup. Using more threads seems to
far outweigh the synchronization overhead.
- Specialize the code for when the division is by a power of two.
2025-09-20 13:42:40 +03:00
Reese Levine bb5d7e2c31
ggml WebGPU: add support for quantization types (llama/15440)
* Begin work on set_rows

* Work on set rows

* Add error buffers for reporting unsupported SET_ROWS indices

* Remove extra comments

* Work on templating for different types in shaders

* Work on shader type generation

* Working q4_0 mul_mat and some templating for different types

* Add q4_0_f16 matmul and fix device init

* Add matmul support for basic quantization types

* Add q2_k and q3_k quantization

* Add rest of k-quants

* Get firt i-quant working

* Closer to supporting all i-quants

* Support rest of i-quants

* Cleanup code

* Fix python formatting

* debug

* Bugfix for memset

* Add padding to end of buffers on creation

* Simplify bit-shifting

* Update usage of StringView
2025-09-20 13:42:39 +03:00
rmatif d7b7498e76
ggml: add `conv3d` op (llama/15182)
* add conv3d

* bump GGML_OP_COUNT
2025-09-20 13:42:39 +03:00
Yavor Ivanov 18ca4e8f63
cuda : add Pad Reflect 1D support (llama/14659)
* Add Pad Reflect 1D CUDA support

* Update ggml/src/ggml-cuda/pad_reflect_1d.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-09-20 13:42:39 +03:00
Aaron Teo 380d3db216
ggml-cpu: Support Q5_0 and Q5_1 on s390x (llama/15486)
* ggml-cpu: initial q5_0 impl for s390x

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: updated q5_0 code for better performance

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: use optimised hsum for better performance

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: introduce q5_1 simd + refactor q5_0

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix incorrect return type vec_hsum

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: q5_0 incomplete refactor + table_b2b_0 activation

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: refactor q5_1

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: q5_1 update loop unroll to 4

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: update q5_0 unroll to 4

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: update build-s390x docs

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: update unused variables q5_0

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* docs: update the last update date

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-09-20 13:42:39 +03:00
Chenguang Li be841c3f6e
CANN: Optimize RMS_NORM using cache (llama/15419)
* [CANN] Optimize RMS_NORM using cache

Signed-off-by: noemotiovon <757486878@qq.com>

* fix typo

Signed-off-by: noemotiovon <757486878@qq.com>

* fix review comment

Signed-off-by: noemotiovon <757486878@qq.com>

* codestyle adjustment

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:42:39 +03:00
Diego Devesa 554f96f385
sched : fix possible use of wrong ids tensor when offloading moe prompt processing (llama/15488) 2025-09-20 13:42:39 +03:00
Acly 9dd5039968
vulkan : support conv_2d_dw with f16 weights (llama/15392) 2025-09-20 13:42:39 +03:00
Dong Won Kim 7eebd498ff
vulkan: add exp operation (llama/15456)
Co-authored-by: aeseulgi <kim2h7903@gmail.com>
2025-09-20 13:42:39 +03:00
Jeff Bolz 04d0f9a066
vulkan: Reuse conversion results in prealloc_y (llama/15410)
* vulkan: Reuse conversion results in prealloc_y

Cache the pipeline and tensor that were most recently used to fill prealloc_y,
and skip the conversion if the current pipeline/tensor match.

* don't use shared pointer for prealloc_y_last_pipeline_used
2025-09-20 13:42:38 +03:00
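The caching described above is plain memoization keyed on the last (pipeline, tensor) pair. A minimal C++ sketch under that assumption (the type names are hypothetical, not the Vulkan backend's):

```cpp
#include <cstdio>

// Remember which (pipeline, tensor) pair last filled the staging buffer and
// skip the conversion when the same pair comes around again.
struct ConversionCache {
    const void *last_pipeline = nullptr;
    const void *last_tensor   = nullptr;

    bool needs_conversion(const void *pipeline, const void *tensor) {
        if (pipeline == last_pipeline && tensor == last_tensor) {
            return false;               // prealloc_y already holds this result
        }
        last_pipeline = pipeline;
        last_tensor   = tensor;
        return true;                    // run the conversion and record it
    }
};

int main() {
    ConversionCache cache;
    int pipeline_a = 0, tensor_x = 0;
    printf("%d\n", cache.needs_conversion(&pipeline_a, &tensor_x));  // 1: convert
    printf("%d\n", cache.needs_conversion(&pipeline_a, &tensor_x));  // 0: reuse
}
```
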
Xuan-Son Nguyen c5874bcf42
ggml : fix condition of im2col on Metal backend (llama/15460) 2025-09-20 13:42:38 +03:00
R0CKSTAR 7c077845fd
musa: add GGML_UNUSED_VARS (llama/15446)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-09-20 13:42:38 +03:00
Diego Devesa 622dec5bf6
sched : copy only the used experts when offloading prompt processing (llama/15346) 2025-09-20 13:42:38 +03:00
Johannes Gäßler 8f0579a33d
CUDA: refactor FA support/selection code (llama/15454) 2025-09-20 13:42:38 +03:00
Johannes Gäßler 316ed78d68
CUDA: replace GGML_CUDA_F16 with CUDA arch checks (llama/15433) 2025-09-20 13:42:38 +03:00
Jeff Bolz 5907ab3e4a
vulkan: shorten pipeline name strings (llama/15431)
These detailed strings were causing increased build time on gcc.
2025-09-20 13:42:38 +03:00
R0CKSTAR 0eb2d653bd
musa: fix build warnings (llama/15258)
* musa: fix build warnings

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* fix warning: comparison of integers of different signs: 'const int' and 'unsigned int' [-Wsign-compare]

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-09-20 13:42:38 +03:00
lhez db1d2380a0
opencl: mark `argsort` unsupported if cols exceed workgroup limit (llama/15375) 2025-09-20 13:42:37 +03:00
SHUAI YANG 2572322bac
CANN: optimize rope operator (llama/15335)
* optimize rope ops

* amendment

* delete trailing whitespace

* change the variable name
2025-09-20 13:42:37 +03:00
R0CKSTAR 02b49af98d
musa: handle __hgt2_mask, available starting from MUSA SDK rc4.3.0 (llama/15413)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-09-20 13:42:37 +03:00
Marvin Gießing 2ce5860a62
ggml-cpu: add mxfp4 VSX intrinsics for Power9+ (ppc64le) hardware (llama/15385)
* Added VSX intrinsics for Power9+ systems

Signed-off-by: mgiessing <marvin.giessing@gmail.com>

* Manual unrolling for minor perf improvement

Signed-off-by: mgiessing <marvin.giessing@gmail.com>

* Update ggml/src/ggml-cpu/arch/powerpc/quants.c

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Signed-off-by: mgiessing <marvin.giessing@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-09-20 13:42:37 +03:00
Georgi Gerganov 80447f7412
cuda : remove obsolete sources (ggml/1332)
ggml-ci
2025-09-20 13:42:37 +03:00
Carlos Zoido 44fa2f647c
ggml : Fix MKL detection by quoting BLAS_INCLUDE_DIRS (#3426)
While working on the [whisper-cpp](https://conan.io/center/recipes/whisper-cpp) Conan package for ConanCenter, I noticed that enabling the `with_blas` option causes the build to fail due to an issue in the _MKL_ detection logic.

The problem is that the CMake condition currently expands `BLAS_INCLUDE_DIRS` without quotes:

```cmake
if (${BLAS_INCLUDE_DIRS} MATCHES "mkl" AND (${GGML_BLAS_VENDOR} MATCHES "Generic" OR ${GGML_BLAS_VENDOR} MATCHES "Intel"))
```
When `BLAS_INCLUDE_DIRS` is a list (as Conan provides it), the `if()` command receives multiple arguments and produces a CMake error:

```bash
...
-- BLAS found, Includes: /root/.conan2/p/b/openb034c5a6ca927b/p/include;/root/.conan2/p/b/openb034c5a6ca927b/p/include/openblas
CMake Error at ggml/src/ggml-blas/CMakeLists.txt:77 (if):
  if given arguments:

    "/root/.conan2/p/b/openb034c5a6ca927b/p/include" "/root/.conan2/p/b/openb034c5a6ca927b/p/include/openblas" "MATCHES" "mkl" "AND" "(" "OpenBLAS" "MATCHES" "Generic" "OR" "OpenBLAS" "MATCHES" "Intel" ")"

  Unknown arguments specified
...
```
This PR fixes the issue by quoting the variable:

```cmake
if ("${BLAS_INCLUDE_DIRS}" MATCHES "mkl" AND (${GGML_BLAS_VENDOR} MATCHES "Generic" OR ${GGML_BLAS_VENDOR} MATCHES "Intel"))
```

With this change, the whole list is treated as a single string and the regex still works correctly.
2025-09-19 05:33:53 +02:00
Siva Mahadevan edea8a9c3c
whisper : prefer curl over wget in download scripts (#3409)
On busybox-based systems like Alpine Linux, wget does not have
certain CLI flags such as '--no-config'. Thus, search for the
existence of 'curl' first in the PATH before wget. wget2 is
still the preferred download tool.
2025-09-08 06:32:19 +02:00
Daniel Bevenius bb0e1fc60f
ci : remove brew installation of cmake for macos-latest (#3408)
This commit removes the brew install of cmake for macos-latest
as cmake now seems to be pre-installed on the runner.

The motivation for this is that this job is failing with the following
error:
```console
Error: cmake was installed from the local/pinned tap
but you are trying to install it from the homebrew/core tap.
Formulae with the same name from different taps cannot be installed at the same time.
```
2025-09-05 15:20:32 +02:00
Daniel Bevenius 9bfc535130
tests : use CMake definitions for model/sample paths (#3406)
This commit modifies the test-vad and test-vad-full tests to use CMake
definitions for the model and sample paths.

The motivation for this is that currently the tests use relative paths
which might not always be correct depending on the working directory.
With the changes in this commit the tests can be run using ctest:
```console
$ ctest -R ^test-vad$ --test-dir build
```
Or directly (which is not currently possible without this fix):
```
./build/bin/test-vad
```

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3404
2025-09-04 15:08:30 +02:00
Treboko 7745fcf328
Handle negative value in padding (#3389)
This might happen depending on how $stderr.winsize is defined. If the expression "$stderr.winsize[1] - line.size" in line 114 becomes negative, the padding calculation raises a "negative argument" exception.
2025-08-25 01:34:23 +09:00
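The underlying fix amounts to clamping the computed padding so it never goes negative when the terminal is narrower than the line. The original code is Ruby; the same guard expressed as a small C++ sketch:

```cpp
#include <algorithm>
#include <cstdio>
#include <string>

// If the terminal is narrower than the line, the computed padding goes
// negative and must be clamped to zero before building the padding string.
static std::string pad_to_width(const std::string &line, int terminal_width) {
    const int padding = std::max(0, terminal_width - (int) line.size());
    return line + std::string(padding, ' ');
}

int main() {
    printf("[%s]\n", pad_to_width("hello", 8).c_str());  // "hello   "
    printf("[%s]\n", pad_to_width("hello", 3).c_str());  // "hello" (no error)
}
```
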
Thea Mukhi c09b0e0c4c
models : update `./models/download-ggml-model.cmd` to allow for tdrz download (#3381)
* added patch to cmd to allow for tdrz download

* remove @signs

* Update models/download-ggml-model.cmd

Add missing closing double quote.

---------

Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2025-08-24 11:52:33 +02:00
Georgi Gerganov fc45bb8625 talk-llama : sync llama.cpp
ggml-ci
2025-08-18 20:30:45 +03:00
Georgi Gerganov 33c3c2fe2e sync : ggml 2025-08-18 20:30:45 +03:00
Reese Levine 5ed45b2518 ggml: Add initial WebGPU backend (llama/14521)
ggml-ci
2025-08-18 20:30:45 +03:00
Aaron Teo 03d6607691 ggml : initial zDNN backend (llama/14975) 2025-08-18 20:30:45 +03:00
Georgi Gerganov 7fd2fbde45 common : handle mxfp4 enum
ggml-ci
2025-08-18 20:30:45 +03:00
compilade 0fd4a250df ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (llama/15379)
* ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors

* ggml-quants : avoid division by zero in make_q3_quants
2025-08-18 20:30:45 +03:00
Jeff Bolz fcd694ec1a vulkan: disable spirv-opt for bfloat16 shaders (llama/15352) 2025-08-18 20:30:45 +03:00
Jeff Bolz 6835e0cf77 vulkan: Use larger workgroups for mul_mat_vec when M is small (llama/15355)
* vulkan: Use larger workgroups for mul_mat_vec when M is small

Also use subgroup instructions for (part of) the reduction when supported.
Without this, the more expensive reductions would eat into the benefits of
the larger workgroups.

* update heuristic for amd/intel

Co-authored-by: 0cc4m <picard12@live.de>

---------

Co-authored-by: 0cc4m <picard12@live.de>
2025-08-18 20:30:45 +03:00
Dong Won Kim c225f25907 vulkan: support sqrt (llama/15370) 2025-08-18 20:30:45 +03:00
Jeff Bolz 0a8285186a vulkan: Optimize argsort (llama/15354)
- Launch an appropriate number of invocations (next larger power of two).
32 invocations is common and the barrier is much cheaper there.
- Specialize for "needs bounds checking" vs not.
- Make the code less branchy and [[unroll]] the loops. In the final code,
I see no branches inside the main loop (only predicated stores) when
needs_bounds_check is false.
- Always sort ascending, then apply the ascending vs descending option when
doing the final stores to memory.
- Copy the values into shared memory, makes them slightly cheaper to access.
2025-08-18 20:30:45 +03:00
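Two of the points above translate directly to scalar code: round the invocation count up to the next power of two, and always sort ascending, flipping the order only at the final store. A CPU-side C++ sketch of both (illustrative, not the shader):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <vector>

// Round n up to the next power of two (the shader launches this many invocations).
static uint32_t next_pow2(uint32_t n) {
    uint32_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

// Sort ascending unconditionally; apply the descending option only when the
// final results are written out.
static std::vector<int> argsort(const std::vector<float> &v, bool descending) {
    std::vector<int> idx(v.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(), [&](int a, int b) { return v[a] < v[b]; });
    if (descending) {
        std::reverse(idx.begin(), idx.end());  // "flip" at store time
    }
    return idx;
}

int main() {
    printf("next_pow2(24) = %u\n", next_pow2(24));                    // 32
    for (int i : argsort({0.5f, -1.0f, 2.0f}, true)) printf("%d ", i); // 2 0 1
    printf("\n");
}
```
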
Jeff Bolz c44d449635 vulkan: fuse adds (llama/15252)
* vulkan: fuse adds

Fuse adds that have the same shape, which are common in MoE models.
It will currently fuse up to 6 adds, because we assume no more than
8 descriptors per dispatch. But this could be changed.

* check runtimeDescriptorArray feature

* disable multi_add for Intel due to likely driver bug
2025-08-18 20:30:45 +03:00
Jeff Bolz d14e626e6a vulkan: Support mul_mat_id with f32 accumulators (llama/15337)
* vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id

* vulkan: Support mul_mat_id with f32 accumulators, but they are not hooked up

- There's no explicit way to request f32 precision for mul_mat_id, but there
probably should be, and this gets the code in place for that.
- A couple fixes to check_results.
- Remove casts to fp16 in coopmat1 FA shader (found by inspection).
2025-08-18 20:30:45 +03:00
Jeff Bolz 5b62995350 vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id (llama/15334) 2025-08-18 20:30:45 +03:00
rmatif e27f4f205d OpenCL: add initial FA support (llama/14987)
* add F16/F16 fa support

* fix kernel init

* use mad instead of fma

* use inline function

* mark FA with sinks as unsupported for now

* add pragma unroll to loops
2025-08-18 20:30:45 +03:00
lhez 77771b2711 opencl: add initial mxfp4 support via mv (llama/15270)
* opencl: add reference `mul_mv_mxfp4_f32`

* opencl: add reference `mul_mv_id` for mxfp4

* Q4_0 transpose fix for Adreno

---------

Co-authored-by: shawngu-quic <shawngu@qti.qualcomm.com>
2025-08-18 20:30:45 +03:00
Georgi Gerganov 1e8d692365 vulkan : fix out-of-bounds access in argmax kernel (llama/15342)
ggml-ci
2025-08-18 20:30:45 +03:00
Georgi Gerganov 1a92fde1b6 vulkan : fix compile warnings on macos (llama/15340)
ggml-ci
2025-08-18 20:30:45 +03:00
Aaron Teo f797a6f9c8 ggml: initial IBM zDNN backend (llama/14975)
* ggml-zdnn: initial backend impl

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

ggml-zdnn: temp change z17 to arch15

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

ggml-zdnn: fix build bugs

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: tensor->extra logging check

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

ggml-zdnn: add layout name mapping, ztensor information

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

ggml-zdnn: separate logging into its own line

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

ggml-zdnn: add shape comparison

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

ggml-zdnn: add ggml_tensor shape log

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

ggml-zdnn: fix incorrect shape logging

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add output buffer check

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: run compute and store into tensor->extra

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add set_tensor

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add more loggers

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: update set_tensor logging to check only for matmul

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: last working matmul version

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add comments to prevent accidentally deleting lines

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: support op out_prod

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: update op out_prod to use tensor->extra

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: rewrite the backend implementation

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: bugfix new impl

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix compiler warnings and bugfixes

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: test ztensor finding in init_tensor

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: implement at least 1 op to test

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: assign tensor->extra to buffer

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add check for view tensors to prevent init_tensor

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: rework init_tensor to create new buffers

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: switch to std vector instead of array

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: switch buffers back and set to arbitrary number

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: impl init_tensor

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: update supports_op matmul matrix

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix incorrect ztensor shape, reduce memory padding

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: code clean up

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: impl matmul

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix compiler error missing type

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix missing data transform call

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add bias init_tensor

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: tighten memory usage, change string allocation

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add bias ztensor and data free

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add bias data transform

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add more debug info for extra buffer transform

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add logger to check if mat mul ops go through set_tensor

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: activate bias transform in matmul

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: move weights transform into mulmat

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add more safeguards in matmul

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix sequencing of transforms

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: bugfix transform ztensor vs origtensor

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: figure out why sigtrap is happening

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix sigsegv

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: move everything back to local declaration

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: move bias data to local also

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: bring back working matmul

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: rewrite into mre

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix missing vector import

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix missing vector import in header

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: attempt to fix sigsegv

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix missing load tensor

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix invalid ztensor buffer release

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add logging to debug free buffer

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: remove free_buffer debug info

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add parmblkformat detections

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add nnpa installed detection

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add zdnn_init call for static libs

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add init_tensor

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: attempt at fixing invalid buffer

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: switch to using deque to fix pointer deref problem

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add weights logging to check

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: attempt to use unique ptr

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add tensor to pre_tfm_desc logging

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add inputs logging

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: disable op_none initialisation for testing

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix missing return from init_tensor

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: load ztensors in cgraph exec

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: work on moving output ztensor as well

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: disable logging and breakpoints for full test

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: attempt at manually changing the layout

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: attempt at using default nwhc format instead

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: disable global load ztensor for now

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix erroneous output load tensor

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: add guards to prevent loading ztensor if transformed

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: code cleanup

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: bring load ztensor back to init routine

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: code clean up

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix ztensor deallocation abort

stabilise ggml <-> zdnn api

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: clean up matmul selection

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: clean up project structure

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: update documentation, prepare for upstream

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* chore: add codeowners

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: disable batched matmul

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: attempt at fixing tensor views during matmul

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: deny all view tensors directly

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix pr comments

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* docs: update ops docs for zdnn

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: redo test-backend-ops for ops.md

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-zdnn: fix typo in build-s390x.md

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* codeowners: remove taronaeo for now

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "codeowners: remove taronaeo for now"

This reverts commit 411ea4ed78d08778967bd0bd33a6538cfcbe082f.

* ggml-zdnn: remove unused ggml_zdnn macro

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-08-18 20:30:45 +03:00
Johannes Gäßler ba32f5df0a CUDA: fix negative KV_max values in FA (llama/15321) 2025-08-18 20:30:45 +03:00
uvos 0e15332255 HIP: Cleanup hipification header (llama/15285)
add explicit conversion operator to support older versions of rocm
Switch over to hip_bf16 from legacy hip_bfloat16
Simplify RDNA3 define
Reduce swap over of new hipblas api to rocm 6.5 as this version is used for rocm 7.0 previews

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-08-18 20:30:45 +03:00
Jeff Bolz 1d8b21caa0 vulkan: perf_logger improvements (llama/15246)
* vulkan: perf_logger improvements

- Account for batch dimension in flops calculation.
- Fix how "_VEC" is detected for mat_mul_id.
- Fix "n" dimension for mat_mul_id (in case of broadcasting).
- Include a->type in name.

* use <=mul_mat_vec_max_cols rather than ==1
2025-08-18 20:30:45 +03:00
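On the flops accounting: a single (M x K) by (K x N) multiply costs roughly 2*M*N*K flops, and a batched multiply repeats that per matrix, so the batch dimension has to enter the estimate. A trivial C++ illustration with made-up sizes:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Illustrative shapes only; the point is that the batch count multiplies
    // the per-matrix 2*M*N*K cost.
    const uint64_t M = 4096, N = 512, K = 4096, batch = 8;
    const uint64_t flops = 2ull * M * N * K * batch;
    printf("estimated flops: %llu\n", (unsigned long long) flops);
}
```
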
Jason Ni 4a6cf896ad ggml: fix ggml_conv_1d_dw bug (ggml/1323)
* ggml: fix ggml_conv_1d_dw bug

* Fixed conv1d_dw weight tensor dimension.
2025-08-18 20:30:45 +03:00
Sigbjørn Skjæret 367cd11f5d cuda : fix GGML_CUDA_GRAPHS=OFF (llama/15300)
* fix USE_CUDA_GRAPH=OFF

ggml-ci

* check capture status

* completely disable capturing check instead
2025-08-18 20:30:45 +03:00
Jonathan Graehl c76ec72d59 finetune: SGD optimizer, more CLI args (llama/13873)
* examples/finetune -opt SGD (stochastic gradient descent) memory opt

add unit tested GGML_OPT_OPTIMIZER_SGD to ggml - avoids allocating
m, v tensors.

support finetune.cpp arg -opt SGD (or sgd). (default adamw as before)

llama 3.2-1b-F32 result: observed 11gb gpu ram (41 sec/epoch)
when using SGD instead of 19gb (55 sec/epoch) using adamw.
(wikipedia 100 lines finetune)

(
using the same GPU memory, adamw can only handle 512 batch/context before
OOM, reaching:
train: [███████▉] data=0000140/0000140 loss=0.02575±0.00099 acc=99.52±0.03% t=00:00:47 ETA=00:00:00
val:   [███████▉] data=0000008/0000008 loss=4.76565±0.28810 acc=41.46±0.77% t=00:00:00 ETA=00:00:00

SGD is superior, though it converges more slowly, with a max of 1728
batch/context before OOM (note especially the better validation perf):
train: [███████▉] data=0000039/0000039 loss=0.00371±0.00010 acc=99.96±0.01% t=00:00:41 ETA=00:00:00
val:   [███████▉] data=0000003/0000003 loss=5.11406±0.76034 acc=48.01±0.69% t=00:00:01 ETA=00:00:00
)

note: when finetuning long enough (or w/ enough -lr),
validation accuracy *eventually* drops ('catastrophic forgetting')

-lr-half (halflife) option useful for SGD to avoid oscillation or
super slow underdamped learning (makes setting -lr more forgiving).
terminal -lr for now is set by lr-halvings i.e. if you want at most
1/8 the initial -lr you set -lr-halvings 3.

note: objective loss not directly comparable between adamw, sgd? -
check perplexity or accuracy or consider relative improvements
for convergence

new finetune args -wd 1e-9 to enable weight decay in sgd or adamw,
and max -epochs N (default 2 as before)

cache (1 - wd*alpha) in 'adamw' opt struct -
no noticeable perf benefit, disabled (still done
for new SGD though)

since opt. memory is pre-allocated, the ggml_opt_get_optimizer_params
would probably be able to change between SGD and AdamW with each epoch
but would need to use adamw for the first (unconfirmed - no cmdline arg
to set such a policy yet)

test-opt checks adamw as before and now sgd (except for a few disabled
tests for sgd only; probably just needs logging values and adding
alternate reference values);  tolerance on the 'regression'
test is broader for sgd (so we don't need many more epochs)

* Vulkan: Implement GGML_OP_OPT_STEP_SGD

* tests: Fix OPT_STEP_SGD test-backend-ops

* SGD op param store weight-decay and not 1-alpha*wd

* minor + cosmetic changes

* fix vulkan sgd

* try CI fix

---------

Co-authored-by: 0cc4m <picard12@live.de>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-08-18 20:30:45 +03:00
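As a rough sketch of the two knobs described in the message (not the ggml implementation): SGD with decoupled weight decay multiplies the weight by (1 - alpha*wd) before the gradient step, and `-lr-halvings N` caps the final learning rate at 1/2^N of the initial value:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Plain SGD step with decoupled weight decay: w <- w*(1 - alpha*wd) - alpha*g
static void sgd_step(std::vector<float> &w, const std::vector<float> &g,
                     float alpha, float wd) {
    for (size_t i = 0; i < w.size(); ++i) {
        w[i] = w[i] * (1.0f - alpha * wd) - alpha * g[i];
    }
}

int main() {
    const float lr0      = 1e-3f;
    const int   halvings = 3;
    // "-lr-halvings 3" means the terminal learning rate is at most lr0 / 8.
    printf("terminal lr = %g\n", lr0 * std::pow(0.5f, halvings));

    std::vector<float> w = {1.0f, -2.0f};
    sgd_step(w, {0.1f, 0.1f}, lr0, 1e-9f);
    printf("w = %g %g\n", w[0], w[1]);
}
```
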
uvos cbaec6c4ac HIP: bump requirement to rocm 6.1 (llama/15296) 2025-08-18 20:30:45 +03:00
Judd 80ef57f0f0 ggml : update `ggml_rope_multi` (llama/12665)
* update `rope_multi`:

1. add `ggml_rope_multi_inplace`;
1. use `GGML_MROPE_SECTIONS` instead of 4.

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-08-18 20:30:45 +03:00
Georgi Gerganov 0e8b244366 ggml : repack block_iq4_nlx8 (llama/14904)
ggml-ci
2025-08-18 20:30:45 +03:00
Oliver Simons b8b1b50c47 CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (llama/15132)
* Factor out `reduce_rows_f32` from common.cuh

This increases iteration cycle speed by not having to recompile
every kernel all the time

* Hide memory-latency by loop unrolling in reduce_rows_f32

* Further optimizations to `reduce_rows_f32`

1. Increase threadblock size to better hide latency of memory requests.
   As a consequence of bigger threadblocks, do 2-step summation, using
   shared memory to communicate results between invocations
2. Use sum_temp array to reduce waits on sum
3. Adjust num_unroll to reflect bigger threadblock
4. Improve default block_dims, increase support for more block_dims

* Add perf tests for `reduce_rows_f32` kernel

* Add heuristic to toggle 128/512 threads based on sm count

Break even point was the minimum of the following multiples.

| GPU Model                     | Nrow SM Count Multiple |
| ----------------------------- | ---------------------- |
| RTX 4000 SFF ADA              | 2.0x                   |
| RTX 6000 ADA                  | 2.5x                   |
| RTX PRO 6000 Blackwell Max-Q  | 3.04x                  |
| RTX PRO 4500 Blackwell        | 3.15x                  |

* Ensure perf gains also for small ncols and large nrows

Alternative to this, one could have also made the number of unrollings
template-able, but that would require compiling the kernel multiple
times, increasing binary size unnecessarily

* Modify perf and unit-tests

* Apply auto-formatting by clang

* Fix CI build failure

See https://github.com/ggml-org/llama.cpp/actions/runs/16798370266/job/47573716079?pr=15132#step:7:486
Building with VS generator worked though.

* Remove sm_count property from `ggml_backend_cuda_context`

Requested by @JohannesGaessler, and should fix remaining CI issues as a
side-effect

* Add CUB-based implementation for GGML_OP_MEAN

Currently this branch is only executed for nrows==1

* Add heuristics to execute CUB branch only when it brings perf

Heuristics were determined on the following HW:

* RTX 4000 SFF ADA
* RTX 6000 ADA
* RTX PRO 6000 Blackwell Max-Q
* RTX PRO 4500 Blackwell

* Add unit-test for CUB-based mean

Tests should run with CUDA Graphs enabled per default on NVGPUs

* Rename `USE_CUB` to `GGML_CUDA_USE_CUB`

Suggested by @JohannesGaessler

* Unindent Preprocessor directives

See
https://github.com/ggml-org/llama.cpp/pull/15132#discussion_r2269213506
2025-08-18 20:30:45 +03:00
Tak-RS 4e234ac013 ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others) (llama/15188)
* ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others). Fixes #15055

* ggml-rpc: rename RPC_IO_CHUNK->MAX_CHUNK_SIZE, use std::min() for cap, switch to GGML_LOG_ERROR, handle 0-length send/recv

* rpc: drop n==0 special case in send_data(); retry in loop per review

* rpc: remove trailing whitespace in send_data()

---------

Co-authored-by: Shinnosuke Takagi <nosuke@nosukenoMacBook-Pro.local>
2025-08-18 20:30:45 +03:00
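The chunking pattern is the classic partial-write loop for POSIX sockets: cap each send() call and keep looping until the whole buffer is out. A hedged C++ sketch (the 8 MiB cap is an assumed value; the commit only says the constant is named MAX_CHUNK_SIZE):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <sys/socket.h>
#include <sys/types.h>

// Cap every send() at a fixed chunk size and retry in a loop until the whole
// buffer has been written; partial sends simply continue the loop.
static bool send_all(int fd, const uint8_t *data, size_t size) {
    constexpr size_t MAX_CHUNK_SIZE = 1u << 23;  // 8 MiB, assumed value
    size_t sent = 0;
    while (sent < size) {
        const size_t  chunk = std::min(size - sent, MAX_CHUNK_SIZE);
        const ssize_t n     = send(fd, data + sent, chunk, 0);
        if (n < 0) {
            return false;  // real code would inspect errno / log here
        }
        sent += (size_t) n;
    }
    return true;
}

int main() {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return 1;
    const uint8_t buf[4] = {1, 2, 3, 4};
    printf("sent ok: %d\n", send_all(sv[0], buf, sizeof(buf)));
}
```
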
uvos 8df931b608 HIP: disable sync warp shuffle operators from clr amd_warp_sync_functions.h (llama/15273) 2025-08-18 20:30:45 +03:00
Romain Biessy 1334f434f3 sycl: Fix and disable more configurations of mul_mat (llama/15151)
* sycl: Fix and disable more configurations of mul_mat

* Disable more configurations
2025-08-18 20:30:45 +03:00
rmatif 139110701e opencl: allow mixed f16/f32 `add` (llama/15140) 2025-08-18 20:30:45 +03:00
Aman Gupta 082c7ba67c CUDA cmake: add `-lineinfo` for easier debug (llama/15260) 2025-08-18 20:30:45 +03:00
Chenguang Li 0effaad964 CANN: GGML_OP_CPY optimization (llama/15070)
Signed-off-by: noemotiovon <757486878@qq.com>
2025-08-18 20:30:45 +03:00
R0CKSTAR 8e2ddfec31 musa: fix failures in test-backend-ops for mul_mat_id op (llama/15236)
* musa: fix failures in test-backend-ops for mul_mat_id op

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Address review comments

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-08-18 20:30:45 +03:00
hipudding 3e2c262c08 CANN: Add broadcast for softmax and FA (llama/15208)
* refactor softmax

* fix fa

* fix mask shape

* format

* add comments

* Remove whitespace
2025-08-18 20:30:45 +03:00
Charles Xu 30cc11dc94 kleidiai: fix unsigned overflow bug (llama/15150)
* kleidiai: fix unsigned overflow bug

* address review comments
2025-08-18 20:30:45 +03:00
David Zhao 457eadfe6f cuda: refactored ssm_scan and use CUB (llama/13291)
* cuda: refactored ssm_scan to use CUB

* fixed compilation error when when not using CUB

* assign L to constant and use size_t instead of int

* deduplicated functions

* change min blocks per mp to 1

* Use cub load and store warp transpose

* suppress clang warning
2025-08-18 20:30:45 +03:00
Aman Gupta 93c7a08019 CUDA: add attention sinks for tile and wmma (llama/15178)
* CUDA: add attention sinks for tile and wmma

* Review: formatting changes + remove syncthreads from tile + remove warp_reduce_max from wmma
2025-08-18 20:30:45 +03:00
compilade 62566a5436 gguf-py : add Numpy MXFP4 de/quantization support (llama/15111)
* gguf-py : add MXFP4 de/quantization support

* ggml-quants : handle zero amax for MXFP4
2025-08-18 20:30:45 +03:00
AN Long 573bf9d128 ggml : fix field name when new ggml_backend (llama/14944) 2025-08-18 20:30:45 +03:00
Johannes Gäßler 2baea5e4b3 CUDA: attention sinks for mma FlashAttention (llama/15157) 2025-08-18 20:30:45 +03:00
lhez 8a36cd924a opencl: support sink in `soft_max` (attn sinks) (llama/15152) 2025-08-18 20:30:45 +03:00
Jeff Bolz 1984530710 vulkan: support fattn sinks (llama/15126) 2025-08-18 20:30:45 +03:00
Jeff Bolz 414e9074e0 vulkan: Add env var to disable host visible vidmem (llama/15109) 2025-08-18 20:30:45 +03:00
uvos 813ceb2a74 HIP: add cmake option to enable compiler output of kernel resource usage metrics (llama/15103) 2025-08-18 20:30:45 +03:00
Christian Kastner 6d7ffea292 ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (llama/15094)
Any available libraries are found and loaded dynamically at runtime.
2025-08-18 20:30:45 +03:00
Johannes Gäßler 5caf8a1ea2 CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (llama/15131)
* CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16
2025-08-18 20:30:45 +03:00
rmatif b405fd88b3 fix profiling crash (llama/15072) 2025-08-18 20:30:45 +03:00
lhez d153cfb507 opencl: add `swiglu_oai` and `add_id` (llama/15121)
* opencl: add `swiglu-oai`

* opencl: add `add_id`

* opencl: add missing `add_id.cl`
2025-08-18 20:30:45 +03:00
Diego Devesa 6fb55d8f7c ggml : fix fallback to CPU for ununsupported ops (llama/15118) 2025-08-18 20:30:45 +03:00
Chenguang Li e809e81e69 CANN: add support for ACL Graph (llama/15065)
* feat(cann): add optional support for ACL Graph execution

This commit adds support for executing ggml computational graphs using
Huawei's ACL graph mode via the USE_CANN_GRAPH flag. The support can be
enabled at compile time using the CMake option:

    -DUSE_CANN_GRAPH=ON

By default, ACL graph execution is **disabled**, and the fallback path
uses node-by-node execution.

Key additions:
- CMake option  to toggle graph mode
- Graph capture and execution logic using
- Tensor property matching to determine whether graph update is required
- Safe fallback and logging if the environment variable LLAMA_SET_ROWS
  is unset or invalid

This prepares the backend for performance improvements in repetitive graph
execution scenarios on Ascend devices.

Signed-off-by: noemotiovon <757486878@qq.com>

* Fix review comments

Signed-off-by: noemotiovon <757486878@qq.com>

* rename USE_CANN_GRAPH to USE_ACL_GRAPH

Signed-off-by: noemotiovon <757486878@qq.com>

* fix typo

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
2025-08-18 20:30:45 +03:00
Georgi Gerganov d3aab3efde llama : add gpt-oss (llama/15091)
* oai moe

* compat with new checkpoint

* add attn sink impl

* add rope scaling yarn

* logits match with latest transformers code

* wip chat template

* rm trailing space

* use ggml_scale_bias

* rm redundant is_swa_all

* convert interleaved gate_up

* graph : fix activation function to match reference (llama/7)

* vocab : handle o200k_harmony special tokens

* ggml : add attention sinks support (llama/1)

* llama : add attn sinks

* ggml : add attn sinks

* cuda : add attn sinks

* vulkan : add support for sinks in softmax

remove unnecessary return

* ggml : add fused swiglu_oai op (llama/11)

* ggml : add fused swiglu_oai op

* Update ggml/src/ggml-cpu/ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* update CUDA impl

* cont : metal impl

* add vulkan impl

* test-backend-ops : more test cases, clean up

* llama : remove unfused impl

* remove extra lines

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: slaren <slarengh@gmail.com>

* repack mxfp4 upon conversion

* clean up a bit

* enable thinking

* add quick hack to render only some special tokens

* fix bf16 conversion

* remove vocab hack

* webui ok

* support chat parsing for gpt-oss

* fix webui

* direct mapping mxfp4, FINALLY

* force using mxfp4

* properly use lazy tensor

* ggml : add mxfp4

ggml : use e8m0 conversion instead of powf

Co-authored-by: Diego Devesa <slarengh@gmail.com>

change kvalues_mxfp4 table to match e2m1 (llama/6)

metal : remove quantization for now (not used)

cuda : fix disabled CUDA graphs due to ffn moe bias

vulkan : add support for mxfp4

cont : add cm2 dequant

* ggml : add ggml_add_id (llama/13)

* ggml : add ggml_add_id

* add cuda impl

* llama : add weight support check for add_id

* perf opt

* add vulkan impl

* rename cuda files

* add metal impl

* allow in-place ggml_add_id

* llama : keep biases on CPU with --cpu-moe

* llama : fix compile error

ggml-ci

* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw

ggml-ci

* cleanup

ggml-ci

* sycl : fix supports_op for MXFP4

ggml-ci

* fix Unknown reasoning format

* ggml-cpu : fix AVX build

ggml-ci

* fix hip build

ggml-ci

* cuda : add mxfp4 dequantization support for cuBLAS

ggml-ci

* ggml-cpu : fix mxfp4 fallback definitions for some architectures

ggml-ci

* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: slaren <slarengh@gmail.com>
2025-08-18 20:30:45 +03:00
Romain Biessy 6558022873 sycl: fix mul_mat selection (llama/15092) 2025-08-18 20:30:45 +03:00
Christian Kastner 349b9a2097 cmake: Add GGML_BACKEND_DIR option (llama/15074)
* cmake: Add GGML_BACKEND_DIR option

This can be used by distributions to specify where to look for backends
when ggml is built with GGML_BACKEND_DL=ON.

* Fix phrasing
2025-08-18 20:30:45 +03:00
Jeff Bolz 00ff38376a vulkan: fix build when using glslang that does not support coopmat2 (llama/15062) 2025-08-18 20:30:45 +03:00
Jeff Bolz abc971e69a vulkan: Use coopmat2 for conv2d (llama/14982) 2025-08-18 20:30:45 +03:00
lhez 53d8c5179f opencl: fix adreno compiler detection logic (llama/15029) 2025-08-18 20:30:45 +03:00
Johannes Gäßler d6e7315717 CUDA: use mma FA kernel for gqa > 4 on RTX 4000 (llama/15035) 2025-08-18 20:30:45 +03:00
leejet a3123e105b cuda: make im2col a little faster (llama/15025) 2025-08-18 20:30:45 +03:00
Georgi Gerganov d119ecf0c1 cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1 (llama/15038)
* cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1

ggml-ci

* cont : fix cont types

ggml-ci

* cont : adopt variable names and comment from the other branch
2025-08-18 20:30:45 +03:00
Jeff Bolz b374fd6172 vulkan: coopmat2 mul_mat optimizations (llama/14934)
- Increase tile size for k-quants, to match non-k-quants
- Choose more carefully between large and medium tiles, considering how it
  interacts with split_k
- Allow larger/non-power of two split_k, and make the splits a multiple of 256
- Use split_k==3 when >1/2 and <=2/3 of the SMs would have been used
2025-08-18 20:30:45 +03:00
Jeff Bolz 97341224b2 vulkan: Support ne[3]>1 in noncontig matrix-vector multiply (llama/15015) 2025-08-18 20:30:45 +03:00
Jeff Bolz 46e9e5b9a7 vulkan: optimizations for direct convolution (llama/14933)
* vulkan: optimizations for direct convolution

- Empirically choose a better tile size. Reducing BS_K/BS_NPQ helps fill
  the GPU. The new size should be amenable to using coopmat, too.
- Fix shmem bank conflicts. 16B padding should work with coopmat.
- Some explicit loop unrolling.
- Skip math/stores work for parts of the tile that are OOB.
- Apply fastdiv opt.
- Disable shuffles for NV.

* Three tiles sizes for CONV_2D, and a heuristic to choose

* reallow collectives for pre-Turing

* make SHMEM_PAD a spec constant

* fixes for intel perf - no shmem padding, placeholder shader core count

* shader variants with/without unrolling

* 0cc4m's fixes for AMD perf

Co-authored-by: 0cc4m <picard12@live.de>

---------

Co-authored-by: 0cc4m <picard12@live.de>
2025-08-18 20:30:45 +03:00
Johannes Gäßler 7e7557ac50 CUDA: fix MMQ nwarps for AMD with warp_size==32 (llama/15014) 2025-08-18 20:30:45 +03:00
lhez ba6a81c9c9 opencl: add f16 for `add`, `sub`, `mul`, `div` (llama/14984) 2025-08-18 20:30:45 +03:00
Srihari-mcw 1c6cb7df47 ggml : Q2k interleaving implementation - x86/x64 SIMD (llama/14373)
* Initial Q2_K Block Interleaving Implementation

* Addressed review comments and clean up of the code

* Post rebase fixes

* Initial CI/CD fixes

* Update declarations in arch-fallback.h

* Changes for GEMV Q2_K in arch-fallback.h

* Enable repacking only on AVX-512 machines

* Update comments in repack.cpp

* Address q2k comments

---------

Co-authored-by: Manogna-Sree <elisetti.manognasree@multicorewareinc.com>
2025-08-18 20:30:45 +03:00
diannao 78668cb8d1 docker : add cann build pipeline (llama/14591)
* docker: add cann build pipeline

* docker: add cann build pipeline

* docker: fix cann devops

* cann : fix multi card hccl

* Update ggml/src/ggml-cann/ggml-cann.cpp

Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

* Update ggml-cann.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
2025-08-18 20:30:45 +03:00
Ruben Ortlam 41e161657e Vulkan: Fix minor debug mode issues (llama/14899)
* vulkan: fix debug mode issues

* vulkan: remove broken check_results GGML_OP_SET_ROWS support
2025-08-18 20:30:45 +03:00
hipudding 572152d6af CANN: Improve loading efficiency after converting weights to NZ format. (llama/14985)
* CANN: Improve loading efficiency after converting weights to NZ format.

* CANN: fix typo
2025-08-18 20:30:45 +03:00
lhez 4904bc3bda opencl: add `mul_mat_f32_f32_l4_lm` and `mul_mat_f16_f32_l4_lm` (llama/14809) 2025-08-18 20:30:45 +03:00
uvos 8ed27b407d HIP: enable mfma mmq on gfx908 and gfx90a for select datatypes and shapes (llama/14949) 2025-08-18 20:30:45 +03:00
Johannes Gäßler 113d88686b CUDA: skip masked KV slices for all FA kernels (llama/14924) 2025-08-18 20:30:45 +03:00
uvos 4e624e42fa HIP: remove the use of __HIP_PLATFORM_AMD__, explicitly support only AMD targets (llama/14945) 2025-08-18 20:30:45 +03:00
uvos 7f203f41aa HIP: add GGML_HIP_MMQ_MFMA option to allow disabling the MFMA path. (llama/14930)
This is useful for testing for regressions on GCN with CDNA hardware.

With GGML_HIP_MMQ_MFMA=Off and GGML_CUDA_FORCE_MMQ=On we can conveniently test the GCN code path on CDNA. As CDNA is essentially GCN renamed, with MFMA and limited-use ACC registers added, this provides a good alternative for regression testing when GCN hardware is not available.
2025-08-18 20:30:45 +03:00
uvos a3899e78af HIP: Ignore unsupported unroll transformation in fattn-vec (llama/14931)
llvm with the amdgcn target does not support unrolling loops with conditional break statements when those statements cannot be resolved at compile time. As in other places in GGML, let's simply ignore this warning.
2025-08-18 20:30:45 +03:00
hipudding c42e55e054 CANN: Add ggml_set_rows (llama/14943) 2025-08-18 20:30:45 +03:00
Sigbjørn Skjæret 682d659416 cuda : add softcap fusion (llama/14907) 2025-08-18 20:30:45 +03:00
Aman Gupta 577f47111e CUDA: add roll (llama/14919)
* CUDA: add roll

* Make everything const, use __restrict__
2025-08-18 20:30:45 +03:00
xctan 4dca34a4de ggml-cpu : deduplicate scalar implementations (llama/14897)
* remove redundant code in riscv

* remove redundant code in arm

* remove redundant code in loongarch

* remove redundant code in ppc

* remove redundant code in s390

* remove redundant code in wasm

* remove redundant code in x86

* remove fallback headers

* fix x86 ggml_vec_dot_q8_0_q8_0
2025-08-18 20:30:45 +03:00
Akarshan Biswas 4908e9dd05 SYCL: Add set_rows support for quantized types (llama/14883)
* SYCL: Add set_rows support for quantized types

This commit adds support for GGML_OP_SET_ROWS operation for various
quantized tensor types (Q8_0, Q5_1, Q5_0, Q4_1, Q4_0, IQ4_NL) and BF16
type in the SYCL backend.

The quantization/dequantization copy kernels were moved from cpy.cpp
to cpy.hpp to make them available for set_rows.cpp.

This addresses part of the TODOs mentioned in the code.

* Use get_global_linear_id() instead

ggml-ci

* Fix formatting

ggml-ci

* Use const for ne11 and size_t variables in set_rows_sycl_q

ggml-ci

* Increase block size for q kernel to 256

ggml-ci

* Cleanup imports

* Add float.h to cpy.hpp
2025-08-18 20:30:45 +03:00
Johannes Gäßler 24d3524bfd CUDA: fix pointer incrementation in FA (llama/14916) 2025-08-18 20:30:45 +03:00
Alberto Cabrera Pérez 923619ffd5 sycl: refactor quantization to q8_1 (llama/14815)
* sycl: quantization to q8_1 refactor

* Refactored src1 copy logic in op_mul_mat
2025-08-18 20:30:45 +03:00
Kai Pastor 45784c05ae cmake : Fix BLAS link interface (ggml/1316) 2025-08-18 20:30:45 +03:00
Kai Pastor 01bdc522e0 vulkan : fix 32-bit builds (ggml/1313)
The pipeline member can be cast to VkPipeline.
This is a VkPipeline_T* on 64 bit but a uint64_t on 32 bit.
Cf. VK_DEFINE_NON_DISPATCHABLE_HANDLE documentation.
2025-08-18 20:30:45 +03:00
Georgi Gerganov 9446500b9d scripts : update sync scripts 2025-08-18 20:30:45 +03:00
Daniel Bevenius 040510a132
node : add win platform check for require path (#3363)
This commit adds a check to the platform in use and adjust the path to
the addon.node shared library.

The motivation for this change is that on windows addon.node library is
built into build\bin\Release and on linux into build/Release.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3360
2025-08-15 14:54:23 +02:00
ustas 16c2924cb2
ci : update main-cuda.Dockerfile (#3371)
* Update main-cuda.Dockerfile

Bump CUDA to 13.0.0 and exclude the `compute_50` arch from build because it was deprecated and now throws an error.

* Add quotes in main-cuda.Dockerfile
2025-08-13 19:30:45 +02:00
Dw9 5527454cdb
whisper : fixed crash in GPU device selection on multi-GPU systems (#3372) 2025-08-12 13:58:52 +03:00
Georgi Gerganov b02242d0ad
wasm : change ggml model host to HF (#3369) 2025-08-10 13:00:17 +03:00
Adam Debono 4245c77b65
ruby : Add ruby binding for max_len (#3365)
* add ruby binding for max_len

* add test, update param numbers
2025-08-07 11:37:45 +09:00
Daniel Bevenius 0becabc8d6
stream.wasm : add language selection support (#3354)
* stream.wasm : add language selection support

This commit adds support for selecting the language in the stream.wasm
example. This includes adding the model `base`, which supports
multilingual transcription, and allowing the user to select a language
from a dropdown menu in the HTML interface.

The motivation for this is that it allows users to transcribe audio in
various languages.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3347

* squash! stream.wasm : add language selection support

Remove strdup() for language in stream.wasm and update button text for
base (should not be "base.en" but just "base").
2025-08-02 07:03:04 +02:00
Georgi Gerganov f7502dca87
whisper : reset conv scheduler when CoreML is used (#3350)
ggml-ci
2025-07-30 21:54:58 +03:00
Georgi Gerganov 28b39c624e
ggml : remove old kompute, cann (skip) (#3349)
ggml-ci
2025-07-30 16:08:57 +03:00
Georgi Gerganov d0a9d8c7f8 talk-llama : sync llama.cpp 2025-07-28 13:02:32 +03:00
Georgi Gerganov 5b4646df1a sync : ggml
ggml-ci
2025-07-28 13:02:32 +03:00
Erik Scholz d96f4d8ea1 vulkan : add fp16 support for the conv_2d kernel (llama/14872)
* add f16 to conv_2d testing
* weaken conv2d test error threshold
2025-07-28 13:02:32 +03:00
Jeff Bolz 5693b857d2 vulkan: skip empty set_rows to avoid invalid API usage (llama/14860) 2025-07-28 13:02:32 +03:00
deepsek b275e52b46 HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (llama/14624)
This commit adds support for MFMA instructions to MMQ. CDNA1/GFX908, CDNA2/GFX90a and CDNA3/GFX942 are supported by the MFMA-enabled code path added by this commit. The code path and stream-k are only enabled on CDNA3 for now, as they fail to outperform blas in all cases on the other devices.
Blas is currently only consistently outperformed on CDNA3 due to issues in the amd-provided blas libraries.
This commit also makes MMQ more aware of different warp sizes and, as a side effect, improves the performance of all quant formats on GCN gpus besides q4_0 and q4_1, which regress slightly.
2025-07-28 13:02:32 +03:00
hipudding 4692558a1f CANN: Implement GLU ops (llama/14884)
Implement REGLU, GEGLU, SWIGLU ops according to #14158
2025-07-28 13:02:32 +03:00
R0CKSTAR 8643960acc musa: fix build warnings (unused variable) (llama/14869)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-07-28 13:02:32 +03:00
Aaron Teo 6629201471 ggml-cpu : disable GGML_NNPA by default due to instability (llama/14880)
* docs: update s390x document for sentencepiece

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit e086c5e3a7ab3463d8e0906efcfa39352db0a48d)

* docs: update huggingface links + reword

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 8410b085ea8c46e22be38266147a1e94757ef108)

* ggml-cpu: disable ggml-nnpa compile flag by default

fixes #14877

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 412f4c7c88894b8f55846b4719c76892a23cfe09)

* docs: update s390x build docs to reflect nnpa disable

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit c1eeae1d0c2edc74ab9fbeff2707b0d357cf0b4d)

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-07-28 13:02:32 +03:00
Gabe Goodhart 0b0de0bbf2 metal: SSM_SCAN performance (llama/14743)
* feat: Add s_off as a parameter in the args struct

This may not be necessary, but it more closely mirrors the CUDA kernel

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* perf: Parallelize mamba2 SSM_SCAN metal kernel over d_state

This is a first attempt at optimizing the metal kernel. The changes here
are:

- Launch the kernel with a thread group of size d_state
- Use simd groups and shared memory to do the summation for the y
  computation

When tested with G4 tiny preview, this shows roughly a 3x speedup on
prefill and 15% speedup on decode.

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Update logic to correctly do the multi-layer parallel sum

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Correctly size the shared memory buffer and assert expected size relationships

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* refactor: Compute block offsets once rather than once per token

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Use local variable for state recursion

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Use a secondary simd_sum instead of a for loop

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Add assertion and comment about relationship between simd size and num simd groups

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Parallelize of d_state for mamba-1

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Parallel sum in SSM_CONV

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* Revert "feat: Parallel sum in SSM_CONV"

After discussion with @compilade, the size of the parallelism here is
not worth the cost in complexity or overhead of the parallel for.

https://github.com/ggml-org/llama.cpp/pull/14743#discussion_r2223395357

This reverts commit 16bc059660c1c59e566628201c0ca2c20c9f4bc3.

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* refactor: Simplify shared memory sizing

Branch: GraniteFourPerf

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-Authored-By: Georgi Gerganov <ggerganov@gmail.com>

---------

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-07-28 13:02:32 +03:00
lhez d414c3f6ac opencl: add fused `rms_norm_mul` (llama/14841)
* opencl: add fused `rms_norm` + `mul`

* opencl: improve workgroup size for `rms_norm_mul`
2025-07-28 13:02:32 +03:00
Oliver Simons bbf2389919 ggml : remove invalid portPos specifiers from dot files (llama/14838)
Neither "g" nor "x" are valid portPos specifiers per the official
[graphviz documents](https://graphviz.org/docs/attr-types/portPos/):

> If a compass point is used, it must have the form "n","ne","e","se","s","sw","w","nw","c","_".

I tested locally for it to fall back to default portPos specifier if an
invalid portPos is specified. As a consequence, we can remove associated
code.
2025-07-28 13:02:32 +03:00
Chris Rohlf 56350ecc12 rpc : check for null buffers in get/set/copy tensor endpoints (llama/14868) 2025-07-28 13:02:32 +03:00
Diego Devesa 270fa9b25c sched : fix multiple evaluations of the same graph with pipeline parallelism (llama/14855)
ggml-ci
2025-07-28 13:02:32 +03:00
R0CKSTAR 89ae789450 musa: upgrade musa sdk to rc4.2.0 (llama/14498)
* musa: apply mublas API changes

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: update musa version to 4.2.0

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: restore MUSA graph settings in CMakeLists.txt

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: disable mudnnMemcpyAsync by default

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: switch back to non-mudnn images

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* minor changes

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: restore rc in docker image tag

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-07-28 13:02:32 +03:00
Kai Pastor 5823eabc78 cmake : Indent ggml-config.cmake (ggml/1310) 2025-07-28 13:02:32 +03:00
Alberto Cabrera Pérez 7dc5ae2d6a sycl: fixed semantics of block offset calculation (llama/14814) 2025-07-28 13:02:32 +03:00
Georgi Gerganov faedce5dcb metal : fix fusion across different encoders (llama/14849)
* metal : fix fusion across different encoders

ggml-ci

* cont : add assertion

ggml-ci
2025-07-28 13:02:32 +03:00
Donghyeon Jeong e648f9f079 sycl: fix undefined variable in work group size check (llama/14843) 2025-07-28 13:02:32 +03:00
Johannes Gäßler 95efcf011d CUDA: fix overflow in FA, tune performance (llama/14840) 2025-07-28 13:02:32 +03:00
Johannes Gäßler 8272aa9f14 CUDA: fix compilation with GGML_CUDA_F16 (llama/14837) 2025-07-28 13:02:32 +03:00
Johannes Gäßler a65976fc3c CUDA: fix quantized KV cache + multiple sequences (llama/14822)
* CUDA: fix quantized KV cache + multiple sequences

* Update ggml/src/ggml-cuda/fattn-common.cuh

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-07-28 13:02:32 +03:00
lixing-star 026d8a0c6e ggml: fix loongarch quantize_row_q8_1 error (llama/14827) 2025-07-28 13:02:32 +03:00
chen fan 49d5540206 CANN: weight format to NZ for Ascend310P3 (llama/14407)
* weight format to nz for 310p

* remove quant weight format to nz

* clean code

* fix

* make the conditions for converting weights to NZ format consistent

* clean code
2025-07-28 13:02:32 +03:00
Aman Gupta f8402d0a95 CUDA: add fused rms norm (llama/14800) 2025-07-28 13:02:32 +03:00
Jeff Bolz c91361379a vulkan: fix rms_norm_mul to handle broadcasting dim0 (llama/14817) 2025-07-28 13:02:32 +03:00
Sigbjørn Skjæret 810018a63a cuda : implement bf16 cpy ops and enable bf16 cont (llama/14763)
* implement bf16 cpy ops and enable bf16 cont

* deduplicate copy functions

* deduplicate checks
2025-07-28 13:02:32 +03:00
lhez de49384ab3 opencl: remove unreachable `return` (llama/14806) 2025-07-28 13:02:32 +03:00
R0CKSTAR 9008410087 cuda: remove linking to cublasLt (llama/14790)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-07-28 13:02:32 +03:00
Sigbjørn Skjæret e81e17b048 opencl: fix `im2col` when `KW!=KH` (llama/14803) 2025-07-28 13:02:32 +03:00
rmatif a2a5612402 opencl: add conv2d kernel (llama/14403)
* add conv2d kernel

* fix trailing whitespace

* whitespace fix

* handle f16 input and f16 kernel, more opt

* resolve conflicts

* use enqueue_ndrange_kernel
2025-07-28 13:02:32 +03:00
Romain Biessy 52ad451c8a sycl: Fix im2col (llama/14797) 2025-07-28 13:02:32 +03:00
Charles Xu fc2ff438fd kleidiai: add support for get_rows (llama/14676)
* kleidiai: add support for get_rows

* apply fixes based on code review

* apply more fixes based on code review
2025-07-28 13:02:32 +03:00
Jeff Bolz e3f4162a06 vulkan/cuda: Fix im2col when KW!=KH (llama/14789)
The tid is decomposed into "ow + ky*OW + kx*OW*KH". Change "ksize" to match.
2025-07-28 13:02:32 +03:00
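The quoted decomposition is easy to sanity-check on the CPU: unpacking a flat id with strides OW and OW*KH must round-trip exactly, which only holds if the middle stride uses KH rather than KW. A small C++ check:

```cpp
#include <cassert>
#include <cstdio>

int main() {
    // A flat id laid out as "ow + ky*OW + kx*OW*KH" must be unpacked with the
    // same strides, so the total size has to be OW*KH per kx step, not OW*KW.
    const int OW = 7, KW = 3, KH = 5;   // deliberately KW != KH
    for (int tid = 0; tid < OW * KW * KH; ++tid) {
        const int ow = tid % OW;
        const int ky = (tid / OW) % KH;
        const int kx = tid / (OW * KH);
        assert(ow + ky * OW + kx * OW * KH == tid);  // round-trips exactly
    }
    printf("decomposition consistent for OW=%d KW=%d KH=%d\n", OW, KW, KH);
}
```
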
Ervin Áron Tasnádi 92a9e85d8b ggml: adds CONV_2D op and direct GEMM Vulkan implementation (llama/14316)
* ggml/ggml-vulkan/test-backend-ops: adds CONV_2D for Vulkan

* ggml-vulkan: adds f32 scalar shader to compute 2D convolution directly
with gemm (no need for im2col),

* test-backend-ops: adds test_case_ref to check the validity/performance of ops
against reference implementations having different graphs, adds tests

* * Performance fixes: minimized branch divergence, uses collectives to
  eliminate redundant calculation, macros removed.

* Kernel shared memory size check

* Updates test-backend-ops to support graphs for performance
  measurement.

* * Apple/Win32 compile errors fixed

* Subgroup size used to determine tile size -> fixes llvmpipe errors.

* Collectives disabled by default.

* Intel support is disabled as the performance is poor.

* Conv2d enabled for Intel with disabled collectives, disabled for Apple

* test-backend-ops modifications are reverted

* Trailing spaces and missing override fixed.

* Triggering pipeline relaunch.

* Code formatted with .clang-format.
2025-07-28 13:02:32 +03:00
Peter0x44 50f983a17e vulkan: Add logging for bf16 features to ggml_vk_print_gpu_info (#13274) (llama/14707) 2025-07-28 13:02:32 +03:00
0cc4m b06f314667 Vulkan: Fix fprintf format-security warning (llama/14770) 2025-07-28 13:02:32 +03:00
Kai Pastor 5c3b794c51 cmake : fix usage issues (ggml/1257)
* CMake config: Create target only once

Fix error on repeated find_package(ggml).
For simplicity, check only for the top-level ggml::ggml.

* CMake config: Add CUDA link libs

* CMake config: Add OpenCL link libs

* CMake config: Use canonical find_dependency

Use set and append to control link lib variables.
Apply more $<LINK_ONLY...>.

* CMake config: Wire OpenMP dependency
2025-07-28 13:02:32 +03:00
Daniel Bevenius e238dc1bdd ggml-cpu : remove stdlib include from repack.cpp (ggml/1276)
This commit removes the inclusion of `<cstdlib>`.

The motivation for this change is that this source file does not seem to
use any functions from this header and the comment about `qsort` is a
little misleading/confusing.
2025-07-28 13:02:32 +03:00
Rich Waters e7bf0294ec
Support static xcframework packaging in build-xcframework.sh (#3322)
* This commit allows for the building of a static xcframework by adding a
BUILD_STATIC_XCFRAMEWORK option. When enabled, the build-xcframework.sh
script builds a self-contained static whisper.xcframework.

The motivation for this change is so that command line binaries can link
whisper.cpp without forcing users to install the whisper.xcframework
separately.

* Update build-xcframework.sh

Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>

* Address reviewer feedback: remove extra indentation around static xcframework creation.

* squash! Address reviewer feedback: remove extra indentation around static xcframework creation.

Fix whitespaces.

---------

Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2025-07-26 12:25:44 +02:00
Daniel Bevenius 7de8dd783f
examples : add note about WHISPER_WASM_SINGLE_FILE [no ci] (#3332)
This commit adds a note to the README files of the WASM examples
about the `WHISPER_WASM_SINGLE_FILE` option.

The motivation for this is that the option is currently undocumented
and might surprise users who expect a separate .wasm file to be
generated.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3290
2025-07-24 16:06:48 +02:00
Daniel Bevenius 85e474fd55
ci : add paths to build.yml (#3333)
This commit adds specific paths to the GitHub Actions workflow file
`.github/workflows/build.yml`.

The motivation for this is to avoid unnecessary builds when unrelated files
are changed, which can save resources and time during the CI process.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3285
2025-07-24 16:04:21 +02:00
R0CKSTAR 210bbbe4d5
musa: upgrade musa sdk to rc4.2.0 (#3324)
* musa: upgrade musa sdk to 4.2.0

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: restore rc in docker image tag

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-07-24 13:19:57 +03:00
Sacha Arbonel 1f5cf0b288
server : hide language probabilities option behind flag (#3328)
* examples/server: hide language probabilities option behind flag

* code review

* fix
2025-07-21 13:03:54 +02:00
BVK Chaitanya 2e6be2f380
go: fix Mac OS X builds (#3310)
This commit fixes the Go bindings build for Mac OS X (15.1), which is currently failing.

Co-authored-by: Chaitanya Bayapuneni <bvk@mini.cinnamon-interval.ts.net>
2025-07-21 08:47:35 +02:00
Georgi Gerganov c0dc391349 sync : ggml
ggml-ci
2025-07-20 00:23:50 +03:00
Georgi Gerganov 0ed687c6f1 metal : fuse add, mul + add tests (llama/14596)
ggml-ci
2025-07-20 00:23:50 +03:00
Oliver Simons d4a7ea1634 cuda : Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs (llama/14741)
* Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs

Gemma3n uses Matrix-Matrix addition as part of its input processing,
wrongly triggering CUDA_GRAPH disablement on NVGPUs even when batch-size
of 1 is used.

* Exclude `project_per_layer_input` by matching node names

This ensures that all other graphs which don't exhibit this pattern do
not have their behavior changed.

* Revert unnecessary formatting changes
2025-07-20 00:23:50 +03:00
Aman Gupta 9a07cb064a CUDA: set_rows + cpy.cu refactor (llama/14712) 2025-07-20 00:23:50 +03:00
Neo Zhang Jianyu fed20b0682 use max work group size for device to replace the magic number (llama/14732) 2025-07-20 00:23:50 +03:00
Reese Levine 17c5411195 ggml: Add initial WebGPU backend (llama/14521)
* Minimal setup of webgpu backend with dawn. Just prints out the adapter and segfaults

* Initialize webgpu device

* Making progress on setting up the backend

* Finish more boilerplate/utility functions

* Organize file and work on alloc buffer

* Add webgpu_context to prepare for actually running some shaders

* Work on memset and add shader loading

* Work on memset polyfill

* Implement set_tensor as webgpu WriteBuffer, remove host_buffer stubs since webgpu doesn't support it

* Implement get_tensor and buffer_clear

* Finish rest of setup

* Start work on compute graph

* Basic mat mul working

* Work on emscripten build

* Basic WebGPU backend instructions

* Use EMSCRIPTEN flag

* Work on passing ci, implement 4d tensor multiplication

* Pass thread safety test

* Implement permuting for mul_mat and cpy

* minor cleanups

* Address feedback

* Remove division by type size in cpy op

* Fix formatting and add github action workflows for vulkan and metal (m-series) webgpu backends

* Fix name

* Fix macos dawn prefix path
2025-07-20 00:23:50 +03:00
Georgi Gerganov ae1bb2c8ea llama : add high-throughput mode (llama/14363)
* kv-cache : prepare K/V buffers for separation

ggml-ci

* batched-bench : fix oob write

ggml-ci

* llama : add "virtual sequences"

ggml-ci

* llama : use "stream" vs "virtual sequence"

ggml-ci

* graph : fix stream splitting when KV cache is not used

ggml-ci

* kv-cache : add multi-stream save/load support

ggml-ci

* llama : add "--attn-streams" flag

ggml-ci

* kv-cache : fix handling when find_slot fails

ggml-ci

* kv-cache : restore find_slot impl

ggml-ci

* kv-cache : add comments

* kv-cache : add bounds checks for sequence id

ggml-ci

* cont : add n_seq_max to batch allocr

ggml-ci

* kv-cache : perform stream copies lazily after llama_synchronize

ggml-ci

* kv-cache : avoid throwing exceptions across the C boundary

ggml-ci

* CUDA: 4D FlashAttention support (llama/14628)

* CUDA: 4D FlashAttention support

* CUDA: fix WMMA FA kernel

* llama : rename attn_streams -> kv_unified

ggml-ci

* common : rename kv_split -> kv_unified

ggml-ci

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-07-20 00:23:50 +03:00
Georgi Gerganov 9cc645fec0 ggml : add asserts (llama/14720)
* ggml : add asserts

ggml-ci

* cont : fix constant type

Co-authored-by: Diego Devesa <slarengh@gmail.com>

---------

Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-07-20 00:23:50 +03:00
Jeff Bolz 8d1a0485f1 vulkan: fix noncontig check for mat_mul_id splitting (llama/14683)
* vulkan: fix noncontig check for mat_mul_id splitting

Remove supports_op check for > 4096 (splitting fixes this)

* vulkan: fix batched matmul dequant for Q*_K
2025-07-20 00:23:50 +03:00
Jeff Bolz b33841c453 vulkan: add RTE variants for glu/add/sub/mul/div (llama/14653) 2025-07-20 00:23:50 +03:00
R0CKSTAR ab79c6c118 cuda: fix build warnings in set-rows.cu (unused variable) (llama/14687)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-07-20 00:23:50 +03:00
Anton Mitkov a6b9271c2c sycl: Hotfix for non dnnl codepath (llama/14677) 2025-07-20 00:23:50 +03:00
shalinib-ibm ded2e3cf6d ggml : refactor llamafile_sgemm PPC code (llama/14673)
Remove unnecessary templates from the class definition and packing functions
Reduce deeply nested conditionals and if-else switching in the mnpack function
Replace repetitive code with inline functions in the packing functions

2 ~ 7% improvement in Q8 Model
15 ~ 50% improvement in Q4 Model

Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>
2025-07-20 00:23:50 +03:00
Akarshan Biswas ebb0e9d0ed SYCL: use 1D kernel for set_rows (llama/14618)
* SYCL: Use 1D kernel for set_rows

* Remove dangling comment

* Refactor and use ceil_div
2025-07-20 00:23:50 +03:00
Anton Mitkov 24803d62c6 sycl: Batched mulmat rework for oneDNN dispatch (llama/14617) 2025-07-20 00:23:50 +03:00
Sigbjørn Skjæret 0611387d17 cuda : add set rows for bf16 (llama/14664) 2025-07-20 00:23:50 +03:00
Yavor Ivanov fe33572b22 cuda : add ELU support (llama/14657) 2025-07-20 00:23:50 +03:00
Georgi Gerganov 21308b4e6e ggml : add build-time message to remind about ggml_set_rows (llama/14661)
ggml-ci
2025-07-20 00:23:50 +03:00
Yavor Ivanov 3cad26d807 metal : Add missing unary ops Metal support (llama/14660) 2025-07-20 00:23:50 +03:00
Aman Gupta 66b3a39bdc CUDA: add set rows for f32 and f16 (llama/14551)
* CUDA: add set rows for f32 and f16

* Review: change kernel params, use strides from host

* Use 1-d kernel

* Review: use int64_t for blockDim.x, rename nb->s for clarity
2025-07-20 00:23:50 +03:00
Charles Xu 032697b9a8
whisper: validate get_rows support for cpu extra buffer (#3323) 2025-07-14 15:13:44 +03:00
Greg Sadetsky a16da91365
examples : update links in wasm examples (#3318)
* fix 404 link

* update link in whisper.wasm example

* update example in command.wasm

* update link in bench.wasm example

* update link in stream.wasm example
2025-07-12 23:22:35 +02:00
Georgi Gerganov 3775c503d5 sync : resolve conflicts (#0)
ggml-ci
2025-07-12 19:23:56 +03:00
Georgi Gerganov 6ddff4d96a talk-llama : sync llama.cpp
ggml-ci
2025-07-12 19:23:56 +03:00
Georgi Gerganov 6d64e4abf3 sync : ggml 2025-07-12 19:23:56 +03:00
Georgi Gerganov 85dcc74b88 sync : resolve conflicts (ggml/0)
ggml-ci
2025-07-12 19:23:56 +03:00
Jeff Bolz 915fc153a5 vulkan: support SET_ROWS (llama/14587)
* vulkan: support SET_ROWS

Add variants of the copy_to_quant shader that do the SET_ROWS operation.
Change these shaders to spread the work across the workgroup.
The memory access pattern is probably not great (one thread per quant block),
but should be fine for now.

* vulkan: optimize set_rows

Larger workgroups for non-quant types.
Set "norepeat" (there is manual repeat logic).
Use fastmod.
2025-07-12 19:23:56 +03:00
Jeff Bolz 8670a3fd5d vulkan: optimizations for deepseek prompt processing (llama/14555)
* vulkan: allow unclamped loads in coopmat2 mul_mat_id shader

* vulkan: increase coopmat2 mul_mat_id tile size

* vulkan: optimize mat_mul_id row_ids search to batch loads, and port to coopmat1 path

* vulkan: use smaller FA row size when head size is large. applies to both scalar and CM2 paths (CM1 isn't used due to shared memory limits)
2025-07-12 19:23:56 +03:00
Tarek Dakhran 74f6d47904 model : support LiquidAI LFM2 hybrid family (llama/14620)
**Important**
LFM2 was [merged](https://github.com/huggingface/transformers/pull/39340) into transformers, but has not yet been released.
To convert into gguf, install transformers from source
```shell
pip install "transformers @ git+https://github.com/huggingface/transformers.git@main"
```
2025-07-12 19:23:56 +03:00
Slobodan Josic a4ff4ec9cb HIP : Add HIP 7.0+ compatibility for hipBLAS compute types (llama/14634) 2025-07-12 19:23:56 +03:00
rmatif b0754136be opencl: add tiled mul_mat_f16_f32 (llama/14535)
* add tiled mul_mat_f16_f32

* fix trailing whitespace

* add insightful comments
2025-07-12 19:23:56 +03:00
lhez 6f113cbcaa opencl: add `set_rows` for `f16` and `f32` (llama/14547)
* opencl: add `set_rows` for `f16` and `f32`

* opencl: better choose workgroup size for `set_rows`
2025-07-12 19:23:56 +03:00
Akarshan Biswas 3c21cde540 SYCL: Initial set_rows kernel implementation (llama/14562)
* SYCL: Initial set_rows kernel implementation

* Revert max_threads to 256

* Refactor set_rows and address review comments

* Deduplicate conversion function

* Remove guard before kernel launch and refactor

* Fix and add back SFINAE
2025-07-12 19:23:56 +03:00
compilade fb885fa48b cuda : support Falcon-H1 state size for SSM_SCAN (llama/14602) 2025-07-12 19:23:56 +03:00
Xuan-Son Nguyen 2021870fb8 ggml : add ggml_scale_bias (llama/14417)
* ggml : add ggml_scale_bias

* ggml_vec_mad1_f32

* add more simd

* add CUDA

* sycl

* vulkan

* cann (placeholder)

* opencl

* will this fix cpu?

* fix cuda

* suggestions from coderabbit

* fix cann compile error

* vDSP_vsmsa

* rm __ARM_FEATURE_SVE

* use memcpy for op params

* make code looks more consistent

* use scalar for __ARM_FEATURE_SVE

* add x param to ggml_vec_mad1_f32
2025-07-12 19:23:56 +03:00
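A rough usage sketch of the new op described above, assuming the signature follows the usual ggml pattern (context, tensor, then the two scalars); it computes y = x*s + b elementwise in a single node:

```c
#include "ggml.h"

// Sketch only: the exact parameter order is an assumption based on the
// commit description; ctx and x are supplied by the caller's graph setup.
static struct ggml_tensor * scale_bias_example(struct ggml_context * ctx,
                                               struct ggml_tensor  * x) {
    const float s = 0.125f; // scale
    const float b = 1.0f;   // bias
    return ggml_scale_bias(ctx, x, s, b);
}
```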
Miaoqian Lin 48b18f9eb8 ggml : prevent integer overflow in gguf tensor size calculation (llama/14595) 2025-07-12 19:23:56 +03:00
Jeff Bolz fadb3233b6 vulkan: optimize flash attention split_k_reduce (llama/14554)
* vulkan: allow FA split_k with smaller KV values

* vulkan: spread split_k_reduce work across more threads

k_num can get rather large. Use the whole workgroup to reduce the M/L values.

Launch a thread for each element in the HSV dimension of the output. Helps a
lot for large HSV (like deepseek).
2025-07-12 19:23:56 +03:00
Jeff Bolz 9750e4c988 vulkan : fix rope with partial rotation and non-cont src (llama/14582) 2025-07-12 19:23:56 +03:00
Georgi Gerganov c3942b3db6 cuda : fix rope with partial rotation and non-cont src (llama/14580)
* cuda : fix rope non-cont

ggml-ci

* cont : fix multi-rope + add test

ggml-ci

* sycl : try fix

ggml-ci

* cont : fix sycl + clean-up cuda

ggml-ci
2025-07-12 19:23:56 +03:00
Aman Gupta 98e7beac6c CUDA: add bilinear interpolation for upscale (llama/14563) 2025-07-12 19:23:56 +03:00
R0CKSTAR 7e9c6bbab2 musa: fix build warnings (unused variable) (llama/14561)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-07-12 19:23:56 +03:00
Aman Gupta 8e545f466c CUDA: add bf16 and i32 to getrows (llama/14529) 2025-07-12 19:23:56 +03:00
Eve e753b9a952 vulkan: increase LOAD_VEC_A to 8 (IQ1/IQ2) or 4 (IQ3) (llama/14485)
Commit taken from remyoudompheng's PR https://github.com/ggml-org/llama.cpp/pull/12260

Co-authored-by: Rémy Oudompheng <remyoudompheng@gmail.com>
2025-07-12 19:23:56 +03:00
Jeff Bolz 9d0c408260 vulkan: fix rms_norm+mul fusion (llama/14545)
The fused operation was grabbing the epsilon value from the wrong place.

Add an env var to disable fusion.

Add some missing checks for supported shapes/types.

Handle fused rms_norm+mul in check_results.
2025-07-12 19:23:56 +03:00
Jeff Bolz 3aebb8d5d3 vulkan: Handle updated FA dim2/3 definition (llama/14518)
* vulkan: Handle updated FA dim2/3 definition

Pack mask boolean and n_head_log2 into a single dword to keep the push
constant block under the 128B limit.

* handle null mask for gqa

* allow gqa with dim3>1
2025-07-12 19:23:56 +03:00
Sigbjørn Skjæret df5af1dc75 opencl: add GELU_ERF (llama/14476) 2025-07-12 19:23:56 +03:00
Georgi Gerganov 10d0d28f7c metal : disable fast math in all quantize kernels (llama/14528)
ggml-ci
2025-07-12 19:23:56 +03:00
luyhcsu af304ef080 CANN: Replace aclrtMemsetSync with aclnnInplaceZero operator (llama/14002)
Co-authored-by: luyuhong <luyuhong@kylinos.cn>
2025-07-12 19:23:56 +03:00
Sigbjørn Skjæret e8138c51d2 ggml : implement GEGLU_ERF and GEGLU_QUICK ops (llama/14445) 2025-07-12 19:23:56 +03:00
lhez 7cec4cc83a opencl : broadcast for soft_max (llama/14510) 2025-07-12 19:23:56 +03:00
Jeff Bolz a432929d58 vulkan: support mixed/deepseekR1 FA head sizes (llama/14509)
* vulkan: better parameterize FA by head sizes

* vulkan: support mixed/deepseekR1 FA head sizes
2025-07-12 19:23:56 +03:00
Johannes Gäßler 4aaf8114e7 ggml: backward pass for split swiglu (llama/14483) 2025-07-12 19:23:56 +03:00
Nicolò Scipione 0ca760433c Fix conditional enabling following arch checks for ggml-sycl (llama/14504)
Signed-off-by: nscipione <nicolo.scipione@codeplay.com>
2025-07-12 19:23:56 +03:00
Georgi Gerganov ed639c7f22 kv-cache : use ggml_set_rows (llama/14285)
* kv-cache : use ggml_set_rows

ggml-ci

* graph : separate k and v indices

ggml-ci

* cont : remove redundant ifs

ggml-ci

* kv-cache : improve find_slot impl

* kv-cache : bounds-check when accessing slot_info indices

* kv-cache : add comments

ggml-ci

* ggml : add TODOs for adding GGML_OP_SET_ROWS support in the backends

ggml-ci
2025-07-12 19:23:56 +03:00
Georgi Gerganov 0abd0660e1 ggml : fix FA mask dim 2 and 3 (llama/14505)
* ggml : fix FA mask dim 2 and 3

ggml-ci

* backends : unsupport batched FA in CUDA and Vulkan

ggml-ci

* vulkan : disable FA for mask->ne[2] != 1
2025-07-12 19:23:56 +03:00
Aman Gupta 9cde908c0a CUDA: add dynamic shared mem to softmax, refactor general usage (llama/14497) 2025-07-12 19:23:56 +03:00
compilade d2d120c256 llama : initial Mamba-2 support (llama/9126)
* llama : initial Mamba-2 support

* ggml : SIMD ggml_ssm_scan for Mamba-2

* ggml : improve ggml_mul speed when masking recurrent states

* llama : support running Mamba-Codestral-7B-v0.1

* llama : fix Mamba-2 conv state saving

* ggml : make the ggml_mul fast broadcast path more consistently formatted

* llama : remove unused variable

* llama : add missing break

* convert_hf : prefer SentencePiece tokenizer for Mamba-2 when present

The tokenizer.json of Mamba-Codestral-7B-v0.1 otherwise requires
workarounds to work correctly.

* llama : avoid redundant state copy for Mamba 1 and 2

* metal : attempt to adapt SSM_SCAN for Mamba-2

* metal : fix SSM_SCAN pipeline scope

* metal : use log and exp instead of log1pf and expf in SSM_SCAN

* metal : remove unused arguments for SSM_SCAN

The max index is 31, so trimming the arguments is necessary.

* metal : add back n_seqs to SSM_SCAN args

Whoops, this is needed for the offset in the concatenated output.

* metal : fix SSM_SCAN state head offset

* metal : fix wrong number of tokens per sequence in SSM_SCAN

* ggml : remove unused fast broadcast path in GGML_MUL

This was initially added because states were masked with ggml_mul,
but this is no longer done and so this "optimisation" is no longer
necessary, or at least not worth the additional code complexity.

* ggml : avoid multiply by D in GGML_OP_SSM_SCAN

This makes the weight buft detection in src/llama.cpp simpler.

* convert : transpose Mamba-2 A, D and reshape SSM_NORM

This breaks existing conversions of Mamba-2 models
to avoid some reshapes.

Not sure if it's a good idea,
but it makes the graph slightly cleaner.

* llama : more appropriate SSM_SCAN and SSM_CONV buft support checks

* convert : fix flake8 lint

* metal : fix confusion between ; and ,

* metal : add missing args for nb references in ssm_scan_f32_group

* metal : single-user mamba2 inference works

* kv-cache : remove const_cast when setting inputs for s_copy

And also fix multi-user inference for recurrent models
by using cell_id instead of i as the kv cell index
when populating s_copy.

* convert : avoid AutoConfig for Mamba and Mamba2 hparams

* kv-cache : allow context shift for recurrent models

* graph : fix recurrent state copies when avoiding copies

Works, but using lambda functions might not be that clean.

* ggml : fix mamba2 ssm scan when compiled with SVE

* ggml-cpu : reorder SVE FMA for consistency with other SIMD arches

* cuda : implement ssm scan for Mamba2

There is still room for improvement, but it works!

* cuda : adapt Mamba1 ssm scan to shape changes from Mamba2

* mamba : fix mismatched new and delete size for llm_build_mamba

Subclasses of llm_graph_context cannot have extra fields,
because the called destructor is not the one from the subclass.
This would otherwise cause problems when running Mamba-(1|2) inference
when compiled with -DGGML_SANITIZE_ADDRESS=ON

* cuda : graceful fallback for Mamba-1 models with weird embd size
2025-07-12 19:23:56 +03:00
Aman Gupta fb5c4095ee CUDA: add softmax broadcast (llama/14475)
* CUDA: add softmax broadcast

* Pass by const ref

* Review: Use blockDims for indexing, remove designated initializers

* Add TODO for noncontigous input/output
2025-07-12 19:23:56 +03:00
Johannes Gäßler 70515ed728 CUDA: broadcasting for FlashAttention mask (llama/14500) 2025-07-12 19:23:56 +03:00
Jeff Bolz 1b3e06a400 vulkan: support softmax/FA batch and broadcast (llama/14449) 2025-07-12 19:23:56 +03:00
Georgi Gerganov d1286cf32b ggml : support bcast ggml_soft_max_ext, ggml_flash_attn_ext (llama/14435) 2025-07-12 19:23:56 +03:00
zhouwg 2e04b81f3e opencl : fix possible buffer overflow in dump_tensor (llama/14490) 2025-07-12 19:23:56 +03:00
Eric Zhang cd87a2f7e0 opencl : skip empty nodes on cgraph compute (llama/14491) 2025-07-12 19:23:56 +03:00
lhez e43c38f9f1 opencl : update upscale to support align corners (llama/14488) 2025-07-12 19:23:56 +03:00
Björn Ganster ab850d4680 ggml : Callback before abort (llama/14481)
* Add a callback that will be called just before abort. This allows apps without a console to display a message to the user and save data if needed.

* Return previous callback to allow callback chaining

* style fixes

---------

Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-07-12 19:23:56 +03:00
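The callback-chaining idea from the commit above looks roughly like the sketch below. The callback type and handler names here are hypothetical placeholders, not the identifiers added by the PR; the point is only that the registration returns the previously installed callback so handlers can forward to it:

```c
#include <stdio.h>

// Hypothetical sketch of abort-callback chaining.
typedef void (*abort_handler_t)(const char * msg); // placeholder type name

static abort_handler_t g_prev_handler = NULL;       // previous callback, if any

static void my_abort_handler(const char * msg) {
    // App-specific work before the process aborts (e.g. show a dialog, save data).
    fprintf(stderr, "about to abort: %s\n", msg);
    if (g_prev_handler) {
        g_prev_handler(msg); // chain to the previously registered callback
    }
}
```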
Georgi Gerganov cdf5e72163 ci : disable fast-math for Metal GHA CI (llama/14478)
* ci : disable fast-math for Metal GHA CI

ggml-ci

* cont : remove -g flag

ggml-ci
2025-07-12 19:23:56 +03:00
Chenguang Li 32d7c10766 CANN: update aclnnGroupedMatmulV2 to aclnnGroupedMatmulV3 (llama/14411)
* [CANN]update to aclnnGroupedMatmulV2

Signed-off-by: noemotiovon <757486878@qq.com>

* Support MUL_MAT_ID on 310p

Signed-off-by: noemotiovon <757486878@qq.com>

* fix editorconfig

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
2025-07-12 19:23:56 +03:00
Jeff Bolz 3c7939cfe5 vulkan: Split large mul_mat_id to fit in shared memory (llama/14451) 2025-07-12 19:23:56 +03:00
Sigbjørn Skjæret 6fc80e8456 add GELU_ERF (llama/14455) 2025-07-12 19:23:56 +03:00
Acly 19b9aaf044 vulkan : implement bilinear interpolation for ggml_upscale/ggml_interpolate (ggml/1291)
* supports GGML_SCALE_MODE_BILINEAR and GGML_SCALE_FLAG_ALIGN_CORNERS
2025-07-12 19:23:56 +03:00
Acly f98cb6607b vulkan : implement ggml_roll (ggml/1290)
* vulkan : implement ggml_roll

* vulkan : refactor vk_op_unary_push_constants initialization
2025-07-12 19:23:56 +03:00
Daniel Bevenius 5ea5c58768 ggml : add version function to get lib version (ggml/1286)
* ggml : add version function to get lib version

This commit adds a function `ggml_version()` to the ggml library that
returns the version of the library as a string.

The motivation for this is that it can be useful to be able to
programmatically check the version of the ggml library being used.

Usage:
```c
printf("GGML version: %s\n", ggml_version());
```
Output:
```console
GGML version: 0.0.2219
```

* ggml : add ggml_commit()

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-07-12 19:23:56 +03:00
accessiblepixel 869335f2d5
server : add dtw.params for v3-large-turbo (#3307)
* Add DTW model large-v3-turbo parameters to server.cpp example

DTW support is available in whisper.cpp and the large-v3-turbo model has already been added to the sources, but the model hasn't been added to the server.cpp example to make use of it. This commit corrects that issue.

* match the original line break of the server.cpp file after adding large-v3-turbo DTW
2025-07-07 12:51:15 +03:00
Lin Xiaodong d9999d54c8
feat: support vad for addon.node (#3301)
Co-authored-by: linxiaodong <calm.lin@wukongsch.com>
2025-07-02 13:14:29 +03:00
Georgi Gerganov bca021c974 sync : ggml
ggml-ci
2025-07-01 17:54:53 +03:00
Georgi Gerganov 1f816de7da talk-llama : sync llama.cpp 2025-07-01 17:54:53 +03:00
Georgi Gerganov c4ea72be9a ggml : remove trailing whitespace (llama/0) 2025-07-01 17:54:53 +03:00
lhez 1e930ab1b8 opencl : add GEGLU, REGLU, SWIGLU (llama/14456) 2025-07-01 17:54:53 +03:00
Aman Gupta b5b237d49a Add Conv2d for CPU (llama/14388)
* Conv2D: Add CPU version

* Half decent

* Tiled approach for F32

* remove file

* Fix tests

* Support F16 operations

* add assert about size

* Review: further formatting fixes, add assert and use CPU version of fp32->fp16
2025-07-01 17:54:53 +03:00
Georgi Gerganov 679f31a9d1 metal : disable fast-math for some cpy kernels (llama/14460)
* metal : disable fast-math for some cpy kernels

ggml-ci

* cont : disable for q4_1

ggml-ci

* cont : disable for iq4_nl

ggml-ci
2025-07-01 17:54:53 +03:00
Romain Biessy e29e36aee7 ggml-cpu: sycl: Re-enable exp f16 (llama/14462) 2025-07-01 17:54:53 +03:00
xiaobing318 6bb1234a56 cmake : Remove redundant include path in CMakeLists.txt (llama/14452)
* Update docker.yml

Modify docker.yml so that the workflow no longer runs on a schedule; if needed, the workflow can still be started manually.

* Remove redundant include path in CMakeLists.txt

The parent directory '..' was removed from the include directories for the ggml-cpu-feats target, to avoid unnecessary include paths.

* Enable scheduled Docker image builds

Uncomments the workflow schedule to trigger daily Docker image rebuilds at 04:12 UTC, improving automation and keeping images up to date.
2025-07-01 17:54:53 +03:00
Vedran Miletić 3239359bd1 scripts : make the shell scripts cross-platform (llama/14341) 2025-07-01 17:54:53 +03:00
Akarshan Biswas e81be92931 SYCL: disable faulty fp16 exp kernel (llama/14395)
* SYCL: disable faulty fp16 CPU exponent for now

* Revert "SYCL: disable faulty fp16 CPU exponent for now"

This reverts commit ed0aab1ec31b4eb4b0f275dd7acd41d96a375202.

* SYCL: disable faulty fp16 CPU exponent for now

* Fix logic of disabling exponent kernel
2025-07-01 17:54:53 +03:00
Sigbjørn Skjæret 130044f228 ggml : fix unmerged GGML_FPxx_TO_FPxx refactoring (llama/14443) 2025-07-01 17:54:53 +03:00
Sigbjørn Skjæret 8bc638ee56 ggml : implement REGLU/GEGLU/SWIGLU ops (llama/14158)
* implement unary REGLU/GEGLU/SWIGLU cpu ops

* relax constraints

* duplicate shape of source

* fix ggml_vec_geglu_f16

* special case gated ops

* implement unary REGLU/GEGLU/SWIGLU cuda ops

* tighten constraints again

* refactor into GGML_GLU_OP

* metal : add glu kernels

ggml-ci

* add CUDA_GLU_BLOCK_SIZE [no ci]

* more constraints and use 64bit ints

ggml-ci

* 64bit multiplication [no ci]

* implement swapped variants (cpu/cuda)

* update comment [no ci]

ggml-ci

* Vulkan: Add GLU ops and shaders

* SYCL: Implement fused kernel GEGLU, SWIGLU and REGLU for single up+gate

* ggml : implement GLU for split up/gate (llama/14181)

* implement GLU for split up/gate

* add tests for ggml_glu_split

* Vulkan: Implement glu_split logic and shader support

* add split to logging [no ci]

* SYCL: refactor element_size ops and add split up and gate support to gated kernels

* SYCL: switch GEGLU to use tanh approximation

---------

Co-authored-by: 0cc4m <picard12@live.de>
Co-authored-by: Akarshan <akarshan@menlo.ai>

* GGML: increase OP count in assertion

* Refactor: Optimize SYCL element-wise operations with unary function inlining

This commit refactors the SYCL element-wise operations to improve performance by:

- Inlining unary operations (sgn, abs, elu, gelu, silu, etc.) to reduce kernel launch overhead.
- Introducing helper functions `op_xxx` for each unary operation to encapsulate the logic.
- Replacing direct kernel calls with calls to these inlined functions.
- Using `__dpct_inline__` to encourage compiler inlining.
- Minor code cleanup and consistency improvements.

The changes aim to reduce kernel launch overhead and improve the overall efficiency of element-wise operations on SYCL devices.

* vulkan: Increase workgroup size for GLU, for performance (llama/14345)

* vulkan: Increase workgroup size for GLU, for performance

* vulkan: change GLU shaders to do one element per invocation rather than one row per workgroup

* merge fix

* metal : add support for split and swap

ggml-ci

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: 0cc4m <picard12@live.de>
Co-authored-by: Akarshan <akarshan@menlo.ai>
Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-07-01 17:54:53 +03:00
Jeff Bolz 00b36237ba vulkan: Add fusion support for RMS_NORM+MUL (llama/14366)
* vulkan: Add fusion support for RMS_NORM+MUL

- Add a use_count to ggml_tensor, so we can detect if an output is used more than once.
- Change the ggml-vulkan rms_norm shader to optionally multiply by another tensor.
- Add detection logic and basic fusion logic in ggml-vulkan.
- Add some testing support for fusion. Rather than computing one node at a time, allow
for computing the whole graph and just testing one node's results. Add rms_norm_mul tests
and enable a llama test.

* extract some common fusion logic

* fix -Winconsistent-missing-override

* move ggml_can_fuse to a common function

* build fix

* C and C++ versions of can_fuse

* move use count to the graph to avoid data races and double increments when used in multiple threads

* use hash table lookup to find node index

* change use_counts to be indexed by hash table slot

* minimize hash lookups

style fixes

* last node doesn't need single use.
fix type.
handle mul operands being swapped.

* remove redundant parameter

---------

Co-authored-by: slaren <slarengh@gmail.com>
2025-07-01 17:54:53 +03:00
Aman Gupta b900ee424c CUDA: add bf16 and f32 support to cublas_mul_mat_batched (llama/14361)
* CUDA: add bf16 and f32 support to cublas_mul_mat_batched

* Review: add type traits and make function more generic

* Review: make check more explicit, add back comments, and fix formatting

* Review: fix formatting, remove useless type conversion, fix naming for bools
2025-07-01 17:54:53 +03:00
Jeff Bolz f641a4c410 vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (llama/14378) 2025-07-01 17:54:53 +03:00
Jeff Bolz 9e48afba2f vulkan: lock accesses of pinned_memory vector (llama/14333) 2025-07-01 17:54:53 +03:00
Xinpeng Dou f31ed384f4 fix async_mode bug (llama/14432) 2025-07-01 17:54:53 +03:00
Jeff Bolz 0b09f5bbad vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (llama/14427)
This setting needs to be passed through to vulkan-shaders-gen
2025-07-01 17:54:53 +03:00
Radoslav Gerganov 48fb51f314 ggml : add ggml_set_rows (llama/14274)
* ggml : add ggml_set_rows

Add ggml_set_rows(a, b, c) which copies rows from 'b' into 'a' using
indices from 'c'.

ref: #8366

* use I64 for indices

* ggml : add repeat impl for i64

* ggml : add ggml_is_contiguous_rows

* ggml : ggml_set_rows support broadcast

* ggml : ggml_set_rows support quantized dst

ggml-ci

* ggml : support GGML_TYPE_F32 ".from_float" trait

* ggml : ggml_set_rows update comment + better index name

* tests : add ggml_set_rows

* metal : add ggml_set_rows implementation

ggml-ci

* ggml : simplify forward_dup_f32

* ggml : fix supports_op

* tests : add comment to set_rows

* ggml : leave the repeat_i64 for a separate PR

ggml-ci

* ggml : set_rows use std::min instead of MIN

* ggml : better error message for set_rows unsupported type

* metal : perform op->type check only once

* tests : more consistent implementation + more tests

ggml-ci

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-07-01 17:54:53 +03:00
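In terms of usage, the semantics described in the commit above amount to the following sketch (indices are GGML_TYPE_I64 per the commit; shape and broadcasting details are assumptions here, not taken from the PR's tests):

```c
#include "ggml.h"

// Sketch: write the rows of src into dst at the positions given by idx,
// i.e. conceptually dst[idx[i]] = src[i] for each row i. dst may keep its
// own (possibly quantized) type, per the commit message above.
static struct ggml_tensor * set_rows_example(struct ggml_context * ctx,
                                             struct ggml_tensor  * dst,
                                             struct ggml_tensor  * src,
                                             struct ggml_tensor  * idx /* GGML_TYPE_I64 */) {
    return ggml_set_rows(ctx, dst, src, idx);
}
```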
bandoti 566462a5c0 cmake: regen vulkan shaders when shaders-gen sources change (llama/14398)
* Add shaders-gen sources as target deps
2025-07-01 17:54:53 +03:00
Georgi Gerganov c300f1e32d metal : add special-case mat-vec mul for ne00 == 4 (llama/14385)
ggml-ci
2025-07-01 17:54:53 +03:00
Georgi Gerganov c848b9fbef metal : batch rows copy in a single threadgroup (llama/14384)
* metal : batch rows copy in a single threadgroup

ggml-ci

* metal : handle some edge cases when threadgroup size is not a power of 2

ggml-ci
2025-07-01 17:54:53 +03:00
R0CKSTAR a5e6a3c953 musa: enable fp16 mma (all) and cublas on qy2 (llama/13842)
* musa: enable fp16 mma (all) and cublas on qy2

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Address review comments

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Address review comments

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: disable MUL_MAT_ID (q2_k × f32) due to precision issues

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-07-01 17:54:53 +03:00
Aaron Teo 16aa7d151d ggml-cpu: enable IBM NNPA Vector Intrinsics (llama/14317)
* ggml-cpu: add nnpa compile flag

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 4a9f60c201573128f73a65999b3e5cc497fae5c1)

* ggml-cpu: add fp16->fp32 nnpa first

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 8d4a7987f9c1887f716be96250f2caeee0253929)

* ggml-cpu: add fp32->fp16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 0ff0d6516247a41d2ade42b42cf0d676a4dd1627)

* ggml-cpu: better variable names

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 2f58bbcbb89c183340e252362b2a40651f573f1f)

* docs: update s390x docs

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 01b929491b50071a5d0572235dcf5a449da70aa7)

* ggml-cpu: add debugging prints to see if dlf16 is correct

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix print vs printf

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix float placeholder

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: ensure fp16 and fp32 load and stores are called

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fp16 load ensured to hit

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: remove sigint from fp16 store

for some reason, the function is not getting a hit when debugged with
    gdb. we will need to investigate further

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: activate nnpa for ggml_cpu_fp16_to_fp32

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: nnpa activate ggml_cpu_fp16_to_fp32 for 8 elements

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: nnpa switch to vec_xst test

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: switch to vec_xst for 4 element loops also

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: rework noop

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: remove noop, general code cleanup

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: clarify variable naming

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: activate nnpa for ggml_cpu_fp32_to_fp16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add breakpoint for debugging

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: test fix for conversion failure

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: disable fp32->fp16 nnpa conversions for now

there are some conversion failures in nnpa that require the eyes of an
ibm stsm. will create a separate pr to introduce the fp32->fp16 change.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: switch to elif macro

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: reattempt fp32->fp16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix typo

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: reattempt fp32->fp16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix compiler types

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: change to typedef vector types

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add 4 element loops for fp32->fp16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: clarified vector naming

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: bring back fp32->fp16 store nnpa

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: activate nnpa fp32->fp16 or fp16->fp32 compute

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add nnpa macro check in ggml-impl

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add missing __func__

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: diagnose why __NNPA__ macro is not being defined

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: import vecintrin.h to fix compiler errors

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: update macro tests

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: move s390x typedef to own header file

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml-cpu: move s390x typedef to own header file"

This reverts commit 157f856c34589566151630e294563a420702db39.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: switch to importing ggml-cpu-impl instead

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix macro declaration

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: test more macros

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add debug prints

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: bruteforce macro definitions

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: move macro definitions

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add ggml-impl.h to cmakelists

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: switch to private macros

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: move s390x typedef to own header file

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 157f856c34589566151630e294563a420702db39)

* ggml-cpu: move things around

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: bring back compile macros

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: switch to quotes for import

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add compiler error macro

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add s390x detection in ggml-src

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: bring back compile definitions

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: undo cmakelists work

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml-cpu: move s390x typedef to own header file"

This reverts commit 18d79e1a30b39d9aaa0bd58400c5cf2c32135c9a.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: remove typedefs.h

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: remove typedef from cmakelists

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add ggml-impl.h future notes

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add todo comment for future reference

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: clarify naming of dlf16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: remove unnecessary target compile definitions

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: move nnpa fp16->fp32 and fp32->fp16 to simd-mappings

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml: refactor fp32->fp16 and fp16->fp32 simd to ggml-cpu

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* docs: update broken huggingface link for s390x

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix duplicate func names during compile

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml-cpu: fix duplicate func names during compile"

This reverts commit fbb733451f27677063b914d4f6c9a9841d45b38d.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml: refactor fp32->fp16 and fp16->fp32 simd to ggml-cpu"

This reverts commit bd288e8fa52b5244f65cee21cb61062f1a9e0ca5.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml: refactor fp16<->fp32 simd to ggml-cpu

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix missing simd-mappings.h import in quants.c

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix missing simd-mappings.h within repack

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix amx mmq missing simd-mappings.h

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: attempt at fixing loongarch failing build

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: move nnpa together with other fp16<->fp32 simd

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix wrong refactor of ggml-base

ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164176555

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml: remove dependency on ggml-cpu from ggml-base

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: rename all fp16<->fp32 macros to prefix with ggml_cpu

ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164449406

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: remove mistaken fallback macro

fallback logic was already implemented but i was too sleepy to realise

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml: move ggml_table_f32_f16 to ggml-cpu

ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164775006

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: move ggml_table_f32_f16 back to ggml-base due to ci failures

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml-cpu: move ggml_table_f32_f16 back to ggml-base due to ci failures"

This reverts commit 32a3533564bdb7902cefb9c89b1c9e956a81ce29.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml: move ggml_table_f32_f16 to ggml-cpu"

This reverts commit 9e40d984ad27d7b60392fb2b7548885201864fe4.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml: move ggml_table_f32_f16 to ggml-cpu

ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164775006

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 9e40d984ad27d7b60392fb2b7548885201864fe4)

* ggml: move ggml_table_f32_f16 to ggml-cpu.c

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: extern c ggml_table_f32_f16 + chore docs

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: dedup ggml_table_f32_f16 from simd-mappings.h

we rely on the variable declaration in ggml-cpu.c instead

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml-cpu: dedup ggml_table_f32_f16 from simd-mappings.h"

This reverts commit f71b21d2f74f5e03ec0c2b4fefd3cbf395aecf16.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: bring back ggml_table_f32_f16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml-cpu: bring back ggml_table_f32_f16"

This reverts commit 2dce119178bed5ef5c8398c4230ddd14fef80e49.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* fix ggml time initialization

* fix f32_f16 table init

* remove extra line

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
Co-authored-by: slaren <slarengh@gmail.com>
2025-07-01 17:54:53 +03:00
Sigbjørn Skjæret 99764f5767 ggml : do not output unprintable characters on GGUF load failure (llama/14381) 2025-07-01 17:54:53 +03:00
Anton Mitkov fc28594112 sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (llama/13973) 2025-07-01 17:54:53 +03:00
lhez acfbf2921b opencl: ref count `ggml_backend_opencl_context` and refactor profiling (llama/14254)
* Move profiling info into `ggml_backend_opencl_context`
* Add `enqueue_ndrange_kernel` to launch kernel
2025-07-01 17:54:53 +03:00
uvos 6a1d12a8ea CUDA/HIP: optimize mmv paths taken for HIP devices (llama/14324)
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-07-01 17:54:53 +03:00
Johannes Gäßler 06b01ba87b CUDA: mul_mat_v support for batch sizes > 1 (llama/14262)
* CUDA: mul_mat_v support for batch sizes > 1

* use 64 bit math for initial offset calculation
2025-07-01 17:54:53 +03:00
uvos 791201a974 HIP: enable vec fattn on RDNA4 (llama/14323) 2025-07-01 17:54:53 +03:00
Aman Gupta abb650c0ec CUDA: add mean operation (llama/14313)
* CUDA: add mean operation

* add back sum_rows_f32_cuda

* Review: early exit if col!=0
2025-07-01 17:54:53 +03:00
Markus Tavenrath e036676795 Add support for VK_EXT_debug_utils to add labels to Vulkan objects. (llama/13792)
* Add support for VK_EXT_debug_utils to add labels to Vulkan objects. In step 1 compute pipelines are getting labeled.

* remove #ifdef for debug utils and add queue marker.
2025-07-01 17:54:53 +03:00
Georgi Gerganov c1418b9906 metal : fix thread-safety (llama/14300)
ggml-ci
2025-07-01 17:54:53 +03:00
Acly 9d7cb80f04 ggml-cpu : "align corners" for bilinear upscale/downscale (ggml/1285)
* add "align corners" mode for bilinear upscale, and allow downscaling
* add ggml_interpolate, deprecate ggml_upscale_ext, pass in align-corners as bit-flag
* test-backend-ops: replace ggml_upscale_ext with ggml_interpolate, add test cases for downscale and align-corners
2025-07-01 17:54:53 +03:00
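For illustration, the new op replaces ggml_upscale_ext calls with something like the sketch below. The exact argument list is an assumption, but the mode enum and align-corners flag names are taken from the commit messages above:

```c
#include "ggml.h"

// Sketch only: resize a tensor to an explicit target shape using bilinear
// filtering with "align corners" semantics. The flag is combined with the
// scale mode as a bit-flag, as described in the commit.
static struct ggml_tensor * interpolate_example(struct ggml_context * ctx,
                                                struct ggml_tensor  * x,
                                                int64_t ne0, int64_t ne1) {
    return ggml_interpolate(ctx, x, ne0, ne1, x->ne[2], x->ne[3],
                            GGML_SCALE_MODE_BILINEAR | GGML_SCALE_FLAG_ALIGN_CORNERS);
}
```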
Daniel Bevenius 515df20351 ggml-quants : rename best_mad to best_error (ggml/1283)
This commit renames the variable `best_mad` to `best_error` in the
`make_qkx2_quants` function.

The motivation for this is that the name `best_mad` can be somewhat
confusing if mean absolute deviation (MAD) is not in use.
2025-07-01 17:54:53 +03:00
Daniel Bevenius c88ffbf9ba
ci : use selective copy for musa image (#3296)
This commit modifies the musa Dockerfile to selectively copy the
directories needed for the container image.
It also adds a step to the docker workflow to free up disk
space in an attempt to make enough room for the large musa build
containers.

The motivation for this change is to reduce the size of the container
image and try to avoid disk usage issues in CI.
2025-06-27 15:43:56 +02:00
Daniel Bevenius 7069394447
ci: set fail-fast to false in docker.yml (#3294)
* ci: set fail-fast to false in docker.yml

This commit modifies the GitHub Actions workflow for Docker builds to
disable the fail-fast behavior.

The motivation for this is that currently if one of the strategy jobs
fails any other job that is in progress will be cancelled. There is no
need for this as the jobs are independent.

* ci : update docker.yml to use a single build

This commit updates the docker job to only build the image once instead
of twice (only happens when pushing to the master branch). Instead this
will tag the image with the commit SHA when pushing to master.

The motivation for this change is to reduce the time it takes to run
this job and also it might help with the disk space issues we are
experiencing for this job when it runs on pushes to master.
2025-06-27 09:55:56 +02:00
KITAITI Makoto f8abbeb234
ruby : add Whisper::VERSION (#3292)
* Add a test for segment

* Check option existence

* Use a more appropriate variable to define the build option

* Assert Core ML enabled

* Define Whisper::VERSION

* Add test for Whisper::VERSION

* Add signature of Whisper::VERSION
2025-06-27 04:41:26 +02:00
Daniel Bevenius 32cf4e2aba
whisper : add version function (#3289)
* whisper : add version function

This commit adds a version function to the whisper API.

The motivation for this is that it might be convenient to have a way to
programmatically check the version.

Example usage:
```c++
printf("Using whisper version: %s\n", whisper_version());
```
Will output:
```console
Using whisper version: 1.7.6
```

* examples : add version to android example CMakeLists.txt
2025-06-26 18:09:42 +02:00
Daniel Bevenius 35034c5aea
ci : add should_release variable (#3288)
* ci : add should_release variable

This commit adds a `should_release` variable to the GitHub Actions
workflow to determine if a release should be created based on the tag or
branch conditions.

The motivation for this is that it simplifies the logic for deciding
whether to upload artifacts or not, making it easier to maintain if we
need to change the conditions in the future.

* ci : set release draft to true
2025-06-26 16:29:29 +02:00
toboil-features 897b071dc6
docs : add cmake "-j" flag in README.md (#3284)
Make the cmake build commands in the README.md file use multithreading (via the "-j" flag).
2025-06-26 13:23:19 +02:00
Daniel Bevenius 4daf7050ca
ci : add support for tag-based releases (#3287)
This commit modifies the GitHub Actions workflow to support
tag-based releases. When a tag is pushed that starts with 'v', the
workflow will use that tag name for the release process.

I think this was once the behavior, but it was lost in updates that
I've made to the workflow. This commit restores that functionality.
2025-06-25 21:43:58 +02:00
Georgi Gerganov a8d002cfd8
release : v1.7.6 2025-06-25 16:47:03 +03:00
Georgi Gerganov 06bdaa6c0c
bench : update benches 2025-06-25 16:45:19 +03:00
Georgi Gerganov dc8dda60ee
bench : print system info before ctx check 2025-06-25 16:01:32 +03:00
Daniel Bevenius 1ad258ca31
stream : add nullptr check of whisper_context (#3283)
* stream : add nullptr check of whisper_context

This commit adds a check to ensure that the `whisper_context` is not
null after initialization.

The motivation for this is that currently, if the initialization fails,
the program continues to run, leading to a segmentation fault. This sort
of check is performed by other examples such as whisper-cli.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3280#issuecomment-3003778035

* examples : add nullptr check for whisper_context
2025-06-25 14:16:31 +02:00
Daniel Bevenius 7dd2997a01
ci : enable main-cuda build (#3282)
This commit re-enables the main-cuda Docker build in the CI workflow.
The main-cuda Dockerfile has been updated to remove build artifacts
and also print the size of the /app directory after the build. A similar
change was recently made to the musa Dockerfile, and perhaps this job
was also having similar disk space issues.

The motivation for this change is that this configuration has been
disabled for a while due to persistent build failures. However, the
actual logs are no longer available.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3040
2025-06-25 12:12:36 +02:00
Joas Dev c85b1ae84e
bindings.java : update java example (#3281)
This commit updates the example in the README.md file as the current Java example code is not working.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/2860
2025-06-25 06:35:38 +02:00
glaszig 0083335ba0
coreml : backport CoreML features to macos < 14 (#3255) 2025-06-24 09:24:27 +02:00
Daniel Bevenius 9c47902308
ci : reduce musa image size (#3277)
* ci : reduce musa image size

This commit contains an attempt to reduce the size of the musa Docker
image by copying only the necessary files from the build stage.

The motivation for this is that the CI runs sometimes fail with out of
memory errors. These seem to pass for PRs, at least
sometimes, but fail upon push to the master branch.

* ci : remove build time files instead of selective copying
2025-06-24 08:20:28 +02:00
Yukimasa Funaoka a0d2c632e4
whisper : add .gitignore entries for OpenVINO support (#3276) 2025-06-24 07:50:16 +02:00
Aaron Ang 4d6ae52ed3
command: output commands to text file (#3273)
This commit implements handling for the command line argument `-f --file FNAME`, which is currently missing.
2025-06-24 06:41:21 +02:00
Daniel Bevenius a422176937
ci : add apt-get clean to musa Dockerfile (#3275)
* ci : add apt-get clean to musa Dockerfile

This commit adds `apt-get clean` to the musa Dockerfile to reduce the
image size by removing cached package files after installation.

The motivation for this is to try to reduce the size of the Docker image
and see if this can avoid the "no space left on device" error during
the CI build process.

Refs: https://github.com/ggml-org/whisper.cpp/actions/runs/15815324254
2025-06-23 12:34:44 +02:00
KITAITI Makoto cead8f5357
ruby : specify Apple frameworks explicitly on build (#3270)
* Add Apple frameworks to $LDFLAGS when needed

* Add utility method to Options

* Remove unnecessary `date` property from gemspec

* Add Apple frameworks for CoreML build

* Add Accelerate framework only for Apple platform

* Fix ZipURI#cache signature

* Download test fixtures if needed
2025-06-23 06:34:05 +02:00
Georgi Gerganov e6c10cf3d5 talk-llama : sync llama.cpp
ggml-ci
2025-06-21 07:34:17 +03:00
Georgi Gerganov d65a579a0a sync : ggml
ggml-ci
2025-06-21 07:34:17 +03:00
Aman Gupta b68222f92c CUDA: add conv_2d_transpose (llama/14287)
* CUDA: add conv_2d_transpose

* remove direct include of cuda_fp16

* Review: add brackets for readability, remove ggml_set_param and add asserts
2025-06-21 07:34:17 +03:00
Nicolò Scipione a455dcb04c sycl: add usage of enqueue_functions extension (llama/14244)
* Add header and namespace to use enqueue_functions extension

* Convert submit and parallel_for to use new extension in convert.cpp

* Convert submit and parallel_for to use extension in ggml-sycl.cpp

* Convert submit and parallel_for to use extension in gla.cpp

* Convert submit and parallel_for in mmq.cpp

* Convert submit and parallel_for in mmvq.cpp

* Convert submit and parallel_for in remaining files

* Convert all simple parallel_for to nd_launch from enqueue_functions
extension

* Wrapping extension in general function

Create a general function that enables the enqueue_functions extension if
it is enabled in the compiler, and otherwise calls the general SYCL function
to launch kernels.

---------

Signed-off-by: nscipione <nicolo.scipione@codeplay.com>
2025-06-21 07:34:17 +03:00
Christian Kastner af7168174c Implement GGML_CPU_ALL_VARIANTS for PowerPC (llama/14286)
* Add PowerPC feature detection and scoring

* ggml-cpu: Implement GGML_CPU_ALL_VARIANTS for PowerPC

* ggml-cpu: Delay some initializations until function is called

When using GGML_BACKEND_DL=ON, these initializations might use
instructions that are not supported by the current CPU.

---------

Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-06-21 07:34:17 +03:00
Diego Devesa 33d1f0a3e0 cuda : synchronize graph capture and cublas handle destruction (llama/14288)
Workarounds an issue that may cause CUDA graph capture to fail when a cuBLAS handle is destroyed in a different thread
2025-06-21 07:34:17 +03:00
Georgi Gerganov 018b2d340e ggml : fix repack work size for mul_mat_id (llama/14292)
ggml-ci
2025-06-21 07:34:17 +03:00
Charles Xu 694f435d22 ggml: Update KleidiAI to v1.9.0 (llama/14277) 2025-06-21 07:34:17 +03:00
Aman Gupta 5efd43c956 CUDA: add conv_2d_dw (llama/14265)
* CUDA: add conv_2d_dw

* better naming

* simplify using template

* Review: fix operation ordering in ggml-cuda, use __forceinline__, use more const
2025-06-21 07:34:17 +03:00
Diego Devesa 71adde9203 ggml-cpu : remove unnecessary arm feature detection (llama/14281)
Support for Arm runtime feature detection has now been added to GGML_CPU_ALL_VARIANTS. This removes the old and not very functional code.
2025-06-21 07:34:17 +03:00
fanyang cef59c1e26 build : suppress gcc15 compile warnings (llama/14261)
* Change _contains_any() substrs to std::string_view and fix the find comparison logic.
2025-06-21 07:34:17 +03:00
Anton Mitkov a02a2d4240 sycl: Cleanup codepaths in Get Rows in sycl backend (llama/14215)
Addresses unused reorder path
2025-06-21 07:34:17 +03:00
Aaron Teo be4ea0826b llamafile : support s390x SIMD instruction set (llama/14273) 2025-06-21 07:34:17 +03:00
0cc4m 1aca7b5c8a Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (llama/14249) 2025-06-21 07:34:17 +03:00
Georgi Gerganov b251d739ad metal : add mean kernel (llama/14267)
* metal : add mean kernel

ggml-ci

* cont : dedup implementation

ggml-ci
2025-06-21 07:34:17 +03:00
Aaron Teo 203451bcba ggml-cpu: reduce asm calls for hsum (llama/14037)
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-06-21 07:34:17 +03:00
Aaron Teo 34940abe53 ggml-cpu: fix uncaught underscore terminators (llama/14023)
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-06-21 07:34:17 +03:00
Charles Xu 4fc9c34126 ggml: Add Apple support for GGML_CPU_ALL_VARIANTS (llama/14258) 2025-06-21 07:34:17 +03:00
Acly 471df139fa Add `ggml_roll` (ggml/1274)
* ggml : add ggml_roll

* use set/get_op_params & std::min
2025-06-21 07:34:17 +03:00
Daniel Bevenius 3e65f518dd
android : update CMakeLists.txt to use FetchContent for ggml (#3268)
* android : update CMakeLists.txt to use FetchContent for ggml

This commit updates the CMakeLists.txt file for the Android Whisper
example to use FetchContent for managing the ggml library.

The motivation for this change is to avoid having to make manual changes to
the CMakeLists.txt file after syncing the ggml library.

I've built and run the example locally to verify that it works as
expected.

Refs: https://github.com/ggml-org/whisper.cpp/pull/3265#issuecomment-2986715717

* android.java : update cmake to use FetchContent for ggml

This commit updates the CMake configuration for the Android Java example
to use `FetchContent` for including the `ggml` library. To be able to
use FetchContent we also update the `compileSdkVersion` and
`targetSdkVersion` to 31, and the `buildToolsVersion` to '30.0.3'.
This also required an update of the Gradle plugin version to 7.4.0.

The motivation for this change is to avoid having to make manual changes to
the CMakeLists.txt file after syncing the ggml library.
2025-06-19 16:06:42 +02:00
Georgi Gerganov 17bece1885
cmake : fix android build (#3265)
* cmake : fix android build

---------

Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2025-06-19 08:24:41 +02:00
Daniel Bevenius ecb8f3c2b4
examples : add stereo to mono conversion in read_audio_data (#3266)
This commit adds a conversion from stereo to mono in the
`read_audio_data` function of `common-whisper.cpp`.

The motivation for this change is that prior to Commit
7d3da68f79 ("examples : use miniaudio for
direct decoding flac, mp3, ogg and wav (#2759)"), there was a step that
read stereo int16 data -> pcm16 (448512 samples), then converted to
mono (224256 samples), and then also converted to stereo in `pcmf32s`.

The middle step here seems to have been missed when rewriting the code to
use Miniaudio and caused issues when transcribing stereo audio files.

For example, currently using the audio sample in the linked issue the
output is:
```console
[00:00:00.000 --> 00:00:03.000]  (speaker 1) Sous-titres réalisés para la communauté d'Amara.org
```

And with the change in this commit the output is:
```
[00:00:00.000 --> 00:00:01.500]  (speaker 1) *sonnerie de téléphone*
[00:00:01.500 --> 00:00:07.000]  (speaker 1) Salut jeune homme !
[00:00:07.000 --> 00:00:08.500]  (speaker 0) C'est vrai que je te dérange ?
[00:00:08.500 --> 00:00:10.500]  (speaker 1) Ah pas du tout, pas du tout, pas du tout !
[00:00:10.500 --> 00:00:12.500]  (speaker 1) J'étais en train de...
[00:00:12.500 --> 00:00:14.500]  (speaker 1) de préparer un courrier
```

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3092
2025-06-18 17:41:43 +02:00
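The missing middle step described above amounts to a standard downmix: average the two channels of each frame into a single mono sample. A minimal sketch under the assumption of interleaved float samples (the actual fix lives in read_audio_data in common-whisper.cpp):

```c
// Sketch: downmix interleaved stereo samples (L R L R ...) to mono by
// averaging the two channels of each frame. n_frames is the frame count;
// mono must have room for n_frames samples.
static void stereo_to_mono(const float * stereo, float * mono, int n_frames) {
    for (int i = 0; i < n_frames; i++) {
        mono[i] = 0.5f * (stereo[2*i + 0] + stereo[2*i + 1]);
    }
}
```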
Georgi Gerganov 2f60ebc3c2 talk-llama : sync llama.cpp
ggml-ci
2025-06-18 12:40:34 +03:00
Georgi Gerganov 69061e356f sync : ggml
ggml-ci
2025-06-18 12:40:34 +03:00
bandoti 0e068779c7 cmake: remove shader-gen step-targets from ggml-vulkan (llama/14226)
* Remove step-targets from vulkan-shaders-gen

* Unset DESTDIR when building vulkan-shaders-gen
2025-06-18 12:40:34 +03:00
xctan ac8a303c9a ggml-cpu : remove the weak alias trick (llama/14221) 2025-06-18 12:40:34 +03:00
R0CKSTAR 2a84593960 musa: fix build warning (unused variable) (llama/14231)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-06-18 12:40:34 +03:00
Diego Devesa 44871c8a3e llama : add thread safety test (llama/14035)
* llama : add thread safety test

* llamafile : remove global state

* llama : better LLAMA_SPLIT_MODE_NONE logic

when main_gpu < 0 GPU devices are not used

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-06-18 12:40:34 +03:00
bandoti ad6cd94a3a cmake: clean up external project logic for vulkan-shaders-gen (llama/14179)
* Remove install step for vulkan-shaders-gen

* Add install step to normalize msvc with make

* Regenerate modified shaders at build-time
2025-06-18 12:40:34 +03:00
uvos dbad9d8fba HIP: disable rocwmma on gfx12 by default until rocm 7.0 (llama/14202) 2025-06-18 12:40:34 +03:00
Charles Xu 518835ee56 ggml: Add Android support for GGML_CPU_ALL_VARIANTS (llama/14206) 2025-06-18 12:40:34 +03:00
Jeff Bolz a3d1c55c66 vulkan: mutex around vkQueueSubmit (llama/14127)
This fixes the remaining crash in test-thread-safety on my system.
2025-06-18 12:40:34 +03:00
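For background, the Vulkan specification requires external synchronization for access to a VkQueue, so submissions from multiple threads have to be serialized by the application. A minimal sketch of the pattern, with illustrative names rather than the actual ggml-vulkan structures:
```cpp
#include <mutex>
#include <vulkan/vulkan.h>

struct queue_ctx {
    VkQueue    queue;
    std::mutex mutex; // guards all submissions to this queue
};

// Serialize vkQueueSubmit calls: VkQueue access requires external synchronization.
static VkResult submit_locked(queue_ctx & q, uint32_t count, const VkSubmitInfo * infos, VkFence fence) {
    std::lock_guard<std::mutex> lock(q.mutex);
    return vkQueueSubmit(q.queue, count, infos, fence);
}
```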
xctan 0c25129d30 ggml-cpu : rework weak alias on apple targets (llama/14146)
* ggml-cpu : rework weak alias on apple targets

* fix powerpc detection

* fix ppc detection

* fix powerpc detection on darwin
2025-06-18 12:40:34 +03:00
uvos a433680a2f CUDA/HIP: fix ssm_scan on devices where warp size is not 32 (llama/14196) 2025-06-18 12:40:34 +03:00
uvos aeaed9806f HIP: Replace usage of deprecated preprocessor macro __AMDGCN_WAVEFRONT_SIZE__ (llama/14183) 2025-06-18 12:40:34 +03:00
Anton Mitkov 4ea599afdf sycl: Adding additional cpy dbg print output (llama/14034) 2025-06-18 12:40:34 +03:00
Ewan Crawford 783cf0309f SYCL: Bump oneMath commit (llama/14152)
Update oneMath commit to merged PR https://github.com/uxlfoundation/oneMath/pull/669
which adds SYCL-Graph support for recording CUDA BLAS commands.

With this change the `MUL_MAT` tests now pass on DPC++ CUDA backends with SYCL-Graph
enabled. Prior to this change, an error would be thrown.

```
$ GGML_SYCL_DISABLE_GRAPH=0 ./bin/test-backend-ops -b SYCL0 -o MUL_MAT -p type_a=f16,type_b=f32,m=16,n=1,k=256,bs=\\[1,1\\],nr=\\[2

UR CUDA ERROR:
        Value:           700
        Name:            CUDA_ERROR_ILLEGAL_ADDRESS
        Description:     an illegal memory access was encountered
        Function:        operator()
        Source Location: $HOME/dpcpp/unified-runtime/source/adapters/cuda/queue.cpp:154

Native API failed. Native API returns: 2147483646 (UR_RESULT_ERROR_UNKNOWN)
Exception caught at file:$HOME/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp, line:3598, func:operator()
SYCL error: CHECK_TRY_ERROR((stream)->wait()): Meet error in this line code!
  in function ggml_backend_sycl_synchronize at $HOME/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:3598
$HOME/llama.cpp/ggml/src/ggml-sycl/../ggml-sycl/common.hpp:118: SYCL error
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
```
2025-06-18 12:40:34 +03:00
Anton Mitkov 0097eaf839 sycl: Remove not needed copy f16->f32 for dnnl mul mat (llama/14125) 2025-06-18 12:40:34 +03:00
Georgi Gerganov a96a880f7b cmake : handle whitespaces in path during metal build (llama/14126)
* cmake : handle whitespaces in path during metal build

ggml-ci

* cont : proper fix

ggml-ci

---------

Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2025-06-18 12:40:34 +03:00
Christian Kastner 26c16ad6bd Implement GGML_CPU_ALL_VARIANTS for ARM (llama/14080)
* ggml-cpu: Factor out feature detection build from x86

* ggml-cpu: Add ARM feature detection and scoring

This is analogous to cpu-feats-x86.cpp. However, to detect compile-time
activation of features, we rely on GGML_USE_<FEAT> which need to be set
in cmake, instead of GGML_<FEAT> that users would set for x86.

This is because on ARM, users specify features with GGML_CPU_ARM_ARCH,
rather than with individual flags.

* ggml-cpu: Implement GGML_CPU_ALL_VARIANTS for ARM

Like x86, however to pass around arch flags within cmake, we use
GGML_INTERNAL_<FEAT> as we don't have GGML_<FEAT>.

Some features are optional, so we may need to build multiple backends
per arch version (armv8.2_1, armv8.2_2, ...), and let the scoring
function sort out which one can be used.

* ggml-cpu: Limit ARM GGML_CPU_ALL_VARIANTS to Linux for now

The other platforms will need their own specific variants.

This also fixes the bug that the variant-building branch was always
being executed as the else-branch of GGML_NATIVE=OFF. The branch is
moved to an elseif-branch which restores the previous behavior.
2025-06-18 12:40:34 +03:00
Jeff Bolz 40d0d47cf1 vulkan: Better thread-safety for command pools/buffers (llama/14116)
This change moves the command pool/buffer tracking into a vk_command_pool
structure. There are two instances per context (for compute+transfer) and
two instances per device for operations that don't go through a context.
This should prevent separate contexts from stomping on each other.
2025-06-18 12:40:34 +03:00
Jeff Bolz 40c6525517 vulkan: Track descriptor pools/sets per-context (llama/14109)
Use the same descriptor set layout for all pipelines (MAX_PARAMETER_COUNT == 8)
and move it to the vk_device. Move all the descriptor pool and set tracking to
the context - none of it is specific to pipelines anymore. It has a single vector
of pools and vector of sets, and a single counter to track requests and a single
counter to track use.
2025-06-18 12:40:34 +03:00
lhez 74c68067dc opencl: add `mul_mv_id_q4_0_f32_8x_flat` (llama/14003) 2025-06-18 12:40:34 +03:00
0cc4m 794bf23994 Vulkan: Don't default to CPU device (like llvmpipe), even if no other device is available, to allow fallback to CPU backend (llama/14099) 2025-06-18 12:40:34 +03:00
Isaac McFadyen 26dcc196c7 rpc : nicer error messages for RPC server crash (llama/14076) 2025-06-18 12:40:34 +03:00
Daniel Bevenius ffe5400d1b ggml : disable warnings for tests when using MSVC (ggml/1273)
* ggml : disable warnings for tests when using MSVC

This commit disables warnings for tests on windows when using MSVC.

The motivation for this is that this brings the build output more in
line with what Linux/MacOS systems produce.

There is still one warning generated for the tests which is:
```console
  Building Custom Rule C:/ggml/tests/CMakeLists.txt
cl : command line  warning D9025: overriding '/DNDEBUG' with '/UNDEBUG'
[C:\ggml\build\tests\test-arange.vcxproj]
  test-arange.cpp
  test-arange.vcxproj -> C:\ggml\build\bin\Release\test-arange.exe
```

* ggml : fix typo in tests disable list
2025-06-18 12:40:34 +03:00
Daniel Bevenius 1b01c0cc4e ggml : remove unused ggml_context_container (ggml/1272)
This commit removes the unused `ggml_context_container` structure from
the ggml library. It looks like the usage of this struct was removed in
Commit 4757fe18d56ec11bf9c07feaca6e9d5b5357e7f4 ("ggml : alloc
ggml_contexts on the heap (whisper/2525)").

The motivation for this change is to improve code clarity/readability.
2025-06-18 12:40:34 +03:00
Daniel Bevenius db30f46761 examples : include examples in msvc disable warn (ggml/1270)
This commit adds the examples to the "list" of targets that ignore MSVC
warnings.

The motivation for this is that currently the examples generate a number
of warnings that are ignored/disabled for the core ggml project. This
makes for a cleaner output when building.
2025-06-18 12:40:34 +03:00
Daniel Bevenius 1591558ccc
whisper : clear result_all if vad_samples is empty (#3262)
This commit clears the result_all vector when no VAD segments are found.

The motivation for this is that this would normally be done by
`whisper_full_with_state` but when no VAD segments are detected this
current implementation does not call that function and hence the vector
does not get reset. This can lead to issues in applications like the
server example where it will incorrectly process the old results.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3250
2025-06-18 11:30:29 +02:00
Daniel Bevenius f3ff80ea8d
examples : set the C++ standard to C++17 for server (#3261)
This commit updates the server example to use C++17 as the standard.

The motivation for this change is that currently the ci-run
`ggml-100-mac-m4` is failing when compiling the server example on
macOS. The `talk-llama` example also has this setting so it looks like
an alright change to make.

ggml-ci

Refs: https://github.com/ggml-org/ci/tree/results/whisper.cpp/2a/4d6db7d90899aff3d58d70996916968e4e0d27/ggml-100-mac-m4
2025-06-17 11:29:48 +02:00
w1redch4d 2a4d6db7d9
examples : update usage/help in yt-wsp.sh (#3251)
This commit updates the usage/help message to be more readable and include the environment variables available to set options.
2025-06-16 12:21:16 +02:00
Sacha Arbonel 107c303e69
server : graceful shutdown, atomic server state, and health endpoint improvements (#3243)
* feat(server): implement graceful shutdown and server state management

* refactor(server): use lambda capture by reference in server.cpp
2025-06-16 10:14:26 +02:00
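Graceful shutdown with an atomic server state is commonly built from a signal handler that flips an atomic flag which the accept loop polls. A minimal, generic sketch of that pattern, assuming a simple polling loop rather than the actual server.cpp code:
```cpp
#include <atomic>
#include <csignal>
#include <cstdio>

enum class server_state { loading, ready, stopping };

static std::atomic<server_state> g_state{server_state::loading};

static void on_signal(int) {
    // keep the handler async-signal-safe: only store to the atomic flag
    g_state.store(server_state::stopping, std::memory_order_relaxed);
}

int main() {
    std::signal(SIGINT,  on_signal);
    std::signal(SIGTERM, on_signal);

    g_state.store(server_state::ready);
    while (g_state.load() != server_state::stopping) {
        // accept and handle one request here; a /health endpoint can simply
        // report the current value of g_state
    }
    std::printf("shutting down gracefully\n");
    return 0;
}
```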
Daniel Bevenius 705db0f728
whisper : fix VAD processing for skipped audio segments (#3230)
This commit addresses an issue with token timestamps when audio segments
are skipped, in `whisper_exp_compute_token_level_timestamps` related to
the VAD processing and the energy levels.

The motivation for this is that the token timestamps exceed the energy
array bounds due to segment timing misalignment:
```console
                  (skipped introduction)
                    ↓
Audio segment:     [2600ms → 5600ms]  (3 seconds of actual audio)
Energy array:      [0 → 480652]       (samples for 3 seconds)
Token timestamps:  [3266ms → 3408ms]  (absolute timestamps)
```
So both `s0` and `t1` get clamped to the maximum sample index (480652)
which causes the start/end timestamps to be the same for all the tokens
after a certain point.

This is addressed by using segment-relative timestamps in the
`timestamp_to_sample` and `sample_to_timestamp` functions.
2025-06-13 17:35:52 +02:00
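The fix above turns absolute token timestamps into segment-relative ones before indexing the per-segment energy array. A minimal sketch of that conversion with clamping, using illustrative names and units (the real helpers are `timestamp_to_sample` and `sample_to_timestamp`):
```cpp
#include <algorithm>
#include <cstdint>

// Convert an absolute timestamp (ms) into an index into the per-segment
// energy array by making it relative to the segment start first.
static int64_t timestamp_to_segment_sample(int64_t t_ms, int64_t segment_start_ms,
                                           int64_t n_samples, int sample_rate) {
    const int64_t rel_ms = t_ms - segment_start_ms;       // segment-relative time
    const int64_t sample = rel_ms * sample_rate / 1000;   // time -> sample index
    return std::clamp<int64_t>(sample, 0, n_samples - 1); // stay inside the energy array
}

// Using the values from the commit message as an illustration:
// timestamp_to_segment_sample(3266, 2600, 480652, 16000) == 10656,
// instead of everything clamping to the last sample as before.
```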
Daniel Bevenius 0a4d85cf8a
server : add Voice Activity Detection (VAD) support (#3246)
* server : add Voice Activity Detection (VAD) support

This commit adds support for Voice Activity Detection (VAD) in the
server example.

The motivation for this is to enable VAD processing when using
whisper-server.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3089

* server : add VAD parameters to usage in README.md [no ci]

This commit also adds a few missing parameters.

* server : fix conflicting short options [no ci]
2025-06-13 13:24:03 +02:00
Daniel Bevenius 9df8d54bcb
cli : fix short name conflict for vad options [no ci] (#3247)
This commit fixes a short name conflict in whisper-cli for
`--vad-min-speech-duration-ms` and `--vad-min-silence-duration-ms` which
currently have the same short name `-vsd`.

Refs: https://github.com/ggml-org/whisper.cpp/pull/3246#pullrequestreview-2923800114
2025-06-13 10:25:25 +02:00
Daniel Bevenius 20d203aacf
ruby : add .gitignore entries for ext directory (#3245)
This commit adds entries to `.gitignore` for directories in the
`ext` directory.

The motivation for this is that currently after building locally these
following files are reported by git as untracked:
```console
Untracked files:
  (use "git add <file>..." to include in what will be committed)
	ext/examples/
	ext/ggml/
	ext/include/
	ext/scripts/
	ext/src/
```
2025-06-13 10:04:20 +02:00
Daniel Bevenius ebbc874e85
ci : update windows runner to windows-2022 (#3242)
* ci : update windows runner to windows-2022

This commit changes the windows-2019 runner to windows-2022.

The motivation for this is that the windows-2019 runner is scheduled for
deprecation and will be removed on 2025-06-30. There are currently "brownout"
periods that started 2025-06-01, and during these times jobs with
windows-2019 will fail, which has happened lately on our CI.

Refs: https://github.com/actions/runner-images/issues/12045
2025-06-11 13:53:16 +02:00
Daniel Bevenius 2679bec6e0
ruby : add cleaning of library names in dependencies (#3241)
* ruby : add cleaning of library names in dependencies

This commit adds a cleaning step to the library names in the
`Dependencies` class of the Ruby bindings.

The motivation for this is that the introduction of a library name
alias for ggml in Commit b933d17c30
("Add in-build ggml::ggml ALIAS library (ggml/1260)") causes the Makefile
generation to break:
```console
$ sed -n '165,170p' ext/Makefile
CLEANOBJS     = $(OBJS) *.bak
TARGET_SO_DIR_TIMESTAMP = $(TIMESTAMP_DIR)/.sitearchdir.time
$(TARGET_SO): libcommon.a libwhisper.a libggml\n(ggml::ggml).a libggml-cpu.a libggml-base.a
libcommon.a libwhisper.a libggml\n(ggml::ggml).a libggml-cpu.a libggml-base.a: cmake-targets
cmake-targets:
	/usr/bin/cmake -S sources -B build -D BUILD_SHARED_LIBS=OFF -D CMAKE_ARCHIVE_OUTPUT_DIRECTORY=/home/danbev/work/ai/whisper.cpp/bindings/ruby/ext -D CMAKE_POSITION_INDEPENDENT_CODE=ON
```

* squash! ruby : add cleaning of library names in dependencies

Apply PR review feedback.
2025-06-10 15:06:40 +02:00
Georgi Gerganov 93d543905e ggml : fix weak alias win32 (#0)
ggml-ci
2025-06-10 12:40:33 +03:00
Georgi Gerganov 962361bd79 android : fix builds (#0)
ggml-ci
2025-06-10 12:40:33 +03:00
Georgi Gerganov dbe81c1042 sync : ggml
ggml-ci
2025-06-10 12:40:33 +03:00
Georgi Gerganov 175e7e4f1a files : remove old sources (part 2) 2025-06-10 12:40:33 +03:00
Georgi Gerganov 56475d01dc sync : ggml
ggml-ci
2025-06-10 12:40:33 +03:00
Georgi Gerganov 38347a7dda files : remove old sources 2025-06-10 12:40:33 +03:00
Georgi Gerganov db264d6220 talk-llama : sync llama.cpp
ggml-ci
2025-06-10 12:40:33 +03:00
Georgi Gerganov 96eaf46ec6 sync : ggml
ggml-ci
2025-06-10 12:40:33 +03:00
Georgi Gerganov 7a675807a2 metal : use less stack memory in FA kernel (llama/14088)
* metal : use less stack memory in FA kernel

ggml-ci

* cont : fix BF16 variant
2025-06-10 12:40:33 +03:00
xctan 8cbc889f85 ggml-cpu : split arch-specific implementations (llama/13892)
* move ggml-cpu-aarch64 to repack

* split quantize_row_q8_0/1

* split helper functions

* split ggml_vec_dot_q4_0_q8_0

* split ggml_vec_dot_q4_1_q8_1

* split ggml_vec_dot_q5_0_q8_0

* split ggml_vec_dot_q5_1_q8_1

* split ggml_vec_dot_q8_0_q8_0

* split ggml_vec_dot_tq1_0_q8_K

* split ggml_vec_dot_tq2_0_q8_K

* split ggml_vec_dot_q2_K_q8_K

* split ggml_vec_dot_q3_K_q8_K

* split ggml_vec_dot_q4_K_q8_K

* split ggml_vec_dot_q5_K_q8_K

* split ggml_vec_dot_q6_K_q8_K

* split ggml_vec_dot_iq2_xxs_q8_K

* split ggml_vec_dot_iq2_xs_q8_K

* split ggml_vec_dot_iq2_s_q8_K

* split ggml_vec_dot_iq3_xxs_q8_K

* split ggml_vec_dot_iq3_s_q8_K

* split ggml_vec_dot_iq1_s_q8_K

* split ggml_vec_dot_iq1_m_q8_K

* split ggml_vec_dot_iq4_nl_q8_0

* split ggml_vec_dot_iq4_xs_q8_K

* fix typos

* fix missing prototypes

* rename ggml-cpu-quants.c

* rename ggml-cpu-traits

* rename arm folder

* move cpu-feats-x86.cpp

* rename ggml-cpu-hbm

* update arm detection macro in quants.c

* move iq quant tables

* split ggml_quantize_mat_q8_0/K

* split ggml_gemv_*

* split ggml_gemm_*

* rename namespace aarch64 to repack

* use weak aliases to replace test macros

* rename GGML_CPU_AARCH64 to GGML_CPU_REPACK

* rename more aarch64 to repack

* clean up rebase leftover

* fix compilation errors

* remove trailing spaces

* try to fix clang compilation errors

* try to fix clang compilation errors again

* try to fix clang compilation errors, 3rd attempt

* try to fix clang compilation errors, 4th attempt

* try to fix clang compilation errors, 5th attempt

* try to fix clang compilation errors, 6th attempt

* try to fix clang compilation errors, 7th attempt

* try to fix clang compilation errors, 8th attempt

* try to fix clang compilation errors, 9th attempt

* more cleanup

* fix compilation errors

* fix apple targets

* fix a typo in arm version of ggml_vec_dot_q4_K_q8_K

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-06-10 12:40:33 +03:00
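Several of the ggml-cpu commits above rely on weak aliases: a generic symbol serves as the default definition, and an arch-specific object file can override it with a strong definition of the same name. A minimal GCC/Clang-style sketch of the idea, with illustrative names rather than the actual ggml-cpu macros:
```cpp
// Generic fallback implementation; extern "C" avoids name mangling so the
// alias target below can refer to it by its plain symbol name.
extern "C" void vec_dot_generic(int n, float * s, const float * x, const float * y) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i) {
        sum += x[i] * y[i];
    }
    *s = sum;
}

// Weak alias: vec_dot resolves to vec_dot_generic unless some arch-specific
// object file provides a strong definition of vec_dot that overrides it.
extern "C" void vec_dot(int n, float * s, const float * x, const float * y)
    __attribute__((weak, alias("vec_dot_generic")));
```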
Diego Devesa e16a84cd95 cuda : fix device sync on buffer clear (llama/14033) 2025-06-10 12:40:33 +03:00
Xinpeng Dou 26282282fa CANN: Simplify the environment variable setting (#13104)
* Simplify the environment variable setting to specify the memory pool type.

* Adjust the GGML_CANN_ASYNC_MODE setting to accept yes, enable, 1, or on (case-insensitive) as valid options.

* update

* fix CI

* update

* delete whitespace

* fix according to review

* update CANN.md

* update CANN.md
2025-06-10 12:40:33 +03:00
Nicolò Scipione 4737a8c780 sycl: Add reorder to Q6_K mmvq implementation (llama/13885)
* Add Reorder to Q6_K mmvq implementation

* Address PR comments: clean up comments

* Remove unused parameter after refactoring q4_k

* Adding inline to function and removing unnecessary reference to int

---------

Signed-off-by: nscipione <nicolo.scipione@codeplay.com>
2025-06-10 12:40:33 +03:00
Diego Devesa 8a70f4d18b cuda : fix buffer type check with integrated GPUs (llama/14069) 2025-06-10 12:40:33 +03:00
Akarshan Biswas 489dc158a6 SYCL: Implement few same quantized type copy kernels (llama/13739)
* SYCL: Implement few same quantized type copy kernels

* Use memcpy for copying contiguous tensors

ggml-ci

* feat(sycl): add contiguous tensor copy support and device checks

Adds a memcpy path for contiguous tensors of the same type to optimize data transfer. Updates device support checks to recognize contiguous tensor operations, improving compatibility and performance.

* refactor: replace specific block copy functions with template

The changes replace multiple redundant block copy functions (e.g., cpy_block_q8_0_q8_0, cpy_block_q5_0_q5_0) with a single templated function cpy_blck_q_q. This reduces code duplication by using a generic template that works for any block type, improving maintainability while preserving the same functionality. The template is instantiated with specific block types (e.g., block_q8_0) where needed.

* Exclude BF16 support for COPY tensors for now
ggml-ci

* perf: adjust SYCL copy kernel block sizes for efficiency

Use ceil_div to ensure full element coverage and update nd_range parameters to better align with SYCL block sizes, improving parallelism and device utilization in copy operations.
2025-06-10 12:40:33 +03:00
Masato Nakasaka f0f5a9f7fb vulkan: Enable VK_KHR_cooperative_matrix extension for Intel Xe2 GPUs (llama/14001)
* allowing B580 and U9-288V

* experimenting code to detect Xe2

* allowing coopmat only for Xe2 GPUs

* fixed comment wording

* fixed comment wording

* removed unnecessary driver check
2025-06-10 12:40:33 +03:00
Diego Devesa 13a03c5d33 llama : allow using mmap without PrefetchVirtualMemory, apply GGML_WIN_VER to llama.cpp sources (llama/14013) 2025-06-10 12:40:33 +03:00
Jeff Bolz 6dd91d4f7e vulkan: automatically deduce size of push constants (llama/13936) 2025-06-10 12:40:33 +03:00
Ervin Áron Tasnádi 5171b24f70 ggml-vulkan: adds support for op CONV_TRANSPOSE_1D (llama/13813)
* * ggml-vulkan: adds op CONV_TRANSPOSE_1D

* test-backend-ops: adds more sophisticated tests for CONV_TRANSPOSE_1D

* Missing barrier added to shader.
Number of additional tests reduced to 108.

* * Fixes typo in variable name.

* Removes extra whitespaces.

* Adds int64->int32 casts to prevent possible warnings.

* Problem size reduced in tests to pass tests with llvmpipe.

* supports_op condition moved from unintended position
2025-06-10 12:40:33 +03:00
Diego Devesa 23e2fe0682 releases : use dl backend for linux release, remove arm64 linux release (llama/13996) 2025-06-10 12:40:33 +03:00
Johannes Gäßler 7f4d110f53 CUDA: fix FTZ in FA for Gemma 3 (llama/13991) 2025-06-10 12:40:33 +03:00
Jeff Bolz ee0ef39fee vulkan: fix warnings in perf logger querypool code (llama/13937) 2025-06-10 12:40:33 +03:00
lhez 62791ba2e6 opencl: add `backend_synchronize` (llama/13939)
* This is not needed by the normal use where the result is read
  using `tensor_get`, but it allows perf mode of `test-backend-ops`
  to properly measure performance.
2025-06-10 12:40:33 +03:00
rmatif e16ef08884 OpenCL: Add concat, tsembd, upscale, tanh, pad and repeat (llama/13840)
* add concat, pad, repeat, tsembd, tanh, upscale

* small fixes
2025-06-10 12:40:33 +03:00
Georgi Gerganov c72d3ce935 metal : use F32 accumulators in FA kernels (llama/13975)
ggml-ci
2025-06-10 12:40:33 +03:00
shalinib-ibm 126aeb4a49 cmake : Handle mixed-case 'Power' strings in POWER CPU detection (llama/13966)
Some systems report the CPU implementation as "Power11" instead of "POWER11".
The existing CMake logic uses a case-sensitive regular expression to extract
the CPU generation, which fails when the casing doesn't exactly match "POWER".

This patch provides a fix by first converting the string to uppercase before applying the regex.

Signed-off-by: root <root@rheldb2v.pperf.tadn.ibm.com>
Co-authored-by: root <root@rheldb2v.pperf.tadn.ibm.com>
2025-06-10 12:40:33 +03:00
Atharva Dubey ef2a79d2b8 sycl: quantize and reorder the input to q8_1 when reorder is enabled (llama/13826)
* [WIP]: fuse q8 quantization and reorder

* wip2: fuse q8 quantization and reorder

* working q8 reorder commit

* restored common.hpp

* remove debug prints

* remove unnecessary headers and remove trailing whitespace

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Alberto Cabrera Pérez <alberto.cabrera@intel.com>

---------

Co-authored-by: Alberto Cabrera Pérez <alberto.cabrera@intel.com>
2025-06-10 12:40:33 +03:00
Johannes Gäßler 9589645e72 gguf: fix failure on version == 0 (llama/13956) 2025-06-10 12:40:33 +03:00
Aaron Teo 20f913d119 ggml: check if non-native endian model is being loaded (llama/13943)
* gguf: prevent non-native endian models from being loaded

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* gguf: update error message

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* gguf: make the non-native endian check more verbose

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml: move ggml_assert location

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml: reword the endianness check error message

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-06-10 12:40:33 +03:00
Kai Pastor b933d17c30 Add in-build ggml::ggml ALIAS library (ggml/1260)
Enable uniform linking with subproject and with find_package.
2025-06-10 12:40:33 +03:00
KITAITI Makoto fbead67549
ruby : output format (#3237)
* Fix a typo

* Don't allocate output string unless needed

* Add methods to output SRT and WebVTT

* Add tests for output methods

* Make constants for output private

* Add signatures for output methods

* Add document on output methods

* Fix method name: Segment#speaker_next_turn? -> #speacker_turn_next?

* Add Whisper::Segment#deconstruct_keys

* Add test for Whisper::Context#deconstruct_keys

* Add signature of Whisper::Segment#deconstruct_keys

* Use parentheses to suppress warning

* Update date
2025-06-10 06:10:17 +02:00
藍+85CD d78f081423
ci : build and publish main-intel image (#3231) 2025-06-09 06:42:53 +02:00
藍+85CD b175baa665
docker : add main-intel dockerfile (#3229) 2025-06-06 05:30:02 +02:00
KITAITI Makoto 799eacdde4
ruby : Add parallel transcription support (#3222)
* Fix indentation of code sample in document comment

* Make Whisper::Context#transcribe able to run non-parallel

* Add test for Whisper::Context#transcribe with parallel option

* Follow signature API change of Context#transcribe

* Remove useless variable assignment

* Move simple usage up in README

* Add need help section in README

* Add document on Context#transcribe's parallel option in README

* Update date

* Fix signature of Context.new

* Make Context#transcribe accept n_processors option

* Make test follow #transcribe's change

* Make RBS follow #transcribe's change

* Add document for #transcribe's n_processors option

* Rename test directory so that Rake tasks' default setting is used
2025-06-04 14:50:18 +09:00
Daniel Bevenius 82f461eaa4
ci : add mirror for ports.ubuntu.com (ARM packages) (#3221)
This commit updates the build workflow to replace `ports.ubuntu.com`
with `mirror.kumi.systems` in the apt sources list for ARM64 builds.

The motivation for this change is to improve package download
reliability and speed by using a more stable mirror for ARM64 packages.
2025-06-03 07:56:58 +02:00
Joas Dev 269dad68a2
bindings.java : apply whisperParams in fullTranscribeWithTime instead of ignoring them (#3201)
This pull request fixes a bug in the fullTranscribeWithTime method, where the whisperParams argument was declared but never used. As a result, the model did not apply the configuration defined in whisperParams.
2025-06-03 06:15:21 +02:00
R0CKSTAR 121d27a495
musa: correct MUSA SDK rc4.0.1 download URL (#3217)
* musa: correct MUSA SDK rc4.0.1 download URL

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Fix typo

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-06-03 06:02:12 +02:00
Daniel Bevenius e05af2457b
ci : use mirrors.kernel.org for Ubuntu packages (#3220)
This commit updates the ubuntu jobs to use mirror sites instead of archive.ubuntu.com.

The motivation of this is an attempt to make the CI build more stable and avoid errors like:
https://github.com/ggml-org/whisper.cpp/actions/runs/15384056535/job/43291948394?pr=3217
2025-06-02 16:46:40 +02:00
Daniel Bevenius b505539670
node : add language detection support (#3190)
This commit adds support for language detection in the Whisper Node.js
addon example. It also updates the node addon to return an object
instead of an array as the results.

The motivation for this change is to enable the inclusion of the
detected language in the result, in addition to the transcription
segments.

For example, when using the `detect_language` option, the result will
now be:
```console
{ language: 'en' }
```

And if the `language` option is set to "auto", it will also return:
```console
{
  language: 'en',
  transcription: [
    [
      '00:00:00.000',
      '00:00:07.600',
      ' And so my fellow Americans, ask not what your country can do for you,'
    ],
    [
      '00:00:07.600',
      '00:00:10.600',
      ' ask what you can do for your country.'
    ]
  ]
}
```
2025-06-02 14:58:05 +02:00
Georgi Gerganov 7fd6fa8097 talk-llama : sync llama.cpp
ggml-ci
2025-06-01 15:14:44 +03:00
Georgi Gerganov 3f46282cbe sync : ggml
ggml-ci
2025-06-01 15:14:44 +03:00
Max Krasnyansky 1e16340f4b threading: support for GGML_SCHED_PRIO_LOW, update thread info on Windows to avoid throttling (llama/12995)
* threading: support for GGML_SCHED_PRIO_LOW, update thread info on Windows to avoid throttling

We talked about adding LOW priority for GGML threads in the original threadpool PR.
It might be useful for some cases to avoid contention.

Latest Windows ARM64 releases started parking (offlining) the CPU cores
more aggressively, which results in suboptimal performance with n_threads > 4.
To deal with that we now disable Power Throttling for our threads for the NORMAL
and higher priorities.

Co-authored-by: Diego Devesa <slarengh@gmail.com>

* threading: disable SetThreadInfo() calls for older Windows versions

* Update tools/llama-bench/llama-bench.cpp

Co-authored-by: Diego Devesa <slarengh@gmail.com>

---------

Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-06-01 15:14:44 +03:00
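On Windows, opting a thread out of power throttling goes through the `ThreadPowerThrottling` information class. A minimal sketch of that call (requires Windows 10 1709+ and a recent SDK; illustrative, not the exact ggml threadpool code):
```cpp
#define WIN32_LEAN_AND_MEAN
#include <windows.h>

// Ask the scheduler not to apply power throttling (EcoQoS) to this thread,
// which helps avoid core-parking penalties on recent Windows ARM64 builds.
static bool disable_power_throttling() {
    THREAD_POWER_THROTTLING_STATE state = {};
    state.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask   = 0; // 0 = throttling explicitly disabled for this control

    return SetThreadInformation(GetCurrentThread(), ThreadPowerThrottling,
                                &state, sizeof(state)) != 0;
}
```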
Shawn yang 4a50254998 CUDA: add a prop in ggml_cuda_device_info to distinguish iGPU or dGPU in cuda (#13856) (llama/13895)
* 1.  add "integrated" in ggml_cuda_device_info to distinguish whether it is an integrated GPU or a discrete GPU
2. Adjust the func:"ggml_backend_cuda_device_supports_buft" for this new feature

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Adjusted code indentation

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Fixed incorrect setting of variable types

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Adjusted the judgment logic

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* add a host_buft assert in case of integrated_cuda_device with func:'evaluate_and_capture_cuda_graph()'

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Add a defensive security assert

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Adjusted the support judgment logic.

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* revoke the suggested commit changes as they are not applicable on Jetson devices

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Add parentheses to enforce operator precedence​

Co-authored-by: Diego Devesa <slarengh@gmail.com>

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Fix ci bug: add a space

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: yangxiao <yang_xl@tju.edu.cn>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: yangxiao <yangxl_zz@qq.com>
Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-06-01 15:14:44 +03:00
Johannes Gäßler a5aff28198 CUDA: fix typo in FlashAttention code (llama/13926) 2025-06-01 15:14:44 +03:00
Diego Devesa 6c0472ab8f sched : avoid changing cur_copy when a graph is already allocated (llama/13922) 2025-06-01 15:14:44 +03:00
Diego Devesa b14cee184a cuda : prevent using split buffers with 3d/4d matrices (llama/13919) 2025-06-01 15:14:44 +03:00
Akarshan Biswas f7f92d0aab SYCL: Add mrope kernel (llama/13755)
* SYCL: Add mrope kernel

* feat: Optimize rope operations with vectorization

Uses `sycl::vec` to load and store two elements at a time,
significantly improving performance in `rope_norm`,
`rope_neox`, and `rope_multi`. This reduces the number of memory
accesses and leverages SIMD instructions for faster execution.

* Use ceil_div
2025-06-01 15:14:44 +03:00
Christian Kastner 1893359cfd cmake: Guard GGML_CPU_ALL_VARIANTS by architecture (llama/13890) 2025-06-01 15:14:44 +03:00
Yibo Cai ea643c6ae3 arm64: optimize q4_k_q8_k kernel with i8mm (llama/13886)
This PR improves q4_k_q8_k gemm kernel with arm64 i8mm instruction.

Tested on neoverse-n2 with llama3 8b q4_k_m quantization model.
- 34% ~ 50% S_PP uplift for all batch sizes
- 12% ~ 37% S_TG uplift for batch size 4 and above

Perplexity doesn't change with this PR.

```
// tested on neoverse-n2
$ llama-batched-bench \
      -m Meta-Llama-3-8B-Instruct-Q4_K_M.gguf \
      --no-mmap -fa \
      -c 8192 -b 4096 -ub 512 -npp 128 -ntg 128 \
      -npl 1,2,4,8,16,32 \
      -t 64

---------------------------------------------------------------------
|    PP |     TG |    B |       S_PP t/s      |       S_TG t/s      |
|       |        |      | original |  this pr | original |  this pr |
|-------|--------|------|----------|----------|----------|----------|
|   128 |    128 |    1 |   110.12 |   147.83 |    24.36 |    24.28 |
|   128 |    128 |    2 |   121.16 |   172.42 |    46.36 |    47.93 |
|   128 |    128 |    4 |   120.15 |   169.75 |    74.68 |    84.00 |
|   128 |    128 |    8 |   130.97 |   196.81 |    91.04 |   114.74 |
|   128 |    128 |   16 |   131.01 |   196.88 |   101.43 |   135.79 |
|   128 |    128 |   32 |   130.85 |   196.51 |   106.97 |   147.29 |
---------------------------------------------------------------------
```
2025-06-01 15:14:44 +03:00
Christian Kastner 1d7b3c79f4 cmake: Factor out CPU architecture detection (llama/13883)
* cmake: Define function for querying architecture

The tests and results match exactly those of src/CMakeLists.txt

* Switch arch detection over to new function
2025-06-01 15:14:44 +03:00
Vineel Abhinav ccfaac2bb0 ggml: aarch64: Implement SVE F32 kernels for Mamba Sequential Scan Algorithm (llama/13882)
* F32-Mamba-Seq_Scan-SVE

* Fix formatting

* ggml : missing space

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-06-01 15:14:44 +03:00
Vineel Abhinav 1230d37bca ggml: aarch64: Implement SVE F32 kernels for vector functions (llama/13843)
* F32-Mamba-SVE

* F32-Mamba-SVE

* Resolve test errors-1

* Resolve test errors-2

* F32-vec-SVE

* F32-vec-SVE

* F32-vec-SVE
2025-06-01 15:14:44 +03:00
Johannes Gäßler 9a500394ad CUDA: fix FA tg at long context for CC >= 8.9 (llama/13852) 2025-06-01 15:14:44 +03:00
leo-pony 0035b8527c CANN: Add SOC TYPE printing in cmake configuration (llama/13837) 2025-06-01 15:14:44 +03:00
lhez 3623186312 opencl: add new ops - `argsort`, `div`, `sub`, `addrows`, `sigmoid`, `group_norm` (llama/13787)
* opencl: add `argsort`

* opencl: add `div`

* opencl: add `add_rows`

* opencl: add `sub`

* opencl: add `sigmoid`, both `f16` and `f32`

* opencl: add `group_norm`
2025-06-01 15:14:44 +03:00
lhez 67beac47f3 opencl: mark `mul_mat` `f32f32` as supporting non-contiguous tensors (llama/13790) 2025-06-01 15:14:44 +03:00
Jeff Bolz 47a19bae25 vulkan: use timestamp queries for GGML_VULKAN_PERF (llama/13817)
Also change it to be controlled by an env var rather than cmake flag
2025-06-01 15:14:44 +03:00
Akarshan Biswas 3d5c7ca4bc SYCL: add gelu_erf kernel (llama/13749)
* SYCL: add gelu_erf kernel

* refactor code

Co-authored-by: Atharva Dubey <atharva.dubey@codeplay.com>

* Use scope_op_debug_print

---------

Co-authored-by: Atharva Dubey <atharva.dubey@codeplay.com>
2025-06-01 15:14:44 +03:00
Xuan-Son Nguyen 4dfb2c2215 ggml : add ggml_repeat_4d (llama/13824) 2025-06-01 15:14:44 +03:00
Kai Pastor ad433403ce vulkan : Remove unexpected ; (ggml/1253) 2025-06-01 15:14:44 +03:00
Kai Pastor 4064dd6484 cmake : Fix broken CMake error messages (ggml/1252) 2025-06-01 15:14:44 +03:00
Radoslav Gerganov fd75c4995b ggml : remove ggml_graph_import and ggml_graph_export declarations (ggml/1247)
The implementation is already deleted with commit 9d0762e.

closes: #1235
2025-06-01 15:14:44 +03:00
KITAITI Makoto 0251445005
ruby : add Core ML support (#3214)
* Prevent overflow

* Fix memsize of Whisper::Context

* Rename xxx_initialize to more Ruby-esque name: xxx_s_new

* Define Whisper::Model::ZipURI

* Define Whisper::Model.coreml_compiled_models

* Make Options' @cmake_options Hash

* Use --{enable,disable}-whisper-coreml option for -I/opt/homebrew/opt/llvm/include

* Prepare Core ML model if enabled

* Add test for ZipURI

* Add signatures for ZipURI

* Add Whisper.system_info_str

* Add test for Whisper.system_info_str

* Add signature for Model.coreml_compiled_models

* Add signature for Whisper.system_info_str

* Add test for Core ML

* Update date

* Maintain .gitignore
2025-06-01 18:16:02 +09:00
Daniel Bevenius 98dfe8dc26
vad : revisit timestamp alignment/mapping (#3173)
* vad : revisit timestamp alignment/mapping

This commit improves the timestamp alignment by introducing a mapping
table, adding intermediate reference points for longer segments, and
using binary search for lookups.

The motivation for this change is to address issues with the current
solution where zero-length segments are possible, and also to improve
the precision of the VAD timestamps.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3162

* vad : use uint64_t for time mapping

This commit changes the type of the `processed_time` and `original_time`
fields in the `vad_time_mapping` struct from `double` to `uint64_t`.

The motivation for this change is to improve precision, avoid
floating-point inaccuracies, and be consistent with other parts of
the code base that use `uint64_t` for time representation.

This is a part of a refactoring where I'm also going to change the
vad_segment_info struct to use `uint64_t` for the start and end times.
This is the reason for the not so pleasant conversion and casts in the
code at the moment.

* vad : change vad_segment_info and whisper_vad_segment to use uint64_t

* vad : use int64_t instead of uint64_t for timestamps

To be consistent with other timestamps in the codebase.

* vad : add centisecond conversion functions

* vad : extract vad processing from whisper_full_with_state

This commit extracts the VAD processing from the
`whisper_full_with_state` function into the `whisper_full` and
`whisper_full_parallel` functions.

The motivation for this is that I did not take into account that when
`whisper_full_parallel` is called with `n_processors > 1`, the
VAD processing would not be applied correctly. Instead the VAD
processing should be done prior to processing in the case of
`whisper_full_parallel`.

* vad : remove filtered_n_samples from whisper_vad

The commit removes the parameter `filtered_n_samples` from the
`whisper_vad` function signature and its usage, as it is no longer
needed since the filtered samples are now a vector (previously it was a
float*).

The motivation for this is to simplify the usage of this function.

* vad : remove vad_mapping_table_initialized flag

* vad : fix leaning (none) of pointer/references
2025-05-30 06:28:46 +02:00
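The mapping table described above pairs timestamps on the VAD-filtered timeline with timestamps in the original audio, and lookups interpolate between the two surrounding reference points. A small sketch of such a lookup, assuming the table is sorted by `processed_time` (field names follow the `vad_time_mapping` struct mentioned in the commit; the interpolation details are illustrative):
```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct vad_time_mapping {
    int64_t processed_time; // timestamp on the VAD-filtered timeline
    int64_t original_time;  // corresponding timestamp in the original audio
};

// Map a timestamp from the filtered timeline back to the original timeline
// by binary-searching the sorted table and interpolating linearly.
static int64_t map_processed_to_original(const std::vector<vad_time_mapping> & table, int64_t t) {
    if (table.empty()) {
        return t;
    }
    auto it = std::lower_bound(table.begin(), table.end(), t,
        [](const vad_time_mapping & m, int64_t v) { return m.processed_time < v; });

    if (it == table.begin()) return table.front().original_time;
    if (it == table.end())   return table.back().original_time;

    const vad_time_mapping & lo = *(it - 1);
    const vad_time_mapping & hi = *it;
    const int64_t span = hi.processed_time - lo.processed_time;
    if (span == 0) {
        return lo.original_time;
    }
    // linear interpolation between the two surrounding reference points
    return lo.original_time +
           (hi.original_time - lo.original_time) * (t - lo.processed_time) / span;
}
```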
KITAITI Makoto e5e900dd00
ruby : handle build options on installation (#3206)
* Don't pass empty string to cmake command

* Refactor Dependencies

* Use found cmake path for options

* Maintain extsources.rb

* List dependent files in a directory-separator-agnostic way

* Prepend whitespace before '='

* Handle build options on install

* Remove useless test

* Retrieve gem file name and version from spec file

* Bump version to 1.3.3

* Update date

* Add install option examples

* [skip ci]Remove unused module
2025-05-30 01:32:49 +09:00
Daniel Tang 4d18e52f55
ggml : Fix backtrace breaking Windows build (#3203) 2025-05-29 13:26:58 +03:00
Georgi Gerganov ca890f566f sync : ggml
ggml-ci
2025-05-29 09:56:26 +03:00
Radoslav Gerganov 48dddbbac1 ggml : install dynamic backends (ggml/1240) 2025-05-29 09:56:26 +03:00
Daniel Tang 5ea2c37a4c ggml : Print backtrace on uncaught C++ exceptions (ggml/1232)
The goal is to have what users call "full logs" contain the backtrace.

This is registered upon ggml_init. Also fixes a minor fd leak on Linux.
2025-05-29 09:56:26 +03:00
Daniel Bevenius 73a8c5fb94
whisper : remove whisper_load_backends function (#3196)
* whisper : remove whisper_load_backends function

This commit removes the `whisper_load_backends` function, which was used
to load all GGML backends.

The motivation for this change is to push the responsibility of loading
backends to user applications, giving them more control over which
backends to load and when. See the references below for more context.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3182
Refs: https://github.com/ggml-org/whisper.cpp/pull/3042#issuecomment-2801778733
Refs: https://github.com/ggml-org/whisper.cpp/pull/3042#issuecomment-2801928990

* ruby : add check for rwc is NULL

This commit adds a check to ensure that the `rwc` pointer is not NULL
before attempting to mark its members in the garbage collector.

The motivation for this is an attempt to see if this fixes the CI build,
as I'm not able to reproduce the issue locally.

Refs: https://github.com/ggml-org/whisper.cpp/actions/runs/15299612277/job/43036694928?pr=3196
2025-05-29 08:03:17 +02:00
KITAITI Makoto 1f5fdbecb4
ruby : add VAD support, migration to Ruby's newer API (#3197)
* Add VAD models

* Extract function to normalize model path from ruby_whisper_initialize()

* Define ruby_whisper_vad_params struct

* Add VAD-related features to Whisper::Params

* Add tests for VAD-related features

* Define Whisper::VADParams

* Add Whisper::VAD::Params attributes

* Add test suite for VAD::Params

* Make older test to follow namespace change

* Add test for transcription with VAD

* Add assertion for test_vad_params

* Add signatures for VAD-related methods

* Define VAD::Params#==

* Add test for VAD::Params#==

* Fix Params#vad_params

* Add test for Params#vad_params

* Fix signature of Params#vad_params

* Use macro to define VAD::Params params

* Define VAD::Params#initialize

* Add tests for VAD::Params#initialize

* Add signature for VAD::Params.new

* Add documentation on VAD in README

* Wrap register_callbacks in prepare_transcription for clearer meaning

* Set whisper_params.vad_params just before transcription

* Don't touch NULL

* Define ruby_whisper_params_type

* Use TypedData_XXX for ruby_whisper_params instead of Data_XXX

* Remove unused functions

* Define rb_whisper_model_data_type

* Use TypedData_XXX for ruby_whisper_model instead of Data_XXX

* Define ruby_whisper_segment_type

* Use TypedData_XXX for ruby_whisper_segment instead of Data_XXX

* Define ruby_whisper_type

* Use TypedData_XXX for ruby_whisper instead of Data_XXX

* Qualify with const
2025-05-28 20:05:12 +09:00
Simon Booth 5720426d97
whisper : install shared libs when using GGML_BACKEND_DL (#3195) 2025-05-28 10:15:04 +02:00
Fujimoto Seiji b9d27b1358
tests : add a new benchmark test for long-form audio (#3185)
* tests : add a new benchmark test for long-form audio

Based on "Earnings-21" corpus by Del Rio et al.

    Earnings-21: A Practical Benchmark for ASR in the Wild (2021)
    https://arxiv.org/abs/2104.11348

This dataset contains 39 hours of long-form speech, sourced from public
earning calls. Each recording contains roughly 50 minutes of English
dialogues between multiple speakers (2-20 persons).

This benchmark suite should allow us to evaluate the performance of
whisper.cpp on long-form audio data.

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>

* tests : apply PR feedback to 'earnings21/README.md'

Based on feedback from Daniel Bevenius.

 - Simplify how to download & prepare a Silero VAD model.
 - Fix typo: inferece -> inference

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>

* tests : avoid crashing on non-UTF-8 characters

Based on feedback from Daniel Bevenius.

Add 'errors' parameter to open() in order to avoid unhandled
exception on invalid UTF-8 bytes.

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>

* tests : try to interpret the hypothesis as Windows-1252

Based on the discussion in PR#3185.

Evidently Whisper.cpp can represent a quotation mark as '0x93', which
implies Windows-1252 (Microsoft's ASCII extension), and cannot be
decoded by UTF-8.

Add an explicit decoding loop to address the issue.

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>

---------

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>
2025-05-28 07:08:44 +02:00
Daniel Bevenius 0ed00d9d30
ci : update windows-blas uploads action (#3192)
This commit modifies windows-blas which was updated previously to use
the zip functionality provided by `actions/upload-artifact`. This turned
out to be incorrect and I should not have done that. The reason for
zipping the archives first is that otherwise the artifacts, when
downloaded, will be unzipped and just be simple directories. In our case
the release task depends on the artifacts having a .zip extension so
that those archives are included in the release.
2025-05-27 18:01:31 +02:00
Georgi Gerganov 527fe6aaeb sync : fix builds - musa, ruby 2025-05-27 18:03:00 +03:00
Georgi Gerganov 26eb48cb08 talk-llama : sync llama.cpp
ggml-ci
2025-05-27 18:03:00 +03:00
Georgi Gerganov 546928c33f sync : ggml
ggml-ci
2025-05-27 18:03:00 +03:00
xctan 15ae9dc2a4 ggml : riscv: add xtheadvector support (llama/13720)
* ggml : riscv: add xtheadvector support

* ggml : clean up some macro usage
2025-05-27 18:03:00 +03:00
Christian Kastner 2e7a1e3e43 ggml-cpu: x86 feature detection is specific to x86 (llama/13811) 2025-05-27 18:03:00 +03:00
Diego Devesa b75babebb2 ggml : allow CUDA graphs when using pipeline parallelism (llama/13814) 2025-05-27 18:03:00 +03:00
Georgi Gerganov cc7a0105ef cuda : avoid cuGetErrorString (llama/13791)
ggml-ci
2025-05-27 18:03:00 +03:00
Akarshan Biswas 195fde8804 SYCL: Add non contiguous support in RMS_NORM and NORM kernels (llama/13611)
* SYCL: Add non contiguous input support to norm kernel

* refactor and add RMS_NORM non contiguous input support

ggml-ci

* restore subgroup reduction for multi-subgroup thread blocks in norm kernels

* Swap grid dims of nsamples and nrows

ggml-ci

* Revert "Swap grid dims of nsamples and nrows"

This reverts commit 43be2d657fec7f7fba54e2cd154106bc0fc45adf.

* restore not required changes
ggml-ci

* address review comments: change it to more like SYCL

* Use a common function to calculate offset

* remove wrap around logic for handling broadcasts

* remove static from calculate_offset fn and use ceil_div
2025-05-27 18:03:00 +03:00
Romain Biessy 25e27904ca sycl: Add more debug prints (llama/13640) 2025-05-27 18:03:00 +03:00
Jeff Bolz 474f7be8b6 vulkan: mark IM2COL as supporting non-contig (llama/13783) 2025-05-27 18:03:00 +03:00
Bizhao Shi e35fecc2a1 CANN: Add the basic supports of Flash Attention kernel (llama/13627)
* cann: add the basic FA support

* cann: update the readme

* cann: update the FlashAttention with PSEShift

* cann: update the input parameters in FA

* cann: update the alibi with max_bias

* cann: add the constraints of softcap

* cann: update the docs CANN.md

* cann: update the docs CANN.md

* cann: fix typo of CANN.md

* cann: add some comments and update the CANN.md

* cann: update the CANN.md

* cann: update the inner precise for fusedInferAttention

* cann: update the constraints of flash_attn_ext on ggml-cann.cpp

* cann: clean the whitespace

* cann: clean the whitespace

* cann: add a new endline
2025-05-27 18:03:00 +03:00
Akarshan Biswas 1cd7028428 SYCL: revert "sycl: simplify bin_bcast_kernel (ggml/13383)" (llama/13752)
Temporarily reverted due to failing fp16 DIV operation

This reverts commit 02cdd2d8b092b5a4bb18e013c6887ce49ba20ac5.

ggml-ci
2025-05-27 18:03:00 +03:00
Diego Devesa 99596d6031 ggml-cpu : set openmp wait time if not set (llama/13758) 2025-05-27 18:03:00 +03:00
Xuan-Son Nguyen 2d6c6862f7 ggml : add ggml_gelu_erf() CUDA kernel (llama/13719)
* ggml : add ggml_gelu_erf() CUDA kernel

* missing semicolon
2025-05-27 18:03:00 +03:00
Johannes Gäßler f1576b2659 CUDA: fix race condition in FA vector kernels (llama/13742) 2025-05-27 18:03:00 +03:00
Chenguang Li 994b4f86ab CANN: Support MUL_MAT_ID for q8_0 and q4_0 (llama/13705)
* [CANN]Support MUL_MAT_ID Q8 && Q4

Signed-off-by: noemotiovon <757486878@qq.com>

* codestyle adjustment

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
2025-05-27 18:03:00 +03:00
Xuan-Son Nguyen 3e7eaccf55 ggml : fix the order of ggml_unary_op (llama/13718) 2025-05-27 18:03:00 +03:00
Jeff Bolz 191f040414 vulkan: support CPY from any type to itself (llama/13695)
Reuse the f16/f32 copy shaders, and just scale the number of elements
according to the type size.
2025-05-27 18:03:00 +03:00
Jeff Bolz 2d49d4a9b5 vulkan: Disable coopmat/coopmat2/bfloat extensions if glslc doesn't support it (llama/13696) 2025-05-27 18:03:00 +03:00
Judd 000d65befb use LOG_WARN to replace `std::cerr` (llama/13657) 2025-05-27 18:03:00 +03:00
Nicolò Scipione f0803e6646 sycl : Remove waits from function calls (llama/13702)
* removes the waits in async memcpy functions
2025-05-27 18:03:00 +03:00
Ewan Crawford 730a00be8a SYCL: Avoid using with SYCL-Graph for unsupported nodes (llama/13587)
Currently on a CUDA backend to SYCL when running
`GGML_SYCL_DISABLE_GRAPH=0 ./bin/test-backend-ops -b SYCL0` there
are two operations that throw an exception from the blocking
waits during queue recording.

* `-o CONCAT` : Use of blocking waits on a queue that's being recorded https://github.com/ggml-org/llama.cpp/blob/master/ggml/src/ggml-sycl/concat.cpp#L185-L187
* `-o MUL_MAT_ID`: Blocking wait on a recording queue for a copy to host memory https://github.com/ggml-org/llama.cpp/blob/master/ggml/src/ggml-sycl/ggml-sycl.cpp#L3072-L3074

We've noticed that `ggml-cuda.cu` has the
[check_node_graph_compatibility_and_refresh_copy_ops](39e73ae0d6/ggml/src/ggml-cuda/ggml-cuda.cu (L2458-L2458))
method for checking if a graph can be used, even if enabled. I've taken a
similar approach in this PR by adding a method to `ggml-sycl.cpp` for checking
if a graph can be used for the operations even if a user has asked for it to be
enabled.
2025-05-27 18:03:00 +03:00
Henry Linjamäki 316600e8ee opencl: Add support for multiple devices (llama/12622)
* opencl: Add support for multiple devices

... but limited to one platform. A platform with a GPU will be preferred.

Additionally:

* Filter out devices that lack capabilities needed by the backend
  implementation (half support, OpenCL 2.0+, etc).

* Make ggml_backend_opencl_reg() thread-safe.

* fixup: fix an error in sync_with_other_backends

... when there is only one OpenCL device available.
2025-05-27 18:03:00 +03:00
Henry Linjamäki 42f2b3bb65 opencl: fix couple crashes (llama/12795)
* opencl: fix couple crashes

* fix kernel launches failed on devices which do not support
  non-uniform work-groups. When non-uniform work-groups are not
  supported, set `local_work_size` to NULL (= let driver choose the
  work-group sizes). This patch does not cover everything - just the
  cases tested by test-backend-ops.

* fix sub-buffer creation failed due to `cl_buffer_region::origin` not
  being aligned to `CL_DEVICE_MEM_BASE_ADDR_ALIGN`.

* OpenCL: query non-uniform WG sizes only on OpenCL 3.0+
2025-05-27 18:03:00 +03:00
Xuan-Son Nguyen dd6ef64060 ggml : add ggml_gelu_erf() (llama/13667)
* ggml : add ggml_gelu_na (not approximated)

* fix naming order

* rename na --> erf

* apply review suggestions

* revert naming order
2025-05-27 18:03:00 +03:00
R0CKSTAR 131ee546ca musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (llama/13647)
* musa: fix build warning (unused parameter)

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: upgrade MUSA SDK version to rc4.0.1

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: use mudnn::Unary::IDENTITY op to accelerate D2D memory copy

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Update ggml/src/ggml-cuda/cpy.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* musa: remove MUDNN_CHECK_GEN and use CUDA_CHECK_GEN instead in MUDNN_CHECK

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-05-27 18:03:00 +03:00
Eve 4712f7b663 vulkan: fix warnings (llama/13626)
* small fixes

* remove ifdef
2025-05-27 18:03:00 +03:00
Johannes Gäßler 926fe234e9 CUDA: skip fully masked-out KV in FA vec kernel (llama/13584)
* CUDA: skip fully masked-out KV in FA vec kernel
2025-05-27 18:03:00 +03:00
Svetlozar Georgiev f44b53480f sycl: disable reorder for sycl mulmat (llama/13536) 2025-05-27 18:03:00 +03:00
Georgi Gerganov e04e8f1c79 metal : fix typo in FA kernel comments (llama/13651) 2025-05-27 18:03:00 +03:00
Nicolò Scipione ee3f177cba sycl : Overcoming workaround for mmap() allocation on Windows (llama/13482)
* Remove mmap workaround on windows

After some testing I found that mmap is supported on windows and for
many GPUs on Linux. Therefore I remove the workaround for windows since
it is not necessary.

* Update llama-bench README

SYCL backend introduced a workaround that allows execution of
llama-bench also without specifying `--mmp 0` flag
2025-05-27 18:03:00 +03:00
0cc4m 0b69f74e15 Vulkan: Add f32 accumulator support to quantized mul mat to fix GLM4 32B incoherence (llama/13607) 2025-05-27 18:03:00 +03:00
Georgi Gerganov e415db0ed7 sync : ggml 2025-05-27 18:03:00 +03:00
Daniel Bevenius 2bb7694edb
docs : convert README_sycl.md to utf8 format [no ci] (#3191)
This commit updates the README_sycl.md file to use UTF-8 encoding.

The motivation for this is that while this file displays correctly in
github it will fail to render with tools that expect UTF-8 encoding.
For example this is the case when using `grip` to view the file locally.
2025-05-27 10:53:50 +02:00
Daniel Bevenius 450de0787e
node : enable no_prints to suppress all output (#3189)
This commit enables the node addon to suppress all output, even the
result of the transcription if the no_prints parameter is set to true.

The motivation for this is that for the node addon there is a
fulfillment handler/success callback to process the transcription
result. And it might be useful to be able to disable the printing of
the transcription result to the console, so that the user can handle
the result in their own way.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3176
2025-05-27 05:51:47 +02:00
matteng1 ea9f206f18
talk-llama : fix for swedish umlauts + expose model inference settings in talk-llama.cpp (#3187)
Quick fix for not removing swedish umlauts.

* Update talk-llama.cpp

Expose model inference settings to the user instead of hard-coding them. The defaults are unchanged.

* Update examples/talk-llama/talk-llama.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-05-26 07:57:39 +02:00
KITAITI Makoto 13d92d08ae
docs : fix VAD section heading levels (#3186) 2025-05-23 10:38:26 +02:00
Daniel Bevenius aab6976465
ci : use dynamic libopenblas.dll for window-blas (#3177)
* ci : use dynamic libopenblas.dll for window-blas

This commit updates the windows-blas job to use the dynamic (can load
different kernels depending on the CPU arch) libopenblas.dll instead of
the "static" openblas.dll that gets installed by vcpkg.

The motivation for this change is that there have been reports of
performance drops in later versions specifically related to blas. Please
see the links below for more details.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3166
Refs: https://github.com/ggml-org/whisper.cpp/issues/2666#issuecomment-2885978811
2025-05-23 05:48:08 +02:00
Sacha Arbonel 78b31ca782
server : Add k6 Load Testing Script (#3175)
* add load testing script and update README for k6 integration
2025-05-22 10:03:04 +02:00
Daniel Bevenius cbe557f9b1
docs : add VAD model download instructions [no ci] (#3180) 2025-05-22 07:49:29 +02:00
Alpaim 273af4aab9
docs : replace typo "]"with ")" in README (#3179) 2025-05-22 05:49:44 +02:00
Daniel Bevenius bd1cb0c8e3
whisper : remove redundant assignments (#3178)
This commit removes some redundant assignments in the function
`whisper_exp_compute_token_level_timestamps`.

The motivation for this is that tokens[j] and token are references to
the same object, which can be a little confusing when reading the
code.
2025-05-21 13:23:20 +02:00
Jugal Haresh Sheth 62dc8f7d7b
whisper : update CMakeLists.txt to handle deprecated gpu Warnings (#3163)
* Fix CMakeLists.txt to handle deprecated gpu Warnings

* Conditionally apply -Wno-deprecated-gpu-targets only when GGML_CUDA is enabled

* Conditionally apply -Wno-deprecated-gpu-targets only when GGML_CUDA is enabled and not MSVC

---------

Co-authored-by: Jugal Sheth <jugal.sheth@marineai.co.uk>
2025-05-20 11:58:25 +02:00
Daniel Bevenius 2c4b904596
ruby : add GGML_SYCL_DNN option to ruby bindings (#3172)
This commit adds the `GGML_SYCL_DNN` option to the Ruby bindings for
the GGML library. This option was added to ggml in
Commit 5e7e07758a5f3172380500e173ca71f679bbef1e ("sycl: use oneDNN for
matrices multiplication").

The motivation for this change is to enable the CI build to pass.
2025-05-19 17:59:43 +02:00
Georgi Gerganov 6b6cf19c65 talk-llama : sync llama.cpp
ggml-ci
2025-05-19 14:58:39 +03:00
Georgi Gerganov 05501c218d sync : ggml
ggml-ci
2025-05-19 14:58:39 +03:00
Chenguang Li 9da3fc27be CANN: Support MOE Model MUL_MAT_ID (llama/13042)
Signed-off-by: noemotiovon <757486878@qq.com>
2025-05-19 14:58:39 +03:00
Gilad S. 2c13651e08 cmake: use the current build config for vulkan-shaders-gen (llama/13595)
* fix: use the current build config for `vulkan-shaders-gen`

* fix: only pass a valid build type to `--config`
2025-05-19 14:58:39 +03:00
Jeff Bolz 13dca86c56 vulkan: move common FA code to flash_attn_base.comp (llama/13556)
* vulkan: move common FA code to flash_attn_base.comp

* vulkan: move common FA index/stride setup code to flash_attn_base.comp

* build fix
2025-05-19 14:58:39 +03:00
Jeff Bolz 6d61a09bc4 vulkan: use scalar FA rather than coopmat2 when N==1 (llama/13554) 2025-05-19 14:58:39 +03:00
Georgi Gerganov 4fedad988b metal : add FA-vec kernel for head size 64 (llama/13583)
ggml-ci
2025-05-19 14:58:39 +03:00
Łukasz Ślusarczyk a8e17a244d sycl : fixed compilation warnings (llama/13582) 2025-05-19 14:58:39 +03:00
Diego Devesa 0c76acd08a gguf : use ggml log system (llama/13571)
* gguf : use ggml log system

* llama : remove unnecessary new lines in exception messages
2025-05-19 14:58:39 +03:00
Atharva Dubey 27964db1be sycl: simplify bin_bcast_kernel (llama/13383) 2025-05-19 14:58:39 +03:00
Svetlozar Georgiev 8081e7a23d sycl: reordered Q4_K MMVQ (llama/13109) 2025-05-19 14:58:39 +03:00
Łukasz Ślusarczyk d807c497a4 sycl: use oneDNN for matrices multiplication (llama/12972) 2025-05-19 14:58:39 +03:00
Yibo Cai 8e9bf548f4 arm64: optimize q6_k_q8_k kernel with i8mm (llama/13519)
This PR improves the q6_k_q8_k GEMM kernel with arm64 i8mm instructions.

Tested on Neoverse-N2 with a Llama 3 8B Q6_K quantized model:
- 40% ~ 54% S_PP uplift for all batch sizes
- 16% ~ 47% S_TG uplift for batch size 4 and above

Perplexity doesn't change with this PR.

```
// tested on neoverse-n2
$ llama-batched-bench \
      -m Meta-Llama-3-8B-Instruct-Q6_K.gguf \
      --no-mmap -fa \
      -c 8192 -b 4096 -ub 512 -npp 128 -ntg 128 \
      -npl 1,2,4,8,16,32 \
      -t 64

---------------------------------------------------------------------
|    PP |     TG |    B |       S_PP t/s      |       S_TG t/s      |
|       |        |      | original |  this pr | original |  this pr |
|-------|--------|------|----------|----------|----------|----------|
|   128 |    128 |    1 |    78.52 |   109.18 |    18.63 |    18.88 |
|   128 |    128 |    2 |    84.62 |   123.94 |    34.54 |    36.92 |
|   128 |    128 |    4 |    84.36 |   122.49 |    52.65 |    61.32 |
|   128 |    128 |    8 |    90.52 |   138.87 |    63.46 |    84.41 |
|   128 |    128 |   16 |    90.11 |   138.56 |    71.04 |   101.33 |
|   128 |    128 |   32 |    89.81 |   137.79 |    75.14 |   110.47 |
---------------------------------------------------------------------
```
2025-05-19 14:58:39 +03:00
Johannes Gäßler 0dda27bc0b CUDA: fix crash on large batch size for quant. MoE (llama/13537) 2025-05-19 14:58:39 +03:00
Johannes Gäßler ffa4720f25 CUDA: faster Deepseek FA, add Turing support (llama/13435) 2025-05-19 14:58:39 +03:00
bandoti 9b8eea28b5 cmake: simplify vulkan shader test logic (llama/13263) 2025-05-19 14:58:39 +03:00
Jeff Bolz 162bbe8220 vulkan: KHR_coopmat flash attention (llama/13506)
This shader uses coopmat1 to do the Q*K^T multiply. The P*V multiply is more
difficult for various reasons so I haven't done it. Performance for this
shader is around 2.5x better than for the scalar shader when doing prompt
processing. Some of the benefit may be from other optimizations like staging
through shared memory, or splitting by rows.
2025-05-19 14:58:39 +03:00
Jeff Bolz a221288dc6 vulkan: workaround FA compile failures on macos (llama/13517) 2025-05-19 14:58:39 +03:00
Georgi Gerganov 08436716ae metal : use FA-vec kernel up to batch size 20 (llama/13496)
* batched-bench : fix pp batch contents

* metal : optimize multi-sequence FA vec kernel

ggml-ci

* metal : use FA-vec kernel up to batch size 20

ggml-ci
2025-05-19 14:58:39 +03:00
Georgi Gerganov e11fc21e6c metal : optimize multi-sequence FA vec kernel (llama/13493)
* batched-bench : fix pp batch contents

* metal : optimize multi-sequence FA vec kernel

ggml-ci
2025-05-19 14:58:39 +03:00
Dan Johansson a77a924b20 ggml-cpu: Update KleidiAI to v1.6 and fix include directives (llama/13509)
Signed-off-by: Dan Johansson <dan.johansson@arm.com>
2025-05-19 14:58:39 +03:00
Johannes Gäßler 405b9c77ad mnist: fix segmentation fault (ggml/1227) 2025-05-19 14:58:39 +03:00
Diego Devesa 9c3bfc1499 ggml : fix apple OS check in ggml_print_backtrace (ggml/1229) 2025-05-19 14:58:39 +03:00
Daniel Tang 5b7797f674 ggml : Fix missing backtrace on Linux (ggml/1228)
* Modern Linux defaults /proc/sys/kernel/yama/ptrace_scope to 1
* Fixed lldb attach
* Simplify by having the child do ggml_print_backtrace_symbols
2025-05-19 14:58:39 +03:00
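For context on the ptrace_scope note in the commit above, here is a small illustrative sketch (not part of the commit) showing how a program can check whether Yama restrictions are likely to block an external debugger from attaching; the path and interpretation follow the kernel's Yama documentation.

```cpp
#include <cstdio>
#include <fstream>

// Sketch only: read the Yama ptrace scope. A non-zero value means an external
// tool may not be able to attach to arbitrary processes, which is one reason
// a backtrace can silently go missing on modern Linux defaults.
int main() {
    std::ifstream f("/proc/sys/kernel/yama/ptrace_scope");
    int scope = -1;
    if (f >> scope) {
        std::printf("ptrace_scope = %d (%s)\n", scope,
                    scope == 0 ? "unrestricted" : "restricted");
    } else {
        std::printf("Yama not available on this kernel\n");
    }
    return 0;
}
```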
Daniel Bevenius 82ad275800
examples : add vad-speech-segments to win warns [no ci] (#3170)
This commit includes vad-speech-segments in the list of targets with
disabled MSVC warnings.
2025-05-19 12:17:18 +02:00
Daniel Bevenius d1f114da61
vad : return early if no vad segments are detected (#3158)
This commit adds a check to `whisper_full_with_state` and if no VAD
segments are detected, the function will return early.

The motivation for this is that if no VAD segments are detected, the
function will not have any samples to process, which can happen if an
audio sample does not contain any speech. I did not test this previously
and only discovered this when updating the stream example.
2025-05-16 08:50:53 +02:00
Daniel Bevenius bae5d074c7
vad : store VAD context in whisper_state (#3156)
* vad : store VAD context in whisper_state

This commit stores the VAD context in the whisper_state structure,
allowing for better management and reuse of the VAD context across
multiple calls to the whisper_vad function.

The motivation for this change is that when updating the stream example
I noticed that the VAD context was being re-initialized every time the
whisper_vad function was called. This involved loading the VAD model
which is expensive and unnecessary if the context can be reused.

Storing this in whisper_state follows a pattern similar to how
whisper_coreml_context and whisper_openvino_context are stored.

* vad : free vad_context in whisper_free_state
2025-05-16 07:53:26 +02:00
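The caching pattern described in the commit above can be sketched roughly as follows; the struct and function names here are simplified placeholders, not the actual whisper.cpp declarations.

```cpp
#include <string>

// Hypothetical stand-ins for the real types/functions.
struct vad_context;
vad_context * vad_init_from_file(const std::string & model_path);
void          vad_free(vad_context * ctx);

struct state {
    vad_context * ctx_vad = nullptr; // cached, like a whisper_state member
};

// Load the VAD model only on the first call; later calls reuse the cached
// context instead of paying the model-load cost on every invocation.
vad_context * get_vad_ctx(state & st, const std::string & model_path) {
    if (st.ctx_vad == nullptr) {
        st.ctx_vad = vad_init_from_file(model_path);
    }
    return st.ctx_vad;
}

// Mirrors "free vad_context in whisper_free_state".
void free_state(state & st) {
    if (st.ctx_vad) {
        vad_free(st.ctx_vad);
        st.ctx_vad = nullptr;
    }
}
```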
Daniel Bevenius 20a20decd9
whisper : add build_*/ to .gitignore [no ci] (#3157)
This commit adds `build_*/` to `.gitignore` to ignore all build
directories that start with `build_`.

The motivation for this is that the Go bindings create a directory
named build_go, which is not ignored by the current .gitignore. I was
not sure if changing this to build-go could affect existing users so I
opted to update .gitignore instead.
2025-05-15 14:28:10 +02:00
Daniel Bevenius f389d7e3e5
examples : add --print-confidence option to cli (#3150)
* examples : add --print-confidence option to cli

This commit adds a new command-line option `--print-confidence` to the
whisper-cli. When enabled, this option prints the confidence level of each
token in the transcribed text using ANSI formatting codes.

The confidence levels are represented using different styles:
```console
main: confidence: highlighted (low confidence), underlined (medium), dim (high confidence)
```

Refs: https://github.com/ggml-org/whisper.cpp/issues/3135
2025-05-14 19:21:48 +02:00
Daniel Bevenius 96d791ae61
vad : add download-vad-model scripts (#3149)
* vad : add download-vad-model scripts

This commit adds a script to download VAD models.

* vad : add vad model download script for windows [no ci]

Refs: https://github.com/ggml-org/whisper.cpp/issues/3146
2025-05-14 16:47:18 +02:00
Daniel Bevenius 3882a099e1
server : add --flash-attn usage output (#3152)
This commit adds the `--flash-attn` option to the usage output of the
server example.

The motivation for this change is that while it is possible to set this
option it is not printed in the usage output.
2025-05-14 15:22:05 +02:00
Georgi Gerganov f890560575 talk-llama : sync llama.cpp
ggml-ci
2025-05-13 13:59:21 +03:00
Georgi Gerganov a14c89aefa whisper : update to ggml-backend changes (#0)
ggml-ci
2025-05-13 13:59:21 +03:00
Georgi Gerganov a6a956b36d sync : ggml
ggml-ci
2025-05-13 13:59:21 +03:00
Xuan-Son Nguyen 75e9a840c5 ggml : add mrope kernel for metal (llama/13457) 2025-05-13 13:59:21 +03:00
Georgi Gerganov 41ed62bdbc metal : optimize MoE for large batches (llama/13388) 2025-05-13 13:59:21 +03:00
lhez 029c8837f8 opencl: remove unnecessary assert for `add` (llama/13257) 2025-05-13 13:59:21 +03:00
Johannes Gäßler 5d8b068249 llama/ggml: add LLM training support (llama/10544)
* llama/ggml: add LLM training support

more compact progress bar

llama_save_model_to_file

llama_opt_param_filter

ggml_graph_dup force_grads

refactor ggml_opt, fix test-opt

* remove logits_all

* refactor CUDA implementation for ACC

* reset graph at beginning of opt period
2025-05-13 13:59:21 +03:00
Dan Johansson 93ef22657e ggml-cpu: Integrate fp32=bf16xbf16 SME KleidiAI kernel (llama/13053)
* ggml-cpu: Integrate fp32=bf16xbf16 SME KleidiAI kernel

Signed-off-by: Dan Johansson <dan.johansson@arm.com>

* * code review fixes

Signed-off-by: Dan Johansson <dan.johansson@arm.com>

* * adds a comment that clarifies barrier usage

Signed-off-by: Dan Johansson <dan.johansson@arm.com>

---------

Signed-off-by: Dan Johansson <dan.johansson@arm.com>
Co-authored-by: Charles Xu <charles.xu@arm.com>
2025-05-13 13:59:21 +03:00
Johannes Gäßler 866f685bbc CUDA: fix misaligned synchronization in FA (llama/13469) 2025-05-13 13:59:21 +03:00
Atharva Dubey 250bcc041a enable dpcpp nightly builds with libraries (llama/13406) 2025-05-13 13:59:21 +03:00
Johannes Gäßler 90b17a99bf CUDA: fix crash with partial offloading of MoE (llama/13439) 2025-05-13 13:59:21 +03:00
David Huang e1b2ace0f8 Add `--no-op-offload` to improve `-ot` pp perf in MoE models like llama4 400B (llama/13386) 2025-05-13 13:59:21 +03:00
Johannes Gäßler 6db0e01db6 CUDA: fix race conditions FlashAttention kernels (llama/13438) 2025-05-13 13:59:21 +03:00
Johannes Gäßler 16f3546f38 CUDA: fix FlashAttention on Turing (llama/13415) 2025-05-13 13:59:21 +03:00
Jeff Bolz a04b329ad1 vulkan: scalar flash attention implementation (llama/13324)
* vulkan: scalar flash attention implementation

* vulkan: always use fp32 for scalar flash attention

* vulkan: use vector loads in scalar flash attention shader

* vulkan: remove PV matrix, helps with register usage

* vulkan: reduce register usage in scalar FA, but perf may be slightly worse

* vulkan: load each Q value once. optimize O reduction. more tuning

* vulkan: support q4_0/q8_0 KV in scalar FA

* CI: increase timeout to accommodate newly-supported tests

* vulkan: for scalar FA, select between 1 and 8 rows

* vulkan: avoid using Float16 capability in scalar FA
2025-05-13 13:59:21 +03:00
Alberto Cabrera Pérez 45d8b2352e sycl : implementation of reordered Q4_0 MMVQ for Intel GPUs (llama/12858)
* sycl : Implemented reorder Q4_0 mmvq

Signed-off-by: Alberto Cabrera <alberto.cabrera@codeplay.com>

* sycl : Fixed mmvq being called when reorder is disabled

* sycl : Improved comments in the quants header

Signed-off-by: Alberto Cabrera <alberto.cabrera@codeplay.com>

* Use static_assert

* safe_div -> ceil_div

* Clarify qi comment

* change the reorder tensor from init to execute OP

* dbg

* Undo changes to test-backend-ops

* Refactor changes on top of q4_0 reorder fix

* Missing Reverts

* Refactored opt_for_reorder logic to simplify code path

* Explicit inlining and unroll

* Renamed mul_mat_algo enum for consistency

---------

Signed-off-by: Alberto Cabrera <alberto.cabrera@codeplay.com>
Co-authored-by: romain.biessy <romain.biessy@codeplay.com>
2025-05-13 13:59:21 +03:00
Johannes Gäßler 2d436bfbfb CUDA: FA support for Deepseek (Ampere or newer) (llama/13306)
* CUDA: FA support for Deepseek (Ampere or newer)

* do loop unrolling via C++ template
2025-05-13 13:59:21 +03:00
Johannes Gäßler 4b7cbb62ef CUDA: fix crash on large batch size for MoE models (llama/13384) 2025-05-13 13:59:21 +03:00
Radoslav Gerganov e27c91f6d6 rpc : add rpc_msg_set_tensor_hash_req (llama/13353)
* rpc : add rpc_msg_set_tensor_hash_req

Use a dedicated struct for the request of RPC_CMD_SET_TENSOR_HASH which
makes the code cleaner.

* fix
2025-05-13 13:59:21 +03:00
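For illustration of the commit above, a dedicated request struct for a command like RPC_CMD_SET_TENSOR_HASH typically looks something like the sketch below; the field names and layout here are assumptions, not the actual ggml RPC wire format.

```cpp
#include <cstdint>

// Sketch of a fixed-size request for a "set tensor by hash" style command.
// A dedicated struct replaces ad-hoc offset arithmetic on a raw byte buffer,
// which is what makes the handler code cleaner.
#pragma pack(push, 1)
struct rpc_msg_set_tensor_hash_req_sketch {
    uint64_t tensor_id; // identifies the destination tensor (assumed field)
    uint64_t offset;    // byte offset within the tensor data (assumed field)
    uint64_t hash;      // content hash of the data to be reused (assumed field)
};
#pragma pack(pop)

static_assert(sizeof(rpc_msg_set_tensor_hash_req_sketch) == 24,
              "packed request should have a stable wire size");
```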
Jeff Bolz e46df4850f vulkan: Allow up to 4096 elements for mul_mat_id row_ids (llama/13326)
This assert fired running Qwen_Qwen3-30B-A3B-Q2_K.gguf:

GGML_ASSERT(nei0 * nei1 <= 3072);

The tensor is 8 x 512. Increase this array size to accommodate.
2025-05-13 13:59:21 +03:00
Alberto Cabrera Pérez e8a7f1b7bb sycl: addressing non-contiguous src1 mul_mats (nc and batched) (llama/13343)
* sycl: fixed non-contiguous src1 mul_mats (nc and batched)

* Fixed wrong static_cast inside kernel
2025-05-13 13:59:21 +03:00
Daniel Bevenius fbad8058c4
examples : add VAD speech segments example (#3147)
This commit adds an example that demonstrates how to use a VAD (Voice
Activity Detection) model to segment an audio file into speech segments.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3144
2025-05-13 12:31:00 +02:00
Daniel Bevenius b2513a6208
vad : remove shortform for --vad option in cli.cpp (#3145)
This commit removes the shortform for the --vad option in cli.cpp.

The motivation for this is that `-v` is often used for verbose or
version is many tools and this might cause confusion.

Refs: https://github.com/ggml-org/whisper.cpp/pull/3065#issuecomment-2873243334
2025-05-13 06:04:05 +02:00
Tomer Schlesinger 587ea01f55
docs : update README.md for whisper.objc app (#2569) 2025-05-13 06:03:50 +02:00
Daniel Bevenius e41bc5c61a
vad : add initial Voice Activity Detection (VAD) support (#3065)
* vad : add initial Voice Activity Detection (VAD) support

This commit adds support for Voice Activity Detection (VAD). When enabled
this feature will process the audio input and detect speech segments.
This information is then used to reduce the number of samples that need
to be processed by whisper_full.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3003

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-05-12 16:10:11 +02:00
Daniel Bevenius e39ba750cd
whisper : remove dummy commit comment [no ci] (#3143)
This commit removes a dummy comment that was added by
commit 589b408 ("ci : dummy commit to trigger CI").
2025-05-12 14:40:17 +02:00
Daniel Bevenius db0fc9edc6
docs : fix -owts flag typo karaoke section [no ci] (#3142) 2025-05-12 10:56:39 +02:00
Daniel Bevenius 186855e38b
cli : print color scheme info for --print-colors (#3141)
This commit adds a description of the color scheme used in the CLI
when the --print-colors option is enabled.

The motivation for this is that it is not immediately clear what the
color scheme is when using the CLI with the --print-colors option.

Example output:
```console
$ ./build/bin/whisper-cli -f samples/jfk.wav --print-colors
...

main: color scheme: red (low confidence), yellow (medium), green (high confidence)

[00:00:00.000 --> 00:00:11.000]   And so my fellow Americans, ask not what your country can do for you, ask what you can do for your country.
```
The description will not be displayed if the `--no-prints` option is
set.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3135
2025-05-12 10:43:04 +02:00
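A minimal sketch of how such a scheme can map token probabilities to ANSI colors; the thresholds below are illustrative, not the values used by whisper-cli.

```cpp
#include <cstdio>

// Illustrative only: pick an ANSI color escape based on token probability.
// Thresholds are assumptions for the example, not whisper-cli's actual ones.
const char * confidence_color(float p) {
    if (p < 0.33f) return "\033[31m"; // red    : low confidence
    if (p < 0.66f) return "\033[33m"; // yellow : medium confidence
    return "\033[32m";                // green  : high confidence
}

int main() {
    const float probs[] = {0.12f, 0.55f, 0.93f};
    for (float p : probs) {
        std::printf("%sp=%.2f\033[0m ", confidence_color(p), p);
    }
    std::printf("\n");
    return 0;
}
```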
Simon Booth a513146102
docs : update Readme to recommend same Openvino as Python tools (#3138) 2025-05-12 09:06:51 +02:00
Daniel Bevenius 4730950492
examples : update link to Paul Tol's color scheme [no ci] (#3140)
This commit updates the link to Paul Tol's color scheme in the
`examples/common.h` file. The previous link was outdated and
pointed to a non-existent page.
2025-05-12 09:02:06 +02:00
KITAITI Makoto 9dd9685c79
ruby : test extra build options only when env var specified (#3136)
* Test Ruby bindings' extra options only when commanded

* ruby : test extra build options only when env var specified

* Fix extra_options

* Update gem date
2025-05-12 06:49:46 +02:00
Daniel Bevenius 2e310b841e
ruby : omit test_build_options locally (#3132)
This commit omits the `test_build_options` test when run locally, as
it currently fails on Linux and macOS platforms.
The motivation for this change is that currently when running the tests
locally on a non-macOS platform the test fails with the following error:
```console
.F
========================================================================
Failure: test_build_options(TestPackage):
  <["ACCELERATE_FRAMEWORK",
   "CMAKE_OSX_ARCHITECTURES",
   "CMAKE_OSX_SYSROOT",
   "FOUNDATION_LIBRARY",
   "METALKIT_FRAMEWORK",
   "METAL_FRAMEWORK"]> was expected to be empty.
/home/danbev/work/ai/whisper.cpp/bindings/ruby/tests/test_package.rb:43:in `test_build_options'
     40:     options = BuildOptions::Options.new
     41:     assert_empty options.missing_options
     42:     unless ENV["CI"]
  => 43:       assert_empty options.extra_options
     44:     end
     45:   end
     46: end
========================================================================
```
2025-05-10 08:18:08 +02:00
Enes Grahovac 5d4390d281
examples : add HEAPU8 to all of the exported runtime methods (#3134)
This commit adds HEAPU8 to the list of exported methods.

The motivation for this commit is that currently this is causing an error on Windows systems where HEAPU8 is undefined, which results in the following error message in the web console:

main.js:1 Uncaught TypeError:
Cannot read properties of undefined (reading 'buffer') at __emval_get_property
(main.js:1:1363125) at 003a453a:0xc4a47 at 003a453a:0xc51cd at
Object.full_default (eval at craftInvokerFunction (main.js:1:1347011),
<anonymous>:9:10) at whisper.cpp/:647:42

danbev originally fixed this for whisper.wasm, stream.wasm, and command.stream, but the issue still exists in the other examples, which are patched in this commit.

Resolves: #3059
2025-05-10 06:44:13 +02:00
Daniel Bevenius 9791647653
wasm : add note about worker.js file generation [no ci] (#3133)
This commit updates the documentation for the WASM examples to include a
note about the generation of the `worker.js` file. As of Emscripten
3.1.58 (April 2024), separate worker.js files are no longer generated
and the worker is embedded in the main JS file.

The motivation for this change is to inform users about the new behavior
of Emscripten and why the `worker.js` file may not be present.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3123
2025-05-09 15:42:45 +02:00
Daniel Bevenius 288304ee64
whisper : deprecate WHISPER_CCACHE CMake option (#3131)
* whisper : deprecate WHISPER_CCACHE CMake option

This commit deprecates the WHISPER_CCACHE CMake option in favor of
the GGML_CCACHE option.

The motivation for this change is that currently, whether or not
WHISPER_CCACHE is set, the output message from ggml will be that to
enable ccache you need to set GGML_CCACHE, which can be confusing.
This also seems to be in line with llama.cpp, which does not have a
LLAMA_CCACHE option as far as I know.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3063

* ruby : change "WHISPER_CCACHE" to "GGML_CCACHE"

* ruby : move GGML_CCACHE to sorted position
2025-05-09 14:13:41 +02:00
Daniel Bevenius b6f3fa4059
stream.wasm : add HEAPU8 to exported runtime methods (#3130)
* stream.wasm : add HEAPU8 to exported runtime methods

This commit adds HEAPU8 to the list of exported methods for stream.wasm.

The motivation for this is that without it HEAPU8 will be undefined,
and when its 'buffer' attribute is accessed this will cause an error,
as reported in the referenced issue.

Note that to test this, make sure that the web browser's cache is
cleared first.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3123

* command.wasm : add HEAPU8 to exported runtime methods
2025-05-08 16:58:34 +02:00
Georgi Gerganov cb2bd11ee8 sync : ggml
ggml-ci
2025-05-07 21:00:32 +03:00
R0CKSTAR 09e6b66025 cuda : remove nrows_x in mul_mat_q_process_tile (llama/13325)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-05-07 21:00:32 +03:00
Johannes Gäßler d41cf26a0f CUDA: mix virt/real CUDA archs for GGML_NATIVE=OFF (llama/13135) 2025-05-07 21:00:32 +03:00
Akarshan Biswas 3c67195be9 SYCL: Disable reorder optimize by default and stop setting tensor extras when optimize is disabled (llama/13254)
* SYCL: Do not set tensor extras when reorder optimize is disabled

* SYCL: Disable reorder optimize by default
2025-05-07 21:00:32 +03:00
Johannes Gäßler f9f78a773f CUDA: fix bad asserts for partial offload (llama/13337) 2025-05-07 21:00:32 +03:00
Johannes Gäßler be55e25cac CUDA: fix --split-mode row for MMQ (llama/13323) 2025-05-07 21:00:32 +03:00
Johannes Gäßler 2ffdda99e8 CUDA: fix logic for clearing padding with -ngl 0 (llama/13320) 2025-05-07 21:00:32 +03:00
Akarshan Biswas 9bbedc51cc SYCL: Disable mul_mat kernels for noncontiguous tensor b (llama/13308)
ggml-ci
2025-05-07 21:00:32 +03:00
Diego Devesa 1e1fa27add rpc : use backend registry, support dl backends (llama/13304) 2025-05-07 21:00:32 +03:00
Aaron Teo e1bdd148c5 ggml : activate s390x simd for Q3_K (llama/13301)
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-05-07 21:00:32 +03:00
Johannes Gäßler 7fa8bb303f CUDA: fix race condition in MMQ stream-k fixup (llama/13299) 2025-05-07 21:00:32 +03:00
Johannes Gäßler 7564f5e6f1 CUDA: fix race condition in MMQ ids_dst (llama/13294) 2025-05-07 21:00:32 +03:00
Jeff Bolz 22ba2e27ce vulkan: Additional type support for unary, binary, and copy (llama/13266)
Support f16->f32 copy.
Support f16->f16 and f32->f32 unary ops.
Support all combinations of f16/f32 for src0/src1/dst for add/sub/mul/div.
2025-05-07 21:00:32 +03:00
Daniel Bevenius 0676b2dab2
ci : add bindings-java jar artifact to release (#3126)
This commit adds the jar artifact from the Java bindings to the release
process.
2025-05-07 16:26:54 +02:00
Georgi Gerganov 4a512cb153 cli : avoid std::exchange
ggml-ci
2025-05-07 15:39:32 +03:00
Georgi Gerganov 76171ce199 sync : ggml
ggml-ci
2025-05-07 15:39:32 +03:00
Georgi Gerganov 5eac2a3fbb vulkan : fix lint (llama/0) 2025-05-07 15:39:32 +03:00
shalinib-ibm 42938398f9 ggml : Enable MMA for BF16 in llamafile_sgemm (llama/13148)
This patch upstreams llamafile's cpu matrix multiplication kernels for ppc64le using MMA builtins for BF16 data type.

This change results in 9x - 40x gains
in total speed S t/s (i.e. all tokens/total time), across various batch sizes tested using the llama-batched-bench benchmark.

The patch is tested with Meta-Llama-3-8B
and Mistral-7B models (BF16 models generated using llama-quantize from the corresponding FP32 models) on an IBM POWER10 machine.

Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>
2025-05-07 15:39:32 +03:00
Justin Santa Barbara a8fe90ae15 rpc : avoid uninitialized memory in serialize_tensor (llama/13210)
Zero out the name and padding buffers.
2025-05-07 15:39:32 +03:00
Jesse Gross c5a5a2da5b ggml: Don't assert fail when tensor data changes (llama/13222)
The following scenario will cause an assertion failure in the graph
allocator:
 - Build and allocate a graph containing a tensor with a non-NULL data
   pointer
 - Build and allocate a new graph where that data is NULL

Result:
ggml-alloc.c:819: GGML_ASSERT(talloc->buffer_id >= 0) failed

This happens during revalidation because we think that memory should
have been previously allocated based on the current graph but in
reality the previous graph was different. In this situation, we
should do a full reallocation pass.
2025-05-07 15:39:32 +03:00
Diego Devesa 8316bfd82b build : fix build info on windows (llama/13239)
* build : fix build info on windows

* fix cuda host compiler msg
2025-05-07 15:39:32 +03:00
Jeff Bolz fd1cb9fc12 vulkan: Add bfloat16 support (llama/12554)
* vulkan: Add bfloat16 support

This adds bfloat16 matrix multiply support based on VK_KHR_shader_bfloat16.
The extension is required for coopmat multiply support, but matrix-vector
multiply trivially promotes bf16 to fp32 and doesn't require the extension.
The copy/get_rows shaders also don't require the extension.

It's probably possible to fall back to non-coopmat and promote to fp32 when
the extension isn't supported, but this change doesn't do that.

The coopmat support also requires a glslc that supports the extension, which
currently requires a custom build.

* vulkan: Support bf16 tensors without the bf16 extension or coopmat support

Compile a variant of the scalar mul_mm shader that will promote the bf16
values to float, and use that when either the bf16 extension or the coopmat
extensions aren't available.

* vulkan: bfloat16 fixes (really works without bfloat16 support now)

* vulkan: fix spirv-val failure and reenable -O
2025-05-07 15:39:32 +03:00
Jeff Bolz 17f6b8225e vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader (llama/13191)
* vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader
2025-05-07 15:39:32 +03:00
Acly 6374ea32ca vulkan : kernels for depthwise 2D convolution (CONV_2D_DW) (ggml/1204)
* vulkan : add kernels for depthwise 2d convolution (OP_CONV_2D_DW)

* review: remove src_x/y < 0 checks; add performance tests
2025-05-07 15:39:32 +03:00
Daniel Bevenius 3a66f9f248
ci : zip windows artifacts for release uploading (#3124)
This commit adds steps to the Windows jobs to zip and upload the
produced artifacts.

The motivation for this is that currently the artifacts are not zipped,
which means they will not be picked up by the release job and hence not
be included in the GitHub releases.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3119
2025-05-07 13:12:08 +02:00
Daniel Bevenius 9b584b0cc0
ci : add zip extension to xcframework artifact name (#3120)
This commit adds the .zip extension to the xcframework artifact name in
the GitHub Actions workflow.

The motivation for this is that the release job will look for .zip files
and will not find the xcframework artifact without the extension, and
hence will not upload it to the release.
2025-05-07 12:02:29 +02:00
Daniel Bevenius 09846f4e12
whisper: remove MSVC warnings pragmas (#3090)
* ggml : remove MSVC warnings pragmas

This commit removes the MSVC-specific pragmas as these are now handled
in CMakeLists.txt.

* whisper : remove MSVC warning pragmas

This commit removes the MSVC-specific pragmas. These are now handled in
the CMakeLists.txt file.
2025-05-05 13:09:35 +02:00
Sacha Arbonel bcf1ed0163
server: update abort mechanism to handle HTTP connection closure (#3112) 2025-05-05 07:16:54 +02:00
Daniel Tang 934d4b3083
cli : support "-" for stdout like stdin (#3050)
This changes examples/cli/cli.cpp to be like
examples/common-whisper.cpp. "-of -" can be specified (or this can be
inferred from "-" as the input file) to output to stdout. This is useful
for piping to other applications.

Log fname_out consistently when not stdout
- Terminals have stdout=stderr, so remove the message before
  successful output to ease copying
- Don't affect actual error messages
- Move opening the ofstream into the factory, fixing missing
  open and/or error messages in output_score/output_wts
- Fix struct naming convention

Closes #3048
2025-05-05 07:15:39 +02:00
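The stdout handling described in the commit above roughly follows the pattern sketched here; this is an illustrative outline, not the actual cli.cpp code.

```cpp
#include <fstream>
#include <iostream>
#include <string>

// Sketch: when the output path is "-", write to stdout and keep log messages
// on stderr so the transcript can be piped; otherwise open the file and
// report open failures once, at the point where the stream is created.
std::ostream * open_output(const std::string & fname, std::ofstream & fout) {
    if (fname == "-") {
        return &std::cout; // no "saving to ..." message for stdout
    }
    fout.open(fname);
    if (!fout.is_open()) {
        std::cerr << "error: failed to open '" << fname << "'\n";
        return nullptr;
    }
    std::cerr << "saving output to '" << fname << "'\n"; // only for real files
    return &fout;
}
```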
Arpit Jain 988dcd4b5b
docs : Update cli documentation (#3102)
* docs : Update cli documentation

This updates the documentation of cli based on the actual output

In the longterm this should ideally be auto generated to prevent mismatch

* docs : Update cli documentation

This updates the documentation of cli based on the actual output

In the longterm this should ideally be auto generated to prevent mismatch
2025-05-02 14:18:33 +02:00
Jared Tweed 9f540ad8cb
cmake : removed stdc++fs (#3097)
* removed stdc++fs

* kept line, but removed stdc++fs
2025-05-02 12:41:35 +03:00
Sacha Arbonel 1fa17bc752
server : update httplib.h to version 0.20.0 (#3101) 2025-05-02 06:09:41 +02:00
KITAITI Makoto 366082d072
ruby : refine HTTP cache feature (#3109)
* Use cache file when model host doesn't support if-modified-since

* Update gem date

* Revert "ruby : ignore "Downloading" output in test_log_suppress (#3106)"

This reverts commit edbd4cb7f5.
2025-05-01 23:04:53 +09:00
Georgi Gerganov 0778b6ff5f talk-llama : sync llama.cpp
ggml-ci
2025-05-01 13:29:02 +03:00
Georgi Gerganov 5cd59c9396 sync : ggml 2025-05-01 13:29:02 +03:00
Johannes Gäßler d052e64d42 CUDA: batched+noncont MMQ, refactor bs>1 MoE code (llama/13199) 2025-05-01 13:29:02 +03:00
Jeff Bolz 780750a108 vulkan: use uint array index to avoid glslang bug (llama/13193) 2025-05-01 13:29:02 +03:00
shalinib-ibm 919c78e618 ggml : fix ppc64le build (llama/13176)
The build fails with a compilation error on PowerPC.
This patch fixes that.

Tested with unit tests run via
 --build <build_dir> && cd <build_dir> && make test

Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>
2025-05-01 13:29:02 +03:00
Aaron Teo dc288f84cd feat(ggml-cpu): enable z17 compile (llama/13182)
z17 compilation requires GCC 15.1.0 or later

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-05-01 13:29:02 +03:00
Johannes Gäßler 1543a3600c CUDA: fix non-cont. inputs for batched mat mul (llama/13155) 2025-05-01 13:29:02 +03:00
Ville Vesilehto 4872355f6e fix(rpc): Improve input validation and error handling (llama/13069)
* fix(rpc): Improve input validation and error handling

The `rpc-server` was vulnerable to Denial of Service attacks via
several RPC commands (`SET_TENSOR`, `GRAPH_COMPUTE`, etc.). Malformed
messages could trigger failed assertions (e.g., invalid `ggml_type`)
or out-of-bounds reads/writes leading to `GGML_ABORT` calls,
crashing the server process.

This PR introduces robust input validation and replaces `abort()`
calls with graceful error handling:

- **Type Validation:** `deserialize_tensor` now checks if the
  `tensor->type` is within the valid `GGML_TYPE_COUNT` range
  *before* calling `ggml_new_tensor_4d`. Returns `nullptr` on
  invalid type.
- **Bounds Checks:** Replaced `GGML_ABORT` in `set_tensor`,
  `set_tensor_hash`, and `get_tensor` handlers with error
  logging and returning `false` when data/offset parameters
  are out of buffer bounds.
- **Size Checks:** Added safe arithmetic checks (for overflow) in
  `graph_compute` when calculating required message sizes based
  on client-provided `n_nodes` and `n_tensors`. Returns early
  if the reported sizes conflict with the actual message size or
  would lead to overflow.
- **Error Propagation:**
    - `create_node` now checks for `nullptr` return values from
      `deserialize_tensor` and its recursive calls, propagating
      `nullptr` upwards on failure. Uses `find` instead of `at`
      for safer map access.
    - `copy_tensor` now checks for `nullptr` from `deserialize_tensor`
      and sets the response status to failure if deserialization
      or bounds checks fail.
    - `graph_compute` now checks for `nullptr` return from
      `create_node` and returns failure status correctly. The final
      return value now reflects the actual computation status.

These changes improve the RPC server's resilience
against malformed client requests, preventing crashes and ensuring
errors are handled more gracefully.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>

* refactor(rpc): address pr comments

removed comments and unnecessary returns

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>

* refactor(rpc): ambiguous nullptr from create_node

rpc_server::create_node could previously return nullptr if the input ID
was 0 (valid) or if an internal error (deserialization, recursion
failure) occurred (invalid). This ambiguity made error handling
difficult for the caller (`graph_compute`).

This commit clarifies the meaning of nullptr:
- `graph_compute` now checks if the input 'id' was non-zero when
  `create_node` returns nullptr, correctly identifying failures
  versus intentional null links.
- `create_node` avoids recursive calls for zero IDs and propagates
  nullptr unambiguously on failure during recursion.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>

* refactor(rpc): initial zero check in create_node

The caller (`graph_compute`) already checks `id != 0` when handling
a `nullptr` return from `create_node`, correctly distinguishing
intentional null links from actual errors. This makes the initial
`if (id == 0)` check redundant.

Also removes the log message when a tensor ID is not found in the
provided map which was added in this branch.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>

* fix(rpc): Handle get_alloc_size failure in server

Check the return value of `server.get_alloc_size` in the RPC server
loop. If the call fails, return early to close the connection.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>

* refactor(rpc): input size validation in graph_compute

Removes detailed, step-by-step size calculations and overflow
checks in favor of simpler direct comparisons, assuming 64-bit
overflow is unlikely.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>

* refactor(rpc): remove extra status code setting

Removes the explicit setting of `response.result = GGML_STATUS_FAILED`
when `create_node` returns `nullptr` within `graph_compute`.
Primary signal is the `false` return value in case of failure.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>

* refactor(rpc): remove redundant check for tensor->type

Breaks CI on ubuntu-cpu-make. Tensor type is uint32_t, thus
the check is not needed.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>

---------

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-05-01 13:29:02 +03:00
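The type-validation idea from the first bullet in the commit above can be sketched as follows; the wrapper name and simplified arguments are hypothetical, while GGML_TYPE_COUNT and ggml_new_tensor_4d are the ggml symbols being guarded.

```cpp
#include "ggml.h"

// Sketch (hypothetical helper, simplified wire fields): validate the type id
// received from the network *before* creating the tensor, and return nullptr
// instead of aborting so the caller can fail the request gracefully.
static ggml_tensor * deserialize_tensor_checked(ggml_context * ctx,
                                                uint32_t type,
                                                const int64_t ne[4]) {
    if (type >= GGML_TYPE_COUNT) {
        return nullptr; // malformed request: reject instead of crashing
    }
    return ggml_new_tensor_4d(ctx, (ggml_type) type, ne[0], ne[1], ne[2], ne[3]);
}
```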
Akarshan Biswas 1a76e97c28 SYCL: Add all missing unary kernels (llama/13074)
* SYCL: Add all missing unary kernels

ggml-ci

* decouple kernel launch range from data size using strided loop

* use ceil_div helper for num_blocks
ggml-ci

* clean auto imported header files
2025-05-01 13:29:02 +03:00
R0CKSTAR 7017c1d37d musa: fix typo in cc control (llama/13144)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-05-01 13:29:02 +03:00
Johannes Gäßler 670bf02662 CUDA: fix q_nope_absorbed prec for DS 2 Lite f16 (llama/13137) 2025-05-01 13:29:02 +03:00
R0CKSTAR 9fff2f751c musa: fix build warning (llama/13129)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-05-01 13:29:02 +03:00
SXX 46392f733f ggml: move fp16/bf16 conversion optimizations to CPU backend + export conversion APIs (llama/13107)
* ggml: dynamic x86_64 feature detection for FP32 <-> FP16/BF16 conversion

* move fp converter to ggml-cpu

* Switch ggml_compute_forward_get_rows_f16/bf16 to new ggml_cpu_fp16/bf16_to_fp32
2025-05-01 13:29:02 +03:00
Neo Zhang Jianyu eeb259909e change the reorder tensor from init to execute OP (llama/13003) 2025-05-01 13:29:02 +03:00
Radoslav Gerganov fe21ddf0dc rpc : do not wait for response when sending RPC_CMD_SET_TENSOR (llama/12943)
RPC_CMD_SET_TENSOR always returns an empty response and we send this 4
times per token. We can improve TG speed if we don't wait for this empty
response.

The performance impact of this change depends on the network latency.
2025-05-01 13:29:02 +03:00
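As a rough sketch of the idea in the commit above (the helper names here are hypothetical, not the ggml RPC API): for a command whose reply carries no data, skip the blocking read of the response, removing one network round trip per call.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helpers standing in for the real socket plumbing.
struct rpc_socket;
bool send_command(rpc_socket & sock, uint8_t cmd, const std::vector<uint8_t> & payload);
bool recv_empty_reply(rpc_socket & sock);

// The reply to this command is always empty, so do not wait for it; this
// matters because the command is issued several times per generated token.
bool set_tensor_nowait(rpc_socket & sock, uint8_t cmd_set_tensor,
                       const std::vector<uint8_t> & payload) {
    return send_command(sock, cmd_set_tensor, payload);
    // intentionally no recv_empty_reply(sock) here
}
```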
Diego Devesa 33bdbfbb33 ggml : fix ggml_gallocr_ptr type (ggml/1205) 2025-05-01 13:29:02 +03:00
Daniel Bevenius 0f49edf0f3
whisper : add check that target name exists (#3103)
This commit adds a check to make sure that the target exists before
trying to add compile options to ignore warnings when using MSVC.

The motivation for this is that currently the build is broken depending
on the CMake options provided. With this fix it should be possible to
build even if the targets are not actually available.

Refs: https://github.com/ggml-org/whisper.cpp/pull/3090#issuecomment-2842760104
2025-05-01 10:05:24 +02:00
Daniel Bevenius 25efcfe3ed
server : add --no-gpu option to print usage output (#3098)
This commit adds the command-line option `--no-gpu` to the server
example's print usage function.

The motivation for this is that this option is available and can be set,
but it is not displayed in the usage message.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3095
2025-05-01 09:15:12 +03:00
Daniel Bevenius edbd4cb7f5
ruby : ignore "Downloading" output in test_log_suppress (#3106)
This commit adds a temporary fix to the `test_log_suppress` test in the
Ruby bindings.

The motivation for this change is that I suspect that the recent
migration of the models to HuggingFace Xet has changed the way HTTP
caching works for the models. This is causing the test in question to
fail. This is a temporary fix so that CI is not broken while we
investigate this further.
2025-05-01 09:12:48 +03:00
Georgi Gerganov 3ae9b8416a
make : fix samples glob pattern (#3100) 2025-04-30 14:21:51 +03:00
Daniel Bevenius 55d73a13f5
ggml : suppress Windows compiler warnings (#3075)
* whisper: suppress Windows compiler warnings

This commit disables compiler warnings on Windows when using MSVC.

The motivation for these changes is that some compilers generate
warnings for these conversions, for example Windows MSVC, and
there are quite a few of them. This makes it a little difficult to
spot new warnings that may be introduced and also can be difficult
for users/embedders of ggml where these warnings are hard to separate
from their own warnings.

* squash! whisper: suppress Windows compiler warnings

Move ggml related warnings into ggml. This commit also fixes the
indentation and adds a missing whitespace to the if statement.
2025-04-29 15:47:55 +02:00
Daniel Bevenius 2e30e6df59
whisper : fix grammar advance stack warning (#3087)
This commit addresses a warning that is present in Release builds:
```console
[ 30%] Building CXX object src/CMakeFiles/whisper.dir/whisper.cpp.o
In file included from /usr/include/c++/13/bits/stl_tree.h:63,
                 from /usr/include/c++/13/map:62,
                 from /home/danbev/work/ai/whisper.cpp/src/whisper-arch.h:5,
                 from /home/danbev/work/ai/whisper.cpp/src/whisper.cpp:2:
In static member function ‘static void std::__copy_move<false, false, std::random_access_iterator_tag>::__assign_one(_Tp*, _Up*) [with _Tp = const whisper_grammar_element*; _Up = const whisper_grammar_element* const]’,
    inlined from ‘static _Up* std::__copy_move<_IsMove, true, std::random_access_iterator_tag>::__copy_m(_Tp*, _Tp*, _Up*) [with _Tp = const whisper_grammar_element* const; _Up = const whisper_grammar_element*; bool _IsMove = false]’ at /usr/include/c++/13/bits/stl_algobase.h:440:20,
    inlined from ‘_OI std::__copy_move_a2(_II, _II, _OI) [with bool _IsMove = false; _II = const whisper_grammar_element* const*; _OI = const whisper_grammar_element**]’ at /usr/include/c++/13/bits/stl_algobase.h:506:30,
    inlined from ‘_OI std::__copy_move_a1(_II, _II, _OI) [with bool _IsMove = false; _II = const whisper_grammar_element* const*; _OI = const whisper_grammar_element**]’ at /usr/include/c++/13/bits/stl_algobase.h:533:42,
...
```
This warning is caused by the fact that the `stack` vector is empty
when it is passed to `new_stacks.push_back(stack);`.

The suggested fix is to use `new_stacks.emplace_back();` instead of
`new_stacks.push_back(stack);`.
2025-04-28 19:11:38 +02:00
Sacha Arbonel f0171f0616
examples : expose language detection probabilities to server example (#3044)
* feat: expose language detection probabilities to server.cpp

* feat: enhance language detection output in server.cpp

* Remove empty spaces.
2025-04-28 18:25:45 +02:00
Daniel Bevenius b7db9e7aac
whisper : remove empty .gitmodules file [no ci] (#3085)
This commit removes the empty `.gitmodules` file from the repository.

The motivation for this is that the file is currently empty and the
project does not use any submodules at this time. Removing it mainly
reduces clutter in the repository and avoids confusion when the file is
seen in the repo.
2025-04-28 15:52:05 +02:00
Georgi Gerganov f3c42399a3
talk-llama : sync llama.cpp (#3084)
ggml-ci
2025-04-28 16:40:23 +03:00
Daniel Bevenius 28dcdff4c5
ci : disable publishing of java binding [no ci] (#3086)
This commit disables the publishing of the Java binding to the Maven
repository.

The motivation for this is that this job was disabled for some time and
recently it was re-enabled, but the publishing of the Java binding
caused the build to fail and needs to be investigated further.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3079
2025-04-28 15:38:52 +02:00
R0CKSTAR 50218b935d
build : Add Moore Threads GPU support and update GitHub workflow for MUSA build (#3069)
* Update PATH for main/main-cuda container

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Add Dockerfile for musa, .dockerignore and update CI

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Add Moore Threads GPU Support in README.md and replace ./main with whisper-cli

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Forward GGML_CUDA/GGML_MUSA to cmake in Makefile

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Minor updates for PATH ENV in Dockerfiles

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Address comments

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-04-28 11:06:41 +03:00
Pedro f9b2dfdd8c
examples : fix deprecated FFmpeg functions (#3073)
* Fix deprecated FFmpeg functions and free packet

* avcodec_free_context
2025-04-28 06:16:50 +02:00
KITAITI Makoto 50fda73f4c
ruby : add encoder begin callback related methods (#3076)
* Lazy run TestBase.whisper

* Fix indentation

* Remove disused GGML_HIP_UMA from Ruby

* Add encoder_begin_callback

* Comment out existing abort mechanism

* Add test for encoder_begin_callback

* Add signatures for encoder_begin_callback related methods

* Update gem date
2025-04-26 04:33:11 +09:00
Daniel Bevenius 1c20f46887
ci : enable bindings java job (#3070)
* ci : re-enable bindings-java (java) job

This commit re-enables the job previously named `java` which was
disabled in the build.yml file.

The motivation for this is that we recently fixed a few issues in the
Java bindings and it should be possible to build them on Windows.

Refs: https://github.com/ggerganov/whisper.cpp/pull/2949
Resolves: https://github.com/ggerganov/whisper.cpp/issues/2781
2025-04-25 14:56:06 +02:00
Georgi Gerganov adaea088bc ruby : add cmake option (#0) 2025-04-24 20:39:16 +03:00
Georgi Gerganov 6c0d843f9d cuda : fix unused variable compile warning (#0)
ggml-ci
2025-04-24 20:39:16 +03:00
Georgi Gerganov efb800557f sync : ggml
ggml-ci
2025-04-24 20:39:16 +03:00
Georgi Gerganov 337becefb9 opencl : remove obsolete files (skip) (ggml/1200) 2025-04-24 20:39:16 +03:00
Georgi Gerganov 11ae30c19e sync : ggml 2025-04-24 20:39:16 +03:00
lhez 88c3cecd43 opencl: split ggml-opencl.cl into multiple files and cleanup (llama/12886)
---------

Co-authored-by: Shangqing Gu <quic_shawngu@quicinc.com>
2025-04-24 20:39:16 +03:00
Georgi Gerganov fe4acb33e3 ggml : fix trailing whitespaces (llama/0) 2025-04-24 20:39:16 +03:00
Johannes Gäßler fd5a3e1bc6 CUDA: use switch statements in constexpr functions (llama/13095) 2025-04-24 20:39:16 +03:00
Georgi Gerganov 01e1600edd metal : fix floating-point range of attention scores in FA kernels (llama/13090)
ggml-ci
2025-04-24 20:39:16 +03:00
Eve cf3eb291ab vulkan: matmul gcn tuning (llama/13016)
* tune matmul for gcn

* this one is more power efficient

* Update ggml/src/ggml-vulkan/ggml-vulkan.cpp

Co-authored-by: 0cc4m <picard12@live.de>

* disable this tune for the proprietary driver

---------

Co-authored-by: 0cc4m <picard12@live.de>
2025-04-24 20:39:16 +03:00
Johannes Gäßler 3d54b68ea7 CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (llama/13014)
* CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID

* fix logic for RoPE support, CUDA graphs
2025-04-24 20:39:16 +03:00
Diego Devesa 11218294db ggml : add SSE 4.2 and x64 base variant for CPUs without AVX (llama/12871)
* ggml : add SSE 4.2 variant for CPUs without AVX

* ggml : add x64 base ABI variant
2025-04-24 20:39:16 +03:00
Akarshan Biswas 33c89ade7d SYCL: Add non-contiguous support in ROPE (llama/12993)
ggml-ci
2025-04-24 20:39:16 +03:00
Jeff Bolz 27a56e7243 vulkan: support noncontiguous rms_norm (llama/13031) 2025-04-24 20:39:16 +03:00
Jeffrey Morgan f4ca3e2f9c metal: add neg operator (llama/13029) 2025-04-24 20:39:16 +03:00
Akarshan Biswas 0287a5c51b SYCL: Refactor and enable FP16 in binary broadcast OPs (llama/12975)
* SYCL: refactor move to a separate file

* Fix binbcast

* Remove duplicates

* fix include formatting

* fix typo
2025-04-24 20:39:16 +03:00
Radoslav Gerganov 24d29c55df rpc : add RPC_CMD_HELLO (llama/12955)
Add RPC_CMD_HELLO for getting the version of the protocol implemented
by the server. Follow the semantic versioning rules at https://semver.org

Hopefully this brings a better user experience when we make breaking
changes at the protocol level and avoids issues like #12465
2025-04-24 20:39:16 +03:00
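A minimal sketch of how a client might use such a version handshake under semantic versioning; the struct layout and compatibility rule below are assumptions for illustration, not the actual wire format or policy.

```cpp
#include <cstdint>

// Assumed handshake payload: a major/minor/patch triple reported by the server.
struct rpc_hello_version {
    uint8_t major;
    uint8_t minor;
    uint8_t patch;
};

// Under semver, a differing major version signals a breaking protocol change;
// a server whose minor version is at least the client's should still offer
// everything the client expects.
bool protocol_compatible(const rpc_hello_version & server,
                         const rpc_hello_version & client) {
    return server.major == client.major && server.minor >= client.minor;
}
```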
Georgi Gerganov 36019c35a3 graph : make FA compatible with MLA + add initial Metal kernels (llama/12953)
* graph : make mla compatible with FA

* metal : add exp FA kernels for DeepSeek models

ggml-ci

* llama : minor naming updates

ggml-ci

* ggml : disable FA for DS head sizes

* tests : add FA tests for MLA shapes

ggml-ci
2025-04-24 20:39:16 +03:00
Alan Gray 4e936e2afa ggml: Re-enable CUDA graphs in presence of CONT and DUP nodes (llama/12970) 2025-04-24 20:39:16 +03:00
hipudding 314ce5981e CANN: Add support for async operator submission (llama/12864)
Submit operators using asynchronous threads to improve performance.

Use the environment variable GGML_CANN_ASYNC_MODE to control whether
asynchronous submission is enabled. It is disabled by default.

Testing shows a 10%–20% performance improvement in scenarios with
small parameter sizes, especially in quantized models.
2025-04-24 20:39:16 +03:00
kimminsu cb7642b0f5 opencl: fix incorrect local_size index in profiling log (llama/12868) 2025-04-24 20:39:16 +03:00
Jeff Bolz 7db8f278f0 vulkan: enable coopmat2 FA gqa and split_k optimizations more often (llama/12931)
The grouped query attention optimization doesn't require a power-of-two ratio;
the only thing relying on it was the modulo operation written as bitwise &.

split_k need not depend on gqa_ratio - enable it any time there's only one
workgroup in the X dimension. The shader gets the split index from the x coord,
and multiple workgroups in the X dimension (pre-split) indicates a larger
FA operation that wouldn't need splitting.
2025-04-24 20:39:16 +03:00
Chenguang Li be42a19eab CANN: Add 310P operator support check (llama/12962) 2025-04-24 20:39:16 +03:00
Georgi Gerganov b8755670ca metal : add FA-vec kernels for head size 96 (llama/12952)
ggml-ci
2025-04-24 20:39:16 +03:00
hipudding 483eecae62 CANN: Add x86 build ci (llama/12950)
* CANN: Add x86 build ci

* CANN: fix code format
2025-04-24 20:39:16 +03:00
David Huang 43e3d25d93 CUDA/HIP: Share the same unified memory allocation logic. (llama/12934)
Replace compile-time `GGML_HIP_UMA` with environment variable `GGML_CUDA_ENABLE_UNIFIED_MEMORY`. This unifies the usage on NVIDIA and AMD GPUs, and allows a single binary to be shared between integrated and dedicated GPUs.
2025-04-24 20:39:16 +03:00
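A rough sketch of what gating unified memory on the environment variable named above looks like; this is illustrative, not the actual ggml-cuda allocation path.

```cpp
#include <cstdlib>
#include <cuda_runtime.h>

// Sketch: decide at runtime, via GGML_CUDA_ENABLE_UNIFIED_MEMORY, whether to
// use managed (unified) memory or a regular device allocation. A single
// binary can then serve both integrated and dedicated GPUs.
static cudaError_t alloc_device_buffer(void ** ptr, size_t size) {
    const bool use_uma = std::getenv("GGML_CUDA_ENABLE_UNIFIED_MEMORY") != nullptr;
    if (use_uma) {
        return cudaMallocManaged(ptr, size);
    }
    return cudaMalloc(ptr, size);
}
```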
Akarshan Biswas e1dbf9a42e SYCL: Add ROPE vision kernel (llama/12887)
* SYCL: Add ROPE vision kernel

* Add comment about rope mode
2025-04-24 20:39:16 +03:00
Srihari-mcw ee0013865d ggml : Add AVX512 implementation of GEMM - Q4_Kx8 (llama/12829)
* Add AVX512 implementation of GEMM - q4kx8

* Update changes to remove unnecessary whitespaces
2025-04-24 20:39:16 +03:00
Chenguang Li 32a407166b CANN: Opt ROPE optimization (llama/12865)
* [CANN]Opt ROPE optimization

* [CANN]Codestyle adjustment

* [CANN]Fix the ROPE precision issue

* [CANN]codestyle fix

* [CANN]add rope unsupport case

Signed-off-by: noemotiovon <noemotiovon@gmail.com>
2025-04-24 20:39:16 +03:00
Xinpeng Dou 622f981853 CANN: Optimize CANN buffer pool memory management (llama/12875)
Multiple optional memory pools are provided for CANN, including VMM,
priority queue-based, and traditional memory pools.
1. When the VMM pool is available and GGML_CANN_DISABLE_VMM_POOL
   is not defined, the VMM pool is selected by default.
2. Otherwise, if GGML_CANN_ENABLE_BUF_PRIO_POOL is defined,
   the priority queue-based memory pool is used.
3. If neither condition is met, the default memory pool is used.
2025-04-24 20:39:16 +03:00
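The selection order above can be summarized in a short sketch; the enum and function are illustrative placeholders, while the two macros are the ones named in the commit.

```cpp
// Illustrative sketch of the pool selection order described above.
enum class cann_pool_kind { vmm, prio_queue, legacy };

static cann_pool_kind select_cann_pool(bool vmm_supported) {
    (void) vmm_supported; // may be unused depending on build flags
#if !defined(GGML_CANN_DISABLE_VMM_POOL)
    if (vmm_supported) {
        return cann_pool_kind::vmm;    // 1. VMM pool by default
    }
#endif
#if defined(GGML_CANN_ENABLE_BUF_PRIO_POOL)
    return cann_pool_kind::prio_queue; // 2. priority queue-based pool
#else
    return cann_pool_kind::legacy;     // 3. traditional memory pool
#endif
}
```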
Akarshan Biswas d049d67065 SYCL: Fix im2col (llama/12910)
* SYCL: Fix im2col

* restore local workgroup size adjustments for large inputs

* restore format
2025-04-24 20:39:16 +03:00
Radoslav Gerganov 877308838e rpc : use ggml_context_ptr (llama/12938) 2025-04-24 20:39:16 +03:00
Acly d87dfcf7c0 ggml : Depthwise 2D convolution (ggml/1152)
* ggml-cpu : kernels for faster depthwise 2D convolution

* fix compile: remove static after moving to ops.cpp

* add dilation for depthwise_conv_2d

* review: rename to ggml_conv_2d_dw_direct, remove redundant struct keywords, pass by ref, whitespace

* review: rename depthwise_conv_2d -> conv_2d_dw everywhere
2025-04-24 20:39:16 +03:00
SXX 915c14ef10 ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register (llama/12773)
* ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register

* simplifies the codebase by removing redundant functions
2025-04-24 20:39:16 +03:00
Alan Gray 5d33d3c929 ggml: disable CUDA graphs for unsupported DUP and CONT node types (llama/12891)
Fixes #12798
2025-04-24 20:39:16 +03:00
Jeff Bolz 751e42b21e vulkan: use aligned loads for flash attention mask (llama/12853)
Rewrite the stride logic for the mask tensor in the FA shader to force the
stride to be aligned, to allow using more efficient loads.
2025-04-24 20:39:16 +03:00
Ewan Crawford e8ee32d12d sycl: Support sycl_ext_oneapi_limited_graph (llama/12873)
The current usage of the SYCL-Graph extension checks for
the `sycl_ext_oneapi_graph` device aspect. However, it is also
possible to support `sycl_ext_oneapi_limited_graph` devices that
don't support update.
2025-04-24 20:39:16 +03:00
Akarshan Biswas e9ce285135 SYCL: Add fp16 type support to unary op kernels (llama/12788)
* SYCL: Add fp16 support to some elementwise OP kernels

* remove comment

ggml-ci

* Use static_cast directly

* remove not needed cast from tanh

* Use static cast and remove unneeded castings

* Adjust device_support_op for unary OPs

* Use cast_data and typed_data struct to deduplicate casting code
2025-04-24 20:39:16 +03:00
Aaron Teo b942f451b6 ggml: fix compilation error s390x (llama/12848)
* ggml: fixes #12846 compilation error

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>

* ggml: add documentation for code change

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>

* ggml: refactor to type-cast and update documentation

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>

* ggml: update documentation to provide full issue link

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>

---------

Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>
2025-04-24 20:39:16 +03:00
cmdr2 e6410faf99 cpu: fix cpu backend's supports-op for GET_ROWS_BACK. fixes a fatal error when running test-backend-ops with only the CPU backend (ggml/1190) 2025-04-24 20:39:16 +03:00
Chenguang Li 182df69384 CANN: Support more ops (llama/12841)
* [CANN]Support Opt LOG && MEAN && PAD_REFLECT_1D

* [CANN]Support COUNT_EQUAL && STEP && SGN

* [CANN]codestyle adjustment

* [CANN]codestyle adjustment

---------

Signed-off-by: noemotiovon <noemotiovon@gmail.com>
2025-04-24 20:39:16 +03:00
Prajwal B Mehendarkar 3bf9691dfd Fixes #12823 (llama/12830)
* Including limits file on AIX

* Fixes #12823
2025-04-24 20:39:16 +03:00
Piotr Kubaj ba444e9c23 ggml-cpu-impl.h: do not redefine bool on POWER9 (llama/12856)
error: unknown type name '_Bool'
2025-04-24 20:39:16 +03:00
Piotr Kubaj c6caf8eef2 ggml-impl.h: fix build on POWER9 (llama/12855)
error: ISO C++17 does not allow 'register' storage class specifier
2025-04-24 20:39:16 +03:00
Chenguang Li 6cae79a1d7 CANN: Support Opt CONV_TRANSPOSE_1D and ELU (llama/12786)
* [CANN] Support ELU and CONV_TRANSPOSE_1D

* [CANN]Modification review comments

* [CANN]Modification review comments

* [CANN]name adjustment

* [CANN]remove lambda used in template

* [CANN]Use std::func instead of template

* [CANN]Modify the code according to the review comments

---------

Signed-off-by: noemotiovon <noemotiovon@gmail.com>
2025-04-24 20:39:16 +03:00
Jeff Bolz b9bfe0c693 vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (llama/12833)
q4_k and q5_k had a lot of redundant global loads where the same 16B of
scale information is repeatedly loaded and decoded during each loop iteration.
This change restructures the loops to more explicitly iterate over whole
blocks in the outer loop (with unrolled inner loop) and to copy/decode the
scale data into shared memory once at the start of each outer loop. The copy
is pipelined so the scale load from global memory is relatively cheap.

This improves q4_k/q5_k model prompt processing performance by around 5-7%.
I briefly tried applying this to q6_k and q4_0, and it didn't help for q6_k
and hurt for q4_0.

The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped
variants isn't used as often as it originally was (e.g. due to the padded_N
change), so I trimmed it down to offset some of the new complexity of the
semi-manual loop unrolling.
2025-04-24 20:39:16 +03:00
Jeff Bolz 1d50c6ac22 vulkan: Use fp16 for the flash attention P*V multiplication (llama/12783)
This is consistent with the ggml-cuda behavior and the mul_mat fallback.
2025-04-24 20:39:16 +03:00
Sigbjørn Skjæret 79f23d9132 cuda : add f32 to bf16 copy op (llama/12806)
This allows BF16 KV-cache on CUDA.
2025-04-24 20:39:16 +03:00
Georgi Gerganov ee2cbeeb74 llama : fix FA when KV cache is not used (i.e. embeddings) (llama/12825)
* ggml : FA supports F32 V

* graph : cast KV to F16 when the KV cache is not used

ggml-ci

* server : add test that exercises embeddings with FA enabled

ggml-ci
2025-04-24 20:39:16 +03:00
cmdr2 868a5ce310 ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)
fix #1186
2025-04-24 20:39:16 +03:00
Diego Devesa b9c71fae5a ggml : add bilinear upscale support (ggml/1185) 2025-04-24 20:39:16 +03:00
Diego Devesa 6d67c6d93d ggml : add more generic custom op, remove deprecated custom ops (ggml/1183)
* ggml : add more generic ggml_custom op

* ggml : remove deprecated custom ops
2025-04-24 20:39:16 +03:00
Neo Zhang Jianyu 12cade118e Revert "sycl:remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor" (llama/12812)
* Revert "sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_s…"

This reverts commit 518a01480eb3a7c80a4951b430db9dee55428310.

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

* rm tail space
2025-04-24 20:39:16 +03:00
lhez fd1c725e65 opencl: better identify Adreno GPU (llama/12760) 2025-04-24 20:39:16 +03:00
Georgi Gerganov d33fd00cfe cuda : fix HIP and MUSA BF16 (llama/0)
ggml-ci
2025-04-24 20:39:16 +03:00
zhouwg 3e0d89782a sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor (llama/12734) 2025-04-24 20:39:16 +03:00
zhouwg 7074b622eb CANN: fix typo in ggml-cann (llama/12733) 2025-04-24 20:39:16 +03:00
hipudding b8d3e45342 CANN: Refactor to reduce duplicate code (llama/12731)
* CANN: Refactor to reduce duplicate code

* CANN: fix review comment
2025-04-24 20:39:16 +03:00
R0CKSTAR 1901505138 musa: fix compilation warnings in mp_22/31 (llama/12780)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-04-24 20:39:16 +03:00
Jeff Bolz 3c26dd3353 vulkan: fix NaN issue in flash attention shader (llama/12776)
Use -FLT_MAX/2 rather than -inf as the initial value for computing the maximum.
2025-04-24 20:39:16 +03:00
Jeff Bolz d792d2a2dc vulkan: Use unclamped loads for flash attention mask (llama/12720)
nem1 must be a multiple of GGML_KQ_MASK_PAD, and GGML_KQ_MASK_PAD is a multiple
of the number of rows in the matrix. The KV dim is a multiple of the number of
columns for the aligned shader.
2025-04-24 20:39:16 +03:00
0cc4m 8add58aa5e Vulkan: Tune Vulkan mmq int dot shader for performance (llama/12767) 2025-04-24 20:39:16 +03:00
Nicolò Scipione 8f8ede1b12 sycl: allow ggml-sycl configuration and compilation using Visual Studio project/solution (llama/12625) 2025-04-24 20:39:16 +03:00
Ronny Brendel 3a6fe8d767 cmake: fix ggml-shaders-gen compiler paths containing spaces (llama/12747)
fixes error for compiler paths with spaces
2025-04-24 20:39:16 +03:00
Jeff Bolz 76231bda56 vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (llama/12630)
There seems to be a bubble waking up from waitForFences, which costs a few
percent performance and also increased variance in performance. This change
inserts an "almost_ready" fence when the graph is about 80% complete and we
waitForFences for the almost_ready fence and then spin (with _mm_pauses) waiting
for the final fence to be signaled.
2025-04-24 20:39:16 +03:00
Jeff Bolz 785437c253 vulkan: set cmake minimum and project name in vulkan-shaders (llama/12744) 2025-04-24 20:39:16 +03:00
Gaurav Garg 2f0612cb1c CUDA: Prefer vector flash decoding kernel for Gemma models (llama/12738)
* Prefer vector flash decoding kernel for Gemma models

Vector flash decoding kernel was not being picked for models with head dimension 256. Gemma models are in this category.
Removing this limit improves e2e performance by up to 12% in generation-phase throughput for Gemma models.

* Update ggml/src/ggml-cuda/fattn.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-04-24 20:39:16 +03:00
Jeff Bolz e944065d5b vulkan: Fix missing cmake logic for dot product extension (llama/12721) 2025-04-24 20:39:16 +03:00
a3sh ccc7b5df0b fix MUSA compiler warning (llama/12704)
* fix MUSA compiler warning

* replace (void) with GGML_UNUSED
2025-04-24 20:39:16 +03:00
Chenguang Li fbed36851e CANN: Support operator SIN COS ARGMAX (llama/12709)
* [CANN]support sin cos argmax

Signed-off-by: noemotiovon <noemotiovon@gmail.com>

* [CANN]codestyle adjustment

Signed-off-by: noemotiovon <noemotiovon@gmail.com>

* [CANN]Remove redundant code

Signed-off-by: noemotiovon <noemotiovon@gmail.com>

---------

Signed-off-by: noemotiovon <noemotiovon@gmail.com>
Co-authored-by: noemotiovon <noemotiovon@gmail.com>
2025-04-24 20:39:16 +03:00
Alan Gray d1d847f184 Simplify and improve CUDA graphs through use of indirect copy pointers (llama/9017)
* CUDA: Simplify and improve CUDA graphs through use of indirect copy pointers

Previously there was complexity in the CUDA graphs implementation due
to frequently changing parameters to copy kernels associated with K and V
cache pointers. This patch simplifies by using indirection to avoid
such parameters frequently changing, avoiding the need for frequent
graph updates.

Fixes #12152

* Addressed comments

* fix HIP builds

* properly sync to stream

* removed ggml_cuda_cpy_fn_ptrs

* move stream sync before free

* guard to only use indirection with graphs

* style fixes

* check for errors

---------

Co-authored-by: slaren <slarengh@gmail.com>
2025-04-24 20:39:16 +03:00
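A simplified CUDA C++ sketch of the indirection idea (all names hypothetical, not the ggml-cuda implementation): the copy kernel reads its destination from a device-resident pointer table, so the captured graph's kernel parameters never change even when the K/V cache pointers do.
```cpp
#include <cuda_runtime.h>
#include <cstddef>

// The kernel's launch parameters stay constant across graph replays; only the
// contents of dst_table change, via a small async copy before each launch.
__global__ void indirect_copy(const char * src, char * const * dst_table, int slot, size_t n) {
    char * dst = dst_table[slot];   // indirection happens on the device
    size_t i = (size_t) blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        dst[i] = src[i];
    }
}

// Host side: point slot `slot` at the new K/V cache location without touching
// the graph, so it does not need to be updated or re-instantiated.
void set_copy_dst(char ** dev_dst_table, int slot, char * new_dst, cudaStream_t stream) {
    cudaMemcpyAsync(dev_dst_table + slot, &new_dst, sizeof(char *),
                    cudaMemcpyHostToDevice, stream);
}
```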
hipudding 337f91d4a6 CANN: Fix failed test cases (llama/12708)
* CANN: Fix memory waste in aclnn_tensor

* CANN: fix backend ops fail

* CANN: fix acl_tensor memory alloc.

* CANN: format

* CANN: remove trailing whitespace
2025-04-24 20:39:16 +03:00
lhez 317a0031f9 opencl: use `max_alloc_size` in backend ctx instead of querying again (llama/12705) 2025-04-24 20:39:16 +03:00
Jeff Bolz b243416918 vulkan: Implement split_k for coopmat2 flash attention. (llama/12627)
When using group query attention, we have one workgroup per KV batch and this
can be very few workgroups (e.g. just 8 in some models). Enable split_k to
spread the work across SMs. This helps a lot when the KV cache is large.
2025-04-24 20:39:16 +03:00
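For context, a C++ sketch of the split_k combine step (illustrative, not the coopmat2 shader): each split produces partial flash-attention statistics for its slice of the KV cache, and a final reduction merges them with the usual log-sum-exp rescaling.
```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct fa_partial {
    float m;                 // max score seen in this KV slice
    float l;                 // sum of exp(score - m) over the slice
    std::vector<float> o;    // exp-weighted accumulation of V rows
};

// Assumes parts is non-empty and all accumulators have the same length.
std::vector<float> combine_splits(const std::vector<fa_partial> & parts) {
    float m = -INFINITY;
    for (const auto & p : parts) {
        m = std::max(m, p.m);
    }

    float l = 0.0f;
    std::vector<float> o(parts[0].o.size(), 0.0f);
    for (const auto & p : parts) {
        const float scale = std::exp(p.m - m);   // rescale the slice to the global max
        l += scale * p.l;
        for (size_t i = 0; i < o.size(); ++i) {
            o[i] += scale * p.o[i];
        }
    }
    for (float & x : o) {
        x /= l;                                  // final softmax normalization
    }
    return o;
}
```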
bandoti 6e532c7187 cmake: remove caching from vulkan coopmat checks (llama/12719) 2025-04-24 20:39:16 +03:00
Jeff Bolz 2105b110d3 vulkan: Implement grouped query attention in the coopmat2 FA shader (llama/12559)
When adjacent batches of Q share the same batches of K/V, batch them into
the same workgroup. For example, when:

dst(128,32,1,1) = FA(q(128,1,32,1), k(128,16640,8,1), v(128,16640,8,1))

previously we would run 32 workgroups computing 1 result each, now we will
run 8 workgroups computing 4 results each.

This doesn't directly translate to better performance (at least when you have
>=32 SMs), but in a subsequent change I'll enable split_k which will scale much
better with 4x fewer workgroups.
2025-04-24 20:39:16 +03:00
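The example above corresponds to this small grouping calculation (a sketch; the names are illustrative):
```cpp
struct fa_dispatch {
    int n_workgroups;            // workgroups launched
    int results_per_workgroup;   // Q heads handled by each one
};

// With 32 Q heads sharing 8 K/V heads, groups of 32/8 = 4 adjacent Q heads
// see the same K/V data, so they can be batched into one workgroup:
// 8 workgroups x 4 results instead of 32 workgroups x 1 result.
fa_dispatch plan_gqa_dispatch(int n_q_heads, int n_kv_heads) {
    const int group = n_q_heads / n_kv_heads;
    return { n_q_heads / group, group };
}
```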
0cc4m f82622180f Vulkan: Fix mmq int dot float cache size (llama/12722) 2025-04-24 20:39:16 +03:00
Diego Devesa a71c64512a llama : add option to override model tensor buffers (llama/11397)
* llama : add option to override tensor buffers

* ggml : fix possible underflow in ggml_nbytes
2025-04-24 20:39:16 +03:00
Georgi Gerganov 1e9c2f87f1 ggml : simplify Arm fp16 CPU logic (ggml/1177)
* ggml : simplify Arm fp16 CPU logic

ggml-ci

* cont : bring back CUDA/MUSA checks

ggml-ci
2025-04-24 20:39:16 +03:00
Sigbjørn Skjæret 06ce8f83e6 CUDA: don't convert BF16 weights to FP32 (ggml/1174)
* add bf16 support

* use convert_from_bf16_cuda instead of convert_unary_cuda for f32

* revert 7ec5085

* move functionality into convert_unary with constexpr
2025-04-24 20:39:16 +03:00
Daniel Bevenius 8b92060a10
coreml : set convert_to="mlprogram" in convert
* coreml : skip model load in convert-whisper-to-coreml.py

This commit updates the conversion process for Whisper models to use the
"mlprogram" format instead of "neuralnetwork".

The motivation for this change is that when using the "neuralnetwork"
format the underlying model produced is based on protobuf and my
understanding is that there are limitations to this format, such as
sizes of strings and the complexity of the model.

Currently when trying to convert larger models such as large-v3 the
conversion fails but succeeds for smaller models.

The "mlprogram" format is a more recent addition to CoreML and is
designed to be more flexible and powerful, allowing for more complex
models and larger data types. This seems to work for larger and smaller
models alike, and unless there are considerations that I'm not aware
of, I think this is what we should be using moving forward.
The error that is generated for large models is the following:
```console
Running MIL backend_neuralnetwork pipeline: 100%|█████████| 9/9 [00:00<00:00, 35.44 passes/s]
Translating MIL ==> NeuralNetwork Ops: 100%|███████████| 5641/5641 [03:31<00:00, 26.65 ops/s]
Traceback (most recent call last):
  File "/Users/danbev/work/ai/whisper-work/models/convert-whisper-to-coreml.py", line 322, in <module>
    encoder = convert_encoder(hparams, encoder, quantize=args.quantize)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/models/convert-whisper-to-coreml.py", line 255, in convert_encoder
    model = ct.convert(
            ^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/converters/_converters_entry.py", line 635, in convert
    mlmodel = mil_convert(
              ^^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/converters/mil/converter.py", line 186, in mil_convert
    return _mil_convert(
           ^^^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/converters/mil/converter.py", line 245, in _mil_convert
    return modelClass(
           ^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/models/model.py", line 489, in __init__
    self.__proxy__, self._spec, self._framework_error = self._get_proxy_and_spec(
                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/danbev/work/ai/whisper-work/venv/lib/python3.11/site-packages/coremltools/models/model.py", line 550, in _get_proxy_and_spec
    _MLModelProxy(
ValueError: basic_string
```

Refs: https://github.com/ggml-org/whisper.cpp/issues/3012
2025-04-23 08:24:38 +02:00
Daniel Bevenius 7858eddd10
ci : disable freeBSD job in build.yml (#3064)
This commit disables the FreeBSD job in build.yml of the GitHub Actions
workflow.

The motivation for this is that this job seems to stall and timeout from
time to time, taking up to 6 hours to complete/cancel.
2025-04-22 11:07:54 +02:00
Daniel Bevenius 3a88f1e504
examples : add HEAPU8 to exported runtime methods (#3062)
This commit adds `HEAPU8` to the list of exported methods.

The motivation for this commit is that currently this is causing an
error on Windows systems where HEAPU8 is undefined, which results in the
following error message in the web console:
```console
main.js:1 Uncaught TypeError:
Cannot read properties of undefined (reading 'buffer') at __emval_get_property
(main.js:1:1363125) at 003a453a:0xc4a47 at 003a453a:0xc51cd at
Object.full_default (eval at craftInvokerFunction (main.js:1:1347011),
<anonymous>:9:10) at whisper.cpp/:647:42
```

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3059
2025-04-20 19:40:25 +02:00
KITAITI Makoto f0d2bfbfb7
ruby : make Ruby bindings installed with build options (#3056)
* Fix signature of URI.new's return value

* Use path instead of string | _ToPath

* Add document comment to RBS

* Remove unnecessary build flags

* Remove unnecessary line

* Remove files have become unnecessary

* Make gem install accept build options for whisper.cpp

* Add instructions for build options in README

* Add methods for check to Options

* Test build options

* Rename: configs -> options

* Add assert_installed assertion

* Use assert_installed

* Remove unused attribute

* Extract dependency check logic as Dependencies class

* Update README

* Add WHISPER_FFMPEG option

* Test extra build options only on local test

* Bump version to 1.3.2 [skip ci]
2025-04-17 18:49:58 +09:00
Sacha Arbonel 170b2faf75
whisper : add no_context parameter to whisper_params (#3045) 2025-04-16 06:24:38 +02:00
Fujimoto Seiji f8a3509b6d
examples : add FFmpeg v7.0 support to ffmpeg-transcode.cpp (#3038)
FFmpeg introduced a new channel layout API that uses `AVChannelLayout`
interface in v6.0. It subsequently dropped the old bitmask-based API
in v7.0.

This updates decode_audio() to support the new channel layout API,
so that we can compile `whisper-cli` and `whisper-server` with FFmpeg
v7.0 or later.

Tested on Ubuntu 24.10 with FFmpeg v7.0.2.

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>
2025-04-15 06:09:00 +02:00
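A hedged C++ sketch of the API difference decode_audio() has to bridge (the version guard is illustrative; the real code in examples/ffmpeg-transcode.cpp may gate it differently):
```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
#include <libavutil/version.h>
}

// FFmpeg >= 6.0 describes channels with the AVChannelLayout struct; the old
// uint64_t bitmask fields were deprecated and then removed entirely in v7.0.
static int channel_count(const AVCodecContext * ctx) {
#if LIBAVUTIL_VERSION_MAJOR >= 58     // FFmpeg 6.0 and later
    return ctx->ch_layout.nb_channels;
#else
    return ctx->channels;             // bitmask-era field, gone in FFmpeg 7.0
#endif
}
```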
KITAITI Makoto 2a2d21c75d
ruby: use CMake in build process (#3043)
* Use CMake to build shared object

* Make Rakefile follow change of build process

* Add test for packaging

* Run CI for Ruby bindings almost always

because each CMakeLists.txt might affect Ruby bindings

* Enable PIC

* Bump Ruby version to 3.2 on CI

* Check libgomp

* Check dependency of whisper.cpp accurately
2025-04-14 18:18:27 +09:00
Jeff Klassen 9cfcd6cc45
docs : update README.md to note newer nvidia gpus (#3031)
Resolves: https://github.com/ggml-org/whisper.cpp/issues/3030
2025-04-11 08:54:51 +02:00
Lin Xiaodong e853620270
addon.node : support max_context api for addon.node (#3025)
* feat: support max context

* feat: show api in test file

---------

Co-authored-by: linxiaodong <calm.lin@wukongsch.com>
2025-04-11 06:36:38 +02:00
Georgi Gerganov 549db9376f
whisper : reduce delta_min from 1000ms to 100ms (#3028)
ggml-ci
2025-04-11 06:23:02 +02:00
Fujimoto Seiji 33a25e4dda
docs : document how to use 'WHISPER_FFMPEG' build option (#3029)
FFmpeg integration was introduced in 1b51fdf by William Tambellini,
but not mentioned in the main documentation.

Add a short guide on how to enable the feature. Confirmed to work
on both Ubuntu 24.04 and Fedora 39.

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>
2025-04-10 18:21:38 +02:00
Ekaitz Zárraga 43f5030aeb
docs : fix README.md (#3024) 2025-04-09 19:49:37 +02:00
Daniel Bevenius cf794133de
xcf : use check for visionos build version (#3021)
This commit adds a check for the visionos build version used with vtool
in build-xcframework.sh. The script now checks the Xcode version and
determines whether to use "xros" or "visionos" for the build version.

This commit also uses xcrun for the vtool so that the version of vtool
in xcode command line tools is used instead of the one in the system
path.

Refs: https://github.com/ggml-org/whisper.cpp/pull/2994#issuecomment-2773292223
2025-04-09 16:34:58 +02:00
Olli ef6cf357e7
ruby : fix types of arguments for rb_get_kwargs in ruby_whisper_params.c (#3022)
Change param_names and values not to be references for rb_get_kwargs - so it can be compiled on ruby 3.3.6 and 3.4.1
2025-04-09 20:49:25 +09:00
Olli b1f5c11b32
ruby : Update uri.rb (#3016)
Bugfix ... without this Pathname the "/" operator wouldn't work and would throw an error
2025-04-08 22:27:40 +09:00
Greg Sadetsky ada745f4a5
models : fix dead link to models in readme (#3006) 2025-04-06 08:29:41 +03:00
KITAITI Makoto 01985c22c0
ruby : change homepage URI in Ruby gemspec (#3007) 2025-04-05 07:55:09 +03:00
Fujimoto Seiji 448f3d3b93
tests : add script to benchmark whisper.cpp on LibriSpeech corpus (#2999)
* tests : add script to benchmark whisper.cpp on LibriSpeech corpus

LibriSpeech is a widely-used benchmark dataset for training and
testing speech recognition models.

This adds a set of scripts to measure the recognition accuracy of
whisper.cpp models, following the common benchmark standards.

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>

* Document how to prepare `whisper-cli` and model files

Feedback from Daniel Bevenius.

This adds a short code example showing how to prepare the `whisper-cli`
command, to make the initial setup step a little bit clearer.

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>

* tests : Simplify how to set up Python environment

Based on a feedback from Georgi Gerganov.

Instead of setting up a virtual environment in Makefile, let users
set up the Python environment. This is better since users may have
their own preferred workflow/toolkit.

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>

---------

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>
2025-04-04 19:51:26 +03:00
Fujimoto Seiji e6234cd435
whisper : fix "bench-all outputs an invalid result on larger models" (#3002)
The benchmark script 'scripts/bench-all.sh' assumes that the 11th
field of the output line is a timestamp. This assumption does not
hold when the target model takes a bit longer to process.

Fix this issue by introducing an explicit whitespace to the output
lines of `whisper_print_timings()`.

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>
2025-04-04 18:36:19 +03:00
Georgi Gerganov 2b6d0d2200
rename : ggerganov -> ggml-org (#3005) 2025-04-04 16:11:52 +03:00
Daniel Bevenius 0b17d4507e
examples : update server.py to match github pages app [no ci] (#3004)
This commit updates examples/server.py which is used to serve the wasm
examples locally. The changes include:

- Added a redirect from the root URL to /whisper.cpp.
  So now accessing http://localhost:8000/ will redirect to
  http://localhost:8000/whisper.cpp/ which matches the url for the app
  deployed to github pages.

- Custom handling for coi-serviceworker.js to serve it to avoid
  an error in the console. This file is not strictly necessary
  for the local server to work as the headers are provided already but
  it is nice to not have an error in the console.

- Fixed the shutdown of the server to ensure it exits cleanly
  on Ctrl+C. Previously it would continue to hang onto the port even
  after the process had exited.
2025-04-04 10:23:53 +02:00
Daniel Bevenius 77e0c86ab6
whisper.wasm : fix unknown language issue (#3000)
* whisper.wasm : fix unknown language issue

This commit addresses an issue with whisper.wasm where the following
error was being displayed when running the application in github pages:
```
whisper_lang_id: unknown language 'д=␙c'
```

This turned out to be a memory corruption issue and further details
can be found in the reference issue below.

Refs: https://github.com/ggerganov/whisper.cpp/issues/2998
2025-04-03 19:50:47 +02:00
Georgi Gerganov eac1bc9c47 examples : add new sources
ggml-ci
2025-04-03 10:30:16 +03:00
Georgi Gerganov cbde66d913 sync : ggml 2025-04-03 10:30:16 +03:00
cmdr2 513ecf8dc0 cpu: move all the operators into a separate c++ file (except mul_mat) (ggml/1167)
* cpu: refactor SIMD mappings and vectorized op functions into separate files

* Fix warning for ggml_float to float

* Fix warnings

* cpu: move all the operations (except mul_mat) to a separate c++ file

* fix whitespace

* Update ggml/src/ggml-cpu/vec.h

Co-authored-by: Diego Devesa <slarengh@gmail.com>

* Fix PR comments - use GGML_UNUSED, use cassert in ops.cpp

* Reverse the order of import for ops.h and vec.h, to match what was present in ggml-cpu.c previously

---------

Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-04-03 10:30:16 +03:00
Daniel Bevenius cce5daf17b
docs : add xcframework section to README.md [no ci] (#2997)
This adds a section to the README.md file that describes how to use the
XCFramework.

The motivation for this is that it is not obvious how to use the
XCFramework, and an example will help.
One thing to note is that the example is using the latest release
including the checksum. We are thinking about how we might automate
this in the future but for now this is a good start.
2025-04-03 09:06:53 +02:00
Georgi Gerganov 2c502b3c00
readme : update roadmap link 2025-04-02 17:38:35 +03:00
Georgi Gerganov 51c6961c7b release : v1.7.5 2025-04-02 16:39:48 +03:00
Georgi Gerganov 503a786c9a
bench : update numbers [no ci] (#2993) 2025-04-02 16:27:36 +03:00
Georgi Gerganov ad4e350933 sync : ggml
ggml-ci
2025-04-02 15:51:57 +03:00
Chenguang Li d7a9346ab1 get_rows and dup optimization (llama/12671)
* [CANN]get_rows and dup optimization.

Co-authored-by: hipudding <huafengchun@gmail.com>
Signed-off-by: noemotiovon <noemotiovon@gmail.com>

* [CANN]GET_ROWS and CPY/DUP optimization

Co-authored-by: hipudding <huafengchun@gmail.com>
Signed-off-by: noemotiovon <noemotiovon@gmail.com>

* [CANN]code style adjustment

Signed-off-by: noemotiovon <noemotiovon@gmail.com>

* [CANN]code style adjustment

Signed-off-by: noemotiovon <noemotiovon@gmail.com>

* [CANN]code style adjustment

Signed-off-by: noemotiovon <noemotiovon@gmail.com>

* [CANN]code style adjustment

Signed-off-by: noemotiovon <noemotiovon@gmail.com>

---------

Signed-off-by: noemotiovon <noemotiovon@gmail.com>
Co-authored-by: noemotiovon <noemotiovon@gmail.com>
Co-authored-by: hipudding <huafengchun@gmail.com>
2025-04-02 15:51:57 +03:00
Junil Kim b63d23f728 opencl : fix memory allocation size (llama/12649)
issue:
https://github.com/CodeLinaro/llama.cpp/pull/17#issuecomment-2760611283

This patch fixes the memory allocation size
so that it does not exceed the maximum allocation size of the OpenCL device.
2025-04-02 15:51:57 +03:00
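A minimal OpenCL C++ sketch of the constraint this enforces (illustrative, not the ggml-opencl code): a single buffer allocation must stay within CL_DEVICE_MAX_MEM_ALLOC_SIZE, which is usually much smaller than the device's total global memory.
```cpp
#include <CL/cl.h>
#include <cstdio>

cl_mem create_buffer_checked(cl_context ctx, cl_device_id dev, size_t size, cl_int * err) {
    cl_ulong max_alloc = 0;
    clGetDeviceInfo(dev, CL_DEVICE_MAX_MEM_ALLOC_SIZE, sizeof(max_alloc), &max_alloc, nullptr);

    if (size > max_alloc) {
        std::fprintf(stderr, "alloc of %zu bytes exceeds device max of %llu bytes\n",
                     size, (unsigned long long) max_alloc);
        *err = CL_INVALID_BUFFER_SIZE;
        return nullptr;
    }
    return clCreateBuffer(ctx, CL_MEM_READ_WRITE, size, nullptr, err);
}
```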
Georgi Gerganov f6ce10e4a1 metal : use F32 prec in FA kernels (llama/12688)
* metal : use F32 prec in FA kernels

ggml-ci

* cont : fix FA vec kernel

ggml-ci
2025-04-02 15:51:57 +03:00
R0CKSTAR 6cb2b86581 Fix clang warning in gguf_check_reserved_keys (llama/12686)
* Fix clang warning in gguf_check_reserved_keys

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Fix typo

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-04-02 15:51:57 +03:00
Wagner Bruna 801d6bd809 vulkan: fix build when glslc doesn't support coopmat (llama/12683) 2025-04-02 15:51:57 +03:00
Romain Biessy ddf7e6a15d SYCL: Rename oneMKL to oneMath (llama/12192)
* Rename oneMKL Interface to oneMath

* Use oneMath for Intel vendor

* Rename occurrences to mkl

* clang-format

* Silence verbose warnings

* Set oneMath HIP_TARGETS

* Fix silence warnings

* Remove step to build oneMath from build instructions

* Use fixed oneMath version

* Remove INTEL_CPU

* Fold CMake oneDNN conditions

* Use Intel oneMKL for Intel devices

* Improve CMake message

* Link against MKL::MKL_SYCL::BLAS only

* Move oneMath documentation to Nvidia and AMD sections
2025-04-02 15:51:57 +03:00
Akarshan Biswas 0d42097fd3 SYCL: switch to SYCL namespace (llama/12674) 2025-04-02 15:51:57 +03:00
a3sh 842b9c984c ggml : faster ssm scan (llama/10558)
* faster ssm_scan

* delete unused comment

* clang format

* add space

* modify unnecessary calculations

* faster ssm conv implementation

* modify file name with dash
2025-04-02 15:51:57 +03:00
0cc4m 0810f02547 Vulkan: Add DP4A MMQ and Q8_1 quantization shader (llama/12135)
* Vulkan: Add DP4A MMQ and Q8_1 quantization shader

* Add q4_0 x q8_1 matrix matrix multiplication support

* Vulkan: Add int8 coopmat MMQ support

* Vulkan: Add q4_1, q5_0 and q5_1 quants, improve integer dot code

* Add GL_EXT_integer_dot_product check

* Remove ggml changes, fix mmq pipeline picker

* Remove ggml changes, restore Intel coopmat behaviour

* Fix glsl compile attempt when integer vec dot is not supported

* Remove redundant code, use non-saturating integer dot, enable all matmul sizes for mmq

* Remove redundant comment

* Fix integer dot check

* Fix compile issue with unsupported int dot glslc

* Update Windows build Vulkan SDK version
2025-04-02 15:51:57 +03:00
Georgi Gerganov 8c13c78f9d cmake : fix whitespace (llama/0) 2025-04-02 15:51:57 +03:00
Daniel Bevenius f31b404fcb
tests : remove gh label test-whisper-cli-tiny-en (#2988)
This commit removes test-whisper-cli-tiny-en from the gh label.

The motivation for this change is that until recently the tests were
disabled. But now that they are enabled, some of the tests, specifically
the ci jobs that use sanitizers (e.g. thread-sanitizer), take a long time
to run as they are instrumented.
Some of these jobs also use matrices, which means that multiple jobs are
created that all run these tests.
The suggestion here is to limit the number of tests that are run in the
ci jobs to cut down the CI build time.
2025-04-02 10:50:31 +02:00
Daniel Bevenius 854c0518bc
examples : clarify Core ML encoder model usage [no ci] (#2987)
This commit clarifies the usage of the Core ML encoder model in the
whisper.obj and whisper.swiftui examples.

Refs: https://github.com/ggerganov/whisper.cpp/issues/2783
2025-04-02 08:32:14 +02:00
Daniel Bevenius c8e3968edd
ci : remove intermediate build on push to master (#2986)
This commit removes the builds that happen on each push to master.

Refs: https://github.com/ggerganov/whisper.cpp/discussions/2983#discussioncomment-12691424
2025-04-02 08:29:28 +02:00
Daniel Bevenius b358de2458
whisper.objc : fix typo in README.md [no ci] (#2985)
This commit fixes a typo in the README.md file of the whisper.objc
example.

Resolves: https://github.com/ggerganov/whisper.cpp/issues/2984
2025-04-02 08:26:57 +02:00
Daniel Bevenius 11688b262f
coreml: fix Whisper to CoreML conversion by disabling SDPA [no ci] (#2979)
* coreml: fix Whisper to CoreML conversion by disabling SDPA

This commit disables the use of PyTorch's
`scaled_dot_product_attention` in the Whisper model to avoid
compatibility issues during CoreML conversion.
The issue occurs because coremltools requires PyTorch 2.5.0, but the
Whisper implementation may expect behavior from newer PyTorch versions.

By setting `MultiHeadAttention.use_sdpa = False`, we force Whisper to
use its fallback manual attention implementation, which works correctly
with PyTorch 2.5.0 during the tracing process.

Refs: https://github.com/ggerganov/whisper.cpp/issues/2783

* coreml: fix audio shape in whisper decoder conversion

This commit fixes the audio shape in the whisper decoder conversion
script.

The motivation for this is that the  audio shape was incorrect and
was causing the conversion to fail.

* coreml : set -e in generate-coreml-interface.sh

The commit sets the -e flag in the generate-coreml-interface.sh script
to make sure the script fails if any command fails.

* coreml : update generated encoder/decoder interfaces

This commit updates the generated encoder/decoder interfaces for the
whisper model which is the result of running the
generate-coreml-interface.sh script.
2025-04-01 18:01:23 +02:00
Daniel Bevenius 04b9508fb3
ci : add coreml job that converts base.en to coreml [no ci] (#2981)
* ci : add coreml job that converts base.en to coreml [no ci]

This commit adds a new job to the CI pipeline that downloads the base.en
model and converts it to CoreML format. The CoreML model is then packed
into a zip file and uploaded as an artifact.

This will only be done for pushes to master, releases, or pre-releases.

Refs: https://github.com/ggerganov/whisper.cpp/issues/2783

* coreml : remove publishing of coreml model

* ci : add GGML_OPENMP=OFF to ubuntu-22-gcc-sanitized
2025-04-01 17:04:32 +02:00
Daniel Bevenius 4200430e75
tests : re-enable tests [no ci] (#2977)
This commit re-enables the tests in the build process which are
currently commented out.

It is possible to build the tests using `-DWHISPER_BUILD_TESTS=ON` and
then run a single test using:
```console
$ ctest -R test-whisper-cli-tiny.en --test-dir build
Internal ctest changing into directory: /home/danbev/work/ai/whisper-work/build
Test project /home/danbev/work/ai/whisper-work/build
    Start 2: test-whisper-cli-tiny.en
1/1 Test #2: test-whisper-cli-tiny.en .........   Passed    4.44 sec

100% tests passed, 0 tests failed out of 1

Label Time Summary:
en      =   4.44 sec*proc (1 test)
gh      =   4.44 sec*proc (1 test)
tiny    =   4.44 sec*proc (1 test)

Total Test time (real) =   4.44 sec
```

Some of the tests take a long time to run so it might not be a good idea
to enable them in CI, or perhaps we could only run a subset of the tests
in CI.
2025-03-31 17:04:37 +02:00
Daniel Bevenius e153b8eaa2
android.java : re-add ggml source updates (#2975)
This commit updates the ggml source to include the new unary and binary
operations. I merged https://github.com/ggerganov/whisper.cpp/pull/2958
which seems to have overwritten the changes to the ggml source which
were added in https://github.com/ggerganov/whisper.cpp/pull/2972.

Sorry about this.
2025-03-31 16:14:33 +02:00
Daniel Bevenius 83af237f0b
ci : re-enable freeBSD-latest job (#2973)
This commit re-enables the freeBSD-latest job which has been commented
out.

Refs: https://github.com/ggerganov/whisper.cpp/issues/2781
2025-03-31 15:24:08 +02:00
Daniel Bevenius 7a2e39750a
ci : re-enable android_java job (#2958)
This commit re-enables the android_java job in the CI workflow. The job
was disabled because of a failing build.

The motivation for this is that Commit
226d344f56 ("whisper.android.java : update
build with ggml source changes") addressed build issues and it should
now be possible to re-enable this job.
2025-03-31 15:14:24 +02:00
Georgi Gerganov 0a40ae9728 android : add new ggml source files
ggml-ci
2025-03-31 14:56:53 +03:00
Georgi Gerganov 32cfdcbf42 ruby : add new ggml sources
ggml-ci
2025-03-31 14:56:53 +03:00
Georgi Gerganov cfa42aca09 sync : ggml
ggml-ci
2025-03-31 14:56:53 +03:00
Akarshan Biswas 2e2f0f954b SYCL: Remove misleading ggml_sycl_op_flatten function (llama/12387)
* SYCL: Remove misleading ggml_sycl_op_flatten function

* remove trailing whitespace

* Fix L2 norm from rebase

* remove try catch block from element_wise.cpp

* remove comment from common.hp

* ggml-sycl.cpp: Add try catch sycl::exception block in compute_forward

* norm.cpp: remove try catch exception block
2025-03-31 14:56:53 +03:00
Georgi Gerganov 93631b2be6 metal : use constexpr in FA kernels + fix typedef (llama/12659)
* metal : use constexpr in FA kernels

ggml-ci

* cont

ggml-ci

* cont : fix typedef

ggml-ci
2025-03-31 14:56:53 +03:00
R0CKSTAR f9015b585b musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (llama/12611)
* musa: fix all warnings

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: enable -DLLAMA_FATAL_WARNINGS=ON in run.sh

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: update ci doc (install ccache)

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* fix Windows build issue

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Address review comments

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Address review comments

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-03-31 14:56:53 +03:00
Jay 1880ffd7ff cmake : fix ccache conflict (llama/12522)
If users already set CMAKE_C_COMPILER_LAUNCHER globally, setting it in
cmake again will lead to conflict and compile fail.

Signed-off-by: Jay <BusyJay@users.noreply.github.com>
2025-03-31 14:56:53 +03:00
Xuan-Son Nguyen 9173932c78 cpu : rm unused variable (ggml/1166) 2025-03-31 14:56:53 +03:00
cmdr2 94c3f3877f cpu: de-duplicate some of the operators and refactor (ggml/1144)
* cpu: de-duplicate some of the operators and refactor

* Fix PR comments

* Fix PR comments
2025-03-31 14:56:53 +03:00
Sandro Hanea 00086469fb
cmake: improve Vulkan cooperative matrix support checks (#2966)
Co-authored-by: Sandro Hanea <me@sandro.rocks>
2025-03-31 13:44:36 +03:00
Daniel Bevenius 2d8e40e2a0
examples : update README links to point to pages deployment (#2971)
This commit updates the README links to point to the pages deployment
instead of whisper.ggerganov.com.
2025-03-31 12:32:27 +02:00
Daniel Bevenius e17af6524f
ci : add github pages workflow for wasm examples (#2969)
* ci : add github pages workflow for wasm examples

This commit adds a github workflow to build and deploy the wasm examples
to github pages. The whisper.wasm example is deployed as the main page.

This workflow is triggered by a push to master and will deploy the
examples to: https://ggerganov.github.io/whisper.cpp/.

This requires that the repository has GitHub Actions enabled and that
`Settings` -> `Pages` -> `Build and deployment` -> `Source` is set to
`GitHub Actions`.

One thing to note is that this commit removes the `talk` example as I'm
not sure how this example is built yet.

Refs: https://github.com/ggerganov/whisper.cpp/issues/2784
2025-03-31 11:34:40 +02:00
1492 changed files with 317671 additions and 102646 deletions

View File

@ -1,6 +1,6 @@
ARG UBUNTU_VERSION=22.04
# This needs to generally match the container host's environment.
ARG CUDA_VERSION=12.3.1
ARG CUDA_VERSION=13.0.0
# Target the CUDA build image
ARG BASE_CUDA_DEV_CONTAINER=nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION}
# Target the CUDA runtime image
@ -13,28 +13,38 @@ WORKDIR /app
ARG CUDA_DOCKER_ARCH=all
# Set nvcc architecture
ENV CUDA_DOCKER_ARCH=${CUDA_DOCKER_ARCH}
# Enable cuBLAS
ENV GGML_CUDA=1
RUN apt-get update && \
apt-get install -y build-essential libsdl2-dev wget cmake git \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*
# Ref: https://stackoverflow.com/a/53464012
ENV CUDA_MAIN_VERSION=12.3
ENV CUDA_MAIN_VERSION=13.0
ENV LD_LIBRARY_PATH /usr/local/cuda-${CUDA_MAIN_VERSION}/compat:$LD_LIBRARY_PATH
COPY .. .
RUN make base.en
# Enable cuBLAS
RUN make base.en CMAKE_ARGS="-DGGML_CUDA=1 -DCMAKE_CUDA_ARCHITECTURES='75;80;86;90'"
RUN find /app/build -name "*.o" -delete && \
find /app/build -name "*.a" -delete && \
rm -rf /app/build/CMakeFiles && \
rm -rf /app/build/cmake_install.cmake && \
rm -rf /app/build/_deps
FROM ${BASE_CUDA_RUN_CONTAINER} AS runtime
ENV CUDA_MAIN_VERSION=12.3
ENV CUDA_MAIN_VERSION=13.0
ENV LD_LIBRARY_PATH /usr/local/cuda-${CUDA_MAIN_VERSION}/compat:$LD_LIBRARY_PATH
WORKDIR /app
RUN apt-get update && \
apt-get install -y curl ffmpeg wget cmake git \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*
COPY --from=build /app /app
RUN du -sh /app/*
RUN find /app -type f -size +100M
ENV PATH=/app/build/bin:$PATH
ENTRYPOINT [ "bash", "-c" ]

View File

@ -0,0 +1,28 @@
ARG ONEAPI_VERSION=2025.1.1-0-devel-ubuntu24.04
FROM intel/oneapi-basekit:$ONEAPI_VERSION AS build
WORKDIR /app
RUN apt-get update && \
apt-get install -y build-essential libsdl2-dev wget cmake git \
&& rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*
COPY .. .
# Enable SYCL
ARG GGML_SYCL_F16=OFF
RUN if [ "${GGML_SYCL_F16}" = "ON" ]; then \
echo "GGML_SYCL_F16 is set" \
&& export OPT_SYCL_F16="-DGGML_SYCL_F16=ON"; \
fi && \
make base.en CMAKE_ARGS="-DGGML_SYCL=1 -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx ${OPT_SYCL_F16}"
FROM intel/oneapi-basekit:$ONEAPI_VERSION AS runtime
WORKDIR /app
RUN apt-get update && \
apt-get install -y curl ffmpeg libsdl2-dev wget cmake git \
&& rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*
COPY --from=build /app /app
ENV PATH=/app/build/bin:$PATH
ENTRYPOINT [ "bash", "-c" ]

View File

@ -0,0 +1,40 @@
ARG UBUNTU_VERSION=22.04
# This needs to generally match the container host's environment.
ARG MUSA_VERSION=rc4.2.0
# Target the MUSA build image
ARG BASE_MUSA_DEV_CONTAINER=mthreads/musa:${MUSA_VERSION}-devel-ubuntu${UBUNTU_VERSION}-amd64
# Target the MUSA runtime image
ARG BASE_MUSA_RUN_CONTAINER=mthreads/musa:${MUSA_VERSION}-runtime-ubuntu${UBUNTU_VERSION}-amd64
FROM ${BASE_MUSA_DEV_CONTAINER} AS build
WORKDIR /app
RUN apt-get update && \
apt-get install -y build-essential libsdl2-dev wget cmake git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/* /tmp/* /var/tmp/*
COPY .. .
# Enable muBLAS
RUN make base.en CMAKE_ARGS="-DGGML_MUSA=1"
RUN find /app/build -name "*.o" -delete && \
find /app/build -name "*.a" -delete && \
rm -rf /app/build/CMakeFiles && \
rm -rf /app/build/cmake_install.cmake && \
rm -rf /app/build/_deps
FROM ${BASE_MUSA_RUN_CONTAINER} AS runtime
WORKDIR /app
RUN apt-get update && \
apt-get install -y curl ffmpeg wget cmake git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/* /tmp/* /var/tmp/*
COPY --from=build /app/build/bin /app/build/bin
COPY --from=build /app/samples /app/samples
COPY --from=build /app/models /app/models
ENV PATH=/app/build/bin:$PATH
ENTRYPOINT [ "bash", "-c" ]

View File

@ -0,0 +1,20 @@
FROM ubuntu:24.04 AS build
WORKDIR /app
RUN apt-get update && \
apt-get install -y build-essential wget cmake git libvulkan-dev glslc \
&& rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*
COPY .. .
RUN make base.en CMAKE_ARGS="-DGGML_VULKAN=1"
FROM ubuntu:24.04 AS runtime
WORKDIR /app
RUN apt-get update && \
apt-get install -y curl ffmpeg libsdl2-dev wget cmake git libvulkan1 mesa-vulkan-drivers \
&& rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*
COPY --from=build /app /app
ENV PATH=/app/build/bin:$PATH
ENTRYPOINT [ "bash", "-c" ]

View File

@ -16,4 +16,5 @@ RUN apt-get update && \
&& rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*
COPY --from=build /app /app
ENV PATH=/app/build/bin:$PATH
ENTRYPOINT [ "bash", "-c" ]

3
.dockerignore Normal file
View File

@ -0,0 +1,3 @@
build*/
.github/
.devops/

View File

@ -13,10 +13,10 @@ jobs:
ubuntu-22:
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v5
- uses: actions/setup-go@v6
with:
go-version: '^1.23'
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- run: |
cd bindings/go
make test

View File

@ -1,55 +1,11 @@
name: Bindings Tests (Ruby)
on:
push:
paths:
- bindings/ruby/**
- src/**/*.c
- src/**/*.cpp
- src/**/*.h
- src/**/*.m
- src/**/*.metal
- include/**/*.c
- include/**/*.cpp
- include/**/*.h
- include/**/*.m
- include/**/*.metal
- ggml/**/*.c
- ggml/**/*.cpp
- ggml/**/*.h
- ggml/**/*.m
- ggml/**/*.metal
- scripts/get-flags.mk
- examples/common.h
- examples/common.cpp
- examples/common-whisper.h
- examples/common-whisper.cpp
- examples/stb_vorbis.c
- examples/miniaudio.h
branches:
- master
pull_request:
paths:
- bindings/ruby/**
- src/**/*.c
- src/**/*.cpp
- src/**/*.h
- src/**/*.m
- src/**/*.metal
- include/**/*.c
- include/**/*.cpp
- include/**/*.h
- include/**/*.m
- include/**/*.metal
- ggml/**/*.c
- ggml/**/*.cpp
- ggml/**/*.h
- ggml/**/*.m
- ggml/**/*.metal
- scripts/get-flags.mk
- examples/common.h
- examples/common.cpp
- examples/common-whisper.h
- examples/common-whisper.cpp
- examples/stb_vorbis.c
- examples/miniaudio.h
types: [opened, synchronize, reopened]
jobs:
ubuntu-22:
@ -60,6 +16,6 @@ jobs:
steps:
- uses: ruby/setup-ruby@v1
with:
ruby-version: '3.1'
- uses: actions/checkout@v4
ruby-version: '3.2'
- uses: actions/checkout@v6
- run: rake test

File diff suppressed because it is too large

View File

@ -15,16 +15,18 @@ jobs:
env:
COMMIT_SHA: ${{ github.sha }}
strategy:
fail-fast: false
matrix:
config:
- { tag: "main", dockerfile: ".devops/main.Dockerfile", platform: "linux/amd64" }
#TODO: the cuda image keeps failing - disable for now
# https://github.com/ggerganov/whisper.cpp/actions/runs/11019444428/job/30602020339
#- { tag: "main-cuda", dockerfile: ".devops/main-cuda.Dockerfile", platform: "linux/amd64" }
- { tag: "main-musa", dockerfile: ".devops/main-musa.Dockerfile", platform: "linux/amd64" }
- { tag: "main-intel", dockerfile: ".devops/main-intel.Dockerfile", platform: "linux/amd64" }
- { tag: "main-cuda", dockerfile: ".devops/main-cuda.Dockerfile", platform: "linux/amd64" }
- { tag: "main-vulkan", dockerfile: ".devops/main-vulkan.Dockerfile", platform: "linux/amd64" }
steps:
- name: Check out the repo
uses: actions/checkout@v3
uses: actions/checkout@v6
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
@ -41,21 +43,35 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Docker image (versioned)
if: github.event_name == 'push'
uses: docker/build-push-action@v5
with:
context: .
push: true
platforms: ${{ matrix.config.platform }}
tags: "ghcr.io/${{ github.repository }}:${{ matrix.config.tag }}-${{ env.COMMIT_SHA }}"
file: ${{ matrix.config.dockerfile }}
- name: Free up disk space
run: |
sudo apt-get remove -y '^dotnet-.*' '^llvm-.*' '^mysql-.*' '^postgresql-.*'
sudo apt-get autoremove -y
sudo apt-get autoclean
sudo rm -rf /usr/share/dotnet
sudo rm -rf /usr/local/lib/android
sudo rm -rf /opt/ghc
sudo rm -rf /opt/hostedtoolcache/CodeQL
docker system prune -af
df -h
- name: Generate tags
id: tags
run: |
TAGS="ghcr.io/${{ github.repository }}:${{ matrix.config.tag }}"
if [ "${{ github.event_name }}" == "push" ]; then
TAGS="$TAGS,ghcr.io/${{ github.repository }}:${{ matrix.config.tag }}-${{ env.COMMIT_SHA }}"
fi
echo "tags=$TAGS" >> $GITHUB_OUTPUT
- name: Build and push Docker image (tagged)
uses: docker/build-push-action@v4
uses: docker/build-push-action@v6
with:
context: .
push: ${{ github.event_name == 'push' }}
platforms: ${{ matrix.config.platform }}
tags: "ghcr.io/${{ github.repository }}:${{ matrix.config.tag }}"
tags: ${{ steps.tags.outputs.tags }}
file: ${{ matrix.config.dockerfile }}

97
.github/workflows/examples-wasm.yml vendored Normal file
View File

@ -0,0 +1,97 @@
name: Examples WASM
on:
push:
branches: ["master"]
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
concurrency:
group: "pages"
cancel-in-progress: false
jobs:
deploy-wasm-github-pages:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
- name: Setup Pages
uses: actions/configure-pages@v5
- name: Setup emsdk
uses: mymindstorm/setup-emsdk@v14
- name: Build WASM Examples
# Enable for real build later in whisper.cpp
run: |
mkdir -p build-em && cd build-em
emcmake cmake .. -DCMAKE_BUILD_TYPE=Release
make -j
- name: Create staging directory
run: mkdir -p staging
- name: Create .nojekyll file in staging directory
run: touch staging/.nojekyll
- name: Copy application files
run: |
build_dir=build-em/bin
ls ${build_dir}
# command.wasm
target_dir=staging/command.wasm
mkdir -p ${target_dir}
cp ${build_dir}/command.wasm/{index.html,command.js,helpers.js} ${target_dir}
cp ${build_dir}/libcommand.js ${target_dir}
# bench.wasm
target_dir=staging/bench.wasm
mkdir -p ${target_dir}
cp ${build_dir}/bench.wasm/{index.html,bench.js,helpers.js} ${target_dir}
cp ${build_dir}/libbench.js ${target_dir}
# stream.wasm
target_dir=staging/stream.wasm
mkdir -p ${target_dir}
cp ${build_dir}/stream.wasm/{index.html,stream.js,helpers.js} ${target_dir}
cp ${build_dir}/libstream.js ${target_dir}
# wchess.wasm
target_dir=staging/wchess.wasm
mkdir -p ${target_dir}
cp -r ${build_dir}/wchess.wasm/{index.html,css,img,js} ${target_dir}
cp ${build_dir}/wchess.wasm.js ${target_dir}
# whisper.wasm (this will be the main example page)
target_dir=staging
mkdir -p ${target_dir}
cp ${build_dir}/whisper.wasm/{index.html,main.js,helpers.js} ${target_dir}
cp ${build_dir}/libmain.js ${target_dir}
# Copy Cross-Origin Isolation service worker
cp -v examples/coi-serviceworker.js staging/
- name: List files in staging directory (for debugging)
run: |
echo "Files in staging directory:"
find staging -type f | sort
- name: Upload artifact
uses: actions/upload-pages-artifact@v4
with:
path: ./staging
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4

View File

@ -17,7 +17,7 @@ jobs:
node-version: [ 16.x, 18.x ]
steps:
- name: Clone
uses: actions/checkout@v1
uses: actions/checkout@v6
- name: Dependencies
run: |
@ -27,7 +27,7 @@ jobs:
sudo apt-get install libsdl2-dev
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
uses: actions/setup-node@v6
with:
node-version: ${{ matrix.node-version }}
cache: 'npm'

6
.gitignore vendored
View File

@ -14,6 +14,8 @@
build/
build-*/
build_*/
tmp/
# SPM
.build/
@ -49,6 +51,8 @@ extra/bench-gg.txt
models/*.mlmodel
models/*.mlmodelc
models/*.mlpackage
models/*-encoder-openvino.xml
models/*-encoder-openvino-cache/
bindings/java/.gradle/
bindings/java/.idea/
.idea/
@ -59,4 +63,4 @@ cmake-build-debug/
.gradle/
local.properties
.log
.exe
.exe

0
.gitmodules vendored
View File

View File

@ -1,6 +1,6 @@
cmake_minimum_required(VERSION 3.5) # for add_link_options and implicit target directories.
project("whisper.cpp" C CXX)
project("whisper.cpp" VERSION 1.7.4)
project("whisper.cpp" VERSION 1.8.4)
include(CheckIncludeFileCXX)
set(SOVERSION 1)
@ -34,6 +34,9 @@ endif()
if (EMSCRIPTEN)
set(BUILD_SHARED_LIBS_DEFAULT OFF)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
option(WHISPER_WASM_SINGLE_FILE "whisper: embed WASM inside the generated whisper.js" ON)
# TODO: without these, we get the following error:
@ -59,9 +62,6 @@ option(BUILD_SHARED_LIBS "build shared libraries" ${BUILD_SHARED_LIBS_DEFAULT})
# option list
#
# general
option(WHISPER_CCACHE "whisper: use ccache if available" ON)
# debug
option(WHISPER_ALL_WARNINGS "whisper: enable all compiler warnings" ON)
option(WHISPER_ALL_WARNINGS_3RD_PARTY "whisper: enable all compiler warnings in 3rd party libs" OFF)
@ -96,7 +96,6 @@ option(WHISPER_OPENVINO "whisper: support for OpenVINO" OFF)
include(${CMAKE_CURRENT_SOURCE_DIR}/cmake/build-info.cmake)
# override ggml options
set(GGML_CCACHE ${WHISPER_CCACHE})
set(GGML_SANITIZE_THREAD ${WHISPER_SANITIZE_THREAD})
set(GGML_SANITIZE_ADDRESS ${WHISPER_SANITIZE_ADDRESS})
set(GGML_SANITIZE_UNDEFINED ${WHISPER_SANITIZE_UNDEFINED})
@ -121,6 +120,12 @@ whisper_option_depr(WARNING WHISPER_OPENMP GGML_OPENMP)
whisper_option_depr(WARNING WHISPER_RPC GGML_RPC)
whisper_option_depr(WARNING WHISPER_SYCL GGML_SYCL)
whisper_option_depr(WARNING WHISPER_SYCL_F16 GGML_SYCL_F16)
whisper_option_depr(WARNING WHISPER_CCACHE GGML_CCACHE)
if (GGML_CUDA AND NOT MSVC)
#GGML_CUDA enabled, add the necessary compile options -Wno-deprecated-gpu-targets
add_compile_options(-Wno-deprecated-gpu-targets)
endif()
#
# build the library
@ -135,6 +140,22 @@ if (NOT TARGET ggml)
add_library(ggml ALIAS ggml::ggml)
else()
add_subdirectory(ggml)
if(WIN32)
# The following adds a _DISABLE_CONSTEXPR_MUTEX_CONSTRUCTOR macro and is a workaround for
# the Windows C++ standard library which does not support constexpr mutexes.
# From the release notes: https://github.com/microsoft/STL/wiki/Changelog
# Disable constexpr mutex constructor on Windows
# Fixed mutex's constructor to be constexpr. #3824 #4000 #4339
# Note: Programs that aren't following the documented restrictions on binary compatibility may encounter
# null dereferences in mutex machinery. You must follow this rule:
# When you mix binaries built by different supported versions of the toolset, the Redistributable version
# must be at least as new as the latest toolset used by any app component.
# You can define _DISABLE_CONSTEXPR_MUTEX_CONSTRUCTOR as an escape hatch.
#
# Specifically to whisper.cpp this would cause a crash when using the Java bindings.
# resulting in an Invalid memory access error.
target_compile_definitions(ggml-base PRIVATE _DISABLE_CONSTEXPR_MUTEX_CONSTRUCTOR)
endif()
endif()
# ... otherwise assume ggml is added by a parent CMakeLists.txt
endif()
@ -160,6 +181,10 @@ get_directory_property(WHISPER_TRANSIENT_DEFINES COMPILE_DEFINITIONS)
set_target_properties(whisper PROPERTIES PUBLIC_HEADER ${CMAKE_CURRENT_SOURCE_DIR}/include/whisper.h)
install(TARGETS whisper LIBRARY PUBLIC_HEADER)
target_compile_definitions(whisper PRIVATE
WHISPER_VERSION="${PROJECT_VERSION}"
)
configure_package_config_file(
${CMAKE_CURRENT_SOURCE_DIR}/cmake/whisper-config.cmake.in
${CMAKE_CURRENT_BINARY_DIR}/whisper-config.cmake
@ -190,10 +215,44 @@ install(FILES "${CMAKE_CURRENT_BINARY_DIR}/whisper.pc"
#
if (WHISPER_BUILD_TESTS AND NOT CMAKE_JS_VERSION)
#include(CTest)
#add_subdirectory(tests)
include(CTest)
add_subdirectory(tests)
endif ()
if (WHISPER_BUILD_EXAMPLES)
add_subdirectory(examples)
endif()
if (MSVC)
set(MSVC_WARNING_FLAGS
/wd4101 # Unreferenced local variable
/wd4005 # Macro redefinition
/wd4065 # switch statement contains 'default' but no 'case' labels
/wd4267 # Conversion from 'size_t' to a smaller type, possible loss of data
/wd4244 # Conversion from one type to another type, possible loss of data
/wd4805 # Unsafe mix of type
/wd4305 # Truncation from 'type1' to 'type2' (often double to float)
/wd4996 # Function or variable may be unsafe/deprecated
)
function(disable_msvc_warnings target_name)
if(TARGET ${target_name})
target_compile_options(${target_name} PRIVATE ${MSVC_WARNING_FLAGS})
endif()
endfunction()
if (WHISPER_BUILD_EXAMPLES)
disable_msvc_warnings(whisper)
disable_msvc_warnings(common)
disable_msvc_warnings(common-sdl)
disable_msvc_warnings(lsp)
disable_msvc_warnings(wchess-core)
disable_msvc_warnings(whisper-command)
disable_msvc_warnings(whisper-cli)
disable_msvc_warnings(whisper-server)
disable_msvc_warnings(whisper-stream)
disable_msvc_warnings(whisper-talk-llama)
disable_msvc_warnings(whisper-bench)
disable_msvc_warnings(quantize)
disable_msvc_warnings(vad-speech-segments)
endif()
endif()

View File

@ -1,6 +1,6 @@
MIT License
Copyright (c) 2023-2024 The ggml authors
Copyright (c) 2023-2026 The ggml authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@ -4,7 +4,7 @@
.PHONY: build
build:
cmake -B build
cmake -B build $(CMAKE_ARGS)
cmake --build build --config Release
# download a few audio samples into folder "./samples":
@ -41,17 +41,17 @@ samples:
tiny.en tiny base.en base small.en small medium.en medium large-v1 large-v2 large-v3 large-v3-turbo:
bash ./models/download-ggml-model.sh $@
cmake -B build
cmake -B build $(CMAKE_ARGS)
cmake --build build --config Release
@echo ""
@echo "==============================================="
@echo "Running $@ on all samples in ./samples ..."
@echo "==============================================="
@echo ""
@for f in samples/*$(.flac .mp3 .ogg .wav); do \
@for f in samples/*.{flac,mp3,ogg,wav}; do \
echo "----------------------------------------------" ; \
echo "[+] Running $@ on $$f ... (run 'ffplay $$f' to listen)" ; \
echo "----------------------------------------------" ; \
echo "----------------------------------------------" ; \
echo "" ; \
./build/bin/whisper-cli -m models/ggml-$@.bin -f $$f ; \
echo "" ; \

270
README.md
View File

@ -2,15 +2,12 @@
![whisper.cpp](https://user-images.githubusercontent.com/1991296/235238348-05d0f6a4-da44-4900-a1de-d0707e75b763.jpeg)
[![Actions Status](https://github.com/ggerganov/whisper.cpp/workflows/CI/badge.svg)](https://github.com/ggerganov/whisper.cpp/actions)
[![Actions Status](https://github.com/ggml-org/whisper.cpp/workflows/CI/badge.svg)](https://github.com/ggml-org/whisper.cpp/actions)
[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Conan Center](https://shields.io/conan/v/whisper-cpp)](https://conan.io/center/whisper-cpp)
[![npm](https://img.shields.io/npm/v/whisper.cpp.svg)](https://www.npmjs.com/package/whisper.cpp/)
> [!NOTE]
> New maintenance roadmap: https://github.com/ggerganov/whisper.cpp/discussions/2788
Stable: [v1.7.4](https://github.com/ggerganov/whisper.cpp/releases/tag/v1.7.4) / [Roadmap | F.A.Q.](https://github.com/ggerganov/whisper.cpp/discussions/126)
Stable: [v1.8.1](https://github.com/ggml-org/whisper.cpp/releases/tag/v1.8.1) / [Roadmap](https://github.com/orgs/ggml-org/projects/4/)
High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisper) automatic speech recognition (ASR) model:
@ -26,7 +23,9 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisp
- [Efficient GPU support for NVIDIA](#nvidia-gpu-support)
- [OpenVINO Support](#openvino-support)
- [Ascend NPU Support](#ascend-npu-support)
- [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/include/whisper.h)
- [Moore Threads GPU Support](#moore-threads-gpu-support)
- [C-style API](https://github.com/ggml-org/whisper.cpp/blob/master/include/whisper.h)
- [Voice Activity Detection (VAD)](#voice-activity-detection-vad)
Supported platforms:
@ -34,14 +33,14 @@ Supported platforms:
- [x] [iOS](examples/whisper.objc)
- [x] [Android](examples/whisper.android)
- [x] [Java](bindings/java/README.md)
- [x] Linux / [FreeBSD](https://github.com/ggerganov/whisper.cpp/issues/56#issuecomment-1350920264)
- [x] Linux / [FreeBSD](https://github.com/ggml-org/whisper.cpp/issues/56#issuecomment-1350920264)
- [x] [WebAssembly](examples/whisper.wasm)
- [x] Windows ([MSVC](https://github.com/ggerganov/whisper.cpp/blob/master/.github/workflows/build.yml#L117-L144) and [MinGW](https://github.com/ggerganov/whisper.cpp/issues/168)]
- [x] [Raspberry Pi](https://github.com/ggerganov/whisper.cpp/discussions/166)
- [x] [Docker](https://github.com/ggerganov/whisper.cpp/pkgs/container/whisper.cpp)
- [x] Windows ([MSVC](https://github.com/ggml-org/whisper.cpp/blob/master/.github/workflows/build.yml#L117-L144) and [MinGW](https://github.com/ggml-org/whisper.cpp/issues/168))
- [x] [Raspberry Pi](https://github.com/ggml-org/whisper.cpp/discussions/166)
- [x] [Docker](https://github.com/ggml-org/whisper.cpp/pkgs/container/whisper.cpp)
The entire high-level implementation of the model is contained in [whisper.h](include/whisper.h) and [whisper.cpp](src/whisper.cpp).
The rest of the code is part of the [`ggml`](https://github.com/ggerganov/ggml) machine learning library.
The rest of the code is part of the [`ggml`](https://github.com/ggml-org/ggml) machine learning library.
Having such a lightweight implementation of the model allows to easily integrate it in different platforms and applications.
As an example, here is a video of running the model on an iPhone 13 device - fully offline, on-device: [whisper.objc](examples/whisper.objc)
@ -54,14 +53,14 @@ https://user-images.githubusercontent.com/1991296/204038393-2f846eae-c255-4099-a
On Apple Silicon, the inference runs fully on the GPU via Metal:
https://github.com/ggerganov/whisper.cpp/assets/1991296/c82e8f86-60dc-49f2-b048-d2fdbd6b5225
https://github.com/ggml-org/whisper.cpp/assets/1991296/c82e8f86-60dc-49f2-b048-d2fdbd6b5225
## Quick start
First clone the repository:
```bash
git clone https://github.com/ggerganov/whisper.cpp.git
git clone https://github.com/ggml-org/whisper.cpp.git
```
Navigate into the directory:
@ -81,7 +80,7 @@ Now build the [whisper-cli](examples/cli) example and transcribe an audio file l
```bash
# build the project
cmake -B build
cmake --build build --config Release
cmake --build build -j --config Release
# transcribe an audio file
./build/bin/whisper-cli -f samples/jfk.wav
@ -150,8 +149,9 @@ standard cmake setup with:
```bash
# build with GGML_BLAS defined
cmake -B build -DGGML_BLAS=1
cmake --build build --config Release
cmake --build build -j --config Release
./build/bin/whisper-cli [ .. etc .. ]
```
## Quantization
@ -163,7 +163,7 @@ Here are the steps for creating and using a quantized model:
```bash
# quantize a model with Q5_0 method
cmake -B build
cmake --build build --config Release
cmake --build build -j --config Release
./build/bin/quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0
# run the examples as usual, specifying the quantized model file
@ -225,7 +225,7 @@ speed-up - more than x3 faster compared with CPU-only execution. Here are the in
The first run on a device is slow, since the ANE service compiles the Core ML model to some device-specific format.
Next runs are faster.
For more information about the Core ML implementation please refer to PR [#566](https://github.com/ggerganov/whisper.cpp/pull/566).
For more information about the Core ML implementation please refer to PR [#566](https://github.com/ggml-org/whisper.cpp/pull/566).
## OpenVINO support
@ -267,7 +267,7 @@ This can result in significant speedup in encoder performance. Here are the inst
- Build `whisper.cpp` with OpenVINO support:
Download OpenVINO package from [release page](https://github.com/openvinotoolkit/openvino/releases). The recommended version to use is [2023.0.0](https://github.com/openvinotoolkit/openvino/releases/tag/2023.0.0).
Download OpenVINO package from [release page](https://github.com/openvinotoolkit/openvino/releases). The recommended version to use is [2024.6.0](https://github.com/openvinotoolkit/openvino/releases/tag/2024.6.0). Ready to use Binaries of the required libraries can be found in the [OpenVino Archives](https://storage.openvinotoolkit.org/repositories/openvino/packages/2024.6/)
After downloading & extracting package onto your development system, set up required environment by sourcing setupvars script. For example:
@ -310,7 +310,7 @@ This can result in significant speedup in encoder performance. Here are the inst
The first time run on an OpenVINO device is slow, since the OpenVINO framework will compile the IR (Intermediate Representation) model to a device-specific 'blob'. This device-specific blob will get
cached for the next run.
For more information about the OpenVINO implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
For more information about the OpenVINO implementation please refer to PR [#1037](https://github.com/ggml-org/whisper.cpp/pull/1037).
## NVIDIA GPU support
@ -324,6 +324,12 @@ cmake -B build -DGGML_CUDA=1
cmake --build build -j --config Release
```
or for newer NVIDIA GPU's (RTX 5000 series):
```
cmake -B build -DGGML_CUDA=1 -DCMAKE_CUDA_ARCHITECTURES="86"
cmake --build build -j --config Release
```
## Vulkan GPU support
Cross-vendor solution which allows you to accelerate workload on your GPU.
First, make sure your graphics card driver provides support for Vulkan API.
@ -356,6 +362,7 @@ First, check if your Ascend NPU device is supported:
| Ascend NPU | Status |
|:-----------------------------:|:-------:|
| Atlas 300T A2 | Support |
| Atlas 300I Duo | Support |
Then, make sure you have installed the [`CANN toolkit`](https://www.hiascend.com/en/software/cann/community). The latest version of CANN is recommended.
@ -377,6 +384,56 @@ Run the inference examples as usual, for example:
- If you have trouble with Ascend NPU device, please create a issue with **[CANN]** prefix/tag.
- If you run successfully with your Ascend NPU device, please help update the table `Verified devices`.
## Moore Threads GPU support
With Moore Threads cards the processing of the models is done efficiently on the GPU via muBLAS and custom MUSA kernels.
First, make sure you have installed `MUSA SDK rc4.2.0`: https://developer.mthreads.com/sdk/download/musa?equipment=&os=&driverVersion=&version=4.2.0
Now build `whisper.cpp` with MUSA support:
```
cmake -B build -DGGML_MUSA=1
cmake --build build -j --config Release
```
or specify the architecture for your Moore Threads GPU. For example, if you have a MTT S80 GPU, you can specify the architecture as follows:
```
cmake -B build -DGGML_MUSA=1 -DMUSA_ARCHITECTURES="21"
cmake --build build -j --config Release
```
## FFmpeg support (Linux only)
If you want to support more audio formats (such as Opus and AAC), you can turn on the `WHISPER_FFMPEG` build flag to enable FFmpeg integration.
First, you need to install required libraries:
```bash
# Debian/Ubuntu
sudo apt install libavcodec-dev libavformat-dev libavutil-dev
# RHEL/Fedora
sudo dnf install libavcodec-free-devel libavformat-free-devel libavutil-free-devel
```
Then you can build the project as follows:
```bash
cmake -B build -D WHISPER_FFMPEG=yes
cmake --build build
```
Run the following example to confirm it's working:
```bash
# Convert an audio file to Opus format
ffmpeg -i samples/jfk.wav jfk.opus
# Transcribe the audio file
./build/bin/whisper-cli --model models/ggml-base.en.bin --file jfk.opus
```
## Docker
### Prerequisites
@ -386,10 +443,12 @@ Run the inference examples as usual, for example:
### Images
We have two Docker images available for this project:
We have multiple Docker images available for this project:
1. `ghcr.io/ggerganov/whisper.cpp:main`: This image includes the main executable file as well as `curl` and `ffmpeg`. (platforms: `linux/amd64`, `linux/arm64`)
2. `ghcr.io/ggerganov/whisper.cpp:main-cuda`: Same as `main` but compiled with CUDA support. (platforms: `linux/amd64`)
1. `ghcr.io/ggml-org/whisper.cpp:main`: This image includes the main executable file as well as `curl` and `ffmpeg`. (platforms: `linux/amd64`, `linux/arm64`)
2. `ghcr.io/ggml-org/whisper.cpp:main-cuda`: Same as `main` but compiled with CUDA support. (platforms: `linux/amd64`)
3. `ghcr.io/ggml-org/whisper.cpp:main-musa`: Same as `main` but compiled with MUSA support. (platforms: `linux/amd64`)
4. `ghcr.io/ggml-org/whisper.cpp:main-vulkan`: Same as `main` but compiled with Vulkan support. (platforms: `linux/amd64`)
### Usage
@ -398,15 +457,27 @@ We have two Docker images available for this project:
docker run -it --rm \
-v path/to/models:/models \
whisper.cpp:main "./models/download-ggml-model.sh base /models"
# transcribe an audio file
docker run -it --rm \
-v path/to/models:/models \
-v path/to/audios:/audios \
whisper.cpp:main "./main -m /models/ggml-base.bin -f /audios/jfk.wav"
whisper.cpp:main "whisper-cli -m /models/ggml-base.bin -f /audios/jfk.wav"
# transcribe an audio file in samples folder
docker run -it --rm \
-v path/to/models:/models \
whisper.cpp:main "./main -m /models/ggml-base.bin -f ./samples/jfk.wav"
whisper.cpp:main "whisper-cli -m /models/ggml-base.bin -f ./samples/jfk.wav"
# run the web server
docker run -it --rm -p "8080:8080" \
-v path/to/models:/models \
whisper.cpp:main "whisper-server --host 127.0.0.1 -m /models/ggml-base.bin"
# run the bench tool on the small.en model using 4 threads
docker run -it --rm \
-v path/to/models:/models \
whisper.cpp:main "whisper-bench -m /models/ggml-small.en.bin -t 4"
```
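If you use the `main-cuda` image, you will typically also need to expose the GPU to the container. A minimal sketch, assuming the host has an NVIDIA GPU and the NVIDIA Container Toolkit installed (paths are placeholders):
```bash
# transcribe an audio file on the GPU using the CUDA image
docker run -it --rm --gpus all \
  -v path/to/models:/models \
  -v path/to/audios:/audios \
  ghcr.io/ggml-org/whisper.cpp:main-cuda "whisper-cli -m /models/ggml-base.bin -f /audios/jfk.wav"
```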
## Installing with Conan
@ -427,12 +498,12 @@ For detailed instructions on how to use Conan, please refer to the [Conan docume
This is a naive example of performing real-time inference on audio from your microphone.
The [stream](examples/stream) tool samples the audio every half a second and runs the transcription continuously.
More info is available in [issue #10](https://github.com/ggerganov/whisper.cpp/issues/10).
You will need to have [sdl2](https://wiki.libsdl.org/SDL2/Installation) installed for it to work properly.
More info is available in [issue #10](https://github.com/ggml-org/whisper.cpp/issues/10).
You will need to have [sdl2](https://wiki.libsdl.org/SDL2/Installation) installed for it to work properly.
```bash
cmake -B build -DWHISPER_SDL2=ON
cmake --build build --config Release
cmake --build build -j --config Release
./build/bin/whisper-stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000
```
@ -516,7 +587,7 @@ main: processing './samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 pr
## Speaker segmentation via tinydiarize (experimental)
More information about this approach is available here: https://github.com/ggerganov/whisper.cpp/pull/1058
More information about this approach is available here: https://github.com/ggml-org/whisper.cpp/pull/1058
Sample usage:
@ -543,7 +614,7 @@ main: processing './samples/a13.wav' (480000 samples, 30.0 sec), 4 threads, 1 pr
## Karaoke-style movie generation (experimental)
The [whisper-cli](examples/cli) example provides support for output of karaoke-style movies, where the
currently pronounced word is highlighted. Use the `-wts` argument and run the generated bash script.
currently pronounced word is highlighted. Use the `-owts` argument and run the generated bash script.
This requires `ffmpeg` to be installed.
Here are a few _"typical"_ examples:
@ -580,7 +651,7 @@ https://user-images.githubusercontent.com/1991296/199337538-b7b0c7a3-2753-4a88-a
## Video comparison of different models
Use the [scripts/bench-wts.sh](https://github.com/ggerganov/whisper.cpp/blob/master/scripts/bench-wts.sh) script to generate a video in the following format:
Use the [scripts/bench-wts.sh](https://github.com/ggml-org/whisper.cpp/blob/master/scripts/bench-wts.sh) script to generate a video in the following format:
```bash
./scripts/bench-wts.sh samples/jfk.wav
@ -597,7 +668,7 @@ In order to have an objective comparison of the performance of the inference acr
use the [whisper-bench](examples/bench) tool. The tool simply runs the Encoder part of the model and prints how much time it
took to execute it. The results are summarized in the following Github issue:
[Benchmark results](https://github.com/ggerganov/whisper.cpp/issues/89)
[Benchmark results](https://github.com/ggml-org/whisper.cpp/issues/89)
Additionally, a script to run whisper.cpp with different models and audio files is provided: [bench.py](scripts/bench.py).
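For reference, a direct (non-Docker) run of the bench tool might look like the following sketch, assuming the `base.en` model has already been downloaded into `./models`:
```bash
# benchmark the Encoder of the base.en model using 4 threads
./build/bin/whisper-bench -m ./models/ggml-base.en.bin -t 4
```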
@ -624,25 +695,24 @@ You can download the converted models using the [models/download-ggml-model.sh](
or manually from here:
- https://huggingface.co/ggerganov/whisper.cpp
- https://ggml.ggerganov.com
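For example, a sketch of fetching the `base.en` model with the helper script (run from the repository root):
```bash
# download the base.en model into ./models
./models/download-ggml-model.sh base.en
```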
For more details, see the conversion script [models/convert-pt-to-ggml.py](models/convert-pt-to-ggml.py) or [models/README.md](models/README.md).
## [Bindings](https://github.com/ggerganov/whisper.cpp/discussions/categories/bindings)
## [Bindings](https://github.com/ggml-org/whisper.cpp/discussions/categories/bindings)
- [x] Rust: [tazz4843/whisper-rs](https://github.com/tazz4843/whisper-rs) | [#310](https://github.com/ggerganov/whisper.cpp/discussions/310)
- [x] JavaScript: [bindings/javascript](bindings/javascript) | [#309](https://github.com/ggerganov/whisper.cpp/discussions/309)
- [x] Rust: [tazz4843/whisper-rs](https://github.com/tazz4843/whisper-rs) | [#310](https://github.com/ggml-org/whisper.cpp/discussions/310)
- [x] JavaScript: [bindings/javascript](bindings/javascript) | [#309](https://github.com/ggml-org/whisper.cpp/discussions/309)
- React Native (iOS / Android): [whisper.rn](https://github.com/mybigday/whisper.rn)
- [x] Go: [bindings/go](bindings/go) | [#312](https://github.com/ggerganov/whisper.cpp/discussions/312)
- [x] Go: [bindings/go](bindings/go) | [#312](https://github.com/ggml-org/whisper.cpp/discussions/312)
- [x] Java:
- [GiviMAD/whisper-jni](https://github.com/GiviMAD/whisper-jni)
- [x] Ruby: [bindings/ruby](bindings/ruby) | [#507](https://github.com/ggerganov/whisper.cpp/discussions/507)
- [x] Objective-C / Swift: [ggerganov/whisper.spm](https://github.com/ggerganov/whisper.spm) | [#313](https://github.com/ggerganov/whisper.cpp/discussions/313)
- [x] Ruby: [bindings/ruby](bindings/ruby) | [#507](https://github.com/ggml-org/whisper.cpp/discussions/507)
- [x] Objective-C / Swift: [ggml-org/whisper.spm](https://github.com/ggml-org/whisper.spm) | [#313](https://github.com/ggml-org/whisper.cpp/discussions/313)
- [exPHAT/SwiftWhisper](https://github.com/exPHAT/SwiftWhisper)
- [x] .NET: | [#422](https://github.com/ggerganov/whisper.cpp/discussions/422)
- [x] .NET: | [#422](https://github.com/ggml-org/whisper.cpp/discussions/422)
- [sandrohanea/whisper.net](https://github.com/sandrohanea/whisper.net)
- [NickDarvey/whisper](https://github.com/NickDarvey/whisper)
- [x] Python: | [#9](https://github.com/ggerganov/whisper.cpp/issues/9)
- [x] Python: | [#9](https://github.com/ggml-org/whisper.cpp/issues/9)
- [stlukey/whispercpp.py](https://github.com/stlukey/whispercpp.py) (Cython)
- [AIWintermuteAI/whispercpp](https://github.com/AIWintermuteAI/whispercpp) (Updated fork of aarnphm/whispercpp)
- [aarnphm/whispercpp](https://github.com/aarnphm/whispercpp) (Pybind11)
@ -650,6 +720,118 @@ For more details, see the conversion script [models/convert-pt-to-ggml.py](model
- [x] R: [bnosac/audio.whisper](https://github.com/bnosac/audio.whisper)
- [x] Unity: [macoron/whisper.unity](https://github.com/Macoron/whisper.unity)
## XCFramework
The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS,
and macOS. It can be used in Swift projects without the need to compile the
library from source. For example, the v1.7.5 version of the XCFramework can be
used as follows:
```swift
// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription
let package = Package(
name: "Whisper",
targets: [
.executableTarget(
name: "Whisper",
dependencies: [
"WhisperFramework"
]),
.binaryTarget(
name: "WhisperFramework",
url: "https://github.com/ggml-org/whisper.cpp/releases/download/v1.7.5/whisper-v1.7.5-xcframework.zip",
checksum: "c7faeb328620d6012e130f3d705c51a6ea6c995605f2df50f6e1ad68c59c6c4a"
)
]
)
```
## Voice Activity Detection (VAD)
Support for Voice Activity Detection (VAD) can be enabled using the `--vad`
argument to `whisper-cli`. In addition to this option a VAD model is also
required.
The way this works is that the audio samples are first passed through
the VAD model, which detects speech segments. Using this information,
only the speech segments that are detected are extracted from the original audio
input and passed to whisper for processing. This reduces the amount of audio
data that needs to be processed by whisper and can significantly speed up the
transcription process.
The following VAD models are currently supported:
### Silero-VAD
[Silero-vad](https://github.com/snakers4/silero-vad) is a lightweight VAD model
written in Python that is fast and accurate.
Models can be downloaded by running the following command on Linux or macOS:
```console
$ ./models/download-vad-model.sh silero-v6.2.0
Downloading ggml model silero-v6.2.0 from 'https://huggingface.co/ggml-org/whisper-vad' ...
ggml-silero-v6.2.0.bin 100%[==============================================>] 864.35K --.-KB/s in 0.04s
Done! Model 'silero-v6.2.0' saved in '/path/models/ggml-silero-v6.2.0.bin'
You can now use it like this:
$ ./build/bin/whisper-cli -vm /path/models/ggml-silero-v6.2.0.bin --vad -f samples/jfk.wav -m models/ggml-base.en.bin
```
And the following command on Windows:
```console
> .\models\download-vad-model.cmd silero-v6.2.0
Downloading vad model silero-v6.2.0...
Done! Model silero-v6.2.0 saved in C:\Users\danie\work\ai\whisper.cpp\ggml-silero-v6.2.0.bin
You can now use it like this:
C:\path\build\bin\Release\whisper-cli.exe -vm C:\path\ggml-silero-v6.2.0.bin --vad -m models/ggml-base.en.bin -f samples\jfk.wav
```
To see a list of all available models, run the above commands without any
arguments.
This model can also be converted manually to ggml using the following command:
```console
$ python3 -m venv venv && source venv/bin/activate
(venv) $ pip install silero-vad
(venv) $ python models/convert-silero-vad-to-ggml.py --output models/silero.bin
Saving GGML Silero-VAD model to models/silero-v6.2.0-ggml.bin
```
And it can then be used with whisper as follows:
```console
$ ./build/bin/whisper-cli \
--file ./samples/jfk.wav \
--model ./models/ggml-base.en.bin \
--vad \
--vad-model ./models/silero-v6.2.0-ggml.bin
```
### VAD Options
* `--vad-threshold`: Threshold probability for speech detection. A probability
for a speech segment/frame above this threshold will be considered as speech.
* `--vad-min-speech-duration-ms`: Minimum speech duration in milliseconds. Speech
segments shorter than this value will be discarded to filter out brief noise or
false positives.
* `--vad-min-silence-duration-ms`: Minimum silence duration in milliseconds. Silence
periods must be at least this long to end a speech segment. Shorter silence
periods will be ignored and included as part of the speech.
* `--vad-max-speech-duration-s`: Maximum speech duration in seconds. Speech segments
longer than this will be automatically split into multiple segments at silence
points exceeding 98ms to prevent excessively long segments.
* `--vad-speech-pad-ms`: Speech padding in milliseconds. Adds this amount of padding
before and after each detected speech segment to avoid cutting off speech edges.
* `--vad-samples-overlap`: Amount of audio to extend from each speech segment into
the next one, in seconds (e.g., 0.10 = 100ms overlap). This ensures speech isn't
cut off abruptly between segments when they're concatenated together.
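Combining several of the options above, a typical invocation might look like the following sketch (the threshold and duration values are illustrative, not tuned recommendations):
```console
$ ./build/bin/whisper-cli \
    --file ./samples/jfk.wav \
    --model ./models/ggml-base.en.bin \
    --vad \
    --vad-model ./models/ggml-silero-v6.2.0.bin \
    --vad-threshold 0.5 \
    --vad-min-speech-duration-ms 250 \
    --vad-min-silence-duration-ms 100 \
    --vad-speech-pad-ms 30
```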
## Examples
There are various examples of using the library for different projects in the [examples](examples) folder.
@ -668,13 +850,13 @@ Some of the examples are even ported to run in the browser using WebAssembly. Ch
| [whisper.android](examples/whisper.android) | | Android mobile application using whisper.cpp |
| [whisper.nvim](examples/whisper.nvim) | | Speech-to-text plugin for Neovim |
| [generate-karaoke.sh](examples/generate-karaoke.sh) | | Helper script to easily [generate a karaoke video](https://youtu.be/uj7hVta4blM) of raw audio capture |
| [livestream.sh](examples/livestream.sh) | | [Livestream audio transcription](https://github.com/ggerganov/whisper.cpp/issues/185) |
| [livestream.sh](examples/livestream.sh) | | [Livestream audio transcription](https://github.com/ggml-org/whisper.cpp/issues/185) |
| [yt-wsp.sh](examples/yt-wsp.sh) | | Download + transcribe and/or translate any VOD [(original)](https://gist.github.com/DaniruKun/96f763ec1a037cc92fe1a059b643b818) |
| [wchess](examples/wchess) | [wchess.wasm](examples/wchess) | Voice-controlled chess |
## [Discussions](https://github.com/ggerganov/whisper.cpp/discussions)
## [Discussions](https://github.com/ggml-org/whisper.cpp/discussions)
If you have any kind of feedback about this project feel free to use the Discussions section and open a new topic.
You can use the [Show and tell](https://github.com/ggerganov/whisper.cpp/discussions/categories/show-and-tell) category
You can use the [Show and tell](https://github.com/ggml-org/whisper.cpp/discussions/categories/show-and-tell) category
to share your own projects that use `whisper.cpp`. If you have a question, make sure to check the
[Frequently asked questions (#126)](https://github.com/ggerganov/whisper.cpp/discussions/126) discussion.
[Frequently asked questions (#126)](https://github.com/ggml-org/whisper.cpp/discussions/126) discussion.

View File

@ -1,249 +1,249 @@
# whisper.cpp for SYCL
[Background](#background)
[OS](#os)
[Intel GPU](#intel-gpu)
[Linux](#linux)
[Environment Variable](#environment-variable)
[Known Issue](#known-issue)
[Todo](#todo)
## Background
SYCL is a higher-level programming model to improve programming productivity on various hardware accelerators—such as CPUs, GPUs, and FPGAs. It is a single-source embedded domain-specific language based on pure C++17.
oneAPI is a specification that is open and standards-based, supporting multiple architecture types including but not limited to GPU, CPU, and FPGA. The spec has both direct programming and API-based programming paradigms.
Intel uses SYCL as a direct programming language to support CPUs, GPUs, and FPGAs.
To avoid re-inventing the wheel, this code refers to other code paths in llama.cpp (like OpenBLAS, cuBLAS, CLBlast). We use the open-source tool [SYCLomatic](https://github.com/oneapi-src/SYCLomatic) (commercial release: [Intel® DPC++ Compatibility Tool](https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compatibility-tool.html)) to migrate to SYCL.
whisper.cpp for SYCL is used to support Intel GPUs.
For Intel CPUs, we recommend using whisper.cpp for x86 (Intel MKL build).
## OS
|OS|Status|Verified|
|-|-|-|
|Linux|Support|Ubuntu 22.04|
|Windows|Ongoing| |
## Intel GPU
|Intel GPU| Status | Verified Model|
|-|-|-|
|Intel Data Center Max Series| Support| Max 1550|
|Intel Data Center Flex Series| Support| Flex 170|
|Intel Arc Series| Support| Arc 770|
|Intel built-in Arc GPU| Support| built-in Arc GPU in Meteor Lake|
|Intel iGPU| Support| iGPU in i5-1250P, i7-1165G7|
## Linux
### Setup Environment
1. Install Intel GPU driver.
a. Please install Intel GPU driver by official guide: [Install GPU Drivers](https://dgpu-docs.intel.com/driver/installation.html).
Note: for iGPU, please install the client GPU driver.
b. Add user to group: video, render.
```
sudo usermod -aG render username
sudo usermod -aG video username
```
Note: re-login to enable it.
c. Check
```
sudo apt install clinfo
sudo clinfo -l
```
Output (example):
```
Platform #0: Intel(R) OpenCL Graphics
`-- Device #0: Intel(R) Arc(TM) A770 Graphics
Platform #0: Intel(R) OpenCL HD Graphics
`-- Device #0: Intel(R) Iris(R) Xe Graphics [0x9a49]
```
2. Install Intel® oneAPI Base toolkit.
a. Please follow the procedure in [Get the Intel® oneAPI Base Toolkit ](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html).
We recommend installing to the default folder: **/opt/intel/oneapi**.
The following guide uses the default folder as an example. If you use a different folder, please adjust the paths in this guide accordingly.
b. Check
```
source /opt/intel/oneapi/setvars.sh
sycl-ls
```
There should be one or more Level-Zero devices, like **[ext_oneapi_level_zero:gpu:0]**.
Output (example):
```
[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2 [2023.16.10.0.17_160000]
[opencl:cpu:1] Intel(R) OpenCL, 13th Gen Intel(R) Core(TM) i7-13700K OpenCL 3.0 (Build 0) [2023.16.10.0.17_160000]
[opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics OpenCL 3.0 NEO [23.30.26918.50]
[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Arc(TM) A770 Graphics 1.3 [1.3.26918]
```
3. Build locally:
```
mkdir -p build
cd build
source /opt/intel/oneapi/setvars.sh
#for FP16
#cmake .. -DWHISPER_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DWHISPER_SYCL_F16=ON
#for FP32
cmake .. -DWHISPER_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
#build example/main only
#cmake --build . --config Release --target main
#build all binary
cmake --build . --config Release -v
```
or
```
./examples/sycl/build.sh
```
Note:
- By default, it builds all binaries, which takes more time. To reduce build time, we recommend building only **example/main**.
### Run
1. Put the model file in the **models** folder
2. Enable oneAPI running environment
```
source /opt/intel/oneapi/setvars.sh
```
3. List device ID
Run without parameter:
```
./build/bin/ls-sycl-device
or
./build/bin/main
```
Check the ID in startup log, like:
```
found 4 SYCL devices:
Device 0: Intel(R) Arc(TM) A770 Graphics, compute capability 1.3,
max compute_units 512, max work group size 1024, max sub group size 32, global mem size 16225243136
Device 1: Intel(R) FPGA Emulation Device, compute capability 1.2,
max compute_units 24, max work group size 67108864, max sub group size 64, global mem size 67065057280
Device 2: 13th Gen Intel(R) Core(TM) i7-13700K, compute capability 3.0,
max compute_units 24, max work group size 8192, max sub group size 64, global mem size 67065057280
Device 3: Intel(R) Arc(TM) A770 Graphics, compute capability 3.0,
max compute_units 512, max work group size 1024, max sub group size 32, global mem size 16225243136
```
|Attribute|Note|
|-|-|
|compute capability 1.3|Level-Zero runtime, recommended|
|compute capability 3.0|OpenCL runtime, slower than Level-Zero in most cases|
4. Set device ID and execute whisper.cpp
Set device ID = 0 by **GGML_SYCL_DEVICE=0**
```
GGML_SYCL_DEVICE=0 ./build/bin/main -m models/ggml-base.en.bin -f samples/jfk.wav
```
or run by script:
```
./examples/sycl/run_whisper.sh
```
5. Check the device ID in output
Like:
```
Using device **0** (Intel(R) Arc(TM) A770 Graphics) as main device
```
## Environment Variable
#### Build
|Name|Value|Function|
|-|-|-|
|WHISPER_SYCL|ON (mandatory)|Enable build with SYCL code path. <br>For FP32/FP16, WHISPER_SYCL=ON is mandatory.|
|WHISPER_SYCL_F16|ON (optional)|Enable FP16 build with SYCL code path.For FP32, do not set it.|
|CMAKE_C_COMPILER|icx|Use icx compiler for SYCL code path|
|CMAKE_CXX_COMPILER|icpx|use icpx for SYCL code path|
#### Running
|Name|Value|Function|
|-|-|-|
|GGML_SYCL_DEVICE|0 (default) or 1|Set the device ID to use. Check the device IDs in the output of a default run|
|GGML_SYCL_DEBUG|0 (default) or 1|Enable debug logging via the GGML_SYCL_DEBUG macro|
## Known Issue
- Error: `error while loading shared libraries: libsycl.so.7: cannot open shared object file: No such file or directory`.
The oneAPI runtime environment is not enabled.
Install the oneAPI Base Toolkit and enable it with: `source /opt/intel/oneapi/setvars.sh`.
- Hang during startup
llama.cpp uses mmap by default to read the model file and copy it to the GPU. On some systems, the memcpy can misbehave and block.
Solution: add **--no-mmap**.
## Todo
- Support building on Windows.
- Support multiple cards.

View File

@ -15,7 +15,7 @@ BUILD_DIR := build_go
MODELS_DIR := models
EXAMPLES_DIR := $(wildcard examples/*)
INCLUDE_PATH := $(abspath ../../include):$(abspath ../../ggml/include)
LIBRARY_PATH := $(abspath ../../${BUILD_DIR}/src:$(abspath ../../${BUILD_DIR}/ggml/src))
LIBRARY_PATH := $(abspath ../../${BUILD_DIR}/src):$(abspath ../../${BUILD_DIR}/ggml/src)
ifeq ($(GGML_CUDA),1)
LIBRARY_PATH := $(LIBRARY_PATH):$(CUDA_PATH)/targets/$(UNAME_M)-linux/lib/
@ -23,7 +23,8 @@ ifeq ($(GGML_CUDA),1)
endif
ifeq ($(UNAME_S),Darwin)
EXT_LDFLAGS := -framework Foundation -framework Metal -framework MetalKit
LIBRARY_PATH := $(LIBRARY_PATH):$(abspath ../../${BUILD_DIR}/ggml/src/ggml-blas):$(abspath ../../${BUILD_DIR}/ggml/src/ggml-metal)
EXT_LDFLAGS := -framework Foundation -framework Metal -framework MetalKit -lggml-metal -lggml-blas
endif
all: clean whisper examples

View File

@ -51,7 +51,7 @@ func main() {
In order to build, you need to have the Go compiler installed. You can get it from [here](https://golang.org/dl/). Run the tests with:
```bash
git clone https://github.com/ggerganov/whisper.cpp.git
git clone https://github.com/ggml-org/whisper.cpp.git
cd whisper.cpp/bindings/go
make test
```
@ -98,7 +98,7 @@ The API Documentation:
Getting help:
* Follow the discussion for the go bindings [here](https://github.com/ggerganov/whisper.cpp/discussions/312)
* Follow the discussion for the go bindings [here](https://github.com/ggml-org/whisper.cpp/discussions/312)
## License

View File

@ -1,5 +1,5 @@
/*
github.com/ggerganov/whisper.cpp/bindings/go
github.com/ggml-org/whisper.cpp/bindings/go
provides a speech-to-text service bindings for the Go programming language.
*/
package whisper

View File

@ -282,13 +282,20 @@ func Download(ctx context.Context, p io.Writer, model, out string) (string, erro
default:
// Read body
n, err := resp.Body.Read(data)
if n > 0 {
if m, err := w.Write(data[:n]); err != nil {
return path, err
} else {
count += int64(m)
}
}
if err != nil {
DownloadReport(p, pct, count, resp.ContentLength)
if err == io.EOF {
DownloadReport(p, pct, count, resp.ContentLength)
return path, nil
}
return path, err
} else if m, err := w.Write(data[:n]); err != nil {
return path, err
} else {
count += int64(m)
}
}
}

View File

@ -47,6 +47,39 @@ func (p *Params) SetPrintTimestamps(v bool) {
p.print_timestamps = toBool(v)
}
// Voice Activity Detection (VAD)
func (p *Params) SetVAD(v bool) {
p.vad = toBool(v)
}
func (p *Params) SetVADModelPath(path string) {
p.vad_model_path = C.CString(path)
}
func (p *Params) SetVADThreshold(t float32) {
p.vad_params.threshold = C.float(t)
}
func (p *Params) SetVADMinSpeechMs(ms int) {
p.vad_params.min_speech_duration_ms = C.int(ms)
}
func (p *Params) SetVADMinSilenceMs(ms int) {
p.vad_params.min_silence_duration_ms = C.int(ms)
}
func (p *Params) SetVADMaxSpeechSec(s float32) {
p.vad_params.max_speech_duration_s = C.float(s)
}
func (p *Params) SetVADSpeechPadMs(ms int) {
p.vad_params.speech_pad_ms = C.int(ms)
}
func (p *Params) SetVADSamplesOverlap(sec float32) {
p.vad_params.samples_overlap = C.float(sec)
}
// Set language id
func (p *Params) SetLanguage(lang int) error {
if lang == -1 {
@ -146,6 +179,10 @@ func (p *Params) SetInitialPrompt(prompt string) {
p.initial_prompt = C.CString(prompt)
}
func (p *Params) SetCarryInitialPrompt(v bool) {
p.carry_initial_prompt = toBool(v)
}
///////////////////////////////////////////////////////////////////////////////
// PRIVATE METHODS
@ -199,6 +236,9 @@ func (p *Params) String() string {
if p.token_timestamps {
str += " token_timestamps"
}
if p.carry_initial_prompt {
str += " carry_initial_prompt"
}
return str + ">"
}

View File

@ -80,6 +80,39 @@ func (context *context) SetTranslate(v bool) {
context.params.SetTranslate(v)
}
// Voice Activity Detection (VAD)
func (context *context) SetVAD(v bool) {
context.params.SetVAD(v)
}
func (context *context) SetVADModelPath(path string) {
context.params.SetVADModelPath(path)
}
func (context *context) SetVADThreshold(t float32) {
context.params.SetVADThreshold(t)
}
func (context *context) SetVADMinSpeechMs(ms int) {
context.params.SetVADMinSpeechMs(ms)
}
func (context *context) SetVADMinSilenceMs(ms int) {
context.params.SetVADMinSilenceMs(ms)
}
func (context *context) SetVADMaxSpeechSec(s float32) {
context.params.SetVADMaxSpeechSec(s)
}
func (context *context) SetVADSpeechPadMs(ms int) {
context.params.SetVADSpeechPadMs(ms)
}
func (context *context) SetVADSamplesOverlap(sec float32) {
context.params.SetVADSamplesOverlap(sec)
}
func (context *context) SetSplitOnWord(v bool) {
context.params.SetSplitOnWord(v)
}
@ -237,6 +270,9 @@ func (context *context) Process(
return err
}
// Reset n so that more Segments can be available within NextSegment call
context.n = 0
// Return success
return nil
}

View File

@ -60,6 +60,15 @@ type Context interface {
SetTemperature(t float32) // Set temperature
SetTemperatureFallback(t float32) // Set temperature incrementation
SetVAD(v bool)
SetVADModelPath(path string)
SetVADThreshold(t float32)
SetVADMinSpeechMs(ms int)
SetVADMinSilenceMs(ms int)
SetVADMaxSpeechSec(s float32)
SetVADSpeechPadMs(ms int)
SetVADSamplesOverlap(sec float32)
// Process mono audio data and return any errors.
// If defined, newly generated segments are passed to the
// callback function during processing.

View File

@ -9,7 +9,9 @@ import (
// CGO
/*
#cgo LDFLAGS: -lwhisper -lggml -lggml-base -lggml-cpu -lm -lstdc++ -fopenmp
#cgo LDFLAGS: -lwhisper -lggml -lggml-base -lggml-cpu -lm -lstdc++
#cgo linux LDFLAGS: -fopenmp
#cgo darwin LDFLAGS: -lggml-metal -lggml-blas
#cgo darwin LDFLAGS: -framework Accelerate -framework Metal -framework Foundation -framework CoreGraphics
#include <whisper.h>
#include <stdlib.h>

View File

@ -23,26 +23,42 @@ import io.github.ggerganov.whispercpp.WhisperCpp;
public class Example {
public static void main(String[] args) {
WhisperCpp whisper = new WhisperCpp();
// By default, models are loaded from ~/.cache/whisper/ and are usually named "ggml-${name}.bin"
// or you can provide the absolute path to the model file.
long context = whisper.initContext("base.en");
try {
var whisperParams = whisper.getFullDefaultParams(WhisperSamplingStrategy.WHISPER_SAMPLING_GREEDY);
// custom configuration if required
whisperParams.temperature_inc = 0f;
// By default, models are loaded from ~/.cache/whisper/ and are usually named "ggml-${name}.bin"
// or you can provide the absolute path to the model file.
whisper.initContext("../ggml-base.en.bin");
WhisperFullParams.ByValue whisperParams = whisper.getFullDefaultParams(WhisperSamplingStrategy.WHISPER_SAMPLING_BEAM_SEARCH);
var samples = readAudio(); // divide each value by 32767.0f
whisper.fullTranscribe(whisperParams, samples);
// custom configuration if required
//whisperParams.n_threads = 8;
whisperParams.temperature = 0.0f;
whisperParams.temperature_inc = 0.2f;
//whisperParams.language = "en";
float[] samples = readAudio(); // divide each value by 32767.0f
List<WhisperSegment> whisperSegmentList = whisper.fullTranscribeWithTime(whisperParams, samples);
int segmentCount = whisper.getTextSegmentCount(context);
for (int i = 0; i < segmentCount; i++) {
String text = whisper.getTextSegment(context, i);
System.out.println(segment.getText());
for (WhisperSegment whisperSegment : whisperSegmentList) {
long start = whisperSegment.getStart();
long end = whisperSegment.getEnd();
String text = whisperSegment.getSentence();
System.out.println("start: "+start);
System.out.println("end: "+end);
System.out.println("text: "+text);
}
} catch (IOException e) {
e.printStackTrace();
} finally {
whisper.freeContext(context);
whisper.close();
}
}
}
```
@ -52,7 +68,7 @@ public class Example {
In order to build, you need to have JDK 8 or higher installed. Run the tests with:
```bash
git clone https://github.com/ggerganov/whisper.cpp.git
git clone https://github.com/ggml-org/whisper.cpp.git
cd whisper.cpp/bindings/java
./gradlew build

View File

@ -27,23 +27,41 @@ sourceSets {
tasks.register('copyLibwhisperDynlib', Copy) {
from '../../build/src'
include 'libwhisper.dylib'
into 'build/generated/resources/main/darwin'
into 'build/generated/resources/main'
}
tasks.register('copyLibwhisperSo', Copy) {
from '../../build/src'
include 'libwhisper.so'
into 'build/generated/resources/main/linux-x86-64'
into 'build/generated/resources/main'
}
tasks.register('copyWhisperDll', Copy) {
from '../../build/Release'
tasks.register('copyWhisperDLL', Copy) {
from '../../build/bin/Release'
include 'whisper.dll'
into 'build/generated/resources/main/windows-x86-64'
into 'build/generated/resources/main'
}
tasks.register('copyGGML_BASE_DLL', Copy) {
from '../../build/bin/Release'
include 'ggml-base.dll'
into 'build/generated/resources/main'
}
tasks.register('copyGGML_DLL', Copy) {
from '../../build/bin/Release'
include 'ggml.dll'
into 'build/generated/resources/main'
}
tasks.register('copyGGML_CPU_DLL', Copy) {
from '../../build/bin/Release'
include 'ggml-cpu.dll'
into 'build/generated/resources/main'
}
tasks.register('copyLibs') {
dependsOn copyLibwhisperDynlib, copyLibwhisperSo, copyWhisperDll
dependsOn copyLibwhisperDynlib, copyLibwhisperSo, copyWhisperDLL, copyGGML_BASE_DLL, copyGGML_DLL, copyGGML_CPU_DLL
}
test {

View File

@ -168,23 +168,26 @@ public class WhisperCpp implements AutoCloseable {
return str.toString().trim();
}
public List<WhisperSegment> fullTranscribeWithTime(WhisperFullParams whisperParams, float[] audioData) throws IOException {
/**
* Full transcribe with time list.
*
* @param whisperParams the whisper params
* @param audioData the audio data
* @return the list
* @throws IOException the io exception
*/
public List<WhisperSegment> fullTranscribeWithTime(WhisperFullParams.ByValue whisperParams, float[] audioData) throws IOException {
if (ctx == null) {
throw new IllegalStateException("Model not initialised");
}
WhisperFullParams.ByValue valueParams = new WhisperFullParams.ByValue(
lib.whisper_full_default_params_by_ref(WhisperSamplingStrategy.WHISPER_SAMPLING_BEAM_SEARCH.ordinal()));
valueParams.read();
if (lib.whisper_full(ctx, valueParams, audioData, audioData.length) != 0) {
if (lib.whisper_full(ctx, whisperParams, audioData, audioData.length) != 0) {
throw new IOException("Failed to process audio");
}
int nSegments = lib.whisper_full_n_segments(ctx);
List<WhisperSegment> segments= new ArrayList<>(nSegments);
for (int i = 0; i < nSegments; i++) {
long t0 = lib.whisper_full_get_segment_t0(ctx, i);
String text = lib.whisper_full_get_segment_text(ctx, i);

View File

@ -9,6 +9,7 @@ import io.github.ggerganov.whispercpp.params.WhisperContextParams;
import io.github.ggerganov.whispercpp.params.WhisperFullParams;
public interface WhisperCppJnaLibrary extends Library {
WhisperCppJnaLibrary instance = Native.load("whisper", WhisperCppJnaLibrary.class);
String whisper_print_system_info();

View File

@ -20,7 +20,7 @@ public class WhisperContextParams extends Structure {
/** Use GPU for inference (default = true) */
public CBool use_gpu;
/** Use flash attention (default = false) */
/** Use flash attention (default = true) */
public CBool flash_attn;
/** CUDA device to use (default = 0) */

View File

@ -157,6 +157,8 @@ public class WhisperFullParams extends Structure {
/** Tokens to provide to the whisper decoder as an initial prompt.
* These are prepended to any existing text context from a previous call. */
public String initial_prompt;
/** Always prepend initial_prompt for every decode chunk. */
public CBool carry_initial_prompt;
/** Prompt tokens. (int*) */
public Pointer prompt_tokens;
@ -336,8 +338,8 @@ public class WhisperFullParams extends Structure {
"no_timestamps", "single_segment", "print_special",
"print_progress", "print_realtime", "print_timestamps",
"token_timestamps", "thold_pt", "thold_ptsum", "max_len",
"split_on_word", "max_tokens", "debug_mode", "audio_ctx",
"tdrz_enable", "suppress_regex", "initial_prompt",
"split_on_word", "max_tokens", "debug_mode", "audio_ctx",
"tdrz_enable", "suppress_regex", "initial_prompt", "carry_initial_prompt",
"prompt_tokens", "prompt_n_tokens", "language", "detect_language",
"suppress_blank", "suppress_nst", "temperature",
"max_initial_ts", "length_penalty", "temperature_inc",

View File

@ -4,6 +4,7 @@ import static org.junit.jupiter.api.Assertions.*;
import io.github.ggerganov.whispercpp.bean.WhisperSegment;
import io.github.ggerganov.whispercpp.params.CBool;
import io.github.ggerganov.whispercpp.params.WhisperContextParams;
import io.github.ggerganov.whispercpp.params.WhisperFullParams;
import io.github.ggerganov.whispercpp.params.WhisperSamplingStrategy;
import org.junit.jupiter.api.BeforeAll;
@ -25,7 +26,9 @@ class WhisperCppTest {
//String modelName = "../../models/ggml-tiny.bin";
String modelName = "../../models/ggml-tiny.en.bin";
try {
whisper.initContext(modelName);
WhisperContextParams.ByValue contextParams = whisper.getContextDefaultParams();
contextParams.useFlashAttn(false); // Disable flash attention
whisper.initContext(modelName, contextParams);
//whisper.getFullDefaultParams(WhisperSamplingStrategy.WHISPER_SAMPLING_GREEDY);
//whisper.getJavaDefaultParams(WhisperSamplingStrategy.WHISPER_SAMPLING_BEAM_SEARCH);
modelInitialised = true;
@ -118,7 +121,7 @@ class WhisperCppTest {
float[] floats = new float[b.length / 2];
//WhisperFullParams params = whisper.getFullDefaultParams(WhisperSamplingStrategy.WHISPER_SAMPLING_GREEDY);
WhisperFullParams params = whisper.getFullDefaultParams(WhisperSamplingStrategy.WHISPER_SAMPLING_BEAM_SEARCH);
WhisperFullParams.ByValue params = whisper.getFullDefaultParams(WhisperSamplingStrategy.WHISPER_SAMPLING_BEAM_SEARCH);
params.setProgressCallback((ctx, state, progress, user_data) -> System.out.println("progress: " + progress));
params.print_progress = CBool.FALSE;
//params.initial_prompt = "and so my fellow Americans um, like";

View File

@ -1,6 +1,6 @@
{
"name": "whisper.cpp",
"version": "1.7.4",
"version": "1.8.4",
"description": "Whisper speech recognition",
"main": "whisper.js",
"scripts": {

View File

@ -1,3 +1,9 @@
LICENSE
pkg/
lib/whisper.*
ext/examples/
ext/ggml/
ext/include/
ext/scripts/
ext/src/
test/fixtures/

View File

@ -5,17 +5,6 @@ whispercpp
Ruby bindings for [whisper.cpp][], an interface to an automatic speech recognition model.
Installation
------------
Install the gem and add to the application's Gemfile by executing:
$ bundle add whispercpp
If bundler is not being used to manage dependencies, install the gem by executing:
$ gem install whispercpp
Usage
-----
@ -31,7 +20,8 @@ params = Whisper::Params.new(
max_text_tokens: 300,
translate: true,
print_timestamps: false,
initial_prompt: "Initial prompt here."
initial_prompt: "Initial prompt here.",
carry_initial_prompt: true
)
whisper.transcribe("path/to/audio.wav", params) do |whole_text|
@ -44,17 +34,6 @@ end
Some models are prepared up-front:
```ruby
base_en = Whisper::Model.pre_converted_models["base.en"]
whisper = Whisper::Context.new(base_en)
```
The first time you use a model, it is downloaded automatically. After that, the cached file is used. To clear the cache, call `#clear_cache`:
```ruby
Whisper::Model.pre_converted_models["base"].clear_cache
```
You also can use shorthand for pre-converted models:
```ruby
@ -79,6 +58,19 @@ puts Whisper::Model.pre_converted_models.keys
# :
```
You can also retrieve each model:
```ruby
base_en = Whisper::Model.pre_converted_models["base.en"]
whisper = Whisper::Context.new(base_en)
```
The first time you use a model, it is downloaded automatically. After that, the cached file is used. To clear the cache, call `#clear_cache`:
```ruby
Whisper::Model.pre_converted_models["base"].clear_cache
```
You can also use local model files you prepared:
```ruby
@ -90,7 +82,8 @@ Or, you can download model files:
```ruby
whisper = Whisper::Context.new("https://example.net/uri/of/your/model.bin")
# Or
whisper = Whisper::Context.new(URI("https://example.net/uri/of/your/model.bin"))
uri = URI("https://example.net/uri/of/your/model.bin")
whisper = Whisper::Context.new(uri)
```
See [models][] page for details.
@ -99,9 +92,118 @@ See [models][] page for details.
Currently, whisper.cpp accepts only 16-bit WAV files.
### Voice Activity Detection (VAD) ###
Support for Voice Activity Detection (VAD) can be enabled by setting `Whisper::Params`'s `vad` argument to `true` and specifying a VAD model:
```ruby
Whisper::Params.new(
vad: true,
vad_model_path: "silero-v6.2.0",
# other arguments...
)
```
When you pass the model name (`"silero-v6.2.0"`) or URI (`https://huggingface.co/ggml-org/whisper-vad/resolve/main/ggml-silero-v6.2.0.bin`), it will be downloaded automatically.
Currently, "silero-v6.2.0" is registered as pre-converted model like ASR models. You also specify file path or URI of model.
If you need configure VAD behavior, pass params for that:
```ruby
Whisper::Params.new(
vad: true,
vad_model_path: "silero-v6.2.0",
vad_params: Whisper::VAD::Params.new(
threshold: 1.0, # defaults to 0.5
min_speech_duration_ms: 500, # defaults to 250
min_silence_duration_ms: 200, # defaults to 100
max_speech_duration_s: 30000, # default is FLT_MAX,
speech_pad_ms: 50, # defaults to 30
samples_overlap: 0.5 # defaults to 0.1
),
# other arguments...
)
```
For details on VAD, see [whisper.cpp's README](https://github.com/ggml-org/whisper.cpp?tab=readme-ov-file#voice-activity-detection-vad).
### Output ###
whispercpp supports SRT and WebVTT output:
```ruby
puts whisper.transcribe("path/to/audio.wav", Whisper::Params.new).to_webvtt
# =>
WEBVTT
1
00:00:00.000 --> 00:00:03.860
My thought I have nobody by a beauty and will as you poured.
2
00:00:03.860 --> 00:00:09.840
Mr. Rochester is sub in that so-don't find simplest, and devoted about, to let might in
3
00:00:09.840 --> 00:00:09.940
a
```
You may call `#to_srt`, too.
Installation
------------
Install the gem and add to the application's Gemfile by executing:
$ bundle add whispercpp
If bundler is not being used to manage dependencies, install the gem by executing:
$ gem install whispercpp
You can pass build options for whisper.cpp, for instance:
$ bundle config build.whispercpp --enable-ggml-cuda
or,
$ gem install whispercpp -- --enable-ggml-cuda
See whisper.cpp's [README](https://github.com/ggml-org/whisper.cpp/blob/master/README.md) for available options. You need to convert the options present in the README to Ruby-style options, for example:
Boolean options:
* `-DGGML_BLAS=1` -> `--enable-ggml-blas`
* `-DWHISPER_COREML=OFF` -> `--disable-whisper-coreml`
Argument options:
* `-DGGML_CUDA_COMPRESSION_MODE=size` -> `--ggml-cuda-compression-mode=size`
Combination:
* `-DGGML_CUDA=1 -DCMAKE_CUDA_ARCHITECTURES="86"` -> `--enable-ggml-cuda --cmake-cuda-architectures="86"`
For boolean options like `GGML_CUDA`, the README says `-DGGML_CUDA=1`. You need to strip `-D`, prepend `--enable-` for `1` or `ON` (`--disable-` for `0` or `OFF`), and make it kebab-case: `--enable-ggml-cuda`.
For options which require arguments like `CMAKE_CUDA_ARCHITECTURES`, the README says `-DCMAKE_CUDA_ARCHITECTURES="86"`. You need to strip `-D`, prepend `--`, make it kebab-case, and append `=` followed by the argument: `--cmake-cuda-architectures="86"`.
API
---
### Transcription ###
By default, `Whisper::Context#transcribe` works in a single thread. You can make it work in parallel by passing `n_processors` option:
```ruby
whisper.transcribe("path/to/audio.wav", params, n_processors: Etc.nprocessors)
```
Note that transcription may occasionally be less accurate when running in parallel.
If `n_processors` is greater than 1, you cannot set any callbacks, including `new_segment_callback`, `progress_callback`, `encoder_begin_callback`, `abort_callback`, and the `log_callback` set by `Whisper.log_set`.
### Segments ###
Once `Whisper::Context#transcribe` called, you can retrieve segments by `#each_segment`:
@ -123,7 +225,7 @@ whisper
ed: format_time(segment.end_time),
text: segment.text
}
line << " (speaker turned)" if segment.speaker_next_turn?
line << " (speaker turned)" if segment.speaker_turn_next?
puts line
end
@ -139,7 +241,7 @@ params.on_new_segment do |segment|
ed: format_time(segment.end_time),
text: segment.text
}
line << " (speaker turned)" if segment.speaker_next_turn?
line << " (speaker turned)" if segment.speaker_turn_next?
puts line
end
@ -147,6 +249,58 @@ whisper.transcribe("path/to/audio.wav", params)
```
### Tokens ###
Each segment has tokens.
To enable token timestamps, you need to set `Whisper::Params#token_timestamps = true`. Then, retrieve tokens from segments using `Whisper::Segment#each_token`.
```ruby
whisper = Whisper::Context.new("base.en")
params = Whisper::Params.new(token_timestamps: true)
whisper
.transcribe("path/to/audio.wav", params)
.each_segment do |segment|
segment.each_token do |token|
token => {start_time:, end_time:, text:, probability:}
st = "%05.2fs" % (start_time / 1000.0)
et = "%05.2fs" % (end_time / 1000.0)
prob = "%.1f%%" % (probability * 100)
puts "[#{st} --> #{et}] #{text} (#{prob})"
end
end
```
```
[00.00s --> 00.00s] [_BEG_] (84.2%)
[00.32s --> 00.37s] And (71.2%)
[00.37s --> 00.53s] so (98.5%)
[00.69s --> 00.85s] my (70.7%)
[00.85s --> 01.59s] fellow (99.5%)
[01.59s --> 02.10s] Americans (90.1%)
[02.85s --> 03.30s] , (28.4%)
[03.30s --> 04.14s] ask (79.8%)
[04.14s --> 04.28s] not (78.9%)
[05.03s --> 05.35s] what (93.3%)
[05.41s --> 05.74s] your (98.8%)
[05.74s --> 06.41s] country (99.6%)
[06.41s --> 06.74s] can (97.7%)
[06.74s --> 06.92s] do (99.0%)
[07.00s --> 07.00s] for (95.8%)
[07.01s --> 07.52s] you (98.5%)
[07.81s --> 08.05s] , (49.3%)
[08.19s --> 08.37s] ask (65.6%)
[08.37s --> 08.75s] what (98.8%)
[08.91s --> 09.04s] you (98.2%)
[09.04s --> 09.32s] can (96.9%)
[09.32s --> 09.38s] do (90.3%)
[09.44s --> 09.76s] for (91.8%)
[09.76s --> 09.99s] your (98.2%)
[10.02s --> 10.36s] country (99.6%)
[10.51s --> 10.99s] . (87.0%)
[11.00s --> 11.00s] [_TT_550] (7.6%)
```
### Models ###
You can see model information:
@ -223,12 +377,73 @@ whisper
end
```
The second argument `samples` may be an array, an object with `length` and `each` method, or a MemoryView. If you can prepare audio data as C array and export it as a MemoryView, whispercpp accepts and works with it with zero copy.
The second argument `samples` may be an array, an object with `length` and `each` method, or a MemoryView.
If you can prepare audio data as C array and export it as a MemoryView, whispercpp accepts and works with it with zero copy.
```ruby
require "torchaudio"
require "arrow-numo-narray"
require "whisper"
waveform, sample_rate = TorchAudio.load("test/fixtures/jfk.wav")
# Convert Torch::Tensor to Arrow::Array via Numo::NArray
samples = waveform.squeeze.numo.to_arrow.to_arrow_array
whisper = Whisper::Context.new("base")
whisper
# Arrow::Array exports MemoryView
.full(Whisper::Params.new, samples)
```
Custom context params
---------------------
You can customize `Whisper::Context`'s behavior using `Whisper::Context::Params`.
```ruby
context_params = Whisper::Context::Params.new(
use_gpu: false,
flash_attn: false,
# etc
)
whisper = Whisper::Context.new("base", context_params)
```
Using VAD separately from ASR
-----------------------------
The VAD feature is useful on its own. You can use it separately from ASR:
```ruby
vad = Whisper::VAD::Context.new("silero-v6.2.0")
vad
.detect("path/to/audio.wav", Whisper::VAD::Params.new)
.each.with_index do |segment, index|
segment => {start_time: st, end_time: ed} # `Segment` responds to `#deconstruct_keys`
puts "[%{nth}: %{st} --> %{ed}]" % {nth: index + 1, st:, ed:}
end
```
You may also use the low-level API `Whisper::VAD::Context#segments_from_samples`, similar to `Whisper::Context#full`:
```ruby
# Ruby Array
reader = WaveFile::Reader.new("path/to/audio.wav", WaveFile::Format.new(:mono, :float, 16000))
samples = reader.enum_for(:each_buffer).map(&:samples).flatten
# Or, object which exports MemoryView
waveform, sample_rate = TorchAudio.load("test/fixtures/jfk.wav")
samples = waveform.squeeze.numo.to_arrow.to_arrow_array
segments = vad.segments_from_samples(Whisper::VAD::Params.new, samples)
```
Development
-----------
% git clone https://github.com/ggerganov/whisper.cpp.git
% git clone https://github.com/ggml-org/whisper.cpp.git
% cd whisper.cpp/bindings/ruby
% rake test
@ -236,10 +451,15 @@ First call of `rake test` builds an extension and downloads a model for testing.
If something seems wrong on build, running `rake clean` solves some cases.
### Need help ###
* Windows support
* Refinement of C/C++ code, especially memory management
License
-------
The same to [whisper.cpp][].
[whisper.cpp]: https://github.com/ggerganov/whisper.cpp
[models]: https://github.com/ggerganov/whisper.cpp/tree/master/models
[whisper.cpp]: https://github.com/ggml-org/whisper.cpp
[models]: https://github.com/ggml-org/whisper.cpp/tree/master/models

View File

@ -3,11 +3,15 @@ require "bundler/gem_tasks"
require "rake/testtask"
require_relative "extsources"
SOURCES_DIR = "ext/sources"
SOURCES = FileList[]
EXTSOURCES.each do |src|
basename = src.pathmap("%f")
dest = basename == "LICENSE" ? basename : src.pathmap("%{../..,ext}p")
dest = basename == "LICENSE" ? basename
: src.pathmap("%{\\.\\./\\.\\.,#{SOURCES_DIR}}p")
.pathmap("%{\\.\\./javascript,#{SOURCES_DIR}/bindings/javascript}p")
dir = dest.pathmap("%d")
file src
directory dir
@ -18,7 +22,6 @@ EXTSOURCES.each do |src|
end
CLEAN.include SOURCES
CLEAN.include FileList["ext/**/*.o", "ext/**/*.metal", "ext/**/*.tmp", "ext/whisper.{so,bundle,dll}"]
SRC = FileList["ext/*.{c,cpp,h}"]
@ -36,6 +39,20 @@ file "ext/Makefile" => SRC + ["ext/extconf.rb"] + SOURCES do |t|
ruby "extconf.rb"
end
end
if File.exist? "ext/Makefile"
task :make_clean do
cd "ext" do
sh "make", "clean"
end
end
task clean: :make_clean
task :make_distclean do
cd "ext" do
sh "make", "distclean"
end
end
task clobber: :make_distclean
end
file SO_FILE => "ext/Makefile" do |t|
chdir "ext" do
@ -50,17 +67,30 @@ file LIB_FILE => [SO_FILE, "lib"] do |t|
end
CLEAN.include LIB_FILE
Rake::TestTask.new do |t|
t.test_files = FileList["tests/test_*.rb"]
Rake::TestTask.new
TEST_FIXTURE_AUDIO = "test/fixtures/jfk.wav"
TEST_FIXTURE_AUDIO_SRC = File.expand_path(File.join(__dir__, "..", "..", "samples", "jfk.wav"))
TEST_FIXTURE_AUDIO_DIR = TEST_FIXTURE_AUDIO.pathmap("%d")
directory TEST_FIXTURE_AUDIO_DIR
if File.exist? TEST_FIXTURE_AUDIO_SRC
file TEST_FIXTURE_AUDIO => [TEST_FIXTURE_AUDIO_SRC, TEST_FIXTURE_AUDIO_DIR] do |t|
symlink t.source, t.name
end
else
require "open-uri"
file TEST_FIXTURE_AUDIO => TEST_FIXTURE_AUDIO_DIR do |t|
File.write t.name, URI("https://github.com/ggml-org/whisper.cpp/raw/refs/heads/master/samples/jfk.wav").read
end
end
TEST_MEMORY_VIEW = "tests/jfk_reader/jfk_reader.#{RbConfig::CONFIG['DLEXT']}"
file TEST_MEMORY_VIEW => "tests/jfk_reader/jfk_reader.c" do |t|
chdir "tests/jfk_reader" do
TEST_MEMORY_VIEW = "test/jfk_reader/jfk_reader.#{RbConfig::CONFIG['DLEXT']}"
file TEST_MEMORY_VIEW => "test/jfk_reader/jfk_reader.c" do |t|
chdir "test/jfk_reader" do
ruby "extconf.rb"
sh "make"
end
end
CLEAN.include "tests/jfk_reader/jfk_reader.{o,#{RbConfig::CONFIG['DLEXT']}}"
CLEAN.include TEST_MEMORY_VIEW
task test: [LIB_FILE, TEST_MEMORY_VIEW]
task test: [LIB_FILE, TEST_MEMORY_VIEW, TEST_FIXTURE_AUDIO]

View File

@ -2,10 +2,8 @@ Makefile
whisper.so
whisper.bundle
whisper.dll
scripts/get-flags.mk
*.o
/*/**/*.c
/*/**/*.cpp
/*/**/*.h
/*/**/*.m
/*/**/*.metal
*.a
sources/*
!sources/CMakeGraphVizOptions.cmake
mkmf.log

View File

@ -1,9 +0,0 @@
ggml/src/ggml-cpu/ggml-cpu-cpp.o: \
ggml/src/ggml-cpu/ggml-cpu.cpp \
ggml/include/ggml-backend.h \
ggml/include/ggml.h \
ggml/include/ggml-alloc.h \
ggml/src/ggml-backend-impl.h \
ggml/include/ggml-cpu.h \
ggml/src/ggml-impl.h
$(CXX) $(CXXFLAGS) -c $< -o $@

View File

@ -0,0 +1,73 @@
require "tsort"
class Dependencies
include TSort
def initialize(cmake, options)
@cmake = cmake
@options = options
@static_lib_shape = nil
@nodes = {}
@graph = Hash.new {|h, k| h[k] = []}
generate_dot
parse_dot
end
def libs
tsort.filter_map {|node|
label, shape = @nodes[node]
if shape == @static_lib_shape
label.gsub(/\\n\([^)]+\)/, '')
else
nil
end
}.reverse.collect {|lib| "lib#{lib}.a"}
end
def to_s
libs.join(" ")
end
private
def dot_path
File.join(__dir__, "build", "whisper.cpp.dot")
end
def generate_dot
args = ["-S", "sources", "-B", "build", "--graphviz", dot_path, "-D", "BUILD_SHARED_LIBS=OFF"]
args << @options.to_s unless @options.to_s.empty?
system @cmake, *args, exception: true
end
def parse_dot
File.open(dot_path).each_line do |line|
case line
when /\[\s*label\s*=\s*"Static Library"\s*,\s*shape\s*=\s*(?<shape>\w+)\s*\]/
@static_lib_shape = $~[:shape]
when /\A\s*"(?<node>\w+)"\s*\[\s*label\s*=\s*"(?<label>\S+)"\s*,\s*shape\s*=\s*(?<shape>\w+)\s*\]\s*;\s*\z/
node = $~[:node]
label = $~[:label]
shape = $~[:shape]
@nodes[node] = [label, shape]
when /\A\s*"(?<depender>\w+)"\s*->\s*"(?<dependee>\w+)"/
depender = $~[:depender]
dependee = $~[:dependee]
@graph[depender] << dependee
end
end
end
def tsort_each_node
@nodes.each_key do |node|
yield node
end
end
def tsort_each_child(node)
@graph[node].each do |child|
yield child
end
end
end

View File

@ -1,208 +1,23 @@
require 'mkmf'
require "mkmf"
require_relative "options"
require_relative "dependencies"
# need to use c++ compiler flags
$CXXFLAGS << ' -std=c++17'
cmake = find_executable("cmake") || abort
options = Options.new(cmake).to_s
have_library("gomp") rescue nil
libs = Dependencies.new(cmake, options).to_s
$LDFLAGS << ' -lstdc++'
$CFLAGS << " -O3 -march=native"
$INCFLAGS << " -Isources/include -Isources/ggml/include -Isources/examples"
$LOCAL_LIBS << " #{libs}"
$cleanfiles << " build #{libs}"
# Set to true when building binary gems
if enable_config('static-stdlib', false)
$LDFLAGS << ' -static-libgcc -static-libstdc++'
end
if enable_config('march-tune-native', false)
$CFLAGS << ' -march=native -mtune=native'
$CXXFLAGS << ' -march=native -mtune=native'
end
if ENV['WHISPER_METAL']
$GGML_METAL ||= true
$DEPRECATE_WARNING ||= true
end
$UNAME_S = `uname -s`.chomp
$UNAME_P = `uname -p`.chomp
$UNAME_M = `uname -m`.chomp
if $UNAME_S == 'Darwin'
unless ENV['GGML_NO_METAL']
$GGML_METAL ||= true
end
$GGML_NO_OPENMP ||= true
end
if $GGML_METAL
$GGML_METAL_EMBED_LIBRARY = true
end
$MK_CPPFLAGS = '-Iggml/include -Iggml/src -Iggml/src/ggml-cpu -Iinclude -Isrc -Iexamples -DGGML_USE_CPU'
$MK_CFLAGS = '-std=c11 -fPIC'
$MK_CXXFLAGS = '-std=c++17 -fPIC'
$MK_NVCCFLAGS = '-std=c++17'
$MK_LDFLAGS = ''
$OBJ_GGML = []
$OBJ_WHISPER = []
$OBJ_COMMON = []
$OBJ_SDL = []
$MK_CPPFLAGS << ' -D_XOPEN_SOURCE=600'
if $UNAME_S == 'Linux'
$MK_CPPFLAGS << ' -D_GNU_SOURCE'
end
if $UNAME_S == 'Darwin'
$MK_CPPFLAGS << ' -D_DARWIN_C_SOURCE'
end
if ENV['WHISPER_DEBUG']
$MK_CFLAGS << ' -O0 -g'
$MK_CXXFLAGS << ' -O0 -g'
$MK_LDFLAGS << ' -g'
$MK_NVCCFLAGS << ' -O0 -g'
else
$MK_CPPFLAGS << ' -DNDEBUG'
$MK_CFLAGS << ' -O3'
$MK_CXXFLAGS << ' -O3'
$MK_NVCCFLAGS << ' -O3'
end
$WARN_FLAGS =
' -Wall' <<
' -Wextra' <<
' -Wpedantic' <<
' -Wcast-qual' <<
' -Wno-unused-function'
$MK_CFLAGS <<
$WARN_FLAGS <<
' -Wshadow' <<
' -Wstrict-prototypes' <<
' -Wpointer-arith' <<
' -Wmissing-prototypes' <<
' -Werror=implicit-int' <<
' -Werror=implicit-function-declaration'
$MK_CXXFLAGS <<
$WARN_FLAGS <<
' -Wmissing-declarations' <<
' -Wmissing-noreturn'
unless `#{cc_command} #{$LDFLAGS} -Wl,-v 2>&1`.chomp.include? 'dyld-1015.7'
$MK_CPPFLAGS << ' -DHAVE_BUGGY_APPLE_LINKER'
end
if %w[Linux Darwin FreeBSD NetBSD OpenBSD Haiku].include? $UNAME_S
$MK_CFLAGS << ' -pthread'
$MK_CXXFLAGS << ' -pthread'
end
unless $_WIN32
$DSO_EXT = '.so'
else
$DSO_EXT = '.dll'
end
unless ENV['RISCV']
if %w[x86_64 i686 amd64].include? $UNAME_M
$HOST_CXXFLAGS ||= ''
$MK_CFLAGS << ' -march=native -mtune=native'
$HOST_CXXFLAGS << ' -march=native -mtune=native'
end
else
$MK_CFLAGS << ' -march=rv64gcv -mabi=lp64d'
$MK_CXXFLAGS << ' -march=rv64gcv -mabi=lp64d'
end
unless ENV['GGML_NO_ACCELERATE']
if $UNAME_S == 'Darwin'
$MK_CPPFLAGS << ' -DGGML_USE_ACCELERATE -DGGML_USE_BLAS -DGGML_BLAS_USE_ACCELERATE'
$MK_CPPFLAGS << ' -DACCELERATE_NEW_LAPACK'
$MK_CPPFLAGS << ' -DACCELERATE_LAPACK_ILP64'
$MK_LDFLAGS << ' -framework Accelerate'
$OBJ_GGML << 'ggml/src/ggml-blas/ggml-blas.o'
end
end
if ENV['GGML_OPENBLAS']
$MK_CPPFLAGS << " -DGGML_USE_BLAS #{`pkg-config --cflags-only-I openblas`.chomp}"
$MK_CFLAGS << " #{`pkg-config --cflags-only-other openblas)`.chomp}"
$MK_LDFLAGS << " #{`pkg-config --libs openblas`}"
$OBJ_GGML << 'ggml/src/ggml-blas/ggml-blas.o'
end
if ENV['GGML_OPENBLAS64']
$MK_CPPFLAGS << " -DGGML_USE_BLAS #{`pkg-config --cflags-only-I openblas64`.chomp}"
$MK_CFLAGS << " #{`pkg-config --cflags-only-other openblas64)`.chomp}"
$MK_LDFLAGS << " #{`pkg-config --libs openblas64`}"
$OBJ_GGML << 'ggml/src/ggml-blas/ggml-blas.o'
end
if $GGML_METAL
$MK_CPPFLAGS << ' -DGGML_USE_METAL'
$MK_LDFLAGS << ' -framework Foundation -framework Metal -framework MetalKit'
$OBJ_GGML << 'ggml/src/ggml-metal/ggml-metal.o'
if ENV['GGML_METAL_NDEBUG']
$MK_CPPFLAGS << ' -DGGML_METAL_NDEBUG'
end
if $GGML_METAL_EMBED_LIBRARY
$MK_CPPFLAGS << ' -DGGML_METAL_EMBED_LIBRARY'
$OBJ_GGML << 'ggml/src/ggml-metal/ggml-metal-embed.o'
end
end
$OBJ_GGML <<
'ggml/src/ggml.o' <<
'ggml/src/ggml-alloc.o' <<
'ggml/src/ggml-backend.o' <<
'ggml/src/ggml-backend-reg.o' <<
'ggml/src/ggml-opt.o' <<
'ggml/src/ggml-quants.o' <<
'ggml/src/ggml-threading.o' <<
'ggml/src/ggml-cpu/ggml-cpu.o' <<
'ggml/src/ggml-cpu/ggml-cpu-cpp.o' <<
'ggml/src/ggml-cpu/ggml-cpu-aarch64.o' <<
'ggml/src/ggml-cpu/ggml-cpu-hbm.o' <<
'ggml/src/ggml-cpu/ggml-cpu-quants.o' <<
'ggml/src/ggml-cpu/ggml-cpu-traits.o'
$OBJ_WHISPER <<
'src/whisper.o' <<
'examples/common.o' <<
'examples/common-whisper.o'
$objs = $OBJ_GGML + $OBJ_WHISPER + $OBJ_COMMON + $OBJ_SDL
$objs <<
"ruby_whisper.o" <<
"ruby_whisper_context.o" <<
"ruby_whisper_transcribe.o" <<
"ruby_whisper_params.o" <<
"ruby_whisper_error.o" <<
"ruby_whisper_segment.o" <<
"ruby_whisper_model.o"
$CPPFLAGS = "#{$MK_CPPFLAGS} #{$CPPFLAGS}"
$CFLAGS = "#{$CPPFLAGS} #{$MK_CFLAGS} #{$GF_CFLAGS} #{$CFLAGS}"
$BASE_CXXFLAGS = "#{$MK_CXXFLAGS} #{$CXXFLAGS}"
$CXXFLAGS = "#{$BASE_CXXFLAGS} #{$HOST_CXXFLAGS} #{$GF_CXXFLAGS} #{$CPPFLAGS}"
$NVCCFLAGS = "#{$MK_NVCCFLAGS} #{$NVCCFLAGS}"
$LDFLAGS = "#{$MK_LDFLAGS} #{$LDFLAGS}"
create_makefile('whisper')
File.open 'Makefile', 'a' do |file|
file.puts 'include scripts/get-flags.mk'
file.puts 'include cpu.mk'
if $GGML_METAL
file.puts 'include metal.mk'
if $GGML_METAL_EMBED_LIBRARY
file.puts 'include metal-embed.mk'
end
end
create_makefile "whisper" do |conf|
conf << <<~EOF
$(TARGET_SO): #{libs}
#{libs}: cmake-targets
cmake-targets:
#{"\t"}#{cmake} -S sources -B build -D BUILD_SHARED_LIBS=OFF -D CMAKE_ARCHIVE_OUTPUT_DIRECTORY=#{__dir__} -D CMAKE_POSITION_INDEPENDENT_CODE=ON #{options}
#{"\t"}#{cmake} --build build --config Release --target common whisper
EOF
end


@ -1,17 +0,0 @@
ggml/src/ggml-metal/ggml-metal-embed.o: \
ggml/src/ggml-metal/ggml-metal.metal \
ggml/src/ggml-metal/ggml-metal-impl.h \
ggml/src/ggml-common.h
@echo "Embedding Metal library"
@sed -e '/__embed_ggml-common.h__/r ggml/src/ggml-common.h' -e '/__embed_ggml-common.h__/d' < ggml/src/ggml-metal/ggml-metal.metal > ggml/src/ggml-metal/ggml-metal-embed.metal.tmp
@sed -e '/#include "ggml-metal-impl.h"/r ggml/src/ggml-metal/ggml-metal-impl.h' -e '/#include "ggml-metal-impl.h"/d' < ggml/src/ggml-metal/ggml-metal-embed.metal.tmp > ggml/src/ggml-metal/ggml-metal-embed.metal
$(eval TEMP_ASSEMBLY=$(shell mktemp -d))
@echo ".section __DATA, __ggml_metallib" > $(TEMP_ASSEMBLY)/ggml-metal-embed.s
@echo ".globl _ggml_metallib_start" >> $(TEMP_ASSEMBLY)/ggml-metal-embed.s
@echo "_ggml_metallib_start:" >> $(TEMP_ASSEMBLY)/ggml-metal-embed.s
@echo ".incbin \"ggml/src/ggml-metal/ggml-metal-embed.metal\"" >> $(TEMP_ASSEMBLY)/ggml-metal-embed.s
@echo ".globl _ggml_metallib_end" >> $(TEMP_ASSEMBLY)/ggml-metal-embed.s
@echo "_ggml_metallib_end:" >> $(TEMP_ASSEMBLY)/ggml-metal-embed.s
$(CC) $(CFLAGS) -c $(TEMP_ASSEMBLY)/ggml-metal-embed.s -o $@
@rm -f ${TEMP_ASSEMBLY}/ggml-metal-embed.s
@rmdir ${TEMP_ASSEMBLY}


@ -1,6 +0,0 @@
ggml/src/ggml-metal/ggml-metal.o: \
ggml/src/ggml-metal/ggml-metal.m \
ggml/src/ggml-metal/ggml-metal-impl.h \
ggml/include/ggml-metal.h \
ggml/include/ggml.h
$(CC) $(CFLAGS) -c $< -o $@


@ -0,0 +1,85 @@
class Options
def initialize(cmake="cmake")
@cmake = cmake
@options = {}
configure
end
def to_s
@options
.reject {|name, (type, value)| value.nil?}
.collect {|name, (type, value)| "-D #{name}=#{value == true ? "ON" : value == false ? "OFF" : value.shellescape}"}
.join(" ")
end
def cmake_options
return @cmake_options if @cmake_options
output = nil
Dir.chdir __dir__ do
output = `#{@cmake.shellescape} -S sources -B build -L`
end
@cmake_options = output.lines.drop_while {|line| line.chomp != "-- Cache values"}.drop(1)
.filter_map {|line|
option, value = line.chomp.split("=", 2)
name, type = option.split(":", 2)
[
name,
[
type,
type == "BOOL" ? value == "ON" : value
]
]
}.to_h
end
private
def configure
cmake_options.each_pair do |name, (type, default_value)|
option = option_name(name)
value = type == "BOOL" ? enable_config(option) : arg_config("--#{option}")
@options[name] = [type, value]
end
configure_accelerate
configure_metal
configure_coreml
end
# See ggml/src/ggml-cpu/CMakeLists.txt
def configure_accelerate
if RUBY_PLATFORM.match?(/darwin/) && enabled?("GGML_ACCELERATE")
$LDFLAGS << " -framework Accelerate"
end
end
# See ggml/src/ggml-metal/CMakeLists.txt
def configure_metal
$LDFLAGS << " -framework Foundation -framework Metal -framework MetalKit" if enabled?("GGML_METAL")
end
# See src/CMakeLists.txt
def configure_coreml
if enabled?("WHISPER_COREML")
$LDFLAGS << " -framework Foundation -framework CoreML"
$defs << "-DRUBY_WHISPER_USE_COREML"
end
end
def option_name(name)
name.downcase.gsub("_", "-")
end
def enabled?(option)
op = @options[option]
raise "Option not exist: #{option}" unless op
raise "Option not boolean: #{option}(#{op[0]})" unless op[0] == "BOOL"
if op[1].nil?
cmake_options[option][1]
else
op[1]
end
end
end
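A quick sketch of what the Options class above produces (the option name here is hypothetical; the real entries come from `cmake -S sources -B build -L`): a BOOL cache entry such as GGML_METAL:BOOL is exposed as the mkmf option "ggml-metal" via option_name, so it can be toggled with --enable-ggml-metal / --disable-ggml-metal, and to_s renders the chosen values back into -D flags for CMake.

options = Options.new        # runs `cmake -S sources -B build -L` and parses the "Cache values" section
puts options.to_s            # e.g. "-D GGML_METAL=ON -D WHISPER_COREML=OFF" (unset options are omitted)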

View File

@ -1,13 +1,17 @@
#include <ruby.h>
#include <ruby/memory_view.h>
#include "ruby_whisper.h"
VALUE mWhisper;
VALUE mVAD;
VALUE cContext;
VALUE cParams;
VALUE cVADContext;
VALUE cVADParams;
VALUE cVADSegments;
VALUE cVADSegment;
VALUE eError;
VALUE cSegment;
VALUE cToken;
VALUE cModel;
ID id_to_s;
@ -20,17 +24,26 @@ ID id_new;
ID id_to_path;
ID id_URI;
ID id_pre_converted_models;
ID id_coreml_compiled_models;
ID id_cache;
ID id_n_processors;
static bool is_log_callback_finalized = false;
// High level API
extern VALUE ruby_whisper_segment_allocate(VALUE klass);
extern void init_ruby_whisper_context(VALUE *mWhisper);
extern VALUE init_ruby_whisper_context(VALUE *mWhisper);
extern void init_ruby_whisper_context_params(VALUE *cContext);
extern void init_ruby_whisper_params(VALUE *mWhisper);
extern void init_ruby_whisper_error(VALUE *mWhisper);
extern void init_ruby_whisper_segment(VALUE *mWhisper, VALUE *cSegment);
extern void init_ruby_whisper_segment(VALUE *mWhisper);
extern void init_ruby_whisper_token(VALUE *mWhisper);
extern void init_ruby_whisper_model(VALUE *mWhisper);
extern void init_ruby_whisper_vad_params(VALUE *mVAD);
extern void init_ruby_whisper_vad_context(VALUE *mVAD);
extern void init_ruby_whisper_vad_segment(VALUE *mVAD);
extern void init_ruby_whisper_vad_segments(VALUE *mVAD);
extern void register_callbacks(ruby_whisper_params *rwp, VALUE *context);
/*
@ -80,6 +93,14 @@ static VALUE ruby_whisper_s_lang_str_full(VALUE self, VALUE id) {
return rb_str_new2(str_full);
}
/*
* call-seq:
* system_info_str -> String
*/
static VALUE ruby_whisper_s_system_info_str(VALUE self) {
return rb_str_new2(whisper_print_system_info());
}
static VALUE ruby_whisper_s_finalize_log_callback(VALUE self, VALUE id) {
is_log_callback_finalized = true;
return Qnil;
@ -91,6 +112,10 @@ ruby_whisper_log_callback(enum ggml_log_level level, const char * buffer, void *
return;
}
VALUE log_callback = rb_iv_get(mWhisper, "log_callback");
if (NIL_P(log_callback)) {
return;
}
VALUE udata = rb_iv_get(mWhisper, "user_data");
rb_funcall(log_callback, id_call, 3, INT2NUM(level), rb_str_new2(buffer), udata);
}
@ -108,24 +133,20 @@ static VALUE ruby_whisper_s_log_set(VALUE self, VALUE log_callback, VALUE user_d
rb_iv_set(self, "log_callback", log_callback);
rb_iv_set(self, "user_data", user_data);
VALUE finalize_log_callback = rb_funcall(mWhisper, rb_intern("method"), 1, rb_str_new2("finalize_log_callback"));
rb_define_finalizer(log_callback, finalize_log_callback);
if (!NIL_P(log_callback)) {
VALUE finalize_log_callback = rb_funcall(mWhisper, rb_intern("method"), 1, rb_str_new2("finalize_log_callback"));
rb_define_finalizer(log_callback, finalize_log_callback);
}
whisper_log_set(ruby_whisper_log_callback, NULL);
if (NIL_P(log_callback)) {
whisper_log_set(NULL, NULL);
} else {
whisper_log_set(ruby_whisper_log_callback, NULL);
}
return Qnil;
}
static void rb_whisper_model_mark(ruby_whisper_model *rwm) {
rb_gc_mark(rwm->context);
}
static VALUE ruby_whisper_model_allocate(VALUE klass) {
ruby_whisper_model *rwm;
rwm = ALLOC(ruby_whisper_model);
return Data_Wrap_Struct(klass, rb_whisper_model_mark, RUBY_DEFAULT_FREE, rwm);
}
void Init_whisper() {
id_to_s = rb_intern("to_s");
id_call = rb_intern("call");
@ -137,9 +158,14 @@ void Init_whisper() {
id_to_path = rb_intern("to_path");
id_URI = rb_intern("URI");
id_pre_converted_models = rb_intern("pre_converted_models");
id_coreml_compiled_models = rb_intern("coreml_compiled_models");
id_cache = rb_intern("cache");
id_n_processors = rb_intern("n_processors");
mWhisper = rb_define_module("Whisper");
mVAD = rb_define_module_under(mWhisper, "VAD");
rb_define_const(mWhisper, "VERSION", rb_str_new2(whisper_version()));
rb_define_const(mWhisper, "LOG_LEVEL_NONE", INT2NUM(GGML_LOG_LEVEL_NONE));
rb_define_const(mWhisper, "LOG_LEVEL_INFO", INT2NUM(GGML_LOG_LEVEL_INFO));
rb_define_const(mWhisper, "LOG_LEVEL_WARN", INT2NUM(GGML_LOG_LEVEL_WARN));
@ -147,18 +173,43 @@ void Init_whisper() {
rb_define_const(mWhisper, "LOG_LEVEL_DEBUG", INT2NUM(GGML_LOG_LEVEL_DEBUG));
rb_define_const(mWhisper, "LOG_LEVEL_CONT", INT2NUM(GGML_LOG_LEVEL_CONT));
rb_define_const(mWhisper, "AHEADS_NONE", INT2NUM(WHISPER_AHEADS_NONE));
rb_define_const(mWhisper, "AHEADS_N_TOP_MOST", INT2NUM(WHISPER_AHEADS_N_TOP_MOST));
rb_define_const(mWhisper, "AHEADS_CUSTOM", INT2NUM(WHISPER_AHEADS_CUSTOM));
rb_define_const(mWhisper, "AHEADS_TINY_EN", INT2NUM(WHISPER_AHEADS_TINY_EN));
rb_define_const(mWhisper, "AHEADS_TINY", INT2NUM(WHISPER_AHEADS_TINY));
rb_define_const(mWhisper, "AHEADS_BASE_EN", INT2NUM(WHISPER_AHEADS_BASE_EN));
rb_define_const(mWhisper, "AHEADS_BASE", INT2NUM(WHISPER_AHEADS_BASE));
rb_define_const(mWhisper, "AHEADS_SMALL_EN", INT2NUM(WHISPER_AHEADS_SMALL_EN));
rb_define_const(mWhisper, "AHEADS_SMALL", INT2NUM(WHISPER_AHEADS_SMALL));
rb_define_const(mWhisper, "AHEADS_MEDIUM_EN", INT2NUM(WHISPER_AHEADS_MEDIUM_EN));
rb_define_const(mWhisper, "AHEADS_MEDIUM", INT2NUM(WHISPER_AHEADS_MEDIUM));
rb_define_const(mWhisper, "AHEADS_LARGE_V1", INT2NUM(WHISPER_AHEADS_LARGE_V1));
rb_define_const(mWhisper, "AHEADS_LARGE_V2", INT2NUM(WHISPER_AHEADS_LARGE_V2));
rb_define_const(mWhisper, "AHEADS_LARGE_V3", INT2NUM(WHISPER_AHEADS_LARGE_V3));
rb_define_const(mWhisper, "AHEADS_LARGE_V3_TURBO", INT2NUM(WHISPER_AHEADS_LARGE_V3_TURBO));
rb_define_singleton_method(mWhisper, "lang_max_id", ruby_whisper_s_lang_max_id, 0);
rb_define_singleton_method(mWhisper, "lang_id", ruby_whisper_s_lang_id, 1);
rb_define_singleton_method(mWhisper, "lang_str", ruby_whisper_s_lang_str, 1);
rb_define_singleton_method(mWhisper, "lang_str_full", ruby_whisper_s_lang_str_full, 1);
rb_define_singleton_method(mWhisper, "system_info_str", ruby_whisper_s_system_info_str, 0);
rb_define_singleton_method(mWhisper, "log_set", ruby_whisper_s_log_set, 2);
rb_define_private_method(rb_singleton_class(mWhisper), "finalize_log_callback", ruby_whisper_s_finalize_log_callback, 1);
init_ruby_whisper_context(&mWhisper);
cContext = init_ruby_whisper_context(&mWhisper);
init_ruby_whisper_context_params(&cContext);
init_ruby_whisper_params(&mWhisper);
init_ruby_whisper_error(&mWhisper);
init_ruby_whisper_segment(&mWhisper, &cContext);
init_ruby_whisper_segment(&mWhisper);
init_ruby_whisper_token(&mWhisper);
init_ruby_whisper_model(&mWhisper);
init_ruby_whisper_vad_params(&mVAD);
init_ruby_whisper_vad_segment(&mVAD);
init_ruby_whisper_vad_segments(&mVAD);
init_ruby_whisper_vad_context(&mVAD);
rb_require("whisper/context");
rb_require("whisper/segment");
rb_require("whisper/model/uri");
}
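A usage sketch of the module-level API registered in Init_whisper above (assuming the extension is loaded with require "whisper"):

require "whisper"

puts Whisper::VERSION
puts Whisper.system_info_str                          # whisper_print_system_info() feature string
puts Whisper.lang_str_full(Whisper.lang_id("en"))     # => "english"

Whisper.log_set ->(level, message, user_data) { warn message }, nil
Whisper.log_set nil, nil                              # passing nil now disables whisper.cpp logging entirely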


@ -1,6 +1,9 @@
#ifndef RUBY_WHISPER_H
#define RUBY_WHISPER_H
#include <ruby.h>
#include <ruby/util.h>
#include <ruby/memory_view.h>
#include "whisper.h"
typedef struct {
@ -14,21 +17,92 @@ typedef struct {
struct whisper_context *context;
} ruby_whisper;
typedef struct ruby_whisper_context_params {
struct whisper_context_params params;
} ruby_whisper_context_params;
typedef struct {
struct whisper_full_params params;
bool diarize;
ruby_whisper_callback_container *new_segment_callback_container;
ruby_whisper_callback_container *progress_callback_container;
ruby_whisper_callback_container *encoder_begin_callback_container;
ruby_whisper_callback_container *abort_callback_container;
VALUE vad_params;
} ruby_whisper_params;
typedef struct {
struct whisper_vad_params params;
} ruby_whisper_vad_params;
typedef struct {
VALUE context;
int index;
} ruby_whisper_segment;
typedef struct {
whisper_token_data *token_data;
VALUE text;
} ruby_whisper_token;
typedef struct {
VALUE context;
} ruby_whisper_model;
typedef struct {
struct whisper_vad_segments *segments;
} ruby_whisper_vad_segments;
typedef struct {
VALUE segments;
int index;
} ruby_whisper_vad_segment;
typedef struct {
struct whisper_vad_context *context;
} ruby_whisper_vad_context;
typedef struct parsed_samples_t {
float *samples;
int n_samples;
rb_memory_view_t memview;
bool memview_exported;
} parsed_samples_t;
#define GetContext(obj, rw) do { \
TypedData_Get_Struct((obj), ruby_whisper, &ruby_whisper_type, (rw)); \
if ((rw)->context == NULL) { \
rb_raise(rb_eRuntimeError, "Not initialized"); \
} \
} while (0)
#define GetContextParams(obj, rwcp) do { \
TypedData_Get_Struct((obj), ruby_whisper_context_params, &ruby_whisper_context_params_type, (rwcp)); \
} while (0)
#define GetToken(obj, rwt) do { \
TypedData_Get_Struct((obj), ruby_whisper_token, &ruby_whisper_token_type, (rwt)); \
if ((rwt)->token_data == NULL) { \
rb_raise(rb_eRuntimeError, "Not initialized"); \
} \
} while (0)
#define GetVADContext(obj, rwvc) do { \
TypedData_Get_Struct((obj), ruby_whisper_vad_context, &ruby_whisper_vad_context_type, (rwvc)); \
if ((rwvc)->context == NULL) { \
rb_raise(rb_eRuntimeError, "Not initialized"); \
} \
} while (0)
#define GetVADParams(obj, rwvp) do { \
TypedData_Get_Struct((obj), ruby_whisper_vad_params, &ruby_whisper_vad_params_type, (rwvp)); \
} while (0)
#define GetVADSegments(obj, rwvss) do { \
TypedData_Get_Struct((obj), ruby_whisper_vad_segments, &ruby_whisper_vad_segments_type, (rwvss)); \
if ((rwvss)->segments == NULL) { \
rb_raise(rb_eRuntimeError, "Not initialized"); \
} \
} while (0)
#endif


@ -1,5 +1,3 @@
#include <ruby.h>
#include <ruby/memory_view.h>
#include "ruby_whisper.h"
extern ID id_to_s;
@ -11,15 +9,43 @@ extern ID id_new;
extern ID id_to_path;
extern ID id_URI;
extern ID id_pre_converted_models;
extern ID id_coreml_compiled_models;
extern ID id_cache;
extern ID id_n_processors;
extern VALUE cContext;
extern VALUE eError;
extern VALUE cModel;
extern const rb_data_type_t ruby_whisper_params_type;
extern const rb_data_type_t ruby_whisper_context_params_type;
extern VALUE ruby_whisper_transcribe(int argc, VALUE *argv, VALUE self);
extern VALUE rb_whisper_model_initialize(VALUE context);
extern VALUE rb_whisper_segment_initialize(VALUE context, int index);
extern void register_callbacks(ruby_whisper_params *rwp, VALUE *context);
extern VALUE rb_whisper_model_s_new(VALUE context);
extern VALUE rb_whisper_segment_s_new(VALUE context, int index);
extern void prepare_transcription(ruby_whisper_params *rwp, VALUE *context, int n_processors);
ID transcribe_option_names[1];
typedef struct fill_samples_args {
float *dest;
VALUE *src;
int n_samples;
} fill_samples_args;
typedef struct full_args {
VALUE *context;
VALUE *params;
float *samples;
int n_samples;
} full_args;
typedef struct full_parallel_args {
VALUE *context;
VALUE *params;
float *samples;
int n_samples;
int n_processors;
} full_parallel_args;
static void
ruby_whisper_free(ruby_whisper *rw)
@ -37,19 +63,74 @@ rb_whisper_mark(ruby_whisper *rw)
}
void
rb_whisper_free(ruby_whisper *rw)
rb_whisper_free(void *p)
{
ruby_whisper *rw = (ruby_whisper *)p;
ruby_whisper_free(rw);
free(rw);
}
static size_t
ruby_whisper_memsize(const void *p)
{
const ruby_whisper *rw = (const ruby_whisper *)p;
size_t size = sizeof(rw);
if (!rw) {
return 0;
}
if (rw->context) {
size += sizeof(rw->context);
}
return size;
}
const rb_data_type_t ruby_whisper_type = {
"ruby_whisper",
{0, rb_whisper_free, ruby_whisper_memsize,},
0, 0,
0
};
static VALUE
ruby_whisper_allocate(VALUE klass)
{
ruby_whisper *rw;
rw = ALLOC(ruby_whisper);
VALUE obj = TypedData_Make_Struct(klass, ruby_whisper, &ruby_whisper_type, rw);
rw->context = NULL;
return Data_Wrap_Struct(klass, rb_whisper_mark, rb_whisper_free, rw);
return obj;
}
VALUE
ruby_whisper_normalize_model_path(VALUE model_path)
{
VALUE pre_converted_models = rb_funcall(cModel, id_pre_converted_models, 0);
VALUE pre_converted_model = rb_hash_aref(pre_converted_models, model_path);
if (!NIL_P(pre_converted_model)) {
model_path = pre_converted_model;
#ifdef RUBY_WHISPER_USE_COREML
VALUE coreml_converted_models = rb_funcall(cModel, id_coreml_compiled_models, 0);
VALUE coreml_converted_model = rb_hash_aref(coreml_converted_models, pre_converted_model);
if (!NIL_P(coreml_converted_model)) {
rb_funcall(coreml_converted_model, id_cache, 0);
}
#endif
}
else if (TYPE(model_path) == T_STRING) {
const char * model_path_str = StringValueCStr(model_path);
if (strncmp("http://", model_path_str, 7) == 0 || strncmp("https://", model_path_str, 8) == 0) {
VALUE uri_class = rb_const_get(cModel, id_URI);
model_path = rb_class_new_instance(1, &model_path, uri_class);
}
}
else if (rb_obj_is_kind_of(model_path, rb_path2class("URI::HTTP"))) {
VALUE uri_class = rb_const_get(cModel, id_URI);
model_path = rb_class_new_instance(1, &model_path, uri_class);
}
if (rb_respond_to(model_path, id_to_path)) {
model_path = rb_funcall(model_path, id_to_path, 0);
}
return model_path;
}
/*
@ -63,34 +144,25 @@ ruby_whisper_initialize(int argc, VALUE *argv, VALUE self)
{
ruby_whisper *rw;
VALUE whisper_model_file_path;
VALUE context_params;
struct whisper_context_params params;
// TODO: we can support init from buffer here too maybe another ruby object to expose
rb_scan_args(argc, argv, "01", &whisper_model_file_path);
Data_Get_Struct(self, ruby_whisper, rw);
rb_scan_args(argc, argv, "11", &whisper_model_file_path, &context_params);
TypedData_Get_Struct(self, ruby_whisper, &ruby_whisper_type, rw);
VALUE pre_converted_models = rb_funcall(cModel, id_pre_converted_models, 0);
VALUE pre_converted_model = rb_hash_aref(pre_converted_models, whisper_model_file_path);
if (!NIL_P(pre_converted_model)) {
whisper_model_file_path = pre_converted_model;
}
if (TYPE(whisper_model_file_path) == T_STRING) {
const char * whisper_model_file_path_str = StringValueCStr(whisper_model_file_path);
if (strncmp("http://", whisper_model_file_path_str, 7) == 0 || strncmp("https://", whisper_model_file_path_str, 8) == 0) {
VALUE uri_class = rb_const_get(cModel, id_URI);
whisper_model_file_path = rb_class_new_instance(1, &whisper_model_file_path, uri_class);
}
}
if (rb_obj_is_kind_of(whisper_model_file_path, rb_path2class("URI::HTTP"))) {
VALUE uri_class = rb_const_get(cModel, id_URI);
whisper_model_file_path = rb_class_new_instance(1, &whisper_model_file_path, uri_class);
}
if (rb_respond_to(whisper_model_file_path, id_to_path)) {
whisper_model_file_path = rb_funcall(whisper_model_file_path, id_to_path, 0);
}
whisper_model_file_path = ruby_whisper_normalize_model_path(whisper_model_file_path);
if (!rb_respond_to(whisper_model_file_path, id_to_s)) {
rb_raise(rb_eRuntimeError, "Expected file path to model to initialize Whisper::Context");
}
rw->context = whisper_init_from_file_with_params(StringValueCStr(whisper_model_file_path), whisper_context_default_params());
if (NIL_P(context_params)) {
params = whisper_context_default_params();
} else {
ruby_whisper_context_params *rwcp;
GetContextParams(context_params, rwcp);
params = rwcp->params;
}
rw->context = whisper_init_from_file_with_params(StringValueCStr(whisper_model_file_path), params);
if (rw->context == NULL) {
rb_raise(rb_eRuntimeError, "error: failed to initialize whisper context");
}
@ -104,7 +176,7 @@ ruby_whisper_initialize(int argc, VALUE *argv, VALUE self)
VALUE ruby_whisper_model_n_vocab(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return INT2NUM(whisper_model_n_vocab(rw->context));
}
@ -115,7 +187,7 @@ VALUE ruby_whisper_model_n_vocab(VALUE self)
VALUE ruby_whisper_model_n_audio_ctx(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return INT2NUM(whisper_model_n_audio_ctx(rw->context));
}
@ -126,7 +198,7 @@ VALUE ruby_whisper_model_n_audio_ctx(VALUE self)
VALUE ruby_whisper_model_n_audio_state(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return INT2NUM(whisper_model_n_audio_state(rw->context));
}
@ -137,7 +209,7 @@ VALUE ruby_whisper_model_n_audio_state(VALUE self)
VALUE ruby_whisper_model_n_audio_head(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return INT2NUM(whisper_model_n_audio_head(rw->context));
}
@ -148,7 +220,7 @@ VALUE ruby_whisper_model_n_audio_head(VALUE self)
VALUE ruby_whisper_model_n_audio_layer(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return INT2NUM(whisper_model_n_audio_layer(rw->context));
}
@ -159,7 +231,7 @@ VALUE ruby_whisper_model_n_audio_layer(VALUE self)
VALUE ruby_whisper_model_n_text_ctx(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return INT2NUM(whisper_model_n_text_ctx(rw->context));
}
@ -170,7 +242,7 @@ VALUE ruby_whisper_model_n_text_ctx(VALUE self)
VALUE ruby_whisper_model_n_text_state(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return INT2NUM(whisper_model_n_text_state(rw->context));
}
@ -181,7 +253,7 @@ VALUE ruby_whisper_model_n_text_state(VALUE self)
VALUE ruby_whisper_model_n_text_head(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return INT2NUM(whisper_model_n_text_head(rw->context));
}
@ -192,7 +264,7 @@ VALUE ruby_whisper_model_n_text_head(VALUE self)
VALUE ruby_whisper_model_n_text_layer(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return INT2NUM(whisper_model_n_text_layer(rw->context));
}
@ -203,7 +275,7 @@ VALUE ruby_whisper_model_n_text_layer(VALUE self)
VALUE ruby_whisper_model_n_mels(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return INT2NUM(whisper_model_n_mels(rw->context));
}
@ -214,7 +286,7 @@ VALUE ruby_whisper_model_n_mels(VALUE self)
VALUE ruby_whisper_model_ftype(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return INT2NUM(whisper_model_ftype(rw->context));
}
@ -225,10 +297,151 @@ VALUE ruby_whisper_model_ftype(VALUE self)
VALUE ruby_whisper_model_type(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return rb_str_new2(whisper_model_type_readable(rw->context));
}
static bool
check_memory_view(rb_memory_view_t *memview)
{
if (memview->format != NULL && strcmp(memview->format, "f") != 0) {
rb_warn("currently only format \"f\" is supported for MemoryView, but given: %s", memview->format);
return false;
}
if (memview->format != NULL && memview->ndim != 1) {
rb_warn("currently only 1 dimensional MemoryView is supported, but given: %zd", memview->ndim);
return false;
}
return true;
}
static VALUE
fill_samples(VALUE rb_args)
{
fill_samples_args *args = (fill_samples_args *)rb_args;
if (RB_TYPE_P(*args->src, T_ARRAY)) {
for (int i = 0; i < args->n_samples; i++) {
args->dest[i] = RFLOAT_VALUE(rb_ary_entry(*args->src, i));
}
} else {
// TODO: use rb_block_call
VALUE iter = rb_funcall(*args->src, id_to_enum, 1, rb_str_new2("each"));
for (int i = 0; i < args->n_samples; i++) {
// TODO: check if iter is exhausted and raise ArgumentError appropriately
VALUE sample = rb_funcall(iter, id_next, 0);
args->dest[i] = RFLOAT_VALUE(sample);
}
}
return Qnil;
}
struct parsed_samples_t
parse_samples(VALUE *samples, VALUE *n_samples)
{
bool memview_available = rb_memory_view_available_p(*samples);
struct parsed_samples_t parsed = {0};
parsed.memview_exported = false;
const bool is_array = RB_TYPE_P(*samples, T_ARRAY);
if (!NIL_P(*n_samples)) {
parsed.n_samples = NUM2INT(*n_samples);
if (is_array) {
if (RARRAY_LEN(*samples) < parsed.n_samples) {
rb_raise(rb_eArgError, "samples length %ld is less than n_samples %d", RARRAY_LEN(*samples), parsed.n_samples);
}
}
// Should check when samples.respond_to?(:length)?
} else {
if (is_array) {
if (RARRAY_LEN(*samples) > INT_MAX) {
rb_raise(rb_eArgError, "samples are too long");
}
parsed.n_samples = (int)RARRAY_LEN(*samples);
} else if (memview_available) {
bool memview_got = rb_memory_view_get(*samples, &parsed.memview, RUBY_MEMORY_VIEW_SIMPLE);
if (memview_got) {
parsed.memview_exported = check_memory_view(&parsed.memview);
if (!parsed.memview_exported) {
rb_memory_view_release(&parsed.memview);
parsed.memview = (rb_memory_view_t){0};
}
}
if (parsed.memview_exported) {
ssize_t n_samples_size = parsed.memview.byte_size / parsed.memview.item_size;
if (n_samples_size > INT_MAX) {
rb_memory_view_release(&parsed.memview);
rb_raise(rb_eArgError, "samples are too long: %zd", n_samples_size);
}
parsed.n_samples = (int)n_samples_size;
} else {
rb_warn("unable to get a memory view. falls back to Ruby object");
if (rb_respond_to(*samples, id_length)) {
parsed.n_samples = NUM2INT(rb_funcall(*samples, id_length, 0));
} else {
rb_raise(rb_eArgError, "samples must respond to :length");
}
}
} else if (rb_respond_to(*samples, id_length)) {
parsed.n_samples = NUM2INT(rb_funcall(*samples, id_length, 0));
} else {
rb_raise(rb_eArgError, "samples must respond to :length or be a MemoryView of an array of float when n_samples is not given");
}
}
if (parsed.memview_exported) {
parsed.samples = (float *)parsed.memview.data;
} else {
parsed.samples = ALLOC_N(float, parsed.n_samples);
fill_samples_args args = {
parsed.samples,
samples,
parsed.n_samples,
};
int state;
rb_protect(fill_samples, (VALUE)&args, &state);
if (state) {
xfree(parsed.samples);
rb_jump_tag(state);
}
}
return parsed;
}
VALUE
release_samples(VALUE rb_parsed_args)
{
parsed_samples_t *parsed_args = (parsed_samples_t *)rb_parsed_args;
if (parsed_args->memview_exported) {
rb_memory_view_release(&parsed_args->memview);
} else {
xfree(parsed_args->samples);
}
*parsed_args = (parsed_samples_t){0};
return Qnil;
}
static VALUE
full_body(VALUE rb_args)
{
full_args *args = (full_args *)rb_args;
ruby_whisper *rw;
ruby_whisper_params *rwp;
GetContext(*args->context, rw);
TypedData_Get_Struct(*args->params, ruby_whisper_params, &ruby_whisper_params_type, rwp);
prepare_transcription(rwp, args->context, 1);
int result = whisper_full(rw->context, rwp->params, args->samples, args->n_samples);
return INT2NUM(result);
}
/*
* Run the entire model: PCM -> log mel spectrogram -> encoder -> decoder -> text
* Not thread safe for same context
@ -246,58 +459,17 @@ VALUE ruby_whisper_full(int argc, VALUE *argv, VALUE self)
rb_raise(rb_eArgError, "wrong number of arguments (given %d, expected 2..3)", argc);
}
ruby_whisper *rw;
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper, rw);
VALUE params = argv[0];
Data_Get_Struct(params, ruby_whisper_params, rwp);
VALUE samples = argv[1];
int n_samples;
rb_memory_view_t view;
const bool memory_view_available_p = rb_memory_view_available_p(samples);
if (argc == 3) {
n_samples = NUM2INT(argv[2]);
if (TYPE(samples) == T_ARRAY) {
if (RARRAY_LEN(samples) < n_samples) {
rb_raise(rb_eArgError, "samples length %ld is less than n_samples %d", RARRAY_LEN(samples), n_samples);
}
}
// Should check when samples.respond_to?(:length)?
} else {
if (TYPE(samples) == T_ARRAY) {
n_samples = RARRAY_LEN(samples);
} else if (memory_view_available_p) {
if (!rb_memory_view_get(samples, &view, RUBY_MEMORY_VIEW_SIMPLE)) {
view.obj = Qnil;
rb_raise(rb_eArgError, "unable to get a memory view");
}
n_samples = view.byte_size / view.item_size;
} else if (rb_respond_to(samples, id_length)) {
n_samples = NUM2INT(rb_funcall(samples, id_length, 0));
} else {
rb_raise(rb_eArgError, "samples must respond to :length or be a MemoryView of an array of flaot when n_samples is not given");
}
}
float * c_samples = (float *)malloc(n_samples * sizeof(float));
if (memory_view_available_p) {
c_samples = (float *)view.data;
} else {
if (TYPE(samples) == T_ARRAY) {
for (int i = 0; i < n_samples; i++) {
c_samples[i] = RFLOAT_VALUE(rb_ary_entry(samples, i));
}
} else {
// TODO: use rb_block_call
VALUE iter = rb_funcall(samples, id_to_enum, 1, rb_str_new2("each"));
for (int i = 0; i < n_samples; i++) {
// TODO: check if iter is exhausted and raise ArgumentError appropriately
VALUE sample = rb_funcall(iter, id_next, 0);
c_samples[i] = RFLOAT_VALUE(sample);
}
}
}
register_callbacks(rwp, &self);
const int result = whisper_full(rw->context, rwp->params, c_samples, n_samples);
VALUE n_samples = argc == 2 ? Qnil : argv[2];
struct parsed_samples_t parsed = parse_samples(&argv[1], &n_samples);
full_args args = {
&self,
&argv[0],
parsed.samples,
parsed.n_samples,
};
VALUE rb_result = rb_ensure(full_body, (VALUE)&args, release_samples, (VALUE)&parsed);
const int result = NUM2INT(rb_result);
if (0 == result) {
return self;
} else {
@ -305,6 +477,22 @@ VALUE ruby_whisper_full(int argc, VALUE *argv, VALUE self)
}
}
static VALUE
full_parallel_body(VALUE rb_args)
{
full_parallel_args *args = (full_parallel_args *)rb_args;
ruby_whisper *rw;
ruby_whisper_params *rwp;
GetContext(*args->context, rw);
TypedData_Get_Struct(*args->params, ruby_whisper_params, &ruby_whisper_params_type, rwp);
prepare_transcription(rwp, args->context, args->n_processors);
int result = whisper_full_parallel(rw->context, rwp->params, args->samples, args->n_samples, args->n_processors);
return INT2NUM(result);
}
/*
* Split the input audio in chunks and process each chunk separately using whisper_full_with_state()
* Result is stored in the default state of the context
@ -322,19 +510,11 @@ static VALUE
ruby_whisper_full_parallel(int argc, VALUE *argv,VALUE self)
{
if (argc < 2 || argc > 4) {
rb_raise(rb_eArgError, "wrong number of arguments (given %d, expected 2..3)", argc);
rb_raise(rb_eArgError, "wrong number of arguments (given %d, expected 2..4)", argc);
}
ruby_whisper *rw;
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper, rw);
VALUE params = argv[0];
Data_Get_Struct(params, ruby_whisper_params, rwp);
VALUE samples = argv[1];
int n_samples;
VALUE n_samples = argc == 2 ? Qnil : argv[2];
int n_processors;
rb_memory_view_t view;
const bool memory_view_available_p = rb_memory_view_available_p(samples);
switch (argc) {
case 2:
n_processors = 1;
@ -346,49 +526,16 @@ ruby_whisper_full_parallel(int argc, VALUE *argv,VALUE self)
n_processors = NUM2INT(argv[3]);
break;
}
if (argc >= 3 && !NIL_P(argv[2])) {
n_samples = NUM2INT(argv[2]);
if (TYPE(samples) == T_ARRAY) {
if (RARRAY_LEN(samples) < n_samples) {
rb_raise(rb_eArgError, "samples length %ld is less than n_samples %d", RARRAY_LEN(samples), n_samples);
}
}
// Should check when samples.respond_to?(:length)?
} else if (memory_view_available_p) {
if (!rb_memory_view_get(samples, &view, RUBY_MEMORY_VIEW_SIMPLE)) {
view.obj = Qnil;
rb_raise(rb_eArgError, "unable to get a memory view");
}
n_samples = view.byte_size / view.item_size;
} else {
if (TYPE(samples) == T_ARRAY) {
n_samples = RARRAY_LEN(samples);
} else if (rb_respond_to(samples, id_length)) {
n_samples = NUM2INT(rb_funcall(samples, id_length, 0));
} else {
rb_raise(rb_eArgError, "samples must respond to :length or be a MemoryView of an array of flaot when n_samples is not given");
}
}
float * c_samples = (float *)malloc(n_samples * sizeof(float));
if (memory_view_available_p) {
c_samples = (float *)view.data;
} else {
if (TYPE(samples) == T_ARRAY) {
for (int i = 0; i < n_samples; i++) {
c_samples[i] = RFLOAT_VALUE(rb_ary_entry(samples, i));
}
} else {
// FIXME: use rb_block_call
VALUE iter = rb_funcall(samples, id_to_enum, 1, rb_str_new2("each"));
for (int i = 0; i < n_samples; i++) {
// TODO: check if iter is exhausted and raise ArgumentError
VALUE sample = rb_funcall(iter, id_next, 0);
c_samples[i] = RFLOAT_VALUE(sample);
}
}
}
register_callbacks(rwp, &self);
const int result = whisper_full_parallel(rw->context, rwp->params, c_samples, n_samples, n_processors);
struct parsed_samples_t parsed = parse_samples(&argv[1], &n_samples);
const full_parallel_args args = {
&self,
&argv[0],
parsed.samples,
parsed.n_samples,
n_processors,
};
const VALUE rb_result = rb_ensure(full_parallel_body, (VALUE)&args, release_samples, (VALUE)&parsed);
const int result = NUM2INT(rb_result);
if (0 == result) {
return self;
} else {
@ -406,7 +553,7 @@ static VALUE
ruby_whisper_full_n_segments(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return INT2NUM(whisper_full_n_segments(rw->context));
}
@ -420,7 +567,7 @@ static VALUE
ruby_whisper_full_lang_id(VALUE self)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
return INT2NUM(whisper_full_lang_id(rw->context));
}
@ -445,10 +592,10 @@ static VALUE
ruby_whisper_full_get_segment_t0(VALUE self, VALUE i_segment)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
const int c_i_segment = ruby_whisper_full_check_segment_index(rw, i_segment);
const int64_t t0 = whisper_full_get_segment_t0(rw->context, c_i_segment);
return INT2NUM(t0);
return LONG2NUM(t0);
}
/*
@ -463,10 +610,10 @@ static VALUE
ruby_whisper_full_get_segment_t1(VALUE self, VALUE i_segment)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
const int c_i_segment = ruby_whisper_full_check_segment_index(rw, i_segment);
const int64_t t1 = whisper_full_get_segment_t1(rw->context, c_i_segment);
return INT2NUM(t1);
return LONG2NUM(t1);
}
/*
@ -481,7 +628,7 @@ static VALUE
ruby_whisper_full_get_segment_speaker_turn_next(VALUE self, VALUE i_segment)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
const int c_i_segment = ruby_whisper_full_check_segment_index(rw, i_segment);
const bool speaker_turn_next = whisper_full_get_segment_speaker_turn_next(rw->context, c_i_segment);
return speaker_turn_next ? Qtrue : Qfalse;
@ -499,7 +646,7 @@ static VALUE
ruby_whisper_full_get_segment_text(VALUE self, VALUE i_segment)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
const int c_i_segment = ruby_whisper_full_check_segment_index(rw, i_segment);
const char * text = whisper_full_get_segment_text(rw->context, c_i_segment);
return rb_str_new2(text);
@ -513,7 +660,7 @@ static VALUE
ruby_whisper_full_get_segment_no_speech_prob(VALUE self, VALUE i_segment)
{
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
const int c_i_segment = ruby_whisper_full_check_segment_index(rw, i_segment);
const float no_speech_prob = whisper_full_get_segment_no_speech_prob(rw->context, c_i_segment);
return DBL2NUM(no_speech_prob);
@ -524,7 +671,7 @@ ruby_whisper_full_get_segment_no_speech_prob(VALUE self, VALUE i_segment)
static VALUE
ruby_whisper_full_get_segment(VALUE self, VALUE i_segment)
{
return rb_whisper_segment_initialize(self, NUM2INT(i_segment));
return rb_whisper_segment_s_new(self, NUM2INT(i_segment));
}
/*
@ -554,11 +701,11 @@ ruby_whisper_each_segment(VALUE self)
}
ruby_whisper *rw;
Data_Get_Struct(self, ruby_whisper, rw);
GetContext(self, rw);
const int n_segments = whisper_full_n_segments(rw->context);
for (int i = 0; i < n_segments; ++i) {
rb_yield(rb_whisper_segment_initialize(self, i));
rb_yield(rb_whisper_segment_s_new(self, i));
}
return self;
@ -571,14 +718,16 @@ ruby_whisper_each_segment(VALUE self)
static VALUE
ruby_whisper_get_model(VALUE self)
{
return rb_whisper_model_initialize(self);
return rb_whisper_model_s_new(self);
}
void
VALUE
init_ruby_whisper_context(VALUE *mWhisper)
{
cContext = rb_define_class_under(*mWhisper, "Context", rb_cObject);
transcribe_option_names[0] = id_n_processors;
rb_define_alloc_func(cContext, ruby_whisper_allocate);
rb_define_method(cContext, "initialize", ruby_whisper_initialize, -1);
@ -605,9 +754,11 @@ init_ruby_whisper_context(VALUE *mWhisper)
rb_define_method(cContext, "full", ruby_whisper_full, -1);
rb_define_method(cContext, "full_parallel", ruby_whisper_full_parallel, -1);
// High leve
// High level
rb_define_method(cContext, "full_get_segment", ruby_whisper_full_get_segment, 1);
rb_define_method(cContext, "each_segment", ruby_whisper_each_segment, 0);
rb_define_method(cContext, "model", ruby_whisper_get_model, 0);
return cContext;
}
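A usage sketch of the Context methods defined above (the model path is hypothetical, and the full-params wrapper is assumed to be Whisper::Params from ruby_whisper_params.c): samples may be an Array of floats, a MemoryView-capable object, or anything responding to #each and #length; parse_samples and release_samples handle conversion and cleanup via rb_ensure.

ctx     = Whisper::Context.new("ggml-base.en.bin")
params  = Whisper::Params.new
samples = Array.new(16_000, 0.0)                       # one second of silence at 16 kHz

ctx.full(params, samples)                              # n_samples inferred from the Array
ctx.full_parallel(params, samples, nil, 4)             # nil n_samples, 4 processors (callbacks must be unset)
ctx.each_segment { |segment| p segment }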


@ -0,0 +1,163 @@
#include "ruby_whisper.h"
#define NUM_PARAMS 6
#define DEF_BOOLEAN_ATTR_METHOD(name) \
static VALUE \
ruby_whisper_context_params_get_ ## name(VALUE self) { \
ruby_whisper_context_params *rwcp; \
GetContextParams(self, rwcp); \
return rwcp->params.name ? Qtrue : Qfalse; \
} \
static VALUE \
ruby_whisper_context_params_set_ ## name(VALUE self, VALUE value) { \
ruby_whisper_context_params *rwcp; \
GetContextParams(self, rwcp); \
rwcp->params.name = RTEST(value); \
return value; \
}
#define DEF_INT_ATTR_METHOD(name) \
static VALUE \
ruby_whisper_context_params_get_ ## name(VALUE self) { \
ruby_whisper_context_params *rwcp; \
GetContextParams(self, rwcp); \
return INT2NUM(rwcp->params.name); \
} \
static VALUE \
ruby_whisper_context_params_set_ ## name(VALUE self, VALUE value) { \
ruby_whisper_context_params *rwcp; \
GetContextParams(self, rwcp); \
rwcp->params.name = NUM2INT(value); \
return value; \
}
#define DEFINE_PARAM(param_name, nth) \
id_ ## param_name = rb_intern(#param_name); \
param_names[nth] = id_ ## param_name; \
rb_define_method(cContextParams, #param_name, ruby_whisper_context_params_get_ ## param_name, 0); \
rb_define_method(cContextParams, #param_name "=", ruby_whisper_context_params_set_ ## param_name, 1);
VALUE cContextParams;
static ID param_names[NUM_PARAMS];
static ID id_use_gpu;
static ID id_flash_attn;
static ID id_gpu_device;
static ID id_dtw_token_timestamps;
static ID id_dtw_aheads_preset;
static ID id_dtw_n_top;
static size_t
ruby_whisper_context_params_memsize(const void *p)
{
const ruby_whisper_context_params *rwcp = (ruby_whisper_context_params *)p;
if (!rwcp) {
return 0;
}
return sizeof(ruby_whisper_context_params);
}
const rb_data_type_t ruby_whisper_context_params_type = {
"ruby_whisper_context_params",
{0, RUBY_DEFAULT_FREE, ruby_whisper_context_params_memsize,},
0, 0,
0
};
static VALUE
ruby_whisper_context_params_s_allocate(VALUE klass)
{
ruby_whisper_context_params *rwcp;
return TypedData_Make_Struct(klass, ruby_whisper_context_params, &ruby_whisper_context_params_type, rwcp);
}
DEF_BOOLEAN_ATTR_METHOD(use_gpu);
DEF_BOOLEAN_ATTR_METHOD(flash_attn);
DEF_INT_ATTR_METHOD(gpu_device);
DEF_BOOLEAN_ATTR_METHOD(dtw_token_timestamps);
DEF_INT_ATTR_METHOD(dtw_aheads_preset);
static VALUE
ruby_whisper_context_params_get_dtw_n_top(VALUE self) {
ruby_whisper_context_params *rwcp;
GetContextParams(self, rwcp);
int dtw_n_top = rwcp->params.dtw_n_top;
return dtw_n_top == -1 ? Qnil : INT2NUM(dtw_n_top);
}
static VALUE
ruby_whisper_context_params_set_dtw_n_top(VALUE self, VALUE value) {
ruby_whisper_context_params *rwcp;
GetContextParams(self, rwcp);
rwcp->params.dtw_n_top = NIL_P(value) ? -1 : NUM2INT(value);
return value;
}
#define SET_PARAM_IF_SAME(param_name) \
if (id == id_ ## param_name) { \
ruby_whisper_context_params_set_ ## param_name(self, value); \
continue; \
}
static VALUE
ruby_whisper_context_params_initialize(int argc, VALUE *argv, VALUE self)
{
ruby_whisper_context_params *rwcp;
TypedData_Get_Struct(self, ruby_whisper_context_params, &ruby_whisper_context_params_type, rwcp);
rwcp->params = whisper_context_default_params();
VALUE kw_hash;
rb_scan_args_kw(RB_SCAN_ARGS_KEYWORDS, argc, argv, ":", &kw_hash);
if (NIL_P(kw_hash)) {
return Qnil;
}
VALUE values[NUM_PARAMS] = {Qundef};
rb_get_kwargs(kw_hash, param_names, 0, NUM_PARAMS, values);
ID id;
VALUE value;
for (int i = 0; i < NUM_PARAMS; i++) {
id = param_names[i];
value = values[i];
if (value == Qundef) {
continue;
}
SET_PARAM_IF_SAME(use_gpu)
SET_PARAM_IF_SAME(flash_attn)
SET_PARAM_IF_SAME(gpu_device)
SET_PARAM_IF_SAME(dtw_token_timestamps)
SET_PARAM_IF_SAME(dtw_aheads_preset)
SET_PARAM_IF_SAME(dtw_n_top)
}
return Qnil;
}
#undef SET_PARAM_IF_SAME
void
init_ruby_whisper_context_params(VALUE *cContext)
{
cContextParams = rb_define_class_under(*cContext, "Params", rb_cObject);
rb_define_alloc_func(cContextParams, ruby_whisper_context_params_s_allocate);
rb_define_method(cContextParams, "initialize", ruby_whisper_context_params_initialize, -1);
DEFINE_PARAM(use_gpu, 0)
DEFINE_PARAM(flash_attn, 1)
DEFINE_PARAM(gpu_device, 2)
DEFINE_PARAM(dtw_token_timestamps, 3)
DEFINE_PARAM(dtw_aheads_preset, 4)
DEFINE_PARAM(dtw_n_top, 5)
}
#undef DEFINE_PARAM
#undef DEF_INT_ATTR_METHOD
#undef DEF_BOOLEAN_ATTR_METHOD
#undef NUM_PARAMS
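A usage sketch of the Whisper::Context::Params class defined above (model path hypothetical): the six keyword arguments map one-to-one onto whisper_context_params, dtw_n_top round-trips nil as -1, and the resulting object can be passed to Whisper::Context.new as the optional second argument.

cp = Whisper::Context::Params.new(use_gpu: false,
                                  dtw_token_timestamps: true,
                                  dtw_aheads_preset: Whisper::AHEADS_BASE_EN)
cp.gpu_device = 0
cp.dtw_n_top  = nil                                    # stored as -1 ("use whisper.cpp's default")
ctx = Whisper::Context.new("ggml-base.en.bin", cp)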


@ -1,22 +1,43 @@
#include <ruby.h>
#include "ruby_whisper.h"
extern const rb_data_type_t ruby_whisper_type;
extern VALUE cModel;
static void rb_whisper_model_mark(ruby_whisper_model *rwm) {
rb_gc_mark(rwm->context);
static void rb_whisper_model_mark(void *p) {
ruby_whisper_model *rwm = (ruby_whisper_model *)p;
if (rwm->context) {
rb_gc_mark(rwm->context);
}
}
static size_t
ruby_whisper_model_memsize(const void *p)
{
const ruby_whisper_model *rwm = (const ruby_whisper_model *)p;
size_t size = sizeof(rwm);
if (!rwm) {
return 0;
}
return size;
}
static const rb_data_type_t rb_whisper_model_type = {
"ruby_whisper_model",
{rb_whisper_model_mark, RUBY_DEFAULT_FREE, ruby_whisper_model_memsize,},
0, 0,
0
};
static VALUE ruby_whisper_model_allocate(VALUE klass) {
ruby_whisper_model *rwm;
rwm = ALLOC(ruby_whisper_model);
return Data_Wrap_Struct(klass, rb_whisper_model_mark, RUBY_DEFAULT_FREE, rwm);
return TypedData_Make_Struct(klass, ruby_whisper_model, &rb_whisper_model_type, rwm);
}
VALUE rb_whisper_model_initialize(VALUE context) {
VALUE rb_whisper_model_s_new(VALUE context) {
ruby_whisper_model *rwm;
const VALUE model = ruby_whisper_model_allocate(cModel);
Data_Get_Struct(model, ruby_whisper_model, rwm);
TypedData_Get_Struct(model, ruby_whisper_model, &rb_whisper_model_type, rwm);
rwm->context = context;
return model;
};
@ -29,9 +50,9 @@ static VALUE
ruby_whisper_model_n_vocab(VALUE self)
{
ruby_whisper_model *rwm;
Data_Get_Struct(self, ruby_whisper_model, rwm);
TypedData_Get_Struct(self, ruby_whisper_model, &rb_whisper_model_type, rwm);
ruby_whisper *rw;
Data_Get_Struct(rwm->context, ruby_whisper, rw);
GetContext(rwm->context, rw);
return INT2NUM(whisper_model_n_vocab(rw->context));
}
@ -43,9 +64,9 @@ static VALUE
ruby_whisper_model_n_audio_ctx(VALUE self)
{
ruby_whisper_model *rwm;
Data_Get_Struct(self, ruby_whisper_model, rwm);
TypedData_Get_Struct(self, ruby_whisper_model, &rb_whisper_model_type, rwm);
ruby_whisper *rw;
Data_Get_Struct(rwm->context, ruby_whisper, rw);
GetContext(rwm->context, rw);
return INT2NUM(whisper_model_n_audio_ctx(rw->context));
}
@ -57,9 +78,9 @@ static VALUE
ruby_whisper_model_n_audio_state(VALUE self)
{
ruby_whisper_model *rwm;
Data_Get_Struct(self, ruby_whisper_model, rwm);
TypedData_Get_Struct(self, ruby_whisper_model, &rb_whisper_model_type, rwm);
ruby_whisper *rw;
Data_Get_Struct(rwm->context, ruby_whisper, rw);
GetContext(rwm->context, rw);
return INT2NUM(whisper_model_n_audio_state(rw->context));
}
@ -71,9 +92,9 @@ static VALUE
ruby_whisper_model_n_audio_head(VALUE self)
{
ruby_whisper_model *rwm;
Data_Get_Struct(self, ruby_whisper_model, rwm);
TypedData_Get_Struct(self, ruby_whisper_model, &rb_whisper_model_type, rwm);
ruby_whisper *rw;
Data_Get_Struct(rwm->context, ruby_whisper, rw);
GetContext(rwm->context, rw);
return INT2NUM(whisper_model_n_audio_head(rw->context));
}
@ -85,9 +106,9 @@ static VALUE
ruby_whisper_model_n_audio_layer(VALUE self)
{
ruby_whisper_model *rwm;
Data_Get_Struct(self, ruby_whisper_model, rwm);
TypedData_Get_Struct(self, ruby_whisper_model, &rb_whisper_model_type, rwm);
ruby_whisper *rw;
Data_Get_Struct(rwm->context, ruby_whisper, rw);
GetContext(rwm->context, rw);
return INT2NUM(whisper_model_n_audio_layer(rw->context));
}
@ -99,9 +120,9 @@ static VALUE
ruby_whisper_model_n_text_ctx(VALUE self)
{
ruby_whisper_model *rwm;
Data_Get_Struct(self, ruby_whisper_model, rwm);
TypedData_Get_Struct(self, ruby_whisper_model, &rb_whisper_model_type, rwm);
ruby_whisper *rw;
Data_Get_Struct(rwm->context, ruby_whisper, rw);
GetContext(rwm->context, rw);
return INT2NUM(whisper_model_n_text_ctx(rw->context));
}
@ -113,9 +134,9 @@ static VALUE
ruby_whisper_model_n_text_state(VALUE self)
{
ruby_whisper_model *rwm;
Data_Get_Struct(self, ruby_whisper_model, rwm);
TypedData_Get_Struct(self, ruby_whisper_model, &rb_whisper_model_type, rwm);
ruby_whisper *rw;
Data_Get_Struct(rwm->context, ruby_whisper, rw);
GetContext(rwm->context, rw);
return INT2NUM(whisper_model_n_text_state(rw->context));
}
@ -127,9 +148,9 @@ static VALUE
ruby_whisper_model_n_text_head(VALUE self)
{
ruby_whisper_model *rwm;
Data_Get_Struct(self, ruby_whisper_model, rwm);
TypedData_Get_Struct(self, ruby_whisper_model, &rb_whisper_model_type, rwm);
ruby_whisper *rw;
Data_Get_Struct(rwm->context, ruby_whisper, rw);
GetContext(rwm->context, rw);
return INT2NUM(whisper_model_n_text_head(rw->context));
}
@ -141,9 +162,9 @@ static VALUE
ruby_whisper_model_n_text_layer(VALUE self)
{
ruby_whisper_model *rwm;
Data_Get_Struct(self, ruby_whisper_model, rwm);
TypedData_Get_Struct(self, ruby_whisper_model, &rb_whisper_model_type, rwm);
ruby_whisper *rw;
Data_Get_Struct(rwm->context, ruby_whisper, rw);
GetContext(rwm->context, rw);
return INT2NUM(whisper_model_n_text_layer(rw->context));
}
@ -155,9 +176,9 @@ static VALUE
ruby_whisper_model_n_mels(VALUE self)
{
ruby_whisper_model *rwm;
Data_Get_Struct(self, ruby_whisper_model, rwm);
TypedData_Get_Struct(self, ruby_whisper_model, &rb_whisper_model_type, rwm);
ruby_whisper *rw;
Data_Get_Struct(rwm->context, ruby_whisper, rw);
GetContext(rwm->context, rw);
return INT2NUM(whisper_model_n_mels(rw->context));
}
@ -169,9 +190,9 @@ static VALUE
ruby_whisper_model_ftype(VALUE self)
{
ruby_whisper_model *rwm;
Data_Get_Struct(self, ruby_whisper_model, rwm);
TypedData_Get_Struct(self, ruby_whisper_model, &rb_whisper_model_type, rwm);
ruby_whisper *rw;
Data_Get_Struct(rwm->context, ruby_whisper, rw);
GetContext(rwm->context, rw);
return INT2NUM(whisper_model_ftype(rw->context));
}
@ -183,9 +204,9 @@ static VALUE
ruby_whisper_model_type(VALUE self)
{
ruby_whisper_model *rwm;
Data_Get_Struct(self, ruby_whisper_model, rwm);
TypedData_Get_Struct(self, ruby_whisper_model, &rb_whisper_model_type, rwm);
ruby_whisper *rw;
Data_Get_Struct(rwm->context, ruby_whisper, rw);
GetContext(rwm->context, rw);
return rb_str_new2(whisper_model_type_readable(rw->context));
}
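A usage sketch of the Model accessors above (model path hypothetical; the Ruby method names are assumed to mirror the C accessors, i.e. n_vocab, n_audio_ctx, ..., ftype, type): every reader re-resolves the owning context through GetContext, so it raises "Not initialized" if the underlying whisper context is gone.

model = Whisper::Context.new("ggml-base.en.bin").model
puts model.n_vocab
puts model.n_audio_layer
puts model.n_mels
puts model.type                                        # e.g. "base"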


@ -1,9 +1,8 @@
#include <ruby.h>
#include "ruby_whisper.h"
#define BOOL_PARAMS_SETTER(self, prop, value) \
ruby_whisper_params *rwp; \
Data_Get_Struct(self, ruby_whisper_params, rwp); \
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp); \
if (value == Qfalse || value == Qnil) { \
rwp->params.prop = false; \
} else { \
@ -13,7 +12,7 @@
#define BOOL_PARAMS_GETTER(self, prop) \
ruby_whisper_params *rwp; \
Data_Get_Struct(self, ruby_whisper_params, rwp); \
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp); \
if (rwp->params.prop) { \
return Qtrue; \
} else { \
@ -26,13 +25,17 @@
rb_define_method(cParams, #param_name, ruby_whisper_params_get_ ## param_name, 0); \
rb_define_method(cParams, #param_name "=", ruby_whisper_params_set_ ## param_name, 1);
#define RUBY_WHISPER_PARAMS_PARAM_NAMES_COUNT 30
#define RUBY_WHISPER_PARAMS_PARAM_NAMES_COUNT 37
extern VALUE cParams;
extern VALUE cVADParams;
extern VALUE mWhisper;
extern ID id_call;
extern VALUE rb_whisper_segment_initialize(VALUE context, int index);
extern VALUE ruby_whisper_normalize_model_path(VALUE model_path);
extern VALUE rb_whisper_segment_s_new(VALUE context, int index);
extern const rb_data_type_t ruby_whisper_vad_params_type;
static ID param_names[RUBY_WHISPER_PARAMS_PARAM_NAMES_COUNT];
static ID id_language;
@ -43,9 +46,11 @@ static ID id_print_special;
static ID id_print_progress;
static ID id_print_realtime;
static ID id_print_timestamps;
static ID id_carry_initial_prompt;
static ID id_suppress_blank;
static ID id_suppress_nst;
static ID id_token_timestamps;
static ID id_max_len;
static ID id_split_on_word;
static ID id_initial_prompt;
static ID id_diarize;
@ -63,12 +68,19 @@ static ID id_new_segment_callback;
static ID id_new_segment_callback_user_data;
static ID id_progress_callback;
static ID id_progress_callback_user_data;
static ID id_encoder_begin_callback;
static ID id_encoder_begin_callback_user_data;
static ID id_abort_callback;
static ID id_abort_callback_user_data;
static ID id_vad;
static ID id_vad_model_path;
static ID id_vad_params;
static void
rb_whisper_callbcack_container_mark(ruby_whisper_callback_container *rwc)
{
if (rwc == NULL) return;
rb_gc_mark(rwc->user_data);
rb_gc_mark(rwc->callback);
rb_gc_mark(rwc->callbacks);
@ -100,7 +112,7 @@ static void new_segment_callback(struct whisper_context *ctx, struct whisper_sta
const int n_segments = whisper_full_n_segments_from_state(state);
for (int i = n_new; i > 0; i--) {
int i_segment = n_segments - i;
VALUE segment = rb_whisper_segment_initialize(*container->context, i_segment);
VALUE segment = rb_whisper_segment_s_new(*container->context, i_segment);
for (int j = 0; j < callbacks_len; j++) {
VALUE cb = rb_ary_entry(container->callbacks, j);
rb_funcall(cb, id_call, 1, segment);
@ -126,6 +138,33 @@ static void progress_callback(struct whisper_context *ctx, struct whisper_state
}
}
static bool encoder_begin_callback(struct whisper_context *ctx, struct whisper_state *state, void *user_data) {
const ruby_whisper_callback_container *container = (ruby_whisper_callback_container *)user_data;
bool is_aborted = false;
VALUE result;
// Currently the state argument is not supported because
// supporting it would require resolving GC-related problems.
if (!NIL_P(container->callback)) {
result = rb_funcall(container->callback, id_call, 3, *container->context, Qnil, container->user_data);
if (result == Qfalse) {
is_aborted = true;
}
}
const long callbacks_len = RARRAY_LEN(container->callbacks);
if (0 == callbacks_len) {
return !is_aborted;
}
for (int j = 0; j < callbacks_len; j++) {
VALUE cb = rb_ary_entry(container->callbacks, j);
result = rb_funcall(cb, id_call, 0);
if (result == Qfalse) {
is_aborted = true;
}
}
return !is_aborted;
}
static bool abort_callback(void * user_data) {
const ruby_whisper_callback_container *container = (ruby_whisper_callback_container *)user_data;
if (!NIL_P(container->callback)) {
@ -148,7 +187,36 @@ static bool abort_callback(void * user_data) {
return false;
}
void register_callbacks(ruby_whisper_params * rwp, VALUE * context) {
static void
check_thread_safety(ruby_whisper_params *rwp, VALUE *context, int n_processors)
{
if (n_processors == 1) {
return;
}
if (!NIL_P(rwp->new_segment_callback_container->callback) || 0 != RARRAY_LEN(rwp->new_segment_callback_container->callbacks)) {
rb_raise(rb_eRuntimeError, "new segment callback not supported on parallel transcription");
}
if (!NIL_P(rwp->progress_callback_container->callback) || 0 != RARRAY_LEN(rwp->progress_callback_container->callbacks)) {
rb_raise(rb_eRuntimeError, "progress callback not supported on parallel transcription");
}
if (!NIL_P(rwp->encoder_begin_callback_container->callback) || 0 != RARRAY_LEN(rwp->encoder_begin_callback_container->callbacks)) {
rb_raise(rb_eRuntimeError, "encoder begin callback not supported on parallel transcription");
}
if (!NIL_P(rwp->abort_callback_container->callback) || 0 != RARRAY_LEN(rwp->abort_callback_container->callbacks)) {
rb_raise(rb_eRuntimeError, "abort callback not supported on parallel transcription");
}
VALUE log_callback = rb_iv_get(mWhisper, "log_callback");
if (!NIL_P(log_callback)) {
rb_raise(rb_eRuntimeError, "log callback not supported for parallel transcription");
}
}
static void register_callbacks(ruby_whisper_params * rwp, VALUE * context) {
if (!NIL_P(rwp->new_segment_callback_container->callback) || 0 != RARRAY_LEN(rwp->new_segment_callback_container->callbacks)) {
rwp->new_segment_callback_container->context = context;
rwp->params.new_segment_callback = new_segment_callback;
@ -161,6 +229,12 @@ void register_callbacks(ruby_whisper_params * rwp, VALUE * context) {
rwp->params.progress_callback_user_data = rwp->progress_callback_container;
}
if (!NIL_P(rwp->encoder_begin_callback_container->callback) || 0 != RARRAY_LEN(rwp->encoder_begin_callback_container->callbacks)) {
rwp->encoder_begin_callback_container->context = context;
rwp->params.encoder_begin_callback = encoder_begin_callback;
rwp->params.encoder_begin_callback_user_data = rwp->encoder_begin_callback_container;
}
if (!NIL_P(rwp->abort_callback_container->callback) || 0 != RARRAY_LEN(rwp->abort_callback_container->callbacks)) {
rwp->abort_callback_container->context = context;
rwp->params.abort_callback = abort_callback;
@ -168,38 +242,104 @@ void register_callbacks(ruby_whisper_params * rwp, VALUE * context) {
}
}
void
rb_whisper_params_mark(ruby_whisper_params *rwp)
static void set_vad_params(ruby_whisper_params *rwp)
{
ruby_whisper_vad_params * rwvp;
TypedData_Get_Struct(rwp->vad_params, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
rwp->params.vad_params = rwvp->params;
}
/*
TODO: Set abort callback to trap SIGINT and SIGTERM
*/
void
prepare_transcription(ruby_whisper_params *rwp, VALUE *context, int n_processors)
{
check_thread_safety(rwp, context, n_processors);
register_callbacks(rwp, context);
set_vad_params(rwp);
}
void
rb_whisper_params_mark(void *p)
{
ruby_whisper_params *rwp = (ruby_whisper_params *)p;
rb_whisper_callbcack_container_mark(rwp->new_segment_callback_container);
rb_whisper_callbcack_container_mark(rwp->progress_callback_container);
rb_whisper_callbcack_container_mark(rwp->encoder_begin_callback_container);
rb_whisper_callbcack_container_mark(rwp->abort_callback_container);
rb_gc_mark(rwp->vad_params);
}
void
ruby_whisper_params_free(ruby_whisper_params *rwp)
{
if (rwp->params.language) {
ruby_xfree((void *)rwp->params.language);
}
if (rwp->params.initial_prompt) {
ruby_xfree((void *)rwp->params.initial_prompt);
}
if (rwp->params.vad_model_path) {
ruby_xfree((void *)rwp->params.vad_model_path);
}
xfree(rwp->new_segment_callback_container);
xfree(rwp->progress_callback_container);
xfree(rwp->encoder_begin_callback_container);
xfree(rwp->abort_callback_container);
}
void
rb_whisper_params_free(ruby_whisper_params *rwp)
rb_whisper_params_free(void *p)
{
ruby_whisper_params *rwp = (ruby_whisper_params *)p;
// How to free user_data and callback only when not referred to by others?
ruby_whisper_params_free(rwp);
free(rwp);
xfree(rwp);
}
static size_t
ruby_whisper_params_memsize(const void *p)
{
const ruby_whisper_params *rwp = (const ruby_whisper_params *)p;
return sizeof(ruby_whisper_params) + sizeof(rwp->params) + sizeof(rwp->vad_params);
}
const rb_data_type_t ruby_whisper_params_type = {
"ruby_whisper_params",
{
rb_whisper_params_mark,
rb_whisper_params_free,
ruby_whisper_params_memsize,
},
0, 0,
0
};
static VALUE
ruby_whisper_params_allocate(VALUE klass)
{
ruby_whisper_params *rwp;
rwp = ALLOC(ruby_whisper_params);
VALUE obj = TypedData_Make_Struct(klass, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
if (rwp->params.language != NULL) {
rwp->params.language = ruby_strdup(rwp->params.language);
}
if (rwp->params.initial_prompt != NULL) {
rwp->params.initial_prompt = ruby_strdup(rwp->params.initial_prompt);
}
if (rwp->params.vad_model_path != NULL) {
rwp->params.vad_model_path = ruby_strdup(rwp->params.vad_model_path);
}
rwp->diarize = false;
rwp->vad_params = TypedData_Wrap_Struct(cVADParams, &ruby_whisper_vad_params_type, (void *)&rwp->params.vad_params);
rwp->new_segment_callback_container = rb_whisper_callback_container_allocate();
rwp->progress_callback_container = rb_whisper_callback_container_allocate();
rwp->encoder_begin_callback_container = rb_whisper_callback_container_allocate();
rwp->abort_callback_container = rb_whisper_callback_container_allocate();
return Data_Wrap_Struct(klass, rb_whisper_params_mark, rb_whisper_params_free, rwp);
return obj;
}
/*
@ -212,11 +352,13 @@ static VALUE
ruby_whisper_params_set_language(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
ruby_xfree((void *)rwp->params.language);
rwp->params.language = NULL;
if (value == Qfalse || value == Qnil) {
rwp->params.language = "auto";
rwp->params.language = ruby_strdup("auto");
} else {
rwp->params.language = StringValueCStr(value);
rwp->params.language = ruby_strdup(StringValueCStr(value));
}
return value;
}
@ -228,7 +370,7 @@ static VALUE
ruby_whisper_params_get_language(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
if (rwp->params.language) {
return rb_str_new2(rwp->params.language);
} else {
@ -344,6 +486,7 @@ ruby_whisper_params_set_print_realtime(VALUE self, VALUE value)
}
/*
* If true, prints results from within whisper.cpp. (avoid it, use callback instead)
*
* call-seq:
* print_realtime -> bool
*/
@ -372,6 +515,26 @@ ruby_whisper_params_get_print_timestamps(VALUE self)
{
BOOL_PARAMS_GETTER(self, print_timestamps)
}
/*
* call-seq:
* carry_initial_prompt -> true or false
*/
static VALUE
ruby_whisper_params_get_carry_initial_prompt(VALUE self)
{
BOOL_PARAMS_GETTER(self, carry_initial_prompt)
}
/*
* call-seq:
* carry_initial_prompt = bool -> bool
*/
static VALUE
ruby_whisper_params_set_carry_initial_prompt(VALUE self, VALUE value)
{
BOOL_PARAMS_SETTER(self, carry_initial_prompt, value)
}
/*
* call-seq:
* suppress_blank = force_suppress -> force_suppress
@ -432,6 +595,33 @@ ruby_whisper_params_set_token_timestamps(VALUE self, VALUE value)
{
BOOL_PARAMS_SETTER(self, token_timestamps, value)
}
/*
* max segment length in characters.
*
* call-seq:
* max_len -> Integer
*/
static VALUE
ruby_whisper_params_get_max_len(VALUE self)
{
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return INT2NUM(rwp->params.max_len);
}
/*
* call-seq:
* max_len = length -> length
*/
static VALUE
ruby_whisper_params_set_max_len(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->params.max_len = NUM2INT(value);
return value;
}
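The max_len accessors above cap segment length in characters; in whisper.cpp this is usually paired with token-level timestamps and word splitting. A hedged sketch using only setters defined in this file:

    params = Whisper::Params.new
    params.token_timestamps = true   # needed for per-token timing
    params.split_on_word    = true   # split segments on word boundaries
    params.max_len          = 16     # at most ~16 characters per segment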
/*
* If true, split on word rather than on token (when used with max_len).
*
@ -465,7 +655,7 @@ static VALUE
ruby_whisper_params_get_initial_prompt(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return rwp->params.initial_prompt == NULL ? Qnil : rb_str_new2(rwp->params.initial_prompt);
}
/*
@ -476,8 +666,14 @@ static VALUE
ruby_whisper_params_set_initial_prompt(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
rwp->params.initial_prompt = StringValueCStr(value);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
ruby_xfree((void *)rwp->params.initial_prompt);
rwp->params.initial_prompt = NULL;
if (NIL_P(value)) {
rwp->params.initial_prompt = NULL;
} else {
rwp->params.initial_prompt = ruby_strdup(StringValueCStr(value));
}
return value;
}
/*
@ -490,7 +686,7 @@ static VALUE
ruby_whisper_params_get_diarize(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
if (rwp->diarize) {
return Qtrue;
} else {
@ -505,7 +701,7 @@ static VALUE
ruby_whisper_params_set_diarize(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
if (value == Qfalse || value == Qnil) {
rwp->diarize = false;
} else {
@ -524,7 +720,7 @@ static VALUE
ruby_whisper_params_get_offset(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return INT2NUM(rwp->params.offset_ms);
}
/*
@ -535,7 +731,7 @@ static VALUE
ruby_whisper_params_set_offset(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->params.offset_ms = NUM2INT(value);
return value;
}
@ -549,7 +745,7 @@ static VALUE
ruby_whisper_params_get_duration(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return INT2NUM(rwp->params.duration_ms);
}
/*
@ -560,7 +756,7 @@ static VALUE
ruby_whisper_params_set_duration(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->params.duration_ms = NUM2INT(value);
return value;
}
@ -575,7 +771,7 @@ static VALUE
ruby_whisper_params_get_max_text_tokens(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return INT2NUM(rwp->params.n_max_text_ctx);
}
/*
@ -586,7 +782,7 @@ static VALUE
ruby_whisper_params_set_max_text_tokens(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->params.n_max_text_ctx = NUM2INT(value);
return value;
}
@ -598,7 +794,7 @@ static VALUE
ruby_whisper_params_get_temperature(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return DBL2NUM(rwp->params.temperature);
}
/*
@ -609,7 +805,7 @@ static VALUE
ruby_whisper_params_set_temperature(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->params.temperature = RFLOAT_VALUE(value);
return value;
}
@ -623,7 +819,7 @@ static VALUE
ruby_whisper_params_get_max_initial_ts(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return DBL2NUM(rwp->params.max_initial_ts);
}
/*
@ -634,7 +830,7 @@ static VALUE
ruby_whisper_params_set_max_initial_ts(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->params.max_initial_ts = RFLOAT_VALUE(value);
return value;
}
@ -646,7 +842,7 @@ static VALUE
ruby_whisper_params_get_length_penalty(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return DBL2NUM(rwp->params.length_penalty);
}
/*
@ -657,7 +853,7 @@ static VALUE
ruby_whisper_params_set_length_penalty(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->params.length_penalty = RFLOAT_VALUE(value);
return value;
}
@ -669,7 +865,7 @@ static VALUE
ruby_whisper_params_get_temperature_inc(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return DBL2NUM(rwp->params.temperature_inc);
}
/*
@ -680,7 +876,7 @@ static VALUE
ruby_whisper_params_set_temperature_inc(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->params.temperature_inc = RFLOAT_VALUE(value);
return value;
}
@ -694,7 +890,7 @@ static VALUE
ruby_whisper_params_get_entropy_thold(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return DBL2NUM(rwp->params.entropy_thold);
}
/*
@ -705,7 +901,7 @@ static VALUE
ruby_whisper_params_set_entropy_thold(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->params.entropy_thold = RFLOAT_VALUE(value);
return value;
}
@ -717,7 +913,7 @@ static VALUE
ruby_whisper_params_get_logprob_thold(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return DBL2NUM(rwp->params.logprob_thold);
}
/*
@ -728,7 +924,7 @@ static VALUE
ruby_whisper_params_set_logprob_thold(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->params.logprob_thold = RFLOAT_VALUE(value);
return value;
}
@ -740,7 +936,7 @@ static VALUE
ruby_whisper_params_get_no_speech_thold(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return DBL2NUM(rwp->params.no_speech_thold);
}
/*
@ -751,7 +947,7 @@ static VALUE
ruby_whisper_params_set_no_speech_thold(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->params.no_speech_thold = RFLOAT_VALUE(value);
return value;
}
@ -759,7 +955,7 @@ static VALUE
ruby_whisper_params_get_new_segment_callback(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return rwp->new_segment_callback_container->callback;
}
/*
@ -776,7 +972,7 @@ static VALUE
ruby_whisper_params_set_new_segment_callback(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->new_segment_callback_container->callback = value;
return value;
}
@ -784,7 +980,7 @@ static VALUE
ruby_whisper_params_get_new_segment_callback_user_data(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return rwp->new_segment_callback_container->user_data;
}
/*
@ -797,7 +993,7 @@ static VALUE
ruby_whisper_params_set_new_segment_callback_user_data(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->new_segment_callback_container->user_data = value;
return value;
}
@ -805,7 +1001,7 @@ static VALUE
ruby_whisper_params_get_progress_callback(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return rwp->progress_callback_container->callback;
}
/*
@ -824,7 +1020,7 @@ static VALUE
ruby_whisper_params_set_progress_callback(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->progress_callback_container->callback = value;
return value;
}
@ -832,7 +1028,7 @@ static VALUE
ruby_whisper_params_get_progress_callback_user_data(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return rwp->progress_callback_container->user_data;
}
/*
@ -845,15 +1041,66 @@ static VALUE
ruby_whisper_params_set_progress_callback_user_data(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->progress_callback_container->user_data = value;
return value;
}
static VALUE
ruby_whisper_params_get_encoder_begin_callback(VALUE self)
{
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return rwp->encoder_begin_callback_container->callback;
}
/*
* Sets encoder begin callback, called when the encoder starts.
*
* params.encoder_begin_callback = ->(context, _, user_data) {
* # ...
* }
*
* call-seq:
* encoder_begin_callback = callback -> callback
*/
static VALUE
ruby_whisper_params_set_encoder_begin_callback(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->encoder_begin_callback_container->callback = value;
return value;
}
static VALUE
ruby_whisper_params_get_encoder_begin_callback_user_data(VALUE self)
{
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return rwp->encoder_begin_callback_container->user_data;
}
/*
* Sets user data passed to the last argument of encoder begin callback.
*
* call-seq:
* encoder_begin_callback_user_data = user_data -> user_data
*/
static VALUE
ruby_whisper_params_set_encoder_begin_callback_user_data(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->encoder_begin_callback_container->user_data = value;
return value;
}
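Together with the on_encoder_begin hook further down, these accessors let Ruby code observe when the encoder starts. A small sketch (the middle callback argument is unused, per the doc comment above; the hash passed as user_data is purely illustrative):

    params.encoder_begin_callback_user_data = { started_at: Time.now }
    params.encoder_begin_callback = ->(context, _state, user_data) {
      puts "encoder started after #{Time.now - user_data[:started_at]}s"
    }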
static VALUE
ruby_whisper_params_get_abort_callback(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return rwp->abort_callback_container->callback;
}
/*
@ -870,7 +1117,7 @@ static VALUE
ruby_whisper_params_set_abort_callback(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->abort_callback_container->callback = value;
return value;
}
@ -878,7 +1125,7 @@ static VALUE
ruby_whisper_params_get_abort_callback_user_data(VALUE self)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return rwp->abort_callback_container->user_data;
}
/*
@ -891,11 +1138,76 @@ static VALUE
ruby_whisper_params_set_abort_callback_user_data(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
Data_Get_Struct(self, ruby_whisper_params, rwp);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->abort_callback_container->user_data = value;
return value;
}
/*
* call-seq:
* vad = use_vad -> use_vad
*/
static VALUE
ruby_whisper_params_get_vad(VALUE self)
{
BOOL_PARAMS_GETTER(self, vad)
}
static VALUE
ruby_whisper_params_set_vad(VALUE self, VALUE value)
{
BOOL_PARAMS_SETTER(self, vad, value)
}
/*
* call-seq:
* vad_model_path = model_path -> model_path
*/
static VALUE
ruby_whisper_params_set_vad_model_path(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
ruby_xfree((void *)rwp->params.vad_model_path);
rwp->params.vad_model_path = NULL;
if (NIL_P(value)) {
rwp->params.vad_model_path = NULL;
return value;
}
VALUE path = ruby_whisper_normalize_model_path(value);
rwp->params.vad_model_path = ruby_strdup(StringValueCStr(path));
return value;
}
static VALUE
ruby_whisper_params_get_vad_model_path(VALUE self)
{
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return rwp->params.vad_model_path == NULL ? Qnil : rb_str_new2(rwp->params.vad_model_path);
}
/*
* call-seq:
* vad_params = params -> params
*/
static VALUE
ruby_whisper_params_set_vad_params(VALUE self, VALUE value)
{
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
rwp->vad_params = value;
return value;
}
static VALUE
ruby_whisper_params_get_vad_params(VALUE self)
{
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
return rwp->vad_params;
}
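The three VAD accessors above are all that is needed to turn on voice-activity detection from Ruby; set_vad_params() later copies the wrapped struct into whisper_full_params. A sketch (the "silero-v5.1.2" model name is an assumption about the gem's pre-converted VAD model, not something this hunk defines):

    params.vad            = true
    params.vad_model_path = "silero-v5.1.2"   # resolved via ruby_whisper_normalize_model_path
    params.vad_params     = Whisper::VAD::Params.new(threshold: 0.6)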
#define SET_PARAM_IF_SAME(param_name) \
if (id == id_ ## param_name) { \
ruby_whisper_params_set_ ## param_name(self, value); \
@ -905,7 +1217,6 @@ ruby_whisper_params_set_abort_callback_user_data(VALUE self, VALUE value)
static VALUE
ruby_whisper_params_initialize(int argc, VALUE *argv, VALUE self)
{
VALUE kw_hash;
VALUE values[RUBY_WHISPER_PARAMS_PARAM_NAMES_COUNT] = {Qundef};
VALUE value;
@ -918,8 +1229,8 @@ ruby_whisper_params_initialize(int argc, VALUE *argv, VALUE self)
return self;
}
rb_get_kwargs(kw_hash, &param_names, 0, RUBY_WHISPER_PARAMS_PARAM_NAMES_COUNT, &values);
Data_Get_Struct(self, ruby_whisper_params, rwp);
rb_get_kwargs(kw_hash, param_names, 0, RUBY_WHISPER_PARAMS_PARAM_NAMES_COUNT, values);
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
for (i = 0; i < RUBY_WHISPER_PARAMS_PARAM_NAMES_COUNT; i++) {
id = param_names[i];
@ -942,8 +1253,10 @@ ruby_whisper_params_initialize(int argc, VALUE *argv, VALUE self)
SET_PARAM_IF_SAME(suppress_blank)
SET_PARAM_IF_SAME(suppress_nst)
SET_PARAM_IF_SAME(token_timestamps)
SET_PARAM_IF_SAME(max_len)
SET_PARAM_IF_SAME(split_on_word)
SET_PARAM_IF_SAME(initial_prompt)
SET_PARAM_IF_SAME(carry_initial_prompt)
SET_PARAM_IF_SAME(offset)
SET_PARAM_IF_SAME(duration)
SET_PARAM_IF_SAME(max_text_tokens)
@ -958,8 +1271,13 @@ ruby_whisper_params_initialize(int argc, VALUE *argv, VALUE self)
SET_PARAM_IF_SAME(new_segment_callback_user_data)
SET_PARAM_IF_SAME(progress_callback)
SET_PARAM_IF_SAME(progress_callback_user_data)
SET_PARAM_IF_SAME(encoder_begin_callback)
SET_PARAM_IF_SAME(encoder_begin_callback_user_data)
SET_PARAM_IF_SAME(abort_callback)
SET_PARAM_IF_SAME(abort_callback_user_data)
SET_PARAM_IF_SAME(vad)
SET_PARAM_IF_SAME(vad_model_path)
SET_PARAM_IF_SAME(vad_params)
}
}
@ -981,10 +1299,10 @@ ruby_whisper_params_initialize(int argc, VALUE *argv, VALUE self)
static VALUE
ruby_whisper_params_on_new_segment(VALUE self)
{
ruby_whisper_params *rws;
Data_Get_Struct(self, ruby_whisper_params, rws);
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
const VALUE blk = rb_block_proc();
rb_ary_push(rws->new_segment_callback_container->callbacks, blk);
rb_ary_push(rwp->new_segment_callback_container->callbacks, blk);
return Qnil;
}
@ -1001,10 +1319,30 @@ ruby_whisper_params_on_new_segment(VALUE self)
static VALUE
ruby_whisper_params_on_progress(VALUE self)
{
ruby_whisper_params *rws;
Data_Get_Struct(self, ruby_whisper_params, rws);
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
const VALUE blk = rb_block_proc();
rb_ary_push(rws->progress_callback_container->callbacks, blk);
rb_ary_push(rwp->progress_callback_container->callbacks, blk);
return Qnil;
}
/*
* Hook called when the encoder starts.
*
* whisper.on_encoder_begin do
* # ...
* end
*
* call-seq:
* on_encoder_begin { ... }
*/
static VALUE
ruby_whisper_params_on_encoder_begin(VALUE self)
{
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
const VALUE blk = rb_block_proc();
rb_ary_push(rwp->encoder_begin_callback_container->callbacks, blk);
return Qnil;
}
@ -1025,10 +1363,10 @@ ruby_whisper_params_on_progress(VALUE self)
static VALUE
ruby_whisper_params_abort_on(VALUE self)
{
ruby_whisper_params *rws;
Data_Get_Struct(self, ruby_whisper_params, rws);
ruby_whisper_params *rwp;
TypedData_Get_Struct(self, ruby_whisper_params, &ruby_whisper_params_type, rwp);
const VALUE blk = rb_block_proc();
rb_ary_push(rws->abort_callback_container->callbacks, blk);
rb_ary_push(rwp->abort_callback_container->callbacks, blk);
return Qnil;
}
@ -1051,27 +1389,35 @@ init_ruby_whisper_params(VALUE *mWhisper)
DEFINE_PARAM(suppress_blank, 8)
DEFINE_PARAM(suppress_nst, 9)
DEFINE_PARAM(token_timestamps, 10)
DEFINE_PARAM(split_on_word, 11)
DEFINE_PARAM(initial_prompt, 12)
DEFINE_PARAM(diarize, 13)
DEFINE_PARAM(offset, 14)
DEFINE_PARAM(duration, 15)
DEFINE_PARAM(max_text_tokens, 16)
DEFINE_PARAM(temperature, 17)
DEFINE_PARAM(max_initial_ts, 18)
DEFINE_PARAM(length_penalty, 19)
DEFINE_PARAM(temperature_inc, 20)
DEFINE_PARAM(entropy_thold, 21)
DEFINE_PARAM(logprob_thold, 22)
DEFINE_PARAM(no_speech_thold, 23)
DEFINE_PARAM(new_segment_callback, 24)
DEFINE_PARAM(new_segment_callback_user_data, 25)
DEFINE_PARAM(progress_callback, 26)
DEFINE_PARAM(progress_callback_user_data, 27)
DEFINE_PARAM(abort_callback, 28)
DEFINE_PARAM(abort_callback_user_data, 29)
DEFINE_PARAM(max_len, 11)
DEFINE_PARAM(split_on_word, 12)
DEFINE_PARAM(initial_prompt, 13)
DEFINE_PARAM(carry_initial_prompt, 14)
DEFINE_PARAM(diarize, 15)
DEFINE_PARAM(offset, 16)
DEFINE_PARAM(duration, 17)
DEFINE_PARAM(max_text_tokens, 18)
DEFINE_PARAM(temperature, 19)
DEFINE_PARAM(max_initial_ts, 20)
DEFINE_PARAM(length_penalty, 21)
DEFINE_PARAM(temperature_inc, 22)
DEFINE_PARAM(entropy_thold, 23)
DEFINE_PARAM(logprob_thold, 24)
DEFINE_PARAM(no_speech_thold, 25)
DEFINE_PARAM(new_segment_callback, 26)
DEFINE_PARAM(new_segment_callback_user_data, 27)
DEFINE_PARAM(progress_callback, 28)
DEFINE_PARAM(progress_callback_user_data, 29)
DEFINE_PARAM(encoder_begin_callback, 30)
DEFINE_PARAM(encoder_begin_callback_user_data, 31)
DEFINE_PARAM(abort_callback, 32)
DEFINE_PARAM(abort_callback_user_data, 33)
DEFINE_PARAM(vad, 34)
DEFINE_PARAM(vad_model_path, 35)
DEFINE_PARAM(vad_params, 36)
rb_define_method(cParams, "on_new_segment", ruby_whisper_params_on_new_segment, 0);
rb_define_method(cParams, "on_progress", ruby_whisper_params_on_progress, 0);
rb_define_method(cParams, "on_encoder_begin", ruby_whisper_params_on_encoder_begin, 0);
rb_define_method(cParams, "abort_on", ruby_whisper_params_abort_on, 0);
}
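Because DEFINE_PARAM registers each accessor name as a keyword accepted by #initialize, every attribute above (including the newly added max_len, carry_initial_prompt, encoder_begin_* and vad* entries) can be passed at construction time. A hedged sketch, assuming the usual Whisper::Params constant:

    params = Whisper::Params.new(
      language:             "en",
      carry_initial_prompt: true,
      max_len:              32,
      vad:                  true,
      vad_model_path:       "silero-v5.1.2"
    )
    params.on_encoder_begin { puts "encoder starting" }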

View File

@ -1,28 +1,63 @@
#include <ruby.h>
#include "ruby_whisper.h"
#define N_KEY_NAMES 6
extern ID id___method__;
extern ID id_to_enum;
static VALUE sym_start_time;
static VALUE sym_end_time;
static VALUE sym_text;
static VALUE sym_no_speech_prob;
static VALUE sym_speaker_turn_next;
static VALUE sym_n_tokens;
extern const rb_data_type_t ruby_whisper_type;
extern VALUE cSegment;
extern VALUE ruby_whisper_token_s_init(struct whisper_context *context, int i_segment, int index);
static void
rb_whisper_segment_mark(ruby_whisper_segment *rws)
rb_whisper_segment_mark(void *p)
{
ruby_whisper_segment *rws = (ruby_whisper_segment *)p;
rb_gc_mark(rws->context);
}
static size_t
ruby_whisper_segment_memsize(const void *p)
{
const ruby_whisper_segment *rws = (const ruby_whisper_segment *)p;
size_t size = sizeof(rws);
if (!rws) {
return 0;
}
if (rws->index) {
size += sizeof(rws->index);
}
return size;
}
static const rb_data_type_t ruby_whisper_segment_type = {
"ruby_whisper_segment",
{rb_whisper_segment_mark, RUBY_DEFAULT_FREE, ruby_whisper_segment_memsize,},
0, 0,
0
};
VALUE
ruby_whisper_segment_allocate(VALUE klass)
{
ruby_whisper_segment *rws;
rws = ALLOC(ruby_whisper_segment);
return Data_Wrap_Struct(klass, rb_whisper_segment_mark, RUBY_DEFAULT_FREE, rws);
return TypedData_Make_Struct(klass, ruby_whisper_segment, &ruby_whisper_segment_type, rws);
}
VALUE
rb_whisper_segment_initialize(VALUE context, int index)
rb_whisper_segment_s_new(VALUE context, int index)
{
ruby_whisper_segment *rws;
const VALUE segment = ruby_whisper_segment_allocate(cSegment);
Data_Get_Struct(segment, ruby_whisper_segment, rws);
TypedData_Get_Struct(segment, ruby_whisper_segment, &ruby_whisper_segment_type, rws);
rws->context = context;
rws->index = index;
return segment;
@ -38,12 +73,12 @@ static VALUE
ruby_whisper_segment_get_start_time(VALUE self)
{
ruby_whisper_segment *rws;
Data_Get_Struct(self, ruby_whisper_segment, rws);
TypedData_Get_Struct(self, ruby_whisper_segment, &ruby_whisper_segment_type, rws);
ruby_whisper *rw;
Data_Get_Struct(rws->context, ruby_whisper, rw);
GetContext(rws->context, rw);
const int64_t t0 = whisper_full_get_segment_t0(rw->context, rws->index);
// able to multiply 10 without overflow because to_timestamp() in whisper.cpp does it
return INT2NUM(t0 * 10);
return LONG2NUM(t0 * 10);
}
/*
@ -56,12 +91,12 @@ static VALUE
ruby_whisper_segment_get_end_time(VALUE self)
{
ruby_whisper_segment *rws;
Data_Get_Struct(self, ruby_whisper_segment, rws);
TypedData_Get_Struct(self, ruby_whisper_segment, &ruby_whisper_segment_type, rws);
ruby_whisper *rw;
Data_Get_Struct(rws->context, ruby_whisper, rw);
GetContext(rws->context, rw);
const int64_t t1 = whisper_full_get_segment_t1(rw->context, rws->index);
// able to multiply 10 without overflow because to_timestamp() in whisper.cpp does it
return INT2NUM(t1 * 10);
return LONG2NUM(t1 * 10);
}
/*
@ -74,9 +109,9 @@ static VALUE
ruby_whisper_segment_get_speaker_turn_next(VALUE self)
{
ruby_whisper_segment *rws;
Data_Get_Struct(self, ruby_whisper_segment, rws);
TypedData_Get_Struct(self, ruby_whisper_segment, &ruby_whisper_segment_type, rws);
ruby_whisper *rw;
Data_Get_Struct(rws->context, ruby_whisper, rw);
GetContext(rws->context, rw);
return whisper_full_get_segment_speaker_turn_next(rw->context, rws->index) ? Qtrue : Qfalse;
}
@ -88,9 +123,9 @@ static VALUE
ruby_whisper_segment_get_text(VALUE self)
{
ruby_whisper_segment *rws;
Data_Get_Struct(self, ruby_whisper_segment, rws);
TypedData_Get_Struct(self, ruby_whisper_segment, &ruby_whisper_segment_type, rws);
ruby_whisper *rw;
Data_Get_Struct(rws->context, ruby_whisper, rw);
GetContext(rws->context, rw);
const char * text = whisper_full_get_segment_text(rw->context, rws->index);
return rb_str_new2(text);
}
@ -103,21 +138,147 @@ static VALUE
ruby_whisper_segment_get_no_speech_prob(VALUE self)
{
ruby_whisper_segment *rws;
Data_Get_Struct(self, ruby_whisper_segment, rws);
TypedData_Get_Struct(self, ruby_whisper_segment, &ruby_whisper_segment_type, rws);
ruby_whisper *rw;
Data_Get_Struct(rws->context, ruby_whisper, rw);
GetContext(rws->context, rw);
return DBL2NUM(whisper_full_get_segment_no_speech_prob(rw->context, rws->index));
}
/*
* Get number of tokens in the segment
*
* call-seq:
* n_tokens -> Integer
*/
static VALUE
ruby_whisper_segment_get_n_tokens(VALUE self)
{
ruby_whisper_segment *rws;
TypedData_Get_Struct(self, ruby_whisper_segment, &ruby_whisper_segment_type, rws);
ruby_whisper *rw;
GetContext(rws->context, rw);
return INT2NUM(whisper_full_n_tokens(rw->context, rws->index));
}
/*
* Yields each Whisper::Token:
*
* whisper.each_segment.first.each_token do |token|
* p token
* end
*
* Returns an Enumerator if no block is given:
*
* whisper.each_segment.first.each_token.to_a # => [#<Whisper::Token>, ...]
*
* call-seq:
* each_token {|token| ... }
* each_token -> Enumerator
*/
static VALUE
ruby_whisper_segment_each_token(VALUE self)
{
if (!rb_block_given_p()) {
const VALUE method_name = rb_funcall(self, id___method__, 0);
return rb_funcall(self, id_to_enum, 1, method_name);
}
ruby_whisper_segment *rws;
TypedData_Get_Struct(self, ruby_whisper_segment, &ruby_whisper_segment_type, rws);
ruby_whisper *rw;
GetContext(rws->context, rw);
const int n_tokens = whisper_full_n_tokens(rw->context, rws->index);
for (int i = 0; i < n_tokens; ++i) {
rb_yield(ruby_whisper_token_s_init(rw->context, rws->index, i));
}
return self;
}
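each_token behaves as the doc comment above describes: with a block it yields Whisper::Token objects built by ruby_whisper_token_s_init, without one it returns an Enumerator. Continuing the earlier sketch's whisper context:

    segment = whisper.each_segment.first
    segment.each_token { |token| printf("%s (p=%.2f)\n", token.text, token.probability) }
    segment.each_token.to_a.length == segment.n_tokens   # => true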
/*
* call-seq:
* deconstruct_keys(keys) -> hash
*
* Possible keys: :start_time, :end_time, :text, :no_speech_prob, :speaker_turn_next, :n_tokens
*
* whisper.each_segment do |segment|
* segment => {start_time:, end_time:, text:, no_speech_prob:, speaker_turn_next:}
*
* puts "[#{start_time} --> #{end_time}] #{text} (no speech prob: #{no_speech_prob}#{speaker_turn_next ? ', speaker turns next' : ''})"
* end
*/
static VALUE
ruby_whisper_segment_deconstruct_keys(VALUE self, VALUE keys)
{
ruby_whisper_segment *rws;
TypedData_Get_Struct(self, ruby_whisper_segment, &ruby_whisper_segment_type, rws);
ruby_whisper *rw;
GetContext(rws->context, rw);
VALUE hash = rb_hash_new();
long n_keys;
if (NIL_P(keys)) {
keys = rb_ary_new3(
N_KEY_NAMES,
sym_start_time,
sym_end_time,
sym_text,
sym_no_speech_prob,
sym_speaker_turn_next,
sym_n_tokens
);
n_keys = N_KEY_NAMES;
} else {
n_keys = RARRAY_LEN(keys);
if (n_keys > N_KEY_NAMES) {
return hash;
}
}
for (int i = 0; i < n_keys; i++) {
VALUE key = rb_ary_entry(keys, i);
if (key == sym_start_time) {
rb_hash_aset(hash, key, ruby_whisper_segment_get_start_time(self));
}
if (key == sym_end_time) {
rb_hash_aset(hash, key, ruby_whisper_segment_get_end_time(self));
}
if (key == sym_text) {
rb_hash_aset(hash, key, ruby_whisper_segment_get_text(self));
}
if (key == sym_no_speech_prob) {
rb_hash_aset(hash, key, ruby_whisper_segment_get_no_speech_prob(self));
}
if (key == sym_speaker_turn_next) {
rb_hash_aset(hash, key, ruby_whisper_segment_get_speaker_turn_next(self));
}
if (key == sym_n_tokens) {
rb_hash_aset(hash, key, ruby_whisper_segment_get_n_tokens(self));
}
}
return hash;
}
void
init_ruby_whisper_segment(VALUE *mWhisper, VALUE *cContext)
init_ruby_whisper_segment(VALUE *mWhisper)
{
cSegment = rb_define_class_under(*mWhisper, "Segment", rb_cObject);
sym_start_time = ID2SYM(rb_intern("start_time"));
sym_end_time = ID2SYM(rb_intern("end_time"));
sym_text = ID2SYM(rb_intern("text"));
sym_no_speech_prob = ID2SYM(rb_intern("no_speech_prob"));
sym_speaker_turn_next = ID2SYM(rb_intern("speaker_turn_next"));
sym_n_tokens = ID2SYM(rb_intern("n_tokens"));
rb_define_alloc_func(cSegment, ruby_whisper_segment_allocate);
rb_define_method(cSegment, "start_time", ruby_whisper_segment_get_start_time, 0);
rb_define_method(cSegment, "end_time", ruby_whisper_segment_get_end_time, 0);
rb_define_method(cSegment, "speaker_next_turn?", ruby_whisper_segment_get_speaker_turn_next, 0);
rb_define_method(cSegment, "speaker_turn_next?", ruby_whisper_segment_get_speaker_turn_next, 0);
rb_define_method(cSegment, "text", ruby_whisper_segment_get_text, 0);
rb_define_method(cSegment, "no_speech_prob", ruby_whisper_segment_get_no_speech_prob, 0);
rb_define_method(cSegment, "n_tokens", ruby_whisper_segment_get_n_tokens, 0);
rb_define_method(cSegment, "each_token", ruby_whisper_segment_each_token, 0);
rb_define_method(cSegment, "deconstruct_keys", ruby_whisper_segment_deconstruct_keys, 1);
}
#undef N_KEY_NAMES
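With deconstruct_keys in place, segments can be destructured with Ruby 3 pattern matching, as in the doc comment above:

    whisper.each_segment do |segment|
      segment => {start_time:, end_time:, text:, n_tokens:}
      puts format("[%d --> %d] %s (%d tokens)", start_time, end_time, text, n_tokens)
    end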

View File

@ -0,0 +1,371 @@
#include "ruby_whisper.h"
#define N_KEY_NAMES 11
extern VALUE cToken;
extern const rb_data_type_t ruby_whisper_type;
static VALUE sym_id;
static VALUE sym_tid;
static VALUE sym_probability;
static VALUE sym_log_probability;
static VALUE sym_pt;
static VALUE sym_ptsum;
static VALUE sym_t_dtw;
static VALUE sym_voice_length;
static VALUE sym_start_time;
static VALUE sym_end_time;
static VALUE sym_text;
static size_t
ruby_whisper_token_memsize(const void *p)
{
const ruby_whisper_token *rwt = (const ruby_whisper_token *)p;
if (!rwt) {
return 0;
}
size_t size = sizeof(*rwt);
if (rwt->token_data) {
size += sizeof(*rwt->token_data);
}
return size;
}
static void
ruby_whisper_token_mark(void *p)
{
ruby_whisper_token *rwt = (ruby_whisper_token *)p;
rb_gc_mark(rwt->text);
}
static void
ruby_whisper_token_free(void *p)
{
ruby_whisper_token *rwt = (ruby_whisper_token *)p;
if (rwt->token_data) {
xfree(rwt->token_data);
rwt->token_data = NULL;
}
xfree(rwt);
}
static const rb_data_type_t ruby_whisper_token_type = {
"ruby_whisper_token",
{ruby_whisper_token_mark, ruby_whisper_token_free, ruby_whisper_token_memsize,},
0, 0,
0
};
static VALUE
ruby_whisper_token_allocate(VALUE klass)
{
ruby_whisper_token *rwt;
VALUE token = TypedData_Make_Struct(klass, ruby_whisper_token, &ruby_whisper_token_type, rwt);
rwt->token_data = NULL;
rwt->text = Qnil;
return token;
}
VALUE
ruby_whisper_token_s_init(struct whisper_context *context, int i_segment, int i_token)
{
const VALUE token = ruby_whisper_token_allocate(cToken);
ruby_whisper_token *rwt;
TypedData_Get_Struct(token, ruby_whisper_token, &ruby_whisper_token_type, rwt);
rwt->token_data = ALLOC(whisper_token_data);
*(rwt->token_data) = whisper_full_get_token_data(context, i_segment, i_token);
rwt->text = rb_str_new2(whisper_full_get_token_text(context, i_segment, i_token));
return token;
}
/*
* Token ID.
*
* call-seq:
* id -> Integer
*/
static VALUE
ruby_whisper_token_get_id(VALUE self)
{
ruby_whisper_token *rwt;
GetToken(self, rwt);
return INT2NUM(rwt->token_data->id);
}
/*
* Forced timestamp token ID.
*
* call-seq:
* tid -> Integer
*/
static VALUE
ruby_whisper_token_get_tid(VALUE self)
{
ruby_whisper_token *rwt;
GetToken(self, rwt);
return INT2NUM(rwt->token_data->tid);
}
/*
* Probability of the token.
*
* call-seq:
* probability -> Float
*/
static VALUE
ruby_whisper_token_get_p(VALUE self)
{
ruby_whisper_token *rwt;
GetToken(self, rwt);
return DBL2NUM(rwt->token_data->p);
}
/*
* Log probability of the token.
*
* call-seq:
* log_probability -> Float
*/
static VALUE
ruby_whisper_token_get_plog(VALUE self)
{
ruby_whisper_token *rwt;
GetToken(self, rwt);
return DBL2NUM(rwt->token_data->plog);
}
/*
* Probability of the timestamp token.
*
* call-seq:
* pt -> Float
*/
static VALUE
ruby_whisper_token_get_pt(VALUE self)
{
ruby_whisper_token *rwt;
GetToken(self, rwt);
return DBL2NUM(rwt->token_data->pt);
}
/*
* Sum of probability of all timestamp tokens.
*
* call-seq:
* ptsum -> Float
*/
static VALUE
ruby_whisper_token_get_ptsum(VALUE self)
{
ruby_whisper_token *rwt;
GetToken(self, rwt);
return DBL2NUM(rwt->token_data->ptsum);
}
/*
* [EXPERIMENTAL] Token-level timestamps with DTW
*
* Do not use if you haven't computed token-level timestamps with dtw.
* Roughly corresponds to the moment in audio in which the token was output.
*
* call-seq:
* t_dtw -> Integer
*/
static VALUE
ruby_whisper_token_get_t_dtw(VALUE self)
{
ruby_whisper_token *rwt;
GetToken(self, rwt);
return LONG2NUM(rwt->token_data->t_dtw);
}
/*
* Voice length of the token.
*
* call-seq:
* voice_length -> Float
*/
static VALUE
ruby_whisper_token_get_vlen(VALUE self)
{
ruby_whisper_token *rwt;
GetToken(self, rwt);
return DBL2NUM(rwt->token_data->vlen);
}
/*
* Get the token text of the token.
*
* call-seq:
* text -> String
*/
static VALUE
ruby_whisper_token_get_text(VALUE self)
{
ruby_whisper_token *rwt;
GetToken(self, rwt);
return rwt->text;
}
/*
* Start time of the token.
*
* Token-level timestamp data.
* Do not use if you haven't computed token-level timestamps.
*
* call-seq:
* start_time -> Integer
*/
static VALUE
ruby_whisper_token_get_start_time(VALUE self)
{
ruby_whisper_token *rwt;
GetToken(self, rwt);
return LONG2NUM(rwt->token_data->t0 * 10);
}
/*
* End time of the token.
*
* Token-level timestamp data.
* Do not use if you haven't computed token-level timestamps.
*
* call-seq:
* end_time -> Integer
*/
static VALUE
ruby_whisper_token_get_end_time(VALUE self)
{
ruby_whisper_token *rwt;
GetToken(self, rwt);
return LONG2NUM(rwt->token_data->t1 * 10);
}
/*
* call-seq:
* deconstruct_keys(keys) -> hash
*
* Possible keys: :id, :tid, :probability, :log_probability, :pt, :ptsum,
* :t_dtw, :voice_length, :start_time, :end_time, :text
* segment.each_token do |token|
* token => {text:, probability:}
puts "#{text} (#{probability})"
* end
*/
static VALUE ruby_whisper_token_deconstruct_keys(VALUE self, VALUE keys)
{
ruby_whisper_token *rwt;
GetToken(self, rwt);
VALUE hash = rb_hash_new();
long n_keys = 0;
if (NIL_P(keys)) {
keys = rb_ary_new3(
N_KEY_NAMES,
sym_id,
sym_tid,
sym_probability,
sym_log_probability,
sym_pt,
sym_ptsum,
sym_t_dtw,
sym_voice_length,
sym_start_time,
sym_end_time,
sym_text
);
n_keys = N_KEY_NAMES;
} else {
n_keys = RARRAY_LEN(keys);
if (n_keys > N_KEY_NAMES) {
return hash;
}
}
for (int i = 0; i < n_keys; i++) {
VALUE key = rb_ary_entry(keys, i);
if (key == sym_start_time) {
rb_hash_aset(hash, key, ruby_whisper_token_get_start_time(self));
continue;
}
if (key == sym_end_time) {
rb_hash_aset(hash, key, ruby_whisper_token_get_end_time(self));
continue;
}
if (key == sym_text) {
rb_hash_aset(hash, key, ruby_whisper_token_get_text(self));
continue;
}
if (key == sym_probability) {
rb_hash_aset(hash, key, ruby_whisper_token_get_p(self));
continue;
}
if (key == sym_id) {
rb_hash_aset(hash, key, ruby_whisper_token_get_id(self));
continue;
}
if (key == sym_tid) {
rb_hash_aset(hash, key, ruby_whisper_token_get_tid(self));
continue;
}
if (key == sym_log_probability) {
rb_hash_aset(hash, key, ruby_whisper_token_get_plog(self));
continue;
}
if (key == sym_pt) {
rb_hash_aset(hash, key, ruby_whisper_token_get_pt(self));
continue;
}
if (key == sym_ptsum) {
rb_hash_aset(hash, key, ruby_whisper_token_get_ptsum(self));
continue;
}
if (key == sym_t_dtw) {
rb_hash_aset(hash, key, ruby_whisper_token_get_t_dtw(self));
continue;
}
if (key == sym_voice_length) {
rb_hash_aset(hash, key, ruby_whisper_token_get_vlen(self));
continue;
}
}
return hash;
}
void
init_ruby_whisper_token(VALUE *mWhisper)
{
cToken = rb_define_class_under(*mWhisper, "Token", rb_cObject);
rb_define_alloc_func(cToken, ruby_whisper_token_allocate);
sym_id = ID2SYM(rb_intern("id"));
sym_tid = ID2SYM(rb_intern("tid"));
sym_probability = ID2SYM(rb_intern("probability"));
sym_log_probability = ID2SYM(rb_intern("log_probability"));
sym_pt = ID2SYM(rb_intern("pt"));
sym_ptsum = ID2SYM(rb_intern("ptsum"));
sym_t_dtw = ID2SYM(rb_intern("t_dtw"));
sym_voice_length = ID2SYM(rb_intern("voice_length"));
sym_start_time = ID2SYM(rb_intern("start_time"));
sym_end_time = ID2SYM(rb_intern("end_time"));
sym_text = ID2SYM(rb_intern("text"));
rb_define_method(cToken, "id", ruby_whisper_token_get_id, 0);
rb_define_method(cToken, "tid", ruby_whisper_token_get_tid, 0);
rb_define_method(cToken, "probability", ruby_whisper_token_get_p, 0);
rb_define_method(cToken, "log_probability", ruby_whisper_token_get_plog, 0);
rb_define_method(cToken, "pt", ruby_whisper_token_get_pt, 0);
rb_define_method(cToken, "ptsum", ruby_whisper_token_get_ptsum, 0);
rb_define_method(cToken, "t_dtw", ruby_whisper_token_get_t_dtw, 0);
rb_define_method(cToken, "voice_length", ruby_whisper_token_get_vlen, 0);
rb_define_method(cToken, "start_time", ruby_whisper_token_get_start_time, 0);
rb_define_method(cToken, "end_time", ruby_whisper_token_get_end_time, 0);
rb_define_method(cToken, "text", ruby_whisper_token_get_text, 0);
rb_define_method(cToken, "deconstruct_keys", ruby_whisper_token_deconstruct_keys, 1);
}
#undef N_KEY_NAMES
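Tokens get the same pattern-matching treatment; note that start_time and end_time are only meaningful when token-level timestamps (or DTW) were enabled on the params. A sketch:

    segment.each_token do |token|
      token => {text:, probability:, start_time:, end_time:}
      puts "#{text} [#{start_time}..#{end_time}] p=#{probability.round(3)}"
    end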

View File

@ -1,4 +1,3 @@
#include <ruby.h>
#include "ruby_whisper.h"
#include "common-whisper.h"
#include <string>
@ -8,11 +7,16 @@
extern "C" {
#endif
extern const rb_data_type_t ruby_whisper_type;
extern const rb_data_type_t ruby_whisper_params_type;
extern ID id_to_s;
extern ID id_call;
extern ID id_to_path;
extern ID transcribe_option_names[1];
extern void
register_callbacks(ruby_whisper_params * rwp, VALUE * self);
prepare_transcription(ruby_whisper_params * rwp, VALUE * self, int n_processors);
/*
* transcribe a single file
@ -31,16 +35,24 @@ VALUE
ruby_whisper_transcribe(int argc, VALUE *argv, VALUE self) {
ruby_whisper *rw;
ruby_whisper_params *rwp;
VALUE wave_file_path, blk, params;
VALUE wave_file_path, blk, params, kws;
VALUE opts[1];
rb_scan_args(argc, argv, "02&", &wave_file_path, &params, &blk);
Data_Get_Struct(self, ruby_whisper, rw);
Data_Get_Struct(params, ruby_whisper_params, rwp);
rb_scan_args_kw(RB_SCAN_ARGS_LAST_HASH_KEYWORDS, argc, argv, "2:&", &wave_file_path, &params, &kws, &blk);
rb_get_kwargs(kws, transcribe_option_names, 0, 1, opts);
int n_processors = opts[0] == Qundef ? 1 : NUM2INT(opts[0]);
GetContext(self, rw);
TypedData_Get_Struct(params, ruby_whisper_params, &ruby_whisper_params_type, rwp);
if (!rb_respond_to(wave_file_path, id_to_s)) {
rb_raise(rb_eRuntimeError, "Expected file path to wave file");
}
if (rb_respond_to(wave_file_path, id_to_path)) {
wave_file_path = rb_funcall(wave_file_path, id_to_path, 0);
}
std::string fname_inp = StringValueCStr(wave_file_path);
std::vector<float> pcmf32; // mono-channel F32 PCM
@ -50,32 +62,33 @@ ruby_whisper_transcribe(int argc, VALUE *argv, VALUE self) {
fprintf(stderr, "error: failed to open '%s' as WAV file\n", fname_inp.c_str());
return self;
}
{
static bool is_aborted = false; // NOTE: this should be atomic to avoid data race
// Commented out because it is work in progress
// {
// static bool is_aborted = false; // NOTE: this should be atomic to avoid data race
rwp->params.encoder_begin_callback = [](struct whisper_context * /*ctx*/, struct whisper_state * /*state*/, void * user_data) {
bool is_aborted = *(bool*)user_data;
return !is_aborted;
};
rwp->params.encoder_begin_callback_user_data = &is_aborted;
}
// rwp->params.encoder_begin_callback = [](struct whisper_context * /*ctx*/, struct whisper_state * /*state*/, void * user_data) {
// bool is_aborted = *(bool*)user_data;
// return !is_aborted;
// };
// rwp->params.encoder_begin_callback_user_data = &is_aborted;
// }
register_callbacks(rwp, &self);
prepare_transcription(rwp, &self, n_processors);
if (whisper_full_parallel(rw->context, rwp->params, pcmf32.data(), pcmf32.size(), 1) != 0) {
if (whisper_full_parallel(rw->context, rwp->params, pcmf32.data(), pcmf32.size(), n_processors) != 0) {
fprintf(stderr, "failed to process audio\n");
return self;
}
if (NIL_P(blk)) {
return self;
}
const int n_segments = whisper_full_n_segments(rw->context);
VALUE output = rb_str_new2("");
for (int i = 0; i < n_segments; ++i) {
const char * text = whisper_full_get_segment_text(rw->context, i);
output = rb_str_concat(output, rb_str_new2(text));
}
VALUE idCall = id_call;
if (blk != Qnil) {
rb_funcall(blk, idCall, 1, output);
}
rb_funcall(blk, id_call, 1, output);
return self;
}
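After this change #transcribe takes the params as a required second positional argument, accepts Pathname-like objects (anything responding to to_path), and forwards an optional keyword to whisper_full_parallel. A hedged sketch; the keyword name n_processors is inferred from the local variable here, since transcribe_option_names is defined elsewhere:

    whisper.transcribe("audio.wav", params, n_processors: 4) do |whole_text|
      puts whole_text
    end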
#ifdef __cplusplus

View File

@ -0,0 +1,122 @@
#include "ruby_whisper.h"
extern ID id_to_s;
extern VALUE cVADContext;
extern const rb_data_type_t ruby_whisper_vad_params_type;
extern VALUE ruby_whisper_vad_detect(VALUE self, VALUE file_path, VALUE params);
extern VALUE ruby_whisper_normalize_model_path(VALUE model_path);
extern parsed_samples_t parse_samples(VALUE *samples, VALUE *n_samples);
extern VALUE release_samples(VALUE parsed);
extern VALUE ruby_whisper_vad_segments_s_init(struct whisper_vad_segments *segments);
typedef struct segments_from_samples_args {
VALUE *context;
VALUE *params;
float *samples;
int n_samples;
} segments_from_samples_args;
static size_t
ruby_whisper_vad_context_memsize(const void *p)
{
const ruby_whisper_vad_context *rwvc = p;
size_t size = sizeof(rwvc);
if (!rwvc) {
return 0;
}
if (rwvc->context) {
size += sizeof(rwvc->context);
}
return size;
}
static void
ruby_whisper_vad_context_free(void *p)
{
ruby_whisper_vad_context *rwvc = (ruby_whisper_vad_context *)p;
if (rwvc->context) {
whisper_vad_free(rwvc->context);
rwvc->context = NULL;
}
xfree(rwvc);
}
const rb_data_type_t ruby_whisper_vad_context_type = {
"ruby_whisper_vad_context",
{0, ruby_whisper_vad_context_free, ruby_whisper_vad_context_memsize,},
0, 0,
0
};
static VALUE
ruby_whisper_vad_context_s_allocate(VALUE klass)
{
ruby_whisper_vad_context *rwvc;
VALUE obj = TypedData_Make_Struct(klass, ruby_whisper_vad_context, &ruby_whisper_vad_context_type, rwvc);
rwvc->context = NULL;
return obj;
}
static VALUE
ruby_whisper_vad_context_initialize(VALUE self, VALUE model_path)
{
ruby_whisper_vad_context *rwvc;
struct whisper_vad_context *context;
model_path = ruby_whisper_normalize_model_path(model_path);
context = whisper_vad_init_from_file_with_params(StringValueCStr(model_path), whisper_vad_default_context_params());
if (context == NULL) {
rb_raise(rb_eRuntimeError, "Failed to initialize whisper VAD context");
}
TypedData_Get_Struct(self, ruby_whisper_vad_context, &ruby_whisper_vad_context_type, rwvc);
rwvc->context = context;
return Qnil;
}
static VALUE
segments_from_samples_body(VALUE rb_args)
{
segments_from_samples_args *args = (segments_from_samples_args *)rb_args;
ruby_whisper_vad_context *rwvc;
ruby_whisper_vad_params *rwvp;
GetVADContext(*args->context, rwvc);
GetVADParams(*args->params, rwvp);
struct whisper_vad_segments *segments = whisper_vad_segments_from_samples(rwvc->context, rwvp->params, args->samples, args->n_samples);
return ruby_whisper_vad_segments_s_init(segments);
}
static VALUE
ruby_whisper_vad_segments_from_samples(int argc, VALUE *argv, VALUE self)
{
if (argc < 2 || argc > 3) {
rb_raise(rb_eArgError, "wrong number of arguments (given %d, expected 2..3)", argc);
}
VALUE n_samples = argc == 2 ? Qnil : argv[2];
struct parsed_samples_t parsed = parse_samples(&argv[1], &n_samples);
segments_from_samples_args args = {
&self,
&argv[0],
parsed.samples,
parsed.n_samples,
};
VALUE segments = rb_ensure(segments_from_samples_body, (VALUE)&args, release_samples, (VALUE)&parsed);
return segments;
}
void init_ruby_whisper_vad_context(VALUE *mVAD)
{
cVADContext = rb_define_class_under(*mVAD, "Context", rb_cObject);
rb_define_alloc_func(cVADContext, ruby_whisper_vad_context_s_allocate);
rb_define_method(cVADContext, "initialize", ruby_whisper_vad_context_initialize, 1);
rb_define_method(cVADContext, "segments_from_samples", ruby_whisper_vad_segments_from_samples, -1);
rb_define_method(cVADContext, "detect", ruby_whisper_vad_detect, 2);
}
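Assuming mVAD is registered as Whisper::VAD (this file only receives the module pointer), the context can be used standalone for detection, either from a file or from raw samples:

    vad_params  = Whisper::VAD::Params.new(threshold: 0.5)
    vad_context = Whisper::VAD::Context.new("silero-v5.1.2")
    segments    = vad_context.detect("audio.wav", vad_params)
    # or: vad_context.segments_from_samples(vad_params, samples)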

View File

@ -0,0 +1,51 @@
#include "ruby_whisper.h"
#include "common-whisper.h"
#include <string>
#include <vector>
#ifdef __cplusplus
extern "C" {
#endif
extern ID id_to_path;
extern VALUE cVADSegments;
extern const rb_data_type_t ruby_whisper_vad_context_type;
extern const rb_data_type_t ruby_whisper_vad_params_type;
extern const rb_data_type_t ruby_whisper_vad_segments_type;
extern VALUE ruby_whisper_vad_segments_s_init(struct whisper_vad_segments *segments);
VALUE
ruby_whisper_vad_detect(VALUE self, VALUE file_path, VALUE params) {
ruby_whisper_vad_context *rwvc;
ruby_whisper_vad_params *rwvp;
std::string cpp_file_path;
std::vector<float> pcmf32;
std::vector<std::vector<float>> pcmf32s;
whisper_vad_segments *segments;
GetVADContext(self, rwvc);
TypedData_Get_Struct(params, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
if (rb_respond_to(file_path, id_to_path)) {
file_path = rb_funcall(file_path, id_to_path, 0);
}
cpp_file_path = StringValueCStr(file_path);
if (!read_audio_data(cpp_file_path, pcmf32, pcmf32s, false)) {
rb_raise(rb_eRuntimeError, "Failed to open '%s' as WAV file\n", cpp_file_path.c_str());
}
segments = whisper_vad_segments_from_samples(rwvc->context, rwvp->params, pcmf32.data(), pcmf32.size());
if (segments == nullptr) {
rb_raise(rb_eRuntimeError, "Failed to process audio\n");
}
return ruby_whisper_vad_segments_s_init(segments);
}
#ifdef __cplusplus
}
#endif

View File

@ -0,0 +1,287 @@
#include "ruby_whisper.h"
#define DEFINE_PARAM(param_name, nth) \
id_ ## param_name = rb_intern(#param_name); \
param_names[nth] = id_ ## param_name; \
rb_define_method(cVADParams, #param_name, ruby_whisper_vad_params_get_ ## param_name, 0); \
rb_define_method(cVADParams, #param_name "=", ruby_whisper_vad_params_set_ ## param_name, 1);
#define NUM_PARAMS 6
extern VALUE cVADParams;
static size_t
ruby_whisper_vad_params_memsize(const void *p)
{
const struct ruby_whisper_vad_params *params = p;
size_t size = sizeof(params);
if (!params) {
return 0;
}
return size;
}
static ID param_names[NUM_PARAMS];
static ID id_threshold;
static ID id_min_speech_duration_ms;
static ID id_min_silence_duration_ms;
static ID id_max_speech_duration_s;
static ID id_speech_pad_ms;
static ID id_samples_overlap;
const rb_data_type_t ruby_whisper_vad_params_type = {
"ruby_whisper_vad_params",
{0, 0, ruby_whisper_vad_params_memsize,},
0, 0,
0
};
static VALUE
ruby_whisper_vad_params_s_allocate(VALUE klass)
{
ruby_whisper_vad_params *rwvp;
VALUE obj = TypedData_Make_Struct(klass, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
rwvp->params = whisper_vad_default_params();
return obj;
}
/*
* Probability threshold to consider as speech.
*
* call-seq:
* threshold = th -> th
*/
static VALUE
ruby_whisper_vad_params_set_threshold(VALUE self, VALUE value)
{
ruby_whisper_vad_params *rwvp;
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
rwvp->params.threshold = RFLOAT_VALUE(value);
return value;
}
static VALUE
ruby_whisper_vad_params_get_threshold(VALUE self)
{
ruby_whisper_vad_params *rwvp;
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
return DBL2NUM(rwvp->params.threshold);
}
/*
* Min duration for a valid speech segment.
*
* call-seq:
* min_speech_duration_ms = duration_ms -> duration_ms
*/
static VALUE
ruby_whisper_vad_params_set_min_speech_duration_ms(VALUE self, VALUE value)
{
ruby_whisper_vad_params *rwvp;
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
rwvp->params.min_speech_duration_ms = NUM2INT(value);
return value;
}
static VALUE
ruby_whisper_vad_params_get_min_speech_duration_ms(VALUE self)
{
ruby_whisper_vad_params *rwvp;
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
return INT2NUM(rwvp->params.min_speech_duration_ms);
}
/*
* Min silence duration to consider speech as ended.
*
* call-seq:
* min_silence_duration_ms = duration_ms -> duration_ms
*/
static VALUE
ruby_whisper_vad_params_set_min_silence_duration_ms(VALUE self, VALUE value)
{
ruby_whisper_vad_params *rwvp;
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
rwvp->params.min_silence_duration_ms = NUM2INT(value);
return value;
}
static VALUE
ruby_whisper_vad_params_get_min_silence_duration_ms(VALUE self)
{
ruby_whisper_vad_params *rwvp;
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
return INT2NUM(rwvp->params.min_silence_duration_ms);
}
/*
* Max duration of a speech segment before forcing a new segment.
*
* call-seq:
* max_speech_duration_s = duration_s -> duration_s
*/
static VALUE
ruby_whisper_vad_params_set_max_speech_duration_s(VALUE self, VALUE value)
{
ruby_whisper_vad_params *rwvp;
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
rwvp->params.max_speech_duration_s = RFLOAT_VALUE(value);
return value;
}
static VALUE
ruby_whisper_vad_params_get_max_speech_duration_s(VALUE self)
{
ruby_whisper_vad_params *rwvp;
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
return DBL2NUM(rwvp->params.max_speech_duration_s);
}
/*
* Padding added before and after speech segments.
*
* call-seq:
* speech_pad_ms = pad_ms -> pad_ms
*/
static VALUE
ruby_whisper_vad_params_set_speech_pad_ms(VALUE self, VALUE value)
{
ruby_whisper_vad_params *rwvp;
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
rwvp->params.speech_pad_ms = NUM2INT(value);
return value;
}
static VALUE
ruby_whisper_vad_params_get_speech_pad_ms(VALUE self)
{
ruby_whisper_vad_params *rwvp;
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
return INT2NUM(rwvp->params.speech_pad_ms);
}
/*
* Overlap in seconds when copying audio samples from speech segment.
*
* call-seq:
* samples_overlap = overlap -> overlap
*/
static VALUE
ruby_whisper_vad_params_set_samples_overlap(VALUE self, VALUE value)
{
ruby_whisper_vad_params *rwvp;
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
rwvp->params.samples_overlap = RFLOAT_VALUE(value);
return value;
}
static VALUE
ruby_whisper_vad_params_get_samples_overlap(VALUE self)
{
ruby_whisper_vad_params *rwvp;
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
return DBL2NUM(rwvp->params.samples_overlap);
}
static VALUE
ruby_whisper_vad_params_equal(VALUE self, VALUE other)
{
ruby_whisper_vad_params *rwvp1;
ruby_whisper_vad_params *rwvp2;
if (self == other) {
return Qtrue;
}
if (!rb_obj_is_kind_of(other, cVADParams)) {
return Qfalse;
}
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp1);
TypedData_Get_Struct(other, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp2);
if (rwvp1->params.threshold != rwvp2->params.threshold) {
return Qfalse;
}
if (rwvp1->params.min_speech_duration_ms != rwvp2->params.min_speech_duration_ms) {
return Qfalse;
}
if (rwvp1->params.min_silence_duration_ms != rwvp2->params.min_silence_duration_ms) {
return Qfalse;
}
if (rwvp1->params.max_speech_duration_s != rwvp2->params.max_speech_duration_s) {
return Qfalse;
}
if (rwvp1->params.speech_pad_ms != rwvp2->params.speech_pad_ms) {
return Qfalse;
}
if (rwvp1->params.samples_overlap != rwvp2->params.samples_overlap) {
return Qfalse;
}
return Qtrue;
}
#define SET_PARAM_IF_SAME(param_name) \
if (id == id_ ## param_name) { \
ruby_whisper_vad_params_set_ ## param_name(self, value); \
continue; \
}
VALUE
ruby_whisper_vad_params_initialize(int argc, VALUE *argv, VALUE self)
{
VALUE kw_hash;
VALUE values[NUM_PARAMS] = {Qundef};
VALUE value;
ruby_whisper_vad_params *rwvp;
ID id;
int i;
TypedData_Get_Struct(self, ruby_whisper_vad_params, &ruby_whisper_vad_params_type, rwvp);
rb_scan_args_kw(RB_SCAN_ARGS_KEYWORDS, argc, argv, ":", &kw_hash);
if (NIL_P(kw_hash)) {
return self;
}
rb_get_kwargs(kw_hash, param_names, 0, NUM_PARAMS, values);
for (i = 0; i < NUM_PARAMS; i++) {
id = param_names[i];
value = values[i];
if (value == Qundef) {
continue;
}
SET_PARAM_IF_SAME(threshold)
SET_PARAM_IF_SAME(min_speech_duration_ms)
SET_PARAM_IF_SAME(min_silence_duration_ms)
SET_PARAM_IF_SAME(max_speech_duration_s)
SET_PARAM_IF_SAME(speech_pad_ms)
SET_PARAM_IF_SAME(samples_overlap)
}
return self;
}
#undef SET_PARAM_IF_SAME
void
init_ruby_whisper_vad_params(VALUE *mVAD)
{
cVADParams = rb_define_class_under(*mVAD, "Params", rb_cObject);
rb_define_alloc_func(cVADParams, ruby_whisper_vad_params_s_allocate);
rb_define_method(cVADParams, "initialize", ruby_whisper_vad_params_initialize, -1);
DEFINE_PARAM(threshold, 0)
DEFINE_PARAM(min_speech_duration_ms, 1)
DEFINE_PARAM(min_silence_duration_ms, 2)
DEFINE_PARAM(max_speech_duration_s, 3)
DEFINE_PARAM(speech_pad_ms, 4)
DEFINE_PARAM(samples_overlap, 5)
rb_define_method(cVADParams, "==", ruby_whisper_vad_params_equal, 1);
}
#undef DEFINE_PARAM
#undef NUM_PARAMS
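All six fields are plain value types, so the keyword constructor plus the field-by-field #== defined above make VAD params easy to build and compare:

    a = Whisper::VAD::Params.new(threshold: 0.5, min_speech_duration_ms: 250)
    b = Whisper::VAD::Params.new(threshold: 0.5, min_speech_duration_ms: 250)
    a == b   # => true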

View File

@ -0,0 +1,138 @@
#include "ruby_whisper.h"
#define N_KEY_NAMES 2
extern VALUE cVADSegment;
extern const rb_data_type_t ruby_whisper_vad_segments_type;
static VALUE sym_start_time;
static VALUE sym_end_time;
static void
rb_whisper_vad_segment_mark(void *p)
{
ruby_whisper_vad_segment *rwvs = (ruby_whisper_vad_segment *)p;
rb_gc_mark(rwvs->segments);
}
static size_t
ruby_whisper_vad_segment_memsize(const void *p)
{
const ruby_whisper_vad_segment *rwvs = p;
size_t size = sizeof(rwvs);
if (!rwvs) {
return 0;
}
if (rwvs->index) {
size += sizeof(rwvs->index);
}
return size;
}
static const rb_data_type_t ruby_whisper_vad_segment_type = {
"ruby_whisper_vad_segment",
{rb_whisper_vad_segment_mark, RUBY_DEFAULT_FREE, ruby_whisper_vad_segment_memsize,},
0, 0,
0
};
static VALUE
ruby_whisper_vad_segment_s_allocate(VALUE klass)
{
ruby_whisper_vad_segment *rwvs;
VALUE obj = TypedData_Make_Struct(klass, ruby_whisper_vad_segment, &ruby_whisper_vad_segment_type, rwvs);
rwvs->segments = Qnil;
rwvs->index = -1;
return obj;
}
VALUE
rb_whisper_vad_segment_s_new(VALUE segments, int index)
{
ruby_whisper_vad_segment *rwvs;
const VALUE segment = ruby_whisper_vad_segment_s_allocate(cVADSegment);
TypedData_Get_Struct(segment, ruby_whisper_vad_segment, &ruby_whisper_vad_segment_type, rwvs);
rwvs->segments = segments;
rwvs->index = index;
return segment;
}
static VALUE
ruby_whisper_vad_segment_get_start_time(VALUE self)
{
ruby_whisper_vad_segment *rwvs;
ruby_whisper_vad_segments *rwvss;
float t0;
TypedData_Get_Struct(self, ruby_whisper_vad_segment, &ruby_whisper_vad_segment_type, rwvs);
TypedData_Get_Struct(rwvs->segments, ruby_whisper_vad_segments, &ruby_whisper_vad_segments_type, rwvss);
t0 = whisper_vad_segments_get_segment_t0(rwvss->segments, rwvs->index);
return DBL2NUM(t0 * 10);
}
static VALUE
ruby_whisper_vad_segment_get_end_time(VALUE self)
{
ruby_whisper_vad_segment *rwvs;
ruby_whisper_vad_segments *rwvss;
float t1;
TypedData_Get_Struct(self, ruby_whisper_vad_segment, &ruby_whisper_vad_segment_type, rwvs);
TypedData_Get_Struct(rwvs->segments, ruby_whisper_vad_segments, &ruby_whisper_vad_segments_type, rwvss);
t1 = whisper_vad_segments_get_segment_t1(rwvss->segments, rwvs->index);
return DBL2NUM(t1 * 10);
}
static VALUE
ruby_whisper_vad_segment_deconstruct_keys(VALUE self, VALUE keys)
{
ruby_whisper_vad_segment *rwvs;
ruby_whisper_vad_segments *rwvss;
VALUE hash, key;
long n_keys;
int i;
TypedData_Get_Struct(self, ruby_whisper_vad_segment, &ruby_whisper_vad_segment_type, rwvs);
TypedData_Get_Struct(rwvs->segments, ruby_whisper_vad_segments, &ruby_whisper_vad_segments_type, rwvss);
hash = rb_hash_new();
if (NIL_P(keys)) {
keys = rb_ary_new3(
N_KEY_NAMES,
sym_start_time,
sym_end_time
);
n_keys = N_KEY_NAMES;
} else {
n_keys = RARRAY_LEN(keys);
if (n_keys > N_KEY_NAMES) {
return hash;
}
}
for (i = 0; i < n_keys; i++) {
key = rb_ary_entry(keys, i);
if (key == sym_start_time) {
rb_hash_aset(hash, key, ruby_whisper_vad_segment_get_start_time(self));
}
if (key == sym_end_time) {
rb_hash_aset(hash, key, ruby_whisper_vad_segment_get_end_time(self));
}
}
return hash;
}
void
init_ruby_whisper_vad_segment(VALUE *mVAD)
{
cVADSegment = rb_define_class_under(*mVAD, "Segment", rb_cObject);
sym_start_time = ID2SYM(rb_intern("start_time"));
sym_end_time = ID2SYM(rb_intern("end_time"));
rb_define_alloc_func(cVADSegment, ruby_whisper_vad_segment_s_allocate);
rb_define_method(cVADSegment, "start_time", ruby_whisper_vad_segment_get_start_time, 0);
rb_define_method(cVADSegment, "end_time", ruby_whisper_vad_segment_get_end_time, 0);
rb_define_method(cVADSegment, "deconstruct_keys", ruby_whisper_vad_segment_deconstruct_keys, 1);
}
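
The deconstruct_keys implementation above is what lets Ruby's hash patterns match VAD segments. A hedged sketch, assuming a VAD model has been fetched as in the tests further down (the file path and model name are illustrative):

    require "whisper"

    ctx = Whisper::VAD::Context.new("silero-v6.2.0")
    segments = ctx.detect("samples/jfk.wav", Whisper::VAD::Params.new)

    segments.each do |segment|
      # The pattern match dispatches to deconstruct_keys([:start_time, :end_time]).
      segment => {start_time:, end_time:}
      puts "speech: #{start_time} to #{end_time}"
    end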

View File

@ -0,0 +1,105 @@
#include "ruby_whisper.h"
extern ID id___method__;
extern ID id_to_enum;
extern VALUE cVADSegments;
extern VALUE rb_whisper_vad_segment_s_new(VALUE segments, int index);
static size_t
ruby_whisper_vad_segments_memsize(const void *p)
{
const ruby_whisper_vad_segments *rwvss = p;
size_t size = sizeof(*rwvss);
if (!rwvss) {
return 0;
}
if (rwvss->segments) {
size += sizeof(rwvss->segments);
}
return size;
}
static void
ruby_whisper_vad_segments_free(void *p)
{
ruby_whisper_vad_segments *rwvss = (ruby_whisper_vad_segments *)p;
if (rwvss->segments) {
whisper_vad_free_segments(rwvss->segments);
rwvss->segments = NULL;
}
xfree(rwvss);
}
const rb_data_type_t ruby_whisper_vad_segments_type = {
"ruby_whisper_vad_segments",
{0, ruby_whisper_vad_segments_free, ruby_whisper_vad_segments_memsize,},
0, 0,
0
};
static VALUE
ruby_whisper_vad_segments_s_allocate(VALUE klass)
{
ruby_whisper_vad_segments *rwvss;
VALUE obj = TypedData_Make_Struct(klass, ruby_whisper_vad_segments, &ruby_whisper_vad_segments_type, rwvss);
rwvss->segments = NULL;
return obj;
}
VALUE
ruby_whisper_vad_segments_s_init(struct whisper_vad_segments *segments)
{
VALUE rb_segments;
ruby_whisper_vad_segments *rwvss;
rb_segments = ruby_whisper_vad_segments_s_allocate(cVADSegments);
TypedData_Get_Struct(rb_segments, ruby_whisper_vad_segments, &ruby_whisper_vad_segments_type, rwvss);
rwvss->segments = segments;
return rb_segments;
}
static VALUE
ruby_whisper_vad_segments_each(VALUE self)
{
ruby_whisper_vad_segments *rwvss;
VALUE method_name;
int n_segments, i;
if (!rb_block_given_p()) {
method_name = rb_funcall(self, id___method__, 0);
return rb_funcall(self, id_to_enum, 1, method_name);
}
GetVADSegments(self, rwvss);
n_segments = whisper_vad_segments_n_segments(rwvss->segments);
for (i = 0; i < n_segments; ++i) {
rb_yield(rb_whisper_vad_segment_s_new(self, i));
}
return self;
}
static VALUE
ruby_whisper_vad_segments_get_length(VALUE self)
{
ruby_whisper_vad_segments *rwvss;
int n_segments;
GetVADSegments(self, rwvss);
n_segments = whisper_vad_segments_n_segments(rwvss->segments);
return INT2NUM(n_segments);
}
void
init_ruby_whisper_vad_segments(VALUE *mVAD)
{
cVADSegments = rb_define_class_under(*mVAD, "Segments", rb_cObject);
rb_define_alloc_func(cVADSegments, ruby_whisper_vad_segments_s_allocate);
rb_define_method(cVADSegments, "each", ruby_whisper_vad_segments_each, 0);
rb_define_method(cVADSegments, "length", ruby_whisper_vad_segments_get_length, 0);
rb_include_module(cVADSegments, rb_path2class("Enumerable"));
}
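
Because the class defines #each and #length and mixes in Enumerable, the whole enumeration toolkit becomes available on a Segments collection. A small sketch under the same assumptions as the segment example above:

    require "whisper"

    ctx = Whisper::VAD::Context.new("silero-v6.2.0")
    segments = ctx.detect("samples/jfk.wav", Whisper::VAD::Params.new)

    segments.length        # => number of detected speech segments
    segments.each.first    # #each with no block returns an Enumerator
    longest = segments.max_by {|s| s.end_time - s.start_time }  # via Enumerable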

View File

@ -0,0 +1,8 @@
set(GRAPHVIZ_EXECUTABLES FALSE)
set(GRAPHVIZ_STATIC_LIBS TRUE)
set(GRAPHVIZ_SHARED_LIBS FALSE)
set(GRAPHVIZ_MODULE_LIBS FALSE)
set(GRAPHVIZ_INTERFACE_LIBS FALSE)
set(GRAPHVIZ_OBJECT_LIBS FALSE)
set(GRAPHVIZ_UNKNOWN_LIBS FALSE)
set(GRAPHVIZ_GENERATE_DEPENDERS FALSE)

View File

@ -1,6 +1,41 @@
require "yaml"
require "pathname"
sources = `git ls-files -z ../..`.split("\x0")
paths = YAML.load_file("../../.github/workflows/bindings-ruby.yml")[true]["push"]["paths"]
paths.delete "bindings/ruby/**"
EXTSOURCES = (Dir.glob(paths, base: "../..").collect {|path| "../../#{path}"} << "../../LICENSE") & sources
root = Pathname("..")/".."
ignored_dirs = %w[
.devops
.github
ci
examples/wchess/wchess.wasm
examples/whisper.android
examples/whisper.android.java
examples/whisper.objc
examples/whisper.swiftui
grammars
models
samples
scripts
].collect {|dir| root/dir}
ignored_files = %w[
AUTHORS
Makefile
README.md
README_sycl.md
.gitignore
.gitmodules
.dockerignore
whisper.nvim
twitch.sh
yt-wsp.sh
close-issue.yml
build-xcframework.sh
]
EXTSOURCES =
`git ls-files -z #{root}`.split("\x0")
.collect {|file| Pathname(file)}
.reject {|file|
ignored_dirs.any? {|dir| file.descend.any? {|desc| desc == dir}} ||
ignored_files.include?(file.basename.to_path) ||
(file.descend.to_a[1] != root && file.descend.to_a[1] != Pathname("..")/"javascript")
}
.collect(&:to_path)
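
For reference, the gemspec later in this compare rewrites each EXTSOURCES entry into the packaged layout with String#sub; a hedged illustration (the path is an example, not taken from the actual file list):

    # How a repository-relative entry ends up inside the gem:
    "../../src/whisper.cpp".sub("../..", "ext/sources")
    # => "ext/sources/src/whisper.cpp"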

View File

@ -0,0 +1,15 @@
module Whisper
class Context
def to_srt
each_segment.with_index.reduce("") {|srt, (segment, index)|
srt << "#{index + 1}\n#{segment.to_srt_cue}\n"
}
end
def to_webvtt
each_segment.with_index.reduce("WEBVTT\n\n") {|webvtt, (segment, index)|
webvtt << "#{index + 1}\n#{segment.to_webvtt_cue}\n"
}
end
end
end
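
A hedged sketch of how these two helpers are meant to be used after a transcription (the model name and output paths are illustrative):

    require "whisper"

    whisper = Whisper::Context.new("base.en")
    whisper.transcribe("samples/jfk.wav", Whisper::Params.new)

    File.write("jfk.srt", whisper.to_srt)      # numbered cues, "HH:MM:SS,mmm --> HH:MM:SS,mmm"
    File.write("jfk.vtt", whisper.to_webvtt)   # same cues with a leading "WEBVTT" header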

View File

@ -34,7 +34,7 @@ module Whisper
when /darwin/
Pathname(Dir.home)/"Library/Caches"
else
ENV.key?("XDG_CACHE_HOME") ? ENV["XDG_CACHE_HOME"] : Pathname(Dir.home)/".cache"
ENV.key?("XDG_CACHE_HOME") ? Pathname(ENV["XDG_CACHE_HOME"]) : Pathname(Dir.home)/".cache"
end
base/"whisper.cpp"
end
@ -53,8 +53,10 @@ module Whisper
http.request request do |response|
case response
when Net::HTTPNotModified
# noop
# noop
when Net::HTTPOK
return if !response.key?("last-modified") && cache_path.exist?
download response
when Net::HTTPRedirection
request URI(response["location"]), headers
@ -68,7 +70,7 @@ module Whisper
rescue => err
if cache_path.exist?
warn err
# Use cache file
# Use cache file
else
raise
end
@ -92,7 +94,8 @@ module Whisper
end
def show_progress(current, size)
progress_rate_available = size && $stderr.tty?
line_size = 47
progress_rate_available = size && $stderr.tty? && $stderr.winsize[1] >= line_size
unless @prev
@prev = Time.now
@ -128,6 +131,44 @@ module Whisper
end
end
class ZipURI < URI
def cache
zip_path = super
dest = unzipped_path
return if dest.exist? && dest.mtime >= zip_path.mtime
escaping dest do
system "unzip", "-q", "-d", zip_path.dirname.to_path, zip_path.to_path, exception: true
end
zip_path
end
def clear_cache
super
unzipped_path.rmtree if unzipped_path.exist?
end
private
def unzipped_path
cache_path.sub_ext("")
end
def escaping(path)
escaped = Pathname("#{path}.removing")
if path.exist?
escaped.rmtree if escaped.exist?
path.rename escaped
end
yield
ensure
if path.exist?
escaped.rmtree if escaped.exist?
else
escaped.rename path if escaped.exist?
end
end
end
@pre_converted_models = %w[
tiny
tiny.en
@ -141,7 +182,6 @@ module Whisper
base-q8_0
small
small.en
small.en-tdrz
small-q5_1
small.en-q5_1
small-q8_0
@ -163,8 +203,30 @@ module Whisper
models[name] = URI.new("https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-#{name}.bin")
}
%w[
small.en-tdrz
].each do |name|
@pre_converted_models[name] = URI.new("https://huggingface.co/akashmjn/tinydiarize-whisper.cpp/resolve/main/ggml-#{name}.bin")
end
%w[
silero-v5.1.2
silero-v6.2.0
].each do |name|
@pre_converted_models[name] = URI.new("https://huggingface.co/ggml-org/whisper-vad/resolve/main/ggml-#{name}.bin")
end
@coreml_compiled_models = @pre_converted_models.each_with_object({}) {|(name, uri), models|
next if name.end_with?("-tdrz") || name.start_with?("silero-")
if matched = name.match(/\A(?<name>.*)-q\d_\d\z/)
name = matched[:name]
end
models[uri] = ZipURI.new("https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-#{name}-encoder.mlmodelc.zip")
}
class << self
attr_reader :pre_converted_models
attr_reader :pre_converted_models, :coreml_compiled_models
end
end
end
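
The new ZipURI class and the coreml_compiled_models table pair each regular model with its compiled Core ML encoder archive. A hedged sketch of resolving and caching both (the table is keyed by the Model::URI objects themselves, as the Core ML test further down relies on):

    require "whisper"

    model_uri  = Whisper::Model.pre_converted_models["tiny"]
    coreml_uri = Whisper::Model.coreml_compiled_models[model_uri]  # nil for -tdrz and silero-* entries

    model_path = model_uri.to_path   # path of the cached ggml model file
    coreml_uri.cache if coreml_uri   # fetches the .mlmodelc.zip and unzips it next to the cache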

View File

@ -0,0 +1,58 @@
module Whisper
class Segment
SRT_ESCAPES = {
"&" => "&amp;",
"<" => "&lt;",
">" => "&gt;",
}
SRT_ESCAPES_RE = Regexp.union(SRT_ESCAPES.keys)
private_constant :SRT_ESCAPES, :SRT_ESCAPES_RE
def to_srt_cue
"#{srt_start_time} --> #{srt_end_time}\n#{srt_text}\n"
end
def to_webvtt_cue
"#{webvtt_start_time} --> #{webvtt_end_time}\n#{webvtt_text}\n"
end
private
def time_to_a(time)
sec, decimal_part = time.divmod(1000)
min, sec = sec.divmod(60)
hour, min = min.divmod(60)
[hour, min, sec, decimal_part]
end
def srt_time(time)
"%02d:%02d:%02d,%03d" % time_to_a(time)
end
def srt_start_time
srt_time(start_time)
end
def srt_end_time
srt_time(end_time)
end
def srt_text
text.gsub(SRT_ESCAPES_RE, SRT_ESCAPES)
end
def webvtt_time(time)
"%02d:%02d:%02d.%03d" % time_to_a(time)
end
def webvtt_start_time
webvtt_time(start_time)
end
def webvtt_end_time
webvtt_time(end_time)
end
alias webvtt_text srt_text
end
end
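
Since start_time/end_time are in milliseconds, time_to_a and the two formatters reduce to a short divmod chain; a worked example for 16_680 ms (an illustrative value):

    sec, ms   = 16_680.divmod(1000)   # => [16, 680]
    min, sec  = sec.divmod(60)        # => [0, 16]
    hour, min = min.divmod(60)        # => [0, 0]
    "%02d:%02d:%02d,%03d" % [hour, min, sec, ms]  # => "00:00:16,680" (SRT; WebVTT uses "." before the milliseconds)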

View File

@ -7,25 +7,57 @@ module Whisper
type log_callback = ^(Integer level, String message, Object user_data) -> void
type new_segment_callback = ^(Whisper::Context, void, Integer n_new, Object user_data) -> void
type progress_callback = ^(Whisper::Context, void, Integer progress, Object user_data) -> void
type encoder_begin_callback = ^(Whisper::Context, void, Object user_data) -> void
type abort_callback = ^(Whisper::Context, void, Object user_data) -> boolish
VERSION: String
LOG_LEVEL_NONE: Integer
LOG_LEVEL_INFO: Integer
LOG_LEVEL_WARN: Integer
LOG_LEVEL_ERROR: Integer
LOG_LEVEL_DEBUG: Integer
LOG_LEVEL_CONT: Integer
AHEADS_NONE: Integer
AHEADS_N_TOP_MOST: Integer
AHEADS_CUSTOM: Integer
AHEADS_TINY_EN: Integer
AHEADS_TINY: Integer
AHEADS_BASE_EN: Integer
AHEADS_BASE: Integer
AHEADS_SMALL_EN: Integer
AHEADS_SMALL: Integer
AHEADS_MEDIUM_EN: Integer
AHEADS_MEDIUM: Integer
AHEADS_LARGE_V1: Integer
AHEADS_LARGE_V2: Integer
AHEADS_LARGE_V3: Integer
AHEADS_LARGE_V3_TURBO: Integer
def self.lang_max_id: () -> Integer
def self.lang_id: (string name) -> Integer
def self.lang_str: (Integer id) -> String
def self.lang_str_full: (Integer id) -> String
def self.log_set: (log_callback, Object? user_data) -> log_callback
def self.log_set: (log_callback?, Object? user_data) -> log_callback
def self.system_info_str: () -> String
class Context
def self.new: (string | _ToPath | ::URI::HTTP) -> instance
def transcribe: (string, Params) -> self
| (string, Params) { (String) -> void } -> self
def self.new: (String | path | ::URI::HTTP) -> instance
# transcribe a single file
# can emit to a block results
#
# params = Whisper::Params.new
# params.duration = 60_000
# whisper.transcribe "path/to/audio.wav", params do |text|
# puts text
# end
#
# If n_processors is greater than 1, you cannot set any callbacks including
# new_segment_callback, progress_callback, encoder_begin_callback, abort_callback,
# and log_callback set by Whisper.log_set
def transcribe: (path, Params, ?n_processors: Integer) -> self
| (path, Params, ?n_processors: Integer) { (String) -> void } -> self
def model_n_vocab: () -> Integer
def model_n_audio_ctx: () -> Integer
def model_n_audio_state: () -> Integer
@ -34,22 +66,105 @@ module Whisper
def model_n_mels: () -> Integer
def model_ftype: () -> Integer
def model_type: () -> String
# Yields each Whisper::Segment:
#
# whisper.transcribe("path/to/audio.wav", params)
# whisper.each_segment do |segment|
# puts segment.text
# end
#
# Returns an Enumerator if no block given:
#
# whisper.transcribe("path/to/audio.wav", params)
# enum = whisper.each_segment
# enum.to_a # => [#<Whisper::Segment>, ...]
#
def each_segment: { (Segment) -> void } -> void
| () -> Enumerator[Segment]
def model: () -> Model
def full_get_segment: (Integer nth) -> Segment
def full_n_segments: () -> Integer
# Language ID, which can be converted to string by Whisper.lang_str and Whisper.lang_str_full.
#
def full_lang_id: () -> Integer
# Start time of a segment indexed by +segment_index+ in centiseconds (1 centisecond = 10 milliseconds).
#
# full_get_segment_t0(3) # => 1668 (16680 ms)
#
def full_get_segment_t0: (Integer) -> Integer
# End time of a segment indexed by +segment_index+ in centiseconds (1 centisecond = 10 milliseconds).
#
# full_get_segment_t1(3) # => 1668 (16680 ms)
#
def full_get_segment_t1: (Integer) -> Integer
# Whether the next segment indexed by +segment_index+ is predicted as a speaker turn.
#
# full_get_segment_speaker_turn_next(3) # => true
#
def full_get_segment_speaker_turn_next: (Integer) -> (true | false)
# Text of a segment indexed by +segment_index+.
#
# full_get_segment_text(3) # => "ask not what your country can do for you, ..."
#
def full_get_segment_text: (Integer) -> String
def full_get_segment_no_speech_prob: (Integer) -> Float
# Run the entire model: PCM -> log mel spectrogram -> encoder -> decoder -> text
# Not thread safe for same context
# Uses the specified decoding strategy to obtain the text.
#
# The second argument +samples+ must be an array of samples, respond to :length, or be a MemoryView of an array of float. It must be 32 bit float PCM audio data.
#
def full: (Params, Array[Float] samples, ?Integer n_samples) -> self
| (Params, _Samples, ?Integer n_samples) -> self
# Split the input audio in chunks and process each chunk separately using whisper_full_with_state()
# Result is stored in the default state of the context
# Not thread safe if executed in parallel on the same context.
# It seems this approach can offer some speedup in some cases.
# However, the transcription accuracy can be worse at the beginning and end of each chunk.
#
# If n_processors is greater than 1, you cannot set any callbacks including
# new_segment_callback, progress_callback, encoder_begin_callback, abort_callback,
# and log_callback set by Whisper.log_set
def full_parallel: (Params, Array[Float], ?Integer n_samples) -> self
| (Params, _Samples, ?Integer n_samples) -> self
| (Params, _Samples, ?Integer? n_samples, Integer n_processors) -> self
def to_srt: () -> String
def to_webvtt: () -> String
class Params
def self.new: (
use_gpu: boolish,
flash_attn: boolish,
gpu_device: Integer,
dtw_token_timestamps: boolish,
dtw_aheads_preset: Integer,
dtw_n_top: Integer | nil,
) -> instance
def use_gpu=: (boolish) -> boolish
def use_gpu: () -> (true | false)
def flash_attn=: (boolish) -> boolish
def flash_attn: () -> (true | false)
def gpu_device=: (Integer) -> Integer
def gpu_device: () -> Integer
def dtw_token_timestamps=: (boolish) -> boolish
def dtw_token_timestamps: () -> (true | false)
def dtw_aheads_preset=: (Integer) -> Integer
def dtw_aheads_preset: () -> Integer
def dtw_n_top=: (Integer | nil) -> (Integer | nil)
def dtw_n_top: () -> (Integer | nil)
end
end
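# A hedged usage sketch of the full/full_parallel signatures documented above (the 78-byte
# offset skips the sample WAV header, mirroring the tests; path and processor count are illustrative):
#
#   require "whisper"
#
#   whisper = Whisper::Context.new("base.en")
#   params  = Whisper::Params.new(language: "en")
#
#   samples = File.read("samples/jfk.wav", nil, 78).unpack("s<*").collect {|s| s.to_f / 2**15}
#
#   whisper.full(params, samples)                   # single decode over the whole buffer
#   whisper.full_parallel(params, samples, nil, 2)  # chunked decode; callbacks are unavailable here
#   whisper.each_segment {|segment| puts segment.text }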
class Params
@ -65,8 +180,10 @@ module Whisper
?suppress_blank: boolish,
?suppress_nst: boolish,
?token_timestamps: boolish,
?max_len: Integer,
?split_on_word: boolish,
?initial_prompt: string | nil,
?carry_initial_prompt: boolish,
?diarize: boolish,
?offset: Integer,
?duration: Integer,
@ -82,76 +199,254 @@ module Whisper
?new_segment_callback_user_data: Object,
?progress_callback: progress_callback,
?progress_callback_user_data: Object,
?encoder_begin_callback: encoder_begin_callback,
?encoder_begin_callback_user_data: Object,
?abort_callback: abort_callback,
?abort_callback_user_data: Object
?abort_callback_user_data: Object,
?vad: boolish,
?vad_model_path: path | URI,
?vad_params: Whisper::VAD::Params
) -> instance
# params.language = "auto" | "en", etc...
#
def language=: (String) -> String # TODO: Enumerate lang names
def language: () -> String
def translate=: (boolish) -> boolish
def translate: () -> (true | false)
def no_context=: (boolish) -> boolish
# If true, does not use past transcription (if any) as initial prompt for the decoder.
#
def no_context: () -> (true | false)
def single_segment=: (boolish) -> boolish
# If true, forces single segment output (useful for streaming).
#
def single_segment: () -> (true | false)
def print_special=: (boolish) -> boolish
# If true, prints special tokens (e.g. <SOT>, <EOT>, <BEG>, etc.).
#
def print_special: () -> (true | false)
def print_progress=: (boolish) -> boolish
# If true, prints progress information.
#
def print_progress: () -> (true | false)
def print_realtime=: (boolish) -> boolish
# If true, prints results from within whisper.cpp. (avoid it, use callback instead)
#
def print_realtime: () -> (true | false)
# If true, prints timestamps for each text segment when printing realtime.
#
def print_timestamps=: (boolish) -> boolish
def print_timestamps: () -> (true | false)
def suppress_blank=: (boolish) -> boolish
# If true, suppresses blank outputs.
#
def suppress_blank: () -> (true | false)
def suppress_nst=: (boolish) -> boolish
# If true, suppresses non-speech-tokens.
#
def suppress_nst: () -> (true | false)
def token_timestamps=: (boolish) -> boolish
# If true, enables token-level timestamps.
#
def token_timestamps: () -> (true | false)
def max_len=: (Integer) -> Integer
# Max segment length in characters.
#
def max_len: () -> Integer
def split_on_word=: (boolish) -> boolish
# If true, split on word rather than on token (when used with max_len).
#
def split_on_word: () -> (true | false)
def initial_prompt=: (_ToS) -> _ToS
def carry_initial_prompt=: (boolish) -> boolish
# Tokens to provide to the whisper decoder as initial prompt.
# These are prepended to any existing text context from a previous call.
# Use whisper_tokenize() to convert text to tokens.
# Maximum of whisper_n_text_ctx()/2 tokens are used (typically 224).
#
def initial_prompt: () -> (String | nil)
def carry_initial_prompt: () -> (true | false)
def diarize=: (boolish) -> boolish
# If true, enables diarization.
#
def diarize: () -> (true | false)
def offset=: (Integer) -> Integer
# Start offset in ms.
#
def offset: () -> Integer
def duration=: (Integer) -> Integer
# Audio duration to process in ms.
#
def duration: () -> Integer
def max_text_tokens=: (Integer) -> Integer
# Max tokens to use from past text as prompt for the decoder.
#
def max_text_tokens: () -> Integer
def temperature=: (Float) -> Float
def temperature: () -> Float
def max_initial_ts=: (Float) -> Float
# See https://github.com/openai/whisper/blob/f82bc59f5ea234d4b97fb2860842ed38519f7e65/whisper/decoding.py#L97
#
def max_initial_ts: () -> Float
def length_penalty=: (Float) -> Float
def length_penalty: () -> Float
def temperature_inc=: (Float) -> Float
def temperature_inc: () -> Float
def entropy_thold=: (Float) -> Float
# Similar to OpenAI's "compression_ratio_threshold"
#
def entropy_thold: () -> Float
def logprob_thold=: (Float) -> Float
def logprob_thold: () -> Float
def no_speech_thold=: (Float) -> Float
def no_speech_thold: () -> Float
# Sets new segment callback, called for every newly generated text segment.
#
# params.new_segment_callback = ->(context, _, n_new, user_data) {
# # ...
# }
#
def new_segment_callback=: (new_segment_callback) -> new_segment_callback
def new_segment_callback: () -> (new_segment_callback | nil)
# Sets user data passed to the last argument of new segment callback.
#
def new_segment_callback_user_data=: (Object) -> Object
def new_segment_callback_user_data: () -> Object
# Sets progress callback, called on each progress update.
#
# params.progress_callback = ->(context, _, progress, user_data) {
# # ...
# }
#
# +progress+ is an Integer between 0 and 100.
#
def progress_callback=: (progress_callback) -> progress_callback
def progress_callback: () -> (progress_callback | nil)
# Sets user data passed to the last argument of progress callback.
#
def progress_callback_user_data=: (Object) -> Object
def progress_callback_user_data: () -> Object
# Sets encoder begin callback, called when the encoder starts.
#
def encoder_begin_callback=: (encoder_begin_callback) -> encoder_begin_callback
def encoder_begin_callback: () -> (encoder_begin_callback | nil)
# Sets user data passed to the last argument of encoder begin callback.
#
def encoder_begin_callback_user_data=: (Object) -> Object
def encoder_begin_callback_user_data: () -> Object
# Sets abort callback, called to check if the process should be aborted.
#
# params.abort_callback = ->(user_data) {
# # ...
# }
#
#
def abort_callback=: (abort_callback) -> abort_callback
def abort_callback: () -> (abort_callback | nil)
# Sets user data passed to the last argument of abort callback.
#
def abort_callback_user_data=: (Object) -> Object
def abort_callback_user_data: () -> Object
# Enable VAD
#
def vad=: (boolish) -> boolish
def vad: () -> (true | false)
# Path to the VAD model
def vad_model_path=: (path | URI | nil) -> (path | URI | nil)
def vad_model_path: () -> (String | nil)
def vad_params=: (Whisper::VAD::Params) -> Whisper::VAD::Params
def vad_params: () -> (Whisper::VAD::Params)
# Hook called on new segment. Yields each Whisper::Segment.
#
# whisper.on_new_segment do |segment|
# # ...
# end
#
def on_new_segment: { (Segment) -> void } -> void
# Hook called on progress update. Yields each progress Integer between 0 and 100.
#
def on_progress: { (Integer progress) -> void } -> void
# Hook called when the encoder starts.
#
def on_encoder_begin: { () -> void } -> void
# Calls the block to determine whether to abort. Return +true+ when you want to abort.
#
# params.abort_on do
# if some_condition
# true # abort
# else
# false # continue
# end
# end
#
def abort_on: { (Object user_data) -> boolish } -> void
end
class Model
def self.pre_converted_models: () -> Hash[String, Model::URI]
def self.coreml_compiled_models: () -> Hash[Model::URI, Model::ZipURI]
def self.new: () -> instance
def n_vocab: () -> Integer
def n_audio_ctx: () -> Integer
@ -167,18 +462,215 @@ module Whisper
def type: () -> String
class URI
def self.new: (string | ::URI::HTTP) -> self
def self.new: (string | ::URI::HTTP) -> instance
def to_path: -> String
def clear_cache: -> void
end
class ZipURI < URI
def cache: () -> Pathname
def clear_cache: () -> void
end
end
class Segment
type deconstructed_keys = {
start_time: (Integer | nil),
end_time: (Integer | nil),
text: (String | nil),
no_speech_prob: (Float | nil),
speaker_turn_next: (true | false | nil),
n_tokens: (Integer | nil)
}
# Start time in milliseconds.
#
def start_time: () -> Integer
# End time in milliseconds.
#
def end_time: () -> Integer
def speaker_next_turn?: () -> (true | false)
# Whether the next segment is predicted as a speaker turn.
#
def speaker_turn_next?: () -> (true | false)
def text: () -> String
def no_speech_prob: () -> Float
# Get number of tokens in the segment
#
def n_tokens: () -> Integer
# Yields each Whisper::Token:
#
# whisper.each_segment.first.each_token do |token|
# p token
# end
#
# Returns an Enumerator if no block is given:
#
# whisper.each_segment.first.each_token.to_a # => [#<Whisper::Token>, ...]
#
def each_token: { (Token) -> void } -> void
| () -> Enumerator[Token]
def to_srt_cue: () -> String
def to_webvtt_cue: () -> String
# Possible keys: :start_time, :end_time, :text, :no_speech_prob, :speaker_turn_next, :n_tokens
#
# whisper.each_segment do |segment|
# segment => {start_time:, end_time:, text:, no_speech_prob:, speaker_turn_next:}
#
# puts "[#{start_time} --> #{end_time}] #{text} (no speech prob: #{no_speech_prob}#{speaker_turn_next ? ', speaker turns next' : ''})"
# end
def deconstruct_keys: (Array[:start_time | :end_time | :text | :no_speech_prob | :speaker_turn_next | :n_tokens] | nil) -> deconstructed_keys
end
module Token
type deconstructed_keys = {
id: (Integer | nil),
tid: (Integer | nil),
probability: (Float | nil),
log_probability: (Float | nil),
pt: (Float | nil),
ptsum: (Float | nil),
t_dtw: (Integer | nil),
voice_length: (Float | nil),
text: (String | nil),
start_time: (Integer | nil),
end_time: (Integer | nil),
}
# Token ID.
#
def id: () -> Integer
# Forced timestamp token ID.
#
def tid: () -> Integer
# Probability of the token.
#
def probability: () -> Float
# Log probability of the token.
#
def log_probability: () -> Float
# Probability of the timestamp token.
#
def pt: () -> Float
# Sum of probability of all timestamp tokens.
#
def ptsum: () -> Float
# [EXPERIMENTAL] Token-level timestamps with DTW
#
# Do not use if you haven't computed token-level timestamps with dtw.
# Roughly corresponds to the moment in audio in which the token was output.
#
def t_dtw: () -> Integer
# Voice length of the token.
#
def voice_length: () -> Float
# Start time of the token.
#
# Token-level timestamp data.
# Do not use if you haven't computed token-level timestamps.
#
def start_time: () -> Integer
# End time of the token.
#
# Token-level timestamp data.
# Do not use if you haven't computed token-level timestamps.
#
def end_time: () -> Integer
# Text of the token.
#
def text: () -> String
def deconstruct_keys: (Array[:id | :tid | :probability | :log_probability | :pt | :ptsum | :t_dtw | :voice_length | :start_time | :end_time | :text] | nil) -> deconstructed_keys
end
module VAD
class Params
def self.new: (
?threshold: Float,
?min_speech_duration_ms: Integer,
?min_silence_duration_ms: Integer,
?max_speech_duration_s: Float,
?speech_pad_ms: Integer,
?samples_overlap: Float
) -> instance
# Probability threshold to consider as speech.
#
def threshold=: (Float) -> Float
def threshold: () -> Float
# Min duration for a valid speech segment.
#
def min_speech_duration_ms=: (Integer) -> Integer
def min_speech_duration_ms: () -> Integer
# Min silence duration to consider speech as ended.
#
def min_silence_duration_ms=: (Integer) -> Integer
def min_silence_duration_ms: () -> Integer
# Max duration of a speech segment before forcing a new segment.
def max_speech_duration_s=: (Float) -> Float
def max_speech_duration_s: () -> Float
# Padding added before and after speech segments.
#
def speech_pad_ms=: (Integer) -> Integer
def speech_pad_ms: () -> Integer
# Overlap in seconds when copying audio samples from speech segment.
#
def samples_overlap=: (Float) -> Float
def samples_overlap: () -> Float
def ==: (Params) -> (true | false)
end
class Context
def self.new: (String | path | ::URI::HTTP model_name_or_path) -> instance
def segments_from_samples: (Params, Array[Float] samples, ?Integer n_samples) -> Segments
| (Params, _Samples, ?Integer n_samples) -> Segments
def detect: (path wav_file_path, Params) -> Segments
end
class Segments
include Enumerable[Segment]
def each: { (Segment) -> void } -> void
| () -> Enumerator[Segment]
def length: -> Integer
end
class Segment
type deconstructed_keys = {
start_time: (Integer | nil),
end_time: (Integer | nil),
}
def start_time: () -> Integer
def end_time: () -> Integer
def deconstruct_keys: (Array[:start_time | :end_time] | nil) -> deconstructed_keys
end
end
class Error < StandardError

View File

@ -3,12 +3,12 @@ require "whisper"
require_relative "jfk_reader/jfk_reader"
class TestBase < Test::Unit::TestCase
AUDIO = File.join(__dir__, "..", "..", "..", "samples", "jfk.wav")
AUDIO = File.join(__dir__, "fixtures", "jfk.wav")
class << self
attr_reader :whisper
def whisper
return @whisper if @whisper
def startup
@whisper = Whisper::Context.new("base.en")
params = Whisper::Params.new
params.print_timestamps = false

View File

@ -111,6 +111,48 @@ class TestCallback < TestBase
assert_equal 100, last
end
def test_encoder_begin_callback
i = 0
@params.encoder_begin_callback = ->(context, state, user_data) {
i += 1
}
@whisper.transcribe(@audio, @params)
assert i > 0
end
def test_encoder_begin_callback_abort
logs = []
Whisper.log_set -> (level, buffer, user_data) {
logs << buffer if level == Whisper::LOG_LEVEL_ERROR
}, logs
@params.encoder_begin_callback = ->(context, state, user_data) {
return false
}
@whisper.transcribe(@audio, @params)
assert_match(/encoder_begin_callback returned false - aborting/, logs.join)
Whisper.log_set ->(level, buffer, user_data) {}, nil
end
def test_encoder_begin_callback_user_data
udata = Object.new
@params.encoder_begin_callback_user_data = udata
yielded = nil
@params.encoder_begin_callback = ->(context, state, user_data) {
yielded = user_data
}
@whisper.transcribe(@audio, @params)
assert_same udata, yielded
end
def test_on_encoder_begin
i = 0
@params.on_encoder_begin do
i += 1
end
@whisper.transcribe(@audio, @params)
assert i > 0
end
def test_abort_callback
i = 0
@params.abort_callback = ->(user_data) {

View File

@ -0,0 +1,82 @@
require_relative "helper"
class TestContextParams < TestBase
PARAM_NAMES = [
:use_gpu,
:flash_attn,
:gpu_device,
:dtw_token_timestamps,
:dtw_aheads_preset,
:dtw_n_top
]
def test_new
params = Whisper::Context::Params.new
assert_instance_of Whisper::Context::Params, params
end
def test_attributes
params = Whisper::Context::Params.new
assert_true params.use_gpu
params.use_gpu = false
assert_false params.use_gpu
assert_true params.flash_attn
params.flash_attn = false
assert_false params.flash_attn
assert_equal 0, params.gpu_device
params.gpu_device = 1
assert_equal 1, params.gpu_device
assert_false params.dtw_token_timestamps
params.dtw_token_timestamps = true
assert_true params.dtw_token_timestamps
assert_equal Whisper::AHEADS_NONE, params.dtw_aheads_preset
params.dtw_aheads_preset = Whisper::AHEADS_BASE
assert_equal Whisper::AHEADS_BASE, params.dtw_aheads_preset
assert_nil params.dtw_n_top
params.dtw_n_top = 6
assert_equal 6, params.dtw_n_top
params.dtw_n_top = nil
assert_nil params.dtw_n_top
end
def test_new_with_kw_args
params = Whisper::Context::Params.new(use_gpu: false)
assert_false params.use_gpu
end
def test_new_with_kw_args_non_existent
assert_raise ArgumentError do
Whisper::Context::Params.new(non_existent: "value")
end
end
data(PARAM_NAMES.collect {|param| [param, param]}.to_h)
def test_new_with_kw_args_default_values(param)
default_params = Whisper::Context::Params.new
default_value = default_params.send(param)
value = if param == :dtw_n_top
6
else
case default_value
in true | false
!default_value
in Integer
default_value + 1
end
end
params = Whisper::Context::Params.new(param => value)
assert_equal value, params.send(param)
PARAM_NAMES.reject {|name| name == param}.each do |name|
expected = default_params.send(name)
actual = params.send(name)
assert_equal expected, actual
end
end
end

View File

@ -106,4 +106,13 @@ class TestModel < TestBase
assert_equal 1, model.ftype
assert_equal "base", model.type
end
def test_coreml_model_auto_download
uri = Whisper::Model.coreml_compiled_models[Whisper::Model.pre_converted_models["tiny"]]
model_path = Pathname(uri.to_path).sub_ext("")
model_path.rmtree if model_path.exist?
uri.cache
assert_path_exist model_path
end
end

View File

@ -0,0 +1,51 @@
require_relative "helper"
require 'tempfile'
require 'tmpdir'
require 'shellwords'
class TestPackage < TestBase
def test_build
Tempfile.create do |file|
assert system("gem", "build", "whispercpp.gemspec", "--output", file.to_path.shellescape, exception: true)
assert file.size > 0
assert_path_exist file.to_path
end
end
sub_test_case "Building binary on installation" do
def setup
system "rake", "build", exception: true
end
def test_install
gemspec = Gem::Specification.load("whispercpp.gemspec")
Dir.mktmpdir do |dir|
system "gem", "install", "--install-dir", dir.shellescape, "--no-document", "pkg/#{gemspec.file_name.shellescape}", exception: true
assert_installed dir, gemspec.version
end
end
def test_install_with_coreml
omit_unless RUBY_PLATFORM.match?(/darwin/) do
gemspec = Gem::Specification.load("whispercpp.gemspec")
Dir.mktmpdir do |dir|
system "gem", "install", "--install-dir", dir.shellescape, "--no-document", "pkg/#{gemspec.file_name.shellescape}", "--", "--enable-whisper-coreml", exception: true
assert_installed dir, gemspec.version
libdir = File.join(dir, "gems", "#{gemspec.name}-#{gemspec.version}", "lib")
assert_nothing_raised do
system "ruby", "-I", libdir, "-r", "whisper", "-e", "Whisper::Context.new('tiny')", exception: true
end
assert_match(/COREML = 1/, `ruby -I #{libdir.shellescape} -r whisper -e 'puts Whisper.system_info_str'`)
end
end
end
private
def assert_installed(dir, version)
assert_path_exist File.join(dir, "gems/whispercpp-#{version}/lib", "whisper.#{RbConfig::CONFIG["DLEXT"]}")
assert_path_exist File.join(dir, "gems/whispercpp-#{version}/LICENSE")
assert_path_not_exist File.join(dir, "gems/whispercpp-#{version}/ext/build")
end
end
end

View File

@ -13,8 +13,10 @@ class TestParams < TestBase
:suppress_blank,
:suppress_nst,
:token_timestamps,
:max_len,
:split_on_word,
:initial_prompt,
:carry_initial_prompt,
:diarize,
:offset,
:duration,
@ -32,6 +34,9 @@ class TestParams < TestBase
:progress_callback_user_data,
:abort_callback,
:abort_callback_user_data,
:vad,
:vad_model_path,
:vad_params,
]
def setup
@ -41,6 +46,8 @@ class TestParams < TestBase
def test_language
@params.language = "en"
assert_equal @params.language, "en"
GC.compact
assert_equal @params.language, "en"
@params.language = "auto"
assert_equal @params.language, "auto"
end
@ -115,6 +122,13 @@ class TestParams < TestBase
assert !@params.print_timestamps
end
def test_carry_initial_prompt
@params.carry_initial_prompt = true
assert @params.carry_initial_prompt
@params.carry_initial_prompt = false
assert !@params.carry_initial_prompt
end
def test_suppress_blank
@params.suppress_blank = true
assert @params.suppress_blank
@ -136,6 +150,13 @@ class TestParams < TestBase
assert !@params.token_timestamps
end
def test_max_len
@params.max_len = 42
assert_equal @params.max_len, 42
@params.max_len = 0
assert_equal @params.max_len, 0
end
def test_split_on_word
@params.split_on_word = true
assert @params.split_on_word
@ -191,6 +212,50 @@ class TestParams < TestBase
assert_in_delta 0.2, @params.no_speech_thold
end
def test_vad
assert_false @params.vad
@params.vad = true
assert_true @params.vad
end
def test_vad_model_path
assert_nil @params.vad_model_path
@params.vad_model_path = "silero-v6.2.0"
assert_equal Whisper::Model.pre_converted_models["silero-v6.2.0"].to_path, @params.vad_model_path
end
def test_vad_model_path_with_nil
@params.vad_model_path = "silero-v6.2.0"
@params.vad_model_path = nil
assert_nil @params.vad_model_path
end
def test_vad_model_path_with_invalid
assert_raise TypeError do
@params.vad_model_path = Object.new
end
end
def test_vad_model_path_with_URI_string
@params.vad_model_path = "https://huggingface.co/ggml-org/whisper-vad/resolve/main/ggml-silero-v6.2.0.bin"
assert_equal @params.vad_model_path, Whisper::Model.pre_converted_models["silero-v6.2.0"].to_path
end
def test_vad_model_path_with_URI
@params.vad_model_path = URI("https://huggingface.co/ggml-org/whisper-vad/resolve/main/ggml-silero-v6.2.0.bin")
assert_equal @params.vad_model_path, Whisper::Model.pre_converted_models["silero-v6.2.0"].to_path
end
def test_vad_params
assert_kind_of Whisper::VAD::Params, @params.vad_params
default_params = @params.vad_params
assert_same default_params, @params.vad_params
assert_equal 0.5, default_params.threshold
new_params = Whisper::VAD::Params.new
@params.vad_params = new_params
assert_same new_params, @params.vad_params
end
def test_new_with_kw_args
params = Whisper::Params.new(language: "es")
assert_equal "es", params.language
@ -225,6 +290,10 @@ class TestParams < TestBase
proc {}
in [/_user_data\Z/, *]
Object.new
in [:vad_model_path, *]
Whisper::Model.pre_converted_models["silero-v6.2.0"].to_path
in [:vad_params, *]
Whisper::VAD::Params.new
end
params = Whisper::Params.new(param => value)
if Float === value

View File

@ -0,0 +1,145 @@
require_relative "helper"
class TestSegment < TestBase
def test_iteration
whisper.each_segment do |segment|
assert_instance_of Whisper::Segment, segment
end
end
def test_enumerator
enum = whisper.each_segment
assert_instance_of Enumerator, enum
enum.to_a.each_with_index do |segment, index|
assert_instance_of Whisper::Segment, segment
assert_kind_of Integer, index
end
end
def test_start_time
i = 0
whisper.each_segment do |segment|
assert_equal 0, segment.start_time if i == 0
i += 1
end
end
def test_end_time
i = 0
whisper.each_segment do |segment|
assert_equal whisper.full_get_segment_t1(i) * 10, segment.end_time
i += 1
end
end
def test_no_speech_prob
no_speech_prob = nil
whisper.each_segment do |segment|
no_speech_prob = segment.no_speech_prob
end
assert no_speech_prob > 0.0
end
def test_on_new_segment
params = Whisper::Params.new
seg = nil
index = 0
params.on_new_segment do |segment|
assert_instance_of Whisper::Segment, segment
if index == 0
seg = segment
assert_equal 0, segment.start_time
assert_match(/ask not what your country can do for you, ask what you can do for your country/, segment.text)
end
index += 1
end
whisper.transcribe(AUDIO, params)
assert_equal 0, seg.start_time
assert_match(/ask not what your country can do for you, ask what you can do for your country/, seg.text)
end
def test_on_new_segment_twice
params = Whisper::Params.new
seg = nil
params.on_new_segment do |segment|
seg = segment
return
end
params.on_new_segment do |segment|
assert_same seg, segment
return
end
whisper.transcribe(AUDIO, params)
end
def test_transcription_after_segment_retrieved
segment = whisper.each_segment.first
assert_match(/ask not what your country can do for you, ask what you can do for your country/, segment.text)
whisper.transcribe(AUDIO, Whisper::Params.new(offset: 5000))
assert_not_match(/ask not what your country can do for you, ask what you can do for your country/, segment.text)
assert_match(/what you can do for your country/i, segment.text)
end
def test_pattern_matching
segment = whisper.each_segment.first
segment => {start_time:, end_time:, text:, no_speech_prob:, speaker_turn_next:}
assert_equal segment.start_time, start_time
assert_equal segment.end_time, end_time
assert_equal segment.text, text
assert_equal segment.no_speech_prob, no_speech_prob
assert_equal segment.speaker_turn_next?, speaker_turn_next
end
def test_pattern_matching_partial
segment = whisper.each_segment.first
segment => {start_time:, end_time:, text:}
assert_equal segment.start_time, start_time
assert_equal segment.end_time, end_time
assert_equal segment.text, text
end
def test_deconstruct_keys
segment = whisper.each_segment.first
expected = {
start_time: segment.start_time,
end_time: segment.end_time,
text: segment.text,
no_speech_prob: segment.no_speech_prob,
speaker_turn_next: segment.speaker_turn_next?
}
assert_equal expected, segment.deconstruct_keys([:start_time, :end_time, :text, :no_speech_prob, :speaker_turn_next])
end
def test_deconstruct_keys_non_existent
omit "Undefined behavior"
segment = whisper.each_segment.first
assert_equal({}, segment.deconstruct_keys([:non_existent]))
end
def test_deconstruct_keys_too_many_keys
omit "Undefined behavior"
segment = whisper.each_segment.first
assert_equal({}, segment.deconstruct_keys([:start_time, :end_time, :text, :no_speech_prob, :speaker_turn_next, :extra_key]))
end
def test_deconstruct_keys_includes_non_existent_keys_not_too_many
omit "Undefined behavior"
segment = whisper.each_segment.first
expected = {
start_time: segment.start_time,
end_time: segment.end_time,
text: segment.text,
no_speech_prob: segment.no_speech_prob
}
assert_equal(expected, segment.deconstruct_keys([:start_time, :end_time, :text, :no_speech_prob, :non_existent]))
end
end

View File

@ -0,0 +1,81 @@
require_relative "helper"
class TestToken < TestBase
def setup
@segment = whisper.each_segment.first
@token = @segment.each_token.first
end
def test_n_tokens
assert_equal 27, @segment.n_tokens
end
def test_allocate
token = Whisper::Token.allocate
assert_raise do
token.id
end
end
def test_each_token
i = 0
@segment.each_token do |token|
i += 1
assert_instance_of Whisper::Token, token
end
assert_equal 27, i
end
def test_each_token_without_block
assert_instance_of Enumerator, @segment.each_token
end
def test_token
assert_instance_of Whisper::Token, @token
assert_instance_of Integer, @token.id
assert_instance_of Float, @token.probability
assert_instance_of Float, @token.log_probability
assert_instance_of Integer, @token.tid
assert_instance_of Float, @token.pt
assert_instance_of Float, @token.ptsum
assert_instance_of Integer, @token.start_time
assert_instance_of Integer, @token.end_time
assert_instance_of Integer, @token.t_dtw
assert_instance_of Float, @token.voice_length
assert_instance_of String, @token.text
end
def test_text
assert_equal ["[_BEG_]", " And", " so", " my", " fellow", " Americans", ",", " ask", " not", " what", " your", " country", " can", " do", " for", " you", ",", " ask", " what", " you", " can", " do", " for", " your", " country", ".", "[_TT_550]"],
@segment.each_token.collect(&:text)
end
def test_token_timestamps
params = Whisper::Params.new(token_timestamps: true)
whisper.transcribe(TestBase::AUDIO, params)
prev = -1
whisper.each_segment.first.each_token do |token|
assert token.start_time >= prev
assert token.end_time >= token.start_time
prev = token.end_time
end
end
def test_deconstruct_keys_with_nil
keys = %i[id tid probability log_probability pt ptsum t_dtw voice_length start_time end_time text]
expected = keys.collect {|key| [key, @token.send(key)] }.to_h
assert_equal(expected, @token.deconstruct_keys(nil))
end
def test_deconstruct_keys_with_keys
keys = %i[id tid probability log_probability pt ptsum t_dtw voice_length start_time end_time text]
expected = keys.collect {|key| [key, @token.send(key)] }.to_h
assert_equal expected, @token.deconstruct_keys(keys)
end
end

View File

@ -0,0 +1,19 @@
require_relative "helper"
class TestVAD < TestBase
def setup
@whisper = Whisper::Context.new("base.en")
vad_params = Whisper::VAD::Params.new
@params = Whisper::Params.new(
vad: true,
vad_model_path: "silero-v6.2.0",
vad_params:
)
end
def test_transcribe
@whisper.transcribe(TestBase::AUDIO, @params) do |text|
assert_match(/ask not what your country can do for you[,.] ask what you can do for your country/i, text)
end
end
end

View File

@ -0,0 +1,100 @@
require_relative "helper"
class TestVADContext < TestBase
def test_initialize
context = Whisper::VAD::Context.new("silero-v6.2.0")
assert_instance_of Whisper::VAD::Context, context
end
def test_detect
context = Whisper::VAD::Context.new("silero-v6.2.0")
segments = context.detect(AUDIO, Whisper::VAD::Params.new)
assert_segments segments
end
def test_invalid_model_type
assert_raise TypeError do
Whisper::VAD::Context.new(Object.new)
end
end
def test_allocate
vad = Whisper::VAD::Context.allocate
assert_raise do
vad.detect(AUDIO, Whisper::VAD::Params.new)
end
end
private
def assert_segments(segments)
assert_instance_of Whisper::VAD::Segments, segments
i = 0
segments.each do |segment|
i += 1
assert_instance_of Whisper::VAD::Segment, segment
end
assert i > 0
segments.each_with_index do |segment, index|
assert_instance_of Integer, index
end
assert_instance_of Enumerator, segments.each
segment = segments.each.first
assert_instance_of Float, segment.start_time
assert_instance_of Float, segment.end_time
segment => {start_time:, end_time:}
assert_equal segment.start_time, start_time
assert_equal segment.end_time, end_time
assert_equal 4, segments.length
end
sub_test_case "from samples" do
def setup
super
@vad = Whisper::VAD::Context.new("silero-v6.2.0")
@samples = File.read(AUDIO, nil, 78).unpack("s<*").collect {|i| i.to_f / 2**15}
end
def test_segments_from_samples
segments = @vad.segments_from_samples(Whisper::VAD::Params.new, @samples, @samples.length)
assert_segments segments
end
def test_segments_from_samples_without_length
segments = @vad.segments_from_samples(Whisper::VAD::Params.new, @samples)
assert_segments segments
end
def test_segments_from_samples_enumerator
samples = @samples.each
segments = @vad.segments_from_samples(Whisper::VAD::Params.new, samples, @samples.length)
assert_segments segments
end
def test_segments_from_samples_enumerator_without_length
samples = @samples.each
assert_raise ArgumentError do
@vad.segments_from_samples(Whisper::VAD::Params.new, samples)
end
end
def test_segments_from_samples_enumerator_with_too_large_length
samples = @samples.each.take(10).to_enum
assert_raise StopIteration do
@vad.segments_from_samples(Whisper::VAD::Params.new, samples, 11)
end
end
def test_segments_from_samples_with_memory_view
samples = JFKReader.new(AUDIO)
segments = @vad.segments_from_samples(Whisper::VAD::Params.new, samples)
assert_segments segments
end
end
end

View File

@ -0,0 +1,103 @@
require_relative "helper"
class TestVADParams < TestBase
PARAM_NAMES = [
:threshold,
:min_speech_duration_ms,
:min_silence_duration_ms,
:max_speech_duration_s,
:speech_pad_ms,
:samples_overlap
]
def setup
@params = Whisper::VAD::Params.new
end
def test_new
params = Whisper::VAD::Params.new
assert_kind_of Whisper::VAD::Params, params
end
def test_threshold
assert_in_delta @params.threshold, 0.5
@params.threshold = 0.7
assert_in_delta @params.threshold, 0.7
end
def test_min_speech_duration
pend
end
def test_min_speech_duration_ms
assert_equal 250, @params.min_speech_duration_ms
@params.min_speech_duration_ms = 500
assert_equal 500, @params.min_speech_duration_ms
end
def test_min_silence_duration_ms
assert_equal 100, @params.min_silence_duration_ms
@params.min_silence_duration_ms = 200
assert_equal 200, @params.min_silence_duration_ms
end
def test_max_speech_duration
pend
end
def test_max_speech_duration_s
assert @params.max_speech_duration_s >= 10e37 # Defaults to FLT_MAX
@params.max_speech_duration_s = 60.0
assert_equal 60.0, @params.max_speech_duration_s
end
def test_speech_pad_ms
assert_equal 30, @params.speech_pad_ms
@params.speech_pad_ms = 50
assert_equal 50, @params.speech_pad_ms
end
def test_samples_overlap
assert_in_delta @params.samples_overlap, 0.1
@params.samples_overlap = 0.5
assert_in_delta @params.samples_overlap, 0.5
end
def test_equal
assert_equal @params, Whisper::VAD::Params.new
end
def test_new_with_kw_args
params = Whisper::VAD::Params.new(threshold: 0.7)
assert_in_delta params.threshold, 0.7
assert_equal 250, params.min_speech_duration_ms
end
def test_new_with_kw_args_non_existent
assert_raise ArgumentError do
Whisper::VAD::Params.new(non_existent: "value")
end
end
data(PARAM_NAMES.collect {|param| [param, param]}.to_h)
def test_new_with_kw_args_default_values(param)
default_value = @params.send(param)
value = default_value + 1
params = Whisper::VAD::Params.new(param => value)
if Float === value
assert_in_delta value, params.send(param)
else
assert_equal value, params.send(param)
end
PARAM_NAMES.reject {|name| name == param}.each do |name|
expected = @params.send(name)
actual = params.send(name)
if Float === expected
assert_in_delta expected, actual
else
assert_equal expected, actual
end
end
end
end

View File

@ -0,0 +1,19 @@
require_relative "helper"
class TestVADSegment < TestBase
def test_initialize
segment = Whisper::VAD::Segment.new
assert_raise do
segment.start_time
end
assert_raise do
segment.end_time
end
assert_raise do
segment => {start_time:, end_time:}
end
end
end

View File

@ -0,0 +1,16 @@
require_relative "helper"
class TestVADSegments < TestBase
def test_initialize
segments = Whisper::VAD::Segments.new
assert_raise do
segments.each do |segment|
end
end
assert_raise do
segments.length
end
end
end

View File

@ -1,6 +1,7 @@
require_relative "helper"
require "stringio"
require "etc"
require "pathname"
# Exists to detect memory-related bug
Whisper.log_set ->(level, buffer, user_data) {}, nil
@ -20,6 +21,44 @@ class TestWhisper < TestBase
}
end
def test_whisper_pathname
@whisper = Whisper::Context.new("base.en")
params = Whisper::Params.new
@whisper.transcribe(Pathname(AUDIO), params) {|text|
assert_match(/ask not what your country can do for you, ask what you can do for your country/, text)
}
end
def test_transcribe_non_parallel
@whisper = Whisper::Context.new("base.en")
params = Whisper::Params.new
@whisper.transcribe(AUDIO, params, n_processors: 1) {|text|
assert_match(/ask not what your country can do for you, ask what you can do for your country/, text)
}
end
def test_transcribe_n_processors
@whisper = Whisper::Context.new("base.en")
params = Whisper::Params.new
without_log_callback do
@whisper.transcribe(AUDIO, params, n_processors: 4) {|text|
assert_match(/what you can do for your country/i, text)
}
end
end
private
def without_log_callback
Whisper.log_set nil, nil
yield
ensure
Whisper.log_set ->(level, buffer, user_data) {}, nil
end
sub_test_case "After transcription" do
def test_full_n_segments
assert_equal 1, whisper.full_n_segments
@ -94,6 +133,14 @@ class TestWhisper < TestBase
end
end
def test_system_info_str
assert_match(/\AWHISPER : COREML = \d | OPENVINO = \d |/, Whisper.system_info_str)
end
def test_version
assert_kind_of String, Whisper::VERSION
end
def test_log_set
user_data = Object.new
logs = []
@ -123,6 +170,13 @@ class TestWhisper < TestBase
$stderr = stderr
end
def test_access_attribute_without_initialization
whisper = Whisper::Context.allocate
assert_raise do
whisper.model_type
end
end
sub_test_case "full" do
def setup
super
@ -174,9 +228,21 @@ class TestWhisper < TestBase
assert_match(/ask not what your country can do for you, ask what you can do for your country/, @whisper.each_segment.first.text)
end
def test_full_with_memory_view_gc
samples = JFKReader.new(AUDIO)
@whisper.full(@params, samples)
GC.start
require "fiddle"
Fiddle::MemoryView.export samples do |view|
assert_equal 176000, view.to_s.unpack("#{view.format}*").length
end
end
def test_full_parallel
nprocessors = 2
@whisper.full_parallel(@params, @samples, @samples.length, nprocessors)
without_log_callback do
@whisper.full_parallel(@params, @samples, @samples.length, nprocessors)
end
assert_equal nprocessors, @whisper.full_n_segments
text = @whisper.each_segment.collect(&:text).join
@ -187,7 +253,9 @@ class TestWhisper < TestBase
def test_full_parallel_with_memory_view
nprocessors = 2
samples = JFKReader.new(AUDIO)
@whisper.full_parallel(@params, samples, nil, nprocessors)
without_log_callback do
@whisper.full_parallel(@params, samples, nil, nprocessors)
end
assert_equal nprocessors, @whisper.full_n_segments
text = @whisper.each_segment.collect(&:text).join
@ -206,7 +274,9 @@ class TestWhisper < TestBase
def test_full_parallel_without_length
nprocessors = 2
@whisper.full_parallel(@params, @samples, nil, nprocessors)
without_log_callback do
@whisper.full_parallel(@params, @samples, nil, nprocessors)
end
assert_equal nprocessors, @whisper.full_n_segments
text = @whisper.each_segment.collect(&:text).join
@ -223,4 +293,48 @@ class TestWhisper < TestBase
assert_match(/for your country/i, text)
end
end
def test_to_srt
whisper = Whisper::Context.new("base.en")
whisper.transcribe AUDIO, @params
lines = whisper.to_srt.lines
assert_match(/\A\d+\n/, lines[0])
assert_match(/\d{2}:\d{2}:\d{2},\d{3} --> \d{2}:\d{2}:\d{2},\d{3}\n/, lines[1])
assert_match(/ask not what your country can do for you, ask what you can do for your country/, lines[2])
end
def test_to_webvtt
whisper = Whisper::Context.new("base.en")
whisper.transcribe AUDIO, @params
lines = whisper.to_webvtt.lines
assert_equal "WEBVTT\n", lines[0]
assert_equal "\n", lines[1]
assert_match(/\A\d+\n/, lines[2])
assert_match(/\d{2}:\d{2}:\d{2}\.\d{3} --> \d{2}:\d{2}:\d{2}\.\d{3}\n/, lines[3])
assert_match(/ask not what your country can do for you, ask what you can do for your country/, lines[4])
end
sub_test_case "Format needs escape" do
def setup
@whisper = Whisper::Context.new("base.en")
@whisper.transcribe AUDIO, Whisper::Params.new
segment = @whisper.each_segment.first
segment.define_singleton_method :text do
"& so my fellow Americans --> ask not what your country can do for you <-- ask what you can do for your country."
end
@whisper.define_singleton_method :each_segment do
Enumerator.new(3) {|yielder| 3.times {yielder << segment}}
end
end
def test_to_srt_escape
assert_equal "&amp; so my fellow Americans --&gt; ask not what your country can do for you &lt;-- ask what you can do for your country.\n", @whisper.to_srt.lines[2]
end
def test_to_webvtt_escape
assert_equal "&amp; so my fellow Americans --&gt; ask not what your country can do for you &lt;-- ask what you can do for your country.\n", @whisper.to_webvtt.lines[4]
end
end
end

View File

@ -1,31 +0,0 @@
require_relative "helper"
require 'tempfile'
require 'tmpdir'
require 'shellwords'
class TestPackage < TestBase
def test_build
Tempfile.create do |file|
assert system("gem", "build", "whispercpp.gemspec", "--output", file.to_path.shellescape, exception: true)
assert file.size > 0
assert_path_exist file.to_path
end
end
sub_test_case "Building binary on installation" do
def setup
system "rake", "build", exception: true
end
def test_install
match_data = `rake -Tbuild`.match(/(whispercpp-(.+)\.gem)/)
filename = match_data[1]
version = match_data[2]
basename = "whisper.#{RbConfig::CONFIG["DLEXT"]}"
Dir.mktmpdir do |dir|
system "gem", "install", "--install-dir", dir.shellescape, "--no-document", "pkg/#{filename.shellescape}", exception: true
assert_path_exist File.join(dir, "gems/whispercpp-#{version}/lib", basename)
end
end
end
end

View File

@ -1,74 +0,0 @@
require_relative "helper"
class TestSegment < TestBase
def test_iteration
whisper.each_segment do |segment|
assert_instance_of Whisper::Segment, segment
end
end
def test_enumerator
enum = whisper.each_segment
assert_instance_of Enumerator, enum
enum.to_a.each_with_index do |segment, index|
assert_instance_of Whisper::Segment, segment
assert_kind_of Integer, index
end
end
def test_start_time
i = 0
whisper.each_segment do |segment|
assert_equal 0, segment.start_time if i == 0
i += 1
end
end
def test_end_time
i = 0
whisper.each_segment do |segment|
assert_equal whisper.full_get_segment_t1(i) * 10, segment.end_time
i += 1
end
end
def test_no_speech_prob
no_speech_prob = nil
whisper.each_segment do |segment|
no_speech_prob = segment.no_speech_prob
end
assert no_speech_prob > 0.0
end
def test_on_new_segment
params = Whisper::Params.new
seg = nil
index = 0
params.on_new_segment do |segment|
assert_instance_of Whisper::Segment, segment
if index == 0
seg = segment
assert_equal 0, segment.start_time
assert_match(/ask not what your country can do for you, ask what you can do for your country/, segment.text)
end
index += 1
end
whisper.transcribe(AUDIO, params)
assert_equal 0, seg.start_time
assert_match(/ask not what your country can do for you, ask what you can do for your country/, seg.text)
end
def test_on_new_segment_twice
params = Whisper::Params.new
seg = nil
params.on_new_segment do |segment|
seg = segment
return
end
params.on_new_segment do |segment|
assert_same seg, segment
return
end
whisper.transcribe(AUDIO, params)
end
end

View File

@ -3,8 +3,7 @@ require_relative "extsources"
Gem::Specification.new do |s|
s.name = "whispercpp"
s.authors = ["Georgi Gerganov", "Todd A. Fisher"]
s.version = '1.3.1'
s.date = '2024-12-19'
s.version = '1.3.7'
s.description = %q{High-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model via Ruby}
s.email = 'todd.fisher@gmail.com'
s.extra_rdoc_files = ['LICENSE', 'README.md']
@ -15,18 +14,19 @@ Gem::Specification.new do |s|
if s.extra_rdoc_files.include?(basename)
basename
else
file.sub("../..", "ext")
file.sub("../..", "ext/sources")
.sub("../javascript", "ext/sources/bindings/javascript")
end
}
s.summary = %q{Ruby whisper.cpp bindings}
s.test_files = s.files.select {|file| file.start_with? "tests/"}
s.test_files = s.files.select {|file| file.start_with? "test/"}
s.extensions << 'ext/extconf.rb'
s.required_ruby_version = '>= 3.1.0'
#### Documentation and testing.
s.homepage = 'https://github.com/ggerganov/whisper.cpp'
s.homepage = 'https://github.com/ggml-org/whisper.cpp'
s.rdoc_options = ['--main', 'README.md']

View File

@ -15,6 +15,7 @@ GGML_METAL_EMBED_LIBRARY=ON
GGML_BLAS_DEFAULT=ON
GGML_METAL_USE_BF16=ON
GGML_OPENMP=OFF
BUILD_STATIC_XCFRAMEWORK=${BUILD_STATIC_XCFRAMEWORK:-OFF}
COMMON_C_FLAGS="-Wno-macro-redefined -Wno-shorten-64-to-32 -Wno-unused-command-line-argument -g"
COMMON_CXX_FLAGS="-Wno-macro-redefined -Wno-shorten-64-to-32 -Wno-unused-command-line-argument -g"
@ -41,6 +42,11 @@ COMMON_CMAKE_ARGS=(
-DGGML_OPENMP=${GGML_OPENMP}
)
XCODE_VERSION=$(xcodebuild -version 2>/dev/null | head -n1 | awk '{ print $2 }')
MAJOR_VERSION=$(echo $XCODE_VERSION | cut -d. -f1)
MINOR_VERSION=$(echo $XCODE_VERSION | cut -d. -f2)
echo "Detected Xcode version: $XCODE_VERSION"
check_required_tool() {
local tool=$1
local install_message=$2
@ -322,6 +328,15 @@ combine_static_libraries() {
arch_flags+=" -arch $arch"
done
if [[ "${BUILD_STATIC_XCFRAMEWORK}" == "ON" ]]; then
echo "Packaging static framework for ${platform}."
mkdir -p "$(dirname "${base_dir}/${output_lib}")"
cp "${temp_dir}/combined.a" "${base_dir}/${output_lib}"
rm -rf "${temp_dir}"
return
fi
# Create dynamic library
echo "Creating dynamic library for ${platform}."
xcrun -sdk $sdk clang++ -dynamiclib \
@ -335,21 +350,28 @@ combine_static_libraries() {
# Platform-specific post-processing for device builds
if [[ "$is_simulator" == "false" ]]; then
if command -v vtool &>/dev/null; then
if command -v xcrun vtool &>/dev/null; then
case "$platform" in
"ios")
echo "Marking binary as a framework binary for iOS..."
vtool -set-build-version ios ${IOS_MIN_OS_VERSION} ${IOS_MIN_OS_VERSION} -replace \
xcrun vtool -set-build-version ios ${IOS_MIN_OS_VERSION} ${IOS_MIN_OS_VERSION} -replace \
-output "${base_dir}/${output_lib}" "${base_dir}/${output_lib}"
;;
"visionos")
echo "Marking binary as a framework binary for visionOS..."
vtool -set-build-version xros ${VISIONOS_MIN_OS_VERSION} ${VISIONOS_MIN_OS_VERSION} -replace \
if [[ "$MAJOR_VERSION" -gt 16 ]] || [[ "$MAJOR_VERSION" -eq 16 && "$MINOR_VERSION" -gt 2 ]]; then
echo "Xcode version greater than 16.2, using visionOS."
VISION_OS_BUILD_VERSION="visionos"
else
echo "Xcode version less than or equal to 16.2, using xros."
VISION_OS_BUILD_VERSION="xros"
fi
xcrun vtool -set-build-version ${VISION_OS_BUILD_VERSION} ${VISIONOS_MIN_OS_VERSION} ${VISIONOS_MIN_OS_VERSION} -replace \
-output "${base_dir}/${output_lib}" "${base_dir}/${output_lib}"
;;
"tvos")
echo "Marking binary as a framework binary for tvOS..."
vtool -set-build-version tvos ${TVOS_MIN_OS_VERSION} ${TVOS_MIN_OS_VERSION} -replace \
xcrun vtool -set-build-version tvos ${TVOS_MIN_OS_VERSION} ${TVOS_MIN_OS_VERSION} -replace \
-output "${base_dir}/${output_lib}" "${base_dir}/${output_lib}"
;;
esac
@ -517,13 +539,27 @@ combine_static_libraries "build-tvos-device" "Release-appletvos" "tvos" "false"
# Create XCFramework with correct debug symbols paths
echo "Creating XCFramework..."
if [[ "${BUILD_STATIC_XCFRAMEWORK}" == "ON" ]]; then
xcodebuild -create-xcframework \
-framework $(pwd)/build-ios-sim/framework/whisper.framework \
-framework $(pwd)/build-ios-device/framework/whisper.framework \
-framework $(pwd)/build-macos/framework/whisper.framework \
-framework $(pwd)/build-visionos/framework/whisper.framework \
-framework $(pwd)/build-visionos-sim/framework/whisper.framework \
-framework $(pwd)/build-tvos-device/framework/whisper.framework \
-framework $(pwd)/build-tvos-sim/framework/whisper.framework \
-output $(pwd)/build-apple/whisper.xcframework
exit 0
fi
xcodebuild -create-xcframework \
-framework $(pwd)/build-ios-sim/framework/whisper.framework \
-debug-symbols $(pwd)/build-ios-sim/dSYMs/whisper.dSYM \
-framework $(pwd)/build-ios-device/framework/whisper.framework \
-debug-symbols $(pwd)/build-ios-device/dSYMs/whisper.dSYM \
-framework $(pwd)/build-macos/framework/whisper.framework \
-debug-symbols $(pwd)/build-macos/dSYMS/whisper.dSYM \
-debug-symbols $(pwd)/build-macos/dSYMs/whisper.dSYM \
-framework $(pwd)/build-visionos/framework/whisper.framework \
-debug-symbols $(pwd)/build-visionos/dSYMs/whisper.dSYM \
-framework $(pwd)/build-visionos-sim/framework/whisper.framework \

@ -24,9 +24,9 @@ mkdir -p "$2"
OUT=$(realpath "$1")
MNT=$(realpath "$2")
rm -f "$OUT/*.log"
rm -f "$OUT/*.exit"
rm -f "$OUT/*.md"
rm -vf $OUT/*.log
rm -vf $OUT/*.exit
rm -vf $OUT/*.md
sd=`dirname $0`
cd $sd/../
@ -50,8 +50,35 @@ fi
CMAKE_EXTRA="-DWHISPER_FATAL_WARNINGS=ON"
if [ ! -z ${GG_BUILD_METAL} ]; then
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_METAL=ON"
fi
if [ ! -z ${GG_BUILD_CUDA} ]; then
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=native"
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_CUDA=ON"
if command -v nvidia-smi >/dev/null 2>&1; then
CUDA_ARCH=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader,nounits 2>/dev/null | head -1 | tr -d '.')
if [[ -n "$CUDA_ARCH" && "$CUDA_ARCH" =~ ^[0-9]+$ ]]; then
CMAKE_EXTRA="${CMAKE_EXTRA} -DCMAKE_CUDA_ARCHITECTURES=${CUDA_ARCH}"
else
echo "Warning: Using fallback CUDA architectures"
CMAKE_EXTRA="${CMAKE_EXTRA} -DCMAKE_CUDA_ARCHITECTURES=61;70;75;80;86;89"
fi
else
echo "Error: nvidia-smi not found, cannot build with CUDA"
exit 1
fi
fi
if [ ! -z ${GG_BUILD_ROCM} ]; then
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_HIP=ON"
if [ -z ${GG_BUILD_AMDGPU_TARGETS} ]; then
echo "Missing GG_BUILD_AMDGPU_TARGETS, please set it to your GPU architecture (e.g. gfx90a, gfx1100, etc.)"
exit 1
fi
CMAKE_EXTRA="${CMAKE_EXTRA} -DAMDGPU_TARGETS=${GG_BUILD_AMDGPU_TARGETS}"
fi
if [ ! -z ${GG_BUILD_SYCL} ]; then
@ -60,28 +87,38 @@ if [ ! -z ${GG_BUILD_SYCL} ]; then
echo "source /opt/intel/oneapi/setvars.sh"
exit 1
fi
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGGML_SYCL_F16=ON"
fi
if [ ! -z ${GG_BUILD_OPENVINO} ]; then
CMAKE_EXTRA="${CMAKE_EXTRA} -DWHISPER_OPENVINO=ON"
fi
if [ ! -z ${GG_BUILD_METAL} ]; then
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_METAL=ON"
# Use only main GPU
export ONEAPI_DEVICE_SELECTOR="level_zero:0"
# Enable sysman for correct memory reporting
export ZES_ENABLE_SYSMAN=1
# to circumvent precision issues on CPY operations
export SYCL_PROGRAM_COMPILE_OPTIONS="-cl-fp32-correctly-rounded-divide-sqrt"
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_SYCL=1 -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGGML_SYCL_F16=ON"
fi
if [ ! -z ${GG_BUILD_VULKAN} ]; then
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_VULKAN=ON"
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_VULKAN=1"
# if on Mac, disable METAL
if [[ "$OSTYPE" == "darwin"* ]]; then
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_METAL=OFF -DGGML_BLAS=OFF"
fi
fi
if [ ! -z ${GG_BUILD_BLAS} ]; then
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_BLAS=ON"
if [ ! -z ${GG_BUILD_WEBGPU} ]; then
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_WEBGPU=1"
fi
if [ ! -z ${GG_BUILD_COREML} ]; then
CMAKE_EXTRA="${CMAKE_EXTRA} -DWHISPER_COREML=ON"
if [ ! -z ${GG_BUILD_MUSA} ]; then
# Use qy1 by default (MTT S80)
MUSA_ARCH=${MUSA_ARCH:-21}
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_MUSA=ON -DMUSA_ARCHITECTURES=${MUSA_ARCH}"
fi
if [ ! -z ${GG_BUILD_NO_SVE} ]; then
# Armv9 and newer enable SVE by default; adjust these flags depending on the CPU used
CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_NATIVE=OFF -DGGML_CPU_ARM_ARCH=armv8.5-a+fp16+i8mm"
fi
## helpers
@ -178,7 +215,7 @@ function gg_run_ctest {
mode=$2
cd ${SRC}
rm -rf build-ci-${mode} && mkdir build-ci-${mode} && cd build-ci-${mode}
set -e
@ -209,7 +246,7 @@ function gg_run_bench {
cd ${SRC}
# set flash attention flag if enabled
fattn=""
fattn="-nfa"
if [ "$BENCH_FLASH_ATTN" -eq 1 ]; then
fattn="-fa"
fi
@ -219,7 +256,7 @@ function gg_run_bench {
echo "Running memcpy benchmark"
(time ./build-ci-release/bin/whisper-bench -w 1 -t $BENCH_N_THREADS 2>&1) | tee -a $OUT/${ci}-memcpy.log
gg_check_last_command_status "$OUT/${ci}-memcpy.exit" "memcpy benchmark"
echo "Running ggml_mul_mat benchmark with $BENCH_N_THREADS threads"
(time ./build-ci-release/bin/whisper-bench -w 2 -t $BENCH_N_THREADS 2>&1) | tee -a $OUT/${ci}-mul_mat.log
gg_check_last_command_status "$OUT/${ci}-mul_mat.exit" "ggml_mul_mat benchmark"
@ -233,6 +270,8 @@ function gg_run_bench {
printf "| %16s | %13s | %3s | %3s | %7s | %7s | %7s | %7s | %7s |\n" "---" "---" "---" "---" "---" "---" "---" "---" "---"
} | tee -a $OUT/${ci}-models-table.log
res=0
# run benchmark for each model
for model in "${MODELS[@]}"; do
echo "Benchmarking model: $model"
@ -283,8 +322,11 @@ function gg_run_bench {
| tee -a $OUT/${ci}-models-table.log
else
echo "Benchmark failed for model: $model" | tee -a $OUT/${ci}-bench-errors.log
res=1
fi
done
return $res
}
function gg_sum_bench {
@ -326,11 +368,12 @@ ret=0
for model in "${MODELS[@]}"; do
test $ret -eq 0 && gg_download_model ${model}
done
if [ -z ${GG_BUILD_SYCL}]; then
test $ret -eq 0 && gg_run ctest debug
fi
test $ret -eq 0 && gg_run ctest debug
test $ret -eq 0 && gg_run ctest release
test $ret -eq 0 && gg_run bench
cat $OUT/README.md
exit $ret

@ -15,7 +15,7 @@ jobs:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
- uses: actions/stale@v10
with:
exempt-issue-labels: "refactor,help wanted,good first issue,research,bug,roadmap"
days-before-issue-stale: 30

@ -0,0 +1,16 @@
set( CMAKE_SYSTEM_NAME Darwin )
set( CMAKE_SYSTEM_PROCESSOR arm64 )
set( target arm64-apple-darwin-macho )
set( CMAKE_C_COMPILER clang )
set( CMAKE_CXX_COMPILER clang++ )
set( CMAKE_C_COMPILER_TARGET ${target} )
set( CMAKE_CXX_COMPILER_TARGET ${target} )
set( arch_c_flags "-march=armv8.4-a -fvectorize -ffp-model=fast -fno-finite-math-only" )
set( warn_c_flags "-Wno-format -Wno-unused-variable -Wno-unused-function" )
set( CMAKE_C_FLAGS_INIT "${arch_c_flags} ${warn_c_flags}" )
set( CMAKE_CXX_FLAGS_INIT "${arch_c_flags} ${warn_c_flags}" )

@ -0,0 +1,16 @@
set( CMAKE_SYSTEM_NAME Windows )
set( CMAKE_SYSTEM_PROCESSOR arm64 )
set( target arm64-pc-windows-msvc )
set( CMAKE_C_COMPILER clang )
set( CMAKE_CXX_COMPILER clang++ )
set( CMAKE_C_COMPILER_TARGET ${target} )
set( CMAKE_CXX_COMPILER_TARGET ${target} )
set( arch_c_flags "-march=armv8.7-a -fvectorize -ffp-model=fast -fno-finite-math-only" )
set( warn_c_flags "-Wno-format -Wno-unused-variable -Wno-unused-function -Wno-gnu-zero-variadic-macro-arguments" )
set( CMAKE_C_FLAGS_INIT "${arch_c_flags} ${warn_c_flags}" )
set( CMAKE_CXX_FLAGS_INIT "${arch_c_flags} ${warn_c_flags}" )

@ -0,0 +1,29 @@
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR riscv64)
set(CMAKE_SYSTEM_VERSION 1)
if (CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(riscv)")
message(STATUS "HOST SYSTEM ${CMAKE_HOST_SYSTEM_PROCESSOR}")
else()
set(GNU_MACHINE riscv64-unknown-linux-gnu CACHE STRING "GNU compiler triple")
if (DEFINED ENV{RISCV_ROOT_PATH})
file(TO_CMAKE_PATH $ENV{RISCV_ROOT_PATH} RISCV_ROOT_PATH)
else()
message(FATAL_ERROR "RISCV_ROOT_PATH env must be defined")
endif()
set(RISCV_ROOT_PATH ${RISCV_ROOT_PATH} CACHE STRING "root path to riscv toolchain")
set(CMAKE_C_COMPILER ${RISCV_ROOT_PATH}/bin/riscv64-unknown-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER ${RISCV_ROOT_PATH}/bin/riscv64-unknown-linux-gnu-g++)
set(CMAKE_STRIP ${RISCV_ROOT_PATH}/bin/riscv64-unknown-linux-gnu-strip)
set(CMAKE_FIND_ROOT_PATH "${RISCV_ROOT_PATH}/riscv64-unknown-linux-gnu")
set(CMAKE_SYSROOT "${RISCV_ROOT_PATH}/sysroot")
endif()
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
set(CMAKE_C_FLAGS "-march=rv64gcv_zfh_zba_zicbop -mabi=lp64d ${CMAKE_C_FLAGS}")
set(CMAKE_CXX_FLAGS "-march=rv64gcv_zfh_zba_zicbop -mabi=lp64d ${CXX_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -latomic")

@ -3,60 +3,25 @@ set(WHISPER_BUILD_COMMIT @WHISPER_BUILD_COMMIT@)
set(WHISPER_BUILD_NUMBER @WHISPER_BUILD_NUMBER@)
set(WHISPER_SHARED_LIB @BUILD_SHARED_LIBS@)
set(GGML_BLAS @GGML_BLAS@)
set(GGML_CUDA @GGML_CUDA@)
set(GGML_METAL @GGML_METAL@)
set(GGML_HIPBLAS @GGML_HIPBLAS@)
set(GGML_ACCELERATE @GGML_ACCELERATE@)
@PACKAGE_INIT@
set_and_check(WHISPER_INCLUDE_DIR "@PACKAGE_WHISPER_INCLUDE_INSTALL_DIR@")
set_and_check(WHISPER_LIB_DIR "@PACKAGE_WHISPER_LIB_INSTALL_DIR@")
set_and_check(WHISPER_BIN_DIR "@PACKAGE_WHISPER_BIN_INSTALL_DIR@")
# Ensure transient dependencies satisfied
find_package(Threads REQUIRED)
if (APPLE AND GGML_ACCELERATE)
find_library(ACCELERATE_FRAMEWORK Accelerate REQUIRED)
endif()
if (GGML_BLAS)
find_package(BLAS REQUIRED)
endif()
if (GGML_CUDA)
find_package(CUDAToolkit REQUIRED)
endif()
if (GGML_METAL)
find_library(FOUNDATION_LIBRARY Foundation REQUIRED)
find_library(METAL_FRAMEWORK Metal REQUIRED)
find_library(METALKIT_FRAMEWORK MetalKit REQUIRED)
endif()
if (GGML_HIPBLAS)
find_package(hip REQUIRED)
find_package(hipblas REQUIRED)
find_package(rocblas REQUIRED)
endif()
find_package(ggml REQUIRED HINTS ${LLAMA_LIB_DIR}/cmake)
find_library(whisper_LIBRARY whisper
REQUIRED
HINTS ${WHISPER_LIB_DIR})
set(_whisper_link_deps "Threads::Threads" "@WHISPER_EXTRA_LIBS@")
set(_whisper_transient_defines "@WHISPER_TRANSIENT_DEFINES@")
HINTS ${WHISPER_LIB_DIR}
NO_CMAKE_FIND_ROOT_PATH
)
add_library(whisper UNKNOWN IMPORTED)
set_target_properties(whisper
PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "${WHISPER_INCLUDE_DIR}"
INTERFACE_LINK_LIBRARIES "${_whisper_link_deps}"
INTERFACE_COMPILE_DEFINITIONS "${_whisper_transient_defines}"
INTERFACE_LINK_LIBRARIES "ggml::ggml;ggml::ggml-base;"
IMPORTED_LINK_INTERFACE_LANGUAGES "CXX"
IMPORTED_LOCATION "${whisper_LIBRARY}"
INTERFACE_COMPILE_FEATURES cxx_std_11

@ -0,0 +1,5 @@
set( CMAKE_SYSTEM_NAME Windows )
set( CMAKE_SYSTEM_PROCESSOR x86_64 )
set( CMAKE_C_COMPILER clang )
set( CMAKE_CXX_COMPILER clang++ )

@ -98,6 +98,7 @@ if (EMSCRIPTEN)
add_subdirectory(stream.wasm)
add_subdirectory(command.wasm)
add_subdirectory(bench.wasm)
add_subdirectory(wchess)
elseif(CMAKE_JS_VERSION)
add_subdirectory(addon.node)
else()
@ -105,6 +106,7 @@ else()
add_subdirectory(bench)
add_subdirectory(server)
add_subdirectory(quantize)
add_subdirectory(vad-speech-segments)
if (WHISPER_SDL2)
add_subdirectory(stream)
add_subdirectory(command)

@ -1,8 +1,10 @@
# addon
# whisper.cpp Node.js addon
This is an addon demo that can **perform whisper model inference in `node` and `electron` environments**, based on [cmake-js](https://github.com/cmake-js/cmake-js).
It can serve as a reference for integrating whisper.cpp into other Node.js projects.
This addon now supports **Voice Activity Detection (VAD)** for improved transcription performance.
## Install
```shell
@ -26,12 +28,88 @@ For Electron addon and cmake-js options, you can see [cmake-js](https://github.c
## Run
### Basic Usage
```shell
cd examples/addon.node
node index.js --language='language' --model='model-path' --fname_inp='file-path'
```
Because this is a simple demo, only the parameters above are set in the node environment.
### VAD (Voice Activity Detection) Usage
Other parameters can also be specified in the node environment.
Run the VAD example with performance comparison:
```shell
node vad-example.js
```
## Voice Activity Detection (VAD) Support
VAD can significantly improve transcription performance by only processing speech segments, which is especially beneficial for audio files with long periods of silence.
### VAD Model Setup
Before using VAD, download a VAD model:
```shell
# From the whisper.cpp root directory
./models/download-vad-model.sh silero-v6.2.0
```
### VAD Parameters
All VAD parameters are optional and have sensible defaults:
- `vad`: Enable VAD (default: false)
- `vad_model`: Path to VAD model file (required when VAD enabled)
- `vad_threshold`: Speech detection threshold 0.0-1.0 (default: 0.5)
- `vad_min_speech_duration_ms`: Min speech duration in ms (default: 250)
- `vad_min_silence_duration_ms`: Min silence duration in ms (default: 100)
- `vad_max_speech_duration_s`: Max speech duration in seconds (default: FLT_MAX)
- `vad_speech_pad_ms`: Speech padding in ms (default: 30)
- `vad_samples_overlap`: Sample overlap 0.0-1.0 (default: 0.1)
### JavaScript API Example
```javascript
const path = require("path");
const { whisper } = require(path.join(__dirname, "../../build/Release/addon.node"));
const { promisify } = require("util");
const whisperAsync = promisify(whisper);
// With VAD enabled
const vadParams = {
language: "en",
model: path.join(__dirname, "../../models/ggml-base.en.bin"),
fname_inp: path.join(__dirname, "../../samples/jfk.wav"),
vad: true,
vad_model: path.join(__dirname, "../../models/ggml-silero-v6.2.0.bin"),
vad_threshold: 0.5,
progress_callback: (progress) => console.log(`Progress: ${progress}%`)
};
whisperAsync(vadParams).then(result => console.log(result));
```
## Supported Parameters
Both traditional whisper.cpp parameters and new VAD parameters are supported:
- `language`: Language code (e.g., "en", "es", "fr")
- `model`: Path to whisper model file
- `fname_inp`: Path to input audio file
- `use_gpu`: Enable GPU acceleration (default: true)
- `flash_attn`: Enable flash attention (default: false)
- `no_prints`: Disable console output (default: false)
- `no_timestamps`: Disable timestamps (default: false)
- `detect_language`: Auto-detect language (default: false)
- `audio_ctx`: Audio context size (default: 0)
- `max_len`: Maximum segment length (default: 0)
- `max_context`: Maximum context size (default: -1)
- `prompt`: Initial prompt for decoder
- `comma_in_time`: Use comma in timestamps (default: true)
- `print_progress`: Print progress info (default: false)
- `progress_callback`: Progress callback function
- VAD parameters (see above section)
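As a rough sketch of how the parameters listed above fit together (not part of the addon itself), the promisified API from the earlier example can be called with VAD disabled; the paths and model name below mirror the examples above and are assumptions about your local setup. The returned `result.transcription` is an array of `[start, end, text]` segments.
```javascript
// Hypothetical usage of the listed parameters with VAD disabled.
// Paths follow the earlier examples and may need adjusting for your setup.
const path = require("path");
const { promisify } = require("util");
const { whisper } = require(path.join(__dirname, "../../build/Release/addon.node"));
const whisperAsync = promisify(whisper);
const params = {
  language: "en",
  model: path.join(__dirname, "../../models/ggml-base.en.bin"),
  fname_inp: path.join(__dirname, "../../samples/jfk.wav"),
  use_gpu: true,
  no_prints: true,
  no_timestamps: false,
  comma_in_time: false,
  max_len: 0,
  vad: false, // see the VAD section above to enable it
  progress_callback: (progress) => console.log(`progress: ${progress}%`),
};
whisperAsync(params).then((result) => {
  // result.transcription holds [start, end, text] triples
  const text = result.transcription.map((segment) => segment[2]).join(" ");
  console.log(text);
});
```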

@ -1,31 +1,133 @@
const path = require("path");
const { whisper } = require(path.join(
__dirname,
"../../../build/Release/addon.node"
));
const { promisify } = require("util");
const { join } = require('path');
const { whisper } = require('../../../build/Release/addon.node');
const { promisify } = require('util');
const whisperAsync = promisify(whisper);
const whisperParamsMock = {
language: "en",
model: path.join(__dirname, "../../../models/ggml-base.en.bin"),
fname_inp: path.join(__dirname, "../../../samples/jfk.wav"),
const commonParams = {
language: 'en',
model: join(__dirname, '../../../models/ggml-base.en.bin'),
fname_inp: join(__dirname, '../../../samples/jfk.wav'),
use_gpu: true,
flash_attn: false,
no_prints: true,
comma_in_time: false,
translate: true,
no_timestamps: false,
detect_language: false,
audio_ctx: 0,
max_len: 0,
max_len: 0
};
describe("Run whisper.node", () => {
test("it should receive a non-empty value", async () => {
let result = await whisperAsync(whisperParamsMock);
describe('Whisper.cpp Node.js addon with VAD support', () => {
test('Basic whisper transcription without VAD', async () => {
const params = {
...commonParams,
vad: false
};
expect(result.length).toBeGreaterThan(0);
}, 10000);
const result = await whisperAsync(params);
expect(typeof result).toBe('object');
expect(Array.isArray(result.transcription)).toBe(true);
expect(result.transcription.length).toBeGreaterThan(0);
// Check that we got some transcription text
const text = result.transcription.map(segment => segment[2]).join(' ');
expect(text.length).toBeGreaterThan(0);
expect(text.toLowerCase()).toContain('ask not');
}, 30000);
test('VAD parameters validation', async () => {
// Test with invalid VAD model - should return empty transcription
const invalidParams = {
...commonParams,
vad: true,
vad_model: 'non-existent-model.bin',
vad_threshold: 0.5
};
// This should handle the error gracefully and return empty transcription
const result = await whisperAsync(invalidParams);
expect(typeof result).toBe('object');
expect(Array.isArray(result.transcription)).toBe(true);
// When VAD model doesn't exist, it should return empty transcription
expect(result.transcription.length).toBe(0);
}, 10000);
test('VAD parameter parsing', async () => {
// Test that VAD parameters are properly parsed (even if VAD model doesn't exist)
const vadParams = {
...commonParams,
vad: false, // Disabled so no model required
vad_threshold: 0.7,
vad_min_speech_duration_ms: 300,
vad_min_silence_duration_ms: 150,
vad_max_speech_duration_s: 45.0,
vad_speech_pad_ms: 50,
vad_samples_overlap: 0.15
};
const result = await whisperAsync(vadParams);
expect(typeof result).toBe('object');
expect(Array.isArray(result.transcription)).toBe(true);
}, 30000);
test('Progress callback with VAD disabled', async () => {
let progressCalled = false;
let lastProgress = 0;
const params = {
...commonParams,
vad: false,
progress_callback: (progress) => {
progressCalled = true;
lastProgress = progress;
expect(progress).toBeGreaterThanOrEqual(0);
expect(progress).toBeLessThanOrEqual(100);
}
};
const result = await whisperAsync(params);
expect(progressCalled).toBe(true);
expect(lastProgress).toBe(100);
expect(typeof result).toBe('object');
}, 30000);
test('Language detection without VAD', async () => {
const params = {
...commonParams,
vad: false,
detect_language: true,
language: 'auto'
};
const result = await whisperAsync(params);
expect(typeof result).toBe('object');
expect(typeof result.language).toBe('string');
expect(result.language.length).toBeGreaterThan(0);
}, 30000);
test('Basic transcription with all VAD parameters set', async () => {
// Test with VAD disabled but all parameters set to ensure no crashes
const params = {
...commonParams,
vad: false, // Disabled so it works without VAD model
vad_model: '', // Empty model path
vad_threshold: 0.6,
vad_min_speech_duration_ms: 200,
vad_min_silence_duration_ms: 80,
vad_max_speech_duration_s: 25.0,
vad_speech_pad_ms: 40,
vad_samples_overlap: 0.08
};
const result = await whisperAsync(params);
expect(typeof result).toBe('object');
expect(Array.isArray(result.transcription)).toBe(true);
expect(result.transcription.length).toBeGreaterThan(0);
}, 30000);
});

View File

@ -9,6 +9,7 @@
#include <vector>
#include <cmath>
#include <cstdint>
#include <cfloat>
struct whisper_params {
int32_t n_threads = std::min(4, (int32_t) std::thread::hardware_concurrency());
@ -38,6 +39,7 @@ struct whisper_params {
bool print_progress = false;
bool no_timestamps = false;
bool no_prints = false;
bool detect_language= false;
bool use_gpu = true;
bool flash_attn = false;
bool comma_in_time = true;
@ -50,6 +52,16 @@ struct whisper_params {
std::vector<std::string> fname_out = {};
std::vector<float> pcmf32 = {}; // mono-channel F32 PCM
// Voice Activity Detection (VAD) parameters
bool vad = false;
std::string vad_model = "";
float vad_threshold = 0.5f;
int vad_min_speech_duration_ms = 250;
int vad_min_silence_duration_ms = 100;
float vad_max_speech_duration_s = FLT_MAX;
int vad_speech_pad_ms = 30;
float vad_samples_overlap = 0.1f;
};
struct whisper_print_user_data {
@ -82,7 +94,7 @@ void whisper_print_segment_callback(struct whisper_context * ctx, struct whisper
t1 = whisper_full_get_segment_t1(ctx, i);
}
if (!params.no_timestamps) {
if (!params.no_timestamps && !params.no_prints) {
printf("[%s --> %s] ", to_timestamp(t0).c_str(), to_timestamp(t1).c_str());
}
@ -113,12 +125,14 @@ void whisper_print_segment_callback(struct whisper_context * ctx, struct whisper
// colorful print bug
//
const char * text = whisper_full_get_segment_text(ctx, i);
printf("%s%s", speaker.c_str(), text);
if (!params.no_prints) {
const char * text = whisper_full_get_segment_text(ctx, i);
printf("%s%s", speaker.c_str(), text);
}
// with timestamps or speakers: each segment on new line
if (!params.no_timestamps || params.diarize) {
if ((!params.no_timestamps || params.diarize) && !params.no_prints) {
printf("\n");
}
@ -128,6 +142,11 @@ void whisper_print_segment_callback(struct whisper_context * ctx, struct whisper
void cb_log_disable(enum ggml_log_level, const char *, void *) {}
struct whisper_result {
std::vector<std::vector<std::string>> segments;
std::string language;
};
class ProgressWorker : public Napi::AsyncWorker {
public:
ProgressWorker(Napi::Function& callback, whisper_params params, Napi::Function progress_callback, Napi::Env env)
@ -158,15 +177,27 @@ class ProgressWorker : public Napi::AsyncWorker {
void OnOK() override {
Napi::HandleScope scope(Env());
Napi::Object res = Napi::Array::New(Env(), result.size());
for (uint64_t i = 0; i < result.size(); ++i) {
if (params.detect_language) {
Napi::Object resultObj = Napi::Object::New(Env());
resultObj.Set("language", Napi::String::New(Env(), result.language));
Callback().Call({Env().Null(), resultObj});
}
Napi::Object returnObj = Napi::Object::New(Env());
if (!result.language.empty()) {
returnObj.Set("language", Napi::String::New(Env(), result.language));
}
Napi::Array transcriptionArray = Napi::Array::New(Env(), result.segments.size());
for (uint64_t i = 0; i < result.segments.size(); ++i) {
Napi::Object tmp = Napi::Array::New(Env(), 3);
for (uint64_t j = 0; j < 3; ++j) {
tmp[j] = Napi::String::New(Env(), result[i][j]);
tmp[j] = Napi::String::New(Env(), result.segments[i][j]);
}
res[i] = tmp;
}
Callback().Call({Env().Null(), res});
transcriptionArray[i] = tmp;
}
returnObj.Set("transcription", transcriptionArray);
Callback().Call({Env().Null(), returnObj});
}
// Progress callback function - using thread-safe function
@ -176,19 +207,19 @@ class ProgressWorker : public Napi::AsyncWorker {
auto callback = [progress](Napi::Env env, Napi::Function jsCallback) {
jsCallback.Call({Napi::Number::New(env, progress)});
};
tsfn.BlockingCall(callback);
}
}
private:
whisper_params params;
std::vector<std::vector<std::string>> result;
whisper_result result;
Napi::Env env;
Napi::ThreadSafeFunction tsfn;
// Custom run function with progress callback support
int run_with_progress(whisper_params &params, std::vector<std::vector<std::string>> &result) {
int run_with_progress(whisper_params &params, whisper_result & result) {
if (params.no_prints) {
whisper_log_set(cb_log_disable, NULL);
}
@ -277,7 +308,8 @@ class ProgressWorker : public Napi::AsyncWorker {
wparams.print_timestamps = !params.no_timestamps;
wparams.print_special = params.print_special;
wparams.translate = params.translate;
wparams.language = params.language.c_str();
wparams.language = params.detect_language ? "auto" : params.language.c_str();
wparams.detect_language = params.detect_language;
wparams.n_threads = params.n_threads;
wparams.n_max_text_ctx = params.max_context >= 0 ? params.max_context : wparams.n_max_text_ctx;
wparams.offset_ms = params.offset_t_ms;
@ -312,34 +344,38 @@ class ProgressWorker : public Napi::AsyncWorker {
};
wparams.progress_callback_user_data = this;
// Abort mechanism example
{
static bool is_aborted = false; // Note: this should be atomic to avoid data races
// Set VAD parameters
wparams.vad = params.vad;
wparams.vad_model_path = params.vad_model.c_str();
wparams.encoder_begin_callback = [](struct whisper_context * /*ctx*/, struct whisper_state * /*state*/, void * user_data) {
bool is_aborted = *(bool*)user_data;
return !is_aborted;
};
wparams.encoder_begin_callback_user_data = &is_aborted;
}
wparams.vad_params.threshold = params.vad_threshold;
wparams.vad_params.min_speech_duration_ms = params.vad_min_speech_duration_ms;
wparams.vad_params.min_silence_duration_ms = params.vad_min_silence_duration_ms;
wparams.vad_params.max_speech_duration_s = params.vad_max_speech_duration_s;
wparams.vad_params.speech_pad_ms = params.vad_speech_pad_ms;
wparams.vad_params.samples_overlap = params.vad_samples_overlap;
if (whisper_full_parallel(ctx, wparams, pcmf32.data(), pcmf32.size(), params.n_processors) != 0) {
fprintf(stderr, "failed to process audio\n");
return 10;
}
}
}
}
if (params.detect_language || params.language == "auto") {
result.language = whisper_lang_str(whisper_full_lang_id(ctx));
}
const int n_segments = whisper_full_n_segments(ctx);
result.resize(n_segments);
result.segments.resize(n_segments);
for (int i = 0; i < n_segments; ++i) {
const char * text = whisper_full_get_segment_text(ctx, i);
const int64_t t0 = whisper_full_get_segment_t0(ctx, i);
const int64_t t1 = whisper_full_get_segment_t1(ctx, i);
result[i].emplace_back(to_timestamp(t0, params.comma_in_time));
result[i].emplace_back(to_timestamp(t1, params.comma_in_time));
result[i].emplace_back(text);
result.segments[i].emplace_back(to_timestamp(t0, params.comma_in_time));
result.segments[i].emplace_back(to_timestamp(t1, params.comma_in_time));
result.segments[i].emplace_back(text);
}
whisper_print_timings(ctx);
@ -360,23 +396,62 @@ Napi::Value whisper(const Napi::CallbackInfo& info) {
std::string language = whisper_params.Get("language").As<Napi::String>();
std::string model = whisper_params.Get("model").As<Napi::String>();
std::string input = whisper_params.Get("fname_inp").As<Napi::String>();
bool use_gpu = whisper_params.Get("use_gpu").As<Napi::Boolean>();
bool flash_attn = whisper_params.Get("flash_attn").As<Napi::Boolean>();
bool no_prints = whisper_params.Get("no_prints").As<Napi::Boolean>();
bool no_timestamps = whisper_params.Get("no_timestamps").As<Napi::Boolean>();
int32_t audio_ctx = whisper_params.Get("audio_ctx").As<Napi::Number>();
bool comma_in_time = whisper_params.Get("comma_in_time").As<Napi::Boolean>();
int32_t max_len = whisper_params.Get("max_len").As<Napi::Number>();
bool use_gpu = true;
if (whisper_params.Has("use_gpu") && whisper_params.Get("use_gpu").IsBoolean()) {
use_gpu = whisper_params.Get("use_gpu").As<Napi::Boolean>();
}
bool flash_attn = false;
if (whisper_params.Has("flash_attn") && whisper_params.Get("flash_attn").IsBoolean()) {
flash_attn = whisper_params.Get("flash_attn").As<Napi::Boolean>();
}
bool no_prints = false;
if (whisper_params.Has("no_prints") && whisper_params.Get("no_prints").IsBoolean()) {
no_prints = whisper_params.Get("no_prints").As<Napi::Boolean>();
}
bool no_timestamps = false;
if (whisper_params.Has("no_timestamps") && whisper_params.Get("no_timestamps").IsBoolean()) {
no_timestamps = whisper_params.Get("no_timestamps").As<Napi::Boolean>();
}
bool detect_language = false;
if (whisper_params.Has("detect_language") && whisper_params.Get("detect_language").IsBoolean()) {
detect_language = whisper_params.Get("detect_language").As<Napi::Boolean>();
}
int32_t audio_ctx = 0;
if (whisper_params.Has("audio_ctx") && whisper_params.Get("audio_ctx").IsNumber()) {
audio_ctx = whisper_params.Get("audio_ctx").As<Napi::Number>();
}
bool comma_in_time = true;
if (whisper_params.Has("comma_in_time") && whisper_params.Get("comma_in_time").IsBoolean()) {
comma_in_time = whisper_params.Get("comma_in_time").As<Napi::Boolean>();
}
int32_t max_len = 0;
if (whisper_params.Has("max_len") && whisper_params.Get("max_len").IsNumber()) {
max_len = whisper_params.Get("max_len").As<Napi::Number>();
}
// Add support for max_context
int32_t max_context = -1;
if (whisper_params.Has("max_context") && whisper_params.Get("max_context").IsNumber()) {
max_context = whisper_params.Get("max_context").As<Napi::Number>();
}
// support prompt
std::string prompt = "";
if (whisper_params.Has("prompt") && whisper_params.Get("prompt").IsString()) {
prompt = whisper_params.Get("prompt").As<Napi::String>();
}
// Add support for print_progress
bool print_progress = false;
if (whisper_params.Has("print_progress")) {
if (whisper_params.Has("print_progress") && whisper_params.Get("print_progress").IsBoolean()) {
print_progress = whisper_params.Get("print_progress").As<Napi::Boolean>();
}
// Add support for progress_callback
@ -385,6 +460,47 @@ Napi::Value whisper(const Napi::CallbackInfo& info) {
progress_callback = whisper_params.Get("progress_callback").As<Napi::Function>();
}
// Add support for VAD parameters
bool vad = false;
if (whisper_params.Has("vad") && whisper_params.Get("vad").IsBoolean()) {
vad = whisper_params.Get("vad").As<Napi::Boolean>();
}
std::string vad_model = "";
if (whisper_params.Has("vad_model") && whisper_params.Get("vad_model").IsString()) {
vad_model = whisper_params.Get("vad_model").As<Napi::String>();
}
float vad_threshold = 0.5f;
if (whisper_params.Has("vad_threshold") && whisper_params.Get("vad_threshold").IsNumber()) {
vad_threshold = whisper_params.Get("vad_threshold").As<Napi::Number>();
}
int vad_min_speech_duration_ms = 250;
if (whisper_params.Has("vad_min_speech_duration_ms") && whisper_params.Get("vad_min_speech_duration_ms").IsNumber()) {
vad_min_speech_duration_ms = whisper_params.Get("vad_min_speech_duration_ms").As<Napi::Number>();
}
int vad_min_silence_duration_ms = 100;
if (whisper_params.Has("vad_min_silence_duration_ms") && whisper_params.Get("vad_min_silence_duration_ms").IsNumber()) {
vad_min_silence_duration_ms = whisper_params.Get("vad_min_silence_duration_ms").As<Napi::Number>();
}
float vad_max_speech_duration_s = FLT_MAX;
if (whisper_params.Has("vad_max_speech_duration_s") && whisper_params.Get("vad_max_speech_duration_s").IsNumber()) {
vad_max_speech_duration_s = whisper_params.Get("vad_max_speech_duration_s").As<Napi::Number>();
}
int vad_speech_pad_ms = 30;
if (whisper_params.Has("vad_speech_pad_ms") && whisper_params.Get("vad_speech_pad_ms").IsNumber()) {
vad_speech_pad_ms = whisper_params.Get("vad_speech_pad_ms").As<Napi::Number>();
}
float vad_samples_overlap = 0.1f;
if (whisper_params.Has("vad_samples_overlap") && whisper_params.Get("vad_samples_overlap").IsNumber()) {
vad_samples_overlap = whisper_params.Get("vad_samples_overlap").As<Napi::Number>();
}
Napi::Value pcmf32Value = whisper_params.Get("pcmf32");
std::vector<float> pcmf32_vec;
if (pcmf32Value.IsTypedArray()) {
@ -407,8 +523,20 @@ Napi::Value whisper(const Napi::CallbackInfo& info) {
params.pcmf32 = pcmf32_vec;
params.comma_in_time = comma_in_time;
params.max_len = max_len;
params.max_context = max_context;
params.print_progress = print_progress;
params.prompt = prompt;
params.detect_language = detect_language;
// Set VAD parameters
params.vad = vad;
params.vad_model = vad_model;
params.vad_threshold = vad_threshold;
params.vad_min_speech_duration_ms = vad_min_speech_duration_ms;
params.vad_min_silence_duration_ms = vad_min_silence_duration_ms;
params.vad_max_speech_duration_s = vad_max_speech_duration_s;
params.vad_speech_pad_ms = vad_speech_pad_ms;
params.vad_samples_overlap = vad_samples_overlap;
Napi::Function callback = info[1].As<Napi::Function>();
// Create a new Worker class with progress callback support

@ -1,8 +1,10 @@
const path = require("path");
const { whisper } = require(path.join(
__dirname,
"../../build/Release/addon.node"
));
const path = require('path');
const os = require('os');
const isWindows = os.platform() === 'win32';
const buildPath = isWindows ? "../../build/bin/Release/addon.node" : "../../build/Release/addon.node";
const { whisper } = require(path.join(__dirname, buildPath));
const { promisify } = require("util");
const whisperAsync = promisify(whisper);
@ -17,6 +19,7 @@ const whisperParams = {
comma_in_time: false,
translate: true,
no_timestamps: false,
detect_language: false,
audio_ctx: 0,
max_len: 0,
progress_callback: (progress) => {
@ -31,6 +34,8 @@ const params = Object.fromEntries(
const [key, value] = item.slice(2).split("=");
if (key === "audio_ctx") {
whisperParams[key] = parseInt(value);
} else if (key === "detect_language") {
whisperParams[key] = value === "true";
} else {
whisperParams[key] = value;
}

@ -0,0 +1,132 @@
const path = require("path");
const { whisper } = require(path.join(
__dirname,
"../../build/Release/addon.node"
));
const { promisify } = require("util");
const whisperAsync = promisify(whisper);
// Example with VAD enabled
const vadParams = {
language: "en",
model: path.join(__dirname, "../../models/ggml-base.en.bin"),
fname_inp: path.join(__dirname, "../../samples/jfk.wav"),
use_gpu: true,
flash_attn: false,
no_prints: false,
comma_in_time: true,
translate: false,
no_timestamps: false,
detect_language: false,
audio_ctx: 0,
max_len: 0,
// VAD parameters
vad: true,
vad_model: path.join(__dirname, "../../models/ggml-silero-v6.2.0.bin"), // You need to download this model
vad_threshold: 0.5,
vad_min_speech_duration_ms: 250,
vad_min_silence_duration_ms: 100,
vad_max_speech_duration_s: 30.0,
vad_speech_pad_ms: 30,
vad_samples_overlap: 0.1,
progress_callback: (progress) => {
console.log(`VAD Transcription progress: ${progress}%`);
}
};
// Example without VAD (traditional approach)
const traditionalParams = {
language: "en",
model: path.join(__dirname, "../../models/ggml-base.en.bin"),
fname_inp: path.join(__dirname, "../../samples/jfk.wav"),
use_gpu: true,
flash_attn: false,
no_prints: false,
comma_in_time: true,
translate: false,
no_timestamps: false,
detect_language: false,
audio_ctx: 0,
max_len: 0,
vad: false, // Explicitly disable VAD
progress_callback: (progress) => {
console.log(`Traditional transcription progress: ${progress}%`);
}
};
async function runVADExample() {
try {
console.log("=== Whisper.cpp Node.js VAD Example ===\n");
// Check if VAD model exists
const fs = require('fs');
if (!fs.existsSync(vadParams.vad_model)) {
console.log("⚠️ VAD model not found. Please download the VAD model first:");
console.log(" ./models/download-vad-model.sh silero-v6.2.0");
console.log(" Or run: python models/convert-silero-vad-to-ggml.py");
console.log("\n Falling back to traditional transcription without VAD...\n");
// Run without VAD
console.log("🎵 Running traditional transcription...");
const traditionalResult = await whisperAsync(traditionalParams);
console.log("\n📝 Traditional transcription result:");
console.log(traditionalResult);
return;
}
console.log("🎵 Running transcription with VAD enabled...");
console.log("VAD Parameters:");
console.log(` - Threshold: ${vadParams.vad_threshold}`);
console.log(` - Min speech duration: ${vadParams.vad_min_speech_duration_ms}ms`);
console.log(` - Min silence duration: ${vadParams.vad_min_silence_duration_ms}ms`);
console.log(` - Max speech duration: ${vadParams.vad_max_speech_duration_s}s`);
console.log(` - Speech padding: ${vadParams.vad_speech_pad_ms}ms`);
console.log(` - Samples overlap: ${vadParams.vad_samples_overlap}\n`);
const startTime = Date.now();
const vadResult = await whisperAsync(vadParams);
const vadDuration = Date.now() - startTime;
console.log("\n✅ VAD transcription completed!");
console.log(`⏱️ Processing time: ${vadDuration}ms`);
console.log("\n📝 VAD transcription result:");
console.log(vadResult);
// Compare with traditional approach
console.log("\n🔄 Running traditional transcription for comparison...");
const traditionalStartTime = Date.now();
const traditionalResult = await whisperAsync(traditionalParams);
const traditionalDuration = Date.now() - traditionalStartTime;
console.log("\n✅ Traditional transcription completed!");
console.log(`⏱️ Processing time: ${traditionalDuration}ms`);
console.log("\n📝 Traditional transcription result:");
console.log(traditionalResult);
// Performance comparison
console.log("\n📊 Performance Comparison:");
console.log(`VAD: ${vadDuration}ms`);
console.log(`Traditional: ${traditionalDuration}ms`);
const speedup = traditionalDuration / vadDuration;
if (speedup > 1) {
console.log(`🚀 VAD is ${speedup.toFixed(2)}x faster!`);
} else {
console.log(` Traditional approach was ${(1/speedup).toFixed(2)}x faster in this case.`);
}
} catch (error) {
console.error("❌ Error during transcription:", error);
}
}
// Run the example
if (require.main === module) {
runVADExample();
}
module.exports = {
runVADExample,
vadParams,
traditionalParams
};

Some files were not shown because too many files have changed in this diff.