Commit Graph

3874 Commits

Author SHA1 Message Date
shani-f f1da026bb8 SYCL: optimized repeat_back kernel (3× fewer asm instructions, 2× faster) (#16869)
* SYCL repeat_back v1 — add core op + switch case

* Implement repeat_back SYCL operation and minor fixes

* SYCL: optimize repeat_back kernel

* Remove Hebrew comment from repeat_back.cpp

* Remove comments for code clarity

Removed comments to clean up the code.

* Fix formatting in ggml-sycl.cpp

* Formatted lambda according to legacy style. No logic changes

* Remove blank line in repeat_back.cpp

Remove unnecessary blank line before assigning acc to dst_dd.
2025-11-09 23:38:03 +02:00
Georgi Gerganov 39834fde1b clip : use FA (llama/16837)
* clip : use FA

* cont : add warning about unsupported ops

* implement "auto" mode for clip flash attn

* clip : print more detailed op support info during warmup

* cont : remove obsolete comment [no ci]

* improve debugging message

* trailing space

* metal : remove stray return

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2025-11-09 23:38:03 +02:00
mnehete32 5ed97df483 CUDA: add FLOOR, CEIL, ROUND, TRUNC unary ops (llama/16917) 2025-11-09 23:38:03 +02:00
Aaron Teo 84854d246a ggml: add s390x cpu-feats (llama/16774) 2025-11-09 23:38:03 +02:00
Jeff Bolz 2001457367 vulkan: Fix multi_add invalid descriptor usage (llama/16899) 2025-11-09 23:38:03 +02:00
Jeff Bolz 90be9c9de1 vulkan: fuse mul_mat+add and mul_mat_id+add_id (llama/16868)
* vulkan: fuse mul_mat+add and mul_mat_id+add_id

The fusion is only applied for the mat-vec mul paths.

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* fix 32b build

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-09 23:38:03 +02:00
Oliver Simons 7d55fba06f CUDA: Remove unneeded bias/gate dims in fused mmvq (llama/16858)
* CUDA: Remove unneeded bias/gate dims in fused mmvq

It was pointed out
[here](https://github.com/ggml-org/llama.cpp/pull/16847#discussion_r2476798989)
that only a single value is needed per target col per thread

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Fix "Error 991-D: extra braces are nonstandard" during compilation

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-11-09 23:38:03 +02:00
Johannes Gäßler 52e1bbb554 CUDA: Volta tensor core support for MMF (llama/16843)
* CUDA: Volta tensor core support for MMF

* more generic checks for hardware support

* Update ggml/src/ggml-cuda/mmf.cuh

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

---------

Co-authored-by: Aman Gupta <amangupta052@gmail.com>
2025-11-09 23:38:03 +02:00
Georgi Gerganov addda802dd ggml : fix conv2d_dw SVE path (ggml/1380)
* Fix test-conv2d-dw failure on ARM SVE by using runtime vector length

The ggml_compute_forward_conv_2d_dw_cwhn function was using a hardcoded GGML_F32_EPR (8) for SIMD vectorization, but on ARM SVE the actual vector length varies by hardware. This caused incorrect computation when processing CWHN layout tensors on ARM machines.

Fix by using svcntw() to get the runtime SVE vector length instead of the compile-time constant.

Co-authored-by: ggerganov <1991296+ggerganov@users.noreply.github.com>

* ci : reduce sam score threshold

* ci : update bbox checks for sam test

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ggerganov <1991296+ggerganov@users.noreply.github.com>
2025-11-09 23:38:03 +02:00
Aman Gupta 7d60b431a5 CUDA: add expert reduce kernel (llama/16857)
* CUDA: add expert reduce kernel

* contiguous checks, better formatting, use std::vector instead of array

* use vector empty instead of size

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-11-09 23:38:03 +02:00
Jeff Bolz a9ba988e56 vulkan: disable spirv-opt for rope shaders (llama/16872) 2025-11-09 23:38:03 +02:00
Masato Nakasaka e2b3eca0dc vulkan: Fix crash when FP16 mul_mat accumulation is not supported (llama/16796)
* Experimental crash fix

* added assert for aborting and fixed comment

* changed to check if a pipeline is empty or not

* Moved function in class definition

* replaced with is_empty

* Modified is_empty to check only unaligned pipelines
2025-11-09 23:38:03 +02:00
Ruben Ortlam 7ed570ee94 vulkan: fix shmem overrun in mmq id shader (llama/16873)
* vulkan: fix shmem overrun in mmq id shader

* metal : fix mul_mm_id

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-11-09 23:38:03 +02:00
l3utterfly 486d39c2cb ggml-hexagon: respect input size when getting/setting tensor data (llama/16836)
* respect input size when getting/setting tensor data

allows partial repacking/copying when the requested tensor size is smaller than the actual tensor

* Removed duplicate repack_mxfp4_mxfp4x4x2 function
2025-11-09 23:38:03 +02:00
lhez 7fdd53ac0d opencl: fix boundary handling for mul_mm (llama/16875) 2025-11-09 23:38:03 +02:00
Max Krasnyansky ffe1c832bd cpu: introduce chunking for repack matmuls and enable matmul-id chunking on ARM64 (llama/16833)
Very similar implementation to the flash-attention chunking, with similar benefits.
2025-11-09 23:38:03 +02:00
JJJYmmm e1780b209d model: add support for qwen3vl series (llama/16780)
* support qwen3vl series.

Co-authored-by: Thireus ☠ <Thireus@users.noreply.github.com>
Co-authored-by: yairpatch <yairpatch@users.noreply.github.com>
Co-authored-by: LETS-BEE <LETS-BEE@users.noreply.github.com>

* bugfix: fix the arch check for qwen3vl-moe.

* use build_ffn

* optimize deepstack structure

* optimize deepstack feature saving

* Revert "optimize deepstack feature saving" as a temporary fix

This reverts commit f321b9fdf13e59527408152e73b1071e19a87e71.

* code clean

* use fused qkv in clip

* clean up / rm is_deepstack_layers for simplification

* add test model

* move test model to "big" section

* fix imrope check

* remove trailing whitespace

* fix rope fail

* metal : add imrope support

* add imrope support for sycl

* vulkan: add imrope w/o check

* fix vulkan

* webgpu: add imrope w/o check

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* fix tensor mapping

---------

Co-authored-by: Thireus ☠ <Thireus@users.noreply.github.com>
Co-authored-by: yairpatch <yairpatch@users.noreply.github.com>
Co-authored-by: LETS-BEE <LETS-BEE@users.noreply.github.com>
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-09 23:38:03 +02:00
Max Krasnyansky f1fdb91e95 cpu: introduce chunking for flash attention (llama/16829)
Factor out the core FA loop into flash_atten_f16_one_chunk and add an outer loop
on top that handles the chunks.
2025-11-09 23:38:03 +02:00
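The chunking refactor above can be sketched generically: a per-chunk worker plus an outer loop that walks the rows in fixed-size chunks, so threads can grab chunks independently. This is an illustrative sketch under my own naming, not the actual ggml code (the real worker is `flash_atten_f16_one_chunk`).

```c
#include <stddef.h>

// Hypothetical per-chunk worker: processes rows [start, end).
// The multiply stands in for the real per-row flash-attention computation.
static void process_one_chunk(const float *src, float *dst, size_t start, size_t end) {
    for (size_t i = start; i < end; ++i) {
        dst[i] = src[i] * 2.0f;
    }
}

// Outer loop: split n rows into chunks of `chunk` rows; each call to the
// worker is independent, which is what enables multi-threaded dispatch.
void process_chunked(const float *src, float *dst, size_t n, size_t chunk) {
    for (size_t start = 0; start < n; start += chunk) {
        size_t end = start + chunk < n ? start + chunk : n;
        process_one_chunk(src, dst, start, end);
    }
}
```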
Sigbjørn Skjæret f7dfa39104 cuda : fix argsort with 64k+ rows (llama/16849) 2025-11-09 23:38:03 +02:00
Jeff Bolz 887d984558 vulkan: Handle argsort with a large number of rows (llama/16851) 2025-11-09 23:38:03 +02:00
Oliver Simons 41f4daca57 Hide latency of bias and gate-loading (llama/16847)
This is realised by loading them into registers before computation of
the dot-product, effectively batching them together with said
dot-product. As a lot of threads are alive here, the warp scheduler has
enough threads available to effectively hide the cost of additionally
loading those two floats.
2025-11-09 23:38:03 +02:00
Jeff Bolz efe8099268 vulkan: Fuse rope+set_rows (llama/16769)
This pattern appears in a lot of models, the rope operation is applied right
before storing into the KV cache (usually on the K tensor).

Add a path to some of the rope shaders that computes the destination address
based on the set_rows tensor. Compile variants of the shader with D_TYPE of
f16 (the usual KV cache type).

Add a src3 operand to ggml_vk_op_f32 - sometimes rope uses three srcs and needs
the fourth for the row indices.

Add fused_ops_write_mask to indicate which intermediate tensors need to write
their results to memory. Skipping writing the roped K value helps to allow more
nodes to run concurrently.

Add logic to ggml_vk_graph_optimize to make ROPE+VIEW+SET_ROWS consecutive. It
rarely starts out that way in the graph.

Add new backend tests.
2025-11-09 23:38:03 +02:00
Jeff Bolz 35a3fda240 vulkan: Update topk_moe fusion to handle gpt's late softmax (llama/16656)
* vulkan: Update topk_moe fusion to handle gpt's late softmax

Based on #16649.

* Add ggml_check_edges

* Add sync logging to show fusion effects

* handle clamp added in #16655

* Update ggml/src/ggml-impl.h

Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-11-09 23:38:03 +02:00
Ruben Ortlam bc944bddc8 Vulkan MMQ Integer Dot Refactor and K-Quant support (llama/16536)
* vulkan: add mmq q2_k integer dot support

* Refactor mmq caching

* Reduce mmq register use

* Load 4 quant blocks into shared memory in one step

* Pack q2_k blocks into caches of 32

* Use 32-bit accumulators for integer dot matmul

* Add q4_k mmq

* Add q3_k mmq

* Add q5_k mmq

* Add q6_k mmq

* Add mxfp4 mmq, enable MMQ MUL_MAT_ID

* Fix mmv dm loads
2025-11-09 23:38:03 +02:00
Max Krasnyansky 4d74160c9a Hexagon Op queue & dispatch optimizations (llama/16820)
* hexagon: remove dspqueue callbacks and do all read processing inplace

* hexagon: there is no need to ref/deref the buffers at this point

We're not going to release the buffers without flushing the session queue.
So there is no need to inc/dec the refcounts for every request.
We also don't need to include those bufs in the response.

* hexagon: bump the thread count in the adb wrapper scripts

We can use more CPU cores now that the dedicated dspqueue polling threads are not used (i.e. no contention).
Also enable more aggressive polling for now, since we still map Flash Attention (and a few other kernels) to
the CPU, and those dspqueue threads were keeping the CPU cores at higher clock freqs.

* hexagon: add lhez as the second code owner
2025-11-09 23:38:03 +02:00
Aman Gupta 6051c704a0 CUDA: use fastdiv in set-rows (llama/16834)
* CUDA: use fastdiv in set-rows

* add assert about value fitting in u32
2025-11-09 23:38:03 +02:00
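The fastdiv trick used in the commit above replaces integer division by a fixed divisor with a precomputed multiply-high and shift, which is much cheaper than hardware division on GPUs. A CPU sketch of the classic "incremented multiplier" scheme (Hacker's Delight style; the names are mine, not ggml's — on CUDA the mul-high step would be `__umulhi`):

```c
#include <stdint.h>

typedef struct { uint32_t mp; uint32_t l; } fastdiv_vals;

// Precompute once per divisor d >= 1.
fastdiv_vals fastdiv_init(uint32_t d) {
    uint32_t l = 0;
    while (l < 32 && (1ULL << l) < d) ++l;  // l = ceil(log2(d))
    fastdiv_vals v;
    // mp = floor(2^32 * (2^l - d) / d) + 1
    v.mp = (uint32_t)((((uint64_t)1 << 32) * (((uint64_t)1 << l) - d)) / d + 1);
    v.l  = l;
    return v;
}

// Computes n / d using only a multiply-high, an add, and a shift.
uint32_t fastdiv_div(uint32_t n, fastdiv_vals v) {
    uint64_t hi = ((uint64_t)n * v.mp) >> 32;  // umulhi(n, mp)
    return (uint32_t)((hi + n) >> v.l);        // 64-bit add avoids overflow
}
```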
Jeff Bolz 82a23ca9c4 vulkan: Call ggml_vk_buffer_write_2d from ggml_vk_buffer_copy (llama/16793)
This lets the copy to the destination device use the host-visible
vidmem optimization.
2025-11-09 23:38:03 +02:00
Aman Gupta 5c316c48f7 CUDA: Fix bug in topk-moe for gpt-oss (llama/16821)
* CUDA: Fix bug in topk-moe for gpt-oss

When using ggml_can_fuse_subgraph, the output nodes which are passed are wrong. This causes `test-backend-ops` to still fuse nodes (because the nodes are not used elsewhere in the graph),
but it actually doesn't fuse in the real gpt-oss graph

* fix for qwen3 too

* change ifndef to ifdef
2025-11-09 23:38:03 +02:00
YaelLogic 5850c952e5 sycl: add RMS_NORM_BACK operation support (llama/16808)
* sycl: add RMS_NORM_BACK operation support

* sycl: rms_norm_back: add dual reduction paths (FP64 and FP32) and savepoint before further changes

* sycl: add RMS_NORM_BACK support

Implement RMS_NORM_BACK for the SYCL backend using FP32 compensated parallel reduction. Minimal docs updates (ops.md / SYCL.csv).

* revert: restore .gitignore and tools/run/CMakeLists.txt to upstream

* revert: restore tests/CMakeLists.txt to upstream

* sycl: optimize rms_norm_back

* fix: restore SYCL.csv to correct state with RMS_NORM_BACK support

* Update ggml/src/ggml-sycl/norm.cpp

Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>

* fix: remove trailing whitespace and add missing newline (EditorConfig)

---------

Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2025-11-09 23:38:03 +02:00
YaelGitAccount a983c9219d cuda: add SET operation support (llama/16804)
* feat(cuda): add GGML_OP_SET support

Implement CUDA kernel for SET operation with f32 support.

All tests passing (14598/14598).

* cuda(set): add I32 support; keep F32

* refactor(cuda): use ggml_cuda_cpy to unify SET operator logic and remove code duplication

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update ggml/src/ggml-cuda/set.cu

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-09 23:38:03 +02:00
l3utterfly f863a42d97 initialise buffer.device in ggml_hexagon_session (llama/16816) 2025-11-09 23:38:03 +02:00
Chenguang Li cb39359e7f CANN: Improve device ID handling and aclnnArange checks (llama/16752)
* cann: improve device ID handling and aclnnArange checks

- Stop relying on CANN's internal device ID retrieval; use a global variable instead.
- Enforce stricter dimension validation in aclnnArange for better compatibility across CANN versions.

* cann: use thread local var
2025-11-09 23:38:03 +02:00
Aman Gupta 0c8ff48103 CUDA: add unused vars to mmvf and mmvq (llama/16807) 2025-11-09 23:38:03 +02:00
tamarPal 9664420a54 sycl: add SSM_CONV operation support (llama/16800)
* feat: Add SYCL backend support for SSM_CONV operator

* Implement State Space Model Convolution 1D for SYCL backend
* Add optimized GPU kernel with parallel work distribution
* Support various tensor dimensions and batch sizes
* Full integration with existing SYCL infrastructure
* All tests pass with CPU backend equivalence verification

* feat: Implement SYCL backend support for SSM_CONV operation

- Add ggml-sycl/ssm_conv.cpp and ssm_conv.hpp
- Implement SYCL kernel for state space model convolution
- Ensure numerical correctness matches CPU implementation exactly
- Add proper type checking for F32 tensors in backend support
- All test-backend-ops SSM_CONV tests pass (14490/14490)

* Perfect SSM_CONV SYCL implementation - 100% CPU parity

- Flawless numerical accuracy - matches CPU bit-for-bit
- Optimal SYCL kernel design - efficient parallel execution
- Complete tensor layout compatibility - handles all strides correctly
- Robust error handling - comprehensive assertions and validation
- All official tests pass - 14,490/14,490 backend operations verified
- Production-ready code - clean, documented, maintainable

Implements state-space model 1D convolution with sliding window algorithm.
Eliminates blocking queue.wait() for better async performance.

* Clean SSM_CONV code - remove all comments for production

Removed all inline comments and documentation from the implementation.
Clean, minimal code ready for production merge.

* fix: Final formatting corrections for CI compliance

- Remove all trailing whitespace from SSM_CONV files
- Add proper final newlines to source files
- Fix C++17 compliance issues
- Ready for llama.cpp CI validation

* sycl: fix trailing whitespace and minor safety casts in ssm_conv

* fix: Clean up duplicated content in ssm_conv.hpp header file

---------

Co-authored-by: tamarPal <tamarPal@example.com>
2025-11-09 23:38:03 +02:00
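At its core, the SSM_CONV op described above is a sliding-window 1D convolution per channel. A minimal CPU reference of that sliding-window algorithm (a sketch with my own naming, not the SYCL kernel itself):

```c
// Sliding-window 1D convolution: for each valid output position i,
// y[i] = sum_k w[k] * x[i + k], producing nx - nw + 1 outputs (nx >= nw).
void ssm_conv_1d_ref(const float *x, int nx, const float *w, int nw, float *y) {
    for (int i = 0; i <= nx - nw; ++i) {
        float acc = 0.0f;
        for (int k = 0; k < nw; ++k) {
            acc += w[k] * x[i + k];
        }
        y[i] = acc;
    }
}
```

The GPU version parallelizes the outer loop over output positions (and over channels/batches); the inner window sum stays per-thread.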
Acly bcda7c3e58 ggml : fix interpolate with align-corners and ne=1 (llama/16700)
* ggml : fix interpolate with align-corners and ne=1

* avoid division by zero if one of the spatial dimensions is 1
* cpu, cuda, opencl returned correct result anyway due to clamp
* vulkan didn't clamp for align-corners so results were broken

* fix clang warning
2025-11-09 23:38:03 +02:00
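With align-corners, the source coordinate for output index i is i · (ne_in − 1) / (ne_out − 1), which divides zero by zero when a dimension has size 1 — the case the commit above guards. A hedged sketch of the guarded mapping (illustrative; not the exact ggml fix):

```c
// Align-corners coordinate mapping with a guard for size-1 dimensions.
float align_corners_src(int i, int ne_in, int ne_out) {
    if (ne_out <= 1 || ne_in <= 1) {
        return 0.0f; // only one sample available: map everything to coordinate 0
    }
    return (float)i * (float)(ne_in - 1) / (float)(ne_out - 1);
}
```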
Johannes Gäßler 1471b1fda7 HIP: fix AMDGPU_TARGETS, update documentation (llama/16803) 2025-11-09 23:38:03 +02:00
tamarPal 0e1b6c5fc4 sycl: add ROLL operation support (llama/16665)
* sycl: add ROLL operation support

- Implement ggml_sycl_roll function for F32 tensors
- Add multi-axis roll operation with SYCL kernel
- Support all 4 tensor dimensions with proper shift normalization
- Add roll.cpp and roll.hpp to SYCL backend
- Update backend dispatch and supports_op for GGML_OP_ROLL
- Tests: 17662/17662 pass with identical CPU reference results

* fix: remove trailing whitespace from roll.cpp

- Fix EditorConfig violations in ggml/src/ggml-sycl/roll.cpp
- Remove trailing spaces from lines 6, 11, 28, 47, 58, 60

* ci: retrigger

* sycl: remove wait() calls from ROLL operation

* fix: editorconfig — LF endings + final newline for roll.hpp

---------

Co-authored-by: tamarPal <tamarPal@example.com>
2025-11-09 23:38:03 +02:00
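The "proper shift normalization" mentioned above is needed because a roll shift can be negative or larger than the dimension, so it must be brought into [0, n) before indexing. A small reference sketch of the normalization plus a 1-D roll (my own code and sign convention, not the SYCL kernel):

```c
// Normalize an arbitrary shift into [0, n); works for negative shifts too.
int roll_norm_shift(int shift, int n) {
    return ((shift % n) + n) % n;
}

// Roll a 1-D array by `shift` positions (positive shift moves elements right).
void roll_1d(const float *src, float *dst, int n, int shift) {
    int s = roll_norm_shift(shift, n);
    for (int i = 0; i < n; ++i) {
        dst[(i + s) % n] = src[i];
    }
}
```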
shani-f 543221d824 sycl: add REPEAT_BACK operation support (llama/16734)
* SYCL repeat_back v1 — add core op + switch case

* Implement repeat_back SYCL operation and minor fixes

* Update ggml/src/ggml-sycl/repeat_back.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update ggml/src/ggml-sycl/repeat_back.hpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-09 23:38:03 +02:00
Aman Gupta 97c3285cc4 CUDA: support for weight clamp in top-k norm (llama/16702) 2025-11-09 23:38:03 +02:00
Acly bd8734c050 ggml-alloc : make gallocr prefer chunks that allow memory reuse (llama/16788) 2025-11-09 23:38:03 +02:00
Sigbjørn Skjæret e6ff2bceed cuda : use fast copy when src and dst are of different type and contiguous (llama/16789)
* use fast copy when src and dst are contiguous and same shape

* use int64_t ne and ignore shape
2025-11-09 23:38:03 +02:00
leejet 4f4246dcb4 ggml: fix cuda kernel launch configuration for k_compute_batched_ptrs to support large batch (llama/16744)
* fix k_compute_batched_ptrs

* add backend ops test

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* reduce the batch size

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-11-09 23:38:03 +02:00
Aman Gupta 9f75cc7eef CUDA: General GEMV fusion (llama/16715) 2025-11-09 23:38:03 +02:00
Gilad S c00ab7e5e6 vulkan: deduplicate Microsoft Direct3D12 devices (llama/16689)
* fix: deduplicate and deprioritize Microsoft Direct3D12 vulkan devices from the `vulkan-dozen` driver

* style: indent

* fix: decrease priority

* fix: switch to `||`
2025-11-09 23:38:03 +02:00
Giuseppe Scrivano d0b544da70 vulkan: delete dead code (llama/16732)
ggml_vk_create_buffer_temp is not used anywhere, and it is the only
caller of ggml_vk_pool_malloc.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2025-11-09 23:38:03 +02:00
Jeff Bolz 070b24f65c vulkan: Optimize SSM_SCAN (llama/16645) 2025-11-09 23:38:03 +02:00
leejet 5166efa7f0 ggml: fix CUDA grid launch condition for large block_nums.y in binbcast (llama/16742)
* Fix CUDA grid launch condition for large block_nums.y

* add backend ops test

* reduce test repetitions
2025-11-09 23:38:03 +02:00
Aman Gupta 524046d4d1 CUDA: use CUB for arbitrary size argsort (llama/16754) 2025-11-09 23:38:03 +02:00
Aman Gupta 47efc4f115 ggml-cuda: use passed ops instead of hardcoded ops (llama/16712) 2025-11-09 23:38:03 +02:00
Matthew Michel 0a5b4c2e9b sycl: use async memory allocation to fix crashes during graph recording (llama/16644)
* sycl: use async memory allocation to fix graph recording failures

GGML_SYCL_DISABLE_GRAPHS=0 causes crashes because:
  - Host waits are currently unsupported in graph recording mode.
  - SYCL malloc / free calls are unsupported in graph recording mode.

The following changes are made to fix SYCL graph functionality:
  - When graphs are enabled, use the SYCL async memory extension for temp
    buffers which is supported with SYCL graphs.
  - For compiler versions that do not support this extension, skip
    graphs with the affected op.
  - Switch from USM shared to device memory as the async extension
    currently just supports device allocations.

* Address reviewer feedback

* Use global async variable to decide path in sycl_ext_[malloc_device|free]
2025-11-09 23:38:03 +02:00