Commit Graph

4210 Commits

Author SHA1 Message Date
Oliver Simons 9eef377330
CUDA: Optimize `rms_norm_f32` kernel and its fused variants, giving 1-6% perf E2E (llama/15715)
* Add fastdiv, use it in modulo and use modulo in rms_norm_f32

Fastdiv is a much faster way to do integer division, which was identified
as a bottleneck in rms_norm_f32

* Support more `block_size` values in `rms_norm_f32`

This gives us more flexibility in selecting the optimal number of threads,
weighing parallelization across a column against thread launch overheads
and MIO throttles

* Update ggml/src/ggml-cuda/common.cuh

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Replace modulo with fastmodulo in `rms_norm_f32`

* Use `BinPackArguments=true` for formatting function calls

Will file a separate PR to adjust .clang-format file

* Update ggml/src/ggml-cuda/common.cuh

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Use uint3 for both `fastdiv` and `fastmodulo`

The compiler seems to reliably optimize away the unused .z component in
the fastdiv use-case, see https://godbolt.org/z/rx8KPrKr3

* More constrained type declarations

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Rename fastdiv and fastmodulo variables to shared variable name

As suggested by JohannesGaessler, this increases clarity of the intended
use

* Pack fastdiv/fastmodulo constants into uint2/uint3 objects

By packing constants to be used together into a struct, we are less
likely to make errors.

* Rename function parameter of fastmodulo

`modulo_consts` is more fitting/descriptive

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-09-20 13:42:48 +03:00
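
A minimal sketch of the fastdiv/fastmodulo technique the commit above describes, assuming illustrative names rather than the exact contents of `ggml/src/ggml-cuda/common.cuh`: runtime division by a fixed divisor is replaced by a precomputed multiply-high and shift (the Granlund-Montgomery method), and the divisor itself rides along in the `.z` component so `fastmodulo` can reuse the quotient.

```cuda
#include <cstdint>

// Host side: precompute magic constants for a fixed divisor d (d >= 1).
// .x = multiplier, .y = shift, .z = divisor (only needed by fastmodulo).
static uint3 init_fastdiv_values(uint32_t d) {
    uint32_t L = 0;
    while (L < 32 && (uint32_t{1} << L) < d) {
        ++L;
    }
    const uint32_t mp = (uint32_t) (((uint64_t{1} << 32) * ((uint64_t{1} << L) - d)) / d + 1);
    return make_uint3(mp, L, d);
}

// Device side: n / d without an integer-divide instruction.
static __device__ __forceinline__ uint32_t fastdiv(uint32_t n, const uint3 div_consts) {
    return (__umulhi(div_consts.x, n) + n) >> div_consts.y;
}

// n % d, derived from the fastdiv quotient.
static __device__ __forceinline__ uint32_t fastmodulo(uint32_t n, const uint3 modulo_consts) {
    return n - fastdiv(n, modulo_consts) * modulo_consts.z;
}
```

Packing the constants into a single `uint3`, as the commit notes, keeps the multiplier, shift, and divisor from being mismatched at call sites; the unused `.z` is optimized away on the pure-division path.
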
hipudding 51bc843f3a
CANN: Add RoPE contiguous check for 310I DUP device (llama/15735) 2025-09-20 13:42:48 +03:00
xctan 75f739c7c8
ggml-cpu : optimize RVV kernels (llama/15720)
* ggml-cpu : optimize rvv ggml_vec_dot_f32

* ggml-cpu : optimize 128-bit rvv ggml_vec_dot_q4_K_q8_K

* ggml-cpu : fix riscv arch flags

* ggml-cpu : add more rvv ops

* ggml-cpu : optimize rvv ggml_vec_dot_q4_K_q8_K

* ggml-cpu : optimize rvv ggml_vec_dot_q6_K_q8_K

* ggml-cpu : minor rvv adjustments

* ggml-cpu : fix riscv include
2025-09-20 13:42:48 +03:00
hipudding 91e9e72ecd
CANN: Mask unsupported TRANSPOSE_1D operator (llama/15733)
CANN currently does not support kernel sizes larger than 255.
This change disables such cases.
2025-09-20 13:42:48 +03:00
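
A hedged sketch, with hypothetical names, of what such masking typically looks like in a backend's supports-op callback; the commit title suggests the affected op is `GGML_OP_CONV_TRANSPOSE_1D`, and everything else here is elided:

```cpp
// Reject the op when the kernel tensor exceeds the 255-element limit
// of the CANN kernels (illustrative; not the exact CANN backend code).
static bool cann_supports_op(const ggml_tensor * op) {
    switch (op->op) {
        case GGML_OP_CONV_TRANSPOSE_1D:
            // src[0] is the kernel tensor; sizes > 255 are unsupported
            return op->src[0]->ne[0] <= 255;
        default:
            return true;
    }
}
```
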
Chenguang Li d84b96d9d0
CANN: Fix type float_t to float (llama/15736)
Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:42:48 +03:00
Ruben Ortlam e584edb5ba
vulkan: fix shaders gen when no integer dot is available (llama/15740) 2025-09-20 13:42:48 +03:00
hipudding 5aee53c40f
CANN: Resolve soft_max precision issue (llama/15730)
Previously, the slope tensor was set to fp16 to improve efficiency.
While this worked correctly in FA, it caused precision issues in soft_max.
This change applies different data types for different operators
to balance both accuracy and performance.
2025-09-20 13:42:47 +03:00
Jeff Bolz 1e03aa66f7
vulkan: Fix macro parameter order for f32 matmul shaders (llama/15716) 2025-09-20 13:42:47 +03:00
rmatif fb37f91163
opencl: add attn sinks support for FA kernels (llama/15706) 2025-09-20 13:42:47 +03:00
Chenguang Li 3db49c1c26
CANN: Support eager execution mode under ACL graph compilation (llama/15712)
* [CANN] Support eager execution mode under ACL graph compilation

Add support for running operators in eager mode while ACL graph
compilation is enabled. This allows bypassing graph execution
and directly submitting ops, which is useful for debugging and
reducing graph build overhead in certain scenarios.

Signed-off-by: noemotiovon <757486878@qq.com>

* fix typo

Signed-off-by: noemotiovon <757486878@qq.com>

* rename to acl_graph_mode

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:42:47 +03:00
hipudding 13d3963f71
CANN: Support ext_factor in rope (llama/15710) 2025-09-20 13:42:47 +03:00
Johannes Gäßler f20a7b0e99
ggml-backend: raise GGML_MAX_SPLIT_INPUTS (llama/15722) 2025-09-20 13:42:47 +03:00
Gilad S 9e3600e569
vulkan: use memory budget extension to read memory usage (llama/15545)
* vulkan: use memory budget extension to read memory usage

* fix: formatting and names

* formatting

* fix: detect and cache memory budget extension availability on init

* fix: read `budgetprops.heapBudget` instead of `heap.size` when memory budget extension is available

* style: lints
2025-09-20 13:42:47 +03:00
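
For reference, a self-contained sketch of reading per-heap budgets via `VK_EXT_memory_budget`, the mechanism this commit adopts; this is the standard Vulkan pattern (extension availability must be detected first, as the commit's cache step does), not the exact ggml-vulkan code:

```cpp
#include <cstdio>
#include <vulkan/vulkan.h>

void print_memory_budget(VkPhysicalDevice device) {
    // chain the budget struct into the memory-properties query
    VkPhysicalDeviceMemoryBudgetPropertiesEXT budget_props = {};
    budget_props.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_BUDGET_PROPERTIES_EXT;

    VkPhysicalDeviceMemoryProperties2 mem_props = {};
    mem_props.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PROPERTIES_2;
    mem_props.pNext = &budget_props;

    vkGetPhysicalDeviceMemoryProperties2(device, &mem_props);

    for (uint32_t i = 0; i < mem_props.memoryProperties.memoryHeapCount; ++i) {
        // heapBudget is what this process may still allocate from heap i;
        // reading it instead of heap.size avoids over-reporting free memory
        printf("heap %u: budget=%llu usage=%llu size=%llu\n", i,
               (unsigned long long) budget_props.heapBudget[i],
               (unsigned long long) budget_props.heapUsage[i],
               (unsigned long long) mem_props.memoryProperties.memoryHeaps[i].size);
    }
}
```
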
Jeff Bolz 7a5e7368a3
vulkan: add missing clamps in new mul_mat_id paths (llama/15702)
This fixes a missing interaction between #15546 and #15652
2025-09-20 13:42:46 +03:00
Ruben Ortlam d5f80a2982
vulkan: disable large mmv subgroups on older Nvidia GPUs (llama/15717) 2025-09-20 13:42:46 +03:00
s-goto-11 8218dc609c
ggml: SVE support for exponential functions (llama/15145)
* SVE support for exponential functions

Add const notation to variable pg

* Update ggml/src/ggml-cpu/vec.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Add const

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-09-20 13:42:46 +03:00
Prashant Vithule 31840a3a56
ggml: aarch64: Implement SVE F16 kernels for vector functions (llama/15115)
* Added SVE implementation for the vec_dot_fp16 kernel

* removed white spaces

* Added comment

* removed white spaces

* changed GGML_F16x_VEC_FMA for code consistency

* Update vec.h

---------

Co-authored-by: vithulep <p.m.vithule1517@gmail.com>
2025-09-20 13:42:46 +03:00
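
A hedged sketch of what an SVE F16 dot-product kernel of this kind looks like; illustrative only, not the exact `vec.h` code (the real kernel unrolls and accumulates in wider precision):

```cpp
#include <arm_sve.h>

float vec_dot_f16_sve(const float16_t * x, const float16_t * y, int64_t n) {
    svfloat16_t acc = svdup_f16(0);
    int64_t  i  = 0;
    svbool_t pg = svwhilelt_b16(i, n); // predicate covers the valid tail
    while (svptest_any(svptrue_b16(), pg)) {
        const svfloat16_t vx = svld1_f16(pg, x + i);
        const svfloat16_t vy = svld1_f16(pg, y + i);
        acc = svmla_f16_m(pg, acc, vx, vy); // acc += vx * vy on active lanes
        i  += svcnth();                     // f16 lanes per SVE vector
        pg  = svwhilelt_b16(i, n);
    }
    // horizontal reduction; f16 accumulation shown for brevity only
    return (float) svaddv_f16(svptrue_b16(), acc);
}
```
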
Ruben Ortlam 5e70d901b0
Vulkan: Add Integer Dot Product mul_mat_vec shader for legacy quants (llama/14903)
* vulkan: Add Integer Dot Product mul_mat_vec shader for legacy quants

* vulkan: use subgroup operations for quantize_q8_1 shader

* vulkan: add q8_1_x4 type with 128-bit alignment, use in mul_mat_vecq shader

* vulkan: use q8_1_x4 blocks in mul_mmq shader

* vulkan: do 8 calculations per invocation instead of 32 in mul_mat_vecq, similar to mul_mat_vec

* vulkan: tune mul_mat_vecq performance for Intel

* vulkan: fix quantizing issue when tensor is not divisible by 128

* vulkan: adapt integer dot mmv to mmv small m optimization (llama/15355)

* vulkan: allow all subgroup modes for mmv and mmvq

* vulkan: use prealloc intermediate reuse for mmvq path

* vulkan: tune mmvq for Intel, AMD GCN and Nvidia RTX 3090

* vulkan: adapt mmv quantize_y path to conditional sync logic

* vulkan: disable q8_0 mmvq on Nvidia

* vulkan: enable q8_0 on Nvidia pre-Turing

* fix prealloc sync condition

* fix llvmpipe subgroup 8 issue
2025-09-20 13:42:46 +03:00
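
The core idea behind the integer-dot mul_mat_vec path, sketched in scalar C++ under an assumed q8-style block layout (the real shader uses packed 32-bit loads and the integer dot product extension): all per-element multiplies stay in integer arithmetic, and the float scales are applied once per block.

```cpp
#include <cstdint>

struct block_q8 {
    float  d;      // per-block scale
    int8_t qs[32]; // 32 quantized values
};

float dot_q8_q8(const block_q8 & a, const block_q8 & b) {
    int32_t sumi = 0;
    for (int i = 0; i < 32; ++i) {
        sumi += (int32_t) a.qs[i] * (int32_t) b.qs[i]; // integer dot product
    }
    // one float multiply per block instead of one per element
    return a.d * b.d * (float) sumi;
}
```
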
Daniel Bevenius c5f511e697
ggml : WebGPU add TRANSPOSE and RESHAPE to supported ops (llama/15695)
* ggml : WebGPU add TRANSPOSE and RESHAPE to supported ops

This commit adds support for the TRANSPOSE and RESHAPE operations in the
ggml webgpu backend.

Co-authored-by: Diego Devesa <slarengh@gmail.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-09-20 13:42:46 +03:00
Akarshan Biswas 2ba5e0cb47
CUDA: fix build error from ambiguous __half conversions in conv2d (llama/15690)
* CUDA: fix build error from ambiguous __half conversions in conv2d

Building conv2d with half precision failed because `__half` defines
multiple implicit conversion operators (to float, int, short, etc.),
causing ambiguous overload resolution when multiplying with float.

Introduce a templated `to_float` helper that explicitly converts
`__half` via `__half2float`, while passing through float unchanged.
Use this helper in conv2d accumulation to ensure unambiguous and
correct promotion to float.

Fixes some build errors with half-precision kernels on CUDA.

ggml-ci

* CUDA: Replace custom to_float helper with unified ggml_cuda_cast and add half->float conversion

* CUDA: Add missing convert.cuh header

* CUDA: remove unnecessary extension in ggml_cuda_cast

* CUDA: Address review comment, remove second type template argument
2025-09-20 13:42:46 +03:00
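
A hedged reconstruction of the templated `to_float` helper the message describes (the exact signature in the PR may differ, and it was later unified into `ggml_cuda_cast`): convert `__half` explicitly via `__half2float` so overload resolution is never ambiguous, and pass `float` through unchanged.

```cuda
#include <cuda_fp16.h>
#include <type_traits>

template <typename T>
static __device__ __forceinline__ float to_float(T v) {
    if constexpr (std::is_same_v<T, __half>) {
        return __half2float(v); // explicit, unambiguous conversion
    } else {
        return v;               // float passes through unchanged
    }
}

// usage in the conv2d accumulation loop (illustrative):
//   acc += to_float(kernel[ik]) * to_float(input[ii]);
```
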
hipudding bb5f844ec7
CANN: Optimize MUL_MAT_ID (llama/15658) 2025-09-20 13:42:46 +03:00
hipudding ed7ebdc757
CANN: fix RoPE cache issue on multi-device (llama/15629)
* CANN: fix RoPE cache issue on multi-device

RoPE cache only needs to be computed once per token.
However, in multi-device scenarios, not every device starts
computation from layer 0, which may lead to unallocated memory
issues and precision errors.

This commit records the first layer of each device to avoid
the above issues.

* CANN: Optimize first-layer detection method

* CANN: Remove trailing whitespace

* CANN: Only cache data that can be determined, from the parameters, to be unchanged.

* CANN: Update function comment
2025-09-20 13:42:45 +03:00
Georgi Gerganov 3d470687de
metal : fix checks for available FA kernels (llama/15700)
* metal : fix checks for available FA kernels

ggml-ci

* cont : fix comment [no ci]
2025-09-20 13:42:45 +03:00
Diego Devesa b11c972b88
llama : separate compute buffer reserve from fattn check (llama/15696)
Exposes ggml_backend_sched_split_graph() to allow splitting the graph without allocating compute buffers and uses it to split the graph for the automatic Flash Attention check.
2025-09-20 13:42:45 +03:00
Jeff Bolz db7ecfb61d
vulkan: handle large sizes for get_rows (llama/15686) 2025-09-20 13:42:45 +03:00
Jeff Bolz 191def71ce
vulkan: mul_mat_id coopmat2 optimizations (llama/15546)
* vulkan: mul_mat_id coopmat2 optimizations

Add a path for when the tile fits in BN/2, similar to what we have for mul_mat.

Only call fetch_scales/store_scales once per QUANT_K block, and once at the
beginning in case start_k is not aligned.

* Also add a path for BN/4 - worth a couple more percent
2025-09-20 13:42:45 +03:00
Daniel Bevenius b092e95aaa
vulkan : remove unused portability_enumeration_ext variable (llama/15679)
This commit removes the portability_enumeration_ext variable from the
ggml_vk_instance_portability_enumeration_ext_available function as it
is initialized to false but never modified, making it redundant.
2025-09-20 13:42:45 +03:00
Jeff Bolz 20ce6fcf6a
vulkan: Allow fallback to sysmem memory when vidmem is full (llama/15649)
* vulkan: Allow fallback to sysmem memory when vidmem is full

* vulkan: Add env var GGML_VK_ALLOW_SYSMEM_FALLBACK
2025-09-20 13:42:45 +03:00
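
A hedged sketch of the fallback pattern (simplified; names are illustrative, error handling is elided, and in the PR the behavior is gated by the `GGML_VK_ALLOW_SYSMEM_FALLBACK` environment variable): try a device-local allocation first, and retry with a host-visible memory type if vidmem is exhausted.

```cpp
#include <vulkan/vulkan.h>

VkDeviceMemory alloc_with_fallback(VkDevice dev, VkDeviceSize size,
                                   uint32_t vidmem_type, uint32_t sysmem_type) {
    VkMemoryAllocateInfo info = {};
    info.sType           = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    info.allocationSize  = size;
    info.memoryTypeIndex = vidmem_type; // device-local first

    VkDeviceMemory mem = VK_NULL_HANDLE;
    if (vkAllocateMemory(dev, &info, nullptr, &mem) == VK_SUCCESS) {
        return mem;
    }
    // vidmem is full: fall back to system memory (slower, but functional)
    info.memoryTypeIndex = sysmem_type;
    if (vkAllocateMemory(dev, &info, nullptr, &mem) == VK_SUCCESS) {
        return mem;
    }
    return VK_NULL_HANDLE; // both pools exhausted
}
```
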
Jeff Bolz 71f0ee70bf
vulkan: clamp matmul and FA results to the max finite value (llama/15652)
* vulkan: clamp matmul and FA results to the max finite value

* only clamp for fp16
2025-09-20 13:42:45 +03:00
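
The clamp itself is simple; a sketch of the idea in plain C++, assuming the fp16 output path (per the second bullet, fp32 results are left untouched): saturate to the largest finite half value so out-of-range accumulations do not become infinities.

```cpp
#include <algorithm>

constexpr float FP16_MAX = 65504.0f; // largest finite IEEE fp16 value

// saturate instead of overflowing to +/-inf when storing results as fp16
inline float clamp_to_f16_range(float x) {
    return std::min(std::max(x, -FP16_MAX), FP16_MAX);
}
```
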
Charles Xu 74583845b6
ggml: update kleidiai to v1.13.0 (llama/15663) 2025-09-20 13:42:44 +03:00
Johannes Gäßler f6ba3949b6
llama: use FA + max. GPU layers by default (llama/15434)
* llama: use max. GPU layers by default, auto -fa

* ggml-backend: abort instead of segfault
2025-09-20 13:42:44 +03:00
Johannes Gäßler b7809c401b
CUDA: use FP32 arithmetic for conv2d (llama/15683) 2025-09-20 13:42:44 +03:00
Jeff Bolz a6dec4f49d
vulkan: Skip syncing for prealloc_y when it is reused (llama/15544) 2025-09-20 13:42:44 +03:00
Chenguang Li d629af157e
CANN: FIx compiler warnings (llama/15661)
Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:42:44 +03:00
Aman Gupta 82ce91e7d2
CUDA: fix bug in rms_norm fusion (llama/15660)
* CUDA: fix bug in rms_norm fusion

* Fix bug for OP_REPEAT

* Fix index for add
2025-09-20 13:42:44 +03:00
Aman Gupta 6d7ddaf793
CUDA: fuse adds, fuse add with rms norm (llama/15631)
* CUDA: fused add with rms_norm_mul

* Non-broadcast fuse works

* Add fused adds

* format

* Remove n_fuse from template params

* Address review comments

* Move template inside binbcast
2025-09-20 13:42:44 +03:00
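
A hedged sketch of the fusion's shape: a single kernel computes `t = a + b` and RMS-normalizes `t` in place, saving a round trip through global memory. The real ggml kernel additionally handles broadcasting, multiple fused adds, and the fused multiply; names and launch configuration here are illustrative.

```cuda
template <int BLOCK_SIZE>
__global__ void fused_add_rms_norm_f32(const float * a, const float * b,
                                       const float * w, float * dst,
                                       const int ncols, const float eps) {
    const int row = blockIdx.x; // one block per row
    a   += (size_t) row * ncols;
    b   += (size_t) row * ncols;
    dst += (size_t) row * ncols;

    // pass 1: fused add; accumulate the sum of squares
    float sumsq = 0.0f;
    for (int col = threadIdx.x; col < ncols; col += BLOCK_SIZE) {
        const float t = a[col] + b[col];
        dst[col] = t; // stash t, re-read in pass 2
        sumsq   += t * t;
    }

    // block-wide reduction of sumsq
    __shared__ float s_sum[BLOCK_SIZE];
    s_sum[threadIdx.x] = sumsq;
    __syncthreads();
    for (int offset = BLOCK_SIZE / 2; offset > 0; offset >>= 1) {
        if (threadIdx.x < offset) {
            s_sum[threadIdx.x] += s_sum[threadIdx.x + offset];
        }
        __syncthreads();
    }
    const float scale = rsqrtf(s_sum[0] / ncols + eps);

    // pass 2: normalize and apply the norm weight
    for (int col = threadIdx.x; col < ncols; col += BLOCK_SIZE) {
        dst[col] = dst[col] * scale * w[col];
    }
}

// launch (illustrative):
//   fused_add_rms_norm_f32<256><<<nrows, 256>>>(a, b, w, dst, ncols, 1e-6f);
```
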
mnehete32 dc9f55bbb0
CUDA: add conv2d (llama/15635)
* CUDA: add conv2d

* CUDA: conv2d - correct formatting and added const
2025-09-20 13:42:44 +03:00
Aaron Teo 6287027a2c
ggml-cpu: fix invalid hsum build in debug s390x (llama/15634)
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-09-20 13:42:43 +03:00
compilade 6dffbaa0cb
ggml : fix SSM_SCAN for n_groups > 1 (llama/15625) 2025-09-20 13:42:43 +03:00
Georgi Gerganov cac6253744
kv-cache : remove LLAMA_SET_ROWS checks (llama/15505)
ggml-ci
2025-09-20 13:42:43 +03:00
matiaslin 88c0582b61
cuda: Add cublasLt_static linking when GGML_STATIC is enabled (llama/15622)
Prior to this change, we faced undefined cublasLt references when
attempting to compile 'llama-cli' with GGML_STATIC=ON on Linux.

We add linking with CUDA::cublasLt_static when the CUDA version is greater
than 10.1.
2025-09-20 13:42:43 +03:00
uvos 65fa2c0c1a
HIP: Enable support for ggml_backend_cuda_register_host_buffer (llama/15615) 2025-09-20 13:42:43 +03:00
Chenguang Li 02e8b23137
CANN: refactor mask handling and improve performance in FA (llama/15561)
* CANN(flash-attn): refactor mask handling and improve performance

1. Refactored the mask computation in Flash Attention, unified the logic without separating prefill and decode.
2. Optimized performance in non-alibi scenarios by reducing one repeat operation.
3. Updated operator management to explicitly mark unsupported cases on 310P devices and when dim is not divisible by 16.

Signed-off-by: noemotiovon <757486878@qq.com>

* [CANN]: fix review

Signed-off-by: noemotiovon <757486878@qq.com>

* [CANN]: Optimize FA layout from BNSD to BSND

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:42:43 +03:00
xctan ece1bdfe7e
ggml-cpu : add basic RVV support for vector f32 ops (llama/15057)
* ggml-cpu : add basic RVV support for vector f32 ops

* ggml-cpu : add RVV support for f32 softmax
2025-09-20 13:42:43 +03:00
rmatif a6ec224efa
OpenCL: add fused group_norm/norm, mul, add (llama/15314)
* add fused group_norm/norm, mul, add

* fix spacing

* revert rms_norm logic

* fix trailing whitespace
2025-09-20 13:42:43 +03:00
Akarshan Biswas 94fa9f63b3
SYCL: fix rms_norm_mul_add for tensor dim not a multiple of sg_size (llama/15592)
The original implementation unconditionally returned true for this operation, leading to a failure when the tensor's first dimension (ne[0]) was not a multiple of WARP_SIZE. This caused a GGML_ASSERT(ncols % WARP_SIZE == 0) failure in ggml-sycl/norm.cpp.

This change updates the ggml_backend_sycl_device_supports_op check to correctly return true for GGML_OP_RMS_NORM only when the first dimension of the tensor is a multiple of WARP_SIZE, ensuring the operation can be performed without error.
2025-09-20 13:42:42 +03:00
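
A hedged sketch of the corrected check (the wrapper name is hypothetical; `WARP_SIZE` is as defined by the SYCL backend, and other cases are elided):

```cpp
static bool sycl_supports_op(const ggml_tensor * op) {
    switch (op->op) {
        case GGML_OP_RMS_NORM:
            // match the kernel's GGML_ASSERT(ncols % WARP_SIZE == 0)
            return op->ne[0] % WARP_SIZE == 0;
        default:
            return true;
    }
}
```
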
shalinib-ibm 31c7784e09
llamafile: PowerPC Sgemm Optimization (llama/15558)
This patch improves GEMM for the FP32 data type on PowerPC.

- Implements GEMM on large blocks with configurable block sizes mc, nc, kc (default: 256, 256, 256).
- Optimizes the packing function to access blocks according to the memory layout.
- Optimizes GEMM to work on larger blocks.
- Isolates packing from the GEMM operations for better MMA utilization.

Verified functionality and correctness using llama-cli and a standalone
test case (performs a matmul and compares the final matrix C result with the base).

Minor code refactoring changes:
- Replace macro with inline function
- Make code indentation consistent (4 spaces)

Performance Testing:

Observed a 50%~70% improvement in Prompt Processing Speed, measured using
llama-bench with the Meta-Llama3-8B FP32 model. Similar gains were observed
with the Mistral-7b-Instruct-v0.3 model.

| model            | size      | params | backend | threads | test   | t/s (patch) | t/s (base) |
|------------------|-----------|--------|---------|---------|--------|-------------|------------|
| llama 8B all F32 | 29.92 GiB | 8.03 B | CPU     | 20      | pp512  | 98.58       | 60.30      |
| llama 8B all F32 | 29.92 GiB | 8.03 B | CPU     | 20      | pp1024 | 95.88       | 57.36      |
| llama 8B all F32 | 29.92 GiB | 8.03 B | CPU     | 20      | pp2048 | 85.46       | 53.26      |
| llama 8B all F32 | 29.92 GiB | 8.03 B | CPU     | 20      | pp4096 | 68.66       | 45.78      |
| llama 8B all F32 | 29.92 GiB | 8.03 B | CPU     | 20      | pp6144 | 57.35       | 40.44      |

Observed a 25%~30% improvement in llama-batched-bench with Meta-Llama3-8B in
Prompt Processing Speed for large prompts (256, 512, 1024, 2048, 4096 tokens)
with various batch sizes (1, 2, 4, 8, 16).

Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>
2025-09-20 13:42:42 +03:00
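
A hedged sketch of the blocking structure described above, with packing and the MMA inner kernel elided: the loop nest tiles C into mc x nc panels while streaming kc-deep slices of A and B, so each block stays cache-resident (row-major layout and illustrative names assumed).

```cpp
#include <algorithm>

void gemm_blocked(const float * A, const float * B, float * C,
                  int M, int N, int K,
                  int mc = 256, int nc = 256, int kc = 256) {
    for (int jc = 0; jc < N; jc += nc) {
        const int nb = std::min(nc, N - jc);
        for (int pc = 0; pc < K; pc += kc) {
            const int kb = std::min(kc, K - pc);
            // pack_B(...): pack the kb x nb block of B per the memory layout
            for (int ic = 0; ic < M; ic += mc) {
                const int mb = std::min(mc, M - ic);
                // pack_A(...): pack the mb x kb block of A
                // inner kernel: C[ic:ic+mb, jc:jc+nb] += A_blk * B_blk
                for (int i = 0; i < mb; ++i) {
                    for (int j = 0; j < nb; ++j) {
                        float acc = C[(ic + i) * N + (jc + j)];
                        for (int k = 0; k < kb; ++k) {
                            acc += A[(ic + i) * K + (pc + k)] * B[(pc + k) * N + (jc + j)];
                        }
                        C[(ic + i) * N + (jc + j)] = acc;
                    }
                }
            }
        }
    }
}
```
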
Johannes Gäßler 53010199a1
CUDA: return -1 for nonexistent compiled arch (llama/15587) 2025-09-20 13:42:42 +03:00
Georgi Gerganov 1c21a850be
metal : optimize FA vec for large sequences and BS <= 8 (llama/15566)
* metal : optimize FA vec for large heads and sequences

* metal : adjust small-batch mul mv kernels

ggml-ci

* batched-bench : fix total speed computation

ggml-ci

* cont : add comments

ggml-ci
2025-09-20 13:42:42 +03:00
Georgi Gerganov dc693ca8c9
metal : improve `MUL_MAT_ID` (llama/15541)
* metal : mul_mm_id remove hdst

* metal : remove mul_mm_id hsrc1

* metal : mul_mm_id simplify + add test

* metal : opt mul_mm_id map0

* metal : optimize mul_mm_id id gathering

* metal : mul/div opt

* metal : optimize mul_mm_id_map0

ggml-ci
2025-09-20 13:42:42 +03:00