Commit Graph

3178 Commits

Daniel Bevenius cda7d4e5ac
ggml WebGPU: remove userdata from request adapter callback (llama/15527)
* ggml WebGPU: remove userdata from request adapter callback

This commit removes the `userdata` parameter from the WebGPU request
adapter callback in `ggml-webgpu.cpp`. Instead, the lambda function
captures the `webgpu_context` directly.

The motivation for this change is to simplify the code and improve
readability.

* inline the callback lambda into the RequestAdapter call

This commit removes the callback lambda variable and inlines it directly
into the RequestAdapter call.
2025-09-20 13:42:50 +03:00
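
For illustration, the before/after pattern in miniature (a self-contained sketch; `Adapter`, `webgpu_context`, and the `request_adapter*` functions below are stand-ins, not the actual WebGPU API):

```cpp
#include <functional>
#include <iostream>

struct Adapter { int id; };                   // stand-in for the WebGPU adapter type
struct webgpu_context { Adapter adapter{}; }; // stand-in for ggml's context struct

// C-style API: callback plus opaque userdata pointer.
using raw_callback = void (*)(Adapter, void *);
static void request_adapter_raw(raw_callback cb, void * userdata) { cb(Adapter{42}, userdata); }

// C++-style API: a capturing callable; no userdata parameter needed.
static void request_adapter(const std::function<void(Adapter)> & cb) { cb(Adapter{42}); }

int main() {
    webgpu_context ctx;

    // Before: the context is smuggled through userdata and cast back.
    request_adapter_raw([](Adapter a, void * ud) {
        static_cast<webgpu_context *>(ud)->adapter = a;
    }, &ctx);

    // After: the lambda captures the context directly and is inlined at the call site.
    request_adapter([&ctx](Adapter a) { ctx.adapter = a; });

    std::cout << ctx.adapter.id << "\n"; // 42
}
```
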
Johannes Gäßler cd70d89628
CUDA: faster tile FA (Pascal/AMD), headsize 256 (llama/15769) 2025-09-20 13:42:50 +03:00
Charles Xu be2676bb1c
kleidiai: generalize compute_forward_kv_cache to compute_forward_fp16 (llama/15817) 2025-09-20 13:42:50 +03:00
Johannes Gäßler 69400f16f1
ggml-cpu: document use of "free" memory [no ci] (llama/15834) 2025-09-20 13:42:50 +03:00
Aaron Teo f499271c4e
ggml-cpu: drop support for nnpa intrinsics (llama/15821) 2025-09-20 13:42:50 +03:00
Johannes Gäßler 6ff468cfaa
CUDA: fastdiv, launch bounds for mmvq + q8_1 quant (llama/15802)
* CUDA: fastdiv, launch bounds for mmvq + q8_1 quant
2025-09-20 13:42:50 +03:00
Daniel Bevenius 4d6e1144b1
ggml : introduce semantic versioning (ggml/1336)
* ggml : introduce semantic versioning

This commit introduces semantic versioning for the GGML library.

The motivation for this is that the current versioning, using build
numbers, makes it difficult to track changes and releases for projects
that use ggml.

The release steps are the following:
1. Sync the changes from llama.cpp using sync-llama-am.sh; after the
   PR has been approved and merged, move to step 2.
2. Run scripts/release.sh and specify the type of release: major, minor,
   or patch. The script increments the version accordingly, creates a
   commit with the version change, creates a tag for the version, and
   prepares for the next development iteration.
3. Inspect the commits/tag and push to master. Pushing the new tag
   triggers the GitHub release workflow, which publishes a new release
   on GitHub.

Example usage:
```console
$ ./scripts/release.sh major --dry-run
[dry-run] - No changes will be made

Step 1: Reading current version...
Current version: 0.9.0-dev
New release version: 1.0.0

Step 2: Updating version in CMakeLists.txt...
  [dry-run] Would update GGML_VERSION_MAJOR to 1
  [dry-run] Would update GGML_VERSION_MINOR to 0
  [dry-run] Would update GGML_VERSION_PATCH to 0
  [dry-run] Would remove -dev suffix

Step 3: Committing version bump...
  [dry-run] Would commit: 'ggml : bump version to 1.0.0'

Step 4: Creating git tag...
  [dry-run] Would create tag: v1.0.0 with message 'Release version 1.0.0'

Step 5: Preparing for next development cycle...
  [dry-run] Would update GGML_VERSION_MINOR to 1
  [dry-run] Would add -dev suffix back

Step 6: Committing development version...
  [dry-run] Would commit: 'ggml : prepare for development of 1.1.0-dev'

[dry-run] Summary (no changes were made):
  • Would have released version: 1.0.0
  • Would have created tag: v1.0.0
  • Would have set next development version: 1.1.0-dev
```

Refs: https://github.com/ggml-org/ggml/issues/1333

* ggml: create branch for release candidate and check master

* ggml : sign the git tag
2025-09-20 13:42:50 +03:00
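
For reference, the tag-related steps map onto stock git like this (illustrative commands; release.sh automates the version bump and the commits around them):

```console
$ git tag -s v1.0.0 -m "Release version 1.0.0"   # create a signed tag
$ git tag -v v1.0.0                              # verify its signature
$ git push origin v1.0.0                         # pushing the tag triggers the release workflow
```
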
Gregor Jasny c80f78cc7b
CUDA : conditionally add cuda architectures (ggml/1341) 2025-09-20 13:42:50 +03:00
Gabe Goodhart ffe560cbb1
metal : Add template specialization for mul_mm_id w/ ne20 == 10 (llama/15799)
Branch: GGMLMetalNE20

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-09-20 13:42:49 +03:00
Chenguang Li 3780a3c917
CANN: Refactor ND to NZ workspace to be per-device (llama/15763)
* CANN: Refactor ND to NZ workspace to be per-device in Ascend backend

- Replaced the previous single global ND→NZ workspace with a per-device
  cache: an unordered_map keyed by device ID.
- The functions `release_nz_workspace`, `relloc_nz_workspace`, and
  `get_nz_workspace` now manage the workspace independently for each device,
  preventing memory conflicts in multi-device / pipeline-parallel scenarios.
- This change fixes potential precision issues caused by workspace
  overwrites when multiple devices perform ND→NZ conversions concurrently.

Co-authored-by: hipudding <huafengchun@gmail.com>

* refactor

Signed-off-by: noemotiovon <757486878@qq.com>

* rename

Signed-off-by: noemotiovon <757486878@qq.com>

* fix review comments

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
Co-authored-by: hipudding <huafengchun@gmail.com>
2025-09-20 13:42:49 +03:00
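
In outline, the per-device cache looks like this (a minimal sketch with plain malloc/free standing in for the Ascend allocator; the real functions also handle stream ordering and reuse policy):

```cpp
#include <cstdlib>
#include <unordered_map>

struct nz_workspace {
    void * buf  = nullptr;
    size_t size = 0;
};

// One workspace per device ID instead of a single global buffer.
static std::unordered_map<int, nz_workspace> nz_workspaces;

static void * get_nz_workspace(int device, size_t size) {
    nz_workspace & ws = nz_workspaces[device]; // creates an entry on first use
    if (ws.size < size) {                      // grow only this device's buffer
        std::free(ws.buf);
        ws.buf  = std::malloc(size);
        ws.size = size;
    }
    return ws.buf;
}

static void release_nz_workspace(int device) {
    auto it = nz_workspaces.find(device);
    if (it != nz_workspaces.end()) {
        std::free(it->second.buf);
        nz_workspaces.erase(it);
    }
}
```
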
leejet 2228462b19
ggml: add ops for WAN video model (cuda && cpu) (llama/15669)
* add conv3d support

* add ggml_pad_ext for cpu & cuda backend

* cuda/cpu: add im2col_3d support

* cuda: make im2col a little faster

* fix cuda pad/scale/im2col3d

* make im2col_3d faster

* gguf: support loading tensors which n_dims > GGML_MAX_DIMS

* fix cuda get_rows

* avoid ggml_conv_3d conflict

* correct GGML_OP_COUNT assertion

* avoid build failure

* avoid build failure on macOS

* cuda: remove unnecessary MIN define

* fix cpu im2col_3d

* adjust the code style

* cuda: use simpler loop in get_rows

* add test_im2col_3d to test-backend-ops

* test-backend-ops.cpp: remove trailing whitespace

* cpu: im2col_3d support non-contiguous src

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>

* fix test_im2col_3d

* remove unused variables

* cuda: get_rows: dfloat2 -> float2

* add test_pad_ext to test-backend-ops.cpp

* add gguf_init_from_file_ext impl

* Revert "gguf: support loading tensors which n_dims > GGML_MAX_DIMS"

This reverts commit d8377a0a37f314bd3713fe043b4333ad661610c1.

* Revert "add gguf_init_from_file_ext impl"

This reverts commit d9f1d13208c68ef83b3538201ac7f31614fb1994.

* update ggml_backend_vk_device_supports_op

* fix ggml_backend_vk_device_supports_op

* update other backend supports op for ggml_pad_ext

* metal/opencl/sycl/vulkan: fix GGML_OP_PAD check in supports_op

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-09-20 13:42:49 +03:00
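
For context on the im2col_3d work above: im2col unrolls convolution input patches into matrix columns so the convolution becomes a plain matmul. A naive single-channel, stride-1, no-padding 3-D variant might look like this (an illustration of the layout only, not the ggml kernel, which also handles strides, padding, dilation, and channels):

```cpp
#include <cstddef>
#include <vector>

// src holds a D x H x W volume in row-major order (size D*H*W).
static std::vector<float> im2col_3d(const std::vector<float> & src,
                                    int D, int H, int W,       // input depth/height/width
                                    int KD, int KH, int KW) {  // kernel dims
    const int OD = D - KD + 1, OH = H - KH + 1, OW = W - KW + 1;
    std::vector<float> cols((size_t) OD * OH * OW * KD * KH * KW);
    size_t n = 0;
    for (int od = 0; od < OD; ++od)
    for (int oh = 0; oh < OH; ++oh)
    for (int ow = 0; ow < OW; ++ow)       // one column per output position
        for (int kd = 0; kd < KD; ++kd)
        for (int kh = 0; kh < KH; ++kh)
        for (int kw = 0; kw < KW; ++kw)   // copy the KD x KH x KW patch
            cols[n++] = src[((size_t)(od + kd) * H + (oh + kh)) * W + (ow + kw)];
    return cols;
}
```
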
hipudding 96efb472b4
CANN: Fix precision issue on 310I DUO multi-devices (llama/15784) 2025-09-20 13:42:49 +03:00
rmatif 1569daf524
opencl: add hs=40 to FA (llama/15758) 2025-09-20 13:42:49 +03:00
Chenguang Li 5c860e94c6
CANN: fix acl_rstd allocation size in ggml_cann_rms_norm (llama/15760)
Fixes #15330

Adjust the allocation size of acl_rstd. The parameter `dims` is set to 3 according to the CANN documentation.

Co-authored-by: Yuchuan <yuchuan-cao@users.noreply.github.com>
2025-09-20 13:42:49 +03:00
Ruben Ortlam 719a05c665
vulkan: fix mmv subgroup16 selection (llama/15775) 2025-09-20 13:42:49 +03:00
Jeff Bolz 4a702a867c
vulkan: don't use std::string in load_shaders, to improve compile time (llama/15724)
* vulkan: don't use std::string in load_shaders, to improve compile time

* keep the string version for those calls that use it
2025-09-20 13:42:49 +03:00
Daniel Bevenius 4144ae10e9
vulkan : update ggml_vk_instance_validation_ext_available (llama/15666)
* vulkan : update ggml_vk_instance_validation_ext_available

This commit updates ggml_vk_instance_validation_ext_available() to
check for VK_EXT_validation_features instead of
VK_KHR_portability_enumeration.

Based on how the returned boolean is used later in the code (to enable
both the validation layer and the VK_EXT_validation_features extension),
it appears the function may have been intended to check for the
validation layer features extension.

* remove try/catch

This was a leftover from a previous iteration where I was explicitly
querying for a specific validation layer first, which would throw.

* update warning message about validation layers
2025-09-20 13:42:48 +03:00
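
A simplified sketch of such an instance-extension check against the Vulkan C API (the backend itself uses the C++ bindings, but the shape is the same):

```cpp
#include <cstring>
#include <vector>
#include <vulkan/vulkan.h>

static bool instance_extension_available(const char * name) {
    uint32_t count = 0;
    vkEnumerateInstanceExtensionProperties(nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateInstanceExtensionProperties(nullptr, &count, exts.data());
    for (const VkExtensionProperties & ext : exts) {
        if (std::strcmp(ext.extensionName, name) == 0) {
            return true;
        }
    }
    return false;
}

// e.g. instance_extension_available(VK_EXT_VALIDATION_FEATURES_EXTENSION_NAME)
```
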
Shin-myoung-serp 85c7aa3750
ggml vulkan: add hardsigmoid and hardswish operations (llama/15762) 2025-09-20 13:42:48 +03:00
Oliver Simons 9eef377330
CUDA: Optimize `rms_norm_f32` kernel and its fused variants, giving 1-6% perf E2E (llama/15715)
* Add fastdiv, use it in modulo and use modulo in rms_norm_f32

Fastdiv is a much faster way to do integer division, which was identified
as a bottleneck in rms_norm_f32.

* Support more `block_size` values in `rms_norm_f32`

This makes us more flexible in selecting the optimal number of threads,
weighing parallelization across a column against thread launch overheads
and MIO throttles.

* Update ggml/src/ggml-cuda/common.cuh

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Replace modulo with fastmodulo in `rms_norm_f32`

* Use `BinPackArguments=true` for formatting function calls

Will file a separate PR to adjust .clang-format file

* Update ggml/src/ggml-cuda/common.cuh

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Use uint3 for both `fastdiv` and `fastmodulo`

The compiler seems to reliably optimize away the unused .z component in
the fastdiv use-case, see https://godbolt.org/z/rx8KPrKr3

* More constrained type declarations

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Rename fastdiv and fastmodulo variables to shared variable name

As suggested by JohannesGaessler, this increases the clarity of the
intended use.

* Pack fastdiv/fastmodulo constants into uint2/uint3 objects

By packing constants to be used together into a struct, we are less
likely to make errors.

* Rename function parameter of fastmodulo

`modulo_consts` is more fitting/descriptive

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-09-20 13:42:48 +03:00
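
For background, fastdiv replaces division by a runtime-constant divisor with a multiply and a shift using precomputed magic constants, and fastmodulo then falls out as n - fastdiv(n) * d. A host-side sketch of the round-up variant (illustrative; on CUDA the constants are the packed uint3 mentioned above and the high multiply is __umulhi):

```cpp
#include <cassert>
#include <cstdint>

struct fastdiv_consts { uint32_t mp, l, d; }; // magic multiplier, shift, divisor

static fastdiv_consts init_fastdiv(uint32_t d) { // requires 1 <= d <= 2^31
    uint32_t l = 0;
    while (l < 32 && (uint64_t{1} << l) < d) ++l; // l = ceil(log2(d))
    uint32_t mp = (uint32_t)((uint64_t{1} << 32) * ((uint64_t{1} << l) - d) / d + 1);
    return {mp, l, d};
}

static uint32_t umulhi(uint32_t a, uint32_t b) { // high 32 bits of a*b (__umulhi on CUDA)
    return (uint32_t)(((uint64_t) a * b) >> 32);
}

// n / d without a divide; valid while umulhi(n, mp) + n fits in 32 bits,
// which comfortably covers typical tensor-index ranges.
static uint32_t fastdiv(uint32_t n, fastdiv_consts c) {
    return (umulhi(n, c.mp) + n) >> c.l;
}

static uint32_t fastmodulo(uint32_t n, fastdiv_consts c) {
    return n - fastdiv(n, c) * c.d; // n % d
}

int main() {
    const fastdiv_consts c = init_fastdiv(7);
    for (uint32_t n = 0; n < 1000000; ++n) {
        assert(fastdiv(n, c) == n / 7);
        assert(fastmodulo(n, c) == n % 7);
    }
}
```
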
hipudding 51bc843f3a
CANN: Add RoPE contiguous check for 310I DUO device (llama/15735) 2025-09-20 13:42:48 +03:00
xctan 75f739c7c8
ggml-cpu : optimize RVV kernels (llama/15720)
* ggml-cpu : optimize rvv ggml_vec_dot_f32

* ggml-cpu : optimize 128-bit rvv ggml_vec_dot_q4_K_q8_K

* ggml-cpu : fix riscv arch flags

* ggml-cpu : add more rvv ops

* ggml-cpu : optimize rvv ggml_vec_dot_q4_K_q8_K

* ggml-cpu : optimize rvv ggml_vec_dot_q6_K_q8_K

* ggml-cpu : minor rvv adjustments

* ggml-cpu : fix riscv include
2025-09-20 13:42:48 +03:00
hipudding 91e9e72ecd
CANN: Mask unsupported TRANSPOSE_1D operator (llama/15733)
CANN currently does not support kernel sizes larger than 255.
This change disables such cases.
2025-09-20 13:42:48 +03:00
Chenguang Li d84b96d9d0
CANN: Fix type float_t to float (llama/15736)
Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:42:48 +03:00
Ruben Ortlam e584edb5ba
vulkan: fix shaders gen when no integer dot is available (llama/15740) 2025-09-20 13:42:48 +03:00
hipudding 5aee53c40f
CANN: Resolve soft_max precision issue (llama/15730)
Previously, the slope tensor was set to fp16 to improve efficiency.
While this worked correctly in FA, it caused precision issues in soft_max.
This change applies different data types for different operators
to balance both accuracy and performance.
2025-09-20 13:42:47 +03:00
Jeff Bolz 1e03aa66f7
vulkan: Fix macro parameter order for f32 matmul shaders (llama/15716) 2025-09-20 13:42:47 +03:00
rmatif fb37f91163
opencl: add attn sinks support for FA kernels (llama/15706) 2025-09-20 13:42:47 +03:00
Chenguang Li 3db49c1c26
CANN: Support eager execution mode under ACL graph compilation (llama/15712)
* [CANN] Support eager execution mode under ACL graph compilation

Add support for running operators in eager mode while ACL graph
compilation is enabled. This allows bypassing graph execution
and directly submitting ops, which is useful for debugging and
reducing graph build overhead in certain scenarios.

Signed-off-by: noemotiovon <757486878@qq.com>

* fix typo

Signed-off-by: noemotiovon <757486878@qq.com>

* rename to acl_graph_mode

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
2025-09-20 13:42:47 +03:00
hipudding 13d3963f71
CANN: Support ext_factor in rope (llama/15710) 2025-09-20 13:42:47 +03:00
Johannes Gäßler f20a7b0e99
ggml-backend: raise GGML_MAX_SPLIT_INPUTS (llama/15722) 2025-09-20 13:42:47 +03:00
Gilad S 9e3600e569
vulkan: use memory budget extension to read memory usage (llama/15545)
* vulkan: use memory budget extension to read memory usage

* fix: formatting and names

* formatting

* fix: detect and cache memory budget extension availability on init

* fix: read `budgetprops.heapBudget` instead of `heap.size` when memory budget extension is available

* style: lints
2025-09-20 13:42:47 +03:00
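
A sketch of what reading budgets through VK_EXT_memory_budget involves: chain VkPhysicalDeviceMemoryBudgetPropertiesEXT into vkGetPhysicalDeviceMemoryProperties2 and read heapBudget per heap (simplified; as in the commit, the extension's availability must be detected and cached first):

```cpp
#include <vulkan/vulkan.h>

static VkDeviceSize total_heap_budget(VkPhysicalDevice dev) {
    VkPhysicalDeviceMemoryBudgetPropertiesEXT budget_props = {};
    budget_props.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_BUDGET_PROPERTIES_EXT;

    VkPhysicalDeviceMemoryProperties2 mem_props = {};
    mem_props.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PROPERTIES_2;
    mem_props.pNext = &budget_props; // chain the budget query

    vkGetPhysicalDeviceMemoryProperties2(dev, &mem_props);

    VkDeviceSize total = 0;
    for (uint32_t i = 0; i < mem_props.memoryProperties.memoryHeapCount; ++i) {
        // heapBudget: estimated usable size; heapUsage: what this process already uses
        total += budget_props.heapBudget[i];
    }
    return total;
}
```
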
Jeff Bolz 7a5e7368a3
vulkan: add missing clamps in new mul_mat_id paths (llama/15702)
This fixes a missing interaction between #15546 and #15652.
2025-09-20 13:42:46 +03:00
Ruben Ortlam d5f80a2982
vulkan: disable large mmv subgroups on older Nvidia GPUs (llama/15717) 2025-09-20 13:42:46 +03:00
s-goto-11 8218dc609c
ggml: SVE support for exponential functions (llama/15145)
* SVE support for exponential functions

Add const notation to variable pg

* Update ggml/src/ggml-cpu/vec.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Add const

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-09-20 13:42:46 +03:00
Prashant Vithule 31840a3a56
ggml: aarch64: Implement SVE F16 kernels for vector functions (llama/15115)
* Added SVE implementation for the vec_dot_fp16 kernel

* removed whitespace

* Added comment

* removed whitespace

* changed GGML_F16x_VEC_FMA for code consistency

* Update vec.h

---------

Co-authored-by: vithulep <p.m.vithule1517@gmail.com>
2025-09-20 13:42:46 +03:00
Ruben Ortlam 5e70d901b0
Vulkan: Add Integer Dot Product mul_mat_vec shader for legacy quants (llama/14903)
* vulkan: Add Integer Dot Product mul_mat_vec shader for legacy quants

* vulkan: use subgroup operations for quantize_q8_1 shader

* vulkan: add q8_1_x4 type with 128-bit alignment, use in mul_mat_vecq shader

* vulkan: use q8_1_x4 blocks in mul_mmq shader

* vulkan: do 8 calculations per invocation instead of 32 in mul_mat_vecq, similar to mul_mat_vec

* vulkan: tune mul_mat_vecq performance for Intel

* vulkan: fix quantizing issue when tensor is not divisible by 128

* vulkan: adapt integer dot mmv to mmv small m optimization (llama/15355)

* vulkan: allow all subgroup modes for mmv and mmvq

* vulkan: use prealloc intermediate reuse for mmvq path

* vulkan: tune mmvq for Intel, AMD GCN and Nvidia RTX 3090

* vulkan: adapt mmv quantize_y path to conditional sync logic

* vulkan: disable q8_0 mmvq on Nvidia

* vulkan: enable q8_0 on Nvidia pre-Turing

* fix prealloc sync condition

* fix llvmpipe subgroup 8 issue
2025-09-20 13:42:46 +03:00
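
The gist of the integer-dot path above, as a scalar sketch (illustrative only: the shaders use packed 32-bit integer dot products, and the q8_1/legacy quant blocks also carry offset terms this omits):

```cpp
#include <cstdint>

// Inner product of two 8-bit quantized blocks: accumulate in int32 on the raw
// quants, then apply the per-block scales once, instead of dequantizing every
// element to float first.
static float dot_q8_block(const int8_t * qa, float da,
                          const int8_t * qb, float db, int n) {
    int32_t acc = 0; // the shader computes this with packed dot products
    for (int i = 0; i < n; ++i) {
        acc += (int32_t) qa[i] * qb[i];
    }
    return da * db * (float) acc;
}
```
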
Daniel Bevenius c5f511e697
ggml : WebGPU add TRANSPOSE and RESHAPE to supported ops (llama/15695)
* ggml : WebGPU add TRANSPOSE and RESHAPE to supported ops

This commit adds support for the TRANSPOSE and RESHAPE operations in the
ggml webgpu backend.

Co-authored-by: Diego Devesa <slarengh@gmail.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-09-20 13:42:46 +03:00
Akarshan Biswas 2ba5e0cb47
CUDA: fix build error from ambiguous __half conversions in conv2d (llama/15690)
* CUDA: fix build error from ambiguous __half conversions in conv2d

Building conv2d with half precision failed because `__half` defines
multiple implicit conversion operators (to float, int, short, etc.),
causing ambiguous overload resolution when multiplying with float.

Introduce a templated `to_float` helper that explicitly converts
`__half` via `__half2float`, while passing through float unchanged.
Use this helper in conv2d accumulation to ensure unambiguous and
correct promotion to float.

Fixes some build errors with half-precision kernels on CUDA.

ggml-ci

* CUDA: Replace custom to_float helper with unified ggml_cuda_cast and add half->float conversion

* CUDA: Add missing convert.cuh header

* CUDA: remove unnecessary extension in ggml_cuda_cast

* CUDA: Address review comment, remove second type template argument
2025-09-20 13:42:46 +03:00
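
A self-contained C++ illustration of both the ambiguity and the helper-based fix; `Half` stands in for CUDA's `__half`, where the explicit conversion would be `__half2float` (the follow-up commits fold this into ggml_cuda_cast):

```cpp
struct Half {                       // like __half: several implicit conversions
    float v;                        // stand-in storage; real __half stores 16 bits
    operator float() const { return v; }
    operator int()   const { return (int) v; }
    operator short() const { return (short) v; }
};

template <typename T> float to_float(T x)    { return x; }   // float passes through
template <>           float to_float(Half h) { return h.v; } // __half2float(h) in CUDA

float scale(Half h, float s) {
    // return h * s;          // error: ambiguous overload for built-in operator*
    return to_float(h) * s;   // unambiguous: one explicit promotion to float
}
```
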
hipudding bb5f844ec7
CANN: Optimize MUL_MAT_ID (llama/15658) 2025-09-20 13:42:46 +03:00
hipudding ed7ebdc757
CANN: fix RoPE cache issue on multi-device (llama/15629)
* CANN: fix RoPE cache issue on multi-device

RoPE cache only needs to be computed once per token.
However, in multi-device scenarios, not every device starts
computation from layer 0, which may lead to unallocated memory
issues and precision errors.

This commit records the first layer of each device to avoid
the above issues.

* CANN: Optimize first-layer detection method

* CANN: Remove trailing whitespace

* CANN: Only cache the data that can be determined to be unchanged from the parameters.

* CANN: Update function comment
2025-09-20 13:42:45 +03:00
Georgi Gerganov 3d470687de
metal : fix checks for available FA kernels (llama/15700)
* metal : fix checks for available FA kernels

ggml-ci

* cont : fix comment [no ci]
2025-09-20 13:42:45 +03:00
Diego Devesa b11c972b88
llama : separate compute buffer reserve from fattn check (llama/15696)
Exposes ggml_backend_sched_split_graph() to allow splitting the graph without allocating compute buffers and uses it to split the graph for the automatic Flash Attention check.
2025-09-20 13:42:45 +03:00
Jeff Bolz db7ecfb61d
vulkan: handle large sizes for get_rows (llama/15686) 2025-09-20 13:42:45 +03:00
Jeff Bolz 191def71ce
vulkan: mul_mat_id coopmat2 optimizations (llama/15546)
* vulkan: mul_mat_id coopmat2 optimizations

Add a path for when the tile fits in BN/2, similar to what we have for mul_mat.

Only call fetch_scales/store_scales once per QUANT_K block, and once at the
beginning in case start_k is not aligned.

* Also add a path for BN/4 - worth a couple more percent
2025-09-20 13:42:45 +03:00
Daniel Bevenius b092e95aaa
vulkan : remove unused portability_enumeration_ext variable (llama/15679)
This commit removes the portability_enumeration_ext variable from the
ggml_vk_instance_portability_enumeration_ext_available function as it
is initialized to false but never modified, making it redundant.
2025-09-20 13:42:45 +03:00
Jeff Bolz 20ce6fcf6a
vulkan: Allow fallback to sysmem memory when vidmem is full (llama/15649)
* vulkan: Allow fallback to sysmem memory when vidmem is full

* vulkan: Add env var GGML_VK_ALLOW_SYSMEM_FALLBACK
2025-09-20 13:42:45 +03:00
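
The fallback is opt-in through the environment variable named above; a minimal sketch of how such a flag can be read (the exact accepted values are an assumption):

```cpp
#include <cstdlib>

// Returns true when the user has opted in to falling back to system memory
// once device-local memory is exhausted (semantics assumed, name from the commit).
static bool vk_allow_sysmem_fallback() {
    const char * v = std::getenv("GGML_VK_ALLOW_SYSMEM_FALLBACK");
    return v != nullptr && v[0] != '0';
}
```
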
Jeff Bolz 71f0ee70bf
vulkan: clamp matmul and FA results to the max finite value (llama/15652)
* vulkan: clamp matmul and FA results to the max finite value

* only clamp for fp16
2025-09-20 13:42:45 +03:00
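
In outline: fp16's largest finite value is 65504, so fp32 results are clamped into the finite fp16 range before being written to an fp16 output, turning would-be infinities from overflow into the max finite value (an illustration; the real clamp lives in the matmul/FA shaders):

```cpp
#include <algorithm>

constexpr float FP16_MAX = 65504.0f; // largest finite fp16 value

static float clamp_to_fp16_finite(float x) {
    return std::min(std::max(x, -FP16_MAX), FP16_MAX); // +/-inf map to +/-65504
}
```
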
Charles Xu 74583845b6
ggml: update kleidiai to v1.13.0 (llama/15663) 2025-09-20 13:42:44 +03:00
Johannes Gäßler f6ba3949b6
llama: use FA + max. GPU layers by default (llama/15434)
* llama: use max. GPU layers by default, auto -fa

* ggml-backend: abort instead of segfault
2025-09-20 13:42:44 +03:00
Johannes Gäßler b7809c401b
CUDA: use FP32 arithmetic for conv2d (llama/15683) 2025-09-20 13:42:44 +03:00