Commit Graph

3385 Commits

Author SHA1 Message Date
Georgi Gerganov 4979e04f5d release : v1.8.2 2025-10-15 10:29:42 +03:00
Georgi Gerganov 8ba3c13b0c talk-llama : sync llama.cpp 2025-10-15 09:29:17 +03:00
Georgi Gerganov ff2253b08a sync : ggml 2025-10-15 09:29:17 +03:00
SavicStefan 499f183e75 vulkan: Add ACC_TYPE_VEC2 implementation (llama/16203)
Signed-off-by: Stefan Savic <stefan.savic@huawei.com>
Co-authored-by: Stefan Savic <stefan.savic@huawei.com>
2025-10-15 09:29:17 +03:00
Aman Gupta 2eb9119754 CUDA + openCL: fix bug in accessing rms_norm->src while doing fusion (llama/16577) 2025-10-15 09:29:17 +03:00
Jeff Bolz 393fbbc80b vulkan: Support FA with K/V in F32 (llama/16543) 2025-10-15 09:29:17 +03:00
Jeff Bolz 73e200ee85 vulkan: Improve build time for MSVC (llama/16545)
Enable CMP0147 so custom build steps (invoking vulkan-shader-gen) are run in parallel.

Enable /MP so source files are compiled in parallel.
2025-10-15 09:29:17 +03:00
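The two build-time improvements named in the commit can be sketched as a CMake fragment (an illustration assembled from the commit message, not the project's actual CMakeLists): CMP0147 lets Visual Studio generators run custom commands such as shader generation in parallel, and `/MP` enables parallel compilation of source files within one MSVC invocation.

```cmake
# Requires CMake >= 3.27, where policy CMP0147 was introduced.
cmake_policy(SET CMP0147 NEW)   # VS generators: run custom build steps in parallel

if(MSVC)
  add_compile_options(/MP)      # compile source files of a target in parallel
endif()
```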
Johannes Gäßler 1bdd746bc8 CUDA: enable FA for FP32 KV cache (llama/16546) 2025-10-15 09:29:17 +03:00
Aman Gupta f2075667fa CUDA: use fastdiv + ggml_cuda_mad for mmvf (llama/16557)
* CUDA: use fastdiv + ggml_cuda_mad for mmvf

* use bf16 directly + fix formatting

* Add exception for HIP code
2025-10-15 09:29:17 +03:00
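The fastdiv trick replaces integer division by a run-time-constant divisor with one multiply and one shift, using a precomputed "magic" multiplier. A minimal Python sketch of the classic round-up construction (an illustration of the technique only; the real CUDA code uses 64-bit intrinsics such as `__umulhi` rather than arbitrary-precision arithmetic):

```python
def fastdiv_magic(d, N=32):
    """Precompute (magic, shift) so that n // d == (n * magic) >> shift
    for all 0 <= n < 2**N. d must satisfy 1 <= d < 2**N."""
    L = (d - 1).bit_length()           # ceil(log2(d)); 0 when d == 1
    shift = N + L
    magic = ((1 << shift) + d - 1) // d  # ceil(2**(N+L) / d)
    return magic, shift

def fastdiv(n, magic, shift):
    # on a GPU this is a widening multiply plus a shift, no divide unit needed
    return (n * magic) >> shift
```

The divisor is fixed per kernel launch (e.g. a tensor dimension), so the magic pair is computed once on the host and passed to the kernel.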
Aman Gupta b4c5c6f71f CUDA: add fp kernel for larger batch size MoE (llama/16512)
* CUDA: kernel for larger batch sizes for MoE

* WIP

* WIP

* WIP

* WIP

* WIP

* WIP

* fixup

* tests

* Move mmq_ids_helper to mmid

* cleanup

* Remove redundant checks
2025-10-15 09:29:17 +03:00
Anav Prasad a12848e8e9 cuda : remove legacy copy-op pointer indirection code (llama/16485)
* remove legacy copy-op pointer indirection code

* further removal of copy-op indirection code

* renamed check_node_graph_compatibility_and_refresh_copy_ops function
2025-10-15 09:29:17 +03:00
Georgi Gerganov 25ac94a6cb metal : FA support F32 K and V and head size = 32 (llama/16531)
* metal : FA support F32 K and V and head size = 32

* graph : remove obsolete comment [no ci]
2025-10-15 09:29:17 +03:00
lhez 66b0fc2fb7 opencl: fix build targeting CL 2 (llama/16554) 2025-10-15 09:29:17 +03:00
Johannes Gäßler 77272fe0df CUDA: fix numerical issues in tile FA kernel (llama/16540) 2025-10-15 09:29:17 +03:00
Jie Fu (傅杰) 8a9c2ba6a1 ggml : fix build broken with -march=armv9-a on MacOS (llama/16520)
* ggml : fix build broken with -march=armv9-a on MacOS

Signed-off-by: Jie Fu <jiefu@tencent.com>

* Add #pragma message

Signed-off-by: Jie Fu <jiefu@tencent.com>

* Address review comment.

Signed-off-by: Jie Fu <jiefu@tencent.com>

* Update ggml/src/ggml-cpu/ggml-cpu.c

---------

Signed-off-by: Jie Fu <jiefu@tencent.com>
Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-10-15 09:29:17 +03:00
Chenguang Li 417ecdddc5 CANN: fix CPU memory leak in CANN backend (llama/16549)
This commit fixes a CPU-side memory leak issue in the CANN backend,
which occurred when intermediate aclTensorList objects were not properly
released after operator execution. The leak happened during repeated
invocations of CANN ops (e.g., FlashAttention), leading to increasing
host memory usage over time.

Proper resource cleanup (aclDestroyTensorList and related release logic)
has been added to ensure that all temporary tensors are correctly freed.
2025-10-15 09:29:17 +03:00
Sam/Samuel bfd88b8b6e metal: add support for opt_step_sgd (llama/16539)
* metal: add support for opt_step_sgd

* add newline to pass EditorConfig check
2025-10-15 09:29:17 +03:00
Georgi Gerganov ccac1b4772 ggml : fix scalar path for computing norm (llama/16558) 2025-10-15 09:29:17 +03:00
hipudding 53e21364a6 CANN: Update several operators to support FP16 data format (llama/16251)
Many Ascend operators internally use FP16 precision for computation.
If input data is in FP32, it must first be cast to FP16 before
computation, and then cast back to FP32 after computation, which
introduces unnecessary cast operations. Moreover, FP16 computation
requires significantly less workload compared to FP32, leading to
noticeable efficiency improvements.

In this change, `get_rows`, `rms_norm`, and `flash_attn_ext` are extended
to support multiple data types. Validation on the Qwen2 0.5b model shows
correct accuracy and about 10% performance gain in concurrent scenarios.

Co-authored-by: noemotiovon <757486878@qq.com>
2025-10-15 09:29:17 +03:00
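The saving comes from the fact that a cast is lossy only once: data already held in FP16 round-trips through the half format unchanged, so keeping tensors in FP16 end to end removes a redundant pair of casts around every op. A small Python illustration using the `struct` half-precision format (`'e'`) as a stand-in for the hardware cast (illustrative only, unrelated to the actual CANN API):

```python
import struct

def cast_f16(x):
    # round-trip a Python float through IEEE-754 half precision,
    # emulating an fp32 -> fp16 -> fp32 cast pair
    return struct.unpack('<e', struct.pack('<e', x))[0]

v = cast_f16(0.1)   # first cast rounds the value to the nearest fp16
```

Re-casting `v` returns `v` itself: once the data lives in FP16, the cast-in/cast-out pair around each operator is pure overhead, which is what this change eliminates.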
Sam/Samuel 7f22fe5d8f metal : add opt_step_adamw and op_sum (llama/16529)
* scaffold to support opt step adamw on metal (not written so far)

* add opt-step-adamw kernel for metal

* pass op->src[4] as a separate buffer to the pipeline

* add bounds check to opt-step-adamw kernel

* complete scaffold for GGML_OP_SUM

* naive GGML_OP_SUM kernel

* remove unwanted comment

* change OP_SUM capability gate

* Add has_simdgroup_reduction to both ops to pass CI
2025-10-15 09:29:17 +03:00
Neo Zhang Jianyu be778c992f fix UT fault cases: count-equal, argsort, pad OPs (llama/16521)
* fix/refactor OP argsort, pad

* fix count-equal op

* update SYCL OP list

* fix format issue

---------

Co-authored-by: Zhang Jianyu <zhang.jianyu@outlook.com>
2025-10-15 09:29:17 +03:00
sirus20x6 70eb30f28e ggml : Fix FP16 ELU positive branch (llama/16519)
Co-authored-by: Aaron <shelhamer.aaron@gmail.com>
2025-10-15 09:29:17 +03:00
sirus20x6 53721d6309 ggml: Correct SVE implementation in ggml_vec_dot_f16_unroll (llama/16518)
The previous SVE implementation for `ggml_vec_dot_f16_unroll` contained a bug due to a copy-paste error. The wrong variable was used in an FMA instruction, leading to incorrect results. This commit corrects the variable usage and improves the clarity of the code by renaming variables to avoid confusion.

Co-authored-by: Aaron <shelhamer.aaron@gmail.com>
2025-10-15 09:29:17 +03:00
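The structure of an unrolled dot product makes this class of copy-paste bug easy to introduce: each unrolled row carries its own accumulator, and every FMA must pair a row's loaded elements with that same row's accumulator. A scalar Python model of the corrected shape (an analogy for the SVE code, not the actual ggml implementation):

```python
def vec_dot_f16_unroll(xs, y):
    """Dot each row in xs against y, with one accumulator per row.
    The fixed bug fed one row's elements into another row's FMA."""
    sums = [0.0] * len(xs)
    for j in range(len(y)):
        for r, row in enumerate(xs):
            sums[r] += row[j] * y[j]   # row r's data -> row r's accumulator
    return sums
```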
Johannes Gäßler b5fb9b9f58 CUDA: faster tile FA, add oob checks, more HSs (llama/16492) 2025-10-15 09:29:17 +03:00
Georgi Gerganov a91dd3be72 release : v1.8.1 2025-10-12 11:17:59 +03:00
Georgi Gerganov ea174c62bc bench : update [no ci] 2025-10-12 11:16:23 +03:00
Georgi Gerganov ff4c1a5a53 talk-llama : sync llama.cpp 2025-10-12 11:16:23 +03:00
Georgi Gerganov ed6a3063ec sync : ggml 2025-10-12 11:16:23 +03:00
Georgi Gerganov d201705e71 metal : fix mul-mm condition + fix mul-mv permuted kernels (llama/16494) 2025-10-12 11:16:23 +03:00
Diego Devesa 1cc342427b cuda : avoid initializing unused devices (llama/16510) 2025-10-12 11:16:23 +03:00
Prajwal B Mehendarkar d8f1aa4e1d cmake : Dont define XOPENSOURCE on AIX (llama/16481) 2025-10-12 11:16:23 +03:00
duduta d83fef35df cpu : optimize the ggml NORM operation (llama/15953)
* ggml-cpu: optimize norm operation to use intrinsics or Accelerate

* rename function

* add endif macro comment

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Aaron Teo <taronaeo@gmail.com>

* implement s390x SIMD suggested by @taronaeo

* add TODO comment

* tidy up spaces

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Aaron Teo <taronaeo@gmail.com>
2025-10-12 11:16:23 +03:00
Chenguang Li b9eac9419c CANN: Improve ACL graph matching (llama/16166)
* CANN: improve ACL graph matching

Record `ne` and `nb` information for src tensors and include them in the
graph matching check. This enhances the robustness of ACL graph matching
by preventing incorrect matches when src tensors share the same data
address but differ in shape or stride.

* CANN: add op_params match
2025-10-12 11:16:23 +03:00
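The matching condition described above can be sketched in Python (hypothetical field names; the real code compares the `ne`/`nb` arrays of `ggml_tensor` structs): a data pointer alone cannot distinguish two views of the same buffer, so shape and stride must participate in the check.

```python
def src_matches(a, b):
    """Graph-cache match for a src tensor: same address is not enough,
    since two views can share data but differ in shape (ne) or stride (nb)."""
    return (a["data"] == b["data"]
            and a["ne"] == b["ne"]
            and a["nb"] == b["nb"])
```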
Charles Xu c8b2c56fd2 kleidiai: kernel interface refactoring (llama/16460) 2025-10-12 11:16:23 +03:00
Neo Zhang Jianyu 7df6766b63 refactor soft_max, add soft_max_back (llama/16472)
* refactor to support soft_max_ext

* fix error and support soft_max_back

* rm unused functions

* fix format issue

---------

Co-authored-by: Zhang Jianyu <zhang.jianyu@outlook.com>
2025-10-12 11:16:23 +03:00
ai-fonsi 21e6e72a2f Disable CUDA host buffers on integrated GPUs (llama/16308) 2025-10-12 11:16:23 +03:00
Georgi Gerganov 7ef78a72e1 metal : mark FA blocks (llama/16372)
* metal : better unroll in the FA kernels

* metal : index FA blocks

* tests : restore [no ci]

* metal : prevent division by zero in FA kernels

* metal : fix -INF detection logic
2025-10-12 11:16:23 +03:00
Reese Levine 4eea3efc49 ggml webgpu: profiling, CI updates, reworking of command submission (llama/16452)
* Add profiling

* More detailed profiling

* Rework command submission to avoid global locks

* Update wait handling

* try new method of waiting on futures

* Add serializing of command submission in some cases

* Add new pool for timestamp queries and clean up logging

* Serialize command submission in CI and leave a TODO note

* Update webgpu CI

* Add myself as WebGPU codeowner

* Deadlock avoidance

* Leave WebGPU/Vulkan CI serialized

* Fix divide by 0

* Fix logic in division by inflight_threads

* Update CODEOWNERS and remove serialize submit option
2025-10-12 11:16:23 +03:00
Georgi Gerganov 4bce4fa5e9 metal : add support for non-padded FA KV (llama/16148)
* metal : pad K, V and Mask when needed

* cont : simplify

* cuda : add TODO about KV padding requirement

* metal : add comments

* metal : remove mask padding requirement
2025-10-12 11:16:23 +03:00
Georgi Gerganov 6cf0c21b09 tests : add -INF blocks to the KQ mask in the FA tests (llama/16380)
* tests : add -INF blocks to the KQ mask in the FA tests

* cont : bump -INF block size to 64

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>

* ggml : prevent division by zero in FA CPU op

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-10-12 11:16:23 +03:00
Georgi Gerganov 1a4116f942 metal : various optimizations + refactoring (llama/16446)
* metal : ssm_scan minor opts

* metal : get_rows optimize

* metal : cpy optimize

* metal : ssm_conv opt

* metal : ssm_scan simplify

* metal : ssm_Scan opt
2025-10-12 11:16:23 +03:00
Georgi Gerganov 0e431b3cea ggml : fix unaligned access in AMX code (llama/16315) 2025-10-12 11:16:23 +03:00
Daniel Bevenius 0f29d7c3fa ggml-cpu : fix leftover handling in ggml_vec_scale_f32 for SVE (llama/16443)
This commit updates the leftover handling in ggml_vec_scale_f32.

The motivation for this is that the code currently incorrectly assumes
there would be fewer than ggml_f32_epr leftover elements. However,
since the main loop processes 2*ggml_f32_epr elements per iteration,
there can be up to (2*ggml_f32_epr - 1) leftover elements.

The original single-pass leftover code could only process ggml_f32_epr
elements, leaving some elements unscaled.

Example scenario with 256-bit SVE:
```
ggml_f32_epr  = 8 (elements per register)
ggml_f32_step = 16 (two registers per iteration)
n             = 25
np            = 16
leftovers     = 9 elements (16-24)

Original    : processes only elements 16-23, misses element 24
This commit : loop processes elements 16-23, then element 24
```

Refs: https://github.com/ggml-org/llama.cpp/actions/runs/18070620247/job/51419855630
2025-10-12 11:16:23 +03:00
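The corrected tail handling can be modeled in plain Python, with scalar loops standing in for SVE registers (an illustrative sketch, not the actual ggml-cpu code): after the 2*epr-wide main loop, a first leftover pass handles one full register's worth of elements if available, and a final pass covers the remaining fewer-than-epr elements.

```python
def vec_scale_f32(x, s, epr=8):
    """Scale x by s. epr = elements per register; the main loop consumes
    2*epr per iteration, so up to 2*epr - 1 elements can be left over."""
    n = len(x)
    step = 2 * epr
    np_ = (n // step) * step
    for i in range(0, np_, step):       # vectorized body: two registers per pass
        for j in range(i, i + step):
            x[j] *= s
    i = np_
    if n - i >= epr:                    # leftover pass 1: one whole register
        for j in range(i, i + epr):
            x[j] *= s
        i += epr
    for j in range(i, n):               # leftover pass 2: predicated tail < epr
        x[j] *= s
    return x
```

With n = 25 and epr = 8 this reproduces the scenario above: the main loop covers 0-15, the first leftover pass covers 16-23, and the tail pass picks up element 24 that the original single-pass code missed.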
Reese Levine b8bdf06182 ggml webgpu: actually add softmax, fix rms_norm offset (llama/16400)
* implement soft_max

* Fix soft_max data race

* Temporary fix, wait on each submit
2025-10-12 11:16:23 +03:00
Eve 2ca8fa37fa vulkan: use a more appropriate amount of threads when generating shaders (llama/16418)
* use a more flexible amount of threads

* fix windows compile and 0 thread case

* nominmax
2025-10-12 11:16:23 +03:00
Radoslav Gerganov 93882335a8 rpc : check src buffer when copying tensor (llama/16421)
Only the dst buffer is guaranteed to be an RPC buffer. Add a check for the
src one as well.
2025-10-12 11:16:23 +03:00
Radoslav Gerganov af51bbab88 rpc : add support for multiple devices (llama/16276)
* rpc : add support for multiple devices

Allow rpc-server to expose multiple devices from a single endpoint.
Change RPC protocol to include device identifier where needed.

closes: #15210

* fixes

* use ggml_backend_reg_t

* address review comments

* fix llama-bench backend report

* address review comments, change device naming

* fix cmd order
2025-10-12 11:16:23 +03:00
Acly 49e0a426f3 vulkan : incremental shader builds (llama/16341)
* vulkan (DRAFT): split shader generation by GLSL source file, to improve incremental build times

* support dep-files so shaders are recompiled if their included files change

* rename shader files which are used as "headers" to use .glsl extension
* move glslc extension detection shaders to separate folders
* the above is to prevent them from getting glob'd with the actual compute shaders that need to be compiled

* vulkan : only write embedded shader .hpp/.cpp when they change

* avoid recompiling ggml-vulkan.cpp when editing shaders
* pass single --source argument instead of --input-dir & --filter to shader gen
* check for source file match earlier

* fix hang in vulkan-shaders-gen when there are compilation errors

* early out did not decrement compile_count

* clean up

* fix glslc integer dot product test

* unconditionally write the embedded shader cpp output

* replace output filepath in generated dep-files to match output in CMakeLists

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-10-12 11:16:23 +03:00
Georgi Gerganov 93c1305565 metal : fix loop bound in ggml_mem_ranges (llama/16412) 2025-10-12 11:16:23 +03:00
Acly a70144a873 ggml : fix graph reallocation with multiple chunks (llama/16396)
reallocation is needed if a single chunk grows in size,
even if total allocation size stays the same or is lower
2025-10-12 11:16:23 +03:00
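The check described by this commit is per-chunk, not aggregate; a minimal Python sketch of that decision (hypothetical helper, not the real ggml-alloc code):

```python
def needs_realloc(old_chunks, new_chunks):
    """Chunk sizes are compared pairwise: any single chunk growing forces
    reallocation, even when the total across all chunks is equal or smaller."""
    if len(new_chunks) > len(old_chunks):
        return True
    return any(new > old for old, new in zip(old_chunks, new_chunks))
```

For example, going from chunks [4, 4] to [2, 6] keeps the total at 8 but still requires reallocation, because the second chunk grew.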