Commit Graph

4210 Commits

Author SHA1 Message Date
YehuditE 55cf00c20a sycl : add PAD_REFLECT_D1 operator support (llama/16145)
* sycl: add PAD_REFLECT_D1 operator support

* docs(ops): regenerate docs/ops.md

* remove trailing whitespaces

* style: fix editorconfig issues — trim trailing spaces and normalize EOLs

* fix: move PAD_REFLECT_1D case outside of fall-through block
2025-10-22 12:58:11 +03:00
Diego Devesa 70b4d22f01 ggml-alloc : fix leak when reusing a tensor with a larger size (llama/16679) 2025-10-22 12:58:11 +03:00
safranowith bb76672081 SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators (llama/16613)
* SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

Clean up unrelated changes from previous commit

* Chore: remove empty lines and fix indentation

* Clean up: remove leftover blank lines and fix spacing

* chore: fix trailing whitespace and ensure final newline

* Cleanup: remove redundant declarations already defined in header

* Sync docs/ops.md with updated backend operation support

* docs: update ops.md after rebase

* docs: update ops.md - Vulkan supports SSM_CONV and SSM_SCAN
2025-10-22 12:58:11 +03:00
Aaron Teo 82bdf31267 ci : fix binaries release failure for s390x (binaries may not work yet) (llama/16664)
* devops: initial patch

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: forgot the z15 suffix

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: attempt at impl GGML_CPU_ALL_VARIANTS for s390x

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: rm baseline version

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-10-22 12:58:11 +03:00
Johannes Gäßler 72d98011db HIP: fix GPU_TARGETS (llama/16642) 2025-10-22 12:58:11 +03:00
Jeff Bolz 414901a42c vulkan: Implement topk_moe fused shader, ported from CUDA (llama/16641)
This is similar to the CUDA shader from #16130, but doesn't use shared memory
and handles different subgroup sizes.
2025-10-22 12:58:11 +03:00
Aman Gupta 08345f15ec CUDA: use registers instead of smem in topk-moe (llama/16647)
Uses the technique used in the vulkan PR #16641. Neat trick!
2025-10-22 12:58:11 +03:00
Shawn Gu 8ffdf4bd96 opencl: transposed gemm/gemv moe kernel with mxfp4,f32 (llama/16602)
* opencl: transposed gemm/gemv moe kernel with mxfp4,f32

* add restore kernel for moe transpose

* fix trailing whitespaces

* resolve compilation warnings
2025-10-22 12:58:11 +03:00
Radoslav Gerganov 6aa18cccd8 rpc : report actual free memory (llama/16616)
* rpc : report actual free memory

Start reporting the free memory on every device instead of using
fixed values. Now llama-cli users can get a nice memory breakdown
when using RPC devices.

* drop --mem in rpc-server
2025-10-22 12:58:11 +03:00
Giuseppe Scrivano d22008b631 vulkan: Add State Space Model (SSM) Operations Support (llama/16463)
* vulkan: implement SSM scan operation

Add State Space Model scan operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

* vulkan: implement SSM conv operation

Add State Space Model conv operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

---------

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2025-10-22 12:58:11 +03:00
muggle-stack 328263f8fd ggml : fix SpaceMit IME array out-of-bounds in task assignment (llama/16629)
Fix incorrect task-to-batch index calculation in the quantization phase.

The bug caused out-of-bounds access to qnbitgemm_args array when
compute_idx exceeded per_gemm_block_count_m, leading to invalid
pointer dereferences and SIGBUS errors.

Correctly map tasks to batches by dividing compute_idx by
per_gemm_block_count_m instead of block_size_m.

Example:
  batch_feature=1, gemm_m=30, block_size_m=4
  per_gemm_block_count_m = 8, task_count = 8

  Old: gemm_idx = 4/4 = 1 (out of bounds)  New: gemm_idx = 4/8 = 0 (correct)

Tested on SpaceMit K1 RISC-V64 with qwen2.5:0.5b model.

Co-authored-by: muggle <mingjun.rong@spacemit.com>
2025-10-22 12:58:11 +03:00
Jeff Bolz 4a384826a8 vulkan: fix debug build (add_rms_len/data not found) (llama/16624) 2025-10-22 12:58:11 +03:00
Ilia Ilmer 0ae492641c metal : add `CONV_TRANSPOSE_2D` (llama/16542)
* initial: headers and metal-device.cpp updates

* adding conv_transpose_2d

* fix type

* fix type: int32->int64

* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* add checks for src[0] and src[1]; add type checks

* Update ggml-metal.metal

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* add more tests, add optimization to threading

* add dynamic memory allocation in metal

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-22 12:58:11 +03:00
GittyBurstein 82332cea27 SYCL SET operator optimized for F32 tensors (llama/16350)
* SYCL/SET: implement operator + wire-up; docs/ops updates; element_wise & ggml-sycl changes

* sycl(SET): re-apply post-rebase; revert manual docs/ops.md; style cleanups

* move SET op to standalone file, GPU-only implementation

* Update SYCL SET operator for F32

* ci: fix editorconfig issues (LF endings, trailing spaces, final newline)

* fixed ggml-sycl.cpp

---------

Co-authored-by: Gitty Burstein <gitty@example.com>
2025-10-22 12:58:11 +03:00
GittyBurstein 7bb53032b3 sycl : add ARANGE operator (llama/16362)
* SYCL: update element-wise ops and presets

* clean arange

* Re-trigger CI

---------

Co-authored-by: Gitty Burstein <gitty@example.com>
2025-10-22 12:58:11 +03:00
Chenguang Li fe965613c0 CANN: format code using .clang-format (llama/15863)
This commit applies .clang-format rules to all source files under the
ggml-cann directory to ensure consistent coding style and readability.
The .clang-format option `SortIncludes: false` has been set to disable
automatic reordering of include directives.
No functional changes are introduced.

Co-authored-by: hipudding <huafengchun@gmail.com>
2025-10-22 12:58:11 +03:00
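The only `.clang-format` option the commit names is `SortIncludes`; a minimal config sketch (the `BasedOnStyle` line is an assumption for illustration, not taken from the ggml-cann tree):

```yaml
# Sketch of the relevant .clang-format setting; only SortIncludes
# is stated in the commit message.
BasedOnStyle: Google
SortIncludes: false   # keep include directives in their original order
```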
takuya kodama 3c136d699a ggml-cpu: replace putenv with setenv for const-correctness (llama/16573)
## Why it failed

When compiling with strict compiler flags (-Wwrite-strings -Werror=discarded-qualifiers),
the build fails with the following error:

```
cmake \
  -S . \
  -B ../llama.cpp.build \
  --preset=x64-linux-gcc-debug \
  -DCMAKE_INSTALL_PREFIX=/tmp/local \
  -DCMAKE_C_FLAGS="-Wwrite-strings -Werror=discarded-qualifiers" && \
cmake --build ../llama.cpp.build/
...
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c: In function ‘ggml_cpu_init’:
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:3572:24: error: passing argument 1 of ‘putenv’ discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
 3572 |                 putenv("KMP_BLOCKTIME=200"); // 200ms
      |                        ^~~~~~~~~~~~~~~~~~~
In file included from /home/otegami/work/cpp/llama.cpp/ggml/src/./ggml-impl.h:10,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/traits.h:3,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:6:
/usr/include/stdlib.h:786:26: note: expected ‘char *’ but argument is of type ‘const char *’
  786 | extern int putenv (char *__string) __THROW __nonnull ((1));
      |                    ~~~~~~^~~~~~~~
cc1: some warnings being treated as errors
ninja: build stopped: subcommand failed.
```

The issue is that putenv() expects a non-const char * but receives a string literal (const char *).

## How to fix

This PR replaces putenv("KMP_BLOCKTIME=200") with setenv("KMP_BLOCKTIME", "200", 0).

Benefits of setenv():
- Accepts const char * parameters (no qualifier warnings)
- Makes copies of the strings (safer memory handling)
- The third parameter (0) ensures we don't overwrite if already set
2025-10-22 12:58:11 +03:00
yael-works f7b5ecf195 SYCL: Add GGML_OP_MEAN operator support (llama/16009)
* SYCL: Add GGML_OP_MEAN operator support

* SYCL: Fix formatting for GGML_OP_MEAN case

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-10-22 12:58:11 +03:00
safranowith 757d51d21d cpu : add FLOOR, CEIL, ROUND and TRUNC unary operators (llama/16083)
* CPU: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

- Added the operators to unary op enum
- Implemented API functions
- Implemented forward and unary-op logic in CPU backend
- Updated ggml_get_n_tasks
- Updated operators names array and static_assert
- Updated docs and enabled automatic tests

* docs: add documentation for ggml_trunc and ggml_trunc_inplace in ggml.h

* chore: remove trailing whitespace from ggml.h

* Remove unresolved merge markers

* Apply review suggestions: cleanup formatting, enum order and leftover artifacts

* Regenerate ops.md using create_ops_docs.py
2025-10-22 12:58:11 +03:00
lhez bef9f74553 opencl: add q8_0 mm support (llama/16469)
* opencl: add mm_q8_0_f32

* opencl: fix data loading for incomplete tile

* opencl: use q8_0 mm for larger matrix

* opencl: add some tests to cover the path
2025-10-22 12:58:11 +03:00
lhez 16dab3d122 opencl: fix FA for f32 (llama/16584) 2025-10-22 12:58:11 +03:00
Sam/Samuel d8a146b0f9 metal: optimise `GGML_OP_SUM` (llama/16559)
* optimise GGML_OP_SUM

* add non-contiguous tests by permuting the input

* change tests to require full contiguity of OP_SUM

* cuda : add check GGML_OP_SUM

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-22 12:58:11 +03:00
Julius Tischbein 0c9d49927c CUDA: Changing the CUDA scheduling strategy to spin (llama/16585)
* CUDA set scheduling strategy to spinning for cc121

* Using prop.major and prop.minor, include HIP and MUSA

* Exclude HIP and MUSA

* Remove trailing whitespace

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Remove empty line

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-10-22 12:58:11 +03:00
Georgi Gerganov 8ed913da0e metal : avoid using Metal's gpuAddress property (llama/16576)
* metal : avoid using Metal's gpuAddress property

* metal : fix rope kernels buffer check
2025-10-22 12:58:11 +03:00
Georgi Gerganov 23c19308d8 server : set no_context == true (#3482) 2025-10-20 15:39:48 +03:00
Georgi Gerganov 4979e04f5d release : v1.8.2 2025-10-15 10:29:42 +03:00
Georgi Gerganov 8ba3c13b0c talk-llama : sync llama.cpp 2025-10-15 09:29:17 +03:00
Georgi Gerganov ff2253b08a sync : ggml 2025-10-15 09:29:17 +03:00
SavicStefan 499f183e75 vulkan: Add ACC_TYPE_VEC2 implementation (llama/16203)
Signed-off-by: Stefan Savic <stefan.savic@huawei.com>
Co-authored-by: Stefan Savic <stefan.savic@huawei.com>
2025-10-15 09:29:17 +03:00
Aman Gupta 2eb9119754 CUDA + openCL: fix bug in accessing rms_norm->src while doing fusion (llama/16577) 2025-10-15 09:29:17 +03:00
Jeff Bolz 393fbbc80b vulkan: Support FA with K/V in F32 (llama/16543) 2025-10-15 09:29:17 +03:00
Jeff Bolz 73e200ee85 vulkan: Improve build time for MSVC (llama/16545)
Enable CMP0147 so custom build steps (invoking vulkan-shader-gen) are run in parallel.

Enable /MP so source files are compiled in parallel.
2025-10-15 09:29:17 +03:00
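The two settings named in the commit can be sketched in a minimal CMake fragment (project and policy usage are illustrative; this is not llama.cpp's actual build file):

```cmake
# CMP0147 (CMake >= 3.27, NEW): Visual Studio generators run custom build
# steps -- e.g. invoking vulkan-shader-gen -- in parallel.
cmake_minimum_required(VERSION 3.27)
cmake_policy(SET CMP0147 NEW)
project(example LANGUAGES CXX)

if(MSVC)
  add_compile_options(/MP)  # compile source files in parallel across processes
endif()
```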
Johannes Gäßler 1bdd746bc8 CUDA: enable FA for FP32 KV cache (llama/16546) 2025-10-15 09:29:17 +03:00
Aman Gupta f2075667fa CUDA: use fastdiv + ggml_cuda_mad for mmvf (llama/16557)
* CUDA: use fastdiv + ggml_cuda_mad for mmvf

* use bf16 directly + fix formatting

* Add exception for HIP code
2025-10-15 09:29:17 +03:00
Aman Gupta b4c5c6f71f CUDA: add fp kernel for larger batch size MoE (llama/16512)
* CUDA: kernel for larger batch sizes for MoE

* WIP

* WIP

* WIP

* WIP

* WIP

* WIP

* fixup

* tests

* Move mmq_ids_helper to mmid

* cleanup

* Remove redundant checks
2025-10-15 09:29:17 +03:00
Anav Prasad a12848e8e9 cuda : remove legacy copy-op pointer indirection code (llama/16485)
* remove legacy copy-op pointer indirection code

* further removal of copy-op indirection code

* renamed check_node_graph_compatibility_and_refresh_copy_ops function
2025-10-15 09:29:17 +03:00
Georgi Gerganov 25ac94a6cb metal : FA support F32 K and V and head size = 32 (llama/16531)
* metal : FA support F32 K and V and head size = 32

* graph : remove obsolete comment [no ci]
2025-10-15 09:29:17 +03:00
lhez 66b0fc2fb7 opencl: fix build targeting CL 2 (llama/16554) 2025-10-15 09:29:17 +03:00
Johannes Gäßler 77272fe0df CUDA: fix numerical issues in tile FA kernel (llama/16540) 2025-10-15 09:29:17 +03:00
Jie Fu (傅杰) 8a9c2ba6a1 ggml : fix build broken with -march=armv9-a on MacOS (llama/16520)
* ggml : fix build broken with -march=armv9-a on MacOS

Signed-off-by: Jie Fu <jiefu@tencent.com>

* Add #pragma message

Signed-off-by: Jie Fu <jiefu@tencent.com>

* Address review comment.

Signed-off-by: Jie Fu <jiefu@tencent.com>

* Update ggml/src/ggml-cpu/ggml-cpu.c

---------

Signed-off-by: Jie Fu <jiefu@tencent.com>
Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-10-15 09:29:17 +03:00
Chenguang Li 417ecdddc5 CANN: fix CPU memory leak in CANN backend (llama/16549)
This commit fixes a CPU-side memory leak issue in the CANN backend,
which occurred when intermediate aclTensorList objects were not properly
released after operator execution. The leak happened during repeated
invocations of CANN ops (e.g., FlashAttention), leading to increasing
host memory usage over time.

Proper resource cleanup (aclDestroyTensorList and related release logic)
has been added to ensure that all temporary tensors are correctly freed.
2025-10-15 09:29:17 +03:00
Sam/Samuel bfd88b8b6e metal: add support for opt_step_sgd (llama/16539)
* metal: add support for opt_step_sgd

* add newline to pass EditorConfig check
2025-10-15 09:29:17 +03:00
Georgi Gerganov ccac1b4772 ggml : fix scalar path for computing norm (llama/16558) 2025-10-15 09:29:17 +03:00
hipudding 53e21364a6 CANN: Update several operators to support FP16 data format (llama/16251)
Many Ascend operators internally use FP16 precision for computation.
If input data is in FP32, it must first be cast to FP16 before
computation, and then cast back to FP32 after computation, which
introduces unnecessary cast operations. Moreover, FP16 computation
requires significantly less workload compared to FP32, leading to
noticeable efficiency improvements.

In this change, `get_rows`, `rms_norm`, and `flash_attn_ext` are extended
to support multiple data types. Validation on the Qwen2 0.5b model shows
correct accuracy and about 10% performance gain in concurrent scenarios.

Co-authored-by: noemotiovon <757486878@qq.com>
2025-10-15 09:29:17 +03:00
Sam/Samuel 7f22fe5d8f metal : add opt_step_adamw and op_sum (llama/16529)
* scaffold to support opt step adamw on metal (not written so far)

* add opt-step-adamw kernel for metal

* pass op->src[4] as a separate buffer to the pipeline

* add bounds check to opt-step-adamw kernel

* complete scaffold for GGML_OP_SUM

* naive GGML_OP_SUM kernel

* remove unwanted comment

* change OP_SUM capability gate

* Add has_simdgroup_reduction to both ops to pass CI
2025-10-15 09:29:17 +03:00
Neo Zhang Jianyu be778c992f fix UT fault cases: count-equal, argsort, pad OPs (llama/16521)
* fix/refactor OP argsort, pad

* fix count-equal op

* update SYCL OP list

* fix format issue

---------

Co-authored-by: Zhang Jianyu <zhang.jianyu@outlook.com>
2025-10-15 09:29:17 +03:00
sirus20x6 70eb30f28e ggml : Fix FP16 ELU positive branch (llama/16519)
Co-authored-by: Aaron <shelhamer.aaron@gmail.com>
2025-10-15 09:29:17 +03:00
sirus20x6 53721d6309 ggml: Correct SVE implementation in ggml_vec_dot_f16_unroll (llama/16518)
The previous SVE implementation for `ggml_vec_dot_f16_unroll` contained a bug due to a copy-paste error. The wrong variable was used in an FMA instruction, leading to incorrect results. This commit corrects the variable usage and improves the clarity of the code by renaming variables to avoid confusion.

Co-authored-by: Aaron <shelhamer.aaron@gmail.com>
2025-10-15 09:29:17 +03:00
Johannes Gäßler b5fb9b9f58 CUDA: faster tile FA, add oob checks, more HSs (llama/16492) 2025-10-15 09:29:17 +03:00
Georgi Gerganov a91dd3be72 release : v1.8.1 2025-10-12 11:17:59 +03:00