Commit Graph

3922 Commits

sirus20x6 773041e336 ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_vec_set_f32 for faster fills (llama/16522)
* Leverage the existing GGML_F32_VEC helpers to broadcast the fill value across SIMD registers and store in vector-sized chunks, while retaining the scalar tail for leftover elements and non-SIMD builds.

* Vectorize additional f32 helper loops

* Normalize f32 helper tails for ggml vec ops

---------

Co-authored-by: Aaron <shelhamer.aaron@gmail.com>
2025-11-09 23:38:03 +02:00
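The fill pattern described above can be sketched in plain C. This is a minimal illustration, not the actual `GGML_F32_VEC`/`GGML_F32_STEP` macros (which expand to architecture-specific SIMD intrinsics); `VEC_LEN` here is a hypothetical stand-in for the vector width:

```c
#include <assert.h>
#include <stddef.h>

// Simplified sketch of a vectorized fill: write the bulk of the array in
// vector-sized chunks, then fall back to a scalar tail for the leftover
// elements (which is also the whole loop in a non-SIMD build).
#define VEC_LEN 8  // stand-in for the target's SIMD width

static void vec_set_f32(size_t n, float *x, float v) {
    size_t i = 0;
    // bulk: in a real SIMD build, each outer iteration is one vector store
    // of the broadcast fill value
    for (; i + VEC_LEN <= n; i += VEC_LEN) {
        for (size_t j = 0; j < VEC_LEN; ++j) {
            x[i + j] = v;
        }
    }
    // scalar tail for the remaining n % VEC_LEN elements
    for (; i < n; ++i) {
        x[i] = v;
    }
}
```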
Aman Gupta 431aaf56f0 CUDA: fix bug in topk-moe softmax (llama/16711) 2025-11-09 23:38:03 +02:00
Aman Gupta ba41a6ca6a CUDA: topk-moe: add optional parameter for gpt-oss (llama/16649) 2025-11-09 23:38:03 +02:00
Johannes Gäßler 99cea274e5 CUDA: better error for FA kernel with 0 occupancy (llama/16643) 2025-11-09 23:38:03 +02:00
Oleg Orlov 999a7e0cbf
whisper : enable IGPU (#3492)
Co-authored-by: Oleg Orlov <vk.orelsokolov@yandex.by>
2025-11-01 13:38:28 +01:00
KITAITI Makoto c62adfbd1e
ruby : tiny bug fix (#3490)
* Remove build-xcframework.sh from package

* Remove unused variable

* Bump version to 1.3.5

* Don't use variable before declaration
2025-10-29 03:50:44 +09:00
Orel-A f16c12f3f5
wasm : fix Hebrew ID (#3487)
Fixes `whisper_lang_id: unknown language 'iw'` ('iw' is the legacy ISO 639-1 code for Hebrew)
2025-10-27 08:49:32 +02:00
Georgi Gerganov 322c2adb75 talk-llama : sync llama.cpp 2025-10-22 12:58:11 +03:00
Georgi Gerganov 35ea5ced60 sync : ggml 2025-10-22 12:58:11 +03:00
Aman Gupta 9a8cfb040c ggml: add ggml_can_fuse_subgraph (llama/16662)
* ggml: add ggml_can_fuse_subgraph

* ggml-cuda: use ggml_can_fuse_subgraph for topk-moe

* format

* 1. remove inputs from signature as they are transient nodes
2. add check for views: view_src should be part of the subgraph

* - combine check into one loop
- check all view_src parents
- other minor review comments

* remove redundant if test

* - rename and other minor review comments

* add assert about count < 32
2025-10-22 12:58:11 +03:00
lhez 5c4c477d00 opencl: fix warnings and clean up profiling (llama/16688)
* opencl: remove unused headers, fix warnings

* opencl: clean up profiling, only keep kernel time
2025-10-22 12:58:11 +03:00
Jeff Bolz 7f16c71068 vulkan: Handle FA with all -inf mask values (llama/16447) 2025-10-22 12:58:11 +03:00
YehuditE 55cf00c20a sycl : add PAD_REFLECT_D1 operator support (llama/16145)
* sycl: add PAD_REFLECT_D1 operator support

* docs(ops): regenerate docs/ops.md

* remove trailing whitespaces

* style: fix editorconfig issues — trim trailing spaces and normalize EOLs

* fix: move PAD_REFLECT_1D case outside of fall-through block
2025-10-22 12:58:11 +03:00
Diego Devesa 70b4d22f01 ggml-alloc : fix leak when reusing a tensor with a larger size (llama/16679) 2025-10-22 12:58:11 +03:00
safranowith bb76672081 SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators (llama/16613)
* SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

Clean up unrelated changes from previous commit

* Chore: remove empty lines and fix indentation

* Clean up: remove leftover blank lines and fix spacing

* chore: fix trailing whitespace and ensure final newline

* Cleanup: remove redundant declarations already defined in header

* Sync docs/ops.md with updated backend operation support

* docs: update ops.md after rebase

* docs: update ops.md - Vulkan supports SSM_CONV and SSM_SCAN
2025-10-22 12:58:11 +03:00
Aaron Teo 82bdf31267 ci : fix binaries release failure for s390x (binaries may not work yet) (llama/16664)
* devops: initial patch

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: forgot the z15 suffix

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: attempt at impl GGML_CPU_ALL_VARIANTS for s390x

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* devops: rm baseline version

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-10-22 12:58:11 +03:00
Johannes Gäßler 72d98011db HIP: fix GPU_TARGETS (llama/16642) 2025-10-22 12:58:11 +03:00
Jeff Bolz 414901a42c vulkan: Implement topk_moe fused shader, ported from CUDA (llama/16641)
This is similar to the CUDA shader from #16130, but doesn't use shared memory
and handles different subgroup sizes.
2025-10-22 12:58:11 +03:00
Aman Gupta 08345f15ec CUDA: use registers instead of smem in topk-moe (llama/16647)
Uses the technique used in the vulkan PR #16641. Neat trick!
2025-10-22 12:58:11 +03:00
Shawn Gu 8ffdf4bd96 opencl: transposed gemm/gemv moe kernel with mxfp4,f32 (llama/16602)
* opencl: transposed gemm/gemv moe kernel with mxfp4,f32

* add restore kernel for moe transpose

* fix trailing whitespaces

* resolve compilation warnings
2025-10-22 12:58:11 +03:00
Radoslav Gerganov 6aa18cccd8 rpc : report actual free memory (llama/16616)
* rpc : report actual free memory

Start reporting the free memory on every device instead of using
fixed values. Now llama-cli users can get a nice memory breakdown
when using RPC devices.

* drop --mem in rpc-server
2025-10-22 12:58:11 +03:00
Giuseppe Scrivano d22008b631 vulkan: Add State Space Model (SSM) Operations Support (llama/16463)
* vulkan: implement SSM scan operation

Add State Space Model scan operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

* vulkan: implement SSM conv operation

Add State Space Model conv operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

---------

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2025-10-22 12:58:11 +03:00
muggle-stack 328263f8fd ggml : fix SpaceMit IME array out-of-bounds in task assignment (llama/16629)
Fix incorrect task-to-batch index calculation in the quantization phase.

The bug caused out-of-bounds access to qnbitgemm_args array when
compute_idx exceeded per_gemm_block_count_m, leading to invalid
pointer dereferences and SIGBUS errors.

Correctly map tasks to batches by dividing compute_idx by
per_gemm_block_count_m instead of block_size_m.

Example:
  batch_feature=1, gemm_m=30, block_size_m=4
  per_gemm_block_count_m = 8, task_count = 8

  Old: gemm_idx = 4/4 = 1 (out of bounds)   New: gemm_idx = 4/8 = 0 (correct)

Tested on SpaceMit K1 RISC-V64 with qwen2.5:0.5b model.

Co-authored-by: muggle <mingjun.rong@spacemit.com>
2025-10-22 12:58:11 +03:00
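The corrected task-to-batch mapping above is plain integer arithmetic. A minimal sketch (the function names are illustrative, not the actual symbols in the commit):

```c
#include <assert.h>

// With gemm_m = 30 and block_size_m = 4, each GEMM is split into
// per_gemm_block_count_m = ceil(30 / 4) = 8 row blocks, so task_count = 8
// when batch_feature = 1. Mapping a task index back to its batch entry
// must divide by the number of blocks per GEMM, not by the block size.

static int gemm_idx_buggy(int compute_idx, int block_size_m) {
    // old: 4 / 4 = 1, indexing past the single valid batch entry (index 0)
    return compute_idx / block_size_m;
}

static int gemm_idx_fixed(int compute_idx, int per_gemm_block_count_m) {
    // new: 4 / 8 = 0, the correct batch entry
    return compute_idx / per_gemm_block_count_m;
}
```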
Jeff Bolz 4a384826a8 vulkan: fix debug build (add_rms_len/data not found) (llama/16624) 2025-10-22 12:58:11 +03:00
Ilia Ilmer 0ae492641c metal : add `CONV_TRANSPOSE_2D` (llama/16542)
* initial: headers and metal-device.cpp updates

* adding conv_transpose_2d

* fix type

* fix type: int32->int64

* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* add checks for src[0] and src[1]; add type checks

* Update ggml-metal.metal

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* add more tests, add optimization to threading

* add dynamic memory allocation in metal

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-22 12:58:11 +03:00
GittyBurstein 82332cea27 SYCL SET operator optimized for F32 tensors (llama/16350)
* SYCL/SET: implement operator + wire-up; docs/ops updates; element_wise & ggml-sycl changes

* sycl(SET): re-apply post-rebase; revert manual docs/ops.md; style cleanups

* move SET op to standalone file, GPU-only implementation

* Update SYCL SET operator for F32

* ci: fix editorconfig issues (LF endings, trailing spaces, final newline)

* fixed ggml-sycl.cpp

---------

Co-authored-by: Gitty Burstein <gitty@example.com>
2025-10-22 12:58:11 +03:00
GittyBurstein 7bb53032b3 sycl : add ARANGE operator (llama/16362)
* SYCL: update element-wise ops and presets

* clean arange

* Re-trigger CI

---------

Co-authored-by: Gitty Burstein <gitty@example.com>
2025-10-22 12:58:11 +03:00
Chenguang Li fe965613c0 CANN: format code using .clang-format (llama/15863)
This commit applies .clang-format rules to all source files under the
ggml-cann directory to ensure consistent coding style and readability.
The .clang-format option `SortIncludes: false` has been set to disable
automatic reordering of include directives.
No functional changes are introduced.

Co-authored-by: hipudding <huafengchun@gmail.com>
2025-10-22 12:58:11 +03:00
takuya kodama 3c136d699a ggml-cpu: replace putenv with setenv for const-correctness (llama/16573)
## Why it failed

When compiling with strict compiler flags (-Wwrite-strings -Werror=discarded-qualifiers),
the build fails with the following error:

```
cmake \
  -S . \
  -B ../llama.cpp.build \
  --preset=x64-linux-gcc-debug \
  -DCMAKE_INSTALL_PREFIX=/tmp/local \
  -DCMAKE_C_FLAGS="-Wwrite-strings -Werror=discarded-qualifiers" && \
cmake --build ../llama.cpp.build/
...
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c: In function ‘ggml_cpu_init’:
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:3572:24: error: passing argument 1 of ‘putenv’ discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
 3572 |                 putenv("KMP_BLOCKTIME=200"); // 200ms
      |                        ^~~~~~~~~~~~~~~~~~~
In file included from /home/otegami/work/cpp/llama.cpp/ggml/src/./ggml-impl.h:10,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/traits.h:3,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:6:
/usr/include/stdlib.h:786:26: note: expected ‘char *’ but argument is of type ‘const char *’
  786 | extern int putenv (char *__string) __THROW __nonnull ((1));
      |                    ~~~~~~^~~~~~~~
cc1: some warnings being treated as errors
ninja: build stopped: subcommand failed.
```

The issue is that putenv() expects a non-const char * but receives a string literal (const char *).

## How to fix

This PR replaces putenv("KMP_BLOCKTIME=200") with setenv("KMP_BLOCKTIME", "200", 0).

Benefits of setenv():
- Accepts const char * parameters (no qualifier warnings)
- Makes copies of the strings (safer memory handling)
- The third parameter (0) ensures we don't overwrite if already set
2025-10-22 12:58:11 +03:00
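The replacement is a one-line change; a self-contained sketch of the pattern (the wrapper function name is illustrative):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

// putenv() takes a non-const char * and keeps the caller's pointer, so
// passing a string literal both triggers -Wdiscarded-qualifiers and ties
// the environment to the literal's storage. setenv() accepts const char *,
// copies both strings, and its third argument controls overwriting:
// 0 = keep an existing value, nonzero = replace it.
static void init_kmp_blocktime(void) {
    setenv("KMP_BLOCKTIME", "200", 0); // only set if not already set
}
```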
yael-works f7b5ecf195 SYCL: Add GGML_OP_MEAN operator support (llama/16009)
* SYCL: Add GGML_OP_MEAN operator support

* SYCL: Fix formatting for GGML_OP_MEAN case

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-10-22 12:58:11 +03:00
safranowith 757d51d21d cpu : add FLOOR, CEIL, ROUND and TRUNC unary operators (llama/16083)
* CPU: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

- Added the operators to unary op enum
- Implemented API functions
- Implemented forward and unary-op logic in CPU backend
- Updated ggml_get_n_tasks
- Updated operators names array and static_assert
- Updated docs and enabled automatic tests

* docs: add documentation for ggml_trunc and ggml_trunc_inplace in ggml.h

* chore: remove trailing whitespace from ggml.h

* Remove unresolved merge markers

* Apply review suggestions: cleanup formatting, enum order and leftover artifacts

* Regenerate ops.md using create_ops_docs.py
2025-10-22 12:58:11 +03:00
lhez bef9f74553 opencl: add q8_0 mm support (llama/16469)
* opencl: add mm_q8_0_f32

* opencl: fix data loading for incomplete tile

* opencl: use q8_0 mm for larger matrix

* opencl: add some tests to cover the path
2025-10-22 12:58:11 +03:00
lhez 16dab3d122 opencl: fix FA for f32 (llama/16584) 2025-10-22 12:58:11 +03:00
Sam/Samuel d8a146b0f9 metal: optimise `GGML_OP_SUM` (llama/16559)
* optimise GGML_OP_SUM

* add non-contiguous tests by permuting the input

* change tests to require full contiguity of OP_SUM

* cuda : add check GGML_OP_SUM

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-22 12:58:11 +03:00
Julius Tischbein 0c9d49927c CUDA: Changing the CUDA scheduling strategy to spin (llama/16585)
* CUDA set scheduling strategy to spinning for cc121

* Using prop.major and prop.minor, include HIP and MUSA

* Exclude HIP and MUSA

* Remove trailing whitespace

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Remove empty line

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-10-22 12:58:11 +03:00
Georgi Gerganov 8ed913da0e metal : avoid using Metal's gpuAddress property (llama/16576)
* metal : avoid using Metal's gpuAddress property

* metal : fix rope kernels buffer check
2025-10-22 12:58:11 +03:00
Georgi Gerganov 23c19308d8
server : set no_context == true (#3482) 2025-10-20 15:39:48 +03:00
Georgi Gerganov 4979e04f5d
release : v1.8.2 2025-10-15 10:29:42 +03:00
Georgi Gerganov 8ba3c13b0c talk-llama : sync llama.cpp 2025-10-15 09:29:17 +03:00
Georgi Gerganov ff2253b08a sync : ggml 2025-10-15 09:29:17 +03:00
SavicStefan 499f183e75 vulkan: Add ACC_TYPE_VEC2 implementation (llama/16203)
Signed-off-by: Stefan Savic <stefan.savic@huawei.com>
Co-authored-by: Stefan Savic <stefan.savic@huawei.com>
2025-10-15 09:29:17 +03:00
Aman Gupta 2eb9119754 CUDA + openCL: fix bug in accessing rms_norm->src while doing fusion (llama/16577) 2025-10-15 09:29:17 +03:00
Jeff Bolz 393fbbc80b vulkan: Support FA with K/V in F32 (llama/16543) 2025-10-15 09:29:17 +03:00
Jeff Bolz 73e200ee85 vulkan: Improve build time for MSVC (llama/16545)
Enable CMP0147 so custom build steps (invoking vulkan-shader-gen) are run in parallel.

Enable /MP so source files are compiled in parallel.
2025-10-15 09:29:17 +03:00
Johannes Gäßler 1bdd746bc8 CUDA: enable FA for FP32 KV cache (llama/16546) 2025-10-15 09:29:17 +03:00
Aman Gupta f2075667fa CUDA: use fastdiv + ggml_cuda_mad for mmvf (llama/16557)
* CUDA: use fastdiv + ggml_cuda_mad for mmvf

* use bf16 directly + fix formatting

* Add exception for HIP code
2025-10-15 09:29:17 +03:00
Aman Gupta b4c5c6f71f CUDA: add fp kernel for larger batch size MoE (llama/16512)
* CUDA: kernel for larger batch sizes for MoE

* WIP

* WIP

* WIP

* WIP

* WIP

* WIP

* fixup

* tests

* Move mmq_ids_helper to mmid

* cleanup

* Remove redundant checks
2025-10-15 09:29:17 +03:00
Anav Prasad a12848e8e9 cuda : remove legacy copy-op pointer indirection code (llama/16485)
* remove legacy copy-op pointer indirection code

* further removal of copy-op indirection code

* renamed check_node_graph_compatibility_and_refresh_copy_ops function
2025-10-15 09:29:17 +03:00
Georgi Gerganov 25ac94a6cb metal : FA support F32 K and V and head size = 32 (llama/16531)
* metal : FA support F32 K and V and head size = 32

* graph : remove obsolete comment [no ci]
2025-10-15 09:29:17 +03:00
lhez 66b0fc2fb7 opencl: fix build targeting CL 2 (llama/16554) 2025-10-15 09:29:17 +03:00