Commit Graph

128 Commits

Author SHA1 Message Date
Georgi Gerganov 2bc630f197 talk-llama : sync llama.cpp 2026-03-16 13:10:15 +02:00
Georgi Gerganov 84f8db71d8 talk-llama : sync llama.cpp 2026-02-27 20:57:58 +02:00
Georgi Gerganov 364c77f4ca talk-llama : sync llama.cpp 2026-02-15 21:44:37 +02:00
Georgi Gerganov 4b23ff249e talk-llama : sync llama.cpp 2026-02-08 09:29:10 +02:00
Georgi Gerganov 953e503fd9 talk-llama : sync llama.cpp 2026-01-30 15:56:40 +02:00
Georgi Gerganov ecfcc65fbf talk-llama : sync llama.cpp 2026-01-14 09:11:59 +02:00
Peter A. a96310871a examples : fix executable example targets (#3600) 2026-01-13 08:08:18 +01:00
    * cmake:
        - added `whisper-` prefix to unprefixed targets: `quantize`, `lsp`, `vad-speech-segments`
        - added `install(TARGETS ${TARGET} RUNTIME)` where it was missing
    * .github/workflows/build.yml: quantize -> whisper-quantize
    Signed-off-by: Peter A. <ink.splatters@pm.me>
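The CMake pattern described in this commit (prefixing an example target and adding an install rule) can be sketched as follows. This is an illustrative fragment, not the project's actual CMakeLists.txt; the source file name and linked libraries are assumptions for the example:

```cmake
# Hypothetical sketch: rename an example target with the "whisper-" prefix
# and make sure the resulting executable is installed.
set(TARGET whisper-quantize)        # was: quantize
add_executable(${TARGET} quantize.cpp)
target_link_libraries(${TARGET} PRIVATE common whisper)

# Without a matching install() rule the binary is built but never installed.
install(TARGETS ${TARGET} RUNTIME)
```

The `RUNTIME` keyword restricts the rule to the executable artifact, installing it to the platform's default binary directory (`bin` unless overridden).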
Georgi Gerganov 7359ac94d5 talk-llama : sync llama.cpp 2025-12-31 17:52:09 +02:00
Georgi Gerganov 6c22e792cb talk-llama : sync llama.cpp 2025-12-18 08:20:56 +02:00
Georgi Gerganov 179d8b1c9c talk-llama : sync llama.cpp 2025-12-12 18:15:27 +02:00
Georgi Gerganov b12abefa9b sync : llama.cpp 2025-11-17 21:05:46 +02:00
Georgi Gerganov a1867e0dad sync : llama.cpp 2025-11-09 23:38:03 +02:00
Georgi Gerganov 322c2adb75 talk-llama : sync llama.cpp 2025-10-22 12:58:11 +03:00
Georgi Gerganov 8ba3c13b0c talk-llama : sync llama.cpp 2025-10-15 09:29:17 +03:00
Georgi Gerganov ff4c1a5a53 talk-llama : sync llama.cpp 2025-10-12 11:16:23 +03:00
Georgi Gerganov 0b3587acdd whisper : enable flash attention by default (#3441) 2025-09-30 15:47:20 +03:00
Georgi Gerganov fcf0181ee2 talk-llama : sync llama.cpp 2025-09-29 15:18:41 +03:00
Georgi Gerganov 36778bd8b8 talk-llama : sync llama.cpp 2025-09-20 13:58:28 +03:00
Georgi Gerganov fc45bb8625 talk-llama : sync llama.cpp 2025-08-18 20:30:45 +03:00
    ggml-ci
Georgi Gerganov d0a9d8c7f8 talk-llama : sync llama.cpp 2025-07-28 13:02:32 +03:00
Georgi Gerganov 6ddff4d96a talk-llama : sync llama.cpp 2025-07-12 19:23:56 +03:00
    ggml-ci
Georgi Gerganov 1f816de7da talk-llama : sync llama.cpp 2025-07-01 17:54:53 +03:00
Georgi Gerganov e6c10cf3d5 talk-llama : sync llama.cpp 2025-06-21 07:34:17 +03:00
    ggml-ci
Georgi Gerganov 2f60ebc3c2 talk-llama : sync llama.cpp 2025-06-18 12:40:34 +03:00
    ggml-ci
Georgi Gerganov db264d6220 talk-llama : sync llama.cpp 2025-06-10 12:40:33 +03:00
    ggml-ci
Georgi Gerganov 7fd6fa8097 talk-llama : sync llama.cpp 2025-06-01 15:14:44 +03:00
    ggml-ci
Daniel Bevenius 73a8c5fb94 whisper : remove whisper_load_backends function (#3196) 2025-05-29 08:03:17 +02:00
    * whisper : remove whisper_load_backends function

      This commit removes the `whisper_load_backends` function, which was used
      to load all GGML backends.

      The motivation for this change is to push the responsibility of loading
      backends to user applications, giving them more control over which
      backends to load and when. See the references below for more context.

      Resolves: https://github.com/ggml-org/whisper.cpp/issues/3182
      Refs: https://github.com/ggml-org/whisper.cpp/pull/3042#issuecomment-2801778733
      Refs: https://github.com/ggml-org/whisper.cpp/pull/3042#issuecomment-2801928990

    * ruby : add check that rwc is not NULL

      This commit adds a check to ensure that the `rwc` pointer is not NULL
      before attempting to mark its members in the garbage collector.

      The motivation is to see whether this fixes the CI build, since the
      issue is not reproducible locally.

      Refs: https://github.com/ggml-org/whisper.cpp/actions/runs/15299612277/job/43036694928?pr=3196
Georgi Gerganov 26eb48cb08 talk-llama : sync llama.cpp 2025-05-27 18:03:00 +03:00
    ggml-ci
matteng1 ea9f206f18 talk-llama : fix for Swedish umlauts + expose model inference settings in talk-llama.cpp (#3187) 2025-05-26 07:57:39 +02:00
    Quick fix for Swedish umlauts not being handled when removing characters.

    * Update talk-llama.cpp

      Expose the model inference settings to the user instead of hard-coding
      them. The defaults are unchanged.

    * Update examples/talk-llama/talk-llama.cpp

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Georgi Gerganov 6b6cf19c65 talk-llama : sync llama.cpp 2025-05-19 14:58:39 +03:00
    ggml-ci
Georgi Gerganov f890560575 talk-llama : sync llama.cpp 2025-05-13 13:59:21 +03:00
    ggml-ci
Daniel Bevenius 09846f4e12 whisper : remove MSVC warning pragmas (#3090) 2025-05-05 13:09:35 +02:00
    * ggml : remove MSVC warning pragmas

      This commit removes the MSVC-specific pragmas, as these are now handled
      in CMakeLists.txt.

    * whisper : remove MSVC warning pragmas

      This commit removes the MSVC-specific pragmas. These are now handled in
      the CMakeLists.txt file.
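The replacement this commit relies on, disabling MSVC warnings at the build-system level instead of via `#pragma warning(disable: ...)` in each source file, can be sketched in CMake. The specific warning numbers below are illustrative assumptions, not necessarily the ones the project disables:

```cmake
# Hypothetical sketch: suppress selected MSVC warnings once, per target,
# instead of scattering #pragma warning(disable: ...) through the sources.
if (MSVC)
    target_compile_options(whisper PRIVATE
        /wd4244   # conversion, possible loss of data (illustrative)
        /wd4267   # size_t -> smaller integer type (illustrative)
    )
endif()
```

Centralizing the flags keeps the sources portable: non-MSVC compilers never see the MSVC-specific directives, and the suppressed-warning list lives in one place.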
Georgi Gerganov 0778b6ff5f talk-llama : sync llama.cpp 2025-05-01 13:29:02 +03:00
    ggml-ci
Georgi Gerganov f3c42399a3 talk-llama : sync llama.cpp (#3084) 2025-04-28 16:40:23 +03:00
    ggml-ci
Georgi Gerganov c64f3e8ada common : separate whisper sources (#2846) 2025-02-27 12:50:32 +02:00
    * common : separate whisper sources
    * examples : add chrono
    * examples : add more headers
Georgi Gerganov 3f91832352 talk-llama : sync llama.cpp 2025-02-03 22:42:26 +02:00
Georgi Gerganov 99b011a9f5 talk-llama : sync llama.cpp 2025-01-14 10:38:01 +02:00
Georgi Gerganov 35d0e02c72 talk-llama : sync llama.cpp (#2709) 2025-01-13 08:55:48 +02:00
Georgi Gerganov 2e59dced12 whisper : rename binaries + fix install (#2648) 2024-12-21 09:43:49 +02:00
    * whisper : rename binaries + fix install
    * cont : try to fix ci
    * cont : fix emscripten builds
Georgi Gerganov 61edb117a0 talk-llama : sync llama.cpp 2024-12-18 12:52:16 +02:00
Georgi Gerganov f2c680f893 talk-llama : sync llama.cpp 2024-12-08 20:14:35 +02:00
Georgi Gerganov 06e059b8f8 talk-llama : sync llama.cpp 2024-11-20 21:00:08 +02:00
Georgi Gerganov 24d706774d talk-llama : sync llama.cpp 2024-11-15 15:21:04 +02:00
Georgi Gerganov c65d0fd3c8 talk-llama : sync llama.cpp 2024-11-01 10:19:05 +02:00
Georgi Gerganov 941912467d whisper : adapt to latest ggml (skip) (#0) 2024-10-05 15:23:51 +03:00
Georgi Gerganov ccc2547210 talk-llama : sync llama.cpp 2024-10-03 12:22:17 +03:00
Georgi Gerganov fe18c29ab8 talk-llama : sync llama.cpp 2024-09-24 19:45:08 +03:00
Georgi Gerganov da9809f243 talk-llama : sync llama.cpp 2024-08-28 13:22:20 +03:00
Georgi Gerganov 22058f2dbc talk-llama : sync llama.cpp 2024-08-08 22:48:46 +03:00
Georgi Gerganov dbf9c15e30 talk-llama : sync llama.cpp 2024-07-08 14:53:55 +03:00