* whisper : check state->ctx_metal not null
* whisper : add whisper_context_params { use_gpu }
* whisper : new API with params & deprecate old API
* examples : use no-gpu param and whisper_init_from_file_with_params
* whisper.objc : enable metal & disable on simulator
* whisper.swiftui, metal : enable metal & support load default.metallib
* whisper.android : use new API
* bindings : use new API
* addon.node : fix build & test
* bindings : update java bindings
* bindings : add missing whisper_context_default_params_by_ref WHISPER_API for java
* metal : use SWIFTPM_MODULE_BUNDLE for GGML_SWIFT and reuse library load
* metal : move bundle var into block
* metal : use SWIFT_PACKAGE instead of GGML_SWIFT
* style : minor updates
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
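The core of this change is the new params-based init API: callers build a `whisper_context_params` (with the new `use_gpu` field) and pass it to `whisper_init_from_file_with_params`, while the old `whisper_init_from_file` is deprecated. A minimal sketch of the migration pattern is below; the `whisper_*` declarations are stubs that mirror the public header so the snippet is self-contained (in a real project, `#include "whisper.h"` instead), and the default of `use_gpu = true` is an assumption here.

```c
// Sketch of migrating to the params-based init API.
// NOTE: these whisper_* declarations are local stubs mirroring whisper.h;
// link against whisper.cpp and include its header for the real thing.
#include <stdbool.h>
#include <stddef.h>

struct whisper_context;                       /* opaque in the real header */

struct whisper_context_params {
    bool use_gpu;                             /* new: toggle Metal/GPU offload */
};

/* Stub of the new default-params helper (assumed to enable the GPU). */
static struct whisper_context_params whisper_context_default_params(void) {
    struct whisper_context_params p = { .use_gpu = true };
    return p;
}

/* Typical migration: start from the defaults, override what you need,
 * then pass the struct to whisper_init_from_file_with_params().
 * This mirrors the examples' no-gpu flag mentioned above. */
static struct whisper_context_params make_cpu_only_params(void) {
    struct whisper_context_params cparams = whisper_context_default_params();
    cparams.use_gpu = false;                  /* force CPU-only inference */
    return cparams;
    /* then: whisper_init_from_file_with_params("ggml-base.en.bin", cparams); */
}
```

The deprecated `whisper_init_from_file(path)` remains equivalent to calling the `_with_params` variant with the defaults, so existing callers keep their behavior until they opt in to the new struct.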
README.md
A sample SwiftUI app using whisper.cpp to do voice-to-text transcriptions. See also: whisper.objc.
Usage:
- Select a model from the whisper.cpp repository.[^1]
- Add the model to `whisper.swiftui.demo/Resources/models` via Xcode.
- Select a sample audio file (for example, `jfk.wav`).
- Add the sample audio file to `whisper.swiftui.demo/Resources/samples` via Xcode.
- Select the "Release"[^2] build configuration under "Run", then deploy and run to your device.
Note: mind the folder paths: `whisper.swiftui.demo/Resources/models` is where model resources belong, while `whisper.swiftui.demo/Models` contains the app's source code.
[^1]: I recommend the `tiny`, `base`, or `small` models for running on an iOS device.

[^2]: The "Release" build can significantly speed up transcription. This project also adds `-O3 -DNDEBUG` to `Other C Flags`, but setting flags at the app-project level is not ideal in a real-world project (they apply to all C/C++ files); consider splitting the xcodeproj into a workspace in your own project.
