How to use a local Whisper instance in BoltAI
Requires v1.27.2 or later
The Whisper model is a speech-to-text model from OpenAI that you can use to transcribe audio files. In BoltAI, you can use the Whisper model via the OpenAI API, Groq, or a custom server.
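Because BoltAI can talk to any server that implements OpenAI's transcription API, a "local Whisper instance" behaves like just another OpenAI endpoint. Here is a minimal sketch using the official openai Python SDK; the base URL, API key, file name, and model id are all assumptions you should adapt to your own server:

```python
# Minimal sketch: transcribe an audio file through an OpenAI-compatible
# local Whisper server using the official openai SDK.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed address of your local server
    api_key="not-needed-locally",         # most local servers ignore the key
)

# Send the audio file to the transcription endpoint.
with open("meeting.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",  # model id depends on what your server exposes
        file=audio,
    )

print(transcript.text)
```

Point the same code at api.openai.com instead of localhost and it transcribes via OpenAI; that interchangeability is exactly what BoltAI relies on here.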
Follow this step-by-step guide to set up and use a local Whisper instance in BoltAI.
First, make sure your local Whisper instance is up and running; setting one up is outside the scope of this guide, but projects such as Speaches (formerly faster-whisper-server) and LocalAI can serve Whisper models behind an OpenAI-compatible transcription endpoint.
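Before wiring it into BoltAI, it's worth confirming that the server actually answers transcription requests. Here is a quick smoke test with Python's requests library; the URL and the sample.wav file are assumptions, so adjust them to your setup:

```python
# Smoke test (sketch): POST an audio file to the local transcription endpoint
# and print the result. Adjust the URL and file name to match your server.
import requests

url = "http://localhost:8000/v1/audio/transcriptions"  # full endpoint URL

with open("sample.wav", "rb") as audio:
    response = requests.post(
        url,
        files={"file": ("sample.wav", audio, "audio/wav")},
        data={"model": "whisper-1"},  # some servers require this field, others ignore it
    )

response.raise_for_status()
print(response.json()["text"])  # OpenAI-compatible servers return {"text": "..."}
```

If this prints a transcript, the URL in the snippet is exactly the value you will paste into BoltAI in the steps below.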
1. Go to Settings > Models. Click the plus (+) button and select "OpenAI-compatible Server".
2. Fill in the form, making sure to enter the full URL of the API endpoint (for example, http://localhost:8000/v1/audio/transcriptions, not just the host).
3. Click "Save (Skip Verification)".
4. Go to Settings > Advanced > Voice Settings.
5. In the "Whisper Settings" section, select the service you just created ("Local Whisper" in this example).
Now both the main chat UI and the Inline Whisper feature will use this local Whisper instance instead of OpenAI.
It's pretty simple, isn't it? If you have any questions, feel free to send me an email. I'm happy to help.