How to set up a custom OpenAI-compatible Server in BoltAI
BoltAI supports custom OpenAI-compatible servers such as an OpenAI proxy server, LocalAI, or the LM Studio Local Inference Server. It should work with any platform that implements the OpenAI Chat Completions API.
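For context, "OpenAI-compatible" means the server accepts the same chat completions request format as OpenAI. Below is a minimal sketch of such a request; the URL (LM Studio's default) and the model name are placeholders, so substitute your own server's values:

```python
# Minimal sketch of an OpenAI-compatible chat completions request.
# The URL (LM Studio's default) and the model name are placeholders.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    headers={"Content-Type": "application/json"},
    json={
        "model": "local-model",  # some servers ignore this, others require a real model id
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```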
There are a few options to run a local OpenAI-compatible server.
1. Ollama (Recommended)
Ollama is a fantastic option. It's open source and easy to use. Its server used to be incompatible with the OpenAI API (you had to put LiteLLM in front of it), but Ollama now supports an OpenAI-compatible server natively.
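As a quick sanity check (outside BoltAI), you can point the official openai Python client at Ollama's OpenAI-compatible endpoint. This sketch assumes Ollama is running on its default port 11434 and that you've already pulled a model; the model name "llama3" is just an example:

```python
# Sketch: verify Ollama's OpenAI-compatible endpoint with the official openai client.
# Assumes Ollama is running on its default port (11434) and a model has been pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible base URL
    api_key="ollama",                      # the client requires a value; Ollama ignores it
)

completion = client.chat.completions.create(
    model="llama3",  # example model name; use whichever model you pulled
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(completion.choices[0].message.content)
```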
2. LM Studio
The easiest way to do this is to use LM Studio. Follow the guide by Ingrid Stevens to get started.
3. LocalAI
LocalAI is another option if you're comfortable with Docker and building it yourself. Follow the setup guide in the LocalAI documentation.
Once your local server is running, connect it to BoltAI:

1. Go to Settings > Models, click the (+) button and choose "OpenAI-compatible Server".
2. Fill in the form and click "Save Changes":
   1. Give it a friendly name.
   2. Enter the exact URL of the chat completions endpoint. For LM Studio, the default is http://localhost:1234/v1/chat/completions
   3. (Optional) Enter the model ID. This is sent with each chat request (the "model" param in the OpenAI API spec).
   4. Enter the context length of the model. Refer to the original model to find this value; in LM Studio, it's the "Context Length" setting in the right pane.
   5. Enable streaming if the server supports it (see the sketch after this list for a quick way to check).
   6. Click "Save Changes".
IMPORTANT: if you don't intend to use OpenAI, make sure to set this custom server as the default.
This feature is still in beta. Please reach out if you run into any issues.
To use AI Command with a custom server, make sure you set it as the default AI service.