Changelog
BoltAI v2 is here
Hi everyone, BoltAI v2 is finally here ✨
First of all, thank you for your continued support, and sorry for the lack of communication from my end. I've been putting 100% of my attention into v2 development over the last few months and wasn't able to share much of the progress.
With that said, I'm very happy with the result. After months of effort, I'm excited to share that the BoltAI v2 beta is ready for public release. It's a big improvement over v1, which was already pretty good, I think.
Let's get to it.
Download Link for BoltAI v2
Here is the download link if you want to jump straight to the product: https://updates.boltai.com/dmg/BoltAI-2.0.5.dmg
Note that it's a completely new app, with a different app bundle ID. You can import data from v1 to v2 to speed up the configuration process (keyboard shortcut: Command + Shift + I).
Since v2 doesn't have full feature parity with v1 yet, you may want to use both v1 and v2 in parallel.
What's new
Here are some of the notable improvements:
Cloud Sync Ready
You can choose to sync all your data with BoltAI Cloud for easy access. BoltAI Cloud is hosted on Supabase with strict row-level security (RLS) enforcement. Your data is encrypted at rest and in transit.
BoltAI supports end-to-end encryption for your API keys. You can set your own passphrase to encrypt your API keys. This way, nobody can see your API keys, not even me as the developer.
Cloud Sync is completely optional. You can skip it and use BoltAI 100% locally.
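For context on how passphrase-based end-to-end encryption like this typically works: a symmetric key is derived from your passphrase on-device, so the server only ever stores ciphertext it cannot decrypt. A minimal sketch of the key-derivation step in Python (illustrative only; BoltAI's actual implementation is not public and may differ):

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte symmetric key from a passphrase with PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"), salt, iterations)

# A random salt is stored alongside the ciphertext; without the passphrase,
# neither the derived key nor the encrypted API keys can be recovered.
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)

assert len(key) == 32
# Same passphrase + same salt -> same key; a different salt -> a different key.
assert derive_key("correct horse battery staple", salt) == key
assert derive_key("correct horse battery staple", os.urandom(16)) != key
```

The API keys themselves would then be encrypted with this derived key before syncing, which is why not even the developer can read them.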
Faster UI, better UX
I rebuilt v2 from the ground up with a stronger foundation. It targets macOS 13+ and only uses modern Apple APIs, making the whole app snappier.
You can quickly navigate between chats in v2 with standard keyboard shortcuts:
- ⌘1 to ⌘9 to select the first 9 chats in the sidebar. Demo: https://share.cleanshot.com/6KFJx21j
- ⌘[ and ⌘] to go back / go forward.
- ⌥[ and ⌥] to select the previous / next item in the sidebar. Demo: https://share.cleanshot.com/y2NBxcHb
I rebuilt the chat renderer completely. It's faster and looks more modern now. I tested it with a very large chat (43k+ messages), and performance was unaffected. Give it a try and let me know what you think.
Demo: https://share.cleanshot.com/4kLfZSwC
The model selection popover also got a major redesign. Now you can quickly switch to another service and model with just a keyboard shortcut.
Demo: https://share.cleanshot.com/4TkhV2rR
New "Instant Chat Bar"
Following your feedback, I reworked v1's "Instant Command" and made it even better: cleaner design, native look and feel, and still just as powerful. Press Control + Space to trigger it.
In Settings, you can enable more advanced features such as shake to activate, auto-attach clipboard content, and so on.
Demo:
- Instant Chat https://share.cleanshot.com/ZYc76RH0
- Shake to Chat https://share.cleanshot.com/kNRXWqwX
New "Instant Dictation"
I reworked the Inline Whisper feature from v1, and now you can use it with local AI models too! In v2, I decided to use the Parakeet model: it's fast, consumes less RAM, and works 100% offline.
Like in v1, you can choose to copy the output to the clipboard or paste it directly into the app.
Demo: https://share.cleanshot.com/W5rb4Gyb
Tighter AI provider integration
BoltAI v2 continues to support a wide range of AI services, now including subscriptions such as Claude Code or GitHub Copilot.
In v2, all provider responses are unified, so you get the same experience regardless of provider. In v1, there were a lot of inconsistencies between providers.
I integrated deeper with each AI service provider in v2 to take full advantage of model capabilities.
For example, when using the OpenAI provider, you can also use native tools such as Web Search, Image Generation, Code Interpreter, and more. Go to Settings > Plugins to configure native provider tools.
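For context, OpenAI's native tools are enabled by listing them in the request body rather than defining your own function schemas. A hedged sketch of what such a Responses API request looks like (the tool type names follow OpenAI's public docs; BoltAI builds the equivalent request for you when these plugins are enabled):

```python
import json

# Illustrative Responses API request body with native tools enabled.
# Tool type names are from OpenAI's public documentation; treat the exact
# values as assumptions that may change as the API evolves.
request_body = {
    "model": "gpt-4.1",
    "input": "Summarize today's top macOS developer news.",
    "tools": [
        {"type": "web_search_preview"},                               # native Web Search
        {"type": "image_generation"},                                 # native Image Generation
        {"type": "code_interpreter", "container": {"type": "auto"}},  # native Code Interpreter
    ],
}

print(json.dumps(request_body, indent=2))
```

The point is that the provider, not the client, runs these tools, which is why they only appear when you use the matching native integration.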
More Secure
BoltAI v1 supported some great out-of-app features such as AI Command, AI Inline, and File Sync. Some of these features required turning off App Sandbox, Apple's default security model. In v2, I reworked this and made the app sandboxed by default. This way, for future features like AI agents, you can be confident that the app won't touch your files without asking first.
And a lot more…
I'm curious to hear your thoughts on the new app.
Future Roadmap
BoltAI v2 is still in active development. It has ~80% feature parity with v1 now.
I will continue to port features from v1 to v2 in the coming months. Please let me know which features from v1 you want in v2 first; I'll prioritize them.
Top-priority features I'm porting to v2 right now are MCP support, AI Command, and other features for teams. Stay tuned.
Please bookmark and share your feedback on this board: https://feedback.boltai.com/?b=646b16f66b8d963816ca5dc9
FAQs:
Is my v1 license valid for v2?
Yes. BoltAI v2 continues to follow the same licensing model as v1: a Perpetual License with 1 year of updates. Think of it as v1.99.
However, in v2 I'll introduce an optional subscription plan for Cloud Sync, as it incurs real recurring expenses. Cloud Sync remains completely optional: you can turn sync off and keep using the app locally, like v1.
In the first beta, there is no license validation yet. Use it for free while in beta!
Will v1 keep receiving updates / bug-fixes?
I'll continue to fix critical bugs in v1 until all features are ported to v2. New features and major updates will focus on v2.
How do I migrate my data?
To import data from v1, open the BoltAI v2 app, press Command + Shift + I, and follow the instructions.
Note that in the first release, AI Commands & AI Assistants are not supported yet. You can import: AI Services, Folders, Chats, Prompts and Memories.
How do I report bugs?
Please report bugs for v2 using this board: https://feedback.boltai.com/?b=646b16f66b8d963816ca5dc9
And that's it, for now
See you in the next release 👋
v1.36.5 (build 169)
- Added support for OpenAI's latest model gpt-5-pro
- Support gpt-5 without Org Verification (turn streaming off)
- Supports Responses API for o3-pro, o1-pro, and other deep research models
- Azure OpenAI Service: added support for GPT-5 and other models
- Switched to max_completion_tokens instead of max_tokens
- Fetch model list directly from GitHub Copilot
- Improved MCP security
- Improved OpenRouter error handling
- New implementation of OpenRouter's API key validation
- Fixed the issue where gpt-5 model family doesn't work with temperature and top_p parameters
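A note on the max_completion_tokens change above: newer OpenAI models reject the legacy max_tokens field in Chat Completions requests, so a client has to pick the right parameter per model. A simplified sketch of that kind of switch (the model-name prefixes are illustrative, not an exact copy of BoltAI's logic):

```python
def token_limit_params(model: str, limit: int) -> dict:
    """Return the token-limit parameter appropriate for the target model.

    Newer OpenAI models (o-series, gpt-5 family) only accept
    `max_completion_tokens`; older chat models use `max_tokens`.
    The prefix list below is illustrative, not exhaustive.
    """
    newer_prefixes = ("o1", "o3", "o4", "gpt-5")
    if model.startswith(newer_prefixes):
        return {"max_completion_tokens": limit}
    return {"max_tokens": limit}

assert token_limit_params("gpt-5-pro", 1024) == {"max_completion_tokens": 1024}
assert token_limit_params("gpt-4o", 1024) == {"max_tokens": 1024}
```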
v1.36.0
- [New] Added support for OpenAI's new Responses API (o1-pro & o3-pro models)
- [New] LM Studio provider now supports Tool Use
- [New] Added MCP logging (use Console.app to view detailed logs)
- [Improved] New implementation of MCP server initialization
- [Improved] Switched to OpenAI-compatible API endpoint for Google AI provider
- [Improved] Improved model list fetching on app startup
- [Fix] Not showing AI response for some local AI models
- [Fix] Invalid JSON response when using Google AI
- [Fix] option+return doesn't trigger alt AI Command
- [Fix] Incorrectly parsed some MCP tool call parameters
- [Fix] Spaces aren't escaped in MCP server args
v1.35.3
- Claude 4 models support Reasoning Effort params
- New voice models: GPT-4o Transcribe and GPT-4o mini Transcribe
- Support for AWS's ProcessAWSCredentialIdentityResolver
- Added support for all AWS regions
- WAL mode enabled by default for faster and better database performance
- Improved Inspector pane rendering with reduced flickering
- New message rendering & icon set with a focus on content
- Customizable model listing endpoint for OpenAI-compatible servers (default: /v1/models)
- Attempted fix for the Message Editor window issue
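On the WAL change from this release: write-ahead logging is a one-line SQLite setting that lets readers proceed while a write is in progress, which helps responsiveness in a chat app that reads and writes constantly. A minimal demonstration in Python:

```python
import os
import sqlite3
import tempfile

# WAL mode must be set on a file-backed database (in-memory DBs ignore it).
# The file name here is hypothetical.
db_path = os.path.join(tempfile.mkdtemp(), "chats.db")
conn = sqlite3.connect(db_path)

# In WAL mode, readers are not blocked by a concurrent writer.
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
print(mode)  # -> wal
conn.close()
```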
v1.34.1: Model Context Protocol support
- New: BoltAI is now an MCP client. MCP servers allow you to extend BoltAI's capabilities with custom tools and commands.
- New: Added support for GitHub Copilot and Jina DeepSearch
- New: Added OpenAI's new models: o3, o4-mini, GPT-4.1, GPT-4.1 mini, GPT-4.1 nano
- New: Support more UI customization options: App Sidebar & Chat Input UI
- Improved: Automatically pull model list from Google AI
What's new?
Model Context Protocol
The most significant change of this release is MCP support.
An MCP server, short for Model Context Protocol server, is a lightweight program that exposes specific capabilities to AI models via a standardized protocol, enabling AI to interact with external data sources and tools.
MCP servers allow you to extend BoltAI's capabilities with custom tools and commands.
To learn more about using MCP servers in BoltAI, see https://boltai.com/docs/plugins/mcp-servers
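Under the hood, MCP is JSON-RPC 2.0: the client (BoltAI) first asks a server which tools it exposes, then invokes them on the model's behalf. A sketch of the two core request messages (the method names come from the MCP specification; the tool name and arguments are hypothetical):

```python
import json

# JSON-RPC 2.0 messages an MCP client sends to an MCP server.
list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",   # ask the server what tools it exposes
}
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",   # invoke one of those tools for the model
    "params": {
        "name": "read_file",                      # hypothetical tool name
        "arguments": {"path": "/tmp/notes.txt"},  # hypothetical arguments
    },
}

print(json.dumps(call_tool, indent=2))
```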
New AI Providers
BoltAI now supports two new AI service providers: GitHub Copilot* and Jina DeepSearch.
*Note: an active Copilot subscription is required
OpenAI's new models
This release adds support for the latest models: o3, o4-mini, and the GPT-4.1 model family.
Note that to stream the o3 model, you will need to verify your organization. Alternatively, you can disable streaming for o3 in Settings > Advanced > Feature Flags.
More customizations
Go to Settings > Appearance to personalize your app. You can now customize the App Sidebar and Chat Input Box. Enjoy.
See you in the next update 👋
v1.33.0
- New: Added support for Claude 3.7 Sonnet
- Improved: Added support for reasoning models on OpenRouter
- Improved: Better cache breakpoint handling for Anthropic models
- Other bug fixes and improvements
What's new?
The most notable change in this release is support for Anthropic's new Claude 3.7 Sonnet model. This model lets you set a custom thinking token budget, so you can decide how hard the model thinks.
BoltAI simplifies this by using the same Reasoning Effort setting:
- Low: 4k thinking tokens
- Medium: 16k thinking tokens
- High: 32k thinking tokens
It's recommended to start with medium (16k tokens), as it's the optimal setting when using Claude 3.7 extended thinking.
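That mapping is effectively a small lookup table. A sketch of how the Reasoning Effort setting could translate into Anthropic's extended-thinking request parameters (field names follow Anthropic's public docs; the round token figures mirror the list above and are illustrative, so the app's exact budgets may differ slightly, e.g. 4,096 vs. 4,000):

```python
# Reasoning Effort -> Claude 3.7 thinking budget, per the mapping above.
THINKING_BUDGET = {"low": 4_000, "medium": 16_000, "high": 32_000}

def anthropic_thinking_params(effort: str) -> dict:
    """Build the `thinking` block for an Anthropic Messages API request."""
    return {"type": "enabled", "budget_tokens": THINKING_BUDGET[effort]}

# Medium (16k tokens) is the recommended starting point.
assert anthropic_thinking_params("medium") == {"type": "enabled", "budget_tokens": 16_000}
```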
v1.32.3 (build 150)
- New: Added support for GitHub Marketplace, Pico AI Homelab (MLX), Cerebras AI.
- New: Fetch model lists for Custom AI services.
- Improved: o1 models now support streaming.
- Improved: Copy reasoning content to clipboard.
- Improved: Better error handling for AI services.
- Improved: Better Advanced Voice Mode.
v1.32.0 build 146
- New: Added support for OpenAI's reasoning_effort parameter
- New: Added support for OpenRouter's reasoning content.
content. - Fixed: Projects not showing all AI models.
v1.31.0 build 144
- New: Added support for citations when using Perplexity or Web Search AI Plugins.
- New: Show AI Plugin input params if available.
- New: Chats within projects now support drag-and-drop and other operations.
- Fixed: o1 doesn't work with custom GPT parameters.
- Fixed: Cannot set up a new Anthropic service
v1.30.5 (build 143)
- New: Added support for the new DeepSeek Reasoning model.
- New: Thinking UI. Show reasoning content in a popover.
- Improved: Automatically pulls the model list from the DeepSeek platform.
- Improved: Automatically loads previous chat messages on second page.
- Fixed: Alt AI Command profile not showing up.
- Fixed: Jumpy scroll experience.
- Fixed: Always enable text selection for user messages.
- Fixed: Floating window feature doesn't work for some users.
- Other bug fixes and improvements.