Changelog

October 7th, 2025

BoltAI v2 is here

New

Hi everyone, BoltAI v2 is finally here ✨

First of all, thank you for your continued support, and sorry for the lack of communication from my end. I’ve been putting my 100% attention into v2 development over the last few months and wasn’t able to share much of the progress.

With that said, I’m very happy with the result. After months of effort, I’m super excited to share that the BoltAI v2 beta is ready for public release. It’s a lot better than v1, which is already pretty good, I think.

Let’s get to it.



Download Link for BoltAI v2

Here is the download link if you want to jump straight to the product: https://updates.boltai.com/dmg/BoltAI-2.0.5.dmg

Note that it’s a completely new app, with a different app bundle ID. You can import data from v1 to v2 to speed up the configuration process (keyboard shortcut: Command + Shift + I).

Since v2 doesn’t have full feature parity with v1 yet, you may want to use both v1 and v2 in parallel.

What’s new?

Here are some of the notable improvements:

Cloud Sync Ready

You can choose to sync all your data with BoltAI Cloud for easy access. BoltAI Cloud is hosted on Supabase with strict row-level security (RLS) enforcement. Your data is encrypted at rest and in transit.

BoltAI supports end-to-end encryption for your API keys. You can set your own passphrase to encrypt your API keys. This way, nobody can see your API keys, not even me as the developer.

Cloud Sync is completely optional. You can skip it and use BoltAI 100% locally.
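As a rough sketch of how passphrase-based encryption like this typically works: a symmetric key is derived from your passphrase, and only ciphertext ever leaves your machine. The Python below is illustrative only (BoltAI is a native macOS app; the function name and parameters here are my own, not BoltAI internals):

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Stretch a user passphrase into a 32-byte symmetric key (PBKDF2-HMAC-SHA256)."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"), salt, iterations, dklen=32)

# The salt is random but not secret; it can be stored next to the ciphertext.
salt = os.urandom(16)
key = derive_key("my secret passphrase", salt)
```

Because the key only exists where the passphrase is entered, the sync server stores ciphertext it cannot decrypt.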

Faster UI, better UX

I rebuilt v2 from the ground up with a stronger foundation. It targets macOS 13+ and only uses modern Apple APIs, making the whole app snappier.

You can quickly navigate between chats in v2 with standard keyboard shortcuts.

I rebuilt the chat renderer completely. It’s faster and looks more modern now. I tested it with a very large chat (43k+ messages), and it doesn’t affect performance at all. Give it a try and let me know what you think.

Demo: https://share.cleanshot.com/4kLfZSwC

The model selection popover also got a major redesign. Now you can quickly switch to another service and model with just a keyboard shortcut.

Demo: https://share.cleanshot.com/4TkhV2rR

New “Instant Chat Bar”


Following your feedback, I reworked v1’s “Instant Command”: it now has a better design and a native look and feel while remaining just as powerful. Press Control + Space to trigger it.

In Settings, you can enable more advanced features such as shake to activate, auto-attach clipboard content, and so on.

Demo:

New “Instant Dictation”


I reworked the Inline Whisper feature from v1, and now you can use it with local AI models too! In v2, I decided to use the Parakeet model as it’s fast, consumes less RAM, and can work 100% offline.

Like in v1, you can choose to copy the output to the clipboard or paste it directly into the app.

Demo: https://share.cleanshot.com/W5rb4Gyb

Tighter AI provider integration

BoltAI v2 continues to support a wide range of AI services, now including subscriptions such as Claude Code or GitHub Copilot.

It works even better in v2, where all provider responses are unified to give you the same experience. In v1, there were a lot of inconsistencies between providers.

I integrated deeper with each AI service provider in v2 to take full advantage of model capabilities.

For example, when using the OpenAI provider, you can also use native tools such as Web Search, Image Generation, Code Interpreter, and more. Go to Settings > Plugins to configure native provider tools.
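To sketch what enabling a native provider tool looks like at the API level, here is an illustrative Responses API request body. The `web_search` tool type and model name are assumptions based on OpenAI’s public docs, not BoltAI internals:

```python
import json

# Hypothetical request body for OpenAI's Responses API with a hosted tool
# enabled. Check OpenAI's API reference for the identifiers your account
# supports; the model name here is just an example.
payload = {
    "model": "gpt-4.1",
    "input": "Summarize today's top macOS news.",
    "tools": [{"type": "web_search"}],
}
request_body = json.dumps(payload)
```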


More Secure

BoltAI v1 supported some great out-of-app features such as AI Command, AI Inline, and File Sync. Some of these required turning off the App Sandbox, Apple’s default security model. In v2, I reworked this so the app is sandboxed by default. This way, for future features like AI agents, you can rest assured the app won’t touch your files without asking first.

And a lot more…

I’m curious to hear your thoughts on the new app.

Future Roadmap

BoltAI v2 is still in active development. It has ~80% feature parity with v1 now.

I will continue to port features from v1 to v2 in the coming months. Please let me know which features in v1 you want to use in v2 first. I’ll prioritize them.

Top-priority features I’m porting to v2 right now are: MCP support, AI Command, and other features for teams. Stay tuned.

Please bookmark and share your feedback on this board: https://feedback.boltai.com/?b=646b16f66b8d963816ca5dc9

FAQs:

Is my v1 license valid for v2?

Yes. BoltAI v2 continues to follow the same licensing model as v1: a Perpetual License with 1 year of updates. Think of it as v1.99.

However, in v2 I’ll introduce a subscription plan for Cloud Sync, as it incurs real recurring expenses. Cloud Sync is completely optional: you can turn it off and keep using BoltAI like v1.

In the first beta, there is no license validation yet. Use it for free while in beta!

Will v1 keep receiving updates / bug-fixes?

I’ll continue to fix critical bugs in v1 until all features are ported to v2. New features and major updates will be focused on v2.

How do I migrate my data?

To import data from v1, open the BoltAI v2 app, press Command + Shift + I, and follow the instructions.

Note that in the first release, AI Commands & AI Assistants are not supported yet. You can import: AI Services, Folders, Chats, Prompts and Memories.

How do I report bugs?

Please report bugs for v2 using this board: https://feedback.boltai.com/?b=646b16f66b8d963816ca5dc9

And that’s it for now

See you in the next release 👋


October 7th, 2025

v1.36.5 (build 169)

New / Improved
  • Added support for OpenAI's latest model gpt-5-pro
  • Support gpt-5 without Org Verification (turn streaming off)
  • Supports Responses API for o3-pro, o1-pro, and other deep research models
  • Azure OpenAI Service: added support for GPT-5 and other models
  • Switched to max_completion_tokens instead of max_tokens
  • Fetch the model list directly from GitHub Copilot
  • Improved MCP security
  • Improved OpenRouter error handling
  • New implementation of OpenRouter's API key validation
  • Fixed the issue where gpt-5 model family doesn't work with temperature and top_p parameters

July 25th, 2025

v1.36.0

New / Improved / Fixed
  • [New] Added support for OpenAI's new Responses API (o1-pro & o3-pro models)
  • [New] LM Studio provider now supports Tool Use
  • [New] Added MCP logging (use Console.app to view detailed logs)
  • [Improved] New implementation of MCP server initialization
  • [Improved] Switched to OpenAI-compatible API endpoint for Google AI provider
  • [Improved] Improved model list fetching on app startup
  • [Fix] Not showing AI response for some local AI models
  • [Fix] Invalid JSON response when using Google AI
  • [Fix] Option+Return doesn't trigger the alt AI Command
  • [Fix] Incorrectly parsed some MCP tool call parameters
  • [Fix] Spaces aren't escaped in MCP server args

May 29th, 2025

v1.35.3

New / Improved


  • Claude 4 models support Reasoning Effort params
  • New voice models: GPT-4o Transcribe and GPT-4o mini Transcribe
  • Support for AWS's ProcessAWSCredentialIdentityResolver
  • Added support for all AWS regions
  • WAL mode enabled by default for faster and better database performance
  • Improved Inspector pane rendering with reduced flickering
  • New message rendering & icon set with a focus on content
  • Customizable model listing endpoint for OpenAI-compatible servers (default: /v1/models)
  • Attempted fix for the Message Editor window issue

April 22nd, 2025

v1.34.1: Model Context Protocol support

New / Improved / Fixed


  • New: BoltAI is now an MCP client. MCP servers allow you to extend BoltAI's capabilities with custom tools and commands
  • New: Added support for GitHub Copilot and Jina DeepSearch
  • New: Added OpenAI's new models: o3, o4-mini, GPT-4.1, GPT-4.1 mini, GPT-4.1 nano
  • New: Support more UI customization options: App Sidebar & Chat Input UI
  • Improved: Automatically pull model list from Google AI

What's new?

Model Context Protocol

The most significant change of this release is MCP support.

An MCP server, short for Model Context Protocol server, is a lightweight program that exposes specific capabilities to AI models via a standardized protocol, enabling AI to interact with external data sources and tools.

MCP servers allow you to extend BoltAI's capabilities with custom tools and commands.

To learn more about using MCP servers in BoltAI, see https://boltai.com/docs/plugins/mcp-servers
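For reference, MCP servers are typically declared with a command and its arguments in a JSON configuration using the standard `mcpServers` shape. The server name and package below are illustrative examples, not BoltAI defaults:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/Documents"]
    }
  }
}
```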

New AI Providers

BoltAI now supports two new AI service providers: GitHub Copilot* and Jina DeepSearch

*Note: an active Copilot subscription is required

OpenAI's new models

This release adds support for the latest models: o3, o4-mini, and the GPT-4.1 model family.

Note that to stream the o3 model, you need to verify your organization. Alternatively, you can disable streaming for o3 in Settings > Advanced > Feature Flags

More customizations

Go to Settings > Appearance to personalize your app. You can now customize the App Sidebar and Chat Input Box. Enjoy.

See you in the next update 👋


February 26th, 2025

v1.33.0

New


  • New: Added support for Claude 3.7 Sonnet
  • Improved: Added support for reasoning models on OpenRouter
  • Improved: Better cache breakpoint handling for Anthropic models
  • Other bug fixes and improvements

What's new?

The most notable change in this release is support for Anthropic's new Claude 3.7 model. This model lets you set a custom "thinking budget" in tokens: you decide how hard the model thinks.

BoltAI simplifies this by using the same Reasoning Effort setting:

  • Low: 4K thinking tokens
  • Medium: 16K thinking tokens
  • High: 32K thinking tokens

It's recommended to start with Medium (16K tokens), as it's the optimal setting for Claude 3.7's extended thinking.
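The mapping above can be sketched as the parameters an extended-thinking request would carry. The request shape follows Anthropic's public `thinking` parameter; the exact budget values are taken from this changelog, and the helper name is my own:

```python
# Reasoning Effort -> thinking budget, per the mapping in this release.
THINKING_BUDGETS = {"low": 4_000, "medium": 16_000, "high": 32_000}

def thinking_params(effort: str) -> dict:
    """Translate a Reasoning Effort level into Anthropic extended-thinking params."""
    return {"type": "enabled", "budget_tokens": THINKING_BUDGETS[effort]}
```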


February 15th, 2025

v1.32.3 (build 150)

New / Improved


  • New: Added support for GitHub Marketplace, Pico AI Homelab (MLX), and Cerebras AI.
  • New: Fetch model lists for Custom AI services.
  • Improved: o1 models now support streaming.
  • Improved: Copy reasoning content to clipboard.
  • Improved: Better error handling for AI services.
  • Improved: Better Advanced Voice Mode.

February 3rd, 2025

v1.32.0 build 146

New / Fixed


  • New: Added support for OpenAI's reasoning_effort parameter
  • New: Added support for OpenRouter's reasoning content.
  • Fixed: Projects not showing all AI models.

January 28th, 2025

v1.31.0 build 144

New / Improved / Fixed


  • New: Added support for citations when using Perplexity or Web Search AI Plugins.
  • New: Show AI Plugin input params if available.
  • New: Chats within projects now support drag-and-drop and other operations.
  • Fixed: o1 doesn't work with custom GPT parameters.
  • Fixed: Cannot set up a new Anthropic service

January 22nd, 2025

v1.30.5 (build 143)

New / Improved / Fixed


  • New: Added support for the new DeepSeek Reasoning model.
  • New: Thinking UI. Show reasoning content in a popover.
  • Improved: Automatically pulls the model list from the DeepSeek platform.
  • Improved: Automatically loads previous chat messages on the second page.
  • Fixed: Alt AI Command profile not showing up.
  • Fixed: Jumpy scroll experience.
  • Fixed: Always enable text selection for user messages.
  • Fixed: Floating window feature doesn't work for some users.
  • Other bug fixes and improvements.