coding · vscode-extension · jetbrains · open-source · Status: active

Continue

Open-source AI code assistant that lets you bring any model and configure everything


Continue is a free, open-source AI coding extension for VS Code and JetBrains that lets you connect any model from any provider. Run Claude Opus 4.7 for chat, a local Ollama model for autocomplete, and GPT-5 for large edits, all from one extension. Configuration lives in a JSON or YAML file you control. There are no subscriptions, no vendor lock-in, and no opinionated defaults you can't override. Continue Hub lets individuals and teams share assistant configurations. For engineers who want the full AI coding stack without signing over model choice and data to a single company, Continue is the clearest answer available today.

Most AI coding tools make a decision for you: here’s the model, here’s the pricing, here’s what you can configure and what you can’t. Continue.dev takes the opposite approach. Continue is an open-source extension for VS Code and JetBrains that treats model selection, provider routing, and behavioral configuration as things you should own entirely. You pick the models. You write the config. You decide whether autocomplete hits a local Ollama instance or an Anthropic API key. The extension itself costs nothing. If you’ve been watching the AI coding space and feeling like every serious product wants you to trust them with both your code and your model budget, Continue is the tool that was built specifically for people who feel that way.

Quick verdict

Continue is the right choice if you want the full AI coding stack without ceding control to a vendor. Setup requires more work than Cursor or GitHub Copilot, and the out-of-the-box experience is less polished. But once it's configured, you get a genuinely capable chat, edit, autocomplete, and agent workflow running on whichever combination of models fits your use case, with your API keys going directly to your chosen providers. For JetBrains users especially, it's the best option on the market.

What is Continue, exactly?

Continue is an open-source AI code assistant extension, first released in August 2023 by the San Francisco company of the same name. It’s not a fork of anything; it’s a plugin for existing IDEs. VS Code and JetBrains both have official, actively maintained releases. The GitHub repository has accumulated significant community contribution, and the project operates with a public roadmap.

The core design decision that makes Continue different from every major competitor: it doesn’t own the model relationship. When you use Cursor, your requests go through Anysphere’s servers. When you use GitHub Copilot, they go through Microsoft’s infrastructure. When you use Continue, they go from your machine directly to whatever model provider you’ve configured. That architectural choice has real implications for privacy, data handling, cost structure, and flexibility.

Configuration is done in a JSON or YAML file called config.json (or config.yaml) that lives in your home directory or, optionally, in a project-level directory for team sharing. The file specifies which models handle which tasks: one model for chat, another for autocomplete, another for edit instructions. You can use different providers for each, mix hosted and local models freely, and swap them out by editing the file. There’s no UI required for any of this.
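A sketch of what that file can look like, using the config.json schema (field names, model identifiers, and keys here are illustrative and may differ across Continue versions; check the official config reference for your release):

```json
{
  "models": [
    {
      "title": "Chat model",
      "provider": "anthropic",
      "model": "ANTHROPIC_MODEL_ID",
      "apiKey": "YOUR_ANTHROPIC_KEY"
    },
    {
      "title": "Edit model",
      "provider": "openai",
      "model": "OPENAI_MODEL_ID",
      "apiKey": "YOUR_OPENAI_KEY"
    }
  ]
}
```

Swapping a provider is a one-entry edit to this file; there's no account migration and no UI to click through.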

The four main capability areas are chat, edit, autocomplete, and agent mode. Chat is a sidebar panel that talks to your codebase using whatever context you supply. Edit is inline: select code, describe the change, get a diff. Autocomplete is the continuous background suggestion layer. Agent mode gives the model the ability to take multi-step actions: read files, run terminal commands, iterate on its output. These four modes have rough equivalents in every major AI coding tool; the difference with Continue is that each one can be powered by a different model of your choosing.

Continue Hub sits alongside the extension as a registry for shareable configurations. You can publish a config that combines particular models, custom slash commands, and context providers for a specific stack, and anyone can pull it into their own setup. It functions like a package manager for AI assistant behavior.

The features that make it the configurable choice

Bring your own model, anywhere

Continue connects to every major hosted model provider: Anthropic, OpenAI, Google, Mistral, Cohere, and others. It also supports any OpenAI-compatible API endpoint, which covers most newer providers and self-hosted setups. In practice this means you can run Claude Opus 4.7 for complex chat, GPT-5 for a specific edit task you find it better at, and Gemini 3 for code review, all within the same Continue session.

The multi-model routing is not just theoretical. The config file has separate model slots per task type. Your autocomplete model is specified separately from your chat model, which is specified separately from your edit model. You don’t have to use the same model everywhere, and you shouldn’t have to. A fast, cheap model for autocomplete and a slower, smarter frontier model for architectural chat is a sensible setup, and Continue is built to handle it natively.

This matters for cost. If you’re using Cursor Pro at $20/month, all your model usage is bundled. If you’re using Continue with your own API keys, you pay exactly what each request costs. For a developer who uses autocomplete heavily but chat sparingly, the economics often favor Continue. For a developer who hammers Opus-level models all day on complex tasks, Cursor’s flat rate might be cheaper. Do the math for your usage pattern.

Local-first with Ollama support

Continue’s Ollama integration is one of the most practical local-model workflows available in any coding tool. You install Ollama, pull a model like qwen2.5-coder or deepseek-coder-v2, and point a Continue model slot at localhost:11434. That’s the entire setup.
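Under the same assumption about the config.json schema, wiring a pulled Ollama model into the autocomplete slot might look like this (the model tag is whatever you pulled; 11434 is Ollama's default port):

```json
{
  "tabAutocompleteModel": {
    "title": "Local autocomplete",
    "provider": "ollama",
    "model": "qwen2.5-coder:7b",
    "apiBase": "http://localhost:11434"
  }
}
```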

The obvious use case is autocomplete. A local 7B or 14B code model running on your machine produces suggestions in under 100ms with no API cost and no data leaving your machine. The suggestion quality from a well-tuned local model is competitive with early-generation Copilot, which is good enough for the mechanical tab-completion moments that make up a lot of coding. You save the paid API calls for tasks where the frontier model quality actually matters: complex refactors, architectural questions, writing tests for tricky logic.

For teams in regulated industries or security-conscious environments, local-only autocomplete is a compliance story as much as a cost story. Code never leaves the developer’s machine during the most frequent AI interaction they have. That’s a meaningful guarantee that hosted tools can’t offer.

The local model support also covers LM Studio and any other server that speaks the OpenAI API format. If you’ve already built a local model setup for other tools, Continue will typically plug into it without changes.
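Because LM Studio serves an OpenAI-compatible endpoint (by default on port 1234), hooking it up is a matter of pointing an OpenAI-style model entry at that address. A hedged sketch, reusing the config.json schema assumed above (the model name must match whatever you have loaded in LM Studio):

```json
{
  "models": [
    {
      "title": "LM Studio",
      "provider": "openai",
      "model": "LOADED_MODEL_NAME",
      "apiBase": "http://localhost:1234/v1",
      "apiKey": "not-needed-for-local"
    }
  ]
}
```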

JSON-configured workflows

The Continue config file is the feature that will either sell you on the tool or make you walk away. It’s expressive but it requires you to write it. Here’s what that means in practice: you can define custom slash commands that combine a fixed prompt with dynamic context (the current file, a selection, the git diff), you can add context providers that pull in information from sources Continue doesn’t know about by default (your documentation site, your issue tracker, a local knowledge base), and you can set per-project configs that override your global defaults.

Slash commands are particularly useful. You can define /review as a command that runs the current file through a specific model with a code review prompt you wrote, then add it to Continue Hub so your whole team uses the same review behavior. You can define /test as a command that generates test cases using your project’s preferred testing framework and style, without re-explaining that context every time.
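A /review command of that kind could be sketched like this (the customCommands shape and the {{{ input }}} template variable follow Continue's documented config format, though the exact templating syntax may vary by version; the prompt text is a placeholder you'd write yourself):

```json
{
  "customCommands": [
    {
      "name": "review",
      "description": "Review selected code against team conventions",
      "prompt": "Review the following code for correctness, readability, and adherence to our style guide. Flag anything that needs a test.\n\n{{{ input }}}"
    }
  ]
}
```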

The config format is version-controllable. You commit config.yaml to your repo, and every engineer who clones the project gets the same Continue setup. This is the team story for Continue: shared context providers, shared slash commands, shared model choices, all defined in a file that lives where your code lives.

Continue Hub for sharing assistants

Continue Hub is the distribution layer for the config-as-code model. Individuals and teams publish assistant configurations as packages. Other users install them with a single command that imports the config into their Continue setup. A published configuration can include model recommendations, slash commands, context providers, and documentation about what the assistant is for.

This is a meaningful feature for open-source projects and developer communities. If you’re maintaining a large open-source codebase, you can publish a Continue assistant config that helps contributors understand the project’s conventions, preferred libraries, and code style. Contributors install it once and their AI assistance is tuned to your project without any onboarding conversation.

For companies, Hub is where the team collaboration features live. Centrally managed assistant configurations, access controls, and versioning are available on paid Teams plans. The free individual tier covers sharing and discovery but not the enterprise access control layer.

JetBrains parity

Most AI coding extensions are VS Code-first projects that add a JetBrains version as an afterthought, if they add one at all. GitHub Copilot does support JetBrains, but its JetBrains feature set has historically lagged VS Code. Cline is VS Code only. Cursor is a standalone VS Code fork, which means JetBrains users can't use it at all.

Continue’s JetBrains plugin is an official, actively maintained release covering IntelliJ IDEA, PyCharm, GoLand, WebStorm, and other IDEs in the JetBrains family. The feature set is close to parity with VS Code: chat, edit, autocomplete, and agent mode all work. Configuration uses the same config file format.

For teams that are standardized on JetBrains IDEs, this removes what would otherwise be a hard blocker. Backend engineers running IntelliJ, data engineers using PyCharm, and Go developers using GoLand can all use the same Continue setup as their VS Code colleagues, sharing the same config file and slash commands. That consistency is genuinely rare in this space.

Pricing

Continue, the extension, is free. There’s no subscription, no usage limit imposed by Continue, and no rate limiting from Continue’s infrastructure, because Continue doesn’t sit between your requests and your model provider. You pay whoever’s API you use, at that provider’s rates.

For a developer using Claude Sonnet 4.6 via Anthropic’s API for chat and edits, and a local Ollama model for autocomplete: the Anthropic usage costs money at Anthropic’s token rates, and the local model costs nothing beyond electricity. The total bill depends entirely on how much you use the hosted model and at what token volumes.

Compare this to Cursor Pro at $20/month or GitHub Copilot at $10/month. For developers who lean heavily on hosted frontier models, the flat-rate tools may cost less in practice because API token costs at that volume add up quickly. For developers who use autocomplete very heavily but stick to local models for that, and use hosted models only for deliberate chat sessions, Continue's pay-per-use model often costs less. There's no universal answer; it depends on your usage pattern and which models you choose.

Continue Hub is free for individuals and open-source use. It covers publishing, discovering, and installing assistant configurations with no limit. Teams plans with enterprise features such as centralized configuration management, access controls, and priority support are paid. Pricing for Teams wasn’t publicly listed at the time of this review; the expectation is a per-seat model typical for developer tools.

One hidden cost to account for: your time configuring things. The first hour of getting Continue set up with a good multi-model config and a few useful slash commands is time you don't spend on Cursor or Copilot setups. That's not a knock; it's an honest trade-off. More control requires more initial investment.

Where Continue wins and where it doesn’t

Continue’s genuine advantage is breadth of compatibility. If you’re a JetBrains user, it’s the strongest AI assistant available. If you’re running a regulated or privacy-conscious environment, local autocomplete plus direct API calls means you can control exactly where every token goes. If you want to use a model that Cursor or Copilot don’t support, Continue will connect to it. If your team has strong opinions about which models you use and why, Continue’s config-as-code approach lets you enforce those opinions systematically.

The areas where it loses to polished products are real and worth stating directly. The out-of-the-box autocomplete experience, before you’ve tuned the model config, is not as good as Cursor’s fine-tuned Tab model. Cursor’s codebase indexing and semantic search are more mature than anything Continue ships by default. Agent mode in Continue works, but it’s newer and less battle-tested than the agent workflows in Cursor or Cline. And the setup cost is non-trivial: if you want the best Continue experience, you’re going to spend time in the config file, which Cursor users never have to think about.

The honest framing: Continue is for engineers who have strong preferences and the patience to express them in config files. If you don’t care which model handles your autocomplete and you just want it to work, Cursor is the faster path to a good experience. If you care a lot and want the answers to live in version-controlled YAML, Continue is built for you.

Who Continue is built for

The developer who gets the most from Continue is someone who’s already formed opinions about AI models. They know they prefer Claude Sonnet 4.6 for refactoring over GPT-5. They’ve tried a few local models and settled on one they like for tab completion. They want their assistant configuration in their dotfiles. They’re not interested in paying a flat rate for access to a curated model menu; they’d rather control the inputs.

JetBrains users are the clearest specific group. If you’re in IntelliJ or PyCharm every day and you’ve been watching VS Code-first tools with envy, Continue is the tool that takes your IDE seriously.

Privacy-focused engineers are a strong fit too. Developers working on proprietary algorithms, legal tech, healthcare adjacent software, or anything where code confidentiality matters will find the direct API model reassuring. No proxy, no third-party indexing, no terms of service that say your code might improve a product.

Continue is probably not the right starting point for a developer new to AI coding tools who wants maximum assistance with minimum setup. For that, see the best AI agents for coding roundup for tools better suited to fast onboarding.

Continue vs the alternatives

Continue vs Cursor. The most common comparison, and the one with the clearest answer: Cursor if you want the fastest path to a polished experience in VS Code, Continue if you want full control or you’re in JetBrains. Cursor owns the whole editor and makes strong defaults. Continue owns nothing and makes no defaults you can’t change. Cursor’s autocomplete is better out of the box. Continue’s is better once you’ve spent time on config and chosen models you trust. The full breakdown lives at Continue vs Cursor. The real decision point is whether you value polish or control more.

Continue vs Cline. Cline is VS Code-only and agent-focused. It’s built around an autonomous loop that reads files, runs commands, and iterates, which makes it one of the more capable agentic tools available as an extension. Continue has agent capabilities, but that’s not its primary identity. If agentic multi-step task execution is your main need and you’re in VS Code, Cline is worth comparing directly. If you need JetBrains support or care deeply about the full spectrum of chat, edit, and autocomplete alongside agent work, Continue covers more ground.

Continue vs Aider. Aider is terminal-native and git-aware, which makes it a different category of tool than a GUI extension. Aider’s strength is repository-level changes applied as commits, driven from the command line. Continue’s strength is the interactive IDE experience with multiple mode types. They’re not direct competitors for most workflows. If your preferred working style is terminal-first with deliberate commits as the unit of work, Aider fits better. If you want AI embedded in your editor as you write code, Continue is the right shape. Developers who want both sometimes run Aider for big architectural changes and Continue for daily editing.

Continue vs GitHub Copilot. Copilot is the safe, easy choice: minimal setup, plugs into VS Code and JetBrains and Neovim, well-documented, $10/month. It’s also opaque about model choices and gives you no ability to swap providers. If you’re evaluating for a team that wants to standardize quickly without configuration work, Copilot’s simplicity is genuinely valuable. For teams that already have API contracts with specific model providers or want to control which models touch their code, Continue’s bring-your-own approach is worth the setup cost. Check best AI coding agents for a fuller picture of where Copilot and Continue each fit.

Getting started

Install the Continue extension from the VS Code Marketplace or the JetBrains Plugin Marketplace. On first launch, Continue will create a default config.json in ~/.continue/. Open that file and you’ll see the model slots: one for chat, one for autocomplete, one for edit.

The fastest useful starting point is adding your Anthropic API key and setting Claude Sonnet 4.6 as your chat model. If you have Ollama installed, add a local model slot for autocomplete pointing at localhost:11434 with whatever code model you have pulled. If you don’t have Ollama yet, set Sonnet as your autocomplete model too; you can tune it later.
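Put together, a minimal starter config along those lines might look like this (a sketch only; the model identifier strings are placeholders, so check your provider's docs and Continue's config reference for the exact values):

```json
{
  "models": [
    {
      "title": "Claude Sonnet",
      "provider": "anthropic",
      "model": "ANTHROPIC_SONNET_MODEL_ID",
      "apiKey": "YOUR_ANTHROPIC_KEY"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Ollama autocomplete",
    "provider": "ollama",
    "model": "qwen2.5-coder:7b"
  }
}
```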

After that, try the chat panel on a real question about a file you’re working in. Then try selecting a function, hitting the edit shortcut (Cmd+I on Mac), describing a change, and reviewing the diff. Those two interactions cover 80% of the daily value.

When you’re ready to go deeper, look at the slash command documentation and write one custom command for something you do repeatedly in your stack. That’s the moment Continue starts feeling like a tool built for you specifically, rather than a tool built for everyone.

The bottom line

Continue is the best answer to a specific question: “What if I want the full AI coding assistant experience, in my IDE of choice, on my terms?” It won’t beat Cursor’s polish on day one. It won’t match Copilot’s zero-configuration simplicity. What it does is give you a serious, capable, actively developed tool that puts every meaningful decision in your hands, ships to both VS Code and JetBrains, costs nothing beyond your API usage, and keeps your code out of any infrastructure you didn’t explicitly choose.

For JetBrains users with no real alternatives, it’s an easy recommendation. For VS Code users who have strong model preferences, care about data routing, or want their assistant config in version control, it’s worth the setup investment. If you’re unsure which tool fits your workflow, the best AI agents for coding page has the full landscape.

Key features

  • Bring your own model from any provider or run locally via Ollama
  • Chat, edit, autocomplete, and agent modes in VS Code and JetBrains
  • JSON and YAML config files for full control over every behavior
  • Continue Hub for sharing and discovering assistant configurations
  • Custom slash commands and context providers for any workflow
  • Multi-model routing, different models per task type
  • Full JetBrains IDE support with feature parity to VS Code

Pros and cons

Pros

  • + Completely free to use; you only pay for model API calls you make yourself
  • + Works with any model provider: Anthropic, OpenAI, Google, Ollama, LM Studio, and more
  • + Full JetBrains support, rare among AI coding extensions
  • + Configuration is plain text, version-controllable, and shareable via Continue Hub
  • + No data sent to Continue servers; all traffic goes directly to your chosen model provider
  • + Active open-source community with regular releases and a public roadmap

Cons

  • − Setup takes real effort compared to opinionated tools like Cursor
  • − Autocomplete quality depends entirely on which model you configure
  • − No built-in codebase indexing as polished as Cursor's vector search
  • − Continue Hub team features require a paid plan
  • − Agent mode is newer and less mature than chat and edit modes

Who is Continue for?

  • Privacy-conscious engineers who want model traffic going to their own API keys, not a third-party proxy
  • JetBrains users who can't switch to VS Code but want serious AI assistance
  • Teams that want to standardize on a shared assistant configuration they control entirely
  • Developers running air-gapped or offline setups with local Ollama models

Alternatives to Continue

If Continue isn't quite the right fit, the closest alternatives are Cline, Cursor, and Aider. See our full Continue alternatives page for side-by-side comparisons.

Frequently Asked Questions

What is Continue?
Continue is a free, open-source AI code assistant extension for VS Code and JetBrains. It connects to any AI model provider and gives you chat, inline editing, autocomplete, and agent capabilities inside your IDE. Unlike Cursor or GitHub Copilot, Continue doesn't proxy your requests through its own servers. Your API calls go directly to whichever model provider you configure, whether that's Anthropic, OpenAI, Google, or a locally running Ollama instance.
Is Continue free?
The extension itself is free and open source under the Apache 2.0 license. You pay for whatever model API you use, the same as you would using that API directly. Continue Hub, which lets you share and discover assistant configurations, is free for individuals. Teams plans with enterprise features are paid.
How does Continue compare to Cursor?
Cursor is an opinionated, polished product that owns your whole editor. Continue is an extension you add to VS Code or JetBrains that gives you full control over which models run where. Cursor is faster to get started with and has a more refined autocomplete experience out of the box. Continue wins if you care about model flexibility, IDE choice (especially JetBrains), data privacy, or simply not wanting to switch editors. See the full breakdown at the Continue vs Cursor comparison.
Can Continue work with local models?
Yes, local model support is one of Continue's strongest points. It integrates directly with Ollama, LM Studio, and any OpenAI-compatible local server. A common setup is using a small fast local model like Mistral or Qwen for autocomplete (low latency, no API cost) and a hosted frontier model like Claude Sonnet 4.6 for chat and editing. You configure which model handles which task type in the config file.
What is Continue Hub?
Continue Hub is a registry where individuals and teams can publish, share, and discover assistant configurations, context providers, and slash commands for Continue. Think of it as a package manager for AI assistant setups. An individual can publish a config that combines specific models and custom prompts for a particular stack, and anyone else can pull that config directly into their own Continue setup.
Does Continue work in JetBrains IDEs?
Yes. Continue supports IntelliJ IDEA, PyCharm, GoLand, WebStorm, and other JetBrains IDEs through an official JetBrains plugin. Feature parity with the VS Code extension is close, covering chat, edit, and autocomplete. This is notable because most AI coding extensions are built VS Code-first and treat JetBrains as an afterthought if they support it at all.
