Agentbrisk

Cline

Open-source autonomous coding agent that runs in VS Code with full visibility


Cline is an open-source autonomous coding agent that runs as a VS Code extension. Originally released as claude-dev, it lets you bring your own API keys from Anthropic, OpenAI, Google, or any compatible provider and then gives the model access to your full codebase, a terminal, a browser, and MCP-connected tools. Every file write, shell command, and web request requires your explicit approval, so you stay in control even when the agent is running a long, multi-step task. Plan mode lets you review the agent's entire approach before it touches anything. There are no monthly seat fees, no opaque model routing, and no vendor lock-in. You see the exact prompts, the exact costs, and every action the agent takes. For engineers who want real autonomy without giving up visibility, Cline is the most transparent option in the category.

Most AI coding tools ask you to trust them. Cline asks you to watch. Every file it wants to edit, every shell command it wants to run, every URL it wants to fetch, it stops and waits for your go-ahead. That sounds slow until you’ve watched a different agent quietly delete a config file it decided was redundant. Cline is an open-source VS Code extension that gives an AI model real access to your codebase and real tools, then makes every step explicit. No black boxes. No surprise refactors. You bring your own API keys, pick your own model, and the agent shows its work.

Quick verdict

Cline is the best choice if you want genuine autonomy without giving up control. It’s open source, free to install, and works with every major AI provider. The setup takes five minutes if you already have an API key. The cost discipline requires more attention. For engineers who want to delegate real work to an agent but refuse to fly blind, nothing in the category is more transparent.

What is Cline, exactly?

Cline started as a GitHub project called claude-dev, published in mid-2024. The name described exactly what it was: a VS Code extension that gave Claude API access to your local development environment. It could read files, write files, run terminal commands, and report back. The reception was strong enough that the project expanded well beyond Claude, adding support for OpenAI, Google Gemini, Mistral, and local models running through Ollama or LM Studio. The rename to Cline reflected that new scope.

At its core, Cline is an autonomous agent loop running inside VS Code. You describe a task in natural language. The agent reads the relevant files, forms a plan, and starts executing. It can write code across multiple files, install packages, run test suites, check the output, and iterate based on what it sees. It can open a browser, navigate to a URL, and use what it finds to inform a decision. It can call out to external tools through the Model Context Protocol.

What makes Cline different from most tools in this space is the philosophy around transparency. The agent doesn’t batch up a dozen changes and present them at the end. It shows you each action before it happens. You see the exact diff before it writes a file. You see the exact command before it runs in the terminal. You can reject any step and redirect the agent before it’s too late. This isn’t a limitation of the architecture, it’s a deliberate design choice.

The project became a company in 2024 and added an optional Cline Cloud service for teams who want managed billing rather than routing everything through their own provider accounts. The open-source core remains MIT-licensed, and the extension is free. The company’s revenue model depends on Cline Cloud adoption, not on gating features behind a paywall.

The GitHub repository crossed tens of thousands of stars quickly, which is unusual for a tool this focused. It’s become a genuine reference implementation for what an open, auditable coding agent should look like.

The features that earn it a place in your sidebar

Step-by-step transparency

Every action Cline wants to take surfaces as an explicit approval prompt. Want to write to src/api/auth.ts? You see the full diff. Want to run npm run test? You see the exact command. Want to fetch a URL? You see the address. This happens before anything executes. You’re not reviewing a log after the fact, you’re approving a queue in real time.

In practice this means you catch mistakes early. The agent might decide that the best fix for a failing test is to delete the assertion. You see that diff, you reject it, and you add a note. The agent adjusts. That feedback loop is much tighter than reviewing a finished pull request from an AI that ran for ten minutes unobserved.

The transparency extends to cost. Cline tracks token usage per session and shows you the running cost in the sidebar. If a task is ballooning to $3 in API calls because the agent keeps re-reading a large file, you know it before it becomes a $30 session. That level of visibility is absent from most subscription-based tools where the model routing and token consumption are hidden behind a flat monthly fee.

Bring your own keys

Cline connects directly to your provider accounts. You paste in an Anthropic API key, or an OpenAI key, or a Google AI Studio key, and Cline uses that for every request. There’s no Cline-layer markup on tokens. There’s no opaque model routing that quietly downgrades you to a cheaper model to protect margins. You configure the model explicitly, and that’s the model that runs.

This matters for a few reasons beyond cost. If your organization can’t send code to third-party SaaS platforms for compliance reasons, BYOK lets you route everything through your own provider agreement. If you want to run on a local model using Ollama, you can point Cline at localhost and nothing leaves your machine. If Anthropic releases a new model and you want to test it the same day, you update one dropdown in the Cline settings.
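One reason local routing is so simple is that most backends, Ollama included, speak the same OpenAI-compatible chat API. A minimal sketch of that request shape, assuming a hypothetical helper and model name (this is not Cline's internal code):

```typescript
// Illustrative sketch of the request shape an OpenAI-compatible backend
// expects. Ollama exposes this API at localhost:11434/v1.
// buildChatRequest and the model name are hypothetical examples,
// not part of Cline itself.
interface ChatRequest {
  url: string;
  body: {
    model: string;
    messages: { role: "system" | "user"; content: string }[];
  };
}

function buildChatRequest(
  baseUrl: string,
  model: string,
  prompt: string
): ChatRequest {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    body: {
      model,
      messages: [
        { role: "system", content: "You are a coding assistant." },
        { role: "user", content: prompt },
      ],
    },
  };
}

// Pointed at a local Ollama instance, nothing leaves your machine.
const req = buildChatRequest(
  "http://localhost:11434",
  "qwen2.5-coder",
  "Refactor this function"
);
console.log(req.url); // http://localhost:11434/v1/chat/completions
```

Swapping providers is then just a different base URL and key; the request shape stays the same.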

The downside is real: you’re responsible for managing your API keys, monitoring your spend, and understanding provider rate limits. There’s no billing dashboard with alerts and spend caps built into Cline itself. You need to use your provider’s console for that. It’s a tradeoff most developers will find acceptable, but it’s worth knowing about before your first long agentic session.

MCP support

Cline is a first-class MCP client. MCP, the Model Context Protocol, is an open standard for connecting AI models to external tools and data sources. Think of it as a plugin system into which any MCP-compatible server can plug: a Postgres query tool, a Jira ticket reader, a Figma file accessor, a custom internal API wrapper.

You configure MCP servers in Cline’s settings and they become available as tools during any agentic session. The agent can decide on its own to call a tool when it determines it’s relevant to the task. You see those calls in the approval queue the same way you see file writes and terminal commands. Nothing runs silently.

For teams with internal tooling, this is significant. You can build an MCP server for your internal deployment pipeline and let Cline trigger deploys as part of a task. You can wrap your team’s design tokens in an MCP server and let the agent reference them directly when writing CSS. The surface area is limited only by what you’re willing to build.
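Registering a server is a small JSON config. Here is a sketch using the `mcpServers` shape shared by most MCP clients; the `deploys` server and its path are hypothetical examples, and the exact settings file for your Cline version is covered at docs.cline.bot:

```json
{
  "mcpServers": {
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "."]
    },
    "deploys": {
      "command": "node",
      "args": ["./tools/deploy-mcp-server.js"]
    }
  }
}
```

Each named server becomes a set of tools the agent can propose calling, and those calls land in the same approval queue as file writes and terminal commands.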

Browser and computer use

Cline can open a browser and interact with it. This is useful in more situations than it first appears. Checking whether a UI change looks right in a real browser. Scraping documentation that isn’t available as an API. Logging into a staging environment to verify that a new feature works end-to-end. Filing a bug report in a web interface.

The browser actions are exposed through the same approval model as everything else. The agent proposes an action, “navigate to http://localhost:3000 and take a screenshot,” and you approve or reject. It’s not running a headless browser in secret. You’re watching it work.

Computer use is the more powerful capability, and the one that calls for more caution. Cline can interact with your desktop, not just the browser, which means it can operate applications that don’t have APIs. This is useful for scripted QA work on native apps. It requires closer supervision than web browsing, and most developers will leave it off for day-to-day tasks.

Plan mode and guardrails

Plan mode is one of Cline’s most practical features. Before the agent touches a single file, you can ask it to produce a plan: a written description of what it intends to do, in what order, and why. You read the plan, give feedback, and only then let it proceed. If the plan looks wrong, you catch the misunderstanding before it manifests in a dozen file edits.

This is especially valuable for large tasks that span multiple systems. Migrating an authentication layer, extracting a service into a separate module, updating a dependency with breaking changes. These tasks have enough surface area that a wrong assumption at the start can mean thirty minutes of work in the wrong direction. Plan mode catches that.

Beyond plan mode, Cline lets you configure which actions require approval and which can auto-proceed. If you trust the agent to read files without asking, you can auto-approve reads. If you want every write to require confirmation, you keep the default. The guardrail configuration is per-project, so you can be looser in a throwaway sandbox and stricter in a production codebase.

Pricing: why open source isn’t free

Cline the extension costs nothing. The GitHub repository is MIT-licensed, the VS Code extension is free to install, and there are no feature tiers. What you pay for is the AI itself.

If you use Claude Sonnet 4.6 through your own Anthropic API key, you’re paying Anthropic’s published input and output token rates. A typical medium-complexity task, say adding a new API endpoint with tests, might consume 150,000 to 400,000 tokens depending on how many files the agent reads and how many iterations it takes. At current Sonnet 4.6 rates, that’s somewhere between $0.45 and $1.20 per task. Claude Opus 4.7 costs roughly six times more per token, so a complex refactor with Opus could run $8 to $15 without careful context management.
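That arithmetic is easy to sanity-check yourself. A minimal sketch, assuming an illustrative blended rate of $3 per million tokens for a Sonnet-class model (not a quoted provider price):

```typescript
// Back-of-envelope task cost: tokens consumed x blended $/1M tokens.
// The $3/M rate and the 6x Opus-class multiplier are illustrative
// assumptions, not published provider prices.
function estimateTaskCost(tokens: number, ratePerMillion: number): number {
  return (tokens * ratePerMillion) / 1_000_000;
}

const sonnetRate = 3; // assumed blended $/1M tokens

console.log(estimateTaskCost(150_000, sonnetRate)); // 0.45
console.log(estimateTaskCost(400_000, sonnetRate)); // 1.2

// An Opus-class model at roughly 6x the rate makes the same
// token footprint much pricier:
console.log(estimateTaskCost(400_000, sonnetRate * 6)); // 7.2
```

Running this kind of estimate before a long session is exactly the budget discipline the sidebar cost meter supports.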

GPT-5 is in a similar range to Opus for complex work. Gemini 3 is cheaper on output but the effective cost depends on how you configure context windows. Local models via Ollama cost nothing per token after the hardware, but quality on complex multi-file tasks drops noticeably compared to frontier models.

The practical discipline is to start with Sonnet-class models for most work, switch to Opus or GPT-5 for tasks that genuinely need more reasoning power, and use plan mode to avoid expensive multi-step sessions that go off-track. Cline’s per-session cost display helps here because you see the meter running. Set a mental budget per task and cancel if you’re burning through it without results.

Cline Cloud exists for teams that want a managed billing layer instead of distributing API keys to every developer. It routes requests through Cline’s infrastructure and provides a unified billing console. Pricing is separate from the open-source core.

Where Cline shines and where it stumbles

Cline does its best work on tasks with clear scope and a testable outcome. “Add rate limiting to the /api/search endpoint, write unit tests, and make them pass” is a good Cline task. The agent can read the existing code, understand the pattern, implement the middleware, write the tests, run them, and fix failures. It’s the kind of work that takes an experienced engineer 45 minutes and Cline can do it in 10, with your oversight costing another 10 minutes of approvals and spot-checks.

Where it struggles is ambiguous, design-heavy work. “Improve the architecture of the data layer” is a bad Cline task if you haven’t first defined what “improve” means. The agent will make choices you may not like, and unwinding them approval by approval is tedious. Cline rewards clear input. It’s an amplifier, not a decision-maker.

The other friction point is context management. On large monorepos, the agent can burn a lot of tokens reading files to orient itself before it starts writing anything. You need to give it good starting points rather than letting it explore freely. Point it at the right files in your prompt, and you cut both cost and time.

There’s also no persistent memory across sessions in the open-source version. Each new session starts cold. If you’re working on a complex, multi-day task, you either leave the session open or brief the agent again at the start. This is a real gap compared to tools that maintain project context between sessions.

Who Cline is built for

Cline fits engineers who want to delegate meaningful work to an AI agent but aren’t willing to cede oversight. If your mental model is “I’m the senior engineer and the AI is a very capable junior,” Cline matches that dynamic well. You review, you approve, you redirect. The agent does the typing.

It’s particularly well-suited to developers at companies with compliance requirements around code sharing. Because you control the API key and can point it at a provider with appropriate data handling agreements, there’s a clear chain of custody. No Cline server sees your code.

Developers building on open-source stacks who don’t want a subscription dependency will also find it natural. There’s nothing to renew, no seat count to manage, no account to lose access to. The extension is installed locally, the keys are yours, and the code stays on your machine or your provider.

It’s less well-suited to developers who want AI assistance woven into their moment-to-moment typing flow. Cline isn’t trying to be an inline autocomplete tool. If that’s the primary use case, Cursor or Continue are better fits. Cline is for task-level delegation, not keystroke-level assistance.

Cline vs the alternatives

Cline vs Cursor

Cursor is an AI-first editor that wraps VS Code and owns the whole experience. It has superior autocomplete, a polished chat interface, and a subscription that handles model access for you. Cline is an extension that adds autonomous capabilities to your existing VS Code setup without replacing anything. Cursor’s agent mode is capable, but it’s less transparent than Cline’s approval loop, and the model routing isn’t fully visible. If you already love VS Code and want an agent rather than a new editor, Cline is the cleaner choice. If you want a fully integrated AI coding environment with great tab completion, Cursor wins on that dimension. See the full breakdown of Cursor alternatives.

Cline vs Claude Code

Claude Code is Anthropic’s own CLI agent, designed to run in the terminal and work across your entire system. It’s not a VS Code extension, so it doesn’t have the same visual review flow. Claude Code is tighter, faster for terminal-native engineers, and integrates directly with git. Cline has the richer visual interface, the broader model support (Claude Code only runs on Claude), and the MCP ecosystem. Engineers who live in the terminal will prefer Claude Code. Engineers who live in VS Code will prefer Cline. They’re meaningfully different tools for meaningfully different workflows.

Cline vs Aider

Aider is a terminal-native, open-source coding agent that works through tasks as a series of git patches. It’s fast, it’s minimal, and it commits as it goes. Cline is heavier: it has a full VS Code UI, browser use, MCP support, and the approval interface. Aider is better for clean, surgical changes on git-tracked code where you want minimal ceremony. Cline is better when you want richer tooling, visual diffs, and a broader set of capabilities. Many developers run both: Aider for quick targeted edits, Cline for larger multi-step tasks. Check the best AI agents for coding to see how the full category stacks up.

Getting started

Install the Cline extension from the VS Code Marketplace by searching for “Cline.” Once installed, open the Cline panel from the sidebar. You’ll be prompted to add an API provider. Paste in your Anthropic, OpenAI, or Google API key, select a model, and you’re ready.

For a first task, try something concrete and bounded: “Read src/utils/format.ts and add a formatCurrency function that follows the pattern of the existing functions. Write a unit test.” That gives the agent clear scope, a reference pattern to follow, and a verifiable output.
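For a sense of what you’d be approving, here is a hypothetical sketch of the kind of diff that prompt might produce (the existing patterns in your repo will differ):

```typescript
// Hypothetical sketch of what the agent might propose for the
// formatCurrency task; follows no particular repo's conventions.
export function formatCurrency(
  amount: number,
  currency: string = "USD",
  locale: string = "en-US"
): string {
  // Intl.NumberFormat handles grouping, decimals, and the symbol.
  return new Intl.NumberFormat(locale, {
    style: "currency",
    currency,
  }).format(amount);
}

console.log(formatCurrency(1234.5)); // "$1,234.50"
```

Reviewing a diff of this size takes seconds, which is what makes the approval loop cheap for well-scoped tasks.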

Turn on plan mode for your first few sessions. Ask the agent to describe what it’s going to do before it starts. Read the plan, ask questions in the chat, and only then let it proceed. You’ll get a feel for how the approval loop works and how to redirect the agent when its plan is off.

If you want to connect MCP tools, the documentation at docs.cline.bot walks through server configuration. Start with the filesystem and git MCP servers before building anything custom. They’re immediately useful and teach you how the tool-call approval flow works in practice.

The bottom line

Cline is what happens when you build an autonomous coding agent for engineers who don’t trust black boxes. It’s open source, free to install, works with every major AI provider, and shows you exactly what it’s doing before it does it. The cost discipline is on you because you’re paying provider rates directly, but that’s the cost of knowing what’s actually happening.

It’s not the most polished product in the category. The setup is manual, the UX is functional rather than refined, and there’s no persistent memory between sessions. But for the engineer who wants to delegate real tasks to an agent while staying in control of every step, Cline is the most honest option available. It earns its place in the sidebar because it respects your judgment rather than replacing it.

Key features

  • Step-by-step transparency with explicit approval for every file write and command
  • Bring-your-own-key support for Anthropic, OpenAI, Google, Mistral, and local models
  • MCP (Model Context Protocol) client for connecting custom tools and data sources
  • Browser and computer use for web research and UI testing
  • Plan mode for reviewing the agent's strategy before it touches a single file
  • Terminal command execution with full output streaming in VS Code
  • Multi-file edits with diffs shown inline before acceptance

Pros and cons

Pros

  • + Full step-by-step visibility into every action the agent takes
  • + Bring your own API keys, so you control model choice and costs directly
  • + Completely open source, auditable, and extensible
  • + Plan mode lets you review strategy before the agent writes a single line
  • + MCP support means you can connect it to any custom tool or data source
  • + Works with virtually any AI backend, including local models via Ollama

Cons

  • − No managed UX, setup requires comfort with API keys and VS Code extensions
  • − API costs can spike fast on large multi-step tasks with expensive models
  • − No built-in cloud workspace or persistent memory across sessions
  • − Less polished autocomplete experience compared to dedicated editors like Cursor

Who is Cline for?

  • Backend engineers who want to delegate full feature implementation while staying informed
  • Developers running open-source or self-hosted stacks who can't share code with SaaS tools
  • Teams that need reproducible, auditable AI-assisted workflows with no black-box steps
  • Polyglot developers who want one agent that works across every language and stack

Alternatives to Cline

If Cline isn't quite the right fit, the closest alternatives are Cursor, Claude Code, and Aider. See our full Cline alternatives page for side-by-side comparisons.

Frequently Asked Questions

What is Cline?
Cline is an open-source autonomous coding agent that installs as a VS Code extension. It gives an AI model access to your file system, terminal, browser, and any tools you connect via MCP. You bring your own API keys from providers like Anthropic, OpenAI, or Google, so you control which model runs and what you pay. Every action the agent takes, from editing a file to running a shell command, requires your explicit approval. It started as a project called claude-dev in 2024 and was later renamed Cline as it expanded support beyond Claude models.
Is Cline free?
Cline itself is free and open source under the MIT license. You install it in VS Code at no cost. What you pay for is API usage through your own provider accounts. If you use Claude Opus 4.7 for a complex refactor, that token cost goes directly to your Anthropic account. There's no Cline subscription fee on top of that. An optional Cline Cloud service exists for managed billing, but it's not required.
How is Cline different from Cursor?
Cursor is a full fork of VS Code with AI baked into the editor itself. Cline is an extension that drops into VS Code without replacing the editor. Cursor manages the model for you behind a subscription. Cline gives you raw access to any API backend with full cost transparency. Cursor is more polished for everyday autocomplete. Cline is more capable for long autonomous tasks where you want to see every step. They're not competing for the same moment, they're competing for different workflows.
Does Cline work with Claude or GPT?
Cline works with virtually any major AI provider. You can configure it with Claude models from Anthropic, GPT-4o or GPT-5 from OpenAI, Gemini from Google, Mistral, Cohere, and local models running via Ollama or LM Studio. Each request routes through your own API key. You can switch backends per project or even mid-session. Claude tends to perform best for complex multi-step coding tasks, which is why it was the original default.
How does Cline compare to Aider?
Both are open-source coding agents that use your own API keys. Aider is terminal-first and excels at git-centric workflows, applying patches cleanly and committing as it goes. Cline lives in VS Code and adds browser use, MCP tool connections, and a visual approval UI. Aider is faster for engineers who live in the terminal. Cline is better when you want richer tooling and a visual review layer. Many developers use both depending on the task.
Is Cline production-ready?
Cline is stable enough to use on real codebases. It has tens of thousands of GitHub stars and an active contributor community. The approval-before-action model means it won't destroy your working tree without your consent. That said, it's still an autonomous agent, and complex multi-step tasks can go sideways. Use it with a clean git branch, keep plan mode on for unfamiliar tasks, and review diffs before you accept them. For greenfield work and refactors on well-tested code, it's production-capable.
