
Sourcegraph Cody

AI coding assistant that uses Sourcegraph's code graph for monorepo-scale context


Sourcegraph Cody is an AI coding assistant built on top of Sourcegraph's code graph, which means it can find and read the right files across a massive codebase without you having to paste context manually. Where tools like Copilot or Cursor work mainly from open files and a local index, Cody queries the same graph that Sourcegraph uses for code search, giving it a structural understanding of your entire repository or set of repos. That makes it particularly strong for monorepos, platform teams, and enterprises where the relevant code could live anywhere. It supports multiple AI models, integrates with VS Code and JetBrains, and comes with enterprise-grade compliance controls. For small projects, the advantage is less obvious, but for large-scale engineering organizations, it's the most context-aware AI assistant available.

Most AI coding assistants have the same basic problem: they only know what you’ve shown them. Open a file, maybe a few more, and the model does its best with what’s visible. That works fine on a 10-file side project. It breaks down badly when you’re working in a 500,000-line monorepo where the function you need is three service boundaries away and you don’t even know it exists. Sourcegraph Cody was built to solve that problem. It uses Sourcegraph’s code graph as its retrieval layer, so it can find the right context before it answers, not just work with whatever you happen to have open. If you’ve ever watched Copilot hallucinate an internal API that does exist in your codebase but that the model simply can’t see, Cody is the tool worth looking at.

Quick verdict

Cody is the best AI coding assistant for large, complex codebases. Its code graph context is a genuine technical differentiator that no other mainstream tool matches at scale. The trade-off is real: for small projects, that advantage evaporates and the pricing looks less competitive against Cursor or GitHub Copilot. If you’re a platform engineer or part of an enterprise team with a serious monorepo, Cody belongs in your toolkit. If you’re a solo developer on a personal project, it’s probably more than you need.

What is Sourcegraph Cody, exactly?

Sourcegraph has been building code search infrastructure since 2013. Before AI assistants were a category, Sourcegraph was the tool engineering teams used to search across massive codebases, understand how internal APIs were used, track deprecations, and make sense of code they didn’t write. That’s the foundation Cody is standing on.

When you ask Cody a question or trigger a completion, it doesn’t just look at your open editor tabs. It queries the Sourcegraph code graph, which is a live, indexed, structural representation of your codebase. That graph knows about symbols, references, call sites, definitions, and relationships between files. Cody uses that knowledge to select the most relevant context before passing anything to the language model. The result is that Cody tends to give you answers that account for how your actual codebase is structured, not just what the training data says a typical codebase looks like.

That structural awareness is the core differentiator. Other tools index files locally or use embedding similarity to guess what might be relevant. Cody uses Sourcegraph’s graph traversal, which means it can follow a function call through multiple files, understand which modules import what, and surface code that’s genuinely relevant even when you didn’t know to look for it.
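To make the retrieval idea concrete, here is a toy Python sketch of graph-based context selection. This is not Sourcegraph's implementation, just the shape of the technique: starting from a symbol, walk definition and reference edges to collect files that an open-tabs-only tool would never see. All symbol names, file paths, and edges are hypothetical.

```python
from collections import deque

# Toy "code graph": symbol -> (defining file, symbols it references).
# Hypothetical stand-in for a real, indexed code graph.
CODE_GRAPH = {
    "handle_login":     ("api/auth.py",       ["verify_token", "AuditLog"]),
    "verify_token":     ("lib/tokens.py",     ["load_signing_key"]),
    "load_signing_key": ("lib/keys.py",       []),
    "AuditLog":         ("platform/audit.py", []),
}

def graph_context(start_symbol: str, max_hops: int = 2) -> list[str]:
    """Breadth-first walk over definition/reference edges,
    collecting the file that defines each reachable symbol."""
    seen, files = {start_symbol}, []
    queue = deque([(start_symbol, 0)])
    while queue:
        symbol, hops = queue.popleft()
        path, refs = CODE_GRAPH[symbol]
        if path not in files:
            files.append(path)
        if hops < max_hops:
            for ref in refs:
                if ref not in seen:
                    seen.add(ref)
                    queue.append((ref, hops + 1))
    return files

# An open-tabs-only assistant sees just the current file; a graph walk
# from the same symbol surfaces its whole dependency slice.
print(graph_context("handle_login"))
```

A real code graph holds millions of edges and ranks what it returns, but the point survives the simplification: traversal follows actual structure rather than guessing relevance from text similarity.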

The product itself ships as a VS Code extension, a JetBrains plugin, a web interface, and a CLI. You get inline completions, a chat panel, and the ability to run Cody against specific files or selections. For Enterprise customers, there’s also a server-side deployment option that keeps code off third-party infrastructure entirely, which matters in regulated industries.

Cody also has a model picker, which is an increasingly important feature as the AI model landscape keeps changing. You can choose your underlying model per session, which means you’re not locked into whatever Sourcegraph decides is best. As of mid-2026, the supported models include Claude Opus 4.7, Claude Sonnet 4.6, GPT-5, and Gemini 3.

The features that earn Cody a spot in your stack

Code graph context for monorepos

This is the feature that separates Cody from everything else at scale. When Sourcegraph indexes your codebase, it builds a graph of symbols, their definitions, and their references. Cody can traverse that graph at query time to pull in context that’s relevant to what you’re working on, even if those files aren’t open.

In practice, this means you can ask Cody how a shared authentication middleware works and it’ll find the actual implementation, the places it’s called, and any configuration that affects its behavior, without you having to know the file path ahead of time. For a developer who’s new to a codebase or working in unfamiliar territory, that’s not a minor convenience. It’s the difference between getting a useful answer and getting a plausible-sounding fiction.

The graph also updates as your code changes. This isn’t a snapshot you take once. Sourcegraph continuously indexes new commits, which means Cody’s context reflects your current codebase state rather than a stale index from last week.

Multi-model picker

Cody lets you choose which underlying model handles your request. You can use Claude Sonnet 4.6 for quick completions where latency matters, then switch to Claude Opus 4.7 for a complex architectural question where you want more careful reasoning, then try GPT-5 if you want a second opinion with different training characteristics.

This flexibility matters. Model quality is not static and different models have genuine strengths in different situations. Cody’s picker lets you act on that instead of hoping your vendor made the right choice for you.

Enterprise customers can go further and bring their own LLM endpoint. If your organization has negotiated a custom deployment with Anthropic or Microsoft or runs an open-source model on private infrastructure, Cody can route requests there instead of to Sourcegraph’s hosted models.

Inline completions and chat

The day-to-day interface is familiar. You get ghost-text completions as you type, with the option to accept, reject, or cycle through alternatives. The chat panel lets you ask questions about code, request rewrites, ask for test stubs, or get explanations of unfamiliar logic. There’s also inline editing, where you highlight a block and give a natural-language instruction.

What’s different from Copilot-style completions is what feeds them. Because Cody has access to the code graph, its inline suggestions can pull in types and patterns from files you’ve never opened. A completion that correctly uses your internal utility function’s actual signature, rather than a generic version of it, is the kind of thing that saves real debugging time.

The chat interface also lets you explicitly reference files, symbols, or repositories with @ mentions, giving you manual control over context when you want it. The automatic context is good; the manual control is there when it isn’t.
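For illustration, a prompt that pins context manually might look like the following. The file paths are hypothetical and the exact mention syntax may vary by client version; the point is that you can name the files and symbols yourself instead of relying on automatic retrieval:

```
Explain how @services/auth/middleware.go decides when to refresh a
session, and show me where @lib/session/store.go is called from.
```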

Cross-repo intelligence

On Enterprise plans, Cody’s context scope expands beyond a single repository. If your organization uses Sourcegraph to index multiple repos, Cody can pull context from all of them. That’s significant for platform engineering scenarios where a service depends on a shared library maintained in a separate repo, or where you’re debugging an integration between two services that live in different codebases.

Most AI coding assistants cap out at single-repo context. Cross-repo intelligence is a feature that only makes sense if you’ve built the indexing infrastructure to support it, which is exactly what Sourcegraph spent the last decade doing.

Enterprise compliance

For engineering organizations that operate under SOC 2, HIPAA, or similar frameworks, Cody’s enterprise tier includes the controls you need. You get SSO, role-based access, audit logging of prompts and responses, and the option to run everything in your own infrastructure so code never leaves your environment.

The bring-your-own-LLM option is particularly important here. If your legal or security team won’t approve sending source code to a third-party AI API, self-hosted Sourcegraph with your own model endpoint satisfies that constraint while still giving developers a capable AI assistant.

Pricing

Cody’s pricing has four layers.

The free tier includes inline completions and chat with a monthly usage cap. The cap isn’t published explicitly, but in practice it’s enough for a developer doing normal coding work, less so for someone spending six hours a day in chat. You also get access to multiple AI models on the free tier, which is genuinely unusual.

Pro is approximately $9 per user per month. This raises the usage limits significantly and is appropriate for individual developers or small teams where the free tier is starting to feel restrictive. You don’t get the Sourcegraph code search bundle at this tier; your context is local to what you have indexed in your editor.

Enterprise Starter runs approximately $59 per user per month. This is where the code graph story actually lands. Enterprise Starter includes Sourcegraph’s code search platform alongside Cody, which is what enables the cross-repo intelligence and the code graph context at full scale. For teams already paying for Sourcegraph, this tier is a natural expansion. For teams evaluating Cody in isolation, the price requires honest justification against what GitHub Copilot costs at scale.

Enterprise custom pricing is available for large organizations with specific requirements around data residency, model choice, SLAs, or compliance frameworks. This is where the bring-your-own-LLM and fully self-hosted options live.

One thing worth noting about the pricing structure: if you’re comparing Cody Pro at $9 against Copilot or Cursor, you’re comparing tools that work differently. At the Pro tier, Cody’s context advantage over those alternatives is real but modest. The full code graph capability requires Enterprise, and that’s a different price conversation.
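As a rough sketch of that price conversation, the arithmetic below uses this review's listed prices plus an assumed $19 per user per month for GitHub Copilot Business; verify current vendor pricing before relying on the numbers.

```python
# Back-of-envelope annual cost for a hypothetical 50-developer team.
# $9 (Cody Pro) and $59 (Enterprise Starter) come from this review;
# $19/user/mo for Copilot Business is an assumption to check.
TEAM_SIZE = 50
MONTHS = 12

def annual_cost(per_user_per_month: int) -> int:
    return per_user_per_month * TEAM_SIZE * MONTHS

cody_pro         = annual_cost(9)    # Cody Pro, no code graph bundle
cody_ent_starter = annual_cost(59)   # Enterprise Starter, full code graph
copilot_business = annual_cost(19)   # assumed Copilot Business list price

# The premium buys the code graph; whether it pays off depends on
# how much time the team spends hunting context in a big codebase.
print(cody_ent_starter - copilot_business)  # prints 24000
```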

Where Cody wins and where it doesn’t

Cody’s strongest position is in large, complex engineering organizations where codebase context is genuinely hard to manage. If you’re a backend engineer who regularly has to understand services written by other teams, or a platform engineer whose shared libraries are consumed in dozens of places, the code graph context isn’t a marketing claim. It produces noticeably better answers because it’s working from a more accurate representation of your codebase than any other tool can access locally.

The place Cody doesn’t win is on smaller projects where that advantage doesn’t apply. If you’re working on a personal project with three directories, or a startup app where you can hold the whole structure in your head, you don’t need code graph retrieval. You need fast, accurate completions and good chat, and tools like Cursor deliver that with a more polished editor experience at lower cost for individuals.

There’s also an honest onboarding cost. Getting full value from Cody requires running Sourcegraph, which is its own setup if your organization isn’t already using it. Free and Pro users get a simpler path with the VS Code extension, but they’re also getting a stripped-down version of the core value proposition. The full story only shows up once Sourcegraph is indexing your codebase.

For enterprise teams that are already Sourcegraph customers, the calculus is easy: Cody is the natural AI layer for the infrastructure you’ve already built. For everyone else, the decision depends on how big and complex your codebase actually is.

Who Cody is built for

The clearest fit is a platform or infrastructure engineer at a company with a real monorepo. Think a team maintaining shared authentication, internal SDKs, or data pipeline libraries that are used across a hundred microservices. These engineers spend a lot of time understanding code they didn’t write, navigating dependencies they can’t always anticipate, and answering questions from consumers of their APIs. Cody’s code graph context directly addresses that workflow in a way that no other AI assistant does.

Enterprise engineering organizations with compliance requirements are the second clear fit. The self-hosted option, bring-your-own-LLM, SSO, and audit logging make Cody deployable in environments where a SaaS AI tool is simply off the table. That’s a meaningful niche.

Teams already using Sourcegraph for code search are the third group. Adding Cody to an existing Sourcegraph deployment is a natural step, and those teams get the full code graph benefit without additional infrastructure overhead. For those developers, Cody isn’t a competitor to other AI tools. It’s the AI layer that fits their existing stack.

Solo developers and small teams on small codebases are a weaker fit. Cody works for them, but it won’t feel different from the alternatives. See the best AI agent for coding if you’re still picking your first tool.

Cody vs the alternatives

Cody vs GitHub Copilot. Copilot is the incumbent and it’s deeply integrated into GitHub’s ecosystem. For developers who live in GitHub’s pull request and issue flow, that integration is genuinely useful. But Copilot’s context is shallow: it reads your open files plus a fuzzy embedding index of your repo, and that’s about it. In a large codebase, it consistently misses code that’s relevant but not visible. Cody wins on context depth in those scenarios. Copilot wins on GitHub ecosystem integration and on simplicity for individuals. The Copilot vs Cody comparison goes deeper on specific use cases.

Cody vs Cursor. Cursor is an excellent choice for individual developers and small teams. Its editor experience is polished, its Composer mode for multi-file edits is the best in class, and its Tab completion has learned tricks that feel almost predictive. But Cursor is fundamentally an editor-level tool. Its codebase indexing is local and embedding-based. At 100k lines it holds up. At 500k lines with cross-repo dependencies, you’re going to feel the context ceiling. Cody doesn’t have the same editor polish as Cursor, but it operates at a scale Cursor doesn’t reach.

Cody vs Tabnine. Tabnine positioned itself early on privacy and on-premises deployment, which makes it a Cody competitor in enterprise contexts. Tabnine’s private deployment story is solid. What it doesn’t have is Sourcegraph’s code graph retrieval. Tabnine gives you a private AI assistant; Cody gives you a private AI assistant that actually understands the structure of your codebase. For organizations where both privacy and context quality matter, Cody is the stronger technical choice.

The broader picture is that Cody occupies a specific position: it’s the AI coding assistant that trades editor simplicity for codebase depth. If depth is what your work requires, no current alternative matches it.

Getting started

The fastest path to Cody is the VS Code extension. Search for “Cody AI” in the extensions marketplace, install it, and sign in with a Sourcegraph account. The free tier activates immediately without a credit card. You’ll get inline completions and chat within a few minutes of installation.

For JetBrains users, the process is the same through the JetBrains Marketplace. Cody supports IntelliJ IDEA, PyCharm, GoLand, and the rest of the JetBrains suite.

If you want the full code graph experience, you need to connect Cody to a Sourcegraph instance. Sourcegraph.com hosts a free tier that indexes public repositories. For private repositories and team use, you’ll need a Sourcegraph instance that has indexed your codebase. Enterprise Starter includes the setup resources to get there; Sourcegraph’s documentation covers self-hosted deployment if that’s the route you’re taking.

The CLI option is available for developers who prefer terminal workflows or want to build Cody into scripts and automation pipelines. It’s less polished than the IDE integrations but fully functional for question-answering and code generation tasks.

Start with the free VS Code extension. If you’re working on a large private codebase and the free tier’s local context feels limiting, that’s when the Enterprise conversation becomes worth having.

The bottom line

Cody isn’t trying to be the most accessible AI coding assistant. It’s trying to be the most context-accurate one at scale. For the right use case, that’s a trade worth making. A monorepo team that’s been limping along with Copilot and manually pasting context into every chat prompt will feel the difference in their first real debugging session. An enterprise team that needs compliance controls and model flexibility will find fewer compromises in Cody than in any alternative.

For solo developers on personal projects, the pitch is thinner. You can use Cody, but you probably won’t use it differently from a cheaper tool with better editor ergonomics.

The code graph is a real technical moat, and Sourcegraph has spent a decade building it. Cody is the best reason yet to care about that investment.

Key features

  • Code graph context that pulls from Sourcegraph's indexed codebase, not just open files
  • Multi-model picker: choose Claude Opus 4.7, Sonnet 4.6, GPT-5, or others per session
  • Inline completions and chat in VS Code, JetBrains, and the web UI
  • Cross-repo intelligence for understanding dependencies and shared libraries
  • Enterprise SSO, audit logs, and bring-your-own-LLM support
  • CLI access for terminal-based workflows

Pros and cons

Pros

  • + Code graph context gives it genuine codebase-wide awareness, not a shallow local file index
  • + Multi-model flexibility: swap between Claude, GPT, and others without leaving the IDE
  • + Cross-repo intelligence is practically unique among AI coding tools
  • + Enterprise tier includes audit logs, SSO, and bring-your-own-LLM
  • + Free tier is genuinely usable, not artificially hobbled
  • + Backed by Sourcegraph's decade of code-search infrastructure

Cons

  • − For solo developers or small repos, the context advantage is marginal
  • − Enterprise Starter at $59/user/mo is a hard sell next to cheaper alternatives
  • − Sourcegraph's own platform adds onboarding complexity for teams not already using code search
  • − The free tier usage caps hit faster than you'd expect on busy days

Who is Sourcegraph Cody for?

  • Platform and infrastructure teams working in 500k+ line monorepos who need accurate cross-file and cross-service context
  • Enterprise engineering organizations with compliance requirements around data residency and model provenance
  • Developers already using Sourcegraph for code search who want AI assistance that shares the same index
  • Backend engineers debugging unfamiliar codebases where the relevant logic is spread across dozens of files

Alternatives to Sourcegraph Cody

If Sourcegraph Cody isn't quite the right fit, the closest alternatives are GitHub Copilot, Cursor, and Tabnine. See our full Sourcegraph Cody alternatives page for side-by-side comparisons.

Frequently Asked Questions

What is Sourcegraph Cody?
Sourcegraph Cody is an AI coding assistant that integrates with Sourcegraph's code graph to provide context-aware code completions, chat, and cross-repo intelligence. Unlike tools that only read your open files, Cody can query Sourcegraph's indexed representation of your entire codebase to find the most relevant code before it responds. It's available as a VS Code extension, a JetBrains plugin, a web interface, and a CLI, and it supports multiple underlying AI models including Claude and GPT-5.
Is Cody free?
Yes. Cody has a free tier that includes inline completions and chat with usage limits. For heavier use, the Pro plan is around $9 per user per month. Teams that need Sourcegraph code search bundled with Cody should look at Enterprise Starter at approximately $59 per user per month. Enterprise pricing for large organizations with custom compliance needs is available on request.
How does Cody compare to GitHub Copilot?
The main difference is context depth. Copilot works from your open files and a limited local index. Cody queries Sourcegraph's full code graph, which means it can find relevant code across your entire repository or across multiple repos without you manually providing it. Copilot is simpler to set up and works well for individual developers; Cody's advantage scales with codebase size and team complexity.
Is Cody good for monorepos?
Yes, and that's arguably its strongest use case. Sourcegraph was built for large, complex codebases, and Cody inherits that architecture. In a monorepo where a function you're editing depends on a shared library five directories away, Cody can find and include that context automatically. Most other AI coding assistants won't locate that dependency unless you open the files yourself.
What models does Cody use?
Cody supports multiple models and lets you pick per session. As of mid-2026, you can use Claude Opus 4.7, Claude Sonnet 4.6, GPT-5, and Gemini 3, depending on your plan. Enterprise customers can also bring their own LLM endpoint, which is useful for organizations with strict data-handling requirements.
Can Cody read across multiple repos?
Yes, when connected to a Sourcegraph Enterprise instance that indexes multiple repositories. Cody can search and pull context from any repo that Sourcegraph has indexed, which is particularly useful for platform teams building shared libraries that are consumed across many services. Free and Pro tier users working locally get single-repo context.
