Amazon Bedrock Agents
AWS-native AI agent platform built on Bedrock with Lambda actions and Guardrails
Amazon Bedrock Agents is AWS's managed service for building AI agents that orchestrate multi-step tasks, call external APIs through Lambda functions, retrieve private data via Knowledge Bases, and enforce compliance rules through Guardrails. You pick a foundation model hosted on Bedrock (Claude, Llama, Amazon Nova, Mistral, and others), wire in your action groups and data sources, and AWS handles the orchestration loop, session state, and retry logic. It's not a polished product aimed at non-technical users. It's a set of managed building blocks for teams that are already inside the AWS ecosystem and want agent infrastructure that inherits IAM, VPC, CloudTrail, and the rest of the AWS compliance stack. If your organization already runs its workloads on AWS and needs agents that never leave your cloud environment, Bedrock Agents is the most natural path. If you're starting from scratch, the setup cost is real.
If your AWS bill already runs six figures a month and you’ve been told you need AI agents, Amazon Bedrock Agents is probably already on your shortlist. It launched in November 2023 as AWS’s answer to the question every enterprise architect was asking: can we build AI agents without moving our data off AWS? The short answer is yes. The longer answer involves IAM roles, Lambda functions, CloudWatch logs, and a meaningful amount of configuration work before your first agent does anything useful. That’s the honest trade-off at the center of Bedrock Agents, and it’s worth understanding before you commit.
Quick verdict
Bedrock Agents is a solid infrastructure-layer choice for organizations that are already deep in the AWS stack and need agents that respect their existing compliance posture. It’s not the fastest path to a working agent, and it’s not cheap once you stack Knowledge Bases, Guardrails, and Lambda costs together. But if IAM-controlled access, VPC isolation, and CloudTrail audit logs are non-negotiable, few alternatives come close.
What are Amazon Bedrock Agents, exactly?
The name does some work here. Bedrock Agents isn’t a standalone AI agent product the way Devin or Glean are. It’s a managed orchestration layer that sits on top of the Bedrock foundation model service, adding the infrastructure to turn a model call into a multi-step, action-taking workflow.
Here’s the basic mental model: you configure an agent with a foundation model (your choice of Claude, Llama, Nova, Mistral, or others), a set of Action Groups that define what the agent can do, and optionally a Knowledge Base for private data retrieval. When a user sends a request, Bedrock passes it to the model, which reasons about the task, decides which action to call, triggers the corresponding Lambda function, incorporates the result, and continues until the task completes or hits a stopping condition.
AWS handles the orchestration loop. Session state persists across turns. You don’t write the ReAct loop or manage the context window manually. What you do write is the Lambda functions behind each action, the IAM roles that govern what those functions can access, and the prompts that define how the agent interprets its role.
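That division of labor is easier to see as a schematic loop. The sketch below is an illustration of the pattern Bedrock manages for you, not its actual implementation; `model` and `actions` are hypothetical stand-ins for the foundation model and your Lambda-backed actions:

```python
def run_agent(model, actions, user_input, max_steps=8):
    """Schematic of the orchestration loop Bedrock manages for you.

    `model` is any callable that returns either a final answer or an
    action request; `actions` maps action names to Lambda-like callables.
    Both are hypothetical stand-ins, not Bedrock APIs.
    """
    history = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        step = model(history)                # model reasons over the full history
        if step["type"] == "final":          # stopping condition reached
            return step["content"]
        result = actions[step["action"]](step["params"])  # invoke the "Lambda"
        history.append({"role": "action_result", "content": result})
    raise RuntimeError("agent exceeded max_steps without finishing")
```

The parts you actually write in Bedrock correspond to the bodies of the `actions` callables and the instructions that shape what `model` decides.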
This is a meaningful distinction. Bedrock Agents is a building-block service, not a finished product. You can’t sign up, connect it to your Slack, and have it answer IT tickets by Tuesday afternoon. What you can do is build an agent that calls your internal ticketing API, queries your private knowledge base, enforces your compliance rules, and logs every decision to CloudTrail, all without your data ever leaving your AWS account. For the enterprises that care about those properties, the build-it-yourself nature is a feature, not a gap.
The service also introduced Amazon Bedrock AgentCore more recently, a layer that brings support for open-source agent frameworks and provides managed hosting, making it easier to deploy agents built with tools like LangGraph or the Anthropic SDK without giving up AWS infrastructure guarantees.
The features that justify the AWS lock-in
Multi-model backend through Bedrock
One of Bedrock Agents’ genuine advantages is that the reasoning engine is swappable. You’re not locked into one model family. Claude Opus 4.7 and Sonnet 4.6 are available for tasks requiring strong reasoning. Amazon Nova models offer a cost-optimized option for high-volume, simpler workflows. Meta Llama variants and Mistral models round out the selection.
In practice, most teams end up picking one model and sticking with it, but the ability to swap matters when models update. When Anthropic released Claude Opus 4.7, Bedrock Agents users could update their agent configuration to the new model version without touching their Lambda functions or Knowledge Base setup. That kind of separation between reasoning engine and agent infrastructure is worth something over a multi-year deployment lifetime.
Cross-region inference profiles add another wrinkle: if your primary region runs hot during peak hours, Bedrock can route inference requests to an adjacent region automatically. For latency-sensitive workflows, this helps avoid the cold spots that come with single-region model deployments.
Lambda functions as agent actions
Action Groups are what give a Bedrock Agent the ability to do things beyond generating text. Each Action Group maps to one or more Lambda functions and defines the schema for how the model should call them, using an OpenAPI-style specification.
The model decides which action to invoke based on the task, constructs the parameters according to your schema, and Bedrock calls the Lambda on your behalf. Your Lambda can do anything a Lambda can do: call a REST API, query RDS or DynamoDB, write to S3, publish to SNS, trigger a Step Functions execution. The agent gets back whatever your Lambda returns, incorporates it into its reasoning, and continues.
This is powerful, but it puts the complexity on you. Writing clean OpenAPI schemas that the model reliably interprets, handling Lambda errors gracefully, and keeping action definitions narrow enough that the model doesn't make wrong choices about when to call them: these are skills you develop through iteration. The model isn't magic; it calls the actions it thinks are appropriate based on how you've described them.
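A minimal handler for an OpenAPI-style Action Group looks roughly like the sketch below. The event and response envelope follow the documented Bedrock Agents Lambda contract; the `/tickets/{ticketId}` path and the `get_ticket` helper are hypothetical stand-ins for your own API:

```python
import json

def lambda_handler(event, context):
    """Handler for a Bedrock Agents action group (OpenAPI-style).

    Bedrock passes the apiPath the model chose and the parameters it
    extracted; the handler returns a response envelope that Bedrock
    folds back into the agent's reasoning.
    """
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    if event["apiPath"] == "/tickets/{ticketId}":
        body = get_ticket(params["ticketId"])   # your real integration goes here
    else:
        body = {"error": f"unhandled path {event['apiPath']}"}

    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "apiPath": event["apiPath"],
            "httpMethod": event["httpMethod"],
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": json.dumps(body)}},
        },
    }

def get_ticket(ticket_id):
    # Hypothetical stand-in for a real ticketing-system lookup.
    return {"ticketId": ticket_id, "status": "open"}
```

Keeping each handler narrow, one clear path per operation, is what makes the model's choice of action predictable.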
Knowledge Bases for RAG
Bedrock Knowledge Bases is the RAG layer. You point it at a data source (S3, Confluence, SharePoint, Salesforce, or a web crawler) and it handles chunking, embedding, and vector storage in an AWS-managed vector database. When an agent needs to answer a question grounded in private data, it queries the Knowledge Base, retrieves relevant chunks, and uses them to inform its response.
The managed pipeline means you don’t run your own embedding job or maintain your own vector index. Reranking is available using Cohere Rerank 3.5 at $2.00 per 1,000 queries, which improves retrieval quality for large or noisy document sets. Structured data retrieval lets agents query SQL databases with natural language, generating SQL under the hood at $0.002 per request.
The operational convenience is real. The cost is also real, particularly if you’re doing high-volume lookups or using reranking on every query.
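You can also query a Knowledge Base directly, outside an agent, through the Retrieve API. A minimal sketch, assuming a `bedrock-agent-runtime` client and a placeholder knowledge base ID:

```python
def retrieve_chunks(client, kb_id, query, top_k=5):
    """Query a Bedrock Knowledge Base via the Retrieve API.

    `client` is a bedrock-agent-runtime client; `kb_id` and the query
    text here are placeholders for your own values.
    """
    resp = client.retrieve(
        knowledgeBaseId=kb_id,
        retrievalQuery={"text": query},
        retrievalConfiguration={
            "vectorSearchConfiguration": {"numberOfResults": top_k}
        },
    )
    # Each result carries the chunk text plus a relevance score.
    return [r["content"]["text"] for r in resp["retrievalResults"]]

# import boto3
# client = boto3.client("bedrock-agent-runtime")
# chunks = retrieve_chunks(client, "YOUR_KB_ID", "What is our refund policy?")
```

Calling Retrieve directly is a cheap way to sanity-check chunking and recall quality before you wire the Knowledge Base into an agent.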
Guardrails for safety and compliance
Guardrails is Bedrock’s safety layer, and it’s one of the more differentiated features for regulated industries. You can configure content filters to block harmful content categories, denied topics to prevent the agent from discussing things outside its scope, PII detection and redaction, and automated reasoning checks that verify factual claims against source documents.
Pricing is granular: $0.15 per 1,000 text units for content filters, $0.10 per 1,000 for sensitive information filters, $0.17 per 1,000 for automated reasoning checks. These costs stack per message, both on input and output, so a high-traffic agent with full Guardrails coverage will see those charges accumulate.
The compliance value is that Guardrails runs at the infrastructure layer, not in application code. An auditor can see that content filtering is configured in AWS and enforced on every agent interaction, without trusting that a developer remembered to add the check. For healthcare, financial services, and legal applications, that architecture matters.
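Guardrails can also be exercised standalone through the ApplyGuardrail API, which is useful for testing a policy before attaching it to an agent. A sketch, with placeholder guardrail ID and version, assuming a `bedrock-runtime` client:

```python
def check_with_guardrail(client, guardrail_id, version, text, source="INPUT"):
    """Run text through a Guardrails policy via the ApplyGuardrail API.

    `client` is a bedrock-runtime client; the guardrail ID and version
    are placeholders. Returns True when the guardrail let the text pass.
    """
    resp = client.apply_guardrail(
        guardrailIdentifier=guardrail_id,
        guardrailVersion=version,
        source=source,                       # "INPUT" or "OUTPUT"
        content=[{"text": {"text": text}}],
    )
    return resp["action"] != "GUARDRAIL_INTERVENED"
```

When the guardrail intervenes, the response also carries redacted or blocked output in its `outputs` field, which is what an auditor sees enforced uniformly across every interaction.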
Multi-agent collaboration
Bedrock Agents supports multi-agent patterns where a supervisor agent routes subtasks to specialized subagents. The supervisor breaks down a complex task, delegates components to agents with the right tools or knowledge, and aggregates results.
This is genuinely useful for workflows that benefit from parallelism or specialization: one subagent handles customer data lookups, another handles policy interpretation, a third generates the final response. The supervisor doesn’t need to know how each subagent works internally.
The catch is that multi-agent workflows multiply cost and complexity. Debugging why a supervisor made a particular routing decision, or why a subagent returned unexpected output, requires correlating logs across multiple agent sessions. It’s manageable, but it’s not simple.
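Stripped of the managed machinery, the supervisor pattern reduces to routing plus aggregation. A toy illustration of the pattern, not the Bedrock multi-agent API (which you configure through the console or agent-management APIs):

```python
def supervise(subagents, route, task):
    """Toy supervisor: split a task, delegate subtasks, aggregate results.

    `subagents` maps names to callables; `route` decides which subagent
    handles each subtask. Both are illustrative stand-ins for what
    Bedrock's managed supervisor does internally.
    """
    results = {}
    for sub in task["subtasks"]:
        name = route(sub)                  # the routing decision to debug later
        results[sub["id"]] = subagents[name](sub)
    return results
```

The debugging pain described above lives in `route`: when a real supervisor misroutes a subtask, reconstructing why means pulling trace data from each subagent's session.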
Pricing
Bedrock Agents has no base subscription fee. What you pay depends entirely on what you use, and those costs add up across several independent dimensions.
Model inference is the biggest line item. Claude Sonnet 4.6 on Bedrock runs $3 per million input tokens and $15 per million output tokens. Claude Opus 4.7 costs more. Amazon Nova Lite is significantly cheaper and worth considering for high-volume, lower-complexity tasks. Multi-turn agent conversations accumulate tokens fast because each turn re-sends the growing context window.
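That compounding is easy to underestimate, because input tokens grow roughly quadratically with turn count. A back-of-envelope calculator using the per-million-token rates above (illustrative token counts, ignoring system prompts and tool results):

```python
def conversation_cost(turns, tokens_per_user_msg, tokens_per_reply,
                      in_rate=3.0, out_rate=15.0):
    """Estimate model cost (USD) for one multi-turn agent conversation.

    Each turn re-sends the full history as input. Default rates are the
    $3/$15 per-million-token Claude Sonnet figures quoted in the text.
    """
    history = 0
    input_tokens = output_tokens = 0
    for _ in range(turns):
        history += tokens_per_user_msg
        input_tokens += history            # the whole history goes in each turn
        output_tokens += tokens_per_reply
        history += tokens_per_reply        # the reply joins the history
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
```

Doubling the turn count much more than doubles the input-token bill, which is why Nova-class models are attractive for chatty, low-complexity agents.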
Lambda invocations add the compute cost for your action groups. For most workloads this is small compared to model costs, but high-frequency agents with many actions will see it show up.
Knowledge Bases adds embedding costs when you ingest new documents, plus $2.00 per 1,000 retrieval queries if you use reranking. Without reranking, retrieval costs are lower but recall quality drops on complex queries.
Guardrails layers on top: $0.15 per 1,000 text units for content filtering, applied to both input and output on each turn. A text unit covers up to 1,000 characters, so a 2,000-character user message and a 1,500-character agent response together consume four text units per turn. Run 100,000 single-turn conversations a month and Guardrails alone adds around $60 to the bill, before model costs.
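The arithmetic generalizes into a small estimator, assuming text units of up to 1,000 characters metered separately on input and output, rounding up per side:

```python
import math

def guardrails_cost(conversations, input_chars, output_chars,
                    rate_per_1k_units=0.15):
    """Estimate monthly Guardrails content-filter cost in USD.

    Assumes a text unit covers up to 1,000 characters and that input
    and output are metered separately each turn, rounding up per side.
    """
    units_per_turn = (math.ceil(input_chars / 1000)
                      + math.ceil(output_chars / 1000))
    return conversations * units_per_turn * rate_per_1k_units / 1000

# guardrails_cost(100_000, 2_000, 1_500)  # -> 60.0 under these assumptions
```

Stacking sensitive-information filters or automated reasoning checks on top multiplies the per-turn unit count by each additional policy's rate.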
There’s no free tier for Bedrock Agents. AWS does offer free-tier credits for new accounts, but production workloads should be modeled with full pricing from the start.
Where Bedrock Agents wins and where it doesn't
Bedrock Agents wins when your requirements list includes things like: data never leaves my AWS account, every agent action must be auditable in CloudTrail, agents must respect existing IAM policies, and we need HIPAA or SOC 2 eligibility. Those requirements are hard to satisfy with third-party SaaS tools, open-source frameworks running on developer laptops, or even well-regarded competitors that operate in separate cloud environments.
The compliance story is genuinely strong because Bedrock inherits the AWS certification portfolio. You’re not trusting a startup’s security claims; you’re working within infrastructure your security team already audited.
Where Bedrock Agents struggles is developer experience and debugging. Setting up a working agent requires creating IAM roles for the agent to invoke Lambda, IAM roles for Lambda to call other AWS services, a KMS key if you want encryption, an S3 bucket for artifacts, and model access approvals in Bedrock itself, all before you've written any agent logic. First-time setup takes hours, not minutes.
Debugging is harder still. When an agent does something unexpected, the trace data in the console shows you the model’s reasoning steps and action calls, which is useful. But correlating that with Lambda logs in CloudWatch, checking for Guardrails blocks, and understanding why a Knowledge Base returned a particular set of chunks requires jumping between several AWS services. There’s no integrated debugger, no timeline view that shows you the full causal chain in one place.
Who Bedrock Agents is built for
The target customer is an engineering team inside an organization that’s already committed to AWS. Not dabbling in AWS, but running core infrastructure there, with established IAM practices, VPC architecture, and compliance programs that AWS certifications support.
Within that customer, the typical buyer is a platform or MLOps team building agent infrastructure for internal product teams, rather than a product team building an agent for end users directly. They want a managed runtime that gives application developers a clean interface without exposing raw model API keys or requiring every team to build their own orchestration layer.
Regulated industries are a natural fit: financial services firms that need SOC 2, healthcare organizations that need HIPAA eligibility, government contractors with data residency requirements. In those contexts, Bedrock Agents isn’t competing primarily on developer experience. It’s competing on trust, auditability, and existing vendor relationships.
Startups, individual developers, and teams without significant AWS infrastructure are unlikely to find the overhead worthwhile. The managed convenience Bedrock Agents provides assumes you already have the surrounding AWS scaffolding in place.
Bedrock Agents vs the alternatives
vs Glean: These don’t really compete. Glean is an enterprise search and work assistant product that connects to SaaS tools like Google Drive, Slack, and Salesforce, and surfaces answers to employees through a polished interface. It’s a finished product with a sales process, not a platform you build on. Bedrock Agents is the infrastructure layer for building something like Glean, or something entirely different. If you want to buy a knowledge assistant for your employees, Glean is the more direct path. If you want to build one that integrates with proprietary internal systems and respects your AWS compliance posture, Bedrock Agents gives you the primitives.
vs Claude Code: Claude Code is a coding agent that runs in your terminal and operates on local files. It’s a tool for individual developers and small teams. Bedrock Agents is server-side infrastructure for deploying agents to production at scale. They solve completely different problems. The more useful comparison might be: Claude Code is what a developer uses to write the Lambda functions that a Bedrock Agent calls.
vs Azure AI Studio agents: This is the most direct competition. Azure AI Studio offers a similar managed agent runtime with model choice (GPT-5, Phi, Llama), function calling through Azure Functions, retrieval via Azure AI Search, and content safety filters. The feature sets are broadly comparable. The real difference is which cloud you’re already in. Azure wins if you’re a Microsoft shop with existing Azure AD, Azure Functions, and Azure Cognitive Search investments. Bedrock Agents wins if you’re AWS-native. Neither is meaningfully better than the other on pure agent capability; the decision is almost entirely an infrastructure alignment question.
For teams evaluating Anthropic Computer Use or searching for the best AI agent for coding, Bedrock Agents occupies a different niche: it's not about operating a computer or writing code, but about building durable, compliant agent workflows inside a cloud environment your organization already trusts.
Getting started
The practical starting point is the Bedrock console in an AWS account where you’ve already requested model access for at least one foundation model. Model access isn’t automatic; you request it per model family per region, and some models require a short approval wait.
From there, the Agents section of the Bedrock console walks you through creating an agent: name it, pick a model, write an instruction prompt, and optionally attach Action Groups and Knowledge Bases. AWS provides a test console where you can send messages to the agent before deploying it.
For anything production-bound, the AWS SDK (Bedrock Agent Runtime client) is how you integrate agent invocation into your application. Python, JavaScript, Java, and Go all have first-class support. The InvokeAgent API call takes a session ID and input text; you get back the agent’s response and an optional trace object for debugging.
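A minimal invocation sketch with boto3; the agent, alias, and session IDs below are placeholders, and the streamed completion arrives as chunk events:

```python
def ask_agent(client, agent_id, alias_id, session_id, text):
    """Invoke a Bedrock Agent and collect its streamed reply.

    `client` is a bedrock-agent-runtime client; the IDs are placeholders.
    InvokeAgent streams the completion back as chunk events.
    """
    resp = client.invoke_agent(
        agentId=agent_id,
        agentAliasId=alias_id,
        sessionId=session_id,      # reuse the same ID to continue a session
        inputText=text,
        enableTrace=True,          # also emit reasoning/action trace events
    )
    reply = ""
    for event in resp["completion"]:
        if "chunk" in event:       # trace events are interleaved; skip them here
            reply += event["chunk"]["bytes"].decode("utf-8")
    return reply

# import boto3
# client = boto3.client("bedrock-agent-runtime")
# print(ask_agent(client, "AGENT_ID", "ALIAS_ID", "session-1", "Open a ticket"))
```

Passing the same `sessionId` across calls is what gives you the multi-turn session state the service manages for you.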
The AWS documentation is thorough if dense. The Bedrock workshop on GitHub has working examples for common patterns. Budget at least a day for initial setup if your team hasn’t used Bedrock before, and two or three days if you’re also standing up a Knowledge Base from scratch.
The bottom line
Amazon Bedrock Agents is the right tool for a specific type of organization: one that’s already running on AWS, faces real compliance requirements, and needs agent infrastructure that fits inside existing security and governance practices. For those teams, it delivers genuine value that’s hard to replicate with third-party tools or self-hosted frameworks.
For everyone else, the overhead is hard to justify. The developer experience is clunky, the debugging workflow is scattered across AWS services, and the cost compounds quickly once you stack Knowledge Bases and Guardrails on top of model inference. There are faster paths to a working agent if compliance lock-in isn’t your primary constraint. Bedrock Agents earns its place not by being the most capable or most developer-friendly agent platform, but by being the one that fits inside the AWS world that enterprises have already built.
Key features
- Multi-model backend: choose Claude, Llama, Amazon Nova, or Mistral as the reasoning engine
- Action Groups backed by AWS Lambda for calling external APIs and running custom logic
- Knowledge Bases for managed RAG against S3, Confluence, SharePoint, and other data sources
- Guardrails for content filtering, PII redaction, topic blocking, and automated reasoning checks
- Multi-agent collaboration with supervisor and subagent routing
- Session memory that persists context across turns within and across conversations
- Code interpretation for secure sandboxed code execution during agent runs
Pros and cons
Pros
- + Native AWS integration means IAM roles, VPC endpoints, CloudTrail logs, and regional data residency come for free
- + Model-agnostic backend lets you swap between Claude, Llama, Nova, and Mistral without rewriting agent logic
- + Knowledge Bases handles the full RAG pipeline including chunking, embedding, vector storage, and retrieval
- + Guardrails applies content filtering, PII redaction, and automated reasoning checks at the infrastructure layer
- + Multi-agent collaboration supports supervisor-subagent patterns for complex, parallelizable workflows
- + No base fee for the Agents service itself; you only pay for compute you actually use
Cons
- − Setup requires configuring IAM roles, Lambda functions, S3 buckets, and Bedrock model access before you write a single line of agent logic
- − Debugging is harder than alternatives; tracing an agent failure means correlating CloudWatch logs across Lambda, Bedrock, and the agent runtime
- − Per-layer pricing stacks fast; a busy agent using Knowledge Bases reranking and Guardrails can cost significantly more than raw model API calls
- − Console-based testing is rudimentary compared to the development experience of tools like Claude Code or LangSmith
- − Vendor lock-in is substantial; Bedrock Agents abstractions don't port cleanly to other clouds or self-hosted setups
Who is Amazon Bedrock Agents for?
- Enterprise IT teams automating internal helpdesk workflows that must access private knowledge bases behind a VPC
- Financial services firms building compliance-aware agents where Guardrails provide documented content controls for auditors
- Platform engineering teams who want to give internal product teams a managed agent runtime without exposing raw model API keys
- Data engineering groups running analytical agents that query databases, generate code, and execute it in a sandboxed interpreter
Alternatives to Amazon Bedrock Agents
If Amazon Bedrock Agents isn't quite the right fit, the closest alternatives are Glean, Claude Code, and Anthropic Computer Use. See our full Amazon Bedrock Agents alternatives page for side-by-side comparisons.
Frequently Asked Questions
What are Amazon Bedrock Agents?
How much do Bedrock Agents cost?
How do Bedrock Agents compare to Claude Code or Devin?
Can Bedrock Agents call Lambda functions?
What models does Bedrock support?
Are Bedrock Agents production-ready?
Related agents
Amazon Q Developer
AWS-native AI coding assistant with deep cloud integration
Anthropic Computer Use
Claude's computer-use capability that powers desktop and browser agents
Augment Code
AI coding assistant built for million-line enterprise codebases