


I have been building with AI agents and MCP tools for a while now, and one question kept coming up: how do you give an agent real, authenticated access to AWS without handing it the keys to the kingdom? Today, there is an answer.
I’m happy to announce the general availability of the AWS MCP Server, a managed remote Model Context Protocol (MCP) server that gives AI agents and coding assistants secure, authenticated access to all AWS services through a small, fixed set of tools.
The AWS MCP Server is part of the Agent Toolkit for AWS, a suite of tooling that includes the MCP Server, skills, and plugins that help coding agents build more effectively and efficiently on AWS.
AI coding agents are already useful for many tasks, but they run into real trouble when working with AWS at any meaningful depth. Without access to current AWS documentation, agents rely on training data that may be months out of date and may not know about services like Amazon S3 Vectors, Amazon Aurora DSQL, or Amazon Bedrock AgentCore. When asked to build infrastructure, they tend to reach for the AWS Command Line Interface (AWS CLI) rather than AWS Cloud Development Kit (AWS CDK) or AWS CloudFormation, and they produce AWS Identity and Access Management (IAM) policies that are far broader than necessary. The result is infrastructure that works in a demo but is not production-ready.
The AWS MCP Server addresses this through a compact set of tools that do not consume your model’s context window. The call_aws tool executes any of the 15,000+ AWS API operations using your existing IAM credentials. When we launch new APIs, they are supported within days. The search_documentation and read_documentation tools retrieve current AWS documentation and best practices at query time, so the agent always works from up-to-date information.
With general availability, we are introducing several new capabilities. The AWS MCP Server now supports IAM context keys, so you no longer need a separate IAM permission to use the server and can express fine-grained access in a standard IAM policy. Documentation retrieval no longer requires authentication. We have also reduced the number of tokens required per interaction, which matters for complex, multi-step workflows.
Also new, the run_script tool lets the agent write a short Python script that runs server-side in a sandboxed environment. The sandbox inherits your IAM permissions but has no network access, so you can give an agent the ability to process data without giving it access to your local file system or a shell. When an agent needs to call multiple APIs and combine the results, making them one at a time is slow and burns context. With run_script, the agent chains API calls, filters responses, and computes results in a single round-trip, which is both faster and more context-efficient.
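To see why this matters, here is a minimal sketch of the kind of script an agent could submit to run_script. The two helper functions are hypothetical stand-ins for real boto3 calls (in the sandbox, the script would make those calls with your inherited IAM permissions); only the chain-filter-summarize pattern is the point:

```python
# Sketch of a run_script payload: chain two "API calls", filter the
# responses, and return one small summary instead of many raw pages.
# list_buckets and bucket_size_bytes are hypothetical stand-ins for
# s3.list_buckets() and a per-bucket CloudWatch metrics call.

def list_buckets():
    return {"Buckets": [{"Name": "logs"}, {"Name": "images"}, {"Name": "backups"}]}

def bucket_size_bytes(name):
    sizes = {"logs": 5_000_000, "images": 120_000_000, "backups": 80_000_000}
    return sizes[name]

def largest_buckets(threshold):
    # Everything happens server-side in one round-trip: the agent gets
    # back only this sorted summary, which keeps its context small.
    names = [b["Name"] for b in list_buckets()["Buckets"]]
    report = {name: bucket_size_bytes(name) for name in names}
    return sorted((n for n, size in report.items() if size > threshold),
                  key=report.get, reverse=True)

print(largest_buckets(50_000_000))  # ['images', 'backups']
```

With real boto3 calls in place of the stand-ins, the agent receives only the final list rather than every paginated response.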
The most significant addition is the transition from Agent SOPs to Skills. Skills provide curated guidance and best practices for the tasks where agents most commonly make mistakes. This helps agents complete work faster, using validated best practices, with fewer errors and fewer tokens — all of which saves you time and money. Skills are contributed and maintained by AWS service teams. This keeps the tool list short and predictable, which reduces hallucination and keeps the agent focused.
For enterprise customers, the AWS MCP Server provides a clear separation between human and agent permissions. You can use IAM policies or Service Control Policies to specify that a given user can perform mutating operations while the MCP server is restricted to read-only actions. Amazon CloudWatch metrics published under the AWS-MCP namespace let you observe MCP server calls separately from direct human calls, giving you the audit trail that compliance teams require. Amazon CloudTrail captures all API calls for a complete record.
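As an illustration of that human/agent split, a read-only policy for the principal an agent assumes could look like the sketch below. The listed services and action wildcards are examples only, not a recommended baseline; check the AWS MCP Server User Guide for the exact context keys and actions to use:

```python
import json

# Sketch of a read-only IAM policy for the principal an agent uses via
# the AWS MCP Server. The services and wildcards here are illustrative;
# scope them down to what your agent actually needs.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AgentReadOnly",
            "Effect": "Allow",
            "Action": [
                "s3:Get*", "s3:List*",
                "ec2:Describe*",
                "cloudwatch:Get*", "cloudwatch:List*",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(read_only_policy, indent=2))
```

Your human users keep their broader permissions, while the agent's principal carries only statements like this one.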
Let’s see it in action
For this demo, I chose Claude Code, but I can use the AWS MCP Server with any AI agent that supports MCP, which covers essentially every tool available today: Kiro CLI, Kiro, Cursor, Codex, and more. I configure Claude Code to use the Anthropic Opus 4.6 model.
Opus 4.6 has a knowledge cutoff of May 2025, which means it knows nothing that happened after that date. I ask a question about an AWS service that was introduced more recently: Amazon S3 Vectors, launched in preview in July 2025 and made generally available in December 2025.
The question is “how to store embeddings on S3” (an embedding is a kind of vector).
It gives me five solutions, all correct, but none using S3 Vectors as I asked. Note that this answer comes from the Opus 4.6 model, not from Claude Code. Any AI tool using the same model will return similar answers because S3 Vectors wasn’t announced at the time the model was trained.
Let’s now try with the AWS MCP Server.
The AWS MCP Server uses AWS Identity and Access Management (IAM) SigV4 authentication, while the MCP specification only supports OAuth 2.1. To use my local AWS credentials over MCP, I configure my AI coding agent to call the AWS MCP Server through a proxy. The MCP Proxy for AWS is an open source proxy that runs on my machine and bridges IAM authentication to OAuth.
I add the MCP configuration with this command:
claude mcp add-json aws-mcp --scope user \
'{"command":"uvx","args":["mcp-proxy-for-aws@latest","https://aws-mcp.us-east-1.api.aws/mcp","--metadata","AWS_REGION=us-west-2"]}'
Let’s analyze the JSON configuration:
- uvx mcp-proxy-for-aws is the command to launch the proxy; the rest of the arguments are parameters passed to the proxy.
- https://aws-mcp.us-east-1.api.aws/mcp is one of the two regional endpoints for the AWS MCP Server. The proxy forwards Claude Code’s requests to that endpoint.
- --metadata values are passed to the proxy target. Here, it tells the AWS MCP Server to use the US West (Oregon) Region.
I start Claude Code and type /mcp to verify the AWS MCP Server is correctly installed and can use my credentials.
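If you want to sanity-check the configuration before starting Claude Code, the JSON string unpacks with a few lines of Python; a quick assertion like this catches quoting mistakes early:

```python
import json

# The same JSON string passed to `claude mcp add-json aws-mcp`.
config = json.loads(
    '{"command":"uvx","args":["mcp-proxy-for-aws@latest",'
    '"https://aws-mcp.us-east-1.api.aws/mcp",'
    '"--metadata","AWS_REGION=us-west-2"]}'
)

assert config["command"] == "uvx"  # the proxy is launched via uvx
endpoint = next(a for a in config["args"] if a.startswith("https://"))
print(endpoint)  # → https://aws-mcp.us-east-1.api.aws/mcp
```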
I ask the same question: “how can I store embedding on S3”.
This time, Claude Code knows it has a tool it can use to answer the question. It asks me permission to invoke the aws___search_documentation tool. After a few seconds, I receive a correct answer: “AWS now has a dedicated service for this: Amazon S3 Vectors …”
Pricing and availability
The AWS MCP Server is available today in the US East (N. Virginia) and Europe (Frankfurt) AWS Regions and can make API calls to any Region. There is no additional charge for the AWS MCP server itself. You pay only for the AWS resources you create and any applicable data transfer costs.
The AWS MCP Server works with Claude Code, Kiro, Cursor, and any MCP-compatible client. To get started, see the AWS MCP Server User Guide.
I have been waiting for something like this since I started using MCP tools in my AI agents early last year. The combination of current documentation, authenticated API access, and sandboxed script execution in a single server changes what an agent can actually do on AWS. I am curious what you build with it. Let me know in the comments.
— seb

Enterprises face a significant challenge when deploying AI agents: the desktop and legacy applications that power most business workflows are simply inaccessible to modern AI systems. According to a 2024 Gartner report, 75% of organizations run legacy applications that lack modern APIs, and 71% of Fortune 500 companies operate critical processes on mainframe systems without adequate programmatic access. For many organizations, this has meant choosing between delaying AI adoption or undertaking expensive and risky modernization projects.
Today, we are announcing that Amazon WorkSpaces now enables AI agents to securely operate desktop applications without requiring application modernization. The same managed virtual desktops that millions of employees use and trust can now also serve AI agents, turning WorkSpaces into infrastructure for scaling enterprise productivity, not just delivering it. Because agents operate within your existing WorkSpaces environment, there are no APIs to build, no application migrations to plan, and no new infrastructure to manage.
Some of our customers had an early opportunity to give their agents a WorkSpace. Chris Noon, Director at Nuvens Consulting, shared with us, “WorkSpaces lets our clients give AI agents the same secure, governed desktop environment their employees already use — no custom API integrations, full audit trails, and enterprise-grade isolation out of the box. For regulated industries, that’s not a nice-to-have — it’s the baseline.”
Secure cloud desktop access for AI agents
With WorkSpaces, AI agents can securely access and operate desktop applications running inside managed WorkSpaces environments to complete complex business workflows. Agents authenticate through AWS Identity and Access Management (IAM) and connect via WorkSpaces, with complete audit trails available through AWS CloudTrail and Amazon CloudWatch. Because agents operate within secure WorkSpaces environments rather than on local machines, your existing security controls and compliance policies remain fully intact.
Amazon WorkSpaces supports the industry-standard Model Context Protocol (MCP), which means WorkSpaces works with any agent framework, such as LangChain, CrewAI, and Strands Agents.
Let’s try it out
To set up a WorkSpaces environment for AI agents, I started in the AWS Management Console by creating a new WorkSpaces Applications stack—the environment definition that controls how agents connect and what they’re allowed to do.
From the Amazon WorkSpaces console, I chose Create stack and configured the basics: name, fleet association, and VPC endpoints. In Step 3 of the stack creation workflow, I noticed the new AI agents section with two options. The first, No AI agent access, is the default configuration for standard WorkSpaces designed for people. The second, Add AI Agents, allows AI agents to securely access and operate applications using their own identity and permissions. I selected Add AI Agents to enable agent connections on this stack.

Next, I enable storage before configuring the agent access settings that define how agents interact with the desktop.

Under Agent features, I enabled three capabilities. Computer input allows the agent to click, type, and scroll within the desktop. Computer vision allows the agent to capture screenshots of the desktop, which is how it “sees” the application. Finally, screenshot storage configures where session screenshots are stored for audit and debugging.

Under Desktop screen layout, I set the screen resolution to 1280×720 and image format to PNG. The resolution determines the fidelity of what the agent sees during a session—a complex application with dense UI elements might benefit from higher resolution, while a terminal-style interface works well at 720p.

With my stack configured, WorkSpaces exposes a managed MCP endpoint. I pointed my agent framework to this endpoint, provided IAM credentials for authentication, and my agent began interacting with the desktop applications installed on the fleet’s image.
To see this in action, here’s an agent built with the Strands Agent SDK and Amazon Bedrock handling a prescription refill, looking up the patient record, searching for the medication, placing the order, and confirming a successful refill, all inside a sample pharmacy system with no API.
The application doesn’t know an agent is driving it. Nothing about the software was modified, rebuilt, or integrated. The agent worked with it exactly as it exists today.
Now available
This feature is available today in public preview at no additional cost in the US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Frankfurt, Ireland, Paris), and Asia Pacific (Tokyo, Mumbai, Sydney, Seoul, Singapore) Regions.
Get started building today using our GitHub repo, or visit the WorkSpaces page for more details.
Last week, I took some time off in York, England, often described as the most haunted city in the country. I wandered through the ruins of abbeys that have stood for nearly a thousand years, walked along medieval walls, and spent an evening on a ghost tour hearing stories passed down through centuries. There’s something grounding about standing in a place that has witnessed so much history. Now I’m back at my desk, and the contrast is hard to miss: those abbey stones have stood for a thousand years largely unchanged, while in the span of a single week away, the pace of technological change has moved forward yet again.
The ruins of Whitby Abbey in North Yorkshire. Stones that have seen a thousand years, while this week alone brought another wave of change.
Now, let’s get into this week’s AWS news.
Headlines
On April 28, Matt Garman, CEO of AWS, Colleen Aubrey, SVP Amazon Applied AI Solutions, Julia White, CMO of AWS, and OpenAI leaders took the stage to share how customers are changing the way businesses operate with agents. The event brought a packed slate of announcements across Amazon Quick, Amazon Connect, and a deeper partnership with OpenAI. Here’s a roundup of the biggest announcements from the event.
Amazon Quick expands with a desktop app, new pricing plans, and visual asset generation – Amazon Quick is an AI assistant for work that connects to your apps, learns what matters to you, and takes action on your behalf. This week, Quick introduced a new desktop app (Preview) that keeps you connected to your local files, calendar, and communications without opening a browser. You can sign up within minutes using your personal email address or existing Google, Apple, GitHub, or Amazon credentials—no AWS account required. Quick can now generate polished documents, presentations, infographics, and images directly from the chat interface, and native integrations expand to include Google Workspace, Zoom, Airtable, Dropbox, and Microsoft Teams. A new Build custom apps with Quick capability (Preview) lets you create intelligent apps, dashboards, and web pages connected to the rest of your business using natural language.
Amazon Connect expands into four agentic AI solutions – Amazon Connect is expanding from a single product into a set of four agentic AI solutions designed to work within your existing workflows. Amazon Connect Decisions is a supply chain planning and intelligence solution that shifts teams from crisis management to proactive planning, combining 30 years of Amazon operational science with more than 25 specialized supply chain tools. Amazon Connect Talent (Preview) is an agentic AI hiring solution that delivers AI-led interviews, science-backed assessments, and consistent evaluation for talent acquisition leaders managing scaled hiring. Amazon Connect Customer, previously known as Amazon Connect, delivers personalized customer experiences across voice, chat, and digital channels, with new configuration capabilities that enable organizations to set up conversational AI in weeks rather than months. Amazon Connect Health delivers agentic patient verification, appointment management, patient insights, ambient documentation, and medical coding, giving patients faster access to care and clinicians more time to deliver it.
AWS and OpenAI expand their partnership across Amazon Bedrock – AWS and OpenAI are bringing the latest OpenAI models to Amazon Bedrock, launching Codex on Amazon Bedrock, and introducing Amazon Bedrock Managed Agents powered by OpenAI — all in limited preview. OpenAI models on Amazon Bedrock (Limited preview) brings the latest OpenAI models, including GPT-5.5 and GPT-5.4, to the Bedrock APIs you already use, with unified security, governance, and cost controls. No additional infrastructure to configure, no new security model to learn. Codex on Amazon Bedrock (Limited preview) lets you access the OpenAI coding agent within your existing AWS environments, authenticating with your AWS credentials, processing inference through Bedrock, and applying Codex usage toward your AWS cloud commitments. Codex on Bedrock is available through the Bedrock API, starting with the Codex CLI, the Codex desktop app, and a Visual Studio Code extension. Amazon Bedrock Managed Agents, powered by OpenAI (Limited preview) combines OpenAI frontier models with AWS infrastructure to build production-ready OpenAI-powered agents in the cloud, built with the OpenAI harness for faster execution, sharper reasoning, and reliable steering of long-running tasks.
To learn more, visit Top announcements of the What’s Next with AWS, 2026.
Last week’s launches
Here are some launches and updates from this past week that caught my attention:
For a full list of AWS announcements, be sure to keep an eye on the What’s New with AWS page.
Other AWS news
Here are some additional posts and resources that you might find interesting:
Upcoming AWS events
Check your calendar and sign up for upcoming AWS events:
Visit the AWS Builder Center to meet other builders, contribute solutions, and find resources that help you keep building. You can also browse upcoming AWS-led in-person and virtual events, plus developer-focused sessions.
— Esra

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!
Today at the What’s Next with AWS, Matt Garman, CEO of AWS, Colleen Aubrey, SVP Amazon Applied AI Solutions, Julia White, CMO of AWS, and OpenAI leaders discussed how they and their customers are changing how businesses operate with agents.
Here’s our roundup of the biggest announcements from the event:
Amazon Quick is an AI assistant for work that connects to your apps, learns what matters to you, and takes action on your behalf. Starting today, you can use the new desktop app, sign up for Free and Plus pricing plans, generate visual assets in the chat, and easily connect Quick to even more apps.
To learn more, visit the About Amazon News post.
Amazon Connect is expanding from a single product into a set of four agentic AI solutions designed to work within your existing workflows: Amazon Connect Decisions (supply chains), Talent (hiring), Customer (customer experience), and Health (health care).
To learn more, visit the About Amazon News post.
AWS and OpenAI extended partnership
AWS and OpenAI are bringing the latest OpenAI models to Amazon Bedrock, launching Codex on Amazon Bedrock, and launching Amazon Bedrock Managed Agents, powered by OpenAI (all in limited preview), giving enterprises the frontier intelligence they want on the infrastructure they trust.

To learn more, visit the AWS What’s New post and About Amazon News post.
Late March took me to Seattle for the Specialist Tech Conference, one of the most energizing gatherings of AWS specialists from around the world. It was an incredible opportunity to connect with peers, exchange experiences, and go deep on the latest advancements in Generative AI and Amazon Bedrock — and a powerful reminder of something I truly believe in: when specialists come together to challenge each other, explore edge cases, and co-create solutions, the impact goes far beyond the meeting room. In a fast-moving space like AI, having a strong internal community isn’t a nice-to-have — it’s a competitive advantage.
Now, let’s get into this week’s AWS news…
Headlines
Anthropic partnership: Claude on AWS Trainium and Graviton, and Claude Cowork in Amazon Bedrock – This week, AWS and Anthropic deepened their product collaboration in meaningful ways for builders. Anthropic is now training its most advanced foundation models on AWS Trainium and Graviton infrastructure, co-engineering directly at the silicon level with Annapurna Labs to maximize computational efficiency from the hardware up through the full stack.
Claude Cowork is now available in Amazon Bedrock — Claude Cowork brings Anthropic’s collaborative AI capabilities directly to enterprise builders within the AWS ecosystem, enabling teams to work alongside Claude as a true collaborator, not just a tool. You can now deploy Claude Cowork within your existing Amazon Bedrock environment, keeping your data secure within AWS while leveraging the full power of Claude for team-based AI workflows.
Claude Platform on AWS (Coming soon) — A unified developer experience to build, deploy, and scale Claude-powered applications without leaving AWS. If you’re building with Generative AI on AWS, this is a significant step forward in what you’ll be able to do with Claude directly through Amazon Bedrock.
Meta signs agreement with AWS to power agentic AI on Amazon’s Graviton chips — Meta has signed an agreement to deploy AWS Graviton processors at scale, starting with tens of millions of Graviton cores to power CPU-intensive agentic AI workloads — including real-time reasoning, code generation, search, and multi-step task orchestration.
Last week’s launches
Here are some launches and updates from this past week that caught my attention:
For a full list of AWS announcements, be sure to keep an eye on the What’s New with AWS page.
Other AWS news
Here are some additional posts and resources that you might find interesting:
Upcoming AWS events
Check your calendar and sign up for upcoming AWS events:
Join the AWS Builder Center to connect with builders, share solutions, and access content that supports your development. You can also browse upcoming AWS-led in-person and virtual events, plus developer-focused sessions.
That’s all for this week. Check back next Monday for another Weekly Roundup!
— Daniel Abib
This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!
Last week I had the honor of delivering a commencement speech at the University of Namur (uNamur) for their 2025 graduation ceremony.
Standing in front of freshly minted computer science graduates, I talked about the future of software development in the age of AI. My message to them was simple: AI will not make you obsolete. We’ve seen tools evolve over the decades, from punch cards to IDEs to AI-assisted coding, but the work remains yours, not the tool’s. The developers who will thrive are those who stay curious, think in systems, communicate with precision, and take ownership of what they build. The world needs more people with coding skills, not fewer. AI raises the bar on what we can accomplish, and that’s a good thing.
Now, let’s get into this week’s AWS news.
Headlines
Anthropic’s Claude Opus 4.7 is now available in Amazon Bedrock – Anthropic’s most intelligent Opus model is now available in Amazon Bedrock, with improved performance across coding, long-running agents, and professional knowledge work. Claude Opus 4.7 scores 64.3% on SWE-bench Pro and 87.6% on SWE-bench Verified, extending its lead in agentic coding with stronger long-horizon autonomy and complex code reasoning. It also does better on knowledge work tasks like document creation, financial analysis, and multi-step research.
The model runs on Bedrock’s next-generation inference engine with dynamic capacity allocation, adaptive thinking (letting Claude allocate thinking token budgets based on request complexity), and the full 1M token context window. It also adds high-resolution image support for better accuracy on charts, dense documents, and screen UIs. Claude Opus 4.7 is available at launch in US East (N. Virginia), Asia Pacific (Tokyo), Europe (Ireland), and Europe (Stockholm), with up to 10,000 requests per minute per account per Region.
AWS Interconnect is now generally available with a new option to simplify last-mile connectivity – AWS Interconnect brings two managed private connectivity capabilities to general availability. The first, AWS Interconnect – Multicloud, provides Layer 3 private connections between AWS VPCs and other cloud providers (Google Cloud available now, Azure and OCI coming later in 2026). Traffic flows over the AWS global backbone and the partner cloud’s private network, never over the public internet, with built-in MACsec encryption, multi-facility resiliency, and CloudWatch monitoring. AWS published the underlying specification on GitHub under Apache 2.0 so any cloud provider can become an Interconnect partner.
The second capability, AWS Interconnect – Last Mile, simplifies high-speed private connections from branch offices, data centers, and remote locations to AWS through existing network providers. It provisions 4 redundant connections across 2 physical locations automatically, configures BGP routing, activates MACsec encryption and Jumbo Frames by default, and offers bandwidth from 1 Gbps to 100 Gbps adjustable from the console without reprovisioning. Last Mile launches in US East (N. Virginia) with Lumen as the initial partner.
Last week’s launches
Here are some launches and updates from this past week that caught my attention:
For a full list of AWS announcements, be sure to keep an eye on the What’s New with AWS page.
Other AWS news
Here are some additional posts and resources that you might find interesting:
Upcoming AWS events
Check your calendar and sign up for upcoming AWS events:
That’s all for this week. Check back next Monday for another Weekly Roundup!
— seb

Today, we’re announcing Claude Opus 4.7 in Amazon Bedrock, Anthropic’s most intelligent Opus model, advancing performance across coding, long-running agents, and professional work.
Claude Opus 4.7 is powered by Amazon Bedrock’s next generation inference engine, delivering enterprise-grade infrastructure for production workloads. Bedrock’s new inference engine has brand-new scheduling and scaling logic which dynamically allocates capacity to requests, improving availability particularly for steady-state workloads while making room for rapidly scaling services. It provides zero operator access—meaning customer prompts and responses are never visible to Anthropic or AWS operators—keeping sensitive data private.
According to Anthropic, the Claude Opus 4.7 model provides improvements across the workflows that teams run in production, such as agentic coding, knowledge work, visual understanding, and long-running tasks. Opus 4.7 works better through ambiguity, is more thorough in its problem solving, and follows instructions more precisely.
The model is an upgrade from Opus 4.6 but may require prompting changes and harness tweaks to get the most out of it. To learn more, visit Anthropic’s prompting guide.
Claude Opus 4.7 model in action
You can get started with the Claude Opus 4.7 model in the Amazon Bedrock console. Choose Playground under the Test menu and select Claude Opus 4.7 as the model. Now you can test a complex coding prompt with the model.

I run the following example prompt about a technical architecture decision:
Design a distributed architecture on AWS in Python that should support 100k requests per second across multiple geographic regions.

You can also access the model programmatically using the Anthropic Messages API to call bedrock-runtime through the Anthropic SDK or bedrock-mantle endpoints, or keep using the Invoke and Converse APIs on bedrock-runtime through the AWS Command Line Interface (AWS CLI) and AWS SDKs.
To get started making your first API call to Amazon Bedrock in minutes, choose Quickstart in the left navigation pane of the console. After choosing your use case, you can generate a short-term API key to authenticate your requests for testing purposes.
When you choose an API method, such as the OpenAI-compatible Responses API, you get sample code for running your prompt and making an inference request with the model.

To invoke the model through the Anthropic Claude Messages API, you can proceed as follows using anthropic[bedrock] SDK package for a streamlined experience:
from anthropic import AnthropicBedrockMantle

REGION = "us-east-1"  # any Region where the model is available

# Initialize the Bedrock Mantle client (uses SigV4 auth automatically)
mantle_client = AnthropicBedrockMantle(aws_region=REGION)

# Create a message using the Messages API
message = mantle_client.messages.create(
    model="anthropic.claude-opus-4-7",
    max_tokens=2048,
    messages=[
        {
            "role": "user",
            "content": "Design a distributed architecture on AWS in Python that should support 100k requests per second across multiple geographic regions",
        }
    ],
)

print(message.content[0].text)
You can also run the following command to invoke the model directly to bedrock-runtime endpoint using the AWS CLI and the Invoke API:
aws bedrock-runtime invoke-model \
--model-id anthropic.claude-opus-4-7 \
--region us-east-1 \
--body '{"messages": [{"role": "user", "content": "Design a distributed architecture on AWS in Python that should support 100k requests per second across multiple geographic regions."}], "max_tokens": 512, "temperature": 0.5, "top_p": 0.9}' \
--cli-binary-format raw-in-base64-out \
invoke-model-output.txt
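The CLI writes the response body to invoke-model-output.txt. Assuming it follows the Anthropic Messages response shape used by the SDK example above, you can unpack the generated text like this (the payload below is a stand-in, not real model output):

```python
import json

# Stand-in for the JSON body that `invoke-model` writes to
# invoke-model-output.txt (Anthropic Messages response shape).
body = json.dumps({
    "content": [{"type": "text", "text": "Use Route 53 latency-based routing ..."}],
    "stop_reason": "end_turn",
})

# Concatenate all text blocks from the response content.
response = json.loads(body)
text = "".join(block["text"] for block in response["content"]
               if block["type"] == "text")
print(text)  # → Use Route 53 latency-based routing ...
```

In practice you would read the file the CLI produced (for example with `json.load(open("invoke-model-output.txt"))`) instead of the hardcoded stand-in.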
For more intelligent reasoning capability, you can use Adaptive thinking with Claude Opus 4.7, which lets Claude dynamically allocate thinking token budgets based on the complexity of each request.
To learn more, visit the Anthropic Claude Messages API and check out code examples for multiple use cases and a variety of programming languages.
Things to know
Let me share some important technical details that I think you’ll find useful.
Now available
Anthropic’s Claude Opus 4.7 model is available today in the US East (N. Virginia), Asia Pacific (Tokyo), Europe (Ireland), and Europe (Stockholm) Regions; check the full list of Regions for future updates. To learn more, visit the Claude by Anthropic in Amazon Bedrock page and the Amazon Bedrock pricing page.
Give Anthropic’s Claude Opus 4.7 a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.
— Channy
Today, we’re announcing the general availability of AWS Interconnect – multicloud, a managed private connectivity service that connects your Amazon Virtual Private Cloud (Amazon VPC) directly to VPCs on other cloud providers. We’re also introducing AWS Interconnect – last mile, a new capability that simplifies how you establish high-speed, private connections to AWS from your branch offices, data centers, and remote locations through your existing network providers.
Large enterprises increasingly run workloads across multiple cloud providers, whether to use specialized services, meet data residency requirements, or support teams that have standardized on different providers. Connecting those environments reliably and securely has historically required significant coordination: managing VPN tunnels, working with colocation facilities, and configuring third-party network fabrics. The result is that your networking team spends time on undifferentiated heavy lifting instead of focusing on the applications that matter to your business.
AWS Interconnect is the answer to these challenges. It is a managed connectivity service that simplifies connectivity into AWS. Interconnect provides you the ability to establish private, high-speed network connections with dedicated bandwidth to and from AWS across hybrid and multicloud environments. You can configure resilient, end-to-end connectivity with ease in a few clicks through the AWS Console by selecting your location, partner, or cloud provider, preferred Region, and bandwidth requirements, removing the friction of discovering partners and the complexity of manual network configurations.
It comes with two capabilities: multicloud connectivity between AWS and other cloud providers, and last-mile connectivity between AWS and your private on-premises networks. Both capabilities are built on the same principle: a fully managed, turnkey experience that removes the infrastructure complexity from your team.
AWS Interconnect – multicloud
AWS Interconnect – multicloud gives you a private, managed Layer 3 connection between your AWS environment and other cloud providers, starting with Google Cloud, with Microsoft Azure support coming later in 2026. Traffic flows entirely over the AWS global backbone and the partner cloud’s private network, so it never traverses the public internet. This means you get predictable latency, consistent throughput, and isolation from internet congestion without having to manage any physical infrastructure yourself.
Security is built in by default. Every connection uses IEEE 802.1AE MACsec encryption on the physical links between AWS routers and the partner cloud provider’s routers at the interconnection facilities. You don’t need to configure these separately. Note that each cloud provider manages encryption independently on its own backbone, so you should review the encryption documentation for your specific deployment to verify it meets your compliance requirements. Resiliency is also built in: each connection spans multiple logical links distributed across at least two physical facilities, so a single device or building failure does not interrupt your connectivity.
For monitoring, AWS Interconnect – multicloud integrates with Amazon CloudWatch. You get a Network Synthetic Monitor included with each connection to track round-trip latency and packet loss, and bandwidth utilization metrics to support capacity planning.
AWS has published the underlying specification on GitHub under the Apache 2.0 license, so any cloud service provider can integrate with AWS Interconnect – multicloud. To become an AWS Interconnect partner, cloud providers must implement the technical specification and meet AWS operational requirements, including resiliency standards, support commitments, and service level agreements.
How it works
Provisioning a connection takes minutes. I create the connection from the AWS Direct Connect console: I start from the AWS Interconnect section, select Google Cloud as the provider, select my source and destination Regions, specify the bandwidth, and provide my Google Cloud project ID. AWS generates an activation key that I use on the Google Cloud side to complete the connection. Routes propagate automatically in both directions, and my workloads can start exchanging data shortly after.
For this demo, I start with a single VPC and connect it to a Google Cloud VPC through a Direct Connect Gateway. It’s the simplest path: one connection, one attachment, and my workloads on both sides can start talking to each other in minutes.
Step 1: request an interconnect in the AWS Management Console.
I navigate to AWS Direct Connect, then AWS Interconnect, and select Create. I first choose the cloud provider I want to connect to; in this example, Google Cloud.
Then, I choose the AWS Region (eu-central-1) and the Google Cloud Region (europe-west3).
In step 3, I enter a Description, then choose the Bandwidth, the Direct Connect gateway to attach, and the ID of my Google Cloud project.
After reviewing and confirming the request, the console gives me an activation key. I will use that key to validate the request on the Google Cloud side.
Step 2: create the transport and VPC Peering resources on my Google Cloud Platform (GCP) account.
Now that I have the activation key, I continue the process on the GCP side. At the time of writing, no web-based console was available for this step, so I use the gcloud command line interface (CLI) instead. I take note of the CIDR range of the GCP VPC subnet in the europe-west3 region. Then, I open a terminal and type:
gcloud network-connectivity transports create aws-news-blog \
--region=europe-west3 \
--activation-key=${ACTIVATION_KEY} \
--network=default \
--advertised-routes=10.156.0.0/20
Create request issued for: [aws-news-blog]
...
peeringNetwork: projects/oxxxp-tp/global/networks/transport-9xxxf-vpc
...
state: PENDING_CONFIG
updateTime: '2026-03-19T09:30:51.103979219Z'
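Before passing a value to --advertised-routes in the create command above, it is worth confirming that the GCP range does not overlap your AWS VPC CIDR (172.31.0.0/16 in my account, the AWS default). A quick local check with Python’s standard ipaddress module:

```shell
# Sanity check: the advertised GCP subnet range must not overlap the AWS VPC CIDR.
# 172.31.0.0/16 is the default AWS VPC CIDR; 10.156.0.0/20 is the default
# GCP subnet range in europe-west3.
OVERLAP_CHECK=$(python3 -c "
import ipaddress
aws = ipaddress.ip_network('172.31.0.0/16')
gcp = ipaddress.ip_network('10.156.0.0/20')
print('overlap' if aws.overlaps(gcp) else 'no overlap')")
echo "$OVERLAP_CHECK"   # prints: no overlap
```

If the two ranges did overlap, you would need to re-create one of the VPCs (or add a secondary, non-overlapping range) before setting up routing.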
It takes a couple of minutes for the command to complete. Once it returns, I create a peering between my GCP VPC and the new transport I just created. I can do that in the GCP console or with the gcloud command line; because I was already in the terminal, I continue with the CLI:
gcloud compute networks peerings create aws-news-blog \
--network=default \
--peer-network=projects/oxxxp-tp/global/networks/transport-9xxxf-vpc \
--import-custom-routes \
--export-custom-routes
The network name is the name of my GCP VPC. The peer network is given in the peeringNetwork field of the previous command’s output.
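If you no longer have that output at hand, you may be able to retrieve the value again. The following is a sketch, assuming the transports resource follows the usual gcloud describe verb and --format conventions; I have not verified this exact subcommand:

```shell
# Hypothetical: print the peeringNetwork value for an existing transport
# (assumes 'describe' follows the standard gcloud resource verb pattern)
gcloud network-connectivity transports describe aws-news-blog \
    --region=europe-west3 \
    --format="value(peeringNetwork)"
```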
Once completed, I can verify the peering in the GCP console.

In the AWS Interconnect console, I verify the status is available.
In the AWS Direct Connect console, under Direct Connect gateways, I see the attachment to the new interconnect.
Step 3: associate the new gateway on the AWS side
I select Gateway associations and Associate gateway to attach the Virtual Private Gateway (VGW) that I created before starting this demo (make sure to use a VGW in the same AWS Region as the interconnect).
You don’t need to configure the network routing on the GCP side. On AWS, there is a final step: add a route entry in your VPC route tables to send traffic destined for the GCP IP address range through the Virtual Private Gateway.
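With the standard AWS CLI, that route entry looks like the following; the route table and gateway IDs are placeholders for your own resources, and the destination is the GCP subnet range from earlier:

```shell
# Route traffic destined for the GCP subnet range through the VGW.
# rtb-0123456789abcdef0 and vgw-0123456789abcdef0 are placeholder IDs.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.156.0.0/20 \
    --gateway-id vgw-0123456789abcdef0
```

Alternatively, you can enable route propagation on the route table (aws ec2 enable-vgw-route-propagation) so that routes learned by the VGW appear automatically.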
Once the network setup is done, I start two compute instances, one on AWS and one on GCP.
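To open the test port ahead of time, the ingress rule can be added with the AWS CLI. Here is a sketch with a placeholder security group ID, scoped to the GCP subnet range rather than 0.0.0.0/0:

```shell
# Allow inbound TCP 8080 from the GCP subnet range only.
# sg-0123456789abcdef0 is a placeholder for the instance's security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 8080 \
    --cidr 10.156.0.0/20
```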
On AWS, I verify the Security Group accepts ingress traffic on TCP:8080. I connect to the machine and I start a minimal web server:
python3 -c \
"from http.server import HTTPServer, BaseHTTPRequestHandler
class H(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200); self.end_headers()
        self.wfile.write(b'Hello AWS World!\n\n')
HTTPServer(('', 8080), H).serve_forever()"
On the GCP side, I open an SSH session to the machine and call the AWS web server by its private IP address.
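The verification step boils down to one HTTP request. If you want to rehearse the server-and-client loop on a single machine first, here is a self-contained local simulation; it uses 127.0.0.1 and port 18080, whereas in the real test you would run the curl from the GCP instance against the AWS instance’s private IP and port 8080:

```shell
# Start the same minimal web server locally, in the background.
python3 -c "
from http.server import HTTPServer, BaseHTTPRequestHandler
class H(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200); self.end_headers()
        self.wfile.write(b'Hello AWS World!')
HTTPServer(('127.0.0.1', 18080), H).serve_forever()" &
SERVER_PID=$!
sleep 1                                       # give the server time to bind
RESPONSE=$(curl -s http://127.0.0.1:18080/)   # the call made from the GCP side
kill $SERVER_PID
echo "$RESPONSE"   # prints: Hello AWS World!
```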
Et voilà! I have a private network route between my two networks, entirely managed by the two cloud service providers.
Things to know
There are a couple of configuration options that you should keep in mind:
The IP address ranges (CIDR) of the two VPCs must not overlap. In my setup, the default on AWS was 172.31.0.0/16 and the default on GCP was 10.156.0.0/20, so I was able to proceed with these default values.
Reference architectures
When your deployment grows and you have multiple VPCs in a single region, AWS Transit Gateway gives you a centralized routing hub to connect them all through a single Interconnect attachment. You can segment traffic between environments, apply consistent routing policies, and integrate AWS Network Firewall if you need to inspect what crosses the cloud boundary.
And when you’re operating at global scale, with workloads spread across multiple AWS Regions and multiple Google Cloud environments, AWS Cloud WAN extends that same model across the world. Any region in your network can reach any Interconnect attachment globally, with centralized policy management and segment-based routing that applies consistently everywhere you operate.
My colleagues Alexandra and Santiago documented these reference architectures in their blog post: Build resilient and scalable multicloud connectivity architectures with AWS Interconnect – multicloud.
AWS Interconnect – last mile
Based on the same architecture and design as AWS Interconnect – multicloud, AWS Interconnect – last mile lets you connect your on-premises or remote location to AWS through a participating network provider’s last-mile infrastructure, directly from the AWS Management Console.
The onboarding process mirrors AWS Interconnect – multicloud: you select a provider, authenticate, and specify your connection endpoints and bandwidth. AWS generates an activation key that you provide in the provider console to complete the configuration. AWS Interconnect – last mile automatically provisions four redundant connections across two physical locations, configures BGP routing, and activates MACsec encryption and Jumbo Frames by default. The result is a resilient private connection to AWS that aligns with best practices, without requiring you to manually configure networking components.
AWS Interconnect – last mile supports bandwidths from 1 Gbps to 100 Gbps, and you can adjust bandwidth from the console without reprovisioning. The service includes a 99.99% availability SLA up to the Direct Connect port and bundles CloudWatch Network Synthetic Monitor for connection health monitoring. Just like AWS Interconnect – multicloud, AWS Interconnect – last mile attaches to a Direct Connect Gateway, which connects to your Virtual Private Gateway, Transit Gateway, or AWS Cloud WAN deployment. For more details, refer to the AWS Interconnect User Guide.
Scott Yow, SVP Product at Lumen Technologies, wrote:
By combining AWS Interconnect – last mile with Lumen fiber network and Cloud Interconnect, we simplify the last-mile complexity that often slows cloud adoption and enable a faster, and more resilient path to AWS for customers.
Pricing and availability
AWS Interconnect – multicloud and AWS Interconnect – last mile pricing is based on a flat hourly rate for the capacity you request, prorated by the hour. You select the bandwidth tier that fits your workload needs.
AWS Interconnect – multicloud pricing varies by region pair: a connection between US East (N. Virginia) and Google Cloud N. Virginia is priced differently from a connection between US East (N. Virginia) and a more distant region. When you use AWS Cloud WAN, the global any-to-any routing model means traffic can traverse multiple regions, which affects the total cost of your deployment. I recommend reviewing the AWS Interconnect pricing page for the full rate card by region pair and capacity tier before sizing your connection.
AWS Interconnect – multicloud is available today in five region pairs: US East (N. Virginia) to Google Cloud N. Virginia, US West (N. California) to Google Cloud Los Angeles, US West (Oregon) to Google Cloud Oregon, Europe (London) to Google Cloud London, and Europe (Frankfurt) to Google Cloud Frankfurt. Microsoft Azure support is coming later in 2026.
AWS Interconnect – last mile is launching in US East (N. Virginia) with Lumen as the initial partner. Additional partners, including AT&T and Megaport, are being onboarded, and additional Regions are planned.
To get started with AWS Interconnect, visit the AWS Direct Connect console and select AWS Interconnect from the navigation menu.
I’d love to hear how you’re using AWS Interconnect in your environment. Leave a comment below or reach out through the AWS re:Post community.
— seb
In my last Week in Review post, I mentioned how much time I’ve been spending on AI-Driven Development Lifecycle (AI-DLC) workshops with customers this year. A common theme in those sessions is the need for better cost visibility. Teams are moving fast with AI, but as they go from experimenting to full production, finance and leadership need to know who is using which resources and at what cost. That’s why I was so excited to see this week’s launch of Amazon Bedrock support for cost allocation by IAM user and role. This lets you tag IAM principals with attributes like team or cost center and then activate those tags in your Billing and Cost Management console. The resulting cost data flows into AWS Cost Explorer and the detailed Cost and Usage Report, giving you a clear line of sight into model inference spending. Whether you’re scaling agents across teams, tracking foundation model use by department, or running tools like Claude Code on Amazon Bedrock, this new feature is a game changer for tracking and managing your AI investments. You can get all the details on setting this up in the IAM principal cost allocation documentation.
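As a sketch of what that setup looks like with the standard AWS CLI (the role name and tag values below are placeholders): you first tag the IAM principal, then activate the tag keys as cost allocation tags, either in the Billing console or with the Cost Explorer API:

```shell
# Tag the IAM role that your Bedrock-calling workloads assume.
aws iam tag-role \
    --role-name my-bedrock-agent-role \
    --tags Key=team,Value=data-science Key=cost-center,Value=cc-1234

# Activate the tag keys as cost allocation tags (the Billing console works too).
aws ce update-cost-allocation-tags-status \
    --cost-allocation-tags-status TagKey=team,Status=Active TagKey=cost-center,Status=Active
```

Once activated, the tags take a day or so to start appearing on new cost data in Cost Explorer.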
Now, let’s get into this week’s AWS news…
Headlines
Amazon Bedrock now offers Claude Mythos in preview. Anthropic’s most sophisticated AI model to date is now available on Amazon Bedrock as a gated research preview through Project Glasswing. Claude Mythos introduces a new model class focused on cybersecurity, capable of identifying sophisticated security vulnerabilities in software, analyzing large codebases, and delivering state-of-the-art performance across cybersecurity, coding, and complex reasoning tasks. Security teams can use it to discover and address vulnerabilities in critical software before threats emerge. Access is currently limited to allowlisted organizations, with Anthropic and AWS prioritizing internet-critical companies and open source maintainers.
AWS Agent Registry for centralized agent discovery and governance, now in preview. AWS launched Agent Registry through Amazon Bedrock AgentCore, providing organizations with a private catalog for discovering and managing AI agents, tools, skills, MCP servers, and custom resources. The registry helps teams locate existing capabilities rather than duplicating them, with semantic and keyword search, approval workflows, and CloudTrail audit trails. It is accessible via the AgentCore console, AWS CLI, SDK, and as an MCP server queryable from IDEs.
Last week’s launches
Here are some launches and updates from this past week that caught my attention:
For a full list of AWS announcements, be sure to keep an eye on the What’s New with AWS page.
Other AWS news
Here are some additional posts and resources that you might find interesting:
Upcoming AWS events
Check your calendar and sign up for upcoming AWS events:
Browse here for upcoming AWS-led in-person and virtual events, startup events, and developer-focused events.
That’s all for this week. Check back next Monday for another Weekly Roundup!
~ micah