Thursday, April 16, 2026

Introducing Anthropic’s Claude Opus 4.7 model in Amazon Bedrock

Today, we’re announcing Claude Opus 4.7 in Amazon Bedrock, Anthropic’s most intelligent Opus model for advancing performance across coding, long-running agents, and professional work.

Claude Opus 4.7 is powered by Amazon Bedrock’s next generation inference engine, delivering enterprise-grade infrastructure for production workloads. Bedrock’s new inference engine has scheduling and scaling logic that dynamically allocates capacity to requests, improving availability, particularly for steady-state workloads, while making room for rapidly scaling services. It provides zero operator access—meaning customer prompts and responses are never visible to Anthropic or AWS operators—keeping sensitive data private.

According to Anthropic, the Claude Opus 4.7 model provides improvements across the workflows that teams run in production, such as agentic coding, knowledge work, visual understanding, and long-running tasks. Opus 4.7 works better through ambiguity, is more thorough in its problem solving, and follows instructions more precisely.

  • Agentic coding: The model extends Opus 4.6’s lead in agentic coding, with stronger performance on long-horizon autonomy, systems engineering, and complex code reasoning tasks. According to Anthropic, the model scores 64.3% on SWE-bench Pro, 87.6% on SWE-bench Verified, and 69.4% on Terminal-Bench 2.0.
  • Knowledge work: The model advances professional knowledge work, with stronger performance on document creation, financial analysis, and multi-step research workflows. The model reasons through underspecified requests, making sensible assumptions and stating them clearly, and self-verifies its output to improve quality on the first pass. According to Anthropic, the model reaches 64.4% on Finance Agent v1.1.
  • Long-running tasks: The model stays on track over longer horizons, with stronger performance over its full 1M token context window as it reasons through ambiguity and self-verifies its output.
  • Vision: The model adds high-resolution image support, improving accuracy on charts, dense documents, and screen UIs where fine detail matters.

The model is an upgrade from Opus 4.6, but it may require prompting changes and harness tweaks to get the best results. To learn more, visit Anthropic’s prompting guide.

Claude Opus 4.7 model in action
You can get started with Claude Opus 4.7 in the Amazon Bedrock console. Choose Playground under the Test menu, then choose Claude Opus 4.7 when you select a model. You can now test your complex coding prompts with the model.

I run the following example prompt about a technical architecture decision:
Design a distributed architecture on AWS in Python that should support 100k requests per second across multiple geographic regions.

You can also access the model programmatically using the Anthropic Messages API to call bedrock-runtime through the Anthropic SDK or bedrock-mantle endpoints, or keep using the Invoke and Converse APIs on bedrock-runtime through the AWS Command Line Interface (AWS CLI) and AWS SDKs.

To make your first API call to Amazon Bedrock in minutes, choose Quickstart in the left navigation pane of the console. After choosing your use case, you can generate a short-term API key to authenticate your requests for testing purposes.

When you choose an API method, such as the OpenAI-compatible Responses API, the console provides sample code for making an inference request to the model.
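As a hedged sketch of what that sample code can look like for a Responses API style call, the snippet below only assembles a request body; the field names ("input", "max_output_tokens") follow the general Responses API shape but are assumptions here, so rely on the console's generated sample for the exact endpoint URL, headers, and payload:

```python
import json

def build_responses_request(prompt: str, model_id: str = "anthropic.claude-opus-4-7") -> dict:
    """Assemble a minimal Responses-API-style body (field names are assumptions)."""
    return {
        "model": model_id,
        "input": prompt,
        "max_output_tokens": 1024,
    }

body = build_responses_request("Summarize the trade-offs of a multi-region active-active design.")
print(json.dumps(body, indent=2))
# POST this body to the endpoint shown in the console sample, authenticating
# with the short-term API key generated in the Quickstart flow.
```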


To invoke the model through the Anthropic Claude Messages API, you can proceed as follows using the anthropic[bedrock] SDK package for a streamlined experience:

from anthropic import AnthropicBedrockMantle

REGION = "us-east-1"  # set to a Region where the model is available

# Initialize the Bedrock Mantle client (uses SigV4 auth automatically)
mantle_client = AnthropicBedrockMantle(aws_region=REGION)

# Create a message using the Messages API
message = mantle_client.messages.create(
    model="anthropic.claude-opus-4-7",
    max_tokens=2048,
    messages=[
        {"role": "user", "content": "Design a distributed architecture on AWS in Python that should support 100k requests per second across multiple geographic regions"}
    ],
)
print(message.content[0].text)

You can also run the following command to invoke the model directly against the bedrock-runtime endpoint using the AWS CLI and the Invoke API:

aws bedrock-runtime invoke-model \
  --model-id anthropic.claude-opus-4-7 \
  --region us-east-1 \
  --body '{"messages": [{"role": "user", "content": "Design a distributed architecture on AWS in Python that should support 100k requests per second across multiple geographic regions."}], "max_tokens": 512, "temperature": 0.5, "top_p": 0.9}' \
  --cli-binary-format raw-in-base64-out \
  invoke-model-output.txt

For more intelligent reasoning, you can use adaptive thinking with Claude Opus 4.7, which lets Claude dynamically allocate thinking token budgets based on the complexity of each request.
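As an illustration of what an adaptive thinking request might look like in an Invoke API body, here is a sketch; the exact "thinking" field values are an assumption modeled on Anthropic's extended thinking request format, so check Anthropic's documentation for the final parameter names:

```python
import json

# Sketch of an Invoke API request body with adaptive thinking enabled.
# The {"type": "adaptive"} value is an assumption, not a confirmed parameter.
request_body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 2048,
    "thinking": {"type": "adaptive"},  # model sizes its own thinking budget per request
    "messages": [
        {
            "role": "user",
            "content": "Design a distributed architecture on AWS in Python that should support 100k requests per second across multiple geographic regions",
        }
    ],
}
print(json.dumps(request_body, indent=2))
```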

To learn more, visit the Anthropic Claude Messages API and check out code examples for multiple use cases and a variety of programming languages.

Things to know
Let me share some important technical details that I think you’ll find useful.

  • Choosing APIs: You can choose from a variety of Bedrock APIs for model inference, as well as the Anthropic Messages API. The Bedrock-native Converse API supports multi-turn conversations and Guardrails integration. The Invoke API provides direct model invocation and lowest-level control.
  • Scaling and capacity: Bedrock’s new inference engine is designed to rapidly provision and serve capacity across many different models. When accepting requests, we prioritize keeping steady state workloads running, and ramp usage and capacity rapidly in response to changes in demand. During periods of high demand, requests are queued, rather than rejected. Up to 10,000 requests per minute (RPM) per account per Region are available immediately, with more available upon request.
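For instance, a Converse API call from Python can be sketched as follows; the request shape is the standard Converse format with the model ID from the examples above, and the actual network call is left commented out because it requires AWS credentials:

```python
def build_converse_request(prompt: str, model_id: str = "anthropic.claude-opus-4-7") -> dict:
    """Assemble the keyword arguments for bedrock-runtime's Converse API."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
    }

kwargs = build_converse_request(
    "Design a distributed architecture on AWS in Python that should support "
    "100k requests per second across multiple geographic regions."
)

# With credentials configured, send the request with boto3:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.converse(**kwargs)
# print(response["output"]["message"]["content"][0]["text"])
```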

Now available
Anthropic’s Claude Opus 4.7 model is available today in the US East (N. Virginia), Asia Pacific (Tokyo), Europe (Ireland), and Europe (Stockholm) Regions; check the full list of Regions for future updates. To learn more, visit the Claude by Anthropic in Amazon Bedrock page and the Amazon Bedrock pricing page.

Give Anthropic’s Claude Opus 4.7 a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.

Channy



from AWS News Blog https://ift.tt/1HbtSc0
via IFTTT

Tuesday, April 14, 2026

AWS Interconnect is now generally available, with a new option to simplify last-mile connectivity

Today, we’re announcing the general availability of AWS Interconnect – multicloud, a managed private connectivity service that connects your Amazon Virtual Private Cloud (Amazon VPC) directly to VPCs on other cloud providers. We’re also introducing AWS Interconnect – last mile, a new capability that simplifies how you establish high-speed, private connections to AWS from your branch offices, data centers, and remote locations through your existing network providers.

Large enterprises increasingly run workloads across multiple cloud providers, whether to use specialized services, meet data residency requirements, or support teams that have standardized on different providers. Connecting those environments reliably and securely has historically required significant coordination: managing VPN tunnels, working with colocation facilities, and configuring third-party network fabrics. The result is that your networking team spends time on undifferentiated heavy lifting instead of focusing on the applications that matter to your business.

AWS Interconnect is the answer to these challenges. It is a managed connectivity service that simplifies connectivity into AWS. Interconnect lets you establish private, high-speed network connections with dedicated bandwidth to and from AWS across hybrid and multicloud environments. You can configure resilient, end-to-end connectivity in a few clicks in the AWS Management Console by selecting your location, partner, or cloud provider, preferred Region, and bandwidth requirements, removing the friction of discovering partners and the complexity of manual network configuration.

It comes with two capabilities: multicloud connectivity between AWS and other cloud providers, and last-mile connectivity between AWS and your private on-premises networks. Both capabilities are built on the same principle: a fully managed, turnkey experience that removes the infrastructure complexity from your team.

AWS Interconnect – multicloud
AWS Interconnect – multicloud gives you a private, managed Layer 3 connection between your AWS environment and other cloud providers, starting with Google Cloud, and Microsoft Azure coming later in 2026. Traffic flows entirely over the AWS global backbone and the partner cloud’s private network, so it never traverses the public internet. This means you get predictable latency, consistent throughput, and isolation from internet congestion without having to manage any physical infrastructure yourself.

Security is built in by default. Every connection uses IEEE 802.1AE MACsec encryption on the physical links between AWS routers and the partner cloud provider’s routers at the interconnection facilities. You don’t need to configure these separately. Note that each cloud provider manages encryption independently on its own backbone, so you should review the encryption documentation for your specific deployment to verify it meets your compliance requirements. Resiliency is also built in: each connection spans multiple logical links distributed across at least two physical facilities, so a single device or building failure does not interrupt your connectivity.

AWS Interconnect - multicloud - architecture

For monitoring, AWS Interconnect – multicloud integrates with Amazon CloudWatch. You get a Network Synthetic Monitor included with each connection to track round-trip latency and packet loss, and bandwidth utilization metrics to support capacity planning.

AWS has published the underlying specification on GitHub under the Apache 2.0 license, providing any cloud service provider the opportunity to collaborate with AWS Interconnect – multicloud. To become an AWS Interconnect partner, cloud providers must implement the technical specification and meet AWS operational requirements, including resiliency standards, support commitments, and service level agreements.

How it works
Provisioning a connection takes minutes. I create the connection from the AWS Direct Connect console. I start from the AWS Interconnect section and select Google Cloud as the provider. I select my source and destination regions. I specify bandwidth, and provide my Google Cloud project ID. AWS generates an activation key that I use on the Google Cloud side to complete the connection. Routes propagate automatically in both directions, and my workloads can start exchanging data shortly after.

AWS Interconnect - multicloud - provisioning

For this demo, I start with a single VPC and I connect it to a Google Cloud VPC. I use a Direct Connect Gateway. It’s the simplest path: one connection, one attachment, and my workloads on both sides can start talking to each other in minutes.

Step 1: request an interconnect in the AWS Management Console.

I navigate to AWS Direct Connect, AWS Interconnect and I select Create. I first choose the cloud provider I want to connect to. In this example, Google Cloud.

AWS interconnect - 1

Then, I choose the AWS Region (eu-central-1) and the Google Cloud Region (europe-west3).

AWS interconnect - 2

In step 3, I enter a Description, choose the Bandwidth, the Direct Connect gateway to attach, and the ID of my Google Cloud project.

AWS interconnect - 3

After reviewing and confirming the request, the console gives me an activation key. I will use that key to validate the request on the Google Cloud side.

AWS interconnect - 4

Step 2: create the transport and VPC Peering resources on my Google Cloud Platform (GCP) account.

Now that I have the activation key, I continue the process on the GCP side. At the time of this writing, no web-based console was available. I choose to use the GCP command line (CLI) instead. I take note of the CIDR range in the GCP VPC subnet in the europe-west3 region. Then, I open a Terminal and type:

gcloud network-connectivity transports create aws-news-blog \
    --region=europe-west3  \
    --activation-key=${ACTIVATION_KEY} \
    --network=default \
    --advertised-routes=10.156.0.0/20

Create request issued for: [aws-news-blog]
...
peeringNetwork: projects/oxxxp-tp/global/networks/transport-9xxxf-vpc
...
state: PENDING_CONFIG
updateTime: '2026-03-19T09:30:51.103979219Z'

It takes a couple of minutes for the command to complete. Once the command returns, I create a peering between my GCP VPC and the new transport I just created. I can do that in the GCP console or with the gcloud command line. Because I was already using the Terminal for the previous command, I continue with the command line:

gcloud compute networks peerings create aws-news-blog \
      --network=default \
      --peer-network=projects/oxxxp-tp/global/networks/transport-9xxxf-vpc \
      --import-custom-routes \
      --export-custom-routes

The network name is the name of my GCP VPC. The peer network is given in the output of the previous command.

Once completed, I can verify the peering in the GCP console.
AWS Interconnect - Peering in the Google console

In the AWS Interconnect console, I verify the status is available.

AWS Interconnect available

In the AWS Direct Connect console, under Direct Connect gateways, I see the attachment to the new interconnect.

AWS Interconnect attachment

Step 3: associate the new gateway on the AWS side

I select Gateway associations and Associate gateway to attach the Virtual Private Gateway (VGW) that I created before starting this demo (make sure to use a VGW in the same AWS Region as the interconnect).

AWS Interconnect associate CGW

You don’t need to configure the network routing on the GCP side. On AWS, there is a final step: add a route entry in your VPC Route tables to send all traffic to the GCP IP address range through the Virtual Gateway.

VPC Route to the VGW

Once the network setup is done, I start two compute instances, one on AWS and one on GCP.

On AWS, I verify the Security Group accepts ingress traffic on TCP:8080. I connect to the machine and I start a minimal web server:

python3 -c \
"from http.server import HTTPServer, BaseHTTPRequestHandler 
class H(BaseHTTPRequestHandler):
   def do_GET(self):
      self.send_response(200);self.end_headers()
      self.wfile.write(b'Hello AWS World!\n\n')
HTTPServer(('',8080),H).serve_forever()"

On the GCP side, I open an SSH session to the machine and call the AWS web server by its private IP address.

AWS Interconnect : curl from GCP to AWS

Et voilà! I have a private network route between my two networks, entirely managed by the two Cloud Service Providers.

Things to know
There are a couple of configuration options that you should keep in mind:

  • When connecting networks, pay attention to the IP address ranges on both sides. The GCP and AWS VPC ranges can’t overlap. For this demo, the default range on AWS was 172.31.0.0/16 and the default on GCP was 10.156.0.0/20, so I was able to proceed with these default values.
  • You can configure IPv4, IPv6, or both on each side. You must select the same option on both sides.
  • The Maximum Transmission Unit (MTU), the largest packet size, in bytes, that a network interface can transmit without fragmentation, must be the same on both VPCs. The default values for AWS VPCs and GCP VPCs differ. Mismatched MTU sizes between peered VPCs cause packet drops or fragmentation, leading to silent data loss, degraded throughput, and broken connections across the interconnect.
  • For more details, refer to the GCP Partner Cross Cloud Interconnect and the AWS Interconnect User Guide.
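The overlap check from the first bullet is easy to automate before provisioning anything. A small stdlib sketch, using the default CIDR ranges from the demo above:

```python
import ipaddress

def ranges_overlap(cidr_a: str, cidr_b: str) -> bool:
    """Return True if two CIDR blocks share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Defaults from the demo: no overlap, so the peering can proceed.
print(ranges_overlap("172.31.0.0/16", "10.156.0.0/20"))  # False

# Two RFC 1918 blocks that do collide:
print(ranges_overlap("10.0.0.0/16", "10.0.128.0/20"))    # True
```

Running this against your actual VPC and subnet ranges before requesting the interconnect avoids a failed peering later.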

Reference architectures
When your deployment grows and you have multiple VPCs in a single region, AWS Transit Gateway gives you a centralized routing hub to connect them all through a single Interconnect attachment. You can segment traffic between environments, apply consistent routing policies, and integrate AWS Network Firewall if you need to inspect what crosses the cloud boundary.

And when you’re operating at global scale, with workloads spread across multiple AWS Regions and multiple Google Cloud environments, AWS Cloud WAN extends that same model across the world. Any region in your network can reach any Interconnect attachment globally, with centralized policy management and segment-based routing that applies consistently everywhere you operate.

My colleagues Alexandra and Santiago documented these reference architectures in their blog post: Build resilient and scalable multicloud connectivity architectures with AWS Interconnect – multicloud.

AWS Interconnect – last mile
Based on the same architecture and design as AWS Interconnect – multicloud, AWS Interconnect – last mile lets you connect your on-premises or remote locations to AWS through a participating network provider’s last-mile infrastructure, directly from the AWS Management Console.

The onboarding process mirrors AWS Interconnect – multicloud: you select a provider, authenticate, and specify your connection endpoints and bandwidth. AWS generates an activation key that you provide in the provider console to complete the configuration. AWS Interconnect – last mile automatically provisions four redundant connections across two physical locations, configures BGP routing, and activates MACsec encryption and Jumbo Frames by default. The result is a resilient private connection to AWS that aligns with best practices, without requiring you to manually configure networking components.

AWS Interconnect - lastmile

AWS Interconnect – last mile supports bandwidths from 1 Gbps to 100 Gbps, and you can adjust bandwidth from the console without reprovisioning. The service includes a 99.99% availability SLA up to the Direct Connect port and bundles CloudWatch Network Synthetic Monitor for connection health monitoring. Just like AWS Interconnect – multicloud, AWS Interconnect – last mile attaches to a Direct Connect Gateway, which connects to your Virtual Private Gateway, Transit Gateway, or AWS Cloud WAN deployment. For more details, refer to the AWS Interconnect User Guide.

Scott Yow, SVP Product at Lumen Technologies, wrote:

By combining AWS Interconnect – last mile with Lumen fiber network and Cloud Interconnect, we simplify the last-mile complexity that often slows cloud adoption and enable a faster, and more resilient path to AWS for customers.

Pricing and availability
AWS Interconnect – multicloud and AWS Interconnect – last mile pricing is based on a flat hourly rate for the capacity you request, billed pro rata by the hour. You select the bandwidth tier that fits your workload needs.

AWS Interconnect – multicloud pricing varies by region pair: a connection between US East (N. Virginia) and Google Cloud N. Virginia is priced differently from a connection between US East (N. Virginia) and a more distant region. When you use AWS Cloud WAN, the global any-to-any routing model means traffic can traverse multiple regions, which affects the total cost of your deployment. I recommend reviewing the AWS Interconnect pricing page for the full rate card by region pair and capacity tier before sizing your connection.
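Because the model is a flat hourly rate, estimating monthly spend is a simple product. As a sizing illustration with made-up numbers (the rate below is a placeholder, not an actual AWS Interconnect price; use the pricing page for the real rate card):

```python
def monthly_interconnect_cost(hourly_rate_usd: float, hours_connected: float) -> float:
    """Flat hourly pricing, billed pro rata: cost = rate x connected hours."""
    return round(hourly_rate_usd * hours_connected, 2)

# Placeholder rate of $2.50/hour for some bandwidth tier, connected for a
# full 730-hour month.
print(monthly_interconnect_cost(2.50, 730))  # 1825.0
```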

AWS Interconnect – multicloud is available today in five region pairs: US East (N. Virginia) to Google Cloud N. Virginia, US West (N. California) to Google Cloud Los Angeles, US West (Oregon) to Google Cloud Oregon, Europe (London) to Google Cloud London, and Europe (Frankfurt) to Google Cloud Frankfurt. Microsoft Azure support is coming later in 2026.

AWS Interconnect – last mile is launching in US East (N. Virginia) with Lumen as the initial partner. Additional partners, including AT&T and Megaport, are in progress, and additional regions are planned.

To get started with AWS Interconnect, visit the AWS Direct Connect console and select AWS Interconnect from the navigation menu.

I’d love to hear how you’re using AWS Interconnect in your environment. Leave a comment below or reach out through the AWS re:Post community.

— seb

from AWS News Blog https://ift.tt/wtVPQ9S
via IFTTT

Monday, April 13, 2026

AWS Weekly Roundup: Claude Mythos Preview in Amazon Bedrock, AWS Agent Registry, and more (April 13, 2026)

In my last Week in Review post, I mentioned how much time I’ve been spending on AI-Driven Development Lifecycle (AI-DLC) workshops with customers this year. A common theme in those sessions is the need for better cost visibility. Teams are moving fast with AI, but as they go from experimenting to full production, finance and leadership really need to know who is using which resources and at what cost. That’s why I was so excited to see the launch of Amazon Bedrock’s new support for cost allocation by IAM user and role this week. This lets you tag IAM principals with attributes like team or cost center and then activate those tags in your Billing and Cost Management console. The resulting cost data flows into AWS Cost Explorer and the detailed Cost and Usage Report, giving you a clear line of sight into model inference spending. Whether you’re scaling agents across teams, tracking foundation model use by department, or running tools like Claude Code on Amazon Bedrock, this new feature is a game changer for tracking and managing your AI investments. You can get all the details on setting this up in the IAM principal cost allocation documentation.

Now, let’s get into this week’s AWS news…

Headlines
Amazon Bedrock now offers Claude Mythos Preview
Anthropic’s most sophisticated AI model to date is now available on Amazon Bedrock as a gated research preview through Project Glasswing. Claude Mythos introduces a new model class focused on cybersecurity, capable of identifying sophisticated security vulnerabilities in software, analyzing large codebases, and delivering state-of-the-art performance across cybersecurity, coding, and complex reasoning tasks. Security teams can use it to discover and address vulnerabilities in critical software before threats emerge. Access is currently limited to allowlisted organizations, with Anthropic and AWS prioritizing internet-critical companies and open source maintainers.

AWS Agent Registry for centralized agent discovery and governance now in preview
AWS launched Agent Registry through Amazon Bedrock AgentCore, providing organizations with a private catalog for discovering and managing AI agents, tools, skills, MCP servers, and custom resources. The registry helps teams locate existing capabilities rather than duplicating them, with semantic and keyword search, approval workflows, and CloudTrail audit trails. It is accessible via the AgentCore console, AWS CLI, SDK, and as an MCP server queryable from IDEs.

Last week’s launches
Here are some launches and updates from this past week that caught my attention:

  • Announcing Amazon S3 Files, making S3 buckets accessible as file systems — Amazon S3 Files transforms S3 buckets into shared file systems that connect any AWS compute resource directly with your S3 data. Built on Amazon EFS technology, it delivers full file system semantics with low latency performance, caching actively used data and providing multiple terabytes per second of aggregate read throughput. Applications can access S3 data through both file system and S3 APIs simultaneously without code modifications or data migration.
  • Amazon OpenSearch Service supports Managed Prometheus and agent tracing — Amazon OpenSearch Service now provides a unified observability platform that consolidates metrics, logs, traces, and AI agent tracing into a single interface. The update includes native Prometheus integration with direct PromQL query support, RED metrics monitoring, and OpenTelemetry GenAI semantic convention support for LLM execution visibility. Operations teams can correlate slow traces to logs and overlay Prometheus metrics on dashboards without switching between tools.
  • Amazon WorkSpaces Advisor now available for AI-powered troubleshooting — AWS launched Amazon WorkSpaces Advisor, an AI-powered administrative tool that uses generative AI to help IT administrators troubleshoot Amazon WorkSpaces Personal deployments. It analyzes WorkSpace configurations, detects problems automatically, and provides actionable recommendations to restore service and optimize performance.
  • Amazon Braket adds support for Rigetti’s 108-qubit Cepheus QPU — Amazon Braket now offers access to Rigetti’s Cepheus-1-108Q device, the first 100+ qubit superconducting quantum processor on the platform. The modular design features twelve 9-qubit chiplets with CZ gates that offer enhanced resilience to phase errors. It supports multiple frameworks including the Braket SDK, Qiskit, CUDA-Q, and PennyLane, with pulse-level control for researchers.

For a full list of AWS announcements, be sure to keep an eye on the What’s New with AWS page.

Other AWS news
Here are some additional posts and resources that you might find interesting:

Upcoming AWS events
Check your calendar and sign up for upcoming AWS events:

  • What’s Next with AWS (April 28, Virtual) Join this livestream at 9am PT for a candid discussion about how agentic AI is transforming how businesses operate. Featuring AWS CEO Matt Garman, SVP Colleen Aubrey, and OpenAI leaders discussing emerging agent capabilities, Amazon’s internal experiences, and new agentic solutions and platform capabilities.

Browse here for upcoming AWS-led in-person and virtual events, startup events, and developer-focused events.


That’s all for this week. Check back next Monday for another Weekly Roundup!

~ micah



from AWS News Blog https://ift.tt/2DWPGjh
via IFTTT

Tuesday, April 7, 2026

Launching S3 Files, making S3 buckets accessible as file systems

I’m excited to announce Amazon S3 Files, a new file system that seamlessly connects any AWS compute resource with Amazon Simple Storage Service (Amazon S3).

More than a decade ago, as an AWS trainer, I spent countless hours explaining the fundamental differences between object storage and file systems. My favorite analogy was comparing S3 objects to books in a library (you can’t edit a page, you need to replace the whole book) versus files on your computer that you can modify page by page. I drew diagrams, created metaphors, and helped customers understand why they needed different storage types for different workloads. Well, today that distinction becomes a bit more flexible.

With S3 Files, Amazon S3 is the first and only cloud object store that offers fully-featured, high-performance file system access to your data. It makes your buckets accessible as file systems. This means changes to data on the file system are automatically reflected in the S3 bucket and you have fine-grained control over synchronization. S3 Files can be attached to multiple compute resources enabling data sharing across clusters without duplication.

Until now, you had to choose between Amazon S3’s cost, durability, and native service integrations on the one hand, and a file system’s interactive capabilities on the other. S3 Files eliminates that tradeoff. S3 becomes the central hub for all your organization’s data. It’s accessible directly from any AWS compute instance, container, or function, whether you’re running production applications, training ML models, or building agentic AI systems.

You can access any general purpose bucket as a native file system on your Amazon Elastic Compute Cloud (Amazon EC2) instances, containers running on Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS), or AWS Lambda functions. The file system presents S3 objects as files and directories, supporting all Network File System (NFS) v4.1+ operations like creating, reading, updating, and deleting files.

As you work with specific files and directories through the file system, associated file metadata and contents are placed onto the file system’s high-performance storage. By default, files that benefit from low-latency access are stored and served from the high performance storage. For files not stored on high performance storage such as those needing large sequential reads, S3 Files automatically serves those files directly from Amazon S3 to maximize throughput. For byte-range reads, only the requested bytes are transferred, minimizing data movement and costs.
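Because only the requested bytes are transferred on byte-range reads, standard seek-and-read file operations map naturally onto large objects through the mount. A sketch of the access pattern, using a stand-in local file (on S3 Files, the path would be a file under the mount point, e.g. the ~/s3files directory shown later in this post):

```python
import os
import tempfile

def read_byte_range(path: str, offset: int, length: int) -> bytes:
    """Read `length` bytes starting at `offset`; on S3 Files, only these bytes move."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Stand-in for a large object on the mounted file system.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"header" + b"x" * 1000 + b"footer")

print(read_byte_range(tmp.name, 0, 6))      # b'header'
print(read_byte_range(tmp.name, 1006, 6))   # b'footer'
os.unlink(tmp.name)
```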

The system also supports intelligent pre-fetching to anticipate your data access needs. You also have fine-grained control over what gets stored on the file system’s high performance storage. You can decide whether to load full file data or metadata only, which means you can optimize for your specific access patterns.

Under the hood, S3 Files uses Amazon Elastic File System (Amazon EFS) and delivers ~1ms latencies for active data. The file system supports concurrent access from multiple compute resources with NFS close-to-open consistency, making it ideal for interactive, shared workloads that mutate data, from AI agents collaborating through file-based tools to ML training pipelines processing datasets.

Let me show you how to get started. Creating my first Amazon S3 file system, mounting it, and using it from an EC2 instance is straightforward.

I have an EC2 instance and a general purpose bucket. In this demo, I configure an S3 file system and access the bucket from an EC2 instance, using regular file system commands.

For this demo, I use the AWS Management Console. You can also use the AWS Command Line Interface (AWS CLI) or infrastructure as code (IaC).

Here is the architecture diagram for this demo.

S3 Files demo architecture

Step 1: Create an S3 file system.

On the Amazon S3 section of the console, I choose File systems and then Create file system.

S3 Files create file system

I enter the name of the bucket I want to expose as a file system and choose Create file system.

S3 Files create file system, part 2

Step 2: Discover the mount target.

A mount target is a network endpoint that will live in my virtual private cloud (VPC). It allows my EC2 instance to access the S3 file system.

The console creates the mount targets automatically. I take note of the Mount target IDs on the Mount targets tab.

When using the CLI, two separate commands are necessary to create the file system and its mount targets. First, I create the S3 file system with create-file-system. Then, I create the mount target with create-mount-target.

Step 3: Mount the file system on my EC2 instance.

After it’s connected to an EC2 instance, I type:

sudo mkdir /home/ec2-user/s3files
sudo mount -t s3files fs-0aa860d05df9afdfe:/ /home/ec2-user/s3files

I can now work with my S3 data directly through the mounted file system in ~/s3files, using standard file operations.

When I make updates to my files in the file system, S3 automatically manages and exports all updates back to my S3 bucket within minutes, as a new object or a new version of an existing object.

Changes made to objects on the S3 bucket are visible in the file system within a few seconds but can sometimes take a minute or longer.

# Create a file on the EC2 file system 
echo "Hello S3 Files" > s3files/hello.txt 

# and verify it's here 
ls -al s3files/hello.txt
 -rw-r--r--. 1 ec2-user ec2-user 15 Oct 22 13:03 s3files/hello.txt 

# See? the file is also on S3 
aws s3 ls s3://s3files-aws-news-blog/hello.txt 
2025-10-22 13:04:04 15 hello.txt 

# And the content is identical! 
aws s3 cp s3://s3files-aws-news-blog/hello.txt . && cat hello.txt
Hello S3 Files

Things to know
Let me share some important technical details that I think you’ll find useful.

A question I frequently hear in customer conversations is about choosing the right file service for your workloads. Yes, I know what you’re thinking: AWS and its seemingly overlapping services, keeping cloud architects entertained during their architecture review meetings. Let me help demystify this one.

S3 Files works best when you need interactive, shared access to data that lives in Amazon S3 through a high-performance file system interface. It’s ideal for workloads where multiple compute resources—whether production applications, AI agents using Python libraries and CLI tools, or machine learning (ML) training pipelines—need to read, write, and mutate data collaboratively. You get shared access across compute clusters without data duplication, sub-millisecond latency, and automatic synchronization with your S3 bucket.

For workloads migrating from on-premises NAS environments, Amazon FSx provides the familiar features and compatibility you need. Amazon FSx is also ideal for high-performance computing (HPC) and GPU cluster storage with Amazon FSx for Lustre. It’s particularly valuable when your applications require specific file system capabilities from Amazon FSx for NetApp ONTAP, Amazon FSx for OpenZFS, or Amazon FSx for Windows File Server.

Pricing and availability
S3 Files is available today in all commercial AWS Regions.

You pay for the data stored in your S3 file system, for small-file reads and all write operations to the file system, and for S3 requests during data synchronization between the file system and the S3 bucket. The Amazon S3 pricing page has all the details.

From discussions with customers, I believe S3 Files helps simplify cloud architectures by eliminating data silos, synchronization complexity, and manual data movement between objects and files. Whether you’re running production tools that already work with file systems, building agentic AI systems that rely on file-based Python libraries and shell scripts, or preparing datasets for ML training, S3 Files lets these interactive, shared, hierarchical workloads access S3 data directly without choosing between the durability and cost benefits of Amazon S3 and a file system’s interactive capabilities. You can now use Amazon S3 as the place for all your organization’s data, knowing the data is accessible directly from any AWS compute instance, container, and function.

To learn more and get started, visit the S3 Files documentation.

I’d love to hear how you use this new capability. Feel free to share your feedback in the comments below.

— seb

from AWS News Blog https://ift.tt/yv3RSHw
via IFTTT

Monday, April 6, 2026

AWS Weekly Roundup: AWS DevOps Agent & Security Agent GA, Product Lifecycle updates, and more (April 6, 2026)

Last week, I visited AWS Hong Kong User Group with my team. Hong Kong has a small but strong community, and their energy and passion are high. They recently started a new AI user group, and we hope more people will join. I was able to strengthen my bond with the community through great food and conversation.

This week, I’ll first take a closer look at some of the key launches.

AWS DevOps Agent and Security Agent GA
At the last re:Invent, we introduced the concept of frontier agents that work autonomously across multiple steps to achieve outcomes, operating continuously until the job is done. The first two—AWS DevOps Agent and AWS Security Agent—are now generally available after the preview.

AWS DevOps Agent helps you run cloud operations—investigating incidents, reducing time to resolution, and preventing issues before they happen. Customers like United Airlines, Western Governors University, and T-Mobile are already using DevOps Agent to accelerate incident response and simplify operations at scale. At WGU, resolution time dropped from hours to minutes, and preview customers report up to 75% lower MTTR and 3 to 5 times faster resolution. Learn more in Sébastien’s preview blog post and GA announcement.

AWS Security Agent brings continuous, context-aware penetration testing into the development lifecycle. This agent operates like a human penetration tester. Customers including LG CNS, HENNGE, and Wayspring are seeing strong results. At LG CNS, teams estimate over 50% faster testing and ~30% lower costs, along with significantly fewer false positives. Learn more in Esra’s preview blog post and GA announcement.

Both are designed to work across AWS cloud, multicloud, and on-prem environments. You can have an always-available teammate that can handle the heavy lifting, so you can focus on what matters most.

AWS Service Availability Updates
When the availability of an AWS service or feature changes, we provide customers guidance in AWS Product Lifecycle Changes on available alternatives and support for migration so that disruptions to your operations are minimized. The following lifecycle changes were updated on March 31, 2026.

We understand that changes in availability can impact your operations. For specific guidance, consult the relevant service documentation or contact AWS Support.

Last week’s launches
Here are last week’s launches that caught my attention:

For a full list of AWS announcements, be sure to keep an eye on the What’s New with AWS page.

Additional updates
Here are some additional news items that you might find interesting:

For a full list of AWS blog posts, be sure to keep an eye on the AWS Blogs page.

Learn more about AWS, browse and join upcoming AWS-led in-person and virtual events, startup events, and developer-focused events as well as AWS Summits and AWS Community Days. Join the AWS Builder Center to connect with builders, share solutions, and access content that supports your development.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Channy



from AWS News Blog https://ift.tt/ZilXwST
via IFTTT

Friday, April 3, 2026

Amazon Bedrock Guardrails supports cross-account safeguards with centralized control and management

Today, we’re announcing the general availability of cross-account safeguards in Amazon Bedrock Guardrails, a new capability that enables centralized enforcement and management of safety controls across multiple AWS accounts within an organization.

With this new capability, you can specify a guardrail in a new Amazon Bedrock policy within the management account of your organization that automatically enforces configured safeguards across all member entities for every model invocation with Amazon Bedrock. This organization-wide implementation supports uniform protection across all accounts and generative AI applications with centralized control and management. This capability also offers flexibility to apply account-level and application-specific controls depending on use case requirements in addition to organizational safeguards.

  • Organization-level enforcements apply a single guardrail from your organization’s management account to all entities within the organization through policy settings. This guardrail automatically enforces filters across all member entities, including organizational units (OUs) and individual accounts, for all Amazon Bedrock model invocations.
  • Account-level enforcement enables automatic enforcement of configured safeguards across all Amazon Bedrock model invocations in your AWS account. The configured safeguards in the account-level guardrail apply to all inference API calls.

You can now establish and centrally manage dependable, comprehensive protection through a single, unified approach. This supports consistent adherence to corporate responsible AI requirements while significantly reducing the administrative burden of monitoring individual accounts and applications. Your security team no longer needs to oversee and verify configurations or compliance for each account independently.

Getting started with centralized enforcement in Amazon Bedrock Guardrails
You can get started with account-level and organization-level enforcement configuration in the Amazon Bedrock Guardrails console. Before configuring enforcement, you need to create a guardrail with a specific version, so that the guardrail configuration remains immutable and can’t be modified by member accounts, and complete the prerequisites for using the new capability, such as resource-based policies for guardrails.

To enable account-level enforcement, choose Create in the Account-level enforcement configurations section.

You can choose the guardrail and version to automatically apply to all Bedrock inference calls from this account in this Region. With general availability, we introduce a new option that defines which models are affected by the enforcement, with either Include or Exclude behavior.

You can also configure selective content guarding controls for system prompts and user prompts, choosing either Comprehensive or Selective mode.

  • Use Comprehensive when you want to enforce guardrails on everything, regardless of what the caller tags. This is the safer default when you don’t want to rely on callers to correctly identify sensitive content.
  • Use Selective when you trust callers to tag the right content and want to reduce unnecessary guardrail processing. This is useful when callers handle a mix of pre-validated and user-generated content, and only need guardrails applied to specific portions.
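As a sketch of what Selective tagging can look like at the API level: the Converse API’s guardContent block is the existing mechanism for marking which content a guardrail should evaluate. Whether Selective enforcement uses exactly this block is an assumption on my part; the message below only illustrates the shape, and the command is echoed so the sketch runs without AWS credentials. The model ID is just an example:

```shell
# Selective-mode sketch: only content wrapped in a guardContent block is
# tagged for guardrail evaluation; the plain text block is left untagged.
# guardContent is the real Converse tagging block, but whether Selective
# enforcement uses exactly this mechanism is an assumption here.
MESSAGES='[{"role":"user","content":[
  {"text":"Pre-validated context that needs no screening."},
  {"guardContent":{"text":{"text":"User-generated text to evaluate."}}}
]}]'

# Echoed so the sketch runs without AWS credentials.
echo aws bedrock-runtime converse \
    --model-id anthropic.claude-sonnet-4-20250514-v1:0 \
    --messages "$MESSAGES"
```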

After creating the enforcement, you can test and verify enforcement using a role in your account. The account-enforced guardrail should automatically apply to both prompts and outputs.

Check the response for guardrail assessment information. The guardrail response will include enforced guardrail information. You can also test by making a Bedrock inference call using InvokeModel, InvokeModelWithResponseStream, Converse, or ConverseStream APIs.
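A minimal verification call from the account might look like the following sketch, assuming account-level enforcement is already in place. The model ID is just an example, and the command is echoed so the sketch runs without AWS credentials:

```shell
# Verify the enforced guardrail with a Converse call (Bedrock Runtime CLI).
MODEL_ID="anthropic.claude-sonnet-4-20250514-v1:0"   # example model ID
MESSAGES='[{"role":"user","content":[{"text":"A prompt the guardrail should block."}]}]'

# Echoed so the sketch runs offline; drop the echo to execute.
echo aws bedrock-runtime converse --model-id "$MODEL_ID" --messages "$MESSAGES"

# In the JSON response, a blocked request surfaces as
#   "stopReason": "guardrail_intervened"
# and the guardrail assessment appears in the response trace.
```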

To enable organization-level enforcement, go to the AWS Organizations console and choose the Policies menu. You can enable Bedrock policies in the console.

You can create a Bedrock policy that specifies your guardrail and attach it to your target accounts or OUs. Choose Bedrock policies enabled and Create policy. Specify your guardrail ARN and version, and configure the input tags setting in AWS Organizations. To learn more, visit Amazon Bedrock policies in AWS Organizations and Amazon Bedrock policy syntax and examples.

After creating the policy, you can attach it to your desired organizational units, accounts, or the organization root on the Targets tab.

Search and select your organization root, OUs, or individual accounts to attach your policy, and choose Attach policy.

You can test that the guardrail is being enforced on member accounts and verify which guardrail is enforced. From an attached member account, you should see the organization-enforced guardrail under the Organization-level enforcement configurations section.

The underlying safeguards within the specified guardrail are then automatically enforced for every model inference request across all member entities, ensuring consistent safety controls. To accommodate varying requirements of individual teams or applications, you can attach different policies with associated guardrails to different member entities through your organization.

Things to know
Here are key considerations about the GA features:

  • You can now choose to include or exclude specific models in Bedrock for inference, enabling centralized enforcement on model invocation calls. You can also choose to safeguard partial or complete system prompts and input prompts. To learn more, visit Apply cross-account safeguards with Amazon Bedrock Guardrails enforcement.
  • Ensure you specify the accurate guardrail Amazon Resource Name (ARN) in the policy. Specifying an incorrect or invalid ARN will result in policy violations, non-enforcement of safeguards, and the inability to use the models in Amazon Bedrock for inference. To learn more, visit Best practices for using Amazon Bedrock policies.
  • Automated Reasoning checks are not supported with this capability.

Now available
Cross-account safeguards in Amazon Bedrock Guardrails is generally available today in all AWS commercial and AWS GovCloud (US) Regions where Bedrock Guardrails is available. For Regional availability and the future roadmap, visit AWS Capabilities by Region. Charges apply to each enforced guardrail according to its configured safeguards. For detailed pricing information on individual safeguards, visit the Amazon Bedrock pricing page.

Give this capability a try in the Amazon Bedrock console and send feedback to AWS re:Post for Amazon Bedrock Guardrails or through your usual AWS Support contacts.

Channy



from AWS News Blog https://ift.tt/MoUy2nj
via IFTTT

Wednesday, April 1, 2026

Announcing managed daemon support for Amazon ECS Managed Instances

Today, we’re announcing managed daemon support for Amazon Elastic Container Service (Amazon ECS) Managed Instances. This new capability extends the managed instances experience we introduced in September 2025 by giving platform engineers independent control over software agents such as monitoring, logging, and tracing tools, without requiring coordination with application development teams. It also improves reliability by ensuring every instance consistently runs the required daemons, and it enables comprehensive host-level monitoring.

When running containerized workloads at scale, platform engineers manage a wide range of responsibilities, from scaling and patching infrastructure to keeping applications running reliably and maintaining the operational agents that support those applications. Until now, many of these concerns were tightly coupled. Updating a monitoring agent meant coordinating with application teams, modifying task definitions, and redeploying entire applications, a significant operational burden when you’re managing hundreds or thousands of services.

Decoupled lifecycle management for daemons

Amazon ECS now introduces a dedicated managed daemons construct that enables platform teams to centrally manage operational tooling. This separation of concerns allows platform engineers to independently deploy and update monitoring, logging, and tracing agents to infrastructure, while enforcing consistent use of required tools across all instances, without requiring application teams to redeploy their services. Daemons are guaranteed to start before application tasks and drain last, ensuring that logging, tracing, and monitoring are always available when your application needs them.

Platform engineers can deploy managed daemons across multiple capacity providers, or target specific capacity providers, giving them flexibility in how they roll out agents across their infrastructure. Resource management is also centralized: teams can define daemon CPU and memory parameters separately from application configurations, with no need to rebuild AMIs or update task definitions. And because each instance runs exactly one daemon copy shared across multiple application tasks, resource utilization is optimized.

Let’s try it out
To take ECS Managed Daemons for a spin, I decided to start with the Amazon CloudWatch Agent as my first managed daemon. I had previously set up an Amazon ECS cluster with a Managed Instance capacity provider using the documentation.

From the Amazon Elastic Container Service console, I noticed a new Daemon task definitions option in the navigation pane, where I can define my managed daemons.

Managed daemons console

I chose Create new daemon task definition to get started. For this example, I configured the CloudWatch Agent with 1 vCPU and 0.5 GB of memory. In the Daemon task definition family field, I entered a name I’d recognize later.

For the Task execution role, I selected ecsTaskExecutionRole from the dropdown. Under the Container section, I gave my container a descriptive name and pasted in the image URI: public.ecr.aws/cloudwatch-agent/cloudwatch-agent:latest along with a few additional details.

After reviewing everything, I chose Create.

Once my daemon task definition was created, I navigated to the Clusters page, selected my previously created cluster and found the new Daemons tab.

Managed daemons 2

Here I choose Create daemon and complete the form to configure my daemon.

Managed daemons 3

Under Daemon configuration, I selected my newly created daemon task definition family and then assigned my daemon a name. For Environment configuration, I selected the ECS Managed Instances capacity provider I had set up earlier. After confirming my settings, I chose Create.

Now ECS automatically ensures the daemon task launches first on every provisioned ECS managed instance in my selected capacity provider. To see this in action, I deployed a sample nginx web service as a test workload. Once my workload was deployed, I could see in the console that ECS Managed Daemons had automatically deployed the CloudWatch Agent daemon alongside my application, with no manual intervention required.

When I later updated my daemon, ECS handled the rolling deployment automatically by provisioning new instances with the updated daemon, starting the daemon first, then migrating application tasks to the new instances before terminating the old ones. This “start before stop” approach ensures continuous daemon coverage: your logging, monitoring, and tracing agents remain operational throughout the update with no gaps in data collection. The drain percentage I configured controlled the pace of this replacement, giving me complete control over daemon updates without any application downtime.

How it works
The managed daemon experience introduces a new daemon task definition that is separate from task definitions, with its own parameters and validation scheme. A new daemon_bridge network mode enables daemons to communicate with application tasks while remaining isolated from application networking configurations.

Managed daemons support advanced host-level access capabilities that are essential for operational tooling. Platform engineers can configure daemon tasks as privileged containers, add additional Linux capabilities, and mount paths from the underlying host filesystem. These capabilities are particularly valuable for monitoring and security agents that require deep visibility into host-level metrics, processes, and system calls.

When a daemon is deployed, ECS launches exactly one daemon process per container instance before placing application tasks. This guarantees that operational tooling is in place before your application starts receiving traffic. ECS also supports rolling deployments with automatic rollbacks, so you can update agents with confidence.
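To make the lifecycle concrete, here is a purely hypothetical sketch of what the CLI flow could look like. The command and flag names below are my assumptions, not the documented API, so check the Amazon ECS API reference for the real operations. Commands are echoed so the sketch runs offline:

```shell
# Hypothetical CLI shapes for the managed daemon flow described above.
# Command and flag names are assumptions -- verify them against the
# Amazon ECS API reference before use.
CLUSTER="my-cluster"
FAMILY="cloudwatch-agent-daemon"   # daemon task definition family

# 1. Register the daemon task definition (separate from application task defs)
echo aws ecs register-daemon-task-definition --cli-input-json file://daemon-def.json

# 2. Create the daemon on the cluster, targeting a Managed Instances capacity provider
echo aws ecs create-daemon --cluster "$CLUSTER" \
    --daemon-name cw-agent \
    --task-definition "$FAMILY" \
    --capacity-provider my-managed-instances-cp

# ECS then launches exactly one daemon copy per provisioned instance,
# before any application tasks are placed on it.
```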

Now available
Managed daemon support for Amazon ECS Managed Instances is available today in all AWS Regions. To get started, visit the Amazon ECS console or review the Amazon ECS documentation. You can also explore the new managed daemon APIs by visiting this website.

There is no additional cost to use managed daemons. You pay only for the standard compute resources consumed by your daemon tasks.



from AWS News Blog https://ift.tt/gzfjrdx
via IFTTT

Tuesday, March 31, 2026

Announcing the AWS Sustainability console: Programmatic access, configurable CSV reports, and Scope 1–3 reporting in one place

As many of you are, I’m a parent. And like you, I think about the world I’m building for my children. That’s part of why today’s launch matters for many of us. I’m excited to announce the launch of the AWS Sustainability console, a standalone service that consolidates all AWS sustainability reporting and resources in one place.

With The Climate Pledge, Amazon set a goal in 2019 to reach net-zero carbon across our operations by 2040. That commitment shapes how AWS builds its data centers and services. In addition, AWS is committed to helping you measure and reduce the environmental footprint of your own workloads. The AWS Sustainability console is the latest step in that direction.

The AWS Sustainability console builds on the Customer Carbon Footprint Tool (CCFT), which lives inside the AWS Billing console, and introduces a new set of capabilities for which you’ve been asking.

Until now, accessing your carbon footprint data required billing-level permissions. That created a practical problem: sustainability professionals and reporting teams often don’t have (and shouldn’t need) access to cost and billing data. Getting the right people access to the right data meant navigating permission structures that weren’t designed with sustainability workflows in mind. The AWS Sustainability console has its own permissions model, independent of the Billing console. Sustainability professionals can now get direct access to emissions data without requiring billing permissions to be granted alongside it.

The console includes Scope 1, 2, and 3 emissions attributed to your AWS usage and shows you a breakdown by AWS Region and by service, such as Amazon CloudFront, Amazon Elastic Compute Cloud (Amazon EC2), and Amazon Simple Storage Service (Amazon S3). The underlying data and methodology haven’t changed with this launch; they are the same as the ones used by the CCFT. We changed how you can access and work with the data.

As sustainability reporting requirements have grown more complex, teams need more flexibility in accessing and working with their emissions data. The console now includes a Reports page where you can download preset monthly and annual carbon emissions reports covering both market-based method (MBM) and location-based method (LBM) data. You can also build a custom comma-separated values (CSV) report by selecting which fields to include, the time granularity, and other filters.

If your organization’s fiscal year doesn’t align with the calendar year, you can now configure the console to match your reporting period. When that is set, all data views and exports reflect your fiscal year and quarters, which removes a common friction point for finance and sustainability teams working in parallel.

You can also use the new API or the AWS SDKs to integrate emissions data into your own reporting pipelines, dashboards, or compliance workflows. This is useful for teams that need to pull data for a specific month across a large number of accounts without setting up a data export or for organizations that need to establish custom account groupings that don’t align with their existing AWS Organizations structure.

You can read about the latest features released and methodology updates directly on the Release notes page on the Learn more tab.

Let’s see it in action
To show you the Sustainability console, I opened the AWS Management Console and searched for “sustainability” in the search bar at the top of the screen.

Sustainability console - carbon emission 1

Sustainability console - carbon emission 2

The Carbon emissions section gives an estimate of your carbon emissions, expressed in metric tons of carbon dioxide equivalent (MTCO2e). It shows the emissions by scope, expressed in both the MBM and the LBM. On the right side of the screen, you can adjust the date range or filter by service, Region, and more.

For those unfamiliar: Scope 1 includes direct emissions from owned or controlled sources (for example, data center fuel use); Scope 2 covers indirect emissions from the production of purchased energy (with MBM accounting for energy attribute certificates and LBM using average local grid emissions); and Scope 3 includes other indirect emissions across the value chain, such as server manufacturing and data center construction. You can read more about this in our methodology document, which was independently verified by Apex, a third-party consultant.

I can also use the API or the AWS Command Line Interface (AWS CLI) to programmatically pull the emissions data.

aws sustainability get-estimated-carbon-emissions \
     --time-period='{"Start":"2025-03-01T00:00:00Z","End":"2026-03-01T23:59:59.999Z"}'

{
    "Results": [
        {
            "TimePeriod": {
                "Start": "2025-03-01T00:00:00+00:00",
                "End": "2025-04-01T00:00:00+00:00"
            },
            "DimensionsValues": {},
            "ModelVersion": "v3.0.0",
            "EmissionsValues": {
                "TOTAL_LBM_CARBON_EMISSIONS": {
                    "Value": 0.7,
                    "Unit": "MTCO2e"
                },
                "TOTAL_MBM_CARBON_EMISSIONS": {
                    "Value": 0.1,
                    "Unit": "MTCO2e"
                }
            }
        },
...
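If you want one annual number instead of monthly buckets, the AWS CLI’s built-in JMESPath --query support can aggregate the response client-side; the expression below sums the monthly location-based values. The time period matches the example above, and the command is echoed so the sketch runs without AWS credentials:

```shell
# Sum the monthly location-based totals into a single annual figure using
# the AWS CLI's built-in JMESPath --query support.
QUERY='sum(Results[].EmissionsValues.TOTAL_LBM_CARBON_EMISSIONS.Value)'

# Echoed so the sketch runs offline; drop the echo to execute.
echo aws sustainability get-estimated-carbon-emissions \
    --time-period='{"Start":"2025-03-01T00:00:00Z","End":"2026-03-01T23:59:59.999Z"}' \
    --query "$QUERY"
```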

The combination of the visual console and the new API gives you two additional ways to work with your data, in addition to Data Exports, which remains available. You can now explore and identify hotspots in the console and automate the reporting you want to share with stakeholders.

The Sustainability console is designed to grow. We plan to continue to release new features as we grow the console’s capabilities alongside our customers.

Get started today
The AWS Sustainability console is available today at no additional cost. You can access it from the AWS Management Console. Historical data is available going back to January 2022, so you can start exploring your emissions trends right away.

Get started on the console today. If you want to learn more about the AWS commitment to sustainability, visit the AWS Sustainability page.

— seb

from AWS News Blog https://ift.tt/MQm47xI
via IFTTT

Monday, March 30, 2026

AWS Weekly Roundup: AWS AI/ML Scholars program, Agent Plugin for AWS Serverless, and more (March 30, 2026)

Last week, what excited me most was the launch of the 2026 AWS AI & ML Scholars program by Swami Sivasubramanian, VP of AWS Agentic AI, to provide free AI education to up to 100,000 learners worldwide. The program has two phases: a Challenge phase where you’ll learn foundational generative AI skills, followed by a fully funded three-month Udacity Nanodegree for the top 4,500 performers. Anyone 18 or older can apply, with no prior AI or ML experience required. Applications close on June 24, 2026. Visit the AWS AI & ML Scholars webpage to learn more and apply.

The AWS AI & ML Scholars Program is back

I’m also excited about the start of AWS Summit season, kicking off with AWS Summit Paris on April 1, followed by London on April 22. AWS Summits are free in-person events where builders and innovators can learn about Cloud and AI, think big, and make new connections. Explore the AWS Summits near you and join us in person.

Now, let’s dive into this week’s AWS news…

Last week’s launches
Here are last week’s launches that caught my attention:

  • Announcing Amazon Aurora PostgreSQL serverless database creation in seconds — Amazon Aurora PostgreSQL now offers express configuration, a streamlined setup with preconfigured defaults that supports creating and connecting to a database in seconds. With just two clicks, you can launch an Aurora PostgreSQL serverless database. You can modify certain settings during or after creation.
  • Amazon Aurora PostgreSQL now available with the AWS Free Tier — Amazon Aurora PostgreSQL is now available on the AWS Free Tier. If you’re new to AWS, you receive $100 in AWS credits upon sign-up and can earn an additional $100 in credits by using services like Amazon Relational Database Service (Amazon RDS).
  • Announcing Agent Plugin for AWS Serverless — With the new Agent Plugin for AWS Serverless, you can easily build, deploy, troubleshoot, and manage serverless applications using AI coding assistants like Kiro, Claude Code, and Cursor. This plugin extends AI assistants with structured capabilities by packaging skills, sub-agents, and Model Context Protocol (MCP) servers into one modular unit. It automatically loads the guidance and expertise you need throughout development to build production-ready serverless applications on AWS.
  • Amazon SageMaker Studio now supports Kiro and Cursor IDEs as remote IDEs — You can now remotely connect from Kiro and Cursor IDEs to Amazon SageMaker Studio. This lets you use your existing Kiro and Cursor setup, including spec-driven development, conversational coding, and automated feature generation, while accessing the scalable compute resources of Amazon SageMaker Studio.
  • Introducing visual customization capability in AWS Management Console — You can now customize your AWS Management Console with visual settings like account color and control which Regions and services you see. Hiding unused Regions and services helps you focus better and work faster by reducing cognitive load and unnecessary scrolling.
  • Announcing Aurora DSQL connector to simplify building Ruby applications — You can now use the Aurora DSQL Connector for Ruby (pg gem) to easily build Ruby applications on Aurora DSQL. The Ruby Connector simplifies authentication and improves security by automatically generating tokens for each connection, eliminating the risks of traditional passwords while maintaining full compatibility with existing pg gem features.
  • AWS Lambda increases the file descriptor limit for functions running on Lambda Managed Instances — AWS Lambda increases the file descriptor limit from 1,024 to 4,096, a 4x increase, for functions running on Lambda Managed Instances (LMI). You can now run I/O intensive workloads such as high-concurrency web services and file-heavy data processing pipelines without running into file descriptor limits.
  • AWS Lambda now supports up to 32 GB of memory and 16 vCPUs for Lambda Managed Instances — AWS Lambda functions on Lambda Managed Instances now support up to 32 GB of memory and 16 vCPUs. You can run compute-intensive workloads like data processing, media transcoding, and scientific simulations without managing infrastructure. Plus, you can adjust the memory-to-vCPU ratio (2:1, 4:1, or 8:1) to fit your workload.
  • Announcing Bidirectional Streaming API for Amazon Polly — Traditional text-to-speech APIs use a request-response pattern. The new Bidirectional Streaming API for Amazon Polly is designed for conversational AI applications that generate text or audio incrementally, like large language model (LLM) responses. This lets you start synthesizing audio before the full text is available.

For a full list of AWS announcements, be sure to keep an eye on our News Blog channel and the What’s New with AWS page.

Upcoming AWS events
Check your calendar and sign up for upcoming AWS events:

  • AWS Summits — As I mentioned earlier, join AWS Summits in 2026 for free in-person events where you can explore emerging cloud and AI technologies, learn best practices, and network with industry peers and experts. Upcoming Summits include Paris (April 1), London (April 22), Bengaluru (April 23–24), Singapore (May 6), Tel Aviv (May 6), and Stockholm (May 7).
  • AWS Community Days — Community-led conferences where content is planned, sourced, and delivered by community leaders, featuring technical discussions, workshops, and hands-on labs. Upcoming events include San Francisco (April 10) and Romania (April 23–24).

Join the AWS Builder Center to connect with builders, share solutions, and access content that supports your development. Browse the AWS Events and Webinars for upcoming AWS-led in-person and virtual events and developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— Prasad

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!



from AWS News Blog https://ift.tt/jFxMXlI
via IFTTT