Tuesday, April 1, 2025
Meet the AWS News Blog team!
Jeff Barr retired from the AWS News Blog in December of last year, but the AWS News Blog team will keep sharing the most important and impactful AWS product launches the moment they become available. I want to quote Jeff’s last comment on the future of the News Blog again:
Going forward, the team will continue to grow and the goal remains the same: to provide our customers with carefully chosen, high-quality information about the latest and most meaningful AWS launches. The blog is in great hands and this team will continue to keep you informed even as the AWS pace of innovation continues to accelerate.
Since 2016, Jeff built the AWS News Blog as a team effort. Currently, we’re a group of 11 bloggers working in North America, South America, Asia, Europe, and Africa. We work closely with AWS product teams, testing new features firsthand on behalf of customers and delivering key details in the News Blog the way Jeff has always done.
The Leadership Principles for AWS News Bloggers that Jeff shared on LinkedIn are a textbook for anyone writing for customers in tech companies. They’re the fundamentals that can help you understand and get started blogging quickly, and we’ll continue to stick to these principles with our team. This is why the AWS News Blog is different from other tech companies’ product news channels.
Voices from blog writers
You may be familiar with the names of News Blog writers, but you may not have had the chance to hear about them. Let us introduce ourselves!
I’m honored to continue Jeff’s legacy as the new lead blogger of the News Blog team; he is my role model. When I joined AWS in 2014, the first thing I did was create the AWS Korea Blog, and I started translating Jeff’s blog posts into Korean. During that journey, I learned how to write accurate, honest, and powerful guides to help customers get started with new AWS products and features.
Since my first News Blog post in 2018, I have learned so much by being part of this team. Working with product managers and service teams is always an amazing experience. I am interested in serverless, event-driven architectures, and AI/ML. It’s incredible how technologies like generative AI are becoming part of software development implicitly (through AI-enabled development tools) and explicitly (by using models in code).
I’m fortunate to have been a part of this team since 2019. When I don’t write posts, I produce episodes of the AWS Developers Podcast and le podcast AWS en français. I also work with the teams for Amazon EC2 Mac, AWS SDK for Swift, and the CodeBuild and CodeArtifact teams trying to make the AWS Cloud easier to use for Apple developers. My pet project is the Swift Runtime for AWS Lambda.
The Amazon Leadership Principles (LPs) guide all that we do here at AWS, including the work we do as authors of the News Blog. As a developer advocate, I’ve taken the guidance of the LPs and used it to guide members of the AWS community who are looking to create technical content, especially those new in their technical content creation journey.
Just like brewing coffee, being a blog author has been a mix of fun, challenge, and reward. I’ve been particularly fortunate to observe how customer obsession is built into AWS teams. I’ve seen how they work backwards, transforming your feedback into services or features. I genuinely hope that you enjoy reading our articles and look forward to the next chapter of the News Blog team.
As an author, I’m committed to delivering timely information about the latest AWS innovations and launches to our global audience of builders, developers, and technology enthusiasts. I understand the importance of providing clear, accurate, and actionable content that helps you use AWS services effectively. Happy reading everyone!
My specialties are .NET development and microservices, but I’ve always been a jack-of-all-trades and writing for this blog helps me to keep my knife sharp across all corners of modern technology, while also helping others do the same. Thousands of people read the AWS News Blog and use it as a go-to source to keep up with what’s new and to help them make decisions, so I know that what we are doing is meaningful work with huge impact.
Through my blogs, I strive to highlight not just the “what” of new services, but also the “why” and “how” they can transform businesses and user experiences. As a solutions architect specializing in Microsoft Workloads on AWS, I help customers migrate and modernize their workloads and build scalable architecture on AWS. I also mentor diverse people to excel in their cloud careers.
Every time I start writing a new blog, I feel honored to be part of this team, to be able to experiment with something new before it’s released, and to be able to share my experience with the reader. This team is made up of specialists of all levels and from multiple countries and together, we are a multicultural and multi-specialty team. Thank you, reader, for being here.
Joining the News Blog team has transformed how I communicate about technology. With an ever-curious mindset, I approach each new announcement aiming to make innovative services accessible and engaging. By bringing my unique and diverse perspective to technical content, I strive to help developers truly enjoy exploring our latest technologies.
Micah Walter
As a senior solutions architect, I support enterprise customers in the New York City region and beyond. I advise executives, engineers, and architects at every step along their journey to the cloud, with a deep focus on sustainability and practical design.
I also want to give credit to our behind-the-scenes editor-in-chief, Jane Watson, and program manager, Jane Scolieri, who play an essential role in helping us get product launch news to you as soon as it happens, including the 60 launches we announced in one week at re:Invent 2024!
Share your feedback
At AWS, we are customer obsessed. We’re always focused on improving and providing a better customer experience, and we need your feedback to do so. Take our survey to share insights about your experience with the AWS News Blog and suggestions for how we can serve you even better.
This survey is hosted by an external company. AWS handles your information as described in the AWS Privacy Notice. AWS will own the data gathered via this survey and will not share the information collected with survey respondents.
— Channy
from AWS News Blog https://ift.tt/ix1HzmC
via IFTTT
Monday, March 31, 2025
Accelerate operational analytics with Amazon Q Developer in Amazon OpenSearch Service
Today, I’m happy to announce Amazon Q Developer support for Amazon OpenSearch Service, providing AI-assisted capabilities to help you investigate and visualize operational data. Amazon Q Developer enhances the OpenSearch Service experience by reducing the learning curve for query languages, visualization tools, and alerting features. The new capabilities complement existing dashboards and visualizations by enabling natural language exploration and pattern detection. After incidents, you can rapidly create additional visualizations to strengthen your monitoring infrastructure. This enhanced workflow accelerates incident resolution and optimizes engineering resource usage, helping you focus more time on innovation rather than troubleshooting.
Amazon Q Developer in Amazon OpenSearch Service improves operational analytics by integrating natural language exploration and generative AI capabilities directly into OpenSearch workflows. During incident response, you can now quickly gain context on alerts and log data, leading to faster analysis and resolution times. When alert monitors trigger, Amazon Q Developer provides summaries and insights directly in the alerts interface, helping you understand the situation quickly without waiting for specialists or consulting documentation. From there, you can use Amazon Q Developer to explore the underlying data, build visualizations using natural language, and identify patterns to determine root causes. For example, you can create visualizations that break down errors by dimensions such as Region, data center, or endpoint. Additionally, Amazon Q Developer assists with dashboard configuration and recommends anomaly detectors for proactive alerting, improving both initial monitoring setup and troubleshooting efficiency.
Get started with Amazon Q Developer in OpenSearch Service
To get started, I go to my OpenSearch user interface and sign in. From the home page, I choose a workspace to test Amazon Q Developer in OpenSearch Service. For this demonstration, I use a preconfigured environment with the sample logs dataset available on the user interface.
This feature is on by default through the Amazon Q Developer Free Tier, which is also enabled by default. You can disable the feature by clearing the Enable natural language query generation checkbox under the Artificial Intelligence (AI) and Machine Learning (ML) section during domain creation, or by editing the cluster configuration in the console.
In OpenSearch Dashboards, I navigate to Discover from the left navigation pane. To explore the data using natural language, I switch the query language to PPL, which shows the prompt box.
I choose the Amazon Q icon in the main navigation bar to open the Amazon Q panel. You can use this panel to create recommended anomaly detectors to drive alerting and use natural language to generate visualizations.
I enter the following prompt in the Ask a natural language question text box:
Show me a breakdown of HTTP response codes for the last 24 hours
When results appear, Amazon Q automatically generates a summary of these results. You can control the summary display using the Show result summarization option under the Amazon Q panel to hide or show the summary. You can use the thumbs up or thumbs down buttons to provide feedback, and you can copy the summary to your clipboard using the copy button.
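The breakdown this prompt asks for is essentially a group-by count over the HTTP response code field. As a rough plain-Python illustration of the same shape of result (the log records here are hypothetical, not the actual sample dataset):

```python
from collections import Counter

# Hypothetical web-server log records; real data would come from the
# OpenSearch index backing the Discover view.
logs = [
    {"path": "/index.html", "response": 200},
    {"path": "/admin", "response": 403},
    {"path": "/index.html", "response": 200},
    {"path": "/missing", "response": 404},
]

# Group-by count over the response code, the same shape of result the
# natural language prompt asks Amazon Q to produce.
breakdown = Counter(record["response"] for record in logs)
```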
Other capabilities of Amazon Q Developer in OpenSearch Service include generating visualizations directly from natural language descriptions, providing conversational assistance for OpenSearch-related queries, providing AI-generated summaries and insights for your OpenSearch alerts, and analyzing your data to suggest appropriate anomaly detectors.
Let’s look into how to generate visualizations directly from natural language descriptions. I choose Generate visualization from the Amazon Q panel. I enter Create a bar chart showing the number of requests by HTTP status code in the input field and choose Generate.
To refine the visualization, you can choose Edit visual and add style instructions such as Show me a pie chart or Use a light gray background with a white grid.
Now available
You can now use Amazon Q Developer in OpenSearch Service to reduce mean time to resolution, enable more self-service troubleshooting, and help teams extract greater value from observability data.
The service is available today in US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (London), Europe (Paris), and South America (São Paulo) AWS Regions.
To learn more, visit the Amazon Q Developer documentation and start using Amazon Q Developer in your OpenSearch Service domain today.
— Esra

How is the News Blog doing? Take this 1 minute survey!
(This survey is hosted by an external company. AWS handles your information as described in the AWS Privacy Notice. AWS will own the data gathered via this survey and will not share the information collected with survey respondents.)
Amazon API Gateway now supports dual-stack (IPv4 and IPv6) endpoints
Today, we are launching IPv6 support for Amazon API Gateway across all endpoint types, custom domains, and management APIs, in all commercial and AWS GovCloud (US) Regions. You can now configure REST, HTTP, and WebSocket APIs, and custom domains, to accept calls from IPv6 clients alongside the existing IPv4 support. You can also call API Gateway management APIs from dual-stack (IPv6 and IPv4) clients. As organizations globally confront growing IPv4 address scarcity and increasing costs, implementing IPv6 becomes critical for future-proofing network infrastructure. This dual-stack approach helps organizations maintain future network compatibility and expand global reach. To learn more about dual-stack in the Amazon Web Services (AWS) environment, see IPv6 on AWS in the AWS documentation.
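From a client’s perspective, dual-stack means the endpoint name resolves to both A (IPv4) and AAAA (IPv6) records. A minimal Python sketch to inspect this, using only the standard library (the host you pass would be your API’s invoke URL or custom domain):

```python
import socket

def resolve_dualstack(host, port=443):
    """Return (ipv4_addrs, ipv6_addrs) for a host. A dual-stack
    endpoint publishes both A and AAAA records."""
    v4, v6 = [], []
    for family, _, _, _, sockaddr in socket.getaddrinfo(
        host, port, proto=socket.IPPROTO_TCP
    ):
        if family == socket.AF_INET:
            v4.append(sockaddr[0])   # sockaddr is (address, port)
        elif family == socket.AF_INET6:
            v6.append(sockaddr[0])   # sockaddr is (address, port, flow, scope)
    return v4, v6
```

For a dual-stack endpoint, both lists should be non-empty; an IPv4-only endpoint returns an empty IPv6 list.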
Creating new dual-stack resources
This post focuses on two ways to create an API or a domain name with a dual-stack IP address type: the AWS Management Console and the AWS Cloud Development Kit (AWS CDK).
AWS Console
When creating a new API or domain name in the console, select IPv4 only or dualstack (IPv4 and IPv6) for the IP address type.
As shown in the following image, you can select the dualstack option when creating a new REST API.
For custom domain names, you can similarly configure dualstack as shown in the next image.
If you need to revert to IPv4-only for any reason, you can modify the IP address type setting, with no need to redeploy your API for the update to take effect.
REST APIs of all endpoint types (EDGE, REGIONAL, and PRIVATE) support dual-stack. Private REST APIs support only the dual-stack configuration.
AWS CDK
With AWS CDK, start by configuring a dual-stack REST API and domain name.
const api = new apigateway.RestApi(this, "Api", {
  restApiName: "MyDualStackAPI",
  endpointConfiguration: { ipAddressType: "dualstack" },
});

const domainName = new apigateway.DomainName(this, "DomainName", {
  regionalCertificateArn: "arn:aws:acm:us-east-1:111122223333:certificate/a1b2c3d4-5678-90ab",
  domainName: "dualstack.example.com",
  endpointConfiguration: {
    types: ["Regional"],
    ipAddressType: "dualstack",
  },
  securityPolicy: "TLS_1_2",
});

const basePathMapping = new apigateway.BasePathMapping(this, "BasePathMapping", {
  domainName: domainName,
  restApi: api,
});
IPv6 Source IP and authorization
When your API begins receiving IPv6 traffic, client source IPs will be in IPv6 format. If you use resource policies, Lambda authorizers, or AWS Identity and Access Management (IAM) policies that reference source IP addresses, make sure they’re updated to accommodate IPv6 address formats.
For example, the following resource policy permits traffic from a specific IPv6 range alongside an existing IPv4 range:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "execute-api:stage-name/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "192.0.2.0/24",
            "2001:db8:1234::/48"
          ]
        }
      }
    }
  ]
}
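If you evaluate source IPs yourself, for example in a Lambda authorizer, your matching logic must handle both address families. Python’s standard ipaddress module does this cleanly; a small sketch using the same ranges as the example policy (the function name and the idea of checking in an authorizer are illustrative, not API Gateway’s own policy evaluation):

```python
import ipaddress

# The same ranges used in the example resource policy.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("2001:db8:1234::/48"),
]

def is_allowed(source_ip: str) -> bool:
    """True if source_ip (IPv4 or IPv6) falls inside an allowed range.
    Membership checks across address families simply return False."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in network for network in ALLOWED_NETWORKS)
```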
Summary
API Gateway dual-stack support helps you manage IPv4 address scarcity and costs, comply with government and industry mandates, and prepare for the future of networking. The dual-stack implementation provides a smooth transition path by supporting both IPv4 and IPv6 clients simultaneously.
To get started with API Gateway dual-stack support, visit the Amazon API Gateway documentation. You can configure dual-stack for new APIs or update existing APIs with minimal configuration changes.
Special thanks to Ellie Frank (elliesf), Anjali Gola (anjaligl), and Pranika Kakkar (pranika) for providing resources, answering questions, and offering valuable feedback during the writing process. This blog post was made possible through the collaborative support of the service and product management teams.
AWS Weekly Roundup: Amazon Bedrock, Amazon QuickSight, AWS Amplify, and more (March 31, 2025)
It’s AWS Summit season! Free events are now rolling out worldwide, bringing our cloud computing community together to connect, collaborate, and learn. Whether you prefer joining us online or in-person, these gatherings offer valuable opportunities to expand your AWS knowledge. I’ll be attending the AWS Amsterdam Summit and would love to meet you—if you’re planning to be there, please stop by to say hello! Visit the AWS Summit website today to find events in your area, sign up for registration alerts, and reserve your spot at an AWS Summit near you.
Speaking of AWS news, let’s look at last week’s new announcements.
Last week’s launches
Here are the launches that got my attention.
AWS WAF integration with AWS Amplify Hosting now generally available – You can now directly attach AWS WAF to your AWS Amplify applications through a one-click integration in the Amplify console or using infrastructure as code (IaC). This integration provides access to the full range of AWS WAF capabilities, including managed rules that protect against common web exploits like SQL injection and cross-site scripting (XSS). You can also create custom rules based on your application needs, implement rate-based rules to protect against distributed denial of service (DDoS) attacks by limiting request rates from IP addresses, and configure geo-blocking to restrict access from specific countries. Firewall support is available in all AWS Regions in which Amplify Hosting operates.
Amazon Bedrock Custom Model Import introduces real-time cost transparency – If you’re using Amazon Bedrock Custom Model Import to run your customized foundation models (FMs), you can now access full transparency into compute resources and calculate inference costs in real time. Before model invocation, you can view the minimum compute resources (custom model units or CMUs) required through both the Amazon Bedrock console and Amazon Bedrock APIs. As models scale to handle increased traffic, Amazon CloudWatch metrics provide real-time visibility into total CMUs used, enabling better cost control through near-instant visibility. This helps you make on-the-fly model configuration changes to optimize costs. The feature is available in all Regions where Amazon Bedrock Custom Model Import is supported, with additional details available in Calculate the cost of running a custom model in the Amazon Bedrock User Guide.
Amazon Bedrock Knowledge Bases now supports Amazon OpenSearch Managed Cluster for vector storage – Amazon Bedrock Knowledge Bases securely connects FMs to company data sources for Retrieval Augmented Generation (RAG), delivering more relevant and accurate responses. With this launch, you can use Amazon OpenSearch Managed Cluster as a vector database while using the full suite of Amazon Bedrock Knowledge Bases features. This integration expands the list of supported vector databases, which already includes Amazon OpenSearch Serverless, Amazon Aurora, Amazon Neptune Analytics, Pinecone, MongoDB Atlas, and Redis. The native integration with vector databases helps mitigate the need to build custom data source integrations. This feature is now generally available in all existing Amazon Bedrock Knowledge Bases and OpenSearch Service Regions.
Amazon Bedrock Guardrails announces the general availability of industry-leading image content filters – This new capability offers industry-leading text and image content safeguards that help you block up to 88% of harmful multimodal content without building custom safeguards or relying on error-prone manual content moderation. Image content filters can be applied across all categories within the content filter policy including hate, insults, sexual, violence, misconduct, and prompt attacks. Amazon Bedrock Guardrails provides configurable safeguards to detect and block harmful content and prompt attacks, define topics to deny and disallow specific topics, redact personally identifiable information (PII) such as personal data, and block specific words. It also provides contextual grounding checks to detect and block model hallucinations and to identify the relevance of model responses and claims, and to identify, correct, and explain factual claims in model responses using Automated Reasoning checks. This capability is generally available in the US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Asia Pacific (Tokyo) Regions. To learn more, visit Amazon Bedrock Guardrails image content filters provide industry-leading safeguards in the AWS Machine Learning Blog and Stop harmful content in models using Amazon Bedrock Guardrails in the Amazon Bedrock User Guide.
Scenarios capability now generally available for Amazon Q in QuickSight – This capability guides you through data analysis by uncovering hidden trends, making recommendations for your business, and intelligently suggesting next steps for deeper exploration using natural language interactions. Now you can explore past trends, forecast future scenarios, and model solutions without needing specialized skills, analyst support, or manual manipulation of data in spreadsheets. With its intuitive interface and step-by-step guidance, the scenarios capability of Amazon Q in QuickSight helps you perform complex data analysis up to 10x faster than spreadsheets. Whether you’re optimizing marketing budgets, streamlining supply chains, or analyzing investments, Amazon Q makes advanced data analysis accessible so you can make data-driven decisions across your organization. This capability is accessible from any Amazon QuickSight dashboard, so you can move seamlessly from visualizing data to asking what-if questions and comparing alternatives. Previous analyses can be easily modified, extended, and reused, helping you quickly adapt to changing business needs.
For a full list of AWS announcements, be sure to keep an eye on the What's New at AWS page.

We launched existing services and instance types in additional Regions:
- Amazon DataZone is now available in Asia Pacific (Mumbai) and Europe (Paris) AWS Regions – Amazon DataZone is a fully managed data management service to catalog, discover, analyze, share, and govern data between data producers and consumers in your organization.
- The next generation of Amazon SageMaker is now available in Asia Pacific (Mumbai) and Europe (Paris) AWS Regions – Amazon SageMaker is the center for all your data, analytics, and AI. SageMaker Unified Studio provides a single development environment that consolidates AWS analytics and AI/ML tools.
- Amazon Redshift Query Editor V2 is now available in Mexico (Central) and Asia Pacific (Thailand) AWS Regions – Amazon Redshift Query Editor V2 makes data in your Amazon Redshift data warehouse and data lake more accessible with a web-based tool for SQL users such as data analysts, data scientists, and database developers.
- Amazon Keyspaces expands Multi-Region Replication to support all AWS Regions – Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, managed Cassandra-compatible database service that helps you run your Cassandra workloads on AWS using existing application code and developer tools.
- AWS Network Firewall is now available in the Asia Pacific (Thailand) and Mexico (Central) AWS Regions – AWS Network Firewall is a managed firewall service that automatically scales with traffic, requires no infrastructure maintenance, and integrates with AWS Firewall Manager for centralized policy control across AWS accounts.
- Amazon CloudWatch RUM is now generally available in Israel (Tel Aviv) and Asia Pacific (Hong Kong) AWS Regions – CloudWatch RUM monitors web applications by collecting real-time client-side performance data and providing dashboards that show end-user experience metrics, including page load anomalies, core web vitals, and errors across different geolocations, browsers, and devices.
- Amazon VPC IP Address Manager is now available in Asia Pacific (Thailand) and Mexico (Central) AWS Regions – Amazon Virtual Private Cloud (Amazon VPC) IP Address Manager (Amazon VPC IPAM) makes it easier to plan, track, and monitor IP addresses for AWS workloads, helping you organize addresses based on routing and security needs and set simple business rules to govern IP address assignments.
- Amazon Q Business now available in Asia Pacific (Sydney) AWS Region – Amazon Q Business is the most capable generative AI–powered assistant for finding information, gaining insight, and taking action at work. It can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems.
- Amazon EC2 P5en instances are now available in US East (N. Virginia) and Asia Pacific (Jakarta) AWS Regions – P5en instances feature 8 H200 GPUs with 1.7x memory size, paired with 4th Gen Intel Xeon processors and Gen5 PCIe for 4x CPU-GPU bandwidth. This helps improve collective communications performance for distributed training workloads such as deep learning, generative AI, real-time data processing, and high performance computing (HPC) applications.
- Amazon EC2 R8g instances now available in US West (N. California) AWS Region – These instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 1.5 TB) than AWS Graviton3 based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to Graviton3 based R7g instances.
- Amazon EC2 C8g instances now available in Asia Pacific (Tokyo) AWS Region – These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3 based Amazon C7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors.
- Amazon SageMaker AI is now available in Mexico (Central) and Asia Pacific (Thailand) AWS Regions – Amazon SageMaker AI is a fully managed platform that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly.
- Amazon ElastiCache now supports AWS PrivateLink in Asia Pacific (Jakarta) and Asia Pacific (Hyderabad) AWS Regions – AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises networks without exposing traffic to the public internet and securing your network traffic. To use AWS PrivateLink with Amazon ElastiCache, you create an interface VPC endpoint for Amazon ElastiCache in your VPC using the Amazon VPC console, AWS SDK, or AWS Command Line Interface (AWS CLI).
Other AWS events
Check your calendar and sign up for upcoming AWS events.
AWS GenAI Lofts are collaborative spaces and immersive experiences that showcase AWS expertise in cloud computing and AI. They provide startups and developers with hands-on access to AI products and services, exclusive sessions with industry leaders, and valuable networking opportunities with investors and peers. Find a GenAI Loft location near you and don’t forget to register.
Browse all upcoming AWS-led in-person and virtual events here.
That’s all for this week. Check back next Monday for another Weekly Roundup!
— Esra

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!
Thursday, March 27, 2025
Accelerating CI with AWS CodeBuild: Parallel test execution now available
I’m excited to announce that AWS CodeBuild now supports parallel test execution, so you can run your test suites concurrently and reduce build times significantly.
With the demo project I wrote for this post, the total test time went down from 35 minutes to six minutes, including the time to provision the environments. These two screenshots from the AWS Management Console show the difference.
Sequential execution of the test suite
Parallel execution of the test suite
Very long test times pose a significant challenge when running continuous integration (CI) at scale. As projects grow in complexity and team size, the time required to execute comprehensive test suites can increase dramatically, leading to extended pipeline execution times. This not only delays the delivery of new features and bug fixes, but also hampers developer productivity by forcing them to wait for build results before proceeding with their tasks. I have experienced pipelines that took up to 60 minutes to run, only to fail at the last step, requiring a complete rerun and further delays. These lengthy cycles can erode developer trust in the CI process, contribute to frustration, and ultimately slow down the entire software delivery cycle. Moreover, long-running tests can lead to resource contention, increased costs because of wasted computing power, and reduced overall efficiency of the development process.
With parallel test execution in CodeBuild, you can now run your tests concurrently across multiple build compute environments. This feature implements a sharding approach where each build node independently executes a subset of your test suite. CodeBuild provides environment variables that identify the current node number and the total number of nodes, which are used to determine which tests each node should run. There is no control build node or coordination between nodes at build time—each node operates independently to execute its assigned portion of your tests.
To enable test splitting, configure the build-fanout section in your buildspec.yml file, specifying the desired parallelism level and other relevant parameters. Additionally, use the codebuild-tests-run utility in your build step, along with the appropriate test commands and the chosen splitting method.
The tests are split based on the sharding strategy you specify. The codebuild-tests-run utility offers two sharding strategies:
- Equal-distribution. This strategy sorts test files alphabetically and distributes them in chunks equally across parallel test environments. Changes in the names or quantity of test files might reassign files across shards.
- Stability. This strategy fixes the distribution of tests across shards by using a consistent hashing algorithm. It maintains existing file-to-shard assignments when new files are added or removed.
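The two strategies can be sketched in a few lines of Python. This is an illustration of the idea only, not CodeBuild's actual implementation (the function names and the choice of SHA-256 for the stable hash are assumptions):

```python
import hashlib

def equal_distribution(test_files, num_shards):
    """Sort alphabetically and split into near-equal contiguous chunks.
    Renaming, adding, or removing files can shift which chunk a file lands in."""
    ordered = sorted(test_files)
    base, extra = divmod(len(ordered), num_shards)
    shards, start = [], 0
    for i in range(num_shards):
        end = start + base + (1 if i < extra else 0)
        shards.append(ordered[start:end])
        start = end
    return shards

def stable_shard(test_file, num_shards):
    """Hash the file name so a file keeps its shard assignment even when
    other files are added or removed (the consistent-hashing idea)."""
    digest = hashlib.sha256(test_file.encode()).hexdigest()
    return int(digest, 16) % num_shards
```

Note the trade-off: equal-distribution balances shard sizes best, while stability keeps assignments predictable across runs.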
CodeBuild supports automatic merging of test reports when running tests in parallel. With automatic test report merging, CodeBuild consolidates test reports into a single test summary, simplifying result analysis. The merged report includes aggregated pass/fail statuses, test durations, and failure details, reducing the need for manual report processing. You can view the merged results in the CodeBuild console, retrieve them using the AWS Command Line Interface (AWS CLI), or integrate them with other reporting tools to streamline test analysis.
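Conceptually, merging shard reports amounts to combining test cases and summing counters across the per-shard JUnit XML files. A minimal sketch with Python's standard library (not CodeBuild's implementation; the function name and summary shape are assumptions):

```python
import xml.etree.ElementTree as ET

def merge_junit_summaries(reports):
    """Combine top-level counters from several JUnit XML report strings
    into one summary dict, roughly what an automatic merge produces."""
    totals = {"tests": 0, "failures": 0}
    for xml_text in reports:
        root = ET.fromstring(xml_text)
        # pytest emits a <testsuites> wrapper; fall back to a bare <testsuite>
        suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
        for suite in suites:
            totals["tests"] += int(suite.get("tests", 0))
            totals["failures"] += int(suite.get("failures", 0))
    return totals
```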
Let’s look at how it works
Let me demonstrate how to implement parallel testing in a project. For this demo, I created a very basic Python project with hundreds of tests. To speed things up, I asked Amazon Q Developer on the command line to create a project and 1,800 test cases. Each test case is in a separate file and takes one second to complete. Running all tests in sequence requires 30 minutes, excluding the time to provision the environment.
In this demo, I run the test suite on ten compute environments in parallel and measure how long it takes to run the suite.
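A quick back-of-the-envelope check of what to expect: 1,800 one-second tests run sequentially take 30 minutes of compute time, so splitting them across 10 shards brings each shard down to about 3 minutes, and the remainder of the observed 6 minutes is environment provisioning and other overhead.

```python
total_tests = 1800      # one-second test cases generated for the demo
seconds_per_test = 1
shards = 10

# Sequential compute time vs. per-shard compute time, in minutes.
sequential_minutes = total_tests * seconds_per_test / 60
per_shard_minutes = sequential_minutes / shards
```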
To do so, I added a buildspec.yml file to my project.
version: 0.2

batch:
  fast-fail: false
  build-fanout:
    parallelism: 10 # ten runtime environments
    ignore-failure: false

phases:
  install:
    commands:
      - echo 'Installing Python dependencies'
      - dnf install -y python3 python3-pip
      - pip3 install --upgrade pip
      - pip3 install pytest
  build:
    commands:
      - echo 'Running Python Tests'
      - |
        codebuild-tests-run \
          --test-command 'python -m pytest --junitxml=report/test_report.xml' \
          --files-search "codebuild-glob-search 'tests/test_*.py'" \
          --sharding-strategy 'equal-distribution'
  post_build:
    commands:
      - echo "Test execution completed"

reports:
  pytest_reports:
    files:
      - "*.xml"
    base-directory: "report"
    file-format: JUNITXML
There are three parts to highlight in the YAML file.
First, there’s a build-fanout section under batch. The parallelism setting tells CodeBuild how many test environments to run in parallel. The ignore-failure setting indicates whether a failure in any of the fan-out build tasks can be ignored.
Second, I use the pre-installed codebuild-tests-run command to run my tests. This command receives the complete list of test files and decides which of the tests must run on the current node.
- Use the sharding-strategy argument to choose between equal distribution or stable distribution, as I explained earlier.
- Use the files-search argument to pass all the files that are candidates for a run. We recommend using the provided codebuild-glob-search command for performance reasons, but any file search tool, such as find(1), will work.
- I pass the actual test command to run on the shard with the test-command argument.
Lastly, the reports section instructs CodeBuild to collect and merge the test reports from each node.
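CodeBuild performs the merge automatically, but conceptually it is just an aggregation over the per-shard JUnit XML files. Here is a simplified Python sketch of that idea (not CodeBuild’s implementation, and real pytest reports wrap suites in a testsuites element; this reads bare testsuite attributes for brevity):

```python
import xml.etree.ElementTree as ET

def merge_junit(reports):
    """Aggregate test counts, failures, and durations from several
    JUnit XML report strings into one summary dictionary."""
    totals = {"tests": 0, "failures": 0, "time": 0.0}
    for xml_text in reports:
        suite = ET.fromstring(xml_text)
        totals["tests"] += int(suite.get("tests", 0))
        totals["failures"] += int(suite.get("failures", 0))
        totals["time"] += float(suite.get("time", 0.0))
    return totals
```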
Then, I open the CodeBuild console to create a project and a batch build configuration for this project. There’s nothing new here, so I’ll spare you the details; the documentation has everything you need to get started. Parallel testing works on batch builds, so make sure to configure your project to run in batch.
Now, I’m ready to trigger an execution of the test suite. I can commit new code on my GitHub repository or trigger the build in the console.
After a few minutes, I see a status report of the different steps of the build, with a status for each test environment, or shard.
When the test is complete, I select the Reports tab to access the merged test reports.
The Reports section aggregates all test data from all shards and keeps the history for all builds. I select my most recent build in the Report history section to access the detailed report.
As expected, I can see the aggregated and the individual status for each of my 1,800 test cases. In this demo, they’re all passing, and the report is green.
The 1,800 tests of the demo project take one second each to complete. When I ran this test suite sequentially, it took 35 minutes to complete. When I ran it in parallel on ten compute environments, it took six minutes, including the time to provision the environments. The parallel run took 17.9 percent of the time of the sequential run. Actual numbers will vary with your projects.
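For intuition, the ideal-case arithmetic is easy to check. This sketch models only test time; real runs add provisioning and report-merging overhead, which is why the measured times above are longer:

```python
# Ideal wall-clock time for the demo suite: 1,800 tests at one
# second each, with and without ten-way parallelism. Environment
# provisioning and report merging are not modeled.
TESTS = 1800
SECONDS_PER_TEST = 1
SHARDS = 10

sequential_minutes = TESTS * SECONDS_PER_TEST / 60
parallel_minutes = TESTS * SECONDS_PER_TEST / SHARDS / 60
```

So the theoretical floor is 30 minutes sequentially versus 3 minutes on ten shards; the measured 35 and 6 minutes reflect per-environment overhead on top of that.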
Additional things to know
This new capability is compatible with all testing frameworks. The documentation includes examples for Django, Elixir, Go, Java (Maven), Javascript (Jest), Kotlin, PHPUnit, Pytest, Ruby (Cucumber), and Ruby (RSpec).
For test frameworks that don’t accept space-separated lists, the codebuild-tests-run CLI provides a flexible alternative through the CODEBUILD_CURRENT_SHARD_FILES environment variable. This variable contains a newline-separated list of test file paths for the current build shard. You can use it to adapt to different test framework requirements and format test file names.
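For example, a small adapter script could reformat that newline-separated list for a framework that expects, say, a comma-separated argument. This is a sketch of my own; only the environment variable name comes from the feature itself:

```python
import os

def shard_files(env=os.environ):
    """Return the list of test files assigned to this shard, read
    from the newline-separated CODEBUILD_CURRENT_SHARD_FILES value."""
    raw = env.get("CODEBUILD_CURRENT_SHARD_FILES", "")
    return [line for line in raw.splitlines() if line.strip()]

def as_comma_list(files):
    """Reformat the file list for a framework that wants a single
    comma-separated argument."""
    return ",".join(files)
```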
You can further customize how tests are split across environments by writing your own sharding script and using the CODEBUILD_BATCH_BUILD_IDENTIFIER environment variable, which is automatically set in each build. You can use this technique to implement framework-specific parallelization or optimization.
Pricing and availability
With parallel test execution, you can now complete your test suites in a fraction of the time previously required, accelerating your development cycle and improving your team’s productivity.
Parallel test execution is available on all three compute modes offered by CodeBuild: on-demand, reserved capacity, and AWS Lambda compute.
This capability is available today in all AWS Regions where CodeBuild is offered, with no additional cost beyond the standard CodeBuild pricing for the compute resources used.
I invite you to try parallel test execution in CodeBuild today. Visit the AWS CodeBuild documentation to learn more and get started with parallelizing your tests.
— seb

PS: Here’s the prompt I used to create the demo application and its test suite: “I’m writing a blog post to announce codebuild parallel testing. Write a very simple python app that has hundreds of tests, each test in a separate test file. Each test takes one second to complete.”
How is the News Blog doing? Take this 1 minute survey!
(This survey is hosted by an external company. AWS handles your information as described in the AWS Privacy Notice. AWS will own the data gathered via this survey and will not share the information collected with survey respondents.)
from AWS News Blog https://ift.tt/BMQbFJz
via IFTTT
Wednesday, March 26, 2025
Firewall support for AWS Amplify hosted sites
Today, we’re announcing the general availability of the AWS WAF integration with AWS Amplify Hosting.
Web application owners are constantly working to protect their applications from a variety of threats. Previously, if you wanted to implement a robust security posture for your Amplify Hosted applications, you needed to create architectures using Amazon CloudFront distributions with AWS WAF protection, which required additional configuration steps, expertise, and management overhead.
With the general availability of AWS WAF in Amplify Hosting, you can now directly attach a web application firewall to your AWS Amplify apps through a one-click integration in the Amplify console or using infrastructure as code (IaC). This integration gives you access to the full range of AWS WAF capabilities including managed rules, which provide protection against common web exploits and vulnerabilities like SQL injection and cross-site scripting (XSS). You can also create your own custom rules based on your specific application needs.
This new capability helps you implement defense-in-depth security strategies for your web applications. You can take advantage of AWS WAF rate-based rules to protect against distributed denial of service (DDoS) attacks by limiting the rate of requests from IP addresses. Additionally, you can implement geo-blocking to restrict access to your applications from specific countries, which is particularly valuable if your service is designed for specific geographic regions.
Let’s see how it works
Setting up AWS WAF protection for your Amplify app is straightforward. From the Amplify console, navigate to your app settings, select the Firewall tab, and choose the predefined rules you want to apply to your configuration.
Amplify Hosting simplifies configuring firewall rules. You can activate four categories of protection.
- Amplify-recommended firewall protection – Protect against the most common vulnerabilities found in web applications, block IP addresses from potential threats based on Amazon internal threat intelligence, and protect against malicious actors discovering application vulnerabilities.
- Restrict access to amplifyapp.com – Restrict access to the default Amplify generated amplifyapp.com domain. This is useful when you add a custom domain to prevent bots and search engines from crawling the domain.
- Enable IP address protection – Restrict web traffic by allowing or blocking requests from specified IP address ranges.
- Enable country protection – Restrict access based on specific countries.
Protections enabled through the Amplify console will create an underlying web access control list (ACL) in your AWS account. For fine-grained rulesets, you can use the AWS WAF console rule builder.
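To give a feel for what such fine-grained rules look like, here are two rule definitions in the JSON shape the AWS WAFV2 API uses, a rate-based rule and a geo-restriction rule. The limit and country codes are illustrative; dicts like these could be passed as Rules when creating or updating a web ACL:

```python
# A rate-based rule: block IPs that exceed 2,000 requests in a
# five-minute window (the limit here is an illustrative value).
rate_limit_rule = {
    "Name": "rate-limit-per-ip",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "rate-limit-per-ip",
    },
}

# A geo rule: block requests from outside an allowed set of
# countries (US and CA here, purely as an example).
geo_block_rule = {
    "Name": "allow-only-us-ca",
    "Priority": 2,
    "Statement": {
        "NotStatement": {
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["US", "CA"]}}
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "allow-only-us-ca",
    },
}
```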
After a few minutes, the rules are associated with your app and AWS WAF blocks suspicious requests.
If you want to see AWS WAF in action, you can simulate an attack and monitor it using the AWS WAF request inspection capabilities. For example, you can send a request with an empty User-Agent value. It will trigger a blocking rule in AWS WAF.
Let’s first send a valid request to my app.
curl -v -H "User-Agent: MyUserAgent" https://main.d3sk5bt8rx6f9y.amplifyapp.com/
* Host main.d3sk5bt8rx6f9y.amplifyapp.com:443 was resolved.
...(redacted for brevity)...
> GET / HTTP/2
> Host: main.d3sk5bt8rx6f9y.amplifyapp.com
> Accept: */*
> User-Agent: MyUserAgent
>
* Request completely sent off
< HTTP/2 200
< content-type: text/html
< content-length: 0
< date: Mon, 10 Mar 2025 14:45:26 GMT
We can observe that the server returned an HTTP 200 (OK) message.
Then, send a request with no value associated with the User-Agent HTTP header.
curl -v -H "User-Agent: " https://main.d3sk5bt8rx6f9y.amplifyapp.com/
* Host main.d3sk5bt8rx6f9y.amplifyapp.com:443 was resolved.
... (redacted for brevity) ...
> GET / HTTP/2
> Host: main.d3sk5bt8rx6f9y.amplifyapp.com
> Accept: */*
>
* Request completely sent off
< HTTP/2 403
< server: CloudFront
... (redacted for brevity) ...
<TITLE>ERROR: The request could not be satisfied</TITLE>
</HEAD><BODY>
<H1>403 ERROR</H1>
<H2>The request could not be satisfied.</H2>
We can observe that the server returned an HTTP 403 (Forbidden) message.
AWS WAF provides visibility into request patterns, helping you fine-tune your security settings over time. You can access logs through Amplify Hosting or the AWS WAF console to analyze traffic trends and refine security rules as needed.
Availability and pricing
Firewall support is available in all AWS Regions in which Amplify Hosting operates. This integration falls under an AWS WAF global resource, similar to Amazon CloudFront. Web ACLs can be attached to multiple Amplify Hosting apps, but they must reside in the same Region.
The pricing for this integration follows the standard AWS WAF pricing model. You pay for the AWS WAF resources you use based on the number of web ACLs, rules, and requests. On top of that, AWS Amplify Hosting adds $15/month when you attach a web application firewall to your application, prorated by the hour.
This new capability brings enterprise-grade security features to all Amplify Hosting customers, from individual developers to large enterprises. You can now build, host, and protect your web applications within the same service, reducing the complexity of your architecture and streamlining your security management.
To learn more, visit the AWS WAF integration documentation for Amplify or try it directly in the Amplify console.
— seb
Tuesday, March 25, 2025
Detailed geographic information for all AWS Regions and Availability Zones is now available
Starting today, you can get more granular visibility of geographic location information for AWS Regions and AWS Availability Zones (AZs). This detailed information will help you choose the Regions and AZs that align with your regulatory, compliance, and operational requirements.
We continue to expand the AWS global infrastructure to meet your business requirements and now have 114 AZs across 36 Regions. We have announced plans to add 12 more AZs and four Regions in New Zealand, Kingdom of Saudi Arabia, Taiwan, and the AWS European Sovereign Cloud.
One of the things we’ve learned from our customers is the need to have more visibility into the specific location of infrastructure within an AWS Region. This is important for customers in highly regulated industries such as the financial industry or gaming, where there are specific requirements for the physical placement of infrastructure. For example, FanDuel, a leading sports gaming company based in the U.S., is scaling into new markets across the U.S. and Canada. They are taking advantage of the improved geographic transparency to make more informed decisions and ensure they’re meeting data residency requirements as they scale their business quickly.
Geographies for AWS Regions
To find the geographic information for your Region, you can visit the AWS Global Infrastructure Regions and Availability Zones page. Once you navigate to this page, you can choose any tab on the map and scroll to the bottom to review the geographic information for each Region. See the following image for an example showing the North America Regions. As would be expected, the infrastructure for the US West (Oregon) Region is located in the United States of America, and the Canada (Central) Region is located in Canada.
Geographies for Availability Zones
To find the specific geographic information for an AZ, you can visit the AWS Regions and Availability Zones page in AWS Documentation. Choose the Region you’re interested in and you’ll find a table showing you the geography for that Region. As you see in the following screenshot, the infrastructure of the AZ with AZ ID use1-az1 is located in Virginia, United States of America.
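Note that AZ names like us-east-1a are randomized per AWS account, while AZ IDs like use1-az1 identify the same physical infrastructure for everyone. The sketch below maps one to the other using a sample response in the shape EC2’s DescribeAvailabilityZones returns (a real call would be ec2.describe_availability_zones() with boto3; the specific name-to-ID pairs shown are illustrative, since they differ per account):

```python
# Sample response in the shape of EC2 DescribeAvailabilityZones.
# The actual ZoneName-to-ZoneId pairing varies per AWS account.
sample_response = {
    "AvailabilityZones": [
        {"ZoneName": "us-east-1a", "ZoneId": "use1-az1"},
        {"ZoneName": "us-east-1b", "ZoneId": "use1-az2"},
    ]
}

def name_to_zone_id(response):
    """Map this account's AZ names to their stable AZ IDs."""
    return {az["ZoneName"]: az["ZoneId"] for az in response["AvailabilityZones"]}
```

Resolving names to IDs this way is useful when you need to pin workloads to the same physical AZ across accounts.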
Stay tuned
We will update these pages to reflect new geographic information as we continue to grow our AWS Global Infrastructure footprint and add more AWS Regions and AZs.
Quick links
To learn more, visit the AWS Global Infrastructure Regions and Availability Zones page or AWS Regions and Availability Zones in AWS Documentation, and send feedback to AWS re:Post or through your usual AWS Support contacts.
– Prasad
Monday, March 24, 2025
AWS Weekly Roundup: Omdia recognition, Amazon Bedrock RAG evaluation, International Women’s Day events, and more (March 24, 2025)
As we celebrate International Women’s Day (IWD) this March, I had the privilege of attending the ‘Women in Tech’ User Group meetup in Shenzhen last weekend. I was inspired to see over 100 women in tech from different industries come together to discuss AI ethics from a female perspective. Together, we explored strategies such as reducing gender bias in AI systems and promoting diverse representation in model training data. In the AWS Cloud Lab, participants used Amazon Bedrock with large language models (LLMs) to generate rose bloom videos, which was the most popular part of this meetup.
These gatherings are crucial to our efforts to engage more women in AI technology exploration and development, and to help make sure that the generative AI era evolves without gender bias. The collaborative spirit and technical curiosity displayed throughout the event is further proof that diverse teams truly build inclusive and effective solutions.
Speaking of vibrant community engagement, I also had the honor of presenting at Kubernetes Community Day (KCD) Beijing 2025 this weekend. The enthusiasm for container technologies was remarkable, with nearly 300 developers gathering to share experiences and best practices. During my keynote introducing the DoEKS project from Amazon Web Services (AWS), I was struck by the depth of interest in managed Kubernetes services. The audience’s questions revealed how widely adopted services such as Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon Elastic Container Service (Amazon ECS) have become among Chinese developers building mission-critical applications.
This strong community interest aligns perfectly with findings from the Omdia Universe: Cloud Container Management & Services 2024–25 report. In this comprehensive evaluation of container management solutions hosted on public clouds, AWS was recognized as a Leader. The report specifically highlights that AWS offers the “widest range of options for working with Kubernetes or its own container management service, across cloud, edge, and on-premises environments.” You can read the full report to learn more about our comprehensive container portfolio and how we’re helping builders deploy scalable, reliable containerized applications.
Last Week’s launches
In addition to the inspiring community events, here are some AWS launches that caught my attention.
Amazon Q Business browser extension gets upgrades – The Amazon Q Business browser extension now features significant enhancements designed to streamline browser-based tasks. Users gain access to their company’s indexed knowledge alongside web content, direct PDF support within the browser, image file attachment capabilities, and controls to remove irrelevant attachments from conversation context. The expanded context window accommodates larger web pages and more detailed prompts, resulting in more helpful responses. For advanced needs, the extension offers seamless transition to the full Amazon Q Business web experience with access to Actions and Amazon Q Apps. Review the Enhancing web browsing with Amazon Q Business in the documentation for detailed setup instructions and feature descriptions to learn more about this announcement.
Amazon Bedrock RAG evaluation is now generally available – Offering comprehensive assessment of both Bedrock Knowledge Bases and custom Retrieval Augmented Generation (RAG) systems through LLM-as-a-judge methodology. The service evaluates retrieval quality and end-to-end generation with metrics for relevance, correctness, and hallucination detection, and the newly added support for custom RAG pipeline evaluations lets you bring your own input-output pairs and retrieved contexts directly into the evaluation job, along with new citation precision metrics and Amazon Bedrock Guardrails integration for more flexible RAG system optimization. To learn more, visit the Amazon Bedrock Evaluations page and What is Amazon Bedrock? in the documentation.
Amazon Nova expands Tool Choice options for Converse API – We’ve enhanced Amazon Nova with expanded Tool Choice capabilities for the Converse API, giving developers more flexibility in building sophisticated AI applications. This update allows models to determine when to use tools to fulfill user requests more effectively. Learn more in the announcement about expanded Tool Choice options.
Amazon Bedrock Guardrails adds policy-based enforcement for responsible AI – Builders can now enforce responsible AI policies at scale with the new AWS Identity and Access Management (IAM) policy-based enforcement capabilities in Amazon Bedrock Guardrails. This feature lets you specify required guardrails through IAM policies using the bedrock:GuardrailIdentifier condition key, so that all model inference calls comply with your organization’s AI safety standards. When your teams make Amazon Bedrock Invoke or Converse API calls, requests are automatically rejected if they don’t include the mandated guardrails, providing consistent protection against undesirable content, sensitive information exposure, and model hallucinations. Refer to Set up permissions to use guardrails for content filtering in the technical documentation and the Amazon Bedrock Guardrails product page to learn more.
Next generation of Amazon Connect released – We’ve launched the next generation of Amazon Connect, featuring AI-powered interactions designed to strengthen customer relationships and improve business outcomes. This major update brings enhanced agent experiences, smarter customer interactions, and deeper operational insights to contact centers of all sizes. Learn more from the new launch post in the AWS Contact Center Blog.
Amazon Redshift Serverless introduces Current and Trailing release tracks – Amazon Redshift Serverless now offers two release tracks to give users more control over their update cadence. The Current track delivers the most up-to-date certified release with the latest features and security updates, while the Trailing track remains on the previous certified release. This dual-track approach allows organizations to validate new releases on select workgroups before implementing them across production environments. Users can easily switch between tracks through the Amazon Redshift console, providing the flexibility to balance innovation with stability for mission-critical workloads. This capability is available in all AWS Regions where Amazon Redshift Serverless is offered. Refer to Tracks for Amazon Redshift provisioned cluster and serverless work groups to learn more about the Current and Trailing tracks in Amazon Redshift Serverless.
AWS WAF now supports URI fragment field matching – AWS WAF has expanded its capability to include URI fragment field matching, allowing security teams to create rules that inspect and match against the fragment portion of URLs. This enhancement enables more precise security controls for web applications that use URI fragments to identify specific sections within pages. Security professionals can now implement more targeted protections, such as restricting access to sensitive page elements, detecting suspicious navigation patterns, and enhancing bot mitigation by analyzing fragment usage patterns characteristic of automated attacks. This feature is available in all AWS Regions where AWS WAF is supported. For more information about URI field for matching, visit the AWS WAF Developer Guide.
For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS.
Other AWS news
Here are some additional projects and blog posts that you might find interesting.
Build your generative AI skills at AWS Gen AI Lofts – AWS has established more than 10 global hubs offering training and networking for developers and startups in 2025, where you can gain practical, hands-on experience with the latest AI technologies. These revamped spaces feature dedicated zones where you can participate in workshops on prompt engineering, foundation model (FM) selection, and implementing AI in production environments. If you’re near San Francisco, New York, Tokyo, or other major tech hubs with AWS Gen AI Lofts, stop by to access these free resources and accelerate your generative AI development skills. Check out all of the AWS Gen AI Loft locations and events, and read 5 ways to build your AI skills on AWS Gen AI Loft to learn more.
AWS Lambda’s architecture for billions of asynchronous invocations – A recent technical article reveals how AWS Lambda handles massive scale through sophisticated engineering approaches. The Lambda asynchronous invocation path employs multiple queuing strategies, consistent hashing for intelligent partitioning, and shuffle-sharding techniques to minimize noisy neighbor effects. The system relies on key observability metrics (AsyncEventReceived, AsyncEventAge, and AsyncEventDropped) to maintain optimal performance. These architectural decisions enable Lambda to process tens of trillions of monthly invocations across 1.5 million active customers while providing reliable scalability and performance isolation. For details, read Handling billions of invocations – best practices from AWS Lambda on the AWS Compute Blog.
AWS is reducing prices by more than 11% for its high-memory U7i instances across all Regions and pricing models. The reduction applies to four instances: u7i-12tb.224xlarge, u7in-16tb.224xlarge, u7in-24tb.224xlarge, and u7in-32tb.224xlarge. The new On-Demand pricing, which covers shared, dedicated, and host tenancy options, is retroactive to March 1, 2025. For new Savings Plans purchases, pricing is effective immediately.
Create your AWS Builder ID and reserve your alias – Builder ID is a universal login credential that gives you access beyond the AWS Management Console to AWS tools and resources, including over 600 free training courses, community features, and developer tools such as Amazon Q Developer.
From community.aws
Here are some of my favorite posts from community.aws.
Model Context Protocol (MCP): why it matters – The recently introduced Model Context Protocol (MCP) creates a standardized way for AI applications to communicate with multiple FMs using consistent prompts and tools.
Build serverless GenAI Apps faster with Amazon Q Developer CLI agent – Discover how Amazon Q Developer CLI Agent revolutionizes cloud development by building a complete serverless generative AI application in minutes instead of days.
Automating code reviews with Amazon Q and GitHub actions – A new developer tutorial demonstrates how to integrate Amazon Q Developer with GitHub Actions to automatically analyze pull requests and provide AI-powered code feedback.
DeepSeek on AWS – A new technical guide demonstrates how to deploy DeepSeek’s powerful open-source AI models on AWS infrastructure. The tutorial provides step-by-step instructions for setting up these cutting-edge models using Amazon SageMaker, Amazon Elastic Compute Cloud (Amazon EC2) instances with GPUs, or through integration with Amazon Bedrock. The guide covers optimization techniques, sample applications, and best practices for balancing performance with cost efficiency.
Upcoming AWS events
Check your calendars and sign up for these upcoming AWS events.
Empowering Futures – Women Leading the Way in Tech and Non-Tech Careers – Whether you’re here to expand your professional circle, learn about the AWS Cloud or gain wisdom from inspiring speakers, this event has something for everyone. This is a public event open to everyone in the Seattle area—for free—on March 27, 2025.
AWS at KubeCon + CloudNativeCon London 2025 – Join us at KubeCon London, April 1–4, at ExCeL booth S300 for live product demonstrations that help you simplify Kubernetes operations, optimize costs and performance, harness the power of artificial intelligence and machine learning (AI/ML), and build scalable platform strategies.
That’s all for this week. Check back next Monday for another Weekly Roundup!
– Betty
This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!
Monday, March 17, 2025
AWS Weekly Roundup: AWS Pi Day, Amazon Bedrock multi-agent collaboration, Amazon SageMaker Unified Studio, Amazon S3 Tables, and more
Thanks to everyone who joined us for the fifth annual AWS Pi Day on March 14. Since its inception in 2021, commemorating the Amazon Simple Storage Service (Amazon S3) 15th anniversary, AWS Pi Day has grown into a flagship event highlighting the transformative power of cloud technologies in data management, analytics, and AI.
This year’s virtual event featured in-depth discussions with Amazon Web Services (AWS) product teams showcasing our continued innovation in helping customers build robust data foundations for analytics and AI workloads.
Missed the live event? You can still access all content on-demand at the event page. Whether you’re developing data lakehouses, training AI models, creating generative AI applications, or optimizing analytics workloads, the shared insights will help you maximize the value of your data.
Last week’s launches
Here are some launches that got my attention during the previous week.
Amazon Bedrock now supports multi-agent collaboration – With the availability of multi-agent collaboration in Amazon Bedrock, you can create networks of specialized agents that communicate and coordinate under the guidance of a supervisor agent. You can build, deploy, and manage networks of AI agents that work together to execute complex, multi-step workflows efficiently.
Availability of fully managed DeepSeek-R1 model in Amazon Bedrock – AWS is the first cloud service provider (CSP) to deliver DeepSeek-R1 as a fully managed, generally available model. Use the capabilities of DeepSeek-R1 for your generative AI applications with a single API through this fully managed service in Amazon Bedrock.
Amazon SageMaker Unified Studio is now generally available – You can now use Amazon SageMaker Unified Studio as your single data and AI development environment, where you can find and access all of your organization’s data and work using the best tools for your specific needs. With the new simplified permissions management, you can easily bring your existing AWS resources into the unified studio. You’ll be able to find, access, and query your organization’s data and AI assets while collaborating with your team to securely build and share your analytics and AI artifacts—from data and models to generative AI applications.
Amazon Bedrock’s capabilities now generally available within Amazon SageMaker Unified Studio – SageMaker Unified Studio brings selected capabilities from Amazon Bedrock into SageMaker. You can now rapidly prototype, customize, and share generative AI applications using foundation models (FMs) and advanced features such as Amazon Bedrock Knowledge Bases, Amazon Bedrock Guardrails, Amazon Bedrock Agents, and Amazon Bedrock Flows to create tailored solutions aligned with your requirements and responsible AI guidelines all within SageMaker.
Amazon S3 Tables integration with Amazon SageMaker Lakehouse is now generally available – Amazon S3 Tables now seamlessly integrate with Amazon SageMaker Lakehouse, making it easy for you to query and join S3 Tables with data in S3 data lakes, Amazon Redshift data warehouses, and third-party data sources. S3 Tables deliver the first cloud object store with built-in Apache Iceberg support.
Amazon S3 Tables now support create and query table operations directly from the S3 console using Amazon Athena – Amazon S3 Tables adds create and query table support in the S3 console. With this new feature, you can now create a table, populate it with data, and query it directly from the S3 console using Amazon Athena, making it easier to get started and analyze data in S3 table buckets.
Amazon S3 reduces pricing for S3 object tagging by 35% – Amazon S3 reduces pricing for S3 object tagging by 35% in all AWS Regions to $0.0065 per 10,000 tags per month. Object tags are key-value pairs applied to S3 objects that can be created, updated, or deleted at any time during the lifetime of the object.
Serverless Land Patterns available in Visual Studio Code – Serverless Land‘s extensive application pattern library is now available directly into the Visual Studio Code (VS Code) IDE, making it easier for developers to build serverless applications. This integration eliminates the need to switch between your development environment and external resources when building serverless architectures by enabling you to browse, search, and implement pre-built serverless patterns directly in VS Code IDE.
Amplify Hosting Announces Skew Protection Support – AWS Amplify Hosting now offers Skew Protection, a feature that guarantees version consistency across your deployments. This feature ensures frontend requests are always routed to the correct server backend version—eliminating version skew and making deployments more reliable.
From community.aws
Here are some of my favorite posts from community.aws. Create your AWS Builder ID to start sharing your tips and connect with fellow builders. Your Builder ID is a universal login credential that gives you access, beyond the AWS Management Console, to AWS tools and resources, including over 600 free training courses, community features, and developer tools such as Amazon Q Developer.
Seamless SQL Server Recovery on EC2 with AWS Systems Manager (Greg Vinton) – This guide explains how to use the AWSEC2-RestoreSqlServerDatabaseWithVss automation runbook to restore a Microsoft SQL Server database on an Amazon Elastic Compute Cloud (Amazon EC2) instance.
Secure Deployment Strategies in Amazon EKS with Azure DevOps (Abhishek Nanda) – Build and deploy containerized applications on Amazon Elastic Kubernetes Service (Amazon EKS) using Azure DevOps.
Connect Your Favorite LLM Client to Bedrock (Qinjie Zhang) – It’s common to use desktop applications like MSTY, Chatbox AI, and LM Studio to simplify working with large language models (LLMs). This blog provides a step-by-step guide on connecting your favorite local LLM clients to Amazon Bedrock.
From PHP to Python with the help of Amazon Q Developer (Ricardo Sueiras) – In this blog post, Ricardo showcases how to use Amazon Q Developer CLI to refactor code from one programming language to another.
Upcoming AWS events
Check your calendars and sign up for these upcoming AWS events:
AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Milan, Italy (April 2), Bay Area – Security Edition (April 4), Timișoara, Romania (April 10), and Prague, Czech Republic (April 29).
AWS Innovate: Generative AI + Data – Join a free online conference focusing on generative AI and data innovations in Latin America on April 8.
AWS Summits – The AWS Summit season is underway! Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Paris (April 9), Amsterdam (April 16), London (April 30), and Poland (May 5).
AWS re:Inforce (June 16–18) – Our annual learning event devoted to all things AWS Cloud security in Philadelphia, PA. Registration opens in March, so be ready to join more than 5,000 security builders and leaders.
AWS DevDays are free technical events where developers can learn about some of the hottest topics in cloud computing. DevDays offer hands-on workshops, technical sessions, live demos, and networking with AWS technical experts and your peers. Register to access AWS DevDays sessions on demand.
That’s all for this week. Check back next Monday for another Weekly Roundup!
– Prasad
This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!
How is the News Blog doing? Take this 1 minute survey!
(This survey is hosted by an external company. AWS handles your information as described in the AWS Privacy Notice. AWS will own the data gathered via this survey and will not share the information collected with survey respondents.)