Tuesday, September 30, 2025

Announcing Amazon ECS Managed Instances for containerized applications

Today, we’re announcing Amazon ECS Managed Instances, a new compute option for Amazon Elastic Container Service (Amazon ECS) that enables developers to use the full range of Amazon Elastic Compute Cloud (Amazon EC2) capabilities while offloading infrastructure management responsibilities to Amazon Web Services (AWS). This new offering combines the operational simplicity of offloading infrastructure with the flexibility and control of Amazon EC2, which means customers can focus on building applications that drive innovation while reducing total cost of ownership (TCO) and maintaining AWS best practices.

Customers running containerized workloads told us they want to combine the simplicity of serverless with the flexibility of self-managed EC2 instances. Although serverless options provide an excellent general-purpose solution, some applications require specific compute capabilities, such as GPU acceleration, particular CPU architectures, or enhanced networking performance. Additionally, customers with existing Amazon EC2 capacity investments through EC2 pricing options couldn’t fully use these commitments with serverless offerings.

Amazon ECS Managed Instances provides a fully managed container compute environment that supports a broad range of EC2 instance types and deep integration with AWS services. By default, it automatically selects the most cost-optimized EC2 instances for your workloads, but you can specify particular instance attributes or types when needed. AWS handles all aspects of infrastructure management, including provisioning, scaling, security patching, and cost optimization, enabling you to concentrate on building and running your applications.

Let’s try it out

Looking at the AWS Management Console experience for creating a new Amazon ECS cluster, I can see the new option for using ECS Managed Instances. Let’s take a quick tour of all the new options.

Creating an ECS cluster with Managed Instances

After I’ve selected Fargate and Managed Instances, I’m presented with two options. If I select Use ECS default, Amazon ECS chooses general purpose instance types by grouping pending tasks together and picking the optimal instance type based on cost and resilience metrics. This is the most straightforward and recommended way to get started. Selecting Use custom – advanced opens additional configuration parameters, where I can fine-tune the attributes of the instances Amazon ECS will use.

Creating an ECS cluster with Managed Instances

By default, I see CPU and Memory as attributes, but I can select from 20 additional attributes to continue to filter the list of available instance types Amazon ECS can access.

Creating an ECS cluster with Managed Instances

After I’ve made my attribute selections, I see a list of all the instance types that match my choices.

Creating an ECS cluster with Managed Instances

From here, I can create my ECS cluster as usual, and Amazon ECS will provision instances on my behalf based on the attributes and criteria I defined in the previous steps.
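To give an idea of how this looks from the application side, here is a minimal sketch using the AWS SDK for Python (Boto3). It assumes the cluster was created with a Managed Instances capacity provider; the cluster, task definition, and capacity provider names below are placeholders rather than values from this walkthrough.

import boto3

# Placeholder names -- replace with your own cluster, task definition,
# and the Managed Instances capacity provider you configured earlier.
ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.run_task(
    cluster="demo-managed-instances-cluster",
    taskDefinition="demo-web-app:1",
    capacityProviderStrategy=[
        {"capacityProvider": "demo-managed-instances-provider", "weight": 1}
    ],
    count=2,
)

for task in response["tasks"]:
    print(task["taskArn"], task["lastStatus"])

Amazon ECS then groups the pending tasks and launches (or reuses) the most cost-effective instances that match the attributes defined in the previous steps.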

Key features of Amazon ECS Managed Instances

With Amazon ECS Managed Instances, AWS takes full responsibility for infrastructure management, handling all aspects of instance provisioning, scaling, and maintenance. This includes regular security patching, with instance replacement initiated every 14 days (because instances drain running tasks before termination, the actual lifetime of an instance may be longer), and the ability to schedule maintenance windows using Amazon EC2 event windows to minimize disruption to your applications.

The service provides exceptional flexibility in instance type selection. Although it automatically selects cost-optimized instance types by default, you maintain the power to specify desired instance attributes when your workloads require specific capabilities. This includes options for GPU acceleration, CPU architecture, and network performance requirements, giving you precise control over your compute environment.

To help optimize costs, Amazon ECS Managed Instances intelligently manages resource utilization by automatically placing multiple tasks on larger instances when appropriate. The service continually monitors and optimizes task placement, consolidating workloads onto fewer instances so that idle (empty) instances can be drained and terminated, providing both high availability and cost efficiency for your containerized applications.

Integration with existing AWS services is seamless, particularly with Amazon EC2 features such as EC2 pricing options. This deep integration means that you can maximize existing capacity investments while maintaining the operational simplicity of a fully managed service.

Security remains a top priority with Amazon ECS Managed Instances. The service runs on Bottlerocket, a purpose-built container operating system, and maintains your security posture through automated security patches and updates. You can see all the updates and patches applied to the Bottlerocket OS image on the Bottlerocket website. This comprehensive approach to security keeps your containerized applications running in a secure, maintained environment.

Available now

Amazon ECS Managed Instances is available today in the US East (N. Virginia), US West (Oregon), Europe (Dublin), Africa (Cape Town), Asia Pacific (Singapore), and Asia Pacific (Tokyo) AWS Regions. You can start using Managed Instances through the AWS Management Console, AWS Command Line Interface (AWS CLI), or infrastructure as code (IaC) tools such as AWS Cloud Development Kit (AWS CDK) and AWS CloudFormation. You pay for the EC2 instances you use plus a management fee for the service.

To learn more about Amazon ECS Managed Instances, visit the documentation and get started simplifying your container infrastructure today.



from AWS News Blog https://ift.tt/7Z0tbIW
via IFTTT

Announcing AWS Outposts third-party storage integration with Dell and HPE

Since announcing second-generation AWS Outposts racks in April with breakthrough performance and scalability, we’ve continued to innovate on behalf of our customers at the edge of the cloud. Today, we’re expanding the AWS Outposts third-party storage integration program to include Dell PowerStore and HPE Alletra Storage MP B10000 systems, joining our existing integrations with NetApp on-premises enterprise storage arrays and Pure Storage FlashArray. This program makes it easy for customers to use AWS Outposts with third-party storage arrays through AWS native tooling. The integration is particularly important for organizations migrating VMware workloads to AWS that need to maintain their existing storage infrastructure during the transition, and for those who must meet strict data residency requirements by keeping their data on premises while using AWS services.

This announcement builds upon two significant storage integration milestones we achieved in the past year. In December 2024, we introduced the ability to attach block data volumes from third-party storage arrays to Amazon EC2 instances on Outposts directly through the AWS Management Console. Then in July 2025, we enabled booting Amazon EC2 instances directly from these external storage arrays. Now, with the addition of Dell and HPE, customers have even more choice in how they integrate their on-premises storage investments with AWS Outposts.

Enhanced storage integration capabilities

Our third-party storage integration supports both data and boot volumes, offering two boot methods: iSCSI SANboot and Localboot. The iSCSI SANboot option enables both read-only and read-write boot volumes, while Localboot supports read-only boot volumes using either iSCSI or NVMe-over-TCP protocols. With this comprehensive approach, customers can centrally manage their storage resources while maintaining the consistent hybrid experience that Outposts provides.

Through the Amazon EC2 Launch Instance Wizard in the AWS Management Console, customers can configure their instances to use external storage from any of our supported partners. For boot volumes, we provide AWS-verified AMIs for Windows Server 2022 and Red Hat Enterprise Linux 9, with automation scripts available through AWS Samples to simplify the setup process.

Support for various Outposts configurations

All third-party storage integration features are supported on Outposts 2U servers and both generations of Outposts racks. Support for second-generation Outposts racks means customers can combine the enhanced performance of our latest EC2 instances on Outposts—including twice the vCPU, memory, and network bandwidth—with their preferred storage solutions. The integration works seamlessly with both our new simplified network scaling capabilities and specialized Amazon EC2 instances designed for ultra-low latency and high throughput workloads.

Things to know

Customers can begin using these capabilities today with their existing Outposts deployments or when ordering new Outposts through the AWS Management Console. If you are using third-party storage integration with Outposts servers, you can have either your onsite personnel or a third-party IT provider install the servers for you. After the Outposts servers are connected to your network, AWS will remotely provision compute and storage resources so you can start launching applications. For Outposts rack deployments, AWS technicians verify site conditions and network connectivity before installing and activating the rack. Storage partners assist with the implementation of the third-party storage components.

Third-party storage integration for Outposts with all compatible storage vendors is available at no additional charge in all AWS Regions where Outposts is supported. See the FAQs for Outposts servers and Outposts racks for the latest list of supported Regions.

This expansion of our Outposts third-party storage integration program demonstrates our continued commitment to providing flexible, enterprise-grade hybrid cloud solutions, meeting customers where they are in their cloud migration journey. To learn more about this capability and our supported storage vendors, visit the AWS Outposts partner page and our technical documentation for Outposts servers, second-generation Outposts racks, and first-generation Outposts racks. To learn more about partner solutions, check out Dell PowerStore integration with AWS Outposts and HPE Alletra Storage MP B10000 integration with AWS Outposts.



from AWS News Blog https://ift.tt/tKPOTjn
via IFTTT

Monday, September 29, 2025

Introducing Claude Sonnet 4.5 in Amazon Bedrock: Anthropic’s most intelligent model, best for coding and complex agents

Today, we’re excited to announce that Claude Sonnet 4.5 from Anthropic is now available in Amazon Bedrock, a fully managed service that offers a choice of high-performing foundation models from leading AI companies. This new model builds upon Claude 4’s foundation to achieve state-of-the-art performance in coding and complex agentic applications.

Claude Sonnet 4.5 demonstrates advancements in agent capabilities, with enhanced performance in tool handling, memory management, and context processing. The model shows marked improvements in code generation and analysis, from identifying optimal improvements to exercising stronger judgment in refactoring decisions. It particularly excels at autonomous long-horizon coding tasks, where it can effectively plan and execute complex software projects spanning hours or days while maintaining consistent performance and reliability throughout the development cycle.

By using Claude Sonnet 4.5 in Amazon Bedrock, developers gain access to a fully managed service that not only provides a unified API for foundation models but also keeps their data under their control, with enterprise-grade tools for security and optimization.

Claude Sonnet 4.5 also seamlessly integrates with Amazon Bedrock AgentCore, enabling developers to maximize the model’s capabilities for building complex agents. AgentCore’s purpose-built infrastructure complements the model’s enhanced abilities in tool handling, memory management, and context understanding. Developers can leverage complete session isolation, 8-hour long-running support, and comprehensive observability features to deploy and monitor production-ready agents from autonomous security operations to complex enterprise workflows.

Business applications and use cases
Beyond its technical capabilities, Sonnet 4.5 delivers practical business value through consistent performance and advanced problem-solving abilities. The model excels at producing and editing business documents while maintaining reliable performance across complex workflows.

The model demonstrates strength in several key industries:

  • Cybersecurity – Claude Sonnet 4.5 can be used to deploy agents that autonomously patch vulnerabilities before exploitation, shifting from reactive detection to proactive defense.
  • Finance – Sonnet 4.5 handles everything from entry-level financial analysis to advanced predictive analysis, helping transform manual audit preparation into intelligent risk management.
  • Research – Sonnet 4.5 handles tools and context more effectively and can deliver ready-to-use office files, helping turn expert analysis into final deliverables and actionable insights.

Sonnet 4.5 features in the Amazon Bedrock API
Here are some highlights of Sonnet 4.5 in the Amazon Bedrock API:

Smart Context Window Management – The new API introduces intelligent handling when AI models reach their maximum capacity. Instead of returning errors when conversations get too long, Claude Sonnet 4.5 will now generate responses up to the available limit and clearly indicate why it stopped. This eliminates frustrating interruptions and allows users to maximize their available context window.

Tool Use Clearing for Efficiency – Claude Sonnet 4.5 enables automatic cleanup of tool interaction history during long conversations. When conversations involve multiple tool calls, the system can automatically remove older tool results while preserving recent ones. This keeps conversations efficient and prevents unnecessary token consumption, reducing costs while maintaining conversation quality.

Cross-Conversation Memory – A new memory capability enables Sonnet 4.5 to remember information across different conversations through the use of a local memory file. Users can explicitly ask the model to remember preferences, context, or important information that persists beyond a single chat session. This creates more personalized and contextually aware interactions while keeping the information safe within the local file.

With these new capabilities for managing context, developers can build AI agents capable of handling long-running tasks with greater intelligence, without hitting context limits or losing critical information as frequently.

Getting started
To begin working with Claude Sonnet 4.5, you can access it through Amazon Bedrock using the correct model ID. A good practice is to use the Amazon Bedrock Converse API to write code once and seamlessly switch between different models, making it easier to experiment with Sonnet 4.5 or any of the other models available in Amazon Bedrock.

Let’s see this in action with a simple example. I’m going to use the Amazon Bedrock Converse API to send a prompt to Sonnet 4.5. I start by importing the modules I’m going to use. For this short example, I only need AWS SDK for Python (Boto3) so I can create a BedrockRuntimeClient. I’m also importing the rich package so I can format my output nicely later on.

Following best practices, I create a boto3 session and create an Amazon Bedrock client from it instead of creating one directly. This gives you explicit control over configuration, improves thread safety, and makes your code more predictable and testable compared to relying on the default session.
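As a rough sketch of those first steps (the Region is an assumption; pick one where you have access to the model):

import boto3
from rich.console import Console
from rich.markdown import Markdown

# Explicit session instead of the default one, as described above
session = boto3.Session(region_name="us-east-1")
bedrock_runtime = session.client("bedrock-runtime")
console = Console()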

I want to give the model something with a bit of complexity instead of asking a simple question to demonstrate the power of Sonnet 4.5. So I’m going to give the model the current state of an imaginary legacy monolithic application written in Java with a single database and ask for a digital transformation plan which includes a migration strategy, risk assessment, estimated timeline and key milestones and specific AWS services recommendations.

Because the prompt is quite long, I put it in a local text file and load it in code. I then set up the Amazon Bedrock Converse payload, setting the role to “user” to indicate that this is a message from the user of the application, and add the prompt to the content.

This is where the magic happens! We put it all together and call Claude Sonnet 4.5 using its model ID. Well, kind of. You can only access Sonnet 4.5 through an inference profile. This defines which AWS Regions will process your model requests and helps manage throughput and performance.

For this demo, I’ll be using one of Amazon Bedrock’s system-defined cross-Region inference profiles, which automatically routes requests across multiple Regions for optimal performance.

Now I just need to print the results to the screen. This is where I use the rich package I imported earlier so that I get nicely formatted output, because I’m expecting a long response for this one. I also save the output to a file so I have it handy to share with my teams.
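Putting the pieces described above together, a minimal version of the call could look like the following. The prompt file name, output file name, and inference profile ID are placeholders; check the Amazon Bedrock console for the exact cross-Region inference profile ID available in your Region.

# Load the long prompt from a local text file
with open("transformation-prompt.txt") as f:
    prompt = f.read()

# Cross-Region inference profile ID (placeholder -- verify in the console)
model_id = "us.anthropic.claude-sonnet-4-5-20250929-v1:0"

response = bedrock_runtime.converse(
    modelId=model_id,
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 4096, "temperature": 0.5},
)

output_text = response["output"]["message"]["content"][0]["text"]

# Nicely formatted output in the terminal, plus a copy on disk to share
console.print(Markdown(output_text))
with open("transformation-plan.md", "w") as f:
    f.write(output_text)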

Ok, let’s check the results! As expected, Sonnet 4.5 worked through my requirements and provided extensive and deep guidance for my digital transformation plan that I could start putting into practice. It included an executive summary, a step-by-step migration strategy split into phases with time estimates, and even some code samples to seed the development process and start breaking things down into microservices. It also provided the business cases for introducing technology and recommended the correct AWS services for each scenario. Here are some highlights from the report.

Claude Sonnet 4.5 is able to maintain consistency while delivering creative solutions, making it an ideal choice for businesses seeking to use AI for complex problem-solving and development tasks. Its enhanced capabilities in following directions and using tools effectively translate into more reliable and innovative solutions across various business contexts.

Things to know
Claude Sonnet 4.5 represents a significant step forward in agent capabilities, particularly excelling in areas where consistent performance and creative problem-solving are essential. Its enhanced abilities in tool handling, memory management, and context processing make it particularly valuable across key industries such as finance, research, and cybersecurity. Whether handling complex development lifecycles, executing long-running tasks, or tackling business-critical workflows, Claude Sonnet 4.5 combines technical excellence with practical business value.

Claude Sonnet 4.5 is available today. For detailed information about its availability, please visit the documentation.

To learn more about Amazon Bedrock, explore our self-paced Amazon Bedrock Workshop and discover how to use available models and their capabilities in your applications.



from AWS News Blog https://ift.tt/Ny7Utgh
via IFTTT

AWS Weekly Roundup: Amazon S3, Amazon Bedrock AgentCore, AWS X-Ray and more (September 29, 2025)

Wow, can you all believe it? We’re nearing the end of the year already. Next thing you know, AWS re:Invent will be here! This is our biggest event of the year, taking place in Las Vegas from December 1 to December 5, where we reveal and release many of the things we’ve been working on. If you haven’t already, buy your tickets to AWS re:Invent 2025 to experience it in person. If you can’t make it to Vegas, don’t worry; stay tuned here on the AWS News Blog, where we’ll be covering many of the announcements as they happen.

However, there are plenty of exciting new releases between now and then, so, as usual, let’s take a quick look at some of the highlights from last week so you can catch up on what’s been recently launched, starting with one of the most popular services: Amazon S3!

S3 updates
The S3 team has been working really hard to make working with S3 even better. This month alone has seen releases such as bulk target selection for S3 Batch Operations, support for conditional deletes in S3 general purpose buckets, increased file size and archive scanning limits for malware protection, and more.

Last week brought another S3 milestone: a preview experience for Amazon S3 Tables in the AWS Management Console. You can now take a quick peek at your S3 Tables right from the console, making it easier to understand their data structure and content without writing any SQL. This feature is ready to use in all Regions where S3 Tables are supported, with costs limited to just the S3 requests needed to display your table preview.

Other releases
Here are some highlights from other services that also released great features last week.

Amazon Bedrock AgentCore expands enterprise integration and automation options — Bedrock AgentCore services are leveling up their enterprise readiness with new support for Amazon VPC connectivity, AWS PrivateLink, AWS CloudFormation, and resource tagging, giving developers more control over security and infrastructure automation. These enhancements let you deploy AI agents that can securely access private resources, automate infrastructure deployment, and maintain organized resource management whether you’re using AgentCore Runtime for scalable agent deployment, Browser for web interactions, or Code Interpreter for secure code execution.

AWS X-Ray brings smart sampling for better error detection — AWS X-Ray now offers adaptive sampling that automatically adjusts trace capture rates within your defined limits, helping DevOps teams and SREs catch critical issues without oversampling during normal operations. The new capability includes Sampling Boost for increased sampling during anomalies and Anomaly Span Capture for targeted error tracing, giving teams better observability exactly when they need it while keeping costs in check.

  • AWS Clean Rooms enhances real-time collaboration with incremental ID mapping — AWS Clean Rooms now lets you update ID mapping tables with only new, modified, or deleted records through AWS Entity Resolution, making data synchronization across collaborators more efficient and timely. This improvement helps measurement providers maintain fresh datasets with advertisers and publishers while preserving privacy controls, enabling always-on campaign measurement without the need to reprocess entire datasets.

Short and sweet
Here are some bite-sized updates that could prove really handy for your teams or workloads.

Keeping up with the latest EC2 instance types can be challenging. AWS Compute Optimizer now supports 99 additional instance types including the latest C8, M8, R8, and I8 families.

In competitive gaming, every millisecond counts! Amazon GameLift has launched a new Local Zone in Dallas bringing ultra-low latency game servers closer to players in Texas.

When managing large-scale Amazon EC2 deployments, control is everything! Amazon EC2 Allowed AMIs setting now supports filtering by marketplace codes, deprecation time, creation date, and naming patterns to help prevent the use of non-compliant images. Additionally, EC2 Auto Scaling now lets you force cancel instance refreshes immediately, giving you faster control during critical deployments.

Making customer service more intelligent and secure across languages! Amazon Connect introduces enhanced analytics in its flow designer for better customer journey insights, adds custom attributes for precise interaction tracking, and expands Contact Lens sensitive data redaction to support seven additional European and American languages.

That’s it for this week!

Don’t forget to check out all the upcoming AWS events happening across the globe. There are many opportunities to attend free events where you can learn a lot and enjoy a great day among like-minded people in the tech industry.

And if you feel like competing for some cash, time is running out to be part of something extraordinary! The AWS AI Agent Global Hackathon continues until October 20, offering developers a unique opportunity to build innovative AI agents using AWS’s comprehensive gen AI stack. With over $45,000 in prizes and exclusive go-to-market opportunities up for grabs, don’t miss the chance to showcase your creativity and technical prowess in this global competition.

I hope you found something useful or exciting among last week’s launches. We post a weekly review every Monday to help you keep up with the latest from AWS, so make sure to bookmark this page, and hopefully we’ll see you for the next one!

Matheus Guimaraes | @codingmatheus

from AWS News Blog https://ift.tt/Dq9AeEP
via IFTTT

Friday, September 26, 2025

Tuesday, September 23, 2025

Accelerate AI agent development with the Nova Act IDE extension

Today, I’m excited to announce the Nova Act extension — a tool that streamlines building browser automation agents without leaving your IDE. The Nova Act extension integrates directly into IDEs like Visual Studio Code (VS Code), Kiro, and Cursor, helping you create web-based automation agents using natural language with the Nova Act model.

Here’s a quick look at the Nova Act extension in Visual Studio Code:

The Nova Act extension is built on top of the Amazon Nova Act SDK (preview), our software development kit (SDK) for browser automation agents. The Nova Act extension transforms traditional workflow development by eliminating context switching between coding and testing environments. You can now build, customize, and test production-grade agent scripts—all within your IDE—using features like natural language based generation, atomic cell-style editing, and integrated browser testing. This unified experience accelerates development velocity for tasks like form filling, QA automation, search, and complex multi-step workflows.

You can start with the Nova Act extension by describing your workflow in natural language to quickly generate an initial agent script. Customize it using the notebook-style builder mode to integrate APIs, data sources, and authentication, then validate it with local testing tools that simulate real-world conditions, including live step-by-step debugging of lengthy multi-step workflows.

Getting started with the Nova Act extension
First, I need to install the Nova Act extension from the extension manager in my IDE. 

I’m using Visual Studio Code, and after choosing Extensions, I enter Nova Act. Then, I select the extension and choose Install.

To get started, I need to obtain an API key. To do this, I navigate to the Nova Act page and follow the instructions to get the API key. Then I open the Command Palette with Cmd+Shift+P (macOS) or Ctrl+Shift+P (Windows/Linux) and select Set API Key.

After I’ve entered my API key, I can try Builder Mode. This is a notebook-style builder mode that breaks complex automation scripts into modular cells, allowing me to test and debug each step individually before moving to the next.

Here, I can use the Nova Act SDK to build my agent. On the right side, I have a Live view panel to preview my agent’s actions in the browser and an Output panel to monitor execution logs, including the model’s thinking and actions.
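As a rough idea of what a Builder Mode cell can contain, here is a minimal sketch using the Nova Act SDK in Python. The starting page and instructions are made-up examples, and the exact SDK surface may differ slightly in the preview; this assumes your API key is already configured.

from nova_act import NovaAct

# Start a browser session on a (hypothetical) starting page and drive it
# with natural-language instructions, one act() call per step.
with NovaAct(starting_page="https://www.example.com") as nova:
    nova.act("search for wireless headphones")
    nova.act("open the first result and note the price")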

To test the Nova Act extension, I choose Run all cells. This will start a new browser instance and act based on the given prompt.

I choose Fullscreen to see how browser automation works.

Another useful feature in Builder Mode is that I can navigate to the Output panel and select the cell to see its logs. This helps me debug or review logs specific to the cell I’m working on.

I can also select a template to get started.

Besides using Builder Mode, I can also chat with Nova Act to create a script for me. To do that, I select the extension and choose Generate Nova Act Script. The Nova Act extension opens a chat dialog in the right panel and automatically creates a script for me.

After I finish creating the script, I can choose Start Builder Mode, and the Nova Act extension will help me create a Python file in Builder Mode. This creates a seamless integration because I can switch between chat capability and Builder Mode.

In the chat interface, I see three workflow modes available:

  • Ask: Describe tasks in natural language to generate automation scripts
  • Edit: Refine or customize generated scripts before execution
  • Agent: Run, monitor, and interact with the AI agent performing the workflow

I can also add Context to provide relevant information about my active documents, instructions, problems, or additional Model Context Protocol (MCP) resources the agent can use, plus a screenshot of the current window. Providing this information helps the agent understand any specific requirements for the automation task.

The Nova Act extension also provides a set of predefined templates that I can access by entering / in the chat. These templates are predefined automation scenarios designed to help quickly generate scripts for common web tasks.

I can use these templates (for example, @novaAct /shopping [my requirements]) to get tailored Python scripts for my workflow. At launch, the Nova Act extension provides the following templates:

  • /shopping: Automates online shopping tasks (searching, comparing, purchasing)
  • /extract: Handles data extraction
  • /search: Performs search and information gathering
  • /qa: Automates quality assurance and testing workflows
  • /formfilling: Completes forms and data entry tasks

This extension transforms my agent development workflow by positioning the Nova Act extension as a full-stack agent builder tool—a complete agent IDE for the entire development lifecycle. I can prototype with natural language, customize with modular scripting, and validate with local testing—all without leaving my IDE—ensuring production-grade scripts.

Things to know
Here are key points to note:

  • Supported IDEs: At launch, the Nova Act extension is available for Visual Studio Code, Cursor, and Kiro, with additional IDE support planned
  • Open source: The Nova Act extension is available under the Apache 2.0 license, allowing for community contributions and customization
  • Pricing: The Nova Act extension is available at no charge.

Get started with the Nova Act extension by installing it from your IDE’s extension marketplace or visiting the GitHub repository for documentation and examples.

Happy automating!
Donnie



from AWS News Blog https://ift.tt/XtwgHDQ
via IFTTT

Monday, September 22, 2025

AWS Weekly Roundup: Amazon Q Developer, AWS Step Functions, AWS Cloud Club Captain deadline, and more (September 22, 2025)

Three weeks ago, I published a post about the new AWS Region in New Zealand (ap-southeast-6). This led to an incredible opportunity to visit New Zealand, where I met passionate builders and presented at several events including Serverless and Platform Engineering meetup, AWS Tools and Programming meetup, AWS Cloud Clubs in Auckland, and AWS Community Day New Zealand.

During my content creation process for these presentations, I discovered a useful feature in Amazon Q CLI called tangent mode. This feature has transformed how I stay focused by creating conversation checkpoints that let me explore side topics without losing my main thread.

This feature is in experimental mode, and you can enable it with q settings chat.enableTangentMode true. Try it out and see if it helps you.

Last week’s launches
Here are some launches that got my attention:

  • New Foundation Models in Amazon Bedrock — Amazon Bedrock expands its model selection with Qwen model family, DeepSeek-V3.1, and Stability AI image services now generally available, giving developers access to powerful multilingual models and advanced image generation capabilities for text generation, code generation, image creation, and complex problem-solving tasks.
  • Amazon VPC Reachability Analyzer Expands to Seven New Regions — Network Access Analyzer capabilities are now available in additional regions, helping customers analyze and troubleshoot network connectivity issues across their VPC infrastructure with improved global coverage.
  • Amazon Q Developer Supports Remote MCP Servers — Amazon Q Developer now integrates with remote Model Context Protocol (MCP) servers, enabling developers to extend their AI assistant capabilities with custom tools and data sources for enhanced development workflows.
  • AWS Step Functions Enhances Distributed Map with New Data Source Options — Step Functions introduces additional data source options and improved observability features for Distributed Map, making it easier to process large-scale parallel workloads with better monitoring and debugging capabilities.
  • Amazon Corretto 25 Generally Available — Amazon’s no-cost, multiplatform distribution of OpenJDK 25 is now generally available, providing Java developers with long-term support, performance enhancements, and security updates for building modern applications.
  • Amazon SageMaker HyperPod Introduces Autoscaling — SageMaker HyperPod now supports automatic scaling capabilities, allowing machine learning teams to dynamically adjust compute resources based on workload demands, optimizing both performance and cost for distributed training jobs.

Additional Updates

  • AWS Named Leader in 2025 Gartner Magic Quadrant for AI Code Assistants – AWS has been recognized as a Leader in Gartner’s Magic Quadrant for AI Code Assistants, highlighting Amazon Q Developer’s capabilities in helping developers write code faster and more securely with AI-powered suggestions.
  • Become an AWS Cloud Club Captain – Only a couple of days before it closes! Join a growing network of student cloud enthusiasts by becoming an AWS Cloud Club Captain! As a Captain, you’ll get to organize events and build cloud communities while developing leadership skills. The application window is open September 1-28, 2025.

Upcoming AWS events
Check your calendars and sign up for these upcoming AWS events as well as AWS re:Invent and AWS Summits:

  • AWS AI Agent Global Hackathon – This is your chance to dive deep into our powerful generative AI stack and create something truly awesome. From September 8th to October 20th, you have the opportunity to create AI agents using AWS suite of AI services, competing for over $45,000 in prizes and exclusive go-to-market opportunities.
  • AWS Gen AI Lofts – You can learn AWS AI products and services with exclusive sessions and meet industry-leading experts, and have valuable networking opportunities with investors and peers. Register in your nearest city: Mexico City (September 30–October 2), Paris (October 7–21), London (Oct 13–21), and Tel Aviv (November 11–19).
  • AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: South Africa (September 20), Bolivia (September 20), Portugal (September 27), and Manila (October 4-5).

You can browse all upcoming AWS events and AWS startup events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Happy building!

— Donnie



from AWS News Blog https://ift.tt/BWEhOTH
via IFTTT

Thursday, September 18, 2025

DeepSeek-V3.1 model now available in Amazon Bedrock

In March, Amazon Web Services (AWS) became the first cloud service provider to deliver DeepSeek-R1 in a serverless way by launching it as a fully managed, generally available model in Amazon Bedrock. Since then, customers have used DeepSeek-R1’s capabilities through Amazon Bedrock to build generative AI applications, benefiting from Bedrock’s robust guardrails and comprehensive tooling for safe AI deployment.

Today, I am excited to announce DeepSeek-V3.1 is now available as a fully managed foundation model in Amazon Bedrock. DeepSeek-V3.1 is a hybrid open weight model that switches between thinking mode (chain-of-thought reasoning) for detailed step-by-step analysis and non-thinking mode (direct answers) for faster responses.

According to DeepSeek, the thinking mode of DeepSeek-V3.1 achieves comparable answer quality with stronger multi-step reasoning for complex search tasks and big gains in thinking efficiency compared with DeepSeek-R1-0528.

Benchmark                 DeepSeek-V3.1   DeepSeek-R1-0528
Browsecomp                30.0            8.9
Browsecomp_zh             49.2            35.7
HLE                       29.8            24.8
xbench-DeepSearch         71.2            55.0
Frames                    83.7            82.0
SimpleQA                  93.4            92.3
Seal0                     42.6            29.7
SWE-bench Verified        66.0            44.6
SWE-bench Multilingual    54.5            30.5
Terminal-Bench            31.3            5.7
Source: https://api-docs.deepseek.com/news/news250821

DeepSeek-V3.1 model performance in tool usage and agent tasks has significantly improved through post-training optimization compared to previous DeepSeek models. DeepSeek-V3.1 also supports over 100 languages with near-native proficiency, including significantly improved capability in low-resource languages lacking large monolingual or parallel corpora. You can build global applications to deliver enhanced accuracy and reduced hallucinations compared to previous DeepSeek models, while maintaining visibility into its decision-making process.

Here are some key use cases for this model:

  • Code generation – DeepSeek-V3.1 excels in coding tasks with improvements in software engineering benchmarks and code agent capabilities, making it ideal for automated code generation, debugging, and software engineering workflows. It performs well on coding benchmarks while delivering high-quality results efficiently.
  • Agentic AI tools – The model features enhanced tool calling through post-training optimization, making it strong in tool usage and agentic workflows. It supports structured tool calling, code agents, and search agents, positioning it as a solid choice for building autonomous AI systems.
  • Enterprise applications – DeepSeek models are integrated into various chat platforms and productivity tools, enhancing user interactions and supporting customer service workflows. The model’s multilingual capabilities and cultural sensitivity make it suitable for global enterprise applications.

As I mentioned in my previous post, when you use publicly available models in your production environments, give careful consideration to data privacy requirements, check for bias in output, and monitor your results in terms of data security, responsible AI, and model evaluation.

You can access the enterprise-grade security features of Amazon Bedrock and implement safeguards customized to your application requirements and responsible AI policies with Amazon Bedrock Guardrails. You can also evaluate and compare models to identify the optimal model for your use cases by using Amazon Bedrock model evaluation tools.

Get started with the DeepSeek-V3.1 model in Amazon Bedrock
If you’re new to using the DeepSeek-V3.1 model, go to the Amazon Bedrock console, choose Model access under Bedrock configurations in the left navigation pane. To access the fully managed DeepSeek-V3.1 model, request access for DeepSeek-V3.1 in the DeepSeek section. You’ll then be granted access to the model in Amazon Bedrock.

Next, to test the DeepSeek-V3.1 model in Amazon Bedrock, choose Chat/Text under Playgrounds in the left menu pane. Then choose Select model in the upper left, and select DeepSeek as the category and DeepSeek-V3.1 as the model. Then choose Apply.

Using the selected DeepSeek-V3.1 model, I run the following example prompt about a technical architecture decision:

Outline the high-level architecture for a scalable URL shortener service like bit.ly. Discuss key components like API design, database choice (SQL vs. NoSQL), how the redirect mechanism works, and how you would generate unique short codes.

You can turn thinking on and off by toggling Model reasoning mode, which generates the response’s chain of thought before the final conclusion.

You can also access the model using the AWS Command Line Interface (AWS CLI) and AWS SDK. This model supports both the InvokeModel and Converse API. You can check out a broad range of code examples for multiple use cases and a variety of programming languages.
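For example, a minimal Converse API call from the AWS SDK for Python (Boto3) might look like the following sketch. The model ID is a placeholder; look up the exact DeepSeek-V3.1 model or inference profile ID in the Amazon Bedrock console before running it.

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

# Placeholder -- replace with the DeepSeek-V3.1 ID shown in your console
model_id = "deepseek.v3-1"

response = bedrock_runtime.converse(
    modelId=model_id,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "text": (
                        "Outline the high-level architecture for a scalable "
                        "URL shortener service like bit.ly."
                    )
                }
            ],
        }
    ],
    inferenceConfig={"maxTokens": 2048},
)

print(response["output"]["message"]["content"][0]["text"])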

To learn more, visit DeepSeek model inference parameters and responses in the AWS documentation.

Now available
DeepSeek-V3.1 is now available in the US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Europe (London), and Europe (Stockholm) AWS Regions. Check the full Region list for future updates. To learn more, check out the DeepSeek in Amazon Bedrock product page and the Amazon Bedrock pricing page.

Give the DeepSeek-V3.1 model a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.

Channy



from AWS News Blog https://ift.tt/74zvyfY
via IFTTT

Monday, September 15, 2025

AWS named as a Leader in 2025 Gartner Magic Quadrant for Cloud-Native Application Platforms and Container Management

A month ago, I shared that Amazon Web Services (AWS) is recognized as a Leader in the 2025 Gartner Magic Quadrant for Strategic Cloud Platform Services (SCPS), with Gartner naming AWS a Leader for the fifteenth consecutive year.

In 2024, AWS was named as a Leader in the Gartner Magic Quadrant for AI Code Assistants, Cloud-Native Application Platforms, Cloud Database Management Systems, Container Management, Data Integration Tools, Desktop as a Service (DaaS), and Data Science and Machine Learning Platforms as well as the SCPS. In 2025, we were also recognized as a Leader in the Gartner Magic Quadrant for Contact Center as a Service (CCaaS), Desktop as a Service and Data Science and Machine Learning (DSML) platforms. We strongly believe this means AWS provides the broadest and deepest range of services to customers.

Today, I’m happy to share recent Magic Quadrant reports that named AWS as a Leader in more cloud technology markets: Cloud-Native Application Platforms (aka Cloud Application Platforms) and Container Management.

2025 Gartner Magic Quadrant for Cloud-Native Application Platforms
AWS has been named a Leader in the Gartner Magic Quadrant for Cloud-Native Application Platforms for 2 consecutive years. AWS was positioned highest on “Ability to Execute”. Gartner defines cloud-native application platforms as those that provide managed application runtime environments for applications and integrated capabilities to manage the lifecycle of an application or application component in the cloud environment.

The following image is the graphical representation of the 2025 Magic Quadrant for Cloud-Native Application Platforms.

Our comprehensive cloud-native application portfolio—AWS Lambda, AWS App Runner, AWS Amplify, and AWS Elastic Beanstalk—offers flexible options for building modern applications with strong AI capabilities, demonstrated through continued innovation and deep integration across our broader AWS service portfolio.

You can simplify the service selection through comprehensive documentation, reference architectures, and prescriptive guidance available in the AWS Solutions Library, along with AI-powered, contextual recommendations from Amazon Q based on your specific requirements. While AWS Lambda is optimized for AWS to provide the best possible serverless experience, it follows industry standards for serverless computing and supports common programming languages and frameworks. You can find all necessary capabilities within AWS, including advanced features for AI/ML, edge computing, and enterprise integration.

You can build, deploy, and scale generative AI agents and applications by integrating these compute offerings with Amazon Bedrock for serverless inferences and Amazon SageMaker for artificial intelligence and machine learning (AI/ML) training and management.

Access the complete 2025 Gartner Magic Quadrant for Cloud-Native Application Platforms to learn more.

2025 Gartner Magic Quadrant for Container Management
In the 2025 Gartner Magic Quadrant for Container Management, AWS has been named as a Leader for three years and was positioned furthest for “Completeness of Vision”. Gartner defines container management as offerings that support the deployment and operation of containerized workloads. This process involves orchestrating and overseeing the entire lifecycle of containers, covering deployment, scaling, and operations, to ensure their efficient and consistent performance across different environments.

The following image is the graphical representation of the 2025 Magic Quadrant for Container Management.

AWS container services offer fully managed container orchestration with both AWS native solutions and open source technologies, providing a wide range of deployment options, from Kubernetes to our native orchestrator.

You can use Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). Both can be used with AWS Fargate for serverless container deployment. Additionally, EKS Auto Mode simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, and dynamically scaling resources for containerized applications.

You can connect on-premises and edge infrastructure back to AWS container services with EKS Hybrid Nodes and ECS Anywhere, or use EKS Anywhere for a fully disconnected Kubernetes experience supported by AWS. With flexible compute and deployment options, you can reduce operational overhead and focus on innovation and drive business value faster.

Access the complete 2025 Gartner Magic Quadrant for Container Management to learn more.

Channy

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.



from AWS News Blog https://ift.tt/JiocgpU
via IFTTT

AWS Weekly Roundup: Strands Agents 1M+ downloads, Cloud Club Captain, AI Agent Hackathon, and more (September 15, 2025)

Last week, Strands Agents, the AWS open source SDK for agentic AI, hit 1 million downloads and earned more than 3,000 GitHub stars less than 4 months after launching as a preview in May 2025. With Strands Agents, you can build production-ready, multi-agent AI systems in a few lines of code.

We’ve continuously improved features including support for multi-agent patterns, A2A protocol, and Amazon Bedrock AgentCore. You can use a collection of sample implementations to help you get started with building intelligent agents using Strands Agents. We always welcome your contribution and feedback to our project including bug reports, new features, corrections, or additional documentation.

Here is the latest research article from Amazon Science about the future of agentic AI and the questions that scientists are asking about agent-to-agent communication, contextual understanding, common sense reasoning, and more. The article explains the technical topic of agentic AI with relatable examples, including one about our personal habits of leaving doors open or closed, locked or unlocked.

Last week’s launches
Here are some launches that got my attention:

  • Amazon EC2 M4 and M4 Pro Mac instances – New M4 Mac instances offer up to 20% better application build performance compared to M2 Mac instances, while M4 Pro Mac instances deliver up to 15% better application build performance compared to M2 Pro Mac instances. These instances are ideal for building and testing applications for Apple platforms such as iOS, macOS, iPadOS, tvOS, watchOS, visionOS, and Safari.
  • LocalStack integration in Visual Studio Code (VS Code) – You can use LocalStack to locally emulate and test your serverless applications using the familiar VS Code interface without switching between tools or managing complex setup, thus simplifying your local serverless development process.
  • AWS Cloud Development Kit (AWS CDK) Refactor (Preview) – You can rename constructs, move resources between stacks, and reorganize CDK applications while preserving the state of deployed resources. By using AWS CloudFormation’s refactor capabilities with automated mapping computation, CDK Refactor eliminates the risk of unintended resource replacement during code restructuring.
  • AWS CloudTrail MCP Server – New AWS CloudTrail MCP server allows AI assistants to analyze API calls, track user activities, and perform advanced security analysis across your AWS environment through natural language interactions. You can explore more AWS MCP servers for working with AWS service resources.
  • Amazon CloudFront support for IPv6 origins – Your applications can send IPv6 traffic all the way to their origins, allowing them to meet their architectural and regulatory requirements for IPv6 adoption. End-to-end IPv6 support improves network performance for end users connecting over IPv6 networks, and also removes concerns for IPv4 address exhaustion for origin infrastructure.

For a full list of AWS announcements, be sure to keep an eye on the What’s New with AWS? page.

Other AWS news
Here are some additional news items that you might find interesting:

  • A city in the palm of your hand – Check out this interactive feature that explains how our AWS Trainium chip designers think like city planners, optimizing every nanometer to move data at near light speed.
  • Measuring the effectiveness of software development tools and practices – Read how Amazon developers that identified specific challenges before adopting AI tools cut costs by 15.9% year-over-year using our cost-to-serve-software framework (CTS-SW). They deployed more frequently and reduced manual interventions by 30.4% by focusing on the right problems first.
  • Become an AWS Cloud Club Captain – Join a growing network of student cloud enthusiasts by becoming an AWS Cloud Club Captain! As a Captain, you’ll get to organize events and build cloud communities while developing leadership skills. The application window is open September 1-28, 2025.

Upcoming AWS events
Check your calendars and sign up for these upcoming AWS events as well as AWS re:Invent and AWS Summits:

  • AWS AI Agent Global Hackathon – This is your chance to dive deep into our powerful generative AI stack and create something truly awesome. From September 8 to October 20, you have the opportunity to create AI agents using AWS suite of AI services, competing for over $45,000 in prizes and exclusive go-to-market opportunities.
  • AWS Gen AI Lofts – You can learn AWS AI products and services with exclusive sessions and meet industry-leading experts, and have valuable networking opportunities with investors and peers. Register in your nearest city: Mexico City (September 30–October 2), Paris (October 7–21), London (Oct 13–21), and Tel Aviv (November 11–19).
  • AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Aotearoa and Poland (September 18), South Africa (September 20), Bolivia (September 20), Portugal (September 27), Germany (October 7), and Hungary (October 16).

You can browse all upcoming AWS events and AWS startup events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Channy



from AWS News Blog https://ift.tt/xZsK8RD
via IFTTT

Friday, September 12, 2025

Announcing Amazon EC2 M4 and M4 Pro Mac instances

As someone who has been using macOS since 2001 and Amazon EC2 Mac instances since their launch 4 years ago, I’ve helped numerous customers scale their continuous integration and delivery (CI/CD) pipelines on AWS. Today, I’m excited to share that Amazon EC2 M4 and M4 Pro Mac instances are now generally available.

Development teams building applications for Apple platforms need powerful computing resources to handle complex build processes and run multiple iOS simulators simultaneously. As development projects grow larger and more sophisticated, teams require increased performance and memory capacity to maintain rapid development cycles.

Apple M4 Mac mini at the core
EC2 M4 Mac instances (known as mac-m4.metal in the API) are powered by Apple M4 Mac mini computers and built on the AWS Nitro System. They feature Apple silicon M4 chips with 10-core CPU (four performance and six efficiency cores), 10-core GPU, 16-core Neural Engine, and 24 GB unified memory, delivering enhanced performance for iOS and macOS application build workloads. When building and testing applications, M4 Mac instances deliver up to 20 percent better application build performance compared to EC2 M2 Mac instances.

EC2 M4 Pro Mac (mac-m4pro.metal in the API) instances are powered by Apple silicon M4 Pro chips with 14-core CPU, 20-core GPU, 16-core Neural Engine, and 48 GB unified memory. These instances offer up to 15 percent better application build performance compared to EC2 M2 Pro Mac instances. The increased memory and computing power make it possible to run more tests in parallel using multiple device simulators.

Each M4 and M4 Pro Mac instance now comes with 2 TB of local storage, providing low-latency storage for improved caching and build and test performance.

Both instance types support macOS Sequoia version 15.6 and later as Amazon Machine Images (AMIs). The AWS Nitro System provides up to 10 Gbps of Amazon Virtual Private Cloud (Amazon VPC) network bandwidth and 8 Gbps of Amazon Elastic Block Store (Amazon EBS) storage bandwidth through high-speed Thunderbolt connections.

Amazon EC2 Mac instances also integrate seamlessly with other AWS services, such as Amazon EBS, Amazon VPC, and AWS Systems Manager, so you can manage your macOS build environments with the same tooling you use for the rest of your infrastructure.

Let me show you how to get started
You can launch EC2 M4 and M4 Pro Mac instances through the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs.

For this demo, let’s start an M4 Pro instance from the console. I first allocate a dedicated host to run my instances. On the AWS Management Console, I navigate to EC2, then Dedicated Hosts, and I select Allocate Dedicated Host.

Then, I enter a Name tag and I select the Instance family (mac-m4pro) and an Instance type (mac-m4pro.metal). I choose one Availability Zone and I clear Host maintenance.

EC2 Mac M4 - Dedicated hosts

Alternatively, I can use the command line interface:

aws ec2 allocate-hosts                          \
        --availability-zone-id "usw2-az4"       \
        --auto-placement "off"                  \
        --host-recovery "off"                   \
        --host-maintenance "off"                \
        --quantity 1                            \
        --instance-type "mac-m4pro.metal"

After the dedicated host is allocated to my account, I select the host I just allocated, then I select the Actions menu and choose Launch instance(s) onto host.

Notice the console gives you, among other information, the Latest supported macOS versions for this type of host. In this case, it’s macOS 15.6.

EC2 Mac M4 - Dedicated hosts Launch 

On the Launch an instance page, I enter a Name. I select a macOS Sequoia Amazon Machine Image (AMI). I make sure the Architecture is 64-bit Arm and the Instance type is mac-m4pro.metal.

The rest of the parameters, the network and storage configuration, aren’t specific to Amazon EC2 Mac. When starting an instance for development use, make sure you select a volume of at least 200 GB. The default 100 GB volume size isn’t sufficient to download and install Xcode.

EC2 Mac M4 - Dedicated hosts Launch Details

When ready, I select the orange Launch instance button at the bottom of the page. The instance rapidly appears as Running in the console. However, it might take up to 15 minutes before you can connect over SSH.

Alternatively, I can use this command:

# AMI ID, security group ID, and dedicated host ID depend on your Region and configuration
aws ec2 run-instances \
    --image-id "ami-000420887c24e4ac8" \
    --instance-type "mac-m4pro.metal" \
    --key-name "my-ssh-key-name" \
    --network-interfaces '{"AssociatePublicIpAddress":true,"DeviceIndex":0,"Groups":["sg-0c2f1a3e01b84f3a3"]}' \
    --tag-specifications '{"ResourceType":"instance","Tags":[{"Key":"Name","Value":"My Dev Server"}]}' \
    --placement '{"HostId":"h-0e984064522b4b60b","Tenancy":"host"}' \
    --private-dns-name-options '{"HostnameType":"ip-name","EnableResourceNameDnsARecord":true,"EnableResourceNameDnsAAAARecord":false}' \
    --count "1"

Install Xcode from the Terminal
After the instance is reachable, I can connect to it using SSH and install my development tools. I use xcodeinstall to download and install Xcode 16.4.

From my laptop, I open a session with my Apple developer credentials:

# on my laptop, with permissions to access AWS Secrets Manager
» xcodeinstall authenticate -s eu-central-1                                                                                               

Retrieving Apple Developer Portal credentials...
Authenticating...
🔐 Two factors authentication is enabled, enter your 2FA code: 067785
✅ Authenticated with MFA.

I connect to the EC2 Mac instance I just launched. Then, I download and install Xcode:

» ssh ec2-user@44.234.115.119                                                                                                                                                                   

Warning: Permanently added '44.234.115.119' (ED25519) to the list of known hosts.
Last login: Sat Aug 23 13:49:55 2025 from 81.49.207.77

    ┌───┬──┐   __|  __|_  )
    │ ╷╭╯╷ │   _|  (     /
    │  └╮  │  ___|\___|___|
    │ ╰─┼╯ │  Amazon EC2
    └───┴──┘  macOS Sequoia 15.6

ec2-user@ip-172-31-54-74 ~ % brew tap sebsto/macos
==> Tapping sebsto/macos
Cloning into '/opt/homebrew/Library/Taps/sebsto/homebrew-macos'...
remote: Enumerating objects: 227, done.
remote: Counting objects: 100% (71/71), done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 227 (delta 22), reused 63 (delta 14), pack-reused 156 (from 1)
Receiving objects: 100% (227/227), 37.93 KiB | 7.59 MiB/s, done.
Resolving deltas: 100% (72/72), done.
Tapped 1 formula (13 files, 61KB).

ec2-user@ip-172-31-54-74 ~ % brew install xcodeinstall 
==> Fetching downloads for: xcodeinstall
==> Fetching sebsto/macos/xcodeinstall
==> Downloading https://github.com/sebsto/xcodeinstall/releases/download/v0.12.0/xcodeinstall-0.12.0.arm64_sequoia.bottle.tar.gz
Already downloaded: /Users/ec2-user/Library/Caches/Homebrew/downloads/9f68a7a50ccfdc479c33074716fd654b8528be0ec2430c87bc2b2fa0c36abb2d--xcodeinstall-0.12.0.arm64_sequoia.bottle.tar.gz
==> Installing xcodeinstall from sebsto/macos
==> Pouring xcodeinstall-0.12.0.arm64_sequoia.bottle.tar.gz
🍺  /opt/homebrew/Cellar/xcodeinstall/0.12.0: 8 files, 55.2MB
==> Running `brew cleanup xcodeinstall`...
Disable this behaviour by setting `HOMEBREW_NO_INSTALL_CLEANUP=1`.
Hide these hints with `HOMEBREW_NO_ENV_HINTS=1` (see `man brew`).
==> No outdated dependents to upgrade!

ec2-user@ip-172-31-54-74 ~ % xcodeinstall download -s eu-central-1 -f -n "Xcode 16.4.xip"
                        Downloading Xcode 16.4
100% [============================================================] 2895 MB / 180.59 MBs
[ OK ]
✅ Xcode 16.4.xip downloaded

ec2-user@ip-172-31-54-74 ~ % xcodeinstall install -n "Xcode 16.4.xip"
Installing...
[1/6] Expanding Xcode xip (this might take a while)
[2/6] Moving Xcode to /Applications
[3/6] Installing additional packages... XcodeSystemResources.pkg
[4/6] Installing additional packages... CoreTypes.pkg
[5/6] Installing additional packages... MobileDevice.pkg
[6/6] Installing additional packages... MobileDeviceDevelopment.pkg
[ OK ]
✅ file:///Users/ec2-user/.xcodeinstall/download/Xcode%2016.4.xip installed

ec2-user@ip-172-31-54-74 ~ % sudo xcodebuild -license accept

ec2-user@ip-172-31-54-74 ~ % 

EC2 Mac M4 - install xcode

Things to know
Select an EBS volume of at least 200 GB for development purposes. The 100 GB default volume size is not sufficient to install Xcode. I usually select 500 GB. When you increase the EBS volume size after launching the instance, remember to resize the APFS file system.

Alternatively, you can install your development tools and frameworks on the low-latency local 2 TB SSD available in the Mac mini. Keep in mind that the content of that volume is bound to the instance lifecycle, not to the dedicated host. This means everything on the internal SSD storage is deleted when you stop and restart the instance.

The mac-m4.metal and mac-m4pro.metal instances support macOS Sequoia 15.6 and later.

You can migrate from your existing EC2 Mac instances when they run macOS 15 (Sequoia): create a custom AMI from your existing instance and start an M4 or M4 Pro instance from that AMI.
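A minimal boto3 sketch of that migration path, assuming the source instance already runs macOS 15; the instance and dedicated host IDs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# 1. Create a custom AMI from the existing macOS 15 Mac instance
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="my-macos-15-golden-image",
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Launch an M4 Pro instance from that AMI onto a mac-m4pro dedicated host
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="mac-m4pro.metal",
    MinCount=1,
    MaxCount=1,
    Placement={"HostId": "h-0123456789abcdef0", "Tenancy": "host"},
)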

Finally, I suggest checking the tutorials I wrote to help you get started with Amazon EC2 Mac.

Pricing and availability
EC2 M4 and M4 Pro Mac instances are currently available in US East (N. Virginia) and US West (Oregon), with additional Regions planned for the future.

Amazon EC2 Mac instances are available for purchase as Dedicated Hosts through the On-Demand and Savings Plans pricing models. Billing for EC2 Mac instances is per second with a 24-hour minimum allocation period to comply with the Apple macOS Software License Agreement. At the end of the 24-hour minimum allocation period, the host can be released at any time with no further commitment.

As someone who works closely with Apple developers, I’m curious to see how you’ll use these new instances to accelerate your development cycles. The combination of increased performance, enhanced memory capacity, and integration with AWS services opens new possibilities for teams building applications for iOS, macOS, iPadOS, tvOS, watchOS, and visionOS platforms. Beyond application development, Apple silicon’s Neural Engine makes these instances cost-effective candidates for running machine learning (ML) inference workloads. I’ll be discussing this topic in detail at AWS re:Invent 2025, where I’ll share benchmarks and best practices for optimizing ML workloads on EC2 Mac instances.

To learn more about EC2 M4 and M4 Pro Mac instances, visit the Amazon EC2 Mac Instances page or refer to the EC2 Mac documentation. You can start using these instances today to modernize your Apple development workflows on AWS.

— seb

from AWS News Blog https://ift.tt/mxVylvW
via IFTTT