Thursday, October 31, 2024

Unlock the potential of your supply chain data and gain actionable insights with AWS Supply Chain Analytics

Today, we’re announcing the general availability of AWS Supply Chain Analytics powered by Amazon QuickSight. This new feature helps you to build custom report dashboards using your data in AWS Supply Chain. With this feature, your business analysts or supply chain managers can perform custom analyses, visualize data, and gain actionable insights for your supply chain management operations.

Here’s how it looks:

AWS Supply Chain Analytics leverages the AWS Supply Chain data lake and embeds Amazon QuickSight authoring tools directly in the AWS Supply Chain user interface. This integration provides you with a unified and configurable experience for creating custom insights, metrics, and key performance indicators (KPIs) for your operational analytics.

In addition, AWS Supply Chain Analytics provides prebuilt dashboards that you can use as-is or modify based on your needs. At launch, you will have the following prebuilt dashboards:

  1. Plan-Over-Plan Variance: Presents a comparison between two demand plans, showcasing variances in both units and values across key dimensions such as product, site, and time periods.
  2. Seasonality Analytics: Presents a year-over-year view of demand, illustrating trends in average demand quantities and highlighting seasonality patterns through heatmaps at both monthly and weekly levels.

Let’s get started
Let me walk you through the features of AWS Supply Chain Analytics.

The first step is to enable AWS Supply Chain Analytics. To do this, navigate to Settings, then select Organizations and choose Analytics. Here, I can Enable data access for Analytics.

Now I can edit existing roles or create a new role with analytics access. To learn more, visit User permission roles.

Once this feature is enabled, when I log in to AWS Supply Chain I can access the AWS Supply Chain Analytics feature by selecting either the Connecting to Analytics card or Analytics on the left navigation menu.

Here, I have an embedded Amazon QuickSight interface ready for me to use. To get started, I navigate to Prebuilt Dashboards.

Then, I can select the prebuilt dashboards I need in the Supply Chain Function dropdown list:

What I like most about these prebuilt dashboards is that I can get started easily. AWS Supply Chain Analytics prepares all the datasets, the analysis, and even a dashboard for me. I select Add to begin.

Then, I navigate to the dashboard page, and I can see the results. I can also share this dashboard with my team, which improves the collaboration aspect.

If I need to include other datasets for me to build a custom dashboard, I can navigate to Datasets and select New dataset.

Here, I have AWS Supply Chain data lake as an existing dataset for me to use.

Next, I need to select Create dataset.

Then, I select a table that I need to include in my analysis. In the Data section, I can see all available fields. All datasets that start with asc_ are generated by AWS Supply Chain, such as data from Demand Planning, Insights, Supply Planning, and others.

I can also find all the datasets I have ingested into AWS Supply Chain. To learn more about data entities, visit the AWS Supply Chain documentation page. One thing to note here: if I have not ingested data into the AWS Supply Chain data lake, I need to ingest data before using AWS Supply Chain Analytics. To learn how to ingest data into the data lake, visit the data lake page.
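
If you script your ingestion, the following is a minimal sketch using the AWS SDK for Python (Boto3). The instance ID, event type, and payload are hypothetical placeholders, so check the AWS Supply Chain SendDataIntegrationEvent API reference for the exact values your data flow needs.

import json
from datetime import datetime, timezone

import boto3

# A minimal sketch of pushing one record into the AWS Supply Chain data lake.
# The instance ID, event type, and payload below are hypothetical placeholders.
supplychain = boto3.client("supplychain")

response = supplychain.send_data_integration_event(
    instanceId="00000000-0000-0000-0000-000000000000",  # your AWS Supply Chain instance ID
    eventType="scn.data.outboundorderline",             # assumed event type; see the API reference
    data=json.dumps({"id": "order-line-1", "quantity": 10}),  # hypothetical payload
    eventGroupId="order-line-1",
    eventTimestamp=datetime.now(timezone.utc),
)
print(response)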

At this stage, I can start my analysis. 

Now available
AWS Supply Chain Analytics is now generally available in all Regions where AWS Supply Chain is offered. Give it a try and transform your operations with AWS Supply Chain Analytics.

Happy building,
— Donnie




Amazon Aurora PostgreSQL Limitless Database is now generally available

Today, we are announcing the general availability of Amazon Aurora PostgreSQL Limitless Database, a new serverless horizontal scaling (sharding) capability of Amazon Aurora. With Aurora PostgreSQL Limitless Database, you can scale beyond the existing Aurora limits for write throughput and storage by distributing a database workload over multiple Aurora writer instances while maintaining the ability to use it as a single database.

When we previewed Aurora PostgreSQL Limitless Database at AWS re:Invent 2023, I explained that it uses a two-layer architecture consisting of multiple database nodes in a DB shard group – either routers or shards – that scale based on the workload.

  • Routers – Nodes that accept SQL connections from clients, send SQL commands to shards, maintain system-wide consistency, and return results to clients.
  • Shards – Nodes that store a subset of tables and full copies of data, which accept queries from routers.

There are three types of tables that contain your data: sharded, reference, and standard.

  • Sharded tables – These tables are distributed across multiple shards. Data is split among the shards based on the values of designated columns in the table, called shard keys. They are useful for scaling the largest, most I/O-intensive tables in your application.
  • Reference tables – These tables copy data in full on every shard so that join queries can work faster by eliminating unnecessary data movement. They are commonly used for infrequently modified reference data, such as product catalogs and zip codes.
  • Standard tables – These tables are like regular Aurora PostgreSQL tables. Standard tables are all placed together on a single shard so join queries can work faster by eliminating unnecessary data movement. You can create sharded and reference tables from standard tables.

Once you have created the DB shard group and your sharded and reference tables, you can load massive amounts of data into Aurora PostgreSQL Limitless Database and query data in those tables using standard PostgreSQL queries. To learn more, visit Limitless Database architecture in the Amazon Aurora User Guide.

Getting started with Aurora PostgreSQL Limitless Database
You can get started in the AWS Management Console and AWS Command Line Interface (AWS CLI) to create a new DB cluster that uses Aurora PostgreSQL Limitless Database, add a DB shard group to the cluster, and query your data.

1. Create an Aurora PostgreSQL Limitless Database Cluster
Open the Amazon Relational Database Service (Amazon RDS) console and choose Create database. For Engine options, choose Aurora (PostgreSQL Compatible) and Aurora PostgreSQL with Limitless Database (Compatible with PostgreSQL 16.4).

For Aurora PostgreSQL Limitless Database, enter a name for your DB shard group and values for minimum and maximum capacity measured by Aurora Capacity Units (ACUs) across all routers and shards. The initial number of routers and shards in a DB shard group is determined by this maximum capacity. Aurora PostgreSQL Limitless Database scales a node up to a higher capacity when its current utilization is too low to handle the load. It scales the node down to a lower capacity when its current capacity is higher than needed.

For DB shard group deployment, choose whether to create standbys for the DB shard group: no compute redundancy, one compute standby in a different Availability Zone, or two compute standbys in two different Availability Zones.

You can set the remaining DB settings to what you prefer and choose Create database. After the DB shard group is created, it is displayed on the Databases page.

You can connect, reboot, or delete a DB shard group, or you can change the capacity, split a shard, or add a router in the DB shard group. To learn more, visit Working with DB shard groups in the Amazon Aurora User Guide.
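
If you prefer to script this step, the following is a minimal sketch with the AWS SDK for Python (Boto3). The identifiers and capacity values are placeholders, and the parameter names are my assumptions based on the CreateDBShardGroup API, so verify them against the Amazon RDS API reference.

import boto3

# A minimal sketch of adding a DB shard group to an existing Aurora PostgreSQL
# Limitless Database cluster. Identifiers and capacity values are placeholders.
rds = boto3.client("rds")

response = rds.create_db_shard_group(
    DBShardGroupIdentifier="my-limitless-shard-group",  # hypothetical shard group name
    DBClusterIdentifier="my-limitless-cluster",         # the cluster created above
    MaxACU=768.0,                                       # maximum capacity across all routers and shards
    ComputeRedundancy=1,                                # 0, 1, or 2 compute standbys (assumed values)
    PubliclyAccessible=False,
)
print(response["DBShardGroupIdentifier"])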

2. Create Aurora PostgreSQL Limitless Database tables
As shared previously, Aurora PostgreSQL Limitless Database has three table types: sharded, reference, and standard. You can convert standard tables to sharded or reference tables to distribute or replicate existing standard tables or create new sharded and reference tables.

You can use variables to create sharded and reference tables by setting the table creation mode. The tables that you create will use this mode until you set a different mode. The following examples show how to use these variables to create sharded and reference tables.

For example, create a sharded table named items with a shard key composed of the item_id and item_cat columns.

SET rds_aurora.limitless_create_table_mode='sharded';
SET rds_aurora.limitless_create_table_shard_key='{"item_id", "item_cat"}';
CREATE TABLE items(item_id int, item_cat varchar, val int, item text);

Now, create a sharded table named item_description with a shard key composed of the item_id and item_cat columns and collocate it with the items table.

SET rds_aurora.limitless_create_table_collocate_with='items';
CREATE TABLE item_description(item_id int, item_cat varchar, color_id int, ...);

You can also create a reference table named colors.

SET rds_aurora.limitless_create_table_mode='reference';
CREATE TABLE colors(color_id int primary key, color varchar);

You can find information about Limitless Database tables by using the rds_aurora.limitless_tables view, which contains information about tables and their types.

postgres_limitless=> SELECT * FROM rds_aurora.limitless_tables;

 table_gid | local_oid | schema_name | table_name  | table_status | table_type  | distribution_key
-----------+-----------+-------------+-------------+--------------+-------------+------------------
         1 |     18797 | public      | items       | active       | sharded     | HASH (item_id, item_cat)
         2 |     18641 | public      | colors      | active       | reference   |
(2 rows)

You can convert standard tables into sharded or reference tables. During the conversion, data is moved from the standard table to the distributed table, then the source standard table is deleted. To learn more, visit Converting standard tables to limitless tables in the Amazon Aurora User Guide.

3. Query Aurora PostgreSQL Limitless Database tables
Aurora PostgreSQL Limitless Database is compatible with PostgreSQL syntax for queries. You can query your Limitless Database using psql or any other connection utility that works with PostgreSQL. Before querying tables, you can load data into Aurora Limitless Database tables by using the COPY command or by using the data loading utility.

To run queries, connect to the cluster endpoint, as shown in Connecting to your Aurora Limitless Database DB cluster. All PostgreSQL SELECT queries are performed on the router to which the client sends the query and on the shards where the data is located.
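
As an illustration, here's a minimal sketch that connects to the cluster endpoint with psycopg2, bulk-loads rows into the items table with COPY, and runs a standard query. The endpoint, credentials, and CSV file name are placeholders.

import psycopg2

# A minimal sketch: connect to the DB shard group through the cluster endpoint,
# bulk-load rows with COPY, then query a sharded table. Connection details are placeholders.
conn = psycopg2.connect(
    host="my-limitless-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",  # cluster endpoint (placeholder)
    port=5432,
    dbname="postgres_limitless",
    user="postgres",
    password="my-password",
)

with conn, conn.cursor() as cur:
    # Load data from a local CSV file into the sharded items table.
    with open("items.csv") as f:
        cur.copy_expert("COPY items (item_id, item_cat, val, item) FROM STDIN WITH (FORMAT csv)", f)

    # Standard PostgreSQL queries work as usual; the router sends this to the right shard.
    cur.execute("SELECT count(*) FROM items WHERE item_cat = %s", ("book",))
    print(cur.fetchone())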

To achieve a high degree of parallel processing, Aurora PostgreSQL Limitless Database utilizes two querying methods: single-shard queries and distributed queries. The database determines whether your query is single-shard or distributed and processes it accordingly.

  • Single-shard query – A query where all the data needed for the query is on one shard. The entire operation can be performed on one shard, including any result set generated. When the query planner on the router encounters a query like this, the planner sends the entire SQL query to the corresponding shard.
  • Distributed query – A query run on a router and more than one shard. The query is received by one of the routers. The router creates and manages the distributed transaction, which is sent to the participating shards. The shards create a local transaction with the context provided by the router, and the query is run.

To see how single-shard queries are handled, you can use the following parameters to configure the output of the EXPLAIN command.

postgres_limitless=> SET rds_aurora.limitless_explain_options = shard_plans, single_shard_optimization;
SET

postgres_limitless=> EXPLAIN SELECT * FROM items WHERE item_id = 25;

                     QUERY PLAN
--------------------------------------------------------------
 Foreign Scan  (cost=100.00..101.00 rows=100 width=0)
   Remote Plans from Shard postgres_s4:
         Index Scan using items_ts00287_id_idx on items_ts00287 items_fs00003  (cost=0.14..8.16 rows=1 width=15)
           Index Cond: (id = 25)
 Single Shard Optimized
(5 rows) 

To learn more about the EXPLAIN command, see EXPLAIN in the PostgreSQL documentation.

As an example of a distributed query, you can insert new items named Book and Pen into the items table.

postgres_limitless=> INSERT INTO items (item_id, item_cat, val, item) VALUES (1, 'book', 10, 'Book'), (2, 'pen', 5, 'Pen');

This makes a distributed transaction on two shards. When the query runs, the router sets a snapshot time and passes the statement to the shards that own Book and Pen. The router coordinates an atomic commit across both shards, and returns the result to the client.

You can use distributed query tracing, a tool to trace and correlate queries in PostgreSQL logs across Aurora PostgreSQL Limitless Database. To learn more, visit Querying Limitless Database in the Amazon Aurora User Guide.

Some SQL commands aren’t supported. For more information, see Aurora Limitless Database reference in the Amazon Aurora User Guide.

Things to know
Here are a couple of things that you should know about this feature:

  • Compute – You can have only one DB shard group per DB cluster, and you can set the maximum capacity of a DB shard group to 16–6144 ACUs. Contact us if you need more than 6144 ACUs. The initial number of routers and shards is determined by the maximum capacity that you set when you create a DB shard group. The number of routers and shards doesn’t change when you modify the maximum capacity of a DB shard group. To learn more, see the table of the number of routers and shards in the Amazon Aurora User Guide.
  • Storage – Aurora PostgreSQL Limitless Database only supports the Amazon Aurora I/O-Optimized DB cluster storage configuration. Each shard has a maximum capacity of 128 TiB. Reference tables have a size limit of 32 TiB for the entire DB shard group. To reclaim storage space by cleaning up your data, you can use the vacuuming utility in PostgreSQL.
  • Monitoring – You can use Amazon CloudWatch, Amazon CloudWatch Logs, or Performance Insights to monitor Aurora PostgreSQL Limitless Database. There are also new statistics functions and views and wait events for Aurora PostgreSQL Limitless Database that you can use for monitoring and diagnostics.

Now available
Amazon Aurora PostgreSQL Limitless Database is available today with PostgreSQL 16.4 compatibility in the AWS US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) Regions.

Give Aurora PostgreSQL Limitless Database a try in the Amazon RDS console. For more information, visit the Amazon Aurora User Guide and send feedback to AWS re:Post for Amazon Aurora or through your usual AWS support contacts.

Channy




Wednesday, October 30, 2024

Simplify and enhance Amazon S3 static website hosting with AWS Amplify Hosting

We are announcing an integration between AWS Amplify Hosting and Amazon Simple Storage Service (Amazon S3). Now, you can deploy static websites with content stored in your S3 buckets and serve them over a content delivery network (CDN) with just a few clicks.

AWS Amplify Hosting is a fully managed service for hosting static sites that handles various aspects of deploying a website. It gives you benefits such as custom domain configuration with SSL, redirects, custom headers, and deployment on a globally available CDN powered by Amazon CloudFront.

When deploying a static website, Amplify remembers the connection between your S3 bucket and deployed website, so you can easily update your website with a single click when you make changes to website content in your S3 bucket. Using AWS Amplify Hosting is the recommended approach for static website hosting because it offers more streamlined and faster deployment without extensive setup.

Here’s how the integration works starting from the Amazon S3 console:

Deploying a static website using the Amazon S3 console
Let’s use this new integration to host a personal website directly from my S3 bucket.

To get started, I navigate to my bucket in the Amazon S3 console. Here’s the list of all the content in that S3 bucket:

To use the new integration with AWS Amplify Hosting, I navigate to the Properties section, then I scroll down until I find Static website hosting and select Create Amplify app.

Then, it redirects me to the Amplify page and populates the details from my S3 bucket. Here, I configure my App name and the Branch name. Then, I select Save and deploy.

Within seconds, AWS Amplify has deployed my static website, and I can visit the site by selecting Visit deployed URL. If I make any subsequent changes in my S3 bucket for my static website, I need to redeploy my application in the Amplify console by selecting the Deploy updates button.

I can also use the AWS Command Line Interface (AWS CLI) for programmatic deployment. To do that, I need to get the values for required parameters, such as APP_ID and BRANCH_NAME from my AWS Amplify dashboard. Here’s the command I use for deployment:

aws amplify start-deployment --app-id APP_ID --branch-name BRANCH_NAME --source-url-type BUCKET_PREFIX --source-url s3://S3_BUCKET/S3_PREFIX
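
The same deployment can also be started with the AWS SDKs. Here's a minimal sketch with the AWS SDK for Python (Boto3); the parameter names mirror the CLI options above and are my assumptions, and the IDs and S3 path are placeholders.

import boto3

# A minimal sketch of the same deployment call via the SDK.
# APP_ID, BRANCH_NAME, and the S3 path are placeholders from the Amplify console and my bucket.
amplify = boto3.client("amplify")

response = amplify.start_deployment(
    appId="APP_ID",
    branchName="BRANCH_NAME",
    sourceUrlType="BUCKET_PREFIX",   # assumed to mirror the --source-url-type CLI option
    sourceUrl="s3://S3_BUCKET/S3_PREFIX",
)
print(response["jobSummary"]["jobId"])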

After Amplify Hosting generates a URL for my website, I can optionally configure a custom domain for my static website. To do that, I navigate to my apps in AWS Amplify and select Custom domains in the navigation pane. Then, I select Add domain to start configuring a custom domain for my static website. Learn more about setting up custom domains in the Amplify Hosting User Guide.
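
The custom domain setup can also be scripted. Here's a minimal Boto3 sketch of associating a domain with the app; the app ID, domain name, branch name, and subdomain prefixes are placeholders.

import boto3

# A minimal sketch of associating a custom domain with the Amplify app via the SDK.
# The app ID, domain, branch name, and prefixes are placeholders.
amplify = boto3.client("amplify")

response = amplify.create_domain_association(
    appId="APP_ID",
    domainName="example.com",
    subDomainSettings=[
        {"prefix": "", "branchName": "main"},     # root domain -> main branch
        {"prefix": "www", "branchName": "main"},  # www subdomain -> main branch
    ],
)
print(response["domainAssociation"]["domainStatus"])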

In the following screenshot, I have my static website configured with my custom domain. Amplify also issues an SSL/TLS certificate for my domain so that all traffic is secured through HTTPS.

Now, I have my static site ready, and I can check it out at https://donnie.id.

Things you need to know
More available features – AWS Amplify Hosting has more features you can use for your static websites. Visit the AWS Amplify product page to learn more.

Deployment options – You can get started deploying a static website from Amazon S3 using the Amplify Hosting console, AWS CLI, or AWS SDKs.

Pricing – For pricing information, visit the Amazon S3 pricing page and the AWS Amplify pricing page.

Availability – Amplify Hosting integration with Amazon S3 is now available in AWS Regions where Amplify Hosting is available.

Start building your static website with this new integration. To learn more about Amazon S3 static website hosting with AWS Amplify, visit the AWS Amplify Hosting User Guide.

Happy building,

Donnie




Monday, October 28, 2024

Celebrating 10 Years of Amazon ECS: Powering a Decade of Containerized Innovation

Today, we celebrate 10 years of Amazon Elastic Container Service (ECS) and its incredible journey of pushing the boundaries of what’s possible in the cloud! What began as a solution to streamline running Docker containers on Amazon Web Services (AWS) has evolved into a cornerstone technology, offering both impressive performance and operational simplicity, including a serverless option with AWS Fargate for seamless container orchestration.

Over the past decade, Amazon ECS has become a trusted solution for countless organizations, providing the reliability and performance that customers such as SmugMug rely on to power their operations without being bogged down by infrastructure challenges. As Andrew Shieh, Principal Engineer at SmugMug, shares, Amazon ECS has been the “unsung hero” behind their seamless transition to AWS and efficient handling of massive data operations, such as migrating petabytes of photos to Amazon Simple Storage Service (Amazon S3). “The blazingly fast container spin-ups allow us to deliver awesome experiences to our customers,” he adds. It’s this kind of dependable support that has made Amazon ECS a favorite among developers and platform teams, helping them scale their solutions and innovate over the years.

In the early 2010s, as containerized services like Docker gained traction, developers started looking for efficient ways to manage and scale their applications in this new paradigm. Traditional infrastructure was cumbersome, and managing containers at scale was challenging. Amazon ECS arrived in 2014, just when developers were looking to adopt containers at scale. It offered a fully managed, and reliable solution that streamlined container orchestration on AWS. Teams could focus on building and deploying applications without the overhead of managing clusters or complex infrastructure, ushering in a new era of cloud-native development.

When the Amazon ECS team set out to build the service, their vision was clear. As Deepak Singh, product manager who launched Amazon ECS now serving as VP of Next Generation Developer Experience, said at the time, “Our customers wanted a solution that was deeply integrated with AWS, that could work for them at scale and could grow as they grew.” Amazon ECS was designed to use the best of what AWS has to offer—scalability, availability, resilience, and security—to give customers the confidence to run their applications in production environments.

Evolution
Amazon ECS has consistently innovated for customers over the past decade. It marked the beginning of the container innovation journey at AWS, paving the way for a broader ecosystem of container-related services that have transformed how businesses build and manage applications.

Smartsheet proudly sings the praises of the significant impact that Amazon ECS, and especially AWS Fargate, has had on their business to date. “Our teams can deploy more frequently, increase throughput, and reduce the engineering time to deploy from hours to minutes. We’ve gone from weekly deployments to deployments that we do multiple times a day. And from what used to be hours of at least two engineers’ time, we’ve been able to shave that down to several minutes,” said Skylar Graika, distinguished engineer at Smartsheet. “Within the last year, we have been able to scale out its capacity by 50 times, and by leveraging deep integrations across AWS services, we have improved efficiencies and simplified our security and compliance process. Additionally, by adopting AWS Graviton with the Fargate deployments, we’ve seen a 20 percent reduction in cost.”

Amazon ECS played a pivotal role as the starting point for a decade of container evolution at AWS, and today it still stands as one of the most scalable and reliable container orchestration solutions, powering massive operations such as Prime Day 2024, where Amazon launched an impressive 77.24 million ECS tasks, and Rufus, a shopping assistant experience powered by generative AI that uses Amazon ECS as part of its core architecture, among many others.

Rustem Feyzkhanov, ML engineering manager at Instrumental, and AWS Machine Learning Hero, is quick to recognize the increased efficiency gained from adopting the service. “Amazon ECS has become an indispensable tool in our work,” says Rustem. “Over the past years, it has simplified container management and service scaling, allowing us to focus on development rather than infrastructure. This service makes it possible for application code teams to co-own infrastructure and that speeds up the development process.”

Timeline
Let’s have a look at some of the key milestones that have shaped the evolution of ECS, marking pivotal moments that changed how customers harness the power of containers on AWS.

2014 – Introducing Amazon EC2 Container Service! – Check out this nostalgic blog post, which marked the release of ECS in preview mode. It shows how much functionality the service launched with, making a big impact from the get-go! Customers could already run, stop, and manage Docker containers on a cluster of Amazon Elastic Compute Cloud (EC2) instances, with built-in resource management and task scheduling. It became generally available on April 9, 2015.

2015 – Amazon ECS auto-scaling – With the introduction of support for additional Amazon CloudWatch metrics, customers could now automatically scale their clusters in and out by monitoring CPU and memory usage in the cluster and configuring threshold values for auto scaling. I think this is a great example of how seemingly modest releases can have a huge impact for customers. Another impactful release was the introduction of Amazon ECR, a fully managed container registry that streamlines container storage and deployment.

2016 – Application Load Balancer (ALB) for ECS – The introduction of ALB for ECS provided advanced routing features for containerized applications. ALB enabled more efficient load balancing across microservices, improving traffic management and scalability for ECS workloads. Windows users also benefitted from various releases this year, including added support for Windows Server 2016 with several AMIs and beta support for Windows Server Containers.

2017 – Introducing AWS Fargate! – Fargate was a huge leap forward towards customers being able to run containers without managing the underlying infrastructure, which significantly streamlined their operations. Developers no longer had to worry about provisioning, scaling, or maintaining the EC2 instances on which their containers ran and could now focus entirely on their application logic while AWS handled the rest. This helped them to scale faster and innovate more freely, accelerating their cloud-centered journeys and transforming how they approached containerized applications.

2018 – AWS Auto Scaling – With this release, teams could now easily build scaling plans for their Amazon ECS tasks. This year also saw the release of many improvements such as moving Amazon ECR to its own console experience outside of the Amazon ECS console, integration of Amazon ECS with AWS Cloud Map, and many others. Additionally, AWS Fargate continued to expand into Regions worldwide.

2019 – Arm-based Graviton2 instances available on Amazon ECS – AWS Graviton2 was released during a time when many businesses were turning their attention towards reprioritizing their sustainability goals. With a focus on improved performance and lower power usage, EC2 instances powered by Graviton2 were supported on Amazon ECS from day one of their launch. Customers could take full advantage of this new groundbreaking custom chipset specially built for the cloud. Another great highlight from this year was the launch of AWS Fargate Spot, which helped customers achieve significant cost reductions.

2020 – Bottlerocket – An open-source, Linux-based operating system optimized for running containers. Designed to improve security and simplify updates, Bottlerocket helped Amazon ECS users achieve greater efficiency and stability in managing containerized workloads.

2021 – ECS Exec – Amazon ECS introduced ECS Exec in March 2021. With it, customers could run commands directly inside a running container on Amazon EC2 or AWS Fargate. This feature provided enhanced troubleshooting and debugging capabilities without requiring customers to modify or redeploy containers, streamlining operational workflows. This year also saw releases for Amazon ECS Windows containers that streamlined operations for those running them in their clusters.

2022 – Amazon ECS introduces Service Connect – The release of ECS Service Connect marked a pivotal moment for organizations running microservices architectures on Amazon ECS because it abstracted away much of the complexity involved in service-to-service networking. This dramatically streamlined management of communication between services. With a native service discovery and service mesh capability, developers could now define and manage how their services interacted with each other seamlessly, improving observability, resilience, and security without the need to manage custom networking or load balancers.

2023 – Amazon GuardDuty ECS runtime monitoring – Last year, Amazon GuardDuty introduced ECS Runtime Monitoring for AWS Fargate, enhancing security by detecting potential threats within running containers. This feature provides continuous visibility into container workloads, improving security posture without additional performance overhead.

2024 – Amazon ECS Fargate with EBS Integration – In January this year, Amazon ECS and AWS Fargate added support for Amazon EBS volumes, enabling persistent storage for containers. This integration allows users to attach EBS volumes to Fargate tasks, making it much easier to deploy storage and support data-intensive applications.

Where are we now?
Amazon ECS is in an exciting place right now as it enjoys a level of maturity that allows it to keep innovating while delivering huge value to both new and existing customers. This year has seen many improvements to the service, making it more secure, cost-effective, and straightforward to use.

This includes releases such as the support for automatic traffic encryption using TLS in Service Connect; enhanced stopped task error messages, which make it more straightforward to troubleshoot task launch failures; and the ability to restart containers without having to relaunch the task. The introduction of Graviton2-based instances with AWS Fargate Spot provided customers with a great opportunity to double down on their cost savings.

As usual with AWS, the Amazon ECS team are very focused on delighting customers. “With Amazon ECS and AWS Fargate, we make it really easy for you to focus on your differentiated business logic while leveraging all the powerful compute that AWS offers without having to manage it,” says Nick Coult, director of Product and Science, Serverless Compute. “Our vision with these services was, and still is, to enable you to minimize infrastructure management, write less code, architect for extensibility, and drive high performance, resilience, and security. And, we have continuously innovated in these areas with this goal in mind over the past 10 years. At Amazon ECS, we remain steadfast in our commitment to delivering agility without compromising security, empowering developers with an exceptional experience, unlocking broader, simpler integrations, and new possibilities for emerging workloads like generative AI.”

Conclusion
Looking back on its history, it’s clear to me that ECS is a testament to the AWS approach of working backwards from customer needs. From its early days of streamlining container orchestration to the transformative introduction of Fargate and Service Connect, ECS has consistently evolved to remove barriers for developers and businesses alike.

As we look to the future, I think ECS will keep pushing boundaries, enabling even more innovative and scalable solutions. I encourage everyone to continue exploring what ECS has to offer, discovering new ways to build and pushing the platform to its full potential. There’s a lot more to come, and I’m excited to see where the journey takes us.

Learning resources
If you’re new to Amazon ECS, I recommend you read the comprehensive and accessible Getting Started With Amazon ECS guide.

When you’re ready to skill up with some hands-on free training, I recommend trying this self-paced Amazon ECS workshop, which covers many aspects of the service, including many of the features mentioned in this post.

Thank you, Amazon ECS, and thank you to all of you who use this service and continue to help us make it better for you. Here’s to another 10 years of container innovation! 🥂




AWS Weekly Roundup: New code editor in AWS Lambda console, Amazon Q Business analytics, Claude 3.5 upgrades, and more (October 28, 2024)

Two weeks ago, I had the wonderful opportunity to host subject matter experts from across Asia Pacific in the global 24 Hours of Amazon Q live stream event. This continuous 24-hour stream offered insights from AWS experts on Amazon Q Developer and Amazon Q Business, featuring use cases, product demos, and Q&A sessions.

The highlight for me was that I learned a lot from them. Since then, I’ve tried to integrate Amazon Q Business into my workflow. If you’re curious about what Amazon Q can do for you, check out the on-demand replay on Twitch.

Last week’s launches
Here’s a recap of AWS launches that caught my attention last week:

AWS Lambda console now features a new code editor based on Code-OSS (VS Code – Open Source) — AWS Lambda introduces a new code editing experience in the AWS console based on the popular Code-OSS, Visual Studio Code Open Source code editor. You can use your preferred coding environment and tools in the Lambda console.

Amazon Bedrock Custom Model Import now generally available — Amazon Bedrock now allows customers to import and use their customized models alongside existing foundation models through a single, unified API. This feature supports leveraging fine-tuned models or developing proprietary models based on popular open-source architectures without managing infrastructure or model lifecycle tasks.

EC2 Image Builder now supports building and testing macOS images — EC2 Image Builder adds support for creating and managing machine images for macOS workloads, in addition to existing Windows and Linux support. It streamlines image management processes and reduces the operational overhead of maintaining macOS images.

Upgraded Claude 3.5 Sonnet from Anthropic (available now), computer use (public beta), and Claude 3.5 Haiku (coming soon) in Amazon Bedrock — Anthropic’s Claude 3.5 model family in Amazon Bedrock receives significant upgrades, including improved intelligence for Claude 3.5 Sonnet and new computer use capabilities in public beta. These enhancements support building more advanced AI applications, automating complex tasks, and leveraging improved reasoning capabilities for various use cases.

Amazon Connect now offers screen sharing — Amazon Connect introduces screen sharing capabilities for agents. This feature is available in multiple Regions and can be easily integrated into existing voice and video calling setups. This feature gives you the opportunity to personalize and improve customer experiences.

Amazon Aurora launches Global Database writer endpoint — Amazon Aurora now supports a highly available and fully managed Global Database writer endpoint. This feature simplifies routing for applications and eliminates the need for application code changes after initiating cross-region Global Database Switchover or Failover operations.

Gain deeper insights into Amazon Q Business with new analytics and conversation insights — Amazon Q Business now offers an analytics dashboard and integration with Amazon CloudWatch Logs. You now have comprehensive insights into the usage of Amazon Q Business application environments and Amazon Q Apps, facilitating monitoring, analysis, and optimization of usage.

Announcing the new Resiliency widget on myApplications — AWS introduces a new Resiliency widget on myApplications, offering enhanced visibility and control over application resilience. You can start a resilience assessment directly from the myApplications dashboard and gain actionable insights.

From community.aws
Here are my top five personal favorite posts from community.aws:

Upcoming AWS events
Check your calendars and sign up for upcoming AWS and community events:

AWS GenAI Lofts – Gain deep insights, get your questions answered, and learn all you need to know to start building your next innovation at AWS GenAI Lofts: Seoul (October 30–November 6), São Paulo (through November 20), and Paris (through November 25).

AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs. Upcoming AWS Community Days are in: Malta (November 8), Malaysia, Chile (November 9), Indonesia (November 23), Kochi, India (December 14).

AWS re:Invent – Registration is now open for the annual tech extravaganza, taking place December 2–6 in Las Vegas. Learn about new product launches, watch demos, and get behind-the-scenes insights during five headline-making keynotes.

You can browse all upcoming in-person and virtual events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Donnie

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!




Wednesday, October 23, 2024

EC2 Image Builder now supports building and testing macOS images

I’m thrilled to announce macOS support in EC2 Image Builder. This new capability allows you to create and manage machine images for your macOS workloads in addition to the existing support for Windows and Linux.

A golden image is a bootable disk image, also called an Amazon Machine Image (AMI), pre-installed with the operating system and all the tools required for your workloads. In the context of a continuous integration and continuous deployment (CI/CD) pipeline, your golden image most probably contains the specific version of your operating system (macOS) and all required development tools and libraries to build and test your applications (Xcode, Fastlane, and so on.)

Developing and manually managing pipelines to build macOS golden images is time-consuming and diverts talented resources from other tasks. And when you have existing pipelines to build Linux or Windows images, you need to use different tools for creating macOS images, leading to a disjointed workflow.

For these reasons, many of you have been asking for the ability to manage your macOS images using EC2 Image Builder. You want to consolidate your image pipelines across operating systems and take advantage of the automation and cloud-centered integrations that EC2 Image Builder provides.

By adding macOS support to EC2 Image Builder, you can now streamline your image management processes and reduce the operational overhead of maintaining macOS images. EC2 Image Builder takes care of testing, versioning, and validating the base images at scale, saving you the costs associated with maintaining your preferred macOS versions.

Let’s see it in action
Let’s create a pipeline to create a macOS AMI with Xcode 16. You can follow a similar process to install Fastlane on your AMIs.

At a high level, there are four main steps.

  1. I define a component for each tool I want to install. A component is a YAML document that tells EC2 Image Builder what application to install and how. In this example, I create a custom component to install Xcode. If you want to install Fastlane, you create a second component. I use the ExecuteBash action to enter the shell commands required to install Xcode.
  2. I define a recipe. A recipe starts from a base image and lists the components I want to install on it.
  3. I define the infrastructure configuration I want to use to build my image. This defines the pool of Amazon Elastic Compute Cloud (Amazon EC2) instances to build the image. In my case, I allocate an EC2 Mac Dedicated Host in my account and reference it in the infrastructure configuration.
  4. I create a pipeline and a schedule to run on the infrastructure with the given recipes and an image workflow. I test the output AMI and deliver it to the chosen destination (my account or another account).

It’s much easier than it sounds. I’ll show you the steps in the AWS Management Console. I can also configure EC2 Image Builder with the AWS Command Line Interface (AWS CLI) or write code using one of our AWS SDKs.

Step 1: Create a component
I open the console and select EC2 Image Builder, then Components, and finally Create component.

Image Builder - Create component

I select a base Image operating system and the Compatible OS Versions. Then, I enter a Component name and Component version. I select Define document content and enter this YAML as Content.

name: InstallXCodeDocument
description: This downloads and installs Xcode. Be sure to run `xcodeinstall authenticate -s us-east-1` from your laptop first.
schemaVersion: 1.0

phases:
  - name: build
    steps:
      - name: InstallXcode
        action: ExecuteBash
        inputs:
          commands:
             - sudo -u ec2-user /opt/homebrew/bin/brew tap sebsto/macos
             - sudo -u ec2-user /opt/homebrew/bin/brew install xcodeinstall
             - sudo -u ec2-user /opt/homebrew/bin/xcodeinstall download -s us-east-1 --name "Xcode 16.xip"
             - sudo -u ec2-user /opt/homebrew/bin/xcodeinstall install --name "Xcode 16.xip"
  
  - name: validate
    steps:
      - name: TestXcode
        action: ExecuteBash
        inputs:
          commands:
            -  xcodebuild -version && xcode-select -p   

I use a tool I wrote to download and install Xcode from the command line. xcodeinstall integrates with AWS Secrets Manager to securely store authentication web tokens. Before running the pipeline, I authenticate from my laptop with the command xcodeinstall authenticate -s us-east-1. This command starts a session with Apple’s servers and stores the session token in Secrets Manager. xcodeinstall uses this token during the image creation pipeline to download Xcode.

When you use xcodeinstall with Secrets Manager, you must give permission to your pipeline to access the secrets. Here is the policy document I added to the role attached to the EC2 instance used by EC2 Image Builder (in the following infrastructure configuration).

{
        "Sid": "xcodeinstall",
        "Effect": "Allow",
        "Action": [
            "secretsmanager:GetSecretValue"
            "secretsmanager:PutSecretValue"
        ],
        "Resource": "arn:aws:secretsmanager:us-east-1:<YOUR ACCOUNT ID>:secret:xcodeinstall*"
}

To test and debug these components locally, without having to wait for a long cycle of starting and recycling the EC2 Mac instance, you can use the AWS Task Orchestrator and Executor (AWSTOE) command.
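
As an alternative to the console, here's a minimal Boto3 sketch of creating the same component with the EC2 Image Builder CreateComponent API. I saved the YAML document above as install-xcode.yaml (a hypothetical file name), and the macOS platform value is my assumption, so check the API reference before relying on it.

import uuid

import boto3

# A minimal sketch of creating the component from step 1 with the SDK instead of the console.
# The 'macOS' platform value is an assumption; the file name is hypothetical.
imagebuilder = boto3.client("imagebuilder")

with open("install-xcode.yaml") as f:  # the YAML document shown above, saved locally
    component_yaml = f.read()

response = imagebuilder.create_component(
    name="install-xcode-16",
    semanticVersion="1.0.0",
    description="Downloads and installs Xcode 16 with xcodeinstall",
    platform="macOS",  # assumed platform value for the new macOS support
    data=component_yaml,
    clientToken=str(uuid.uuid4()),
)
print(response["componentBuildVersionArn"])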

Step 2: Create a recipe
The next step is to create a recipe. On the console, I select Image recipes and Create image recipe.

I select macOS as the base Image Operating System. I choose macOS Sonoma ARM64 as Image name.

In the Build components section, I select the Xcode 16 component I just created during step 1.

Finally, I make sure the volume is large enough to store the operating system, Xcode, and my builds. I usually select a 500 GB gp3 volume.

Image Builder - Create a recipe

Steps 3 and 4: Create the pipeline (and the infrastructure configuration)
On the EC2 Image Builder page, I select Image pipelines and Create image pipeline. I give my pipeline a name and select a Build schedule. For this demo, I select a manual trigger.

Image Builder - Create Pipeline 1

Then, I select the recipe I just created (Sonoma-Xcode).

Image Builder - Create Pipeline 2

I choose Default workflows for the Define image creation process (not shown for brevity).

I create or select an existing infrastructure configuration. In the context of building macOS images, you have to allocate Amazon EC2 Dedicated Hosts first. This is where I choose the instance type that EC2 Image Builder will use to create the AMI. I may also optionally select my virtual private cloud (VPC), security group, AWS Identity and Access Management (IAM) roles with permissions required during the preparation of the image, key pair, and all the parameters I usually select when I start an EC2 instance.

Image Builder - Create Pipeline 4

Finally, I select where I want to distribute the output AMI. By default, it stays on my account. But I can also share or copy it to other accounts.

Image Builder - Create Pipeline 5

Run the pipeline
Now I’m ready to run the pipeline. I select Image pipelines, then I select the pipeline I just created (Sonoma-Xcode). From the Actions menu, I select Run pipeline.

Image Builder - launch pipeline
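
The same run can be triggered programmatically; here's a minimal Boto3 sketch in which the pipeline ARN is a placeholder.

import uuid

import boto3

# A minimal sketch of triggering the same pipeline run with the SDK.
# The pipeline ARN is a placeholder.
imagebuilder = boto3.client("imagebuilder")

response = imagebuilder.start_image_pipeline_execution(
    imagePipelineArn="arn:aws:imagebuilder:us-east-1:123456789012:image-pipeline/sonoma-xcode",  # placeholder
    clientToken=str(uuid.uuid4()),
)
print(response["imageBuildVersionArn"])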

I can observe the progress and the detailed logs from Amazon CloudWatch.

After a while, the AMI is created and ready to use.

Image Builder - AMI build succeeded

Testing my AMI
To finish the demo, I start an EC2 Mac instance with the AMI I just created (remember to allocate a Dedicated Host first or to reuse the one you used for EC2 Image Builder).

Once the instance is started, I connect to it using secure shell (SSH) and verify that Xcode is correctly installed.

Image Builder - Connect to new AMI

Pricing and availability
EC2 Image Builder for macOS is now available in all AWS Regions where EC2 Mac instances are available: US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo), and Europe (Frankfurt, Ireland, London, Stockholm) (not all Mac instance types are available in all Regions).

It comes at no additional cost, and you’re only charged for the resources in use during the pipeline execution, namely the time your EC2 Mac Dedicated Host is allocated, with a minimum of 24 hours.

The addition of macOS support in EC2 Image Builder allows you to consolidate your image pipelines, automate your golden image creation processes, and use the benefits of cloud-focused integrations on AWS. As the EC2 Mac platform continues to expand with more instance types, this new capability positions EC2 Image Builder as a comprehensive solution for image management across Windows, Linux, and macOS.

Create your first pipeline today! 

-- seb


Tuesday, October 22, 2024

Upgraded Claude 3.5 Sonnet from Anthropic (available now), computer use (public beta), and Claude 3.5 Haiku (coming soon) in Amazon Bedrock

Four months ago, we introduced Anthropic’s Claude 3.5 Sonnet in Amazon Bedrock, raising the industry bar for AI model intelligence while maintaining the speed and cost of Claude 3 Sonnet.

Today, I am excited to announce three new capabilities for the Claude 3.5 model family in Amazon Bedrock:

Upgraded Claude 3.5 Sonnet – You now have access to an upgraded Claude 3.5 Sonnet model that builds upon its predecessor’s strengths, offering even more intelligence at the same cost. Claude 3.5 Sonnet continues to improve its capability to solve real-world software engineering tasks and follow complex, agentic workflows. The upgraded Claude 3.5 Sonnet helps across the entire software development lifecycle, from initial design to bug fixes, maintenance, and optimizations. With these capabilities, the upgraded Claude 3.5 Sonnet model can help build more advanced chatbots with a warm, human-like tone. Other use cases in which the upgraded model excels include knowledge Q&A platforms, data extraction from visuals like charts and diagrams, and automation of repetitive tasks and operations.

Computer use – Claude 3.5 Sonnet now offers computer use capabilities in Amazon Bedrock in public beta, allowing Claude to perceive and interact with computer interfaces. Developers can direct Claude to use computers the way people do: by looking at a screen, moving a cursor, clicking buttons, and typing text. This works by giving the model access to integrated tools that can return computer actions, like keystrokes and mouse clicks, editing text files, and running shell commands. Software developers can integrate computer use in their solutions by building an action-execution layer and granting screen access to Claude 3.5 Sonnet. In this way, software developers can build applications with the ability to perform computer actions, follow multiple steps, and check their results. Computer use opens new possibilities for AI-powered applications. For example, it can help automate software testing and back office tasks and implement more advanced software assistants that can interact with applications. Given this technology is early, developers are encouraged to explore lower-risk tasks and use it in a sandbox environment.

Claude 3.5 Haiku – The new Claude 3.5 Haiku is coming soon and combines rapid response times with improved reasoning capabilities, making it ideal for tasks that require both speed and intelligence. Claude 3.5 Haiku improves on its predecessor and matches the performance of Claude 3 Opus (previously Claude’s largest model) at the speed and cost of Claude 3 Haiku. Claude 3.5 Haiku can help with use cases such as fast and accurate code suggestions, highly interactive chatbots that need rapid response times for customer service, e-commerce solutions, and educational platforms. For customers dealing with large volumes of unstructured data in finance, healthcare, research, and more, Claude 3.5 Haiku can help efficiently process and categorize information.

According to Anthropic, the upgraded Claude 3.5 Sonnet delivers across-the-board improvements over its predecessor, with significant gains in coding, an area where it already excelled. The upgraded Claude 3.5 Sonnet shows wide-ranging improvements on industry benchmarks. On coding, it improves performance on SWE-bench Verified from 33% to 49%, scoring higher than all publicly available models. It also improves performance on TAU-bench, an agentic tool use task, from 62.6% to 69.2% in the retail domain, and from 36.0% to 46.0% in the airline domain. The following table includes the model evaluations provided by Anthropic.

Upgraded Claude 3.5 Sonnet evaluations

Computer use, a new frontier in AI interaction
Instead of restricting the model to use APIs, Claude has been trained on general computer skills, allowing it to use a wide range of standard tools and software programs. In this way, applications can use Claude to perceive and interact with computer interfaces. Software developers can integrate this API to enable Claude to translate prompts (for example, “find me a hotel in Rome”) into specific computer commands (open a browser, navigate this website, and so on).

More specifically, when invoking the model, software developers now have access to three new integrated tools that provide a virtual set of hands to operate a computer:

  • Computer tool – This tool can receive as input a screenshot and a goal and returns a description of the mouse and keyboard actions that should be performed to achieve that goal. For example, this tool can ask to move the cursor to a specific position, click, type, and take screenshots.
  • Text editor tool – Using this tool, the model can ask to perform operations like viewing file contents, creating new files, replacing text, and undoing edits.
  • Bash tool – This tool returns commands that can be run on a computer system to interact at a lower level as a user typing in a terminal.

These tools open up a world of possibilities for automating complex tasks, from data analysis and software testing to content creation and system administration. Imagine an application powered by Claude 3.5 Sonnet interacting with the computer just as a human would, navigating through multiple desktop tools including terminals, text editors, internet browsers, and also capable of filling out forms and even debugging code.

We’re excited to help software developers explore these new capabilities with Amazon Bedrock. We expect this capability to improve rapidly in the coming months, and Claude’s current ability to use computers has limits. Some actions such as scrolling, dragging, or zooming can present challenges for Claude, and we encourage you to start exploring low-risk tasks.

When looking at OSWorld, a benchmark for multimodal agents in real computer environments, the upgraded Claude 3.5 Sonnet currently gets 14.9%. While human-level skill is far ahead with about 70-75%, this result is much better than the 7.7% obtained by the next-best model in the same category.

Using the upgraded Claude 3.5 Sonnet in the Amazon Bedrock console
To get started with the upgraded Claude 3.5 Sonnet, I navigate to the Amazon Bedrock console and choose Model access in the navigation pane. There, I request access for the new Claude 3.5 Sonnet V2 model.

To test the new vision capability, I open another browser tab and download the Wind power generation chart in PNG format from the Our World in Data website.

Our World in Data – Wind power generation chart

Back in the Amazon Bedrock console, I choose Chat/text under Playgrounds in the navigation pane. For the model, I select Anthropic as the model provider and then Claude 3.5 Sonnet V2.

I use the three vertical dots in the input section of the chat to upload the image file from my computer. Then I enter this prompt:

Which are the top countries for wind power generation? Answer only in JSON.

The result follows my instructions and returns the list extracting the information from the image.

Console screenshot.

Using the upgraded Claude 3.5 Sonnet with AWS CLI and SDKs
Here’s a sample AWS Command Line Interface (AWS CLI) command using the Amazon Bedrock Converse API. I use the --query parameter of the CLI to filter the result and only show the text content of the output message:

aws bedrock-runtime converse \
    --model-id anthropic.claude-3-5-sonnet-20241022-v2:0 \
    --messages '[{ "role": "user", "content": [ { "text": "What do you throw out when you want to use it, but take in when you do not want to use it?" } ] }]' \
    --query 'output.message.content[*].text' \
    --output text

In the output, I get this text in the response.

An anchor! You throw an anchor out when you want to use it to stop a boat, but you take it in (pull it up) when you don't want to use it and want to move the boat.

The AWS SDKs implement a similar interface. For example, you can use the AWS SDK for Python (Boto3) to analyze the same image as in the console example:

import boto3

MODEL_ID = "anthropic.claude-3-5-sonnet-20241022-v2:0"
IMAGE_NAME = "wind-generation.png"

bedrock_runtime = boto3.client("bedrock-runtime")

with open(IMAGE_NAME, "rb") as f:
    image = f.read()

user_message = "Which are the top countries for wind power generation? Answer only in JSON."

messages = [
    {
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": image}}},
            {"text": user_message},
        ],
    }
]

response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=messages,
)
response_text = response["output"]["message"]["content"][0]["text"]
print(response_text)

Integrating computer use with your application
Let’s see how computer use works in practice. First, I take a snapshot of the desktop of a Ubuntu system:

Ubuntu desktop screenshot

This screenshot is the starting point for the steps that will be implemented by computer use. To see how that works, I run a Python script passing in input to the model the screenshot image and this prompt:

Find me a hotel in Rome.

This script invokes the upgraded Claude 3.5 Sonnet in Amazon Bedrock using the new syntax required for computer use:

import base64
import json
import boto3

MODEL_ID = "anthropic.claude-3-5-sonnet-20241022-v2:0"

IMAGE_NAME = "ubuntu-screenshot.png"

bedrock_runtime = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",
)

with open(IMAGE_NAME, "rb") as f:
    image = f.read()

image_base64 = base64.b64encode(image).decode("utf-8")

prompt = "Find me a hotel in Rome."

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "temperature": 0.5,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_base64,
                    },
                },
            ],
        }
    ],
    "tools": [
        { # new
            "type": "computer_20241022", # literal / constant
            "name": "computer", # literal / constant
            "display_height_px": 1280, # min=1, no max
            "display_width_px": 800, # min=1, no max
            "display_number": 0 # min=0, max=N, default=None
        },
        { # new
            "type": "bash_20241022", # literal / constant
            "name": "bash", # literal / constant
        },
        { # new
            "type": "text_editor_20241022", # literal / constant
            "name": "str_replace_editor", # literal / constant
        }
    ],
    "anthropic_beta": ["computer-use-2024-10-22"],
}

# Convert the native request to JSON.
request = json.dumps(body)

try:
    # Invoke the model with the request.
    response = bedrock_runtime.invoke_model(modelId=MODEL_ID, body=request)

except Exception as e:
    print(f"ERROR: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())
print(model_response)

The body of the request includes new options:

  • anthropic_beta with value ["computer-use-2024-10-22"] to enable computer use.
  • The tools section supports a new type option. The Anthropic-defined tools in this example use the literal types computer_20241022, bash_20241022, and text_editor_20241022, while the tools you configure yourself use the type custom.
  • Note that the computer tool needs to know the resolution of the screen (display_height_px and display_width_px).

To follow my instructions with computer use, the model provides actions that operate on the desktop described by the input screenshot.

The response from the model includes a tool_use block from the computer tool that provides the first step. The model has located the Firefox browser icon in the screenshot, along with the current position of the mouse pointer. Because of that, it now asks to move the mouse to specific coordinates to start the browser.

{
    "id": "msg_bdrk_01WjPCKnd2LCvVeiV6wJ4mm3",
    "type": "message",
    "role": "assistant",
    "model": "claude-3-5-sonnet-20241022",
    "content": [
        {
            "type": "text",
            "text": "I'll help you search for a hotel in Rome. I see Firefox browser on the desktop, so I'll use that to access a travel website.",
        },
        {
            "type": "tool_use",
            "id": "toolu_bdrk_01CgfQ2bmQsPFMaqxXtYuyiJ",
            "name": "computer",
            "input": {"action": "mouse_move", "coordinate": [35, 65]},
        },
    ],
    "stop_reason": "tool_use",
    "stop_sequence": None,
    "usage": {"input_tokens": 3443, "output_tokens": 106},
}

This is just the first step. As with other tool use requests, the script should reply with the result of using the tool (moving the mouse, in this case). Starting from the initial request to find a hotel, there would then be a loop of tool use interactions, sketched below, asking to click on the icon, type a URL in the browser, and so on, until the hotel has been booked.
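
To make this concrete, here is a minimal sketch of what that loop could look like, continuing from the script above (it reuses MODEL_ID, bedrock_runtime, body, and json). The execute_tool helper is hypothetical: it stands in for code that performs the requested action on the desktop and returns its result, typically including a fresh screenshot.

# Minimal sketch of the tool use loop; execute_tool is a hypothetical helper
# that performs the requested action on the desktop and returns its result.
while True:
    response = bedrock_runtime.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
    model_response = json.loads(response["body"].read())

    # Keep the conversation history up to date with the assistant turn.
    body["messages"].append({"role": "assistant", "content": model_response["content"]})

    if model_response["stop_reason"] != "tool_use":
        break  # The model considers the task complete.

    # Run each requested action and send the results back as tool_result blocks.
    tool_results = []
    for block in model_response["content"]:
        if block["type"] == "tool_use":
            result = execute_tool(block["name"], block["input"])
            tool_results.append(
                {"type": "tool_result", "tool_use_id": block["id"], "content": result}
            )
    body["messages"].append({"role": "user", "content": tool_results})

Each iteration shows the model the effect of its last action so it can decide on the next step or stop.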

A more complete example is available in this repository shared by Anthropic.

Things to know
The upgraded Claude 3.5 Sonnet is available today in Amazon Bedrock in the US West (Oregon) AWS Region and is offered at the same cost as the original Claude 3.5 Sonnet. For up-to-date information on regional availability, refer to the Amazon Bedrock documentation. For detailed cost information for each Claude model, visit the Amazon Bedrock pricing page.

In addition to the greater intelligence of the upgraded model, software developers can now integrate computer use (available in public beta) in their applications to automate complex desktop workflows, enhance software testing processes, and create more sophisticated AI-powered applications.

Claude 3.5 Haiku will be released in the coming weeks, initially as a text-only model and later with image input.

You can see how computer use can help with coding in this video with Alex Albert, Head of Developer Relations at Anthropic.

This other video describes computer use for automating operations.

To learn more about these new features, visit the Claude models section of the Amazon Bedrock documentation. Give the upgraded Claude 3.5 Sonnet a try in the Amazon Bedrock console today, and send feedback to AWS re:Post for Amazon Bedrock. You can find deep-dive technical content and discover how our Builder communities are using Amazon Bedrock at community.aws. Let us know what you build with these new capabilities!

Danilo




Monday, October 21, 2024

AWS Weekly Roundup: Agentic workflows, Amazon Transcribe, AWS Lambda insights, and more (October 21, 2024)

Agentic workflows are quickly becoming a cornerstone of AI innovation, enabling intelligent systems to autonomously handle and refine complex tasks in a way that mirrors human problem-solving. Last week, we launched Serverless Agentic Workflows with Amazon Bedrock, a new short course developed in collaboration with Dr. Andrew Ng and DeepLearning.AI.

Serverless Agentic Workflows with Amazon Bedrock

This hands-on course, taught by my colleague Mike Chambers, teaches how to build serverless agents that can handle complex tasks without the hassle of managing infrastructure. You will learn everything you need to know about integrating tools, automating workflows, and deploying responsible agents with built-in guardrails on Amazon Web Services (AWS) with Amazon Bedrock. The hands-on labs provided with the course let you apply your knowledge directly in an AWS environment, hosted by AWS Partner Vocareum. Find more information and enroll for free on the DeepLearning.AI course page.

Now, let’s turn our attention to other exciting news in the AWS universe from last week.

Last week’s launches
Here are some launches that got my attention:

Amazon Transcribe now supports streaming transcription in 30 additional languages – Amazon Transcribe has expanded its support to include 30 additional languages, bringing the total number of supported languages to 54. This enhancement helps you reach a broader global audience and improves accessibility across various industries, including contact centers, broadcasting, and e-learning. The expanded language support allows for more efficient content moderation, improved agent productivity, and automatic subtitling for live events and meetings.

AWS Lambda console now surfaces key function insights and supports real-time log analytics – The AWS Lambda console now features a built-in Amazon CloudWatch Metrics Insights dashboard and supports CloudWatch Logs Live Tail, providing instant visibility into critical function metrics and real-time log streaming. You can now identify and troubleshoot errors or performance issues for your Lambda functions without leaving the console, as well as view and analyze logs in real time as they become available. You can reduce context switching and accelerate the development and troubleshooting processes for serverless applications. Check out the launch post for more details.

Amazon Bedrock Model Evaluation now supports evaluating custom model import models – You can now evaluate custom models you’ve imported to Amazon Bedrock using the model evaluation feature. This helps you to complete the full cycle of selecting, customizing, and evaluating models before deploying them. To evaluate an imported model, select the custom model from the list of models to evaluate in the model selector tool when creating an evaluation job.

Amazon Q in AWS Supply Chain – You can now use Amazon Q, an interactive AI assistant, to analyze your supply chain data in AWS Supply Chain and get insights to operate your supply chain more efficiently. Amazon Q can answer your supply chain questions by diving into your data. This reduces the time spent searching for information and streamlines finding answers to improve your supply chain operations.

For a full list of AWS announcements, be sure to keep an eye on the What's New at AWS page.

Other AWS news
Here are some additional news items and posts that you might find interesting:

New Amazon OpenSearch Service YouTube channel – The channel offers bite-sized tutorials, curated content, and organized playlists on topics such as log analytics, semantic search, vector databases, and operational best practices. You can also provide feedback to influence future channel content and the OpenSearch Service roadmap. Check out the launch post for more details and subscribe to the Amazon OpenSearch Service YouTube channel.

Deploying Generative AI Applications with NVIDIA NIM Microservices on Amazon Elastic Kubernetes Service (Amazon EKS) – This post shows you how to use Amazon EKS to orchestrate the deployment of pods containing NVIDIA NIM microservices, to enable quick-to-setup and optimized large-scale large language model (LLM) inference on Amazon EC2 G5 instances. It also demonstrates how to scale (both pod and cluster) by monitoring for custom metrics through Prometheus, and how you can load balance using an Application Load Balancer.

Instant Well-Architected CDK Resources with Solutions Constructs Factories – You can now create well-architected AWS resources such as Amazon Simple Storage Service (Amazon S3) buckets and AWS Step Functions state machines with a single function call using the new AWS Solutions Constructs Factories. These factories handle all the best practices configuration for you while still allowing customization. Try using a Constructs factory the next time you need to deploy one of the supported resources.

Upcoming AWS events
Check your calendars and sign up for these AWS events:

AWS GenAI Lofts – AWS GenAI Lofts are about more than just the tech; they bring together startups, developers, investors, and industry experts. Whether you’re looking to gain deep insights or get your questions answered by generative AI pros, our GenAI Lofts have you covered and provide everything you need to start building your next innovation. Join events in London (through October 25), Seoul (October 30–November 6), São Paulo (through November 20), and Paris (through November 25).

AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Malta (November 8), Chile (November 9), and Kochi, India (December 14).

AWS re:Invent 2024 – Registration is now open for the annual tech extravaganza, taking place December 2–6 in Las Vegas. At re:Invent 2024, you’ll get a front row seat to hear real stories from customers and AWS leaders about navigating pressing topics, such as generative AI. Learn about new product launches, watch demos, and get behind-the-scenes insights during five headline-making keynotes.

You can browse all upcoming in-person and virtual events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— Antje

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!


