Tuesday, February 28, 2023
New – Amazon Lightsail for Research with All-in-One Research Environments
Today we are announcing the general availability of Amazon Lightsail for Research, a new offering that makes it easy for researchers and students to create and manage a high-performance CPU or GPU research computer in the cloud in just a few clicks. You can use your preferred integrated development environment (IDE), such as preinstalled Jupyter, RStudio, Scilab, or VSCodium, or work directly in the native Ubuntu operating system on your research computer.
You no longer need to rely on your own research laptop or shared school computers to analyze large datasets or run complex simulations. You can create your own research environments and directly access the applications running on the research computer remotely via a web browser. You can also easily upload data to and download data from your research computer via a simple web interface.
You pay only for the duration the computers are in use and can delete them at any time. You can also use budgeting controls that automatically stop your computer when it’s not in use. Lightsail for Research offers all-inclusive pricing for compute, storage, and data transfer, so you know exactly how much you will pay for the time you use the research computer.
Get Started with Amazon Lightsail for Research
To get started, navigate to the Lightsail for Research console, and choose Virtual computers in the left menu. You can see my research computers named “channy-jupyter” or “channy-rstudio” already created.
Choose Create virtual computer to create a new research computer, and select which software you’d like preinstalled on your computer and what type of research computer you’d like to create.
In the first step, choose the application you want installed on your computer and the AWS Region where it will be located. We support Jupyter, RStudio, Scilab, and VSCodium. You can install additional packages and extensions through the interface of these IDE applications.
Next, choose the desired virtual hardware type, including a fixed amount of compute (vCPUs or GPUs), memory (RAM), SSD-based storage volume (disk) space, and a monthly data transfer allowance. Bundles are charged on an hourly and on-demand basis.
Standard types are compute-optimized and ideal for compute-bound applications that benefit from high-performance processors.
Name | vCPUs | Memory | Storage | Monthly data transfer allowance*
Standard XL | 4 | 8 GB | 50 GB | 0.5 TB
Standard 2XL | 8 | 16 GB | 50 GB | 0.5 TB
Standard 4XL | 16 | 32 GB | 50 GB | 0.5 TB
GPU types provide a high-performance platform for general-purpose GPU computing. You can use these bundles to accelerate scientific, engineering, and rendering applications and workloads.
Name | GPU | vCPUs | Memory | Storage | Monthly data transfer allowance*
GPU XL | 1 | 4 | 16 GB | 50 GB | 1 TB
GPU 2XL | 1 | 8 | 32 GB | 50 GB | 1 TB
GPU 4XL | 1 | 16 | 64 GB | 50 GB | 1 TB
* AWS created the Global Data Egress Waiver (GDEW) program to help eligible researchers and academic institutions use AWS services by waiving data egress fees. To learn more, see the blog post.
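Because bundles are billed hourly and on demand, estimating spend is simple multiplication. A minimal sketch (the hourly rate below is a made-up placeholder, not an actual Lightsail for Research price):

```python
def estimate_cost(hours_used: float, hourly_rate_usd: float) -> float:
    """Estimate the charge for a research computer billed on-demand by the hour."""
    return round(hours_used * hourly_rate_usd, 2)

# Example: 40 hours of use on a bundle at a hypothetical $1.25/hour
print(estimate_cost(40, 1.25))  # 50.0
```

Since stopped computers stop accruing compute charges, pairing an estimate like this with the automatic stop rules below keeps costs predictable.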
After making your selections, name your computer and choose Create virtual computer to create your research computer. Once your computer is created and running, choose the Launch application button to open a new window that will display the preinstalled application you selected.
Lightsail for Research Features
As with existing Lightsail instances, you can create additional block-level storage volumes (disks) that you can attach to a running Lightsail for Research virtual computer. You can use a disk as a primary storage device for data that requires frequent and granular updates. To create your own storage, choose Storage and Create disk.
You can also create snapshots, point-in-time copies of your data. You can create a snapshot of your Lightsail for Research virtual computer and use it as a baseline to create new computers or for data backup. A snapshot contains all of the data needed to restore your computer from the moment the snapshot was taken.
By creating a computer from a snapshot, you can easily restore a computer or upgrade it to a larger size. Create snapshots frequently to protect your data from corrupt applications or user errors.
You can define cost control rules to help manage the usage and cost of your Lightsail for Research virtual computers. You can create rules that stop running computers when average CPU utilization over a selected time period falls below a prescribed level.
For example, you can configure a rule that automatically stops a specific computer when its CPU utilization is equal to or less than 1 percent for a 30-minute period. Lightsail for Research will then automatically stop the computer so that you don’t incur charges for running computers.
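The rule logic described above is easy to picture. Here is a minimal sketch of the evaluation (the threshold, window, and sample data are illustrative; the actual service applies the rule for you):

```python
def should_stop(cpu_samples, threshold_pct=1.0, period_minutes=30,
                sample_interval_minutes=5):
    """Return True if average CPU utilization stayed at or below the
    threshold across the whole evaluation window."""
    needed = period_minutes // sample_interval_minutes
    if len(cpu_samples) < needed:
        return False  # not enough data to cover the window yet
    recent = cpu_samples[-needed:]
    return sum(recent) / len(recent) <= threshold_pct

# Six 5-minute samples covering a 30-minute window, all nearly idle:
print(should_stop([0.4, 0.2, 0.9, 0.5, 0.3, 0.6]))  # True
```

A busy computer (say, samples averaging 50 percent) would return False and keep running.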
In the Usage menu, you can view the cost estimate and usage hours for your resources during a specified time period.
Now Available
Amazon Lightsail for Research is now available in the US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and Europe (Stockholm) Regions.
You can start using it today. To learn more, see the Amazon Lightsail for Research User Guide, and please send feedback to AWS re:Post for Amazon Lightsail or through your usual AWS support contacts.
– Channy
from AWS News Blog https://ift.tt/BWHJG1t
via IFTTT
Monday, February 27, 2023
AWS Week in Review – February 27, 2023
A couple days ago, I had the honor of doing a live stream on generative AI, discussing recent innovations and concepts behind the current generation of large language and vision models and how we got there. In today’s roundup of news and announcements, I will share some additional information—including an expanded partnership to make generative AI more accessible, a blog post about diffusion models, and our weekly Twitch show on Generative AI. Let’s dive right into it!
Last Week’s Launches
Here are some launches that got my attention during the previous week:
Integrated Private Wireless on AWS – The Integrated Private Wireless on AWS program is designed to provide enterprises with managed and validated private wireless offerings from leading communications service providers (CSPs). The offerings integrate CSPs’ private 5G and 4G LTE wireless networks with AWS services across AWS Regions, AWS Local Zones, AWS Outposts, and AWS Snow Family. For more details, read this Industries Blog post and check out this eBook. And, if you’re attending the Mobile World Congress Barcelona this week, stop by the AWS booth at the Upper Walkway, South Entrance, at the Fira Barcelona Gran Via, to learn more.
AWS Glue Crawlers – Now integrate with Lake Formation. AWS Glue Crawlers are used to discover datasets, extract schema information, and populate the AWS Glue Data Catalog. With this Glue Crawler and Lake Formation integration, you can configure a crawler to use Lake Formation permissions to access an S3 data store or a Data Catalog table with an underlying S3 location within the same AWS account or another AWS account. You can configure an existing Data Catalog table as a crawler’s target if the crawler and the Data Catalog table reside in the same account. To learn more, check out this Big Data Blog post.
Amazon SageMaker Model Monitor – You can now launch and configure Amazon SageMaker Model Monitor from the SageMaker Model Dashboard using a code-free point-and-click setup experience. SageMaker Model Dashboard gives you unified monitoring across all your models by providing insights into deviations from expected behavior, automated alerts, and troubleshooting to improve model performance. Model Monitor can detect drift in data quality, model quality, bias, and feature attribution and alert you to take remedial actions when such changes occur.
Amazon EKS – Now supports Kubernetes version 1.25. Kubernetes 1.25 introduced several new features and bug fixes, and you can now use Amazon EKS and Amazon EKS Distro to run Kubernetes version 1.25. You can create new 1.25 clusters or upgrade your existing clusters to 1.25 using the Amazon EKS console, the eksctl command line interface, or through an infrastructure-as-code tool. To learn more about this release named “Combiner,” check out this Containers Blog post.
Amazon Detective – New self-paced workshop available. You can now learn to use Amazon Detective with a new self-paced workshop in AWS Workshop Studio. AWS Workshop Studio is a collection of self-paced tutorials designed to teach practical skills and techniques to solve business problems. The Amazon Detective workshop is designed to teach you how to use the primary features of Detective through a series of interactive modules that cover topics such as security alert triage, security incident investigation, and threat hunting. Get started with the Amazon Detective Workshop.
For a full list of AWS announcements, be sure to keep an eye on the What's New at AWS page.
Other AWS News
Here are some additional news items and blog posts that you may find interesting:
AWS and Hugging Face collaborate to make generative AI more accessible and cost-efficient – This previous week, we announced an expanded collaboration between AWS and Hugging Face to accelerate the training, fine-tuning, and deployment of large language and vision models used to create generative AI applications. Generative AI applications can perform a variety of tasks, including text summarization, answering questions, code generation, image creation, and writing essays and articles. For more details, read this Machine Learning Blog post.
If you are interested in generative AI, I also recommend reading this blog post on how to Fine-tune text-to-image Stable Diffusion models with Amazon SageMaker JumpStart. Stable Diffusion is a deep learning model that allows you to generate realistic, high-quality images and stunning art in just a few seconds. This blog post discusses how to make design choices, including dataset quality, size of training dataset, choice of hyperparameter values, and applicability to multiple datasets.
AWS open-source news and updates – My colleague Ricardo writes this weekly open-source newsletter in which he highlights new open-source projects, tools, and demos from the AWS Community. Read edition #146 here.
Upcoming AWS Events
Check your calendars and sign up for these AWS events:
#BuildOn Generative AI – Join our weekly live Build On Generative AI Twitch show. Every Monday morning, 9:00 US PT, my colleagues Emily and Darko take a look at aspects of generative AI. They host developers, scientists, startup founders, and AI leaders and discuss how to build generative AI applications on AWS.
In today’s episode, my colleague Chris walked us through an end-to-end ML pipeline from data ingestion to fine-tuning and deployment of generative AI models. You can watch the video here.
AWS Pi Day – Join me on March 14 for the third annual AWS Pi Day live, virtual event hosted on the AWS On Air channel on Twitch as we celebrate the 17th birthday of Amazon S3 and the cloud.
We will discuss the latest innovations across AWS Data services, from storage to analytics and AI/ML. If you are curious about how AI can transform your business, register here and join my session.
AWS Innovate Data and AI/ML edition – AWS Innovate is a free online event to learn the latest from AWS experts and get step-by-step guidance on using AI/ML to drive fast, efficient, and measurable results. Register now for EMEA (March 9) and the Americas (March 14).
You can browse all upcoming AWS-led in-person and virtual events, as well as developer-focused events such as Community Days.
That’s all for this week. Check back next Monday for another Week in Review!
— Antje
This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!
from AWS News Blog https://ift.tt/DoH5KFv
via IFTTT
Tuesday, February 21, 2023
New: AWS Telco Network Builder – Deploy and Manage Telco Networks
Over the course of more than one hundred years, the telecom industry has become standardized and regulated, and has developed methods, technologies, and an entire vocabulary (chock full of interesting acronyms) along the way. As an industry, they need to honor this tremendous legacy while also taking advantage of new technology, all in the name of delivering the best possible voice and data services to their customers.
Today I would like to tell you about AWS Telco Network Builder (TNB). This new service is designed to help Communications Service Providers (CSPs) deploy and manage public and private telco networks on AWS. It uses existing standards, practices, and data formats, and makes it easier for CSPs to take advantage of the power, scale, and flexibility of AWS.
Today, CSPs often deploy their code to virtual machines. However, as they look to the future they are looking for additional flexibility and are increasingly making use of containers. AWS TNB is intended to be a part of this transition, and makes use of Kubernetes and Amazon Elastic Kubernetes Service (EKS) for packaging and deployment.
Concepts and Vocabulary
Before we dive into the service, let’s take a look at some concepts and vocabulary that are unique to this industry and relevant to AWS TNB:
European Telecommunications Standards Institute (ETSI) – A European organization that defines specifications suitable for global use. AWS TNB supports multiple ETSI specifications including ETSI SOL001 through ETSI SOL005, and ETSI SOL007.
Communications Service Provider (CSP) – An organization that offers telecommunications services.
Topology and Orchestration Specification for Cloud Applications (TOSCA) – A standardized grammar that is used to describe service templates for telecommunications applications.
Network Function (NF) – A software component that performs a specific core or value-added function within a telco network.
Virtual Network Function Descriptor (VNFD) – A specification of the metadata needed to onboard and manage a Network Function.
Cloud Service Archive (CSAR) – A ZIP file that contains a VNFD, references to container images that hold Network Functions, and any additional files needed to support and manage the Network Function.
Network Service Descriptor (NSD) – A specification of the compute, storage, networking, and location requirements for a set of Network Functions along with the information needed to assemble them to form a telco network.
Network Core – The heart of a network. It uses control plane and data plane operations to manage authentication, authorization, data, and policies.
Service Orchestrator (SO) – An external, high-level network management tool.
Radio Access Network (RAN) – The components (base stations, antennas, and so forth) that provide wireless coverage over a specific geographic area.
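To make the CSAR concept above concrete, here is a minimal sketch of assembling one with Python's standard library (the file names and VNFD content are illustrative placeholders, not a complete ETSI-compliant package):

```python
import zipfile

def build_csar(path, vnfd_yaml, artifacts):
    """Bundle a VNFD and its supporting artifacts into a CSAR (a plain ZIP file)."""
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("vnfd.yaml", vnfd_yaml)       # the function descriptor
        for name, data in artifacts.items():     # Helm charts, scripts, etc.
            z.writestr(name, data)

vnfd = "tosca_definitions_version: tnb_simple_yaml_1_0\n"
build_csar("smf.csar", vnfd, {"free5gc-smf/Chart.yaml": "name: free5gc-smf\n"})

with zipfile.ZipFile("smf.csar") as z:
    print(sorted(z.namelist()))  # ['free5gc-smf/Chart.yaml', 'vnfd.yaml']
```

The resulting archive is what gets uploaded as a function package in the walkthrough below.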
Using AWS Telco Network Builder (TNB)
I don’t happen to be a CSP, but I will do my best to walk you through the getting-started experience anyway! The primary steps are:
- Creating a function package for each Network Function by uploading a CSAR.
- Creating a network package for the network by uploading a Network Service Descriptor (NSD).
- Creating a network by selecting and instantiating an NSD.
To begin, I open the AWS TNB Console and click Get started:
Initially, I have no networks, no function packages, and no network packages:
My colleagues supplied me with sample CSARs and an NSD for use in this blog post (the network functions are from Free 5G Core):
Each CSAR is a fairly simple ZIP file with a VNFD and other items inside. For example, the VNFD for the Free 5G Core Session Management Function (smf) looks like this:
tosca_definitions_version: tnb_simple_yaml_1_0
topology_template:
  node_templates:
    Free5gcSMF:
      type: tosca.nodes.AWS.VNF
      properties:
        descriptor_id: "4b2abab6-c82a-479d-ab87-4ccd516bf141"
        descriptor_version: "1.0.0"
        descriptor_name: "Free5gc SMF 1.0.0"
        provider: "Free5gc"
      requirements:
        helm: HelmImage
    HelmImage:
      type: tosca.nodes.AWS.Artifacts.Helm
      properties:
        implementation: "./free5gc-smf"
The final section (HelmImage) of the VNFD points to the Kubernetes Helm chart that defines the implementation.
I click Function packages in the console, then click Create function package. Then I upload the first CSAR and click Next:
I review the details and click Create function package (each VNFD can include a set of parameters that have default values which can be overwritten with values that are specific to a particular deployment):
I repeat this process for the nine remaining CSARs, and all ten function packages are ready to use:
Now I am ready to create a Network Package. The Network Service Descriptor is also fairly simple, and I will show you several excerpts. First, the NSD establishes a mapping from descriptor_id to namespace for each Network Function so that the functions can be referenced by name:
vnfds:
  - descriptor_id: "aa97cf70-59db-4b13-ae1e-0942081cc9ce"
    namespace: "amf"
  - descriptor_id: "86bd1730-427f-480a-a718-8ae9dcf3f531"
    namespace: "ausf"
  ...
Then it defines the input variables, including default values (this reminds me of an AWS CloudFormation template):
inputs:
  vpc_cidr_block:
    type: String
    description: "CIDR Block for Free5GCVPC"
    default: "10.100.0.0/16"
  eni_subnet_01_cidr_block:
    type: String
    description: "CIDR Block for Free5GCENISubnet01"
    default: "10.100.50.0/24"
  ...
Next, it uses the variables to create a mapping to the desired AWS resources (a VPC and a subnet in this case):
Free5GCVPC:
  type: tosca.nodes.AWS.Networking.VPC
  properties:
    cidr_block: { get_input: vpc_cidr_block }
    dns_support: true

Free5GCENISubnet01:
  type: tosca.nodes.AWS.Networking.Subnet
  properties:
    type: "PUBLIC"
    availability_zone: { get_input: subnet_01_az }
    cidr_block: { get_input: eni_subnet_01_cidr_block }
  requirements:
    route_table: Free5GCRouteTable
    vpc: Free5GCVPC
Then it defines an AWS Internet Gateway within the VPC:
Free5GCIGW:
  type: tosca.nodes.AWS.Networking.InternetGateway
  capabilities:
    routing:
      properties:
        dest_cidr: { get_input: igw_dest_cidr }
  requirements:
    route_table: Free5GCRouteTable
    vpc: Free5GCVPC
Finally, it specifies deployment of the Network Functions to an EKS cluster; the functions are deployed in the specified order:
Free5GCHelmDeploy:
  type: tosca.nodes.AWS.Deployment.VNFDeployment
  requirements:
    cluster: Free5GCEKS
    deployment: Free5GCNRFHelmDeploy
    vnfs:
      - amf.Free5gcAMF
      - ausf.Free5gcAUSF
      - nssf.Free5gcNSSF
      - pcf.Free5gcPCF
      - smf.Free5gcSMF
      - udm.Free5gcUDM
      - udr.Free5gcUDR
      - upf.Free5gcUPF
      - webui.Free5gcWEBUI
  interfaces:
    Hook:
      pre_create: Free5gcSimpleHook
I click Create network package, select the NSD, and click Next to proceed. AWS TNB asks me to review the list of function packages and the NSD parameters. I do so, and click Create network package:
My network package is created and ready to use within seconds:
Now I am ready to create my network instance! I select the network package and choose Create network instance from the Actions menu:
I give my network a name and a description, then click Next:
I make sure that I have selected the desired network package, review the list of function packages that will be deployed, and click Next:
Then I do one final review, and click Create network instance:
I select the new network instance and choose Instantiate from the Actions menu:
I review the parameters, and enter any desired overrides, then click Instantiate network:
AWS Telco Network Builder (TNB) begins to instantiate my network (behind the scenes, the service creates an AWS CloudFormation template, uses the template to create a stack, and runs other tasks, including Helm charts and custom scripts). When the instantiation step is complete, my network is ready to go. Instantiating a network creates a deployment, and the same network (perhaps with some parameters overridden) can be deployed more than once. I can see all of the deployments at a glance:
I can return to the dashboard to see my networks, function packages, network packages, and recent deployments:
Inside an AWS TNB Deployment
Let’s take a quick look inside my deployment. Here’s what AWS TNB set up for me:
Network – An Amazon Virtual Private Cloud (Amazon VPC) with three subnets, a route table, a route, and an Internet Gateway.
Compute – An Amazon Elastic Kubernetes Service (EKS) cluster.
CI/CD – An AWS CodeBuild project that is triggered every time a node is added to the cluster.
Things to Know
Here are a few things to know about AWS Telco Network Builder (TNB):
Access – In addition to the console access that I showed you above, you can access AWS TNB from the AWS Command Line Interface (AWS CLI) and the AWS SDKs.
Deployment Options – We are launching with the ability to create a network that spans multiple Availability Zones in a single AWS Region. Over time we expect to add additional deployment options such as Local Zones and Outposts.
Pricing – Pricing is based on the number of Network Functions that are managed by AWS TNB and on calls to the AWS TNB APIs, but the first 45,000 API requests per month in each AWS Region are not charged. There are also additional charges for the AWS resources that are created as part of the deployment. To learn more, read the TNB Pricing page.
Getting Started
To learn more and to get started, visit the AWS Telco Network Builder (TNB) home page.
— Jeff;
from AWS News Blog https://ift.tt/AQTvwYG
via IFTTT
Monday, February 20, 2023
AWS Week in Review – February 20, 2023
Since the devastating earthquake in Türkiye and Syria, Amazon has activated disaster relief services to quickly provide relief items to impacted areas. The company and Amazon customers have donated nearly 100,000 relief items so far, and donations continue to come in.
The AWS Disaster Preparedness and Response team is providing trained technical volunteers and solutions to Help.NGO, a United Nations standby partner assisting in the region.
We continue to support field requests for winter survival equipment, clothing, hygiene products, and other items. If you wish to donate, check out our blog post to find your local donation site and to learn more about how we’ve supported relief efforts so far. Thank you for your support!
Last Week’s Launches
As usual, let’s take a look at some launches from the last week that I want to remind you of:
New Amazon EC2 M7g and R7g instances – Following the launch of C7g instances in May 2022, the General Purpose (M7g) and Memory-Optimized (R7g) instances are now generally available. Both types are powered by the latest-generation AWS Graviton3 processors and are designed to deliver up to 25 percent better performance than the equivalent sixth-generation (M6g and R6g) instances, making them among the best performers in Amazon EC2.
Here is my infographic to highlight the principal performance and capacity improvements that we have made available with the new instances:
Enable AWS Systems Manager across all Amazon EC2 instances – With a single action, all EC2 instances in your account can become managed instances using the Default Host Management Configuration (DHMC), without changing existing instance profile roles. DHMC is ideal for all EC2 users and offers a simple, scalable process to standardize the availability of Systems Manager tools for users who manage many instances. To learn more, see Default Host Management Configuration in the AWS documentation.
Programmatically manage opt-in AWS Regions – You can now view and manage enabled and disabled opt-in AWS Regions in your AWS accounts using AWS APIs. You can enable, disable, read, and list Region opt status with the following AWS CLI commands; for example, to enable the Africa (Cape Town) Region:
$ aws account enable-region --region-name af-south-1
$ aws account get-region-opt-status --region-name af-south-1
{
    "RegionName": "af-south-1",
    "RegionOptStatus": "ENABLING"
}
This saves you the time and effort of doing it through the AWS Management Console. To learn more, see Specifying which AWS Regions your account can use in the AWS documentation.
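Because enabling a Region is asynchronous (the status moves through ENABLING before reaching ENABLED), scripts typically poll until it settles. Here is a minimal polling sketch with an injected status function so it can run without an AWS account; in practice you would call `aws account get-region-opt-status` or the equivalent SDK API inside `get_status`:

```python
import time

def wait_for_region(get_status, region, target="ENABLED",
                    delay_s=1.0, max_polls=60):
    """Poll a status function until the Region reaches the target opt status."""
    for _ in range(max_polls):
        status = get_status(region)
        if status == target:
            return status
        time.sleep(delay_s)
    raise TimeoutError(f"{region} did not reach {target}")

# Fake status source standing in for the Account API:
states = iter(["ENABLING", "ENABLING", "ENABLED"])
print(wait_for_region(lambda r: next(states), "af-south-1", delay_s=0))  # ENABLED
```

Injecting the status function also makes the retry logic trivial to unit test.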
AWS Modular Data Center (AWS MDC) – AWS MDC is available as a self-contained modular data center unit: an environmentally controlled physical enclosure that can host racks of AWS Outposts or AWS Snow Family devices. AWS MDC lets defense customers run low-latency applications in infrastructure-limited environments for scenarios like large-scale military operations, crisis response, and security cooperation.
AWS MDC is currently available in the AWS GovCloud (US) Regions, and the service can only be purchased by the U.S. Department of Defense under the Joint Warfighting Cloud Capability (JWCC) contract. To learn more, read the AWS Public Sector Blog post.
Amazon EKS Anywhere on Snow – This is a new deployment option that helps you create and operate Kubernetes clusters on AWS Snowball Edge devices, with familiar provisioning and operational visibility tooling for container applications deployed at the edge.
Amazon EKS Anywhere on Snow is ideal for customers who run their operations using secure and durable AWS Snow Family devices in unconditioned or mobile environments such as construction sites, ships, and rapidly deployed military forces. To learn more, read the AWS Container Blog post.
For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.
Other AWS News
Here are some other news items from the last week that you may find interesting:
- Now Go Build with Werner Vogels – Digital Humans: The new episode is about how AWS helps Soul Machines create digital people and what makes digital people so human-like. Dr. Werner Vogels visits Soul Machines, an AI company that uses the cloud to deliver meaningful connections between human and machine. Watch full episodes of Now Go Build if you are interested in the future of technology on the cloud.
- This Is My Architecture Special with Scuderia Ferrari: Scuderia Ferrari has launched a new fan app that innovates fan engagement through personalization. Utilizing AWS analytics and event-driven architecture, the app provides a unique look behind the scenes of the most successful Formula 1 team. Join builder Adrian De Luca in Maranello, Italy to learn about the app’s architecture and meet with Ferrari leadership to understand the future of fan engagement in the sport.
- Sustainability at AWS re:Invent 2022: Adrian Cockcroft, ex-VP of Amazon Sustainability, picked out the most relevant and most-watched videos on this topic from AWS re:Invent 2022. You can read his summary of them and find the latest information on progress toward Sustainability in the Cloud and Amazon’s goal of 100% renewable energy by 2025.
- AWS Spatial Computing Blog: Spatial computing is about the digitization (or virtualization, or digital twinning) of objects, systems, machines, people, and their interactions and environments. This new area treats extended reality (XR), digital twins, and three-dimensional simulations as the future of technology. Read the latest interesting blog posts such as Metaverse Building Blocks and From 3D to Simulation.
Upcoming AWS Events
Check your calendars and sign up for these AWS-led events:
AWS at MWC 2023 – Join AWS at MWC23 in Barcelona, Spain, February 27 – March 2, and interact with upcoming innovative new service demonstrations, be inspired at one of our many sessions, or request a more personal meeting with us onsite.
AWS Innovate Data and AI/ML edition – AWS Innovate is a free online event to learn the latest from AWS experts and get step-by-step guidance on using AI/ML to drive fast, efficient, and measurable results. Register now for Asia Pacific & Japan (February 22, 2023), EMEA (March 9), and the Americas (March 14).
AWS Summits – AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. We kick off in Paris and Sydney on April 4, with most other Summits scheduled from April to June. Please stay tuned and watch for the dates and locations to be announced.
You can browse all upcoming AWS-led in-person and virtual events, as well as developer-focused events such as Community Days.
That’s all for this week. Check back next Monday for another Week in Review!
— Channy
This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!
from AWS News Blog https://ift.tt/Ot6wWdB
via IFTTT
Wednesday, February 15, 2023
Behind the Scenes at AWS – DynamoDB UpdateTable Speedup
We often talk about the Pace of Innovation at AWS, and share the results in this blog, in the AWS What’s New page, and in our weekly AWS on Air streams. Today I would like to talk about a slightly different kind of innovation, the kind that happens behind the scenes.
Each AWS customer uses a different mix of services, and uses those services in unique ways. Every service is instrumented and monitored, and the team responsible for designing, building, running, scaling, and evolving the service pays continuous attention to all of the resulting metrics. The metrics provide insights into how the service is being used, how it performs under load, and in many cases highlights areas for optimization in pursuit of higher availability, better performance, and lower costs.
Once an area for improvement has been identified, a plan is put into place, changes are made and tested in pre-production environments, and then deployed to multiple AWS Regions. This happens routinely, and (to date) without fanfare. Each part of AWS gets better and better, with no action on your part.
DynamoDB UpdateTable
In late 2021 we announced the Standard-Infrequent Access table class for Amazon DynamoDB. As Marcia noted in her post, using this class can reduce your storage costs by 60% compared to the existing (Standard) class. She also showed you how to modify a table to use the new class. The modification operation calls the UpdateTable function, and that function is the topic of this post!
As is the case with just about every AWS launch, customers began to make use of the new table class right away. They created new tables and modified existing ones, benefiting from the lower pricing as soon as the modification was complete.
DynamoDB uses a highly distributed storage architecture. Each table is split into multiple partitions; operations such as changing the storage class are done in parallel across the partitions. After looking at a lot of metrics, the DynamoDB team found ways to increase parallelism and to reduce the amount of time spent managing the parallel operations.
This change had a dramatic effect for Amazon DynamoDB tables over 500 GB in size, reducing the time to update the table class by up to 97%.
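The per-partition parallelism described above can be sketched with a thread pool: fan the work out across workers instead of updating one partition at a time. This is purely illustrative (DynamoDB's internal mechanism is not public); the partitions and update function here are stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

def update_partition(partition_id):
    """Stand-in for changing one partition's storage class."""
    return (partition_id, "STANDARD_INFREQUENT_ACCESS")

def update_table_class(partition_ids, max_workers=8):
    # Fan the per-partition work out across a pool instead of serializing it.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(update_partition, partition_ids))

result = update_table_class(range(16))
print(len(result))  # 16
```

With real per-partition latency, total time approaches (partitions / workers) x per-partition cost rather than partitions x cost, which is the intuition behind the 97% reduction.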
Each time we make a change like this, we capture the “before” and “after” metrics, and share the results internally so that other teams can learn from the experience while they are in the process of making similar improvements of their own. Even better, each change that we make opens the door to other ones, creating a positive feedback loop that (once again) benefits everyone that uses a particular service or feature.
Every DynamoDB user can take advantage of this increased performance right away without the need for a version upgrade or downtime for maintenance (DynamoDB does not even have maintenance windows).
Incremental performance and operational improvements like this one are made routinely and without much fanfare. However, it is always good to hear from our customers when their own measurements indicate that some part of AWS has become better or faster.
Leadership Principles
As I was thinking about this change while getting ready to write this post, several Amazon Leadership Principles came to mind. The DynamoDB team showed Customer Obsession by implementing a change that would benefit any DynamoDB user with tables over 500 GB in size. To do this they had to Invent and Simplify, coming up with a better way to implement the UpdateTable function.
While you, as an AWS customer, get the benefits with no action needed on your part, this does not mean that you have to wait until we decide to pay special attention to your particular use case. If you are pushing any aspect of AWS to the limit (or want to), I recommend that you make contact with the appropriate service team and let them know what’s going on. You might be running into a quota or other limit, or pushing bandwidth, memory, or other resources to extremes. Whatever the case, the team would love to hear from you!
Stay Tuned
I have a long list of other internal improvements that we have made, and will be working with the teams to share more of them throughout the year.
— Jeff;
from AWS News Blog https://ift.tt/WgF0Juo
via IFTTT
Tuesday, February 14, 2023
How to Connect Business and Technology to Embrace Strategic Thinking (Book Review)
The Value Flywheel Effect: Power the Future and Accelerate Your Organization to the Modern Cloud
by David Anderson with Mark McCann and Michael O’Reilly
With this post, I’d like to share a new book that got my attention. It’s a book at the intersection of business, technology, and people. This is a great read for anyone who wants to understand how organizations can evolve to maximize the business impact of new technologies and speed up their internal processes.
Last year at re:Invent, I had the opportunity to meet David Anderson. As Director of Technology at Liberty Mutual, he drove the technology change when the global insurance company, founded in 1912, moved its services to the cloud and adopted a serverless-first strategy. He created an environment where experimentation was normal, and software engineers had time and space to learn. This worked so well that, at some point, he had four AWS Heroes in his extended team.
A few months before, I heard that David was writing a book with Mark McCann and Michael O’Reilly. They all worked together at Liberty Mutual, and they were distilling their learnings to help other organizations implement a similar approach. The book was just out when we met, and I was curious to learn more, starting from the title. We met in the expo area, and David was kind enough to give me a signed copy of the book.
The book is published by IT Revolution, the same publisher behind some of my favorite books such as The Phoenix Project, Team Topologies, and Accelerate. The book is titled The Value Flywheel Effect because when you connect business and technology in an organization, you start to turn a flywheel that builds momentum with each small win.
The Value Flywheel
The four phases of the Value Flywheel are:
- Clarity of Purpose – This is the part where you look at what is really important for your organization, what makes your company different, and define your North Star and how to measure your distance from it. In this phase, you look at the company through the eyes of the CEO.
- Challenge & Landscape – Here you prepare the organization and set up the environment for the teams. We often forget the social aspect of technical teams, so great attention is given here to establishing the right level of psychological safety for teams to operate. This phase is for engineers.
- Next Best Action – In this phase, you think like a product leader and plan the next steps with a focus on how to improve the developer experience. One of the key aspects is that “code is a liability” and the less code you write to solve a business problem, the better it is for speed and maintenance. For example, you can avoid some custom implementations and offload their requirements to capabilities offered by cloud providers.
- Long-Term Value – This is the CTO perspective, looking at how to set up a problem-preventing culture with well-architected systems and a focus on observability and sustainability. Sustainability here is not just considering the global environment but also the teams and the people working for the organization.
As you would expect from a flywheel, you should iterate on these four phases so that every new spin gets easier and faster.
Wardley Mapping
One thing that I really appreciate about the book is how it made it easy for me to use Wardley mapping (usually applied in a business context) in a technical scenario. Wardley maps, invented by Simon Wardley, provide a visual representation of the landscape in which a business operates.
Each map consists of a value chain, where you draw the components that your customers need. The components are connected to show how they depend on each other. The position of each component reflects how visible it is to customers (vertical axis) and its evolution status, from genesis to product to commodity (horizontal axis). Over time, some components evolve from being custom-built to becoming a product or being commoditized, which shows up on the map as a natural movement to the right. For example, data centers were custom-built in the past, but then they became a standard product, and cloud computing made them available as a commodity.
Basic elements of a map – Provided courtesy of Simon Wardley, CC BY-SA 4.0.
With mapping, you can more easily understand what improvements you need and what gaps you have in your technical solution. In this way, engineers can identify which components they should focus on to maximize their impact and what parts are not strategic and can be offloaded to a SaaS solution. It’s a sort of evolutionary architecture where mapping gives a way to look ahead at how the system should evolve over time and where inertia can slow down the evolution of part of the system.
Sometimes it seems that the same best practices apply everywhere, but this is not true. An advantage of mapping is that it helps identify the best team and methodology to use based on a component's evolution status, as described by its horizontal position on a map. For example, an "explorer" attitude is best suited for components in their genesis or being custom built, a "villager" works best on products, and when something becomes a commodity you need a "town planner."
More Tools and Less Code
The authors look at many available tools and frameworks. For example, the book introduces the North Star Framework, a way to manage products by first identifying their most important metric (the North Star), and Gojko Adzic's Impact Mapping, a collaborative planning technique that focuses on leading indicators to help teams make a big impact with their software products. By the way, Gojko is also an AWS Serverless Hero.
Another interesting point is how to provide engineers with the necessary time and space to learn. I specifically like how internal events are called out and compared to public conferences. In internal events, engineers have a chance to use a new technology within their company environment, making it easier to demonstrate what can be done with all the limits of an actual scenario.
Finally, I’d like to highlight this part that clearly defines what the book intends by the statements, “code is a liability”:
“When you ask a software team to build something, they deliver a system, not lines of code. The asset is not the code; the asset is the system. The less code in the system, the less overhead you have bought. Some developers may brag about how much code they’ve written, but this isn’t something to brag about.”
This is not a programming book, and serverless technologies are used as examples of how you can speed up the flywheel. If you are looking for a technical deep dive on serverless technologies, you can find more on Serverless Land, a site that brings together the latest information and learning resources for serverless computing, or have a look at the Serverless Architectures on AWS book.
Now that every business is a technology business, The Value Flywheel Effect is about how to accelerate and transform an organization. It helps set the right environment, purpose, and stage to modernize your applications as you adopt cloud computing and get the benefit of it.
You can meet David, Mark, and Michael at the Serverless Edge, where a team of engineers, tech enthusiasts, marketers, and thought leaders obsessed with technology helps organizations learn and communicate how serverless can transform a business model.
— Danilo
Monday, February 13, 2023
New Graviton3-Based General Purpose (m7g) and Memory-Optimized (r7g) Amazon EC2 Instances
We’ve come a long way since the launch of the m1.small instance in 2006, adding instances with additional memory, compute power, and your choice of Intel, AMD, or Graviton processors. The original general-purpose “one size fits all” instance has evolved into six families, each one optimized for specific use cases, with over 600 generally available instances in all.
New M7g and R7g
Today I am happy to tell you about the newest Amazon EC2 instance types, the M7g and the R7g. Both types are powered by the latest generation AWS Graviton3 processors, and are designed to deliver up to 25% better performance than the equivalent sixth-generation (M6g and R6g) instances, making them the best performers in EC2.
The M7g instances are for general purpose workloads such as application servers, microservices, gaming servers, mid-sized data stores, and caching fleets. The R7g instances are a great fit for memory-intensive workloads such as open-source databases, in-memory caches, and real-time big data analytics.
Here are the specs for the M7g instances:
| Instance Name | vCPUs | Memory | Network Bandwidth | EBS Bandwidth |
|---|---|---|---|---|
| m7g.medium | 1 | 4 GiB | up to 12.5 Gbps | up to 10 Gbps |
| m7g.large | 2 | 8 GiB | up to 12.5 Gbps | up to 10 Gbps |
| m7g.xlarge | 4 | 16 GiB | up to 12.5 Gbps | up to 10 Gbps |
| m7g.2xlarge | 8 | 32 GiB | up to 15 Gbps | up to 10 Gbps |
| m7g.4xlarge | 16 | 64 GiB | up to 15 Gbps | up to 10 Gbps |
| m7g.8xlarge | 32 | 128 GiB | 15 Gbps | 10 Gbps |
| m7g.12xlarge | 48 | 192 GiB | 22.5 Gbps | 15 Gbps |
| m7g.16xlarge | 64 | 256 GiB | 30 Gbps | 20 Gbps |
| m7g.metal | 64 | 256 GiB | 30 Gbps | 20 Gbps |
And here are the specs for the R7g instances:
| Instance Name | vCPUs | Memory | Network Bandwidth | EBS Bandwidth |
|---|---|---|---|---|
| r7g.medium | 1 | 8 GiB | up to 12.5 Gbps | up to 10 Gbps |
| r7g.large | 2 | 16 GiB | up to 12.5 Gbps | up to 10 Gbps |
| r7g.xlarge | 4 | 32 GiB | up to 12.5 Gbps | up to 10 Gbps |
| r7g.2xlarge | 8 | 64 GiB | up to 15 Gbps | up to 10 Gbps |
| r7g.4xlarge | 16 | 128 GiB | up to 15 Gbps | up to 10 Gbps |
| r7g.8xlarge | 32 | 256 GiB | 15 Gbps | 10 Gbps |
| r7g.12xlarge | 48 | 384 GiB | 22.5 Gbps | 15 Gbps |
| r7g.16xlarge | 64 | 512 GiB | 30 Gbps | 20 Gbps |
| r7g.metal | 64 | 512 GiB | 30 Gbps | 20 Gbps |
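To make the two spec tables concrete, here is a small sketch (plain Python, no AWS calls) that encodes the memory column from each table and picks the smallest instance size that satisfies a memory requirement:

```python
# Memory (GiB) per instance size, taken from the M7g and R7g tables above.
M7G_MEMORY = {
    "m7g.medium": 4, "m7g.large": 8, "m7g.xlarge": 16, "m7g.2xlarge": 32,
    "m7g.4xlarge": 64, "m7g.8xlarge": 128, "m7g.12xlarge": 192,
    "m7g.16xlarge": 256,
}
R7G_MEMORY = {
    "r7g.medium": 8, "r7g.large": 16, "r7g.xlarge": 32, "r7g.2xlarge": 64,
    "r7g.4xlarge": 128, "r7g.8xlarge": 256, "r7g.12xlarge": 384,
    "r7g.16xlarge": 512,
}

def smallest_with_memory(family, needed_gib):
    """Return the smallest instance size offering at least needed_gib GiB."""
    fitting = [(mem, name) for name, mem in family.items() if mem >= needed_gib]
    return min(fitting)[1] if fitting else None

print(smallest_with_memory(M7G_MEMORY, 100))  # m7g.8xlarge (128 GiB)
print(smallest_with_memory(R7G_MEMORY, 100))  # r7g.4xlarge (128 GiB)
```

Note how the memory-optimized R7g family reaches the same 100 GiB target two sizes earlier, which is exactly the trade-off between the two families.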
Both types of instances are equipped with DDR5 memory, which provides up to 50% higher memory bandwidth than the DDR4 memory used in previous generations. Here’s an infographic that I created to highlight the principal performance and capacity improvements that we have made available with the new instances:
If you are not yet running your application on Graviton instances, be sure to take advantage of the AWS Graviton Ready Program. The partners in this program provide services and solutions that will help you to migrate your application and to take full advantage of all that the Graviton instances have to offer. Other helpful resources include the Porting Advisor for Graviton and the Graviton Fast Start program.
The instances are built on the AWS Nitro System, and benefit from multiple features that enhance security: always-on memory encryption, a dedicated cache for each vCPU, and support for pointer authentication. They also support encrypted EBS volumes, which protect data at rest on the volume, data moving between the instance and the volume, snapshots created from the volume, and volumes created from those snapshots. To learn more about these and other Nitro-powered security features, be sure to read The Security Design of the AWS Nitro System.
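Requesting an encrypted EBS volume for one of these instances comes down to a single flag. As a sketch (the Availability Zone is a placeholder), here are the parameters you would pass to EC2's CreateVolume call:

```python
# Sketch: parameters for EC2's CreateVolume call requesting an encrypted
# gp3 volume. The Availability Zone is a placeholder; you would pass the
# dict to boto3.client("ec2").create_volume(**params)

def encrypted_volume_params(az, size_gib):
    """Build CreateVolume parameters for an encrypted gp3 volume."""
    return {
        "AvailabilityZone": az,
        "Size": size_gib,
        "VolumeType": "gp3",
        # With Encrypted=True, data at rest, data in transit to the
        # instance, and snapshots of the volume are all protected.
        "Encrypted": True,
    }

params = encrypted_volume_params("us-east-1a", 100)
print(params["Encrypted"])  # True
```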
On the network side, the instances are EBS-Optimized with dedicated networking between the instances and the EBS volumes, and they also support Enhanced Networking (read How do I enable and configure enhanced networking on my EC2 instances? for more info). The 16xlarge and metal instances also support Elastic Fabric Adapter (EFA) for applications that need a high level of inter-node communication.
Pricing and Regions
M7g and R7g instances are available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) AWS Regions in On-Demand, Spot, Reserved Instance, and Savings Plan form.
— Jeff;
PS – Launch one today and let me know what you think!
Week in Review – February 13, 2023
AWS announced 32 capabilities since we published the last Week in Review blog post a week ago. I also read a couple of other news items and blog posts.
Here is my summary.
The VPC section of the AWS Management Console now allows you to visualize your VPC resources, such as the relationships between a VPC and its subnets, routing tables, and gateways. Previously, this visualization was available only at VPC creation time; now you can return to it using the Resource Map tab in the console. You can read the details in Channy’s blog post.
CloudTrail Lake now gives you the ability to ingest activity events from non-AWS sources. This lets you immutably store and then process activity events without regard to their origin: AWS, on-premises servers, and so forth. All of this power is available to you with a single API call: PutAuditEvents. We launched AWS CloudTrail Lake about a year ago. It is a managed organization-scale data lake that aggregates, immutably stores, and allows querying of events recorded by CloudTrail. You can use it for auditing, security investigation, and troubleshooting. Again, my colleague Channy wrote a post with the details.
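As a sketch of what a non-AWS activity event might look like (the channel ARN and all field values below are placeholders, not a complete event schema), the PutAuditEvents payload pairs an ID with a JSON-encoded event body:

```python
import json
import uuid

# Sketch: build a payload for CloudTrail Lake's PutAuditEvents call.
# The channel ARN and event fields are placeholders; you would pass the
# dict to boto3.client("cloudtrail-data").put_audit_events(**payload)

def audit_event(user, action):
    """Wrap one on-premises activity record as a CloudTrail Lake audit event."""
    event_data = {
        "version": "1.0",
        "userIdentity": {"type": "OnPremisesUser", "principalId": user},
        "eventName": action,
    }
    # Each event carries a caller-supplied ID plus the event as a JSON string.
    return {"id": str(uuid.uuid4()), "eventData": json.dumps(event_data)}

payload = {
    "channelArn": "arn:aws:cloudtrail:us-east-1:123456789012:channel/EXAMPLE",
    "auditEvents": [audit_event("alice", "LoginSuccess")],
}
print(len(payload["auditEvents"]))  # 1
```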
There are three new Amazon CloudWatch metrics for asynchronous AWS Lambda function invocations: AsyncEventsReceived, AsyncEventAge, and AsyncEventsDropped. These metrics provide visibility into asynchronous Lambda function invocations. They help you to identify the root cause of processing issues such as throttling, concurrency limits, function errors, processing latency because of retries, or missing events. You can learn more and access a sample application in this blog post.
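For example, here is a sketch of the parameters one might pass to CloudWatch's GetMetricStatistics to watch the age of queued async events for a hypothetical function named my-function:

```python
from datetime import datetime, timedelta, timezone

# Sketch: parameters for CloudWatch's GetMetricStatistics call reading the
# new AsyncEventAge metric for a hypothetical Lambda function. You would
# pass the dict to boto3.client("cloudwatch").get_metric_statistics(**params)

def async_event_age_params(function_name, hours=1):
    """Build a GetMetricStatistics request for the AsyncEventAge metric."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/Lambda",
        "MetricName": "AsyncEventAge",
        "Dimensions": [{"Name": "FunctionName", "Value": function_name}],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": 300,               # 5-minute buckets
        "Statistics": ["Maximum"],   # a growing maximum hints at retries
    }

params = async_event_age_params("my-function")
print(params["MetricName"])  # AsyncEventAge
```

A steadily growing maximum for this metric suggests the function is retrying or throttled, which is exactly the situation these metrics were added to surface.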
Amazon Simple Notification Service (Amazon SNS) now supports AWS X-Ray to visualize, analyze, and debug applications. Developers can now trace messages going through Amazon SNS, making it easier to understand or debug microservices or serverless applications.
Amazon EC2 Mac instances now support replacing root volumes for quick instance restoration. Stopping and starting an EC2 Mac instance triggers a scrubbing workflow that can take up to one hour to complete. Now you can swap the root volume of the instance with an EBS snapshot or an AMI, resetting your instance to a previously known state in only 10–15 minutes. This significantly speeds up your CI/CD pipelines.
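As a sketch (the instance and snapshot IDs are placeholders), the swap is a single call to EC2's CreateReplaceRootVolumeTask:

```python
# Sketch: parameters for EC2's CreateReplaceRootVolumeTask, which swaps the
# root volume of a running instance with one restored from a snapshot.
# The IDs are placeholders; you would pass the dict to
#   boto3.client("ec2").create_replace_root_volume_task(**params)

def replace_root_volume_params(instance_id, snapshot_id):
    """Build the request that restores an instance's root volume in place."""
    return {
        "InstanceId": instance_id,
        "SnapshotId": snapshot_id,  # snapshot of a previously known-good state
    }

params = replace_root_volume_params("i-0123456789abcdef0", "snap-0123456789abcdef0")
print(params["InstanceId"])
```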
Amazon Polly launches two new Japanese NTTS voices. Neural Text To Speech (NTTS) produces the most natural and human-like text-to-speech voices possible. You can try these voices in the Polly section of the AWS Management Console. With this addition, according to my count, you can now choose among 52 NTTS voices in 28 languages or language variants (French from France or from Quebec, for example).
The AWS SDK for Java now includes the AWS CRT HTTP Client. The HTTP client is the centerpiece powering our SDKs. Every single AWS API call triggers a network call to our API endpoints. It is therefore important to use a low-footprint and low-latency HTTP client library in our SDKs. AWS created a common HTTP client for all SDKs using the C programming language. We also offer wrappers for 11 programming languages, from C++ to Swift. When you develop in Java, you now have the option to use this common HTTP client. It provides up to 76 percent cold start time reduction on AWS Lambda functions and up to 14 percent less memory usage compared to the Netty-based HTTP client provided by default. My colleague Zoe has more details in her blog post.
X in Y
Jeff started this section a while ago to list the expansion of new services and capabilities to additional Regions. I noticed 10 Regional expansions this week:
- Amazon Kendra now available in Asia Pacific (Tokyo) AWS Region
- AWS DataSync is now available in 3 additional AWS Regions (Asia Pacific (Hyderabad) and Europe (Spain, Zurich))
- AWS announces new AWS Direct Connect location in Kolkata, India
- Amazon EC2 R6gd instances now available in AWS Europe (London) Region
- Amazon GuardDuty now available in AWS Europe (Spain) Region
- Amazon ElastiCache for Redis now supports auto scaling in six new regions (Asia Pacific (Hyderabad, Jakarta, Melbourne), Europe (Spain, Zurich), and Middle East (UAE))
- Amazon EC2 X2idn instances now available in Europe (Zurich) Region
- AWS Mainframe Modernization service is now available in 3 new Regions (US East (Ohio), US West (N. California), and Asia Pacific (Seoul))
- Amazon EC2 M6i instances now available in AWS Asia Pacific (Jakarta)
- AWS Console Mobile Application adds support for new AWS Regions (Asia Pacific (Hyderabad, Melbourne) and Europe (Spain, Zurich))
Other AWS News
This week, I also noticed these AWS news items:
My colleague Mai-Lan shared some impressive customer stories and metrics related to the use and scale of Amazon S3 Glacier. Check it out to learn how to put your cold data to work.
Space is the final (edge) frontier. I read this blog post published on avionweek.com. It explains how AWS helps to deploy AI/ML models on observation satellites to analyze image quality before sending images to Earth, saving up to 40 percent of satellite bandwidth. Interestingly, the main cause of unusable satellite images is…clouds.
Upcoming AWS Events
Check your calendars and sign up for these AWS events:
AWS re:Invent recaps in your area. During the re:Invent week, we had lots of new announcements, and in the coming weeks you can find a recap of all these launches in your area. All the events are posted on this site, so check it regularly to find an event nearby.
AWS re:Invent keynotes, leadership sessions, and breakout sessions are available on demand. I recommend that you check the playlists and find the talks about your favorite topics in one collection.
AWS Summits season will restart in Q2 2023. The dates and locations will be announced here. Paris and Sydney are kicking off the season on April 4th. You can register today to attend these in-person, free events (Paris, Sydney).
Stay Informed
That was my selection for this week! To better keep up with all of this news, do not forget to check out the following resources:
- What’s New with AWS – All AWS announcements. You might want to add the RSS feed to your news reader.
- The Official AWS Podcast – Listen each week for updates on the latest AWS news and deep dives into exciting use cases. There are also official AWS podcasts in your local languages. Check out the ones in French, German, Italian, and Spanish.
- AWS News Blog – This blog.