Monday, September 30, 2024

AWS Weekly Roundup: Jamba 1.5 family, Llama 3.2, Amazon EC2 C8g and M8g instances and more (Sep 30, 2024)

Every week, there’s a new Amazon Web Services (AWS) community event where you can network, learn something new, and immerse yourself in the community. When you’re in a community, everyone grows together, and no one is left behind. Last week was no exception. I can highlight the DACH AWS Community Day, where Viktoria Semaan closed with a talk titled How to Create Impactful Content and Build a Strong Personal Brand, and the Peru User Group, which organized two days of talks and learning opportunities at UGCONF & SERVERLESSDAY 2024, featuring Jeff Barr, who spoke about how to Create Your Own Luck. The community events continue, so check them out at Upcoming AWS Community Days.

Last week’s launches
Here are the launches that got my attention.

Jamba 1.5 family of models by AI21 Labs is now available in Amazon Bedrock – The Jamba 1.5 Large and 1.5 Mini models feature a 256k context window, one of the longest on the market, enabling complex tasks like lengthy document analysis. With native support for structured JSON output, function calling, and document processing, they integrate into enterprise workflows for specialized AI solutions. To learn more, read Jamba 1.5 family of models by AI21 Labs is now available in Amazon Bedrock, visit the AI21 Labs in Amazon Bedrock page, and read the documentation.
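To make this concrete, here’s a sketch of a Converse API request for Jamba 1.5 using the AWS SDK for Python (Boto3). The model ID shown is an assumption based on Amazon Bedrock naming conventions, so check the documentation for the exact identifier:

```python
import json

# Model ID is an assumption based on Amazon Bedrock naming conventions;
# check the Amazon Bedrock documentation for the exact identifier.
MODEL_ID = "ai21.jamba-1-5-large-v1:0"

def build_converse_request(document_text: str, question: str) -> dict:
    """Build a Converse API request that asks Jamba 1.5 a question about
    a long document, taking advantage of the 256K context window."""
    return {
        "modelId": MODEL_ID,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"text": f"Document:\n{document_text}\n\nQuestion: {question}"}
                ],
            }
        ],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

request = build_converse_request(
    "...long contract text...", "Summarize the termination clauses."
)
print(json.dumps(request["inferenceConfig"]))

# With AWS credentials configured, the request could be sent with:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**request)
```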

AWS Lambda now supports Amazon Linux 2023 runtimes in AWS GovCloud (US) Regions – These runtimes offer the latest language features, including Python 3.12, Node.js 20, Java 21, .NET 8, and Ruby 3.3. They have smaller deployment footprints, updated libraries, and a new package manager. Additionally, you can use the Amazon Linux 2023 container base images to build and deploy functions as a container image.

Amazon SageMaker Studio now supports automatic shutdown of idle applications – You can now enable automatic shutdown of inactive JupyterLab and CodeEditor applications using Amazon SageMaker Distribution image v2.0 or newer. Administrators can set idle shutdown times at domain or user profile levels, with optional user customization. This cost control mechanism helps avoid charges for unused instances and is available across all AWS Regions where SageMaker Studio is offered.

Amazon S3 is implementing a default 128 KB minimum object size for S3 Lifecycle transition rules to any S3 storage class – This reduces transition costs for datasets with many small objects by decreasing the number of transition requests. You can override the default and configure a custom minimum object size. Existing rules remain unchanged, but the new default applies to new or modified configurations.
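As a sketch of what the override could look like with the AWS SDK for Python (Boto3), this hypothetical lifecycle rule uses an explicit ObjectSizeGreaterThan filter so that objects as small as 32 KB still transition (the bucket name and rule ID are placeholders):

```python
# Lifecycle rule that overrides the new default minimum object size for
# transitions by setting an explicit ObjectSizeGreaterThan filter.
# The rule ID and storage class choice are for illustration only.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "transition-small-objects",
            "Status": "Enabled",
            "Filter": {
                # Only objects larger than 32 KB (value is in bytes)
                # transition, overriding the 128 KB default minimum.
                "ObjectSizeGreaterThan": 32 * 1024
            },
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER_IR"}
            ],
        }
    ]
}

# With AWS credentials configured, apply it with:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-bucket",
#     LifecycleConfiguration=lifecycle_configuration,
# )
```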

AWS Lake Formation centralized access control for Amazon Redshift data sharing is now available in 11 additional Regions – This update enables granular permissions management, including table-, column-, and row-level access to shared Amazon Redshift data. It also supports tag-based access control and trusted identity propagation with AWS IAM Identity Center for improved security and simplified management.

Llama 3.2 generative AI models now available in Amazon Bedrock – The collection includes 90B and 11B parameter multimodal models for sophisticated reasoning tasks, and 3B and 1B text-only models for edge devices. These models support vision tasks, a 128K context length, and multilingual capabilities in eight languages, with improved performance and a design focused on responsible AI innovation across various applications. Learn more about it in Introducing Llama 3.2 models from Meta in Amazon Bedrock.

Share AWS End User Messaging SMS resources across multiple AWS accounts – You can use AWS Resource Access Manager (AWS RAM) to share phone numbers, sender IDs, phone pools, and opt-out lists. Additionally, Amazon SNS now delivers SMS text messages through AWS End User Messaging, offering enhanced features like two-way messaging and granular permissions. These updates provide greater flexibility and control for SMS messaging across AWS services.

AWS Serverless Application Repository now supports AWS PrivateLink – This enables direct connection from your Amazon Virtual Private Cloud (Amazon VPC) without internet exposure, enhancing security by keeping communication within the AWS network. Available in all Regions where AWS Serverless Application Repository is offered, it can be set up using the AWS Management Console or AWS Command Line Interface (AWS CLI).

Amazon SageMaker with MLflow now supports AWS PrivateLink for secure traffic routing – This enables secure data transfer from your Amazon Virtual Private Cloud (Amazon VPC) to MLflow Tracking Servers within the AWS network, protecting sensitive information by avoiding public internet exposure. Available in most AWS Regions, it improves security for machine learning (ML) and generative AI experimentation using MLflow.

Introducing Amazon EC2 C8g and M8g Instances – Enhanced performance for compute-intensive and general-purpose workloads. With up to three times more vCPUs, three times more memory, 75 percent more memory bandwidth, and two times more L2 cache than comparable Graviton3-based instances, these instances improve data processing, scalability, and cost-efficiency for various applications including high performance computing (HPC), batch processing, and microservices. Read more in Run your compute-intensive and general purpose workloads sustainably with the new Amazon EC2 C8g, M8g instances.

Llama 3.2 models are now available in Amazon SageMaker JumpStart – These models offer various sizes from 1B to 90B parameters, support multimodal tasks, including image reasoning, and are more efficient for AI workloads. The 1B and 3B models can be fine-tuned, while Llama Guard 3 11B Vision supports responsible innovation and system-level safety. Learn more in Llama 3.2 models from Meta are now available in Amazon SageMaker JumpStart.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS news
Here are some additional projects, blog posts, and news items that you might find interesting:

Deploy generative AI agents in your contact center for voice and chat using Amazon Connect, Amazon Lex, and Amazon Bedrock Knowledge Bases – This solution enables low-latency customer interactions, answering queries from a knowledge base. Features include conversation analytics, automated testing, and hallucination detection in a serverless architecture.

How AWS WAF threat intelligence features help protect the player experience for betting and gaming customers – AWS WAF enhances bot protection for betting and gaming. New features include browser fingerprinting, automation detection, and ML models to identify coordinated bots. These tools combat scraping, fraud, distributed denial of service (DDoS) attacks, and cheating, safeguarding player experiences.

How to migrate 3DES keys from a FIPS to a non-FIPS AWS CloudHSM cluster – Learn how to securely transfer Triple Data Encryption Algorithm (3DES) keys from FIPS-mode hsm1 clusters to non-FIPS hsm2 clusters using RSA-AES key wrapping, without using backups. This enables you to use the new hsm2.medium instances, which offer FIPS 140-3 Level 3 support, a non-FIPS mode, increased key capacity, and mutual TLS (mTLS).

Upcoming AWS events
Check your calendars and sign up for upcoming AWS events:

AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. These events offer technical sessions, demonstrations, and workshops delivered by experts. There is only one event left that you can still register for: Ottawa (October 9).

AWS Community Days – Join community-led conferences featuring technical discussions, workshops, and hands-on labs driven by expert AWS users and industry leaders from around the world. Upcoming AWS Community Days are scheduled for October 3 in the Netherlands and Romania, and on October 5 in Jaipur, Mexico, Bolivia, Ecuador, and Panama. I’m happy to share with you that I will be joining the Panama community on October 5.

AWS GenAI Lofts – Collaborative spaces and immersive experiences that showcase AWS’s expertise with the cloud and AI, while providing startups and developers with hands-on access to AI products and services, exclusive sessions with industry leaders, and valuable networking opportunities with investors and peers. Find a GenAI Loft location near you and don’t forget to register. I’ll be in the San Francisco lounge with some demos on October 15 at the Gen AI Developer Day. If you’re attending, feel free to stop by and say hello!

Browse all upcoming AWS-led in-person and virtual events and developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Thanks to Dmytro Hlotenko and Diana Alfaro for the photos of their community events.

Eli

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!



from AWS News Blog https://ift.tt/plIm4Lh
via IFTTT

Wednesday, September 25, 2024

Run your compute-intensive and general purpose workloads sustainably with the new Amazon EC2 C8g, M8g instances

Today we’re announcing general availability of the Amazon Elastic Compute Cloud (Amazon EC2) C8g and M8g instances.

C8g instances are AWS Graviton4 based and are ideal for compute-intensive workloads such as high performance computing (HPC), batch processing, gaming, video encoding, scientific modeling, distributed analytics, CPU-based machine learning (ML) inference, and ad serving.

Also Graviton4 based, M8g instances provide the best price performance for general purpose workloads. M8g instances are ideal for applications such as application servers, microservices, gaming servers, mid-size data stores, and caching fleets.

Now let’s look at some of the improvements available in both of these instances. C8g and M8g instances offer larger instance sizes with up to three times more vCPUs (up to 48xlarge), three times the memory (up to 384 GiB for C8g and up to 768 GiB for M8g), 75 percent more memory bandwidth, and two times more L2 cache than equivalent Graviton3-based C7g and M7g instances. This helps you process larger amounts of data, scale up your workloads, improve time to results, and lower your total cost of ownership (TCO). These instances also offer up to 50 Gbps network bandwidth and up to 40 Gbps Amazon Elastic Block Store (Amazon EBS) bandwidth, compared to up to 30 Gbps network bandwidth and up to 20 Gbps Amazon EBS bandwidth on Graviton3-based instances. Similar to R8g instances, C8g and M8g instances offer two bare metal sizes (metal-24xl and metal-48xl), so you can right-size your instances and deploy workloads that benefit from direct access to physical resources.

The specs for the C8g instances are as follows.

Instance size     vCPUs   Memory (GiB)   Network bandwidth (Gbps)   EBS bandwidth (Gbps)
c8g.medium        1       2              Up to 12.5                 Up to 10
c8g.large         2       4              Up to 12.5                 Up to 10
c8g.xlarge        4       8              Up to 12.5                 Up to 10
c8g.2xlarge       8       16             Up to 15                   Up to 10
c8g.4xlarge       16      32             Up to 15                   Up to 10
c8g.8xlarge       32      64             15                         10
c8g.12xlarge      48      96             22.5                       15
c8g.16xlarge      64      128            30                         20
c8g.24xlarge      96      192            40                         30
c8g.48xlarge      192     384            50                         40
c8g.metal-24xl    96      192            40                         30
c8g.metal-48xl    192     384            50                         40

The specs for the M8g instances are as follows.

Instance size     vCPUs   Memory (GiB)   Network bandwidth (Gbps)   EBS bandwidth (Gbps)
m8g.medium        1       4              Up to 12.5                 Up to 10
m8g.large         2       8              Up to 12.5                 Up to 10
m8g.xlarge        4       16             Up to 12.5                 Up to 10
m8g.2xlarge       8       32             Up to 15                   Up to 10
m8g.4xlarge       16      64             Up to 15                   Up to 10
m8g.8xlarge       32      128            15                         10
m8g.12xlarge      48      192            22.5                       15
m8g.16xlarge      64      256            30                         20
m8g.24xlarge      96      384            40                         30
m8g.48xlarge      192     768            50                         40
m8g.metal-24xl    96      384            40                         30
m8g.metal-48xl    192     768            50                         40

Good to know

  • AWS Graviton4 processors offer enhanced security with always-on memory encryption, dedicated caches for every vCPU, and support for pointer authentication.
  • These instances are built on the AWS Nitro System, which is a rich collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware and software. The Nitro System delivers high performance, high availability, and high security while reducing virtualization overhead.
  • The C8g and M8g instances are ideal for Linux-based workloads including containerized and microservices-based applications such as those running on Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon Elastic Container Service (Amazon ECS), as well as applications written in popular programming languages such as C/C++, Rust, Go, Java, Python, .NET Core, Node.js, Ruby, and PHP.

Available now
C8g and M8g instances are available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Frankfurt) AWS Regions. As usual with Amazon EC2, you pay only for what you use. For more information, see Amazon EC2 Pricing. Check out the collection of AWS Graviton resources to help you start migrating your applications to Graviton instance types. You can also visit the AWS Graviton Fast Start program to begin your Graviton adoption journey.

To learn more, visit our Amazon EC2 instances page, and please send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

– Veliswa



from AWS News Blog https://ift.tt/eM2B8cL
via IFTTT

Introducing Llama 3.2 models from Meta in Amazon Bedrock: A new generation of multimodal vision and lightweight models

In July, we announced the availability of Llama 3.1 models in Amazon Bedrock. Generative AI technology is improving at incredible speed and today, we are excited to introduce the new Llama 3.2 models from Meta in Amazon Bedrock.

Llama 3.2 offers multimodal vision and lightweight models representing Meta’s latest advancement in large language models (LLMs) and providing enhanced capabilities and broader applicability across various use cases. With a focus on responsible innovation and system-level safety, these new models demonstrate state-of-the-art performance on a wide range of industry benchmarks and introduce features that help you build a new generation of AI experiences.

These models are designed to inspire builders with image reasoning and are more accessible for edge applications, unlocking more possibilities with AI.

The Llama 3.2 collection of models is offered in various sizes, from lightweight text-only 1B and 3B parameter models suitable for edge devices to small and medium-sized 11B and 90B parameter models capable of sophisticated reasoning tasks, including multimodal support for high-resolution images. Llama 3.2 11B and 90B are the first Llama models to support vision tasks, with a new model architecture that integrates image encoder representations into the language model. The new models are designed to be more efficient for AI workloads, with reduced latency and improved performance, making them suitable for a wide range of applications.

All Llama 3.2 models support a 128K context length, maintaining the expanded token capacity introduced in Llama 3.1. Additionally, the models offer improved multilingual support across eight languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

In addition to the existing text capable Llama 3.1 8B, 70B, and 405B models, Llama 3.2 supports multimodal use cases. You can now use four new Llama 3.2 models — 90B, 11B, 3B, and 1B — from Meta in Amazon Bedrock to build, experiment, and scale your creative ideas:

Llama 3.2 90B Vision (text + image input) – Meta’s most advanced model, ideal for enterprise-level applications. This model excels at general knowledge, long-form text generation, multilingual translation, coding, math, and advanced reasoning. It also introduces image reasoning capabilities, allowing for image understanding and visual reasoning tasks. This model is ideal for the following use cases: image captioning, image-text retrieval, visual grounding, visual question answering and visual reasoning, and document visual question answering.

Llama 3.2 11B Vision (text + image input) – Well-suited for content creation, conversational AI, language understanding, and enterprise applications requiring visual reasoning. The model demonstrates strong performance in text summarization, sentiment analysis, code generation, and following instructions, with the added ability to reason about images. This model’s use cases are similar to those of the 90B version: image captioning, image-text retrieval, visual grounding, visual question answering and visual reasoning, and document visual question answering.

Llama 3.2 3B (text input) – Designed for applications requiring low-latency inferencing and limited computational resources. It excels at text summarization, classification, and language translation tasks. This model is ideal for the following use cases: mobile AI-powered writing assistants and customer service applications.

Llama 3.2 1B (text input) – The most lightweight model in the Llama 3.2 collection of models, perfect for retrieval and summarization for edge devices and mobile applications. This model is ideal for the following use cases: personal information management and multilingual knowledge retrieval.

In addition, Llama 3.2 is built on top of the Llama Stack, a standardized interface for building canonical toolchain components and agentic applications, making building and deploying easier than ever. Llama Stack API adapters and distributions are designed to leverage the Llama model capabilities most effectively, and they give customers the ability to benchmark Llama models across different vendors.

Meta has tested Llama 3.2 on over 150 benchmark datasets spanning multiple languages and conducted extensive human evaluations, demonstrating competitive performance with other leading foundation models. Let’s see how these models work in practice.

Using Llama 3.2 models in Amazon Bedrock
To get started with Llama 3.2 models, I navigate to the Amazon Bedrock console and choose Model access on the navigation pane. There, I request access for the new Llama 3.2 models: Llama 3.2 1B, 3B, 11B Vision, and 90B Vision.

To test the new vision capability, I open another browser tab and download the Share of electricity generated by renewables chart in PNG format from the Our World in Data website. The chart is very high resolution, so I resize it to be 1,024 pixels wide.

Back in the Amazon Bedrock console, I choose Chat under Playgrounds in the navigation pane, select Meta as the category, and choose the Llama 3.2 90B Vision model.

I use Choose files to select the resized chart image and use this prompt:

Based on this chart, which countries in Europe have the highest share?

I choose Run and the model analyzes the image and returns its results:

Using Meta Llama 3.2 models in the Amazon Bedrock console

I can also access the models programmatically using the AWS Command Line Interface (AWS CLI) and AWS SDKs. Compared to using the Llama 3.1 models, I only need to update the model IDs as described in the documentation. I can also use the new cross-region inference endpoint for the US and the EU Regions. These endpoints work for any Region within the US and the EU respectively. For example, the cross-region inference endpoints for the Llama 3.2 90B Vision model are:

  • us.meta.llama3-2-90b-instruct-v1:0
  • eu.meta.llama3-2-90b-instruct-v1:0

Here’s a sample AWS CLI command using the Amazon Bedrock Converse API. I use the --query parameter of the CLI to filter the result and only show the text content of the output message:

aws bedrock-runtime converse --messages '[{ "role": "user", "content": [ { "text": "Tell me the three largest cities in Italy." } ] }]' --model-id us.meta.llama3-2-90b-instruct-v1:0 --query 'output.message.content[*].text' --output text

In the output, I get the response message from the "assistant".

The three largest cities in Italy are:

1. Rome (Roma) - population: approximately 2.8 million
2. Milan (Milano) - population: approximately 1.4 million
3. Naples (Napoli) - population: approximately 970,000

It’s not much different if you use one of the AWS SDKs. For example, here’s how you can use Python with the AWS SDK for Python (Boto3) to analyze the same image as in the console example:

import boto3

MODEL_ID = "us.meta.llama3-2-90b-instruct-v1:0"
# MODEL_ID = "eu.meta.llama3-2-90b-instruct-v1:0"

IMAGE_NAME = "share-electricity-renewable-small.png"

bedrock_runtime = boto3.client("bedrock-runtime")

with open(IMAGE_NAME, "rb") as f:
    image = f.read()

user_message = "Based on this chart, which countries in Europe have the highest share?"

messages = [
    {
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": image}}},
            {"text": user_message},
        ],
    }
]

response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=messages,
)
response_text = response["output"]["message"]["content"][0]["text"]
print(response_text)

Llama 3.2 models are also available in Amazon SageMaker JumpStart, a machine learning (ML) hub that makes it easy to deploy pre-trained models using the console or programmatically through the SageMaker Python SDK. From SageMaker JumpStart, you can also access and deploy new safeguard models that can help classify the safety level of model inputs (prompts) and outputs (responses), including Llama Guard 3 11B Vision, which are designed to support responsible innovation and system-level safety.

In addition, you can easily fine-tune Llama 3.2 1B and 3B models with SageMaker JumpStart today. Fine-tuned models can then be imported as custom models into Amazon Bedrock. Fine-tuning for the full collection of Llama 3.2 models in Amazon Bedrock and Amazon SageMaker JumpStart is coming soon.

The publicly available weights of Llama 3.2 models make it easier to deliver tailored solutions for custom needs. For example, you can fine-tune a Llama 3.2 model for a specific use case and bring it into Amazon Bedrock as a custom model, potentially outperforming other models in domain-specific tasks. Whether you’re fine-tuning for enhanced performance in areas like content creation, language understanding, or visual reasoning, Llama 3.2’s availability in Amazon Bedrock and SageMaker empowers you to create unique, high-performing AI capabilities that can set your solutions apart.

More on Llama 3.2 model architecture
Llama 3.2 builds upon the success of its predecessors with an advanced architecture designed for optimal performance and versatility:

Auto-regressive language model – At its core, Llama 3.2 uses an optimized transformer architecture, allowing it to generate text by predicting the next token based on the previous context.

Fine-tuning techniques – The instruction-tuned versions of Llama 3.2 employ two key techniques:

  • Supervised fine-tuning (SFT) – This process adapts the model to follow specific instructions and generate more relevant responses.
  • Reinforcement learning with human feedback (RLHF) – This advanced technique aligns the model’s outputs with human preferences, enhancing helpfulness and safety.

Multimodal capabilities – For the 11B and 90B Vision models, Llama 3.2 introduces a novel approach to image understanding:

  • Separately trained image reasoning adaptor weights are integrated with the core LLM weights.
  • These adaptors are connected to the main model through cross-attention mechanisms. Cross-attention allows one section of the model to focus on relevant parts of another component’s output, enabling information flow between different sections of the model.
  • When an image is input, the model treats the image reasoning process as a “tool use” operation, allowing for sophisticated visual analysis alongside text processing. In this context, tool use is the generic term used when a model uses external resources or functions to augment its capabilities and complete tasks more effectively.
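As a toy illustration of the cross-attention idea described above, here is a minimal pure-Python sketch of scaled dot-product attention. This is not Meta’s actual implementation; it only shows how query vectors from one component can attend over another component’s outputs:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(query, keys, values):
    """Scaled dot-product attention: a query (e.g. a text token
    representation) attends over another component's outputs
    (keys/values, e.g. image encoder representations)."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, ks)) / math.sqrt(d) for ks in keys]
    weights = softmax(scores)
    # Weighted sum of value vectors: this is how information flows from
    # the image encoder side into the language model side.
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]
    return out, weights

# Tiny example: a 2-dimensional text query attending over three "image" vectors.
out, weights = cross_attention(
    [1.0, 0.0],
    keys=[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
    values=[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
)
print(round(sum(weights), 6))  # attention weights always sum to 1
```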

Optimized inference – All models support grouped-query attention (GQA), which enhances inference speed and efficiency, particularly beneficial for the larger 90B model.

This architecture enables Llama 3.2 to handle a wide range of tasks, from text generation and understanding to complex reasoning and image analysis, all while maintaining high performance and adaptability across different model sizes.

Things to know
Llama 3.2 models from Meta are now generally available in Amazon Bedrock in the following AWS Regions:

  • Llama 3.2 1B and 3B models are available in the US West (Oregon) and Europe (Frankfurt) Regions, and are available in the US East (Ohio, N. Virginia) and Europe (Ireland, Paris) Regions via cross-region inference.
  • Llama 3.2 11B Vision and 90B Vision models are available in the US West (Oregon) Region, and are available in the US East (Ohio, N. Virginia) Regions via cross-region inference.

Check the full AWS Region list for future updates. To estimate your costs, visit the Amazon Bedrock pricing page.

To learn more about how you can use Llama 3.2 11B and 90B models to support vision tasks, read the Vision use cases with Llama 3.2 11B and 90B models from Meta blog on the AWS Machine Learning blog channel.

To learn more about Llama 3.2 features and capabilities, visit the Llama models section of the Amazon Bedrock documentation. Give Llama 3.2 a try in the Amazon Bedrock console today, and send feedback to AWS re:Post for Amazon Bedrock.

You can find deep-dive technical content and discover how our Builder communities are using Amazon Bedrock at community.aws. Let us know what you build with Llama 3.2 in Amazon Bedrock!

Danilo



from AWS News Blog https://ift.tt/K9nkJPw
via IFTTT

Monday, September 23, 2024

AWS Weekly Roundup: Amazon EC2 X8g Instances, Amazon Q generative SQL for Amazon Redshift, AWS SDK for Swift, and more (Sep 23, 2024)

AWS Community Days have been in full swing around the world. I am going to put the spotlight on AWS Community Day Argentina, where Jeff Barr delivered the keynote and talks, and shared his nuggets of wisdom with the community, including a fun story of how he once followed Bill Gates to a McDonald’s!

I encourage you to read about his experience.

Last week’s launches
Here are the launches that got my attention, starting off with the GA releases.

Amazon EC2 X8g Instances are now generally available – X8g instances are powered by AWS Graviton4 processors and deliver up to 60% better performance than AWS Graviton2-based Amazon EC2 X2gd instances. These instances offer larger sizes with up to 3x more vCPUs (up to 48xlarge) and up to 3x more memory (up to 3 TiB) than Graviton2-based X2gd instances.

Amazon Q generative SQL for Amazon Redshift is now generally available – Amazon Q generative SQL is available in Amazon Redshift Query Editor, an out-of-the-box web-based SQL editor for Amazon Redshift. It uses generative AI to analyze user intent, query patterns, and schema metadata to identify common SQL query patterns directly within Amazon Redshift, accelerating the query authoring process and reducing the time required to derive actionable data insights.

AWS SDK for Swift is now generally available – The AWS SDK for Swift provides a modern, user-friendly, and native Swift interface for accessing Amazon Web Services from Apple platforms, AWS Lambda, and Linux-based Swift on Server applications. Now that it’s generally available, customers can use the AWS SDK for Swift for production workloads. Learn more in the AWS SDK for Swift Developer Guide.

AWS Amplify now supports long-running tasks with asynchronous server-side function calls – Developers can use AWS Amplify to invoke AWS Lambda functions asynchronously for operations like generative AI model inferences, batch processing jobs, or message queuing without blocking the GraphQL API response. This improves responsiveness and scalability, especially for scenarios where immediate responses are not required or where long-running tasks need to be offloaded.

Amazon Keyspaces (for Apache Cassandra) now supports add-column for multi-Region tables – With this launch, you can modify the schema of your existing multi-Region tables in Amazon Keyspaces (for Apache Cassandra) to add new columns. You only have to modify the schema in one of its replica Regions and Keyspaces will replicate the new schema to the other Regions where the table exists.
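As a rough sketch of what that schema change could look like with the AWS SDK for Python (Boto3); the keyspace, table, and column names here are hypothetical, and the exact parameter shapes should be checked against the Amazon Keyspaces API reference:

```python
# Parameters for adding a column to an existing multi-Region table.
# You only apply this in one replica Region; Amazon Keyspaces replicates
# the schema change to the other Regions where the table exists.
# Keyspace, table, and column names are placeholders for illustration.
params = {
    "keyspaceName": "example_keyspace",
    "tableName": "example_table",
    "addColumns": [
        {"name": "loyalty_tier", "type": "text"},
    ],
}

# With AWS credentials configured:
# import boto3
# keyspaces = boto3.client("keyspaces")
# keyspaces.update_table(**params)
```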

Amazon Corretto 23 is now generally available – Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK. Corretto 23 is an OpenJDK 23 feature release that includes an updated Vector API, expanded pattern matching and switch expressions, and more. It will be supported through April 2025.

Use OR1 instances for existing Amazon OpenSearch Service domains – With OpenSearch 2.15, you can leverage OR1 instances for your existing Amazon OpenSearch Service domains by simply updating your existing domain configuration, and choosing OR1 instances for data nodes. This will seamlessly move domains running OpenSearch 2.15 to OR1 instances using a blue/green deployment.

Amazon S3 Express One Zone now supports AWS KMS with customer managed keys – By default, S3 Express One Zone encrypts all objects with server-side encryption using S3 managed keys (SSE-S3). With S3 Express One Zone support for customer managed keys, you have more options to encrypt and manage the security of your data. S3 Bucket Keys are always enabled when you use SSE-KMS with S3 Express One Zone, at no additional cost.
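As a sketch of what an upload with a customer managed key could look like using the AWS SDK for Python (Boto3); the bucket name, object key, and KMS key ARN below are placeholders:

```python
# Keyword arguments for uploading an object to an S3 Express One Zone
# directory bucket with server-side encryption using a customer managed
# AWS KMS key (SSE-KMS). All names and ARNs are illustrative.
put_object_kwargs = {
    "Bucket": "example-bucket--usw2-az1--x-s3",  # directory bucket name is illustrative
    "Key": "reports/2024-09.csv",
    "Body": b"col1,col2\n1,2\n",
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID",
    # S3 Bucket Keys are always enabled with SSE-KMS on S3 Express One Zone,
    # at no additional cost.
}

# With AWS credentials configured:
# import boto3
# s3 = boto3.client("s3")
# s3.put_object(**put_object_kwargs)
```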

Use AWS Chatbot to interact with Amazon Bedrock agents from Microsoft Teams and Slack – Before, customers had to develop custom chat applications in Microsoft Teams or Slack and integrate them with Amazon Bedrock agents. Now they can invoke their Amazon Bedrock agents from chat channels by connecting the agent alias with an AWS Chatbot channel configuration.

AWS CodeBuild support for managed GitLab runners – Customers can configure their AWS CodeBuild projects to receive GitLab CI/CD job events and run them on ephemeral hosts. This feature allows GitLab jobs to integrate natively with AWS, providing security and convenience through features such as IAM, AWS Secrets Manager, AWS CloudTrail, and Amazon VPC.

We launched existing services in additional Regions:

Other AWS news
Here are some additional projects, blog posts, and news items that you might find interesting:

Secure Cross-Cluster Communication in EKS – This post demonstrates how you can use Amazon VPC Lattice and Pod Identity to secure cross-EKS-cluster application communication, along with an example that you can adapt to your own microservices applications.

Improve RAG performance using Cohere Rerank – This post focuses on improving search efficiency and accuracy in RAG systems using Cohere Rerank.

AWS open source news and updates – My colleague Ricardo Sueiras writes about open source projects, tools, and events from the AWS Community; check out Ricardo’s page for the latest updates.

Upcoming AWS events
Check your calendars and sign up for upcoming AWS events:

AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world. Upcoming AWS Community Days are in Italy (Sep. 27), Taiwan (Sep. 28), Saudi Arabia (Sep. 28), Netherlands (Oct. 3), and Romania (Oct. 5).

Browse all upcoming AWS-led in-person and virtual events and developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— Abhishek

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!



from AWS News Blog https://ift.tt/pFdyvRV
via IFTTT

Mastering Amazon’s Near Real-Time Offer Data

Leverage Near Real-Time Price Updates, BuyBox Insights, and Competitor Analysis to Boost Your Amazon Sales Performance

Amazon provides sellers with continuous data points that deliver valuable insights into everything from pricing changes to inventory levels. These data points enable businesses to make informed decisions quickly, whether adjusting prices, monitoring sales trends, or ensuring products are eligible for the BuyBox.

Rather than manually checking for updates, relying on third-party tools, or waiting for reports, this data can flow directly from Amazon in near real-time. It is first-party, authoritative data straight from Amazon's systems, including product-level price adjustments by competitors, shifts in BuyBox eligibility, and changes in product rankings.

Has A Product Offer Changed On Amazon?

Amazon will send data whenever the top 20 offers for a product change, such as a price adjustment or a change in the BuyBox winner.

What can you do with this data?

  1. Pricing Optimization: By constantly monitoring the landed and BuyBox prices, you can understand changes over time and optimize pricing strategies that balance sell-through and profitability.
  2. BuyBox Eligibility Tracking: Monitoring the BuyBox eligible offers and BuyBox prices allows you to understand when and why you lose the BuyBox. This data can help inform adjustments to your pricing, shipping, and fulfillment strategy to increase your chances of winning.
  3. Competitor Analysis: By analyzing the lowest prices and the number of offers in various conditions, you can identify when competitors are undercutting your prices. This can trigger strategic adjustments in your offers, such as free shipping or discounts, to regain a competitive edge.
  4. Sales Performance Monitoring: Tracking the sales rank in various product categories helps you understand how price changes affect your product’s visibility and performance. A decline in sales rank could indicate the need to lower your price or offer promotional deals.
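As a rough sketch of how an offer-change update might feed one of these decisions, here is a repricing check in Python. The payload field names mirror the general shape of Amazon's offer-change notifications but should be treated as assumptions, not a documented schema; the prices and floor are invented:

```python
# Sketch: react to an offer-change update (hypothetical payload shape).
def should_reprice(payload, my_landed_price, floor_price):
    """Return a suggested new landed price, or None if no action is needed."""
    summary = payload["Summary"]
    # Find the BuyBox price for new-condition offers (assumed field names).
    buybox = next(
        (p for p in summary.get("BuyBoxPrices", []) if p["Condition"] == "New"),
        None,
    )
    if buybox is None:
        return None
    buybox_landed = buybox["LandedPrice"]["Amount"]
    if my_landed_price <= buybox_landed:
        return None  # already at or below the BuyBox price
    # Undercut the BuyBox by one cent, but never go below our floor.
    return max(round(buybox_landed - 0.01, 2), floor_price)

payload = {
    "Summary": {
        "BuyBoxPrices": [
            {"Condition": "New", "LandedPrice": {"Amount": 24.99}},
        ]
    }
}
print(should_reprice(payload, my_landed_price=26.50, floor_price=20.00))  # 24.98
```

A real repricer would add rate limiting and guardrails, but the core decision is this small: compare your landed price to the current BuyBox, then move within your floor.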

What Triggers Cause Offer Change Updates?

There are a few primary types of changes to offers that will trigger new data to be sent by Amazon:

  1. External: A price change from a non-Amazon seller.
  2. Internal: A price change on Amazon’s retail site.
  3. Featured Offer: A change in the BuyBox winner or BuyBox price.

In addition to those changes, a change in the item's condition (e.g., new, used, collectible) will also trigger an update from Amazon.

Now that we know what will trigger Amazon to send data, we can dig into the available data.

What types of offer data are available?

Amazon provides several categories of product and competitive offer data specific to the Amazon marketplace.

Number Of Offers

  1. Condition: The condition of the item (e.g., new, used, collectible).
  2. Fulfillment Channel: Whether the offer is fulfilled by Amazon (FBA) or the seller (MFN).
  3. Offer Count: The total number of offers for the specified condition and fulfillment channel.

Tracking the number of offers for each condition and fulfillment channel helps monitor market competitiveness. For example, many new offers suggest increasing competition, requiring you to revisit your pricing or inventory management.

Lowest Prices

  1. Landed Price: The price of the item plus shipping minus any Amazon Points.
  2. Listing Price: The price of the item before shipping costs.
  3. Shipping: The cost of shipping.

This pricing information provides a view of the cost to the customer. Comparing your landed price against the lowest offer enables you to refine pricing and offer strategies to remain competitive.
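The landed-price arithmetic described above (listing price plus shipping, minus any Amazon Points) is simple enough to capture directly. A minimal illustration, with invented numbers:

```python
# Landed price = listing price + shipping - Amazon Points (as defined above).
def landed_price(listing, shipping, points=0.0):
    return round(listing + shipping - points, 2)

# Compare our offer against the lowest landed price in the market.
ours = landed_price(19.99, 4.50)          # 19.99 + 4.50        -> 24.49
lowest = landed_price(21.00, 0.00, 1.25)  # 21.00 + 0.00 - 1.25 -> 19.75
gap = round(ours - lowest, 2)
print(f"our landed {ours}, lowest {lowest}, gap {gap}")
```

Tracking this gap over time, rather than the listing price alone, is what keeps shipping-inclusive comparisons honest.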

Buy Box Prices

  1. Landed Price: The total price (item price plus shipping, minus points) for the BuyBox-winning offer.
  2. Listing Price: The price of the item for the BuyBox-winning offer.
  3. Shipping: The shipping cost for the BuyBox-winning offer.

Understanding the current BuyBox price allows you to adjust your offer to increase the likelihood of winning the BuyBox. If your landed price is too high compared to the BuyBox price, consider adjusting your listing price or offering free shipping to stay competitive.

ListPrice

  1. Amount: The suggested retail price from the manufacturer.

Comparing your price to the list price helps you gauge how aggressively you are pricing your item. Offering discounts below the list price can make your product more attractive to buyers, especially if competitors are sticking close to the MSRP.

Sales Rankings

  1. Product Category Id: The product category (e.g., lawn_and_garden_display_on_website).
  2. Rank: The rank of the product in that category.

Monitoring sales rank in specific categories helps track the performance of your product over time. If you notice a drop in rank, it might indicate that your product is priced too high, or that competitors are offering more attractive deals.

Number Of BuyBox Eligible Offers

  1. Condition: The condition of the item (e.g., new, used, collectible).
  2. Fulfillment Channel: Whether Amazon or the seller fulfills the offer.
  3. Offer Count: The total number of BuyBox-eligible offers.

Knowing how many offers are BuyBox-eligible gives you a better understanding of the competition. When many offers are eligible, the BuyBox becomes more competitive, requiring more aggressive pricing or fulfillment optimizations.

Understanding Competitive Product Offers

This section provides details about the top 20 competitive offers for the product, including seller ratings, shipping times, and prices.

  1. Seller Id: The identifier for the seller of the offer.
  2. Sub Condition: The sub-condition of the item (e.g., new, mint, very good).
  3. Seller Feedback Rating: Data on the seller’s feedback rating and total feedback count.
  4. Shipping Time: The estimated minimum and maximum shipping times for the offer.
  5. Listing Price: The price of the item.
  6. Shipping: The shipping cost.
  7. Is Fulfilled By Amazon: Indicates whether the offer is fulfilled by Amazon.
  8. Is BuyBox Winner: Indicates whether this offer is currently the BuyBox winner.

Analyze competitive offers to understand how they compare to your own. Offers with better seller feedback ratings or shorter shipping times might win the BuyBox even if their prices are slightly higher. Monitoring whether an offer is fulfilled by Amazon or eligible for Prime can also guide fulfillment decisions to stay competitive.
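A small sketch of that analysis: rank the competitive offers by landed price and tag each with its fulfillment channel and BuyBox status. The field names and values below are illustrative, not a documented schema:

```python
# Sketch: rank competitive offers by landed price (illustrative field names).
offers = [
    {"SellerId": "A1", "ListingPrice": 25.00, "Shipping": 0.00,
     "IsFulfilledByAmazon": True,  "IsBuyBoxWinner": True},
    {"SellerId": "B2", "ListingPrice": 23.50, "Shipping": 3.99,
     "IsFulfilledByAmazon": False, "IsBuyBoxWinner": False},
    {"SellerId": "C3", "ListingPrice": 24.25, "Shipping": 0.00,
     "IsFulfilledByAmazon": True,  "IsBuyBoxWinner": False},
]

# Cheapest landed offer first.
ranked = sorted(offers, key=lambda o: o["ListingPrice"] + o["Shipping"])
for o in ranked:
    landed = o["ListingPrice"] + o["Shipping"]
    tags = [t for t in (
        "FBA" if o["IsFulfilledByAmazon"] else "MFN",
        "BuyBox" if o["IsBuyBoxWinner"] else "",
    ) if t]
    print(o["SellerId"], f"{landed:.2f}", *tags)
```

Note that in this sample the BuyBox winner is not the cheapest landed offer, mirroring the point above that feedback ratings and fulfillment can outweigh a small price gap.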

The Power of Storing Offer Snapshot Data for Long-Term Analysis

Many of these data points, such as pricing changes, offer fluctuations, and BuyBox eligibility, are transitory, meaning they reflect the current state of the market at a particular moment. If you do not store the snapshot, it is gone forever.

Without retaining this information, you cannot look back and see how these metrics evolved over time. Storing this data in your private data warehouse or data lake allows access to the cumulative history of how your offers, prices, and inventory have changed over time. This historical perspective allows for a level of analysis and strategic planning that competitors relying solely on real-time data will not have.

Here are a few opportunities:

  1. Historical Trend Analysis: By capturing data in real-time, you can build daily, weekly, or monthly trends for price fluctuations, inventory levels, and sales rank changes. This enables you to see how your pricing decisions impacted sales or how seasonality affects your product categories.
  2. Performance Benchmarking: Over time, you can use this data to set benchmarks for your performance. By comparing current metrics against previous weeks, months, quarters, or years, you can assess how well your strategies are performing and make data-driven decisions to adjust course if needed.
  3. Competitive Pricing Insights: Analyzing historical pricing data allows you to spot patterns in competitor behavior. For instance, you may notice recurring discounts from competitors at certain times of the year, allowing you to proactively adjust your pricing strategy to stay competitive.
  4. BuyBox Eligibility Monitoring: Monitoring changes in BuyBox ownership and tracking how often you’ve won or lost the BuyBox can reveal opportunities for optimization. By understanding the factors that influenced past BuyBox wins, such as pricing or fulfillment methods, you can refine your approach for future success.
  5. Seasonal and Market Trend Identification: Storing data over long periods allows for detailed analysis of broader market trends. Whether identifying product seasonality or spotting shifts in consumer behavior, long-term data can provide unique insights that are not available from real-time data alone.

Looking back over weeks, months, or even years can offer powerful and actionable insights, turning data into a long-term strategic asset.
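The snapshot-then-aggregate pattern behind these opportunities can be sketched in a few lines, using an in-memory list as a stand-in for a warehouse or lake table (the record names and schema are assumptions for illustration):

```python
# Sketch: store each offer snapshot, then aggregate weekly BuyBox averages.
from collections import defaultdict
from datetime import date

history = []  # in practice: a warehouse/lake table keyed by ASIN and timestamp

def record_snapshot(asin, day, buybox_price):
    history.append({"asin": asin, "day": day, "buybox": buybox_price})

def weekly_average(asin):
    """Average BuyBox price per ISO (year, week) for one product."""
    buckets = defaultdict(list)
    for row in history:
        if row["asin"] == asin:
            iso = row["day"].isocalendar()  # (year, week, weekday)
            buckets[(iso[0], iso[1])].append(row["buybox"])
    return {wk: round(sum(v) / len(v), 2) for wk, v in sorted(buckets.items())}

record_snapshot("B00X", date(2024, 9, 2), 24.99)
record_snapshot("B00X", date(2024, 9, 4), 23.49)
record_snapshot("B00X", date(2024, 9, 9), 25.99)
print(weekly_average("B00X"))
```

The same grouping generalizes to daily or monthly buckets, sales-rank trends, and BuyBox win rates; the essential step is recording every snapshot before it is replaced by the next one.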

Openbridge: A Code-Free, Fully Automated Solution

Teams seeking a rapid, hassle-free implementation with minimal setup time and no coding required will find Openbridge the perfect fit.

  • Automation Accelerates Insights: Seamlessly integrate data with tools like Tableau, Looker, Power BI, Looker Data Studio, and AWS Quicksight for forecasting, analysis, reporting, and marketing efforts.
  • Unify Data in Your Cloud Warehouse: Consolidate data in a trusted, private data lake or cloud warehouse such as Snowflake, Azure Data Lake, AWS Redshift, Databricks, AWS Athena, and Google BigQuery.

Openbridge provides a code-free, fully automated solution that swiftly and securely gets your offer data up and running.

Ready to harness the power of your Amazon data?

Sign Up Now for Your Free 30-day Trial For Amazon Notification API Offer Data Automation.


Mastering Amazon’s Near Real-Time Offer Data was originally published in Openbridge on Medium, where people are continuing the conversation by highlighting and responding to this story.



from Openbridge - Medium https://ift.tt/Bl9Er8M
via Openbridge

Thursday, September 19, 2024

AWS named as a Leader in the 2024 Gartner Magic Quadrant for Desktop as a Service (DaaS)

The 2024 Gartner Magic Quadrant for DaaS (Desktop as a Service) positions AWS as a Leader for the first time. Last year we were recognized as a Challenger. We believe this is a result of our commitment to meet a wide range of customer needs by delivering a diverse portfolio of virtual desktop services with license portability (including Microsoft 365 Apps for Enterprise), our geographic strategy, and operational capabilities focused on cost optimization and automation. Also, our focus on easy-to-use interfaces for managing each aspect of our virtual desktop services means that our customers rarely need to make use of third-party tools.

You can access the complete 2024 Gartner Magic Quadrant for Desktop as a Service (DaaS) to learn more.

[Figure: 2024 Gartner Magic Quadrant for Desktop as a Service (DaaS)]

AWS DaaS Offerings
Let’s take a quick look at our lineup of DaaS offerings (part of our End User Computing portfolio):

Amazon WorkSpaces Family – Originally launched in early 2014 and enhanced frequently ever since, Amazon WorkSpaces gives you a desktop computing environment running Microsoft Windows, Ubuntu, Amazon Linux, or Red Hat Enterprise Linux in the cloud. Designed to support remote & hybrid workers, knowledge workers, developer workstations, and learning environments, WorkSpaces is available in sixteen AWS Regions, in your choice of six bundle sizes, including the GPU-equipped Graphics G4dn bundle. WorkSpaces Personal gives each user a persistent desktop — perfect for developers, knowledge workers, and others who need to install apps and save files or data. If your users do not need persistent desktops (often the case for contact centers, training, virtual learning, and back office access) you can use WorkSpaces Pools to simplify management and reduce costs. WorkSpaces Core provides managed virtual desktop infrastructure that is designed to work with third-party VDI solutions such as those from Citrix, Leostream, Omnissa, and Workspot.

Amazon WorkSpaces clients are available for desktops and tablets, with web access (Amazon WorkSpaces Secure Browser) and the Amazon WorkSpaces Thin Client providing even more choices. If you have the appropriate Windows 10 or 11 desktop license from Microsoft, you can bring your own license to the cloud (also known as BYOL), where it will run on hardware that is dedicated to you.

You can read about the Amazon WorkSpaces Family and review the WorkSpaces Features to learn more about what WorkSpaces has to offer.

Amazon AppStream 2.0 – Launched in late 2016, Amazon AppStream gives you instant, streamed access to SaaS applications and desktop applications without writing code or refactoring the application. You can easily scale applications and make them available to users across the globe without the need to manage any infrastructure. A wide range of compute, memory, storage, GPU, and operating system options let you empower remote workers, while also taking advantage of auto-scaling to avoid overprovisioning. Amazon AppStream offers three fleet types: Always on (instant connections), On-Demand (2 minutes to launch), and Elastic (for unpredictable demand). Pricing varies by type, with per second and per hour granularity for Windows and Linux; read Amazon AppStream 2.0 Pricing to learn more.

Jeff;

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from AWS.



from AWS News Blog https://ift.tt/Q3gyiAR
via IFTTT

Wednesday, September 18, 2024

Now available: Graviton4-powered memory-optimized Amazon EC2 X8g instances

Graviton4-powered, memory-optimized X8g instances are now available in ten virtual sizes and two bare metal sizes, with up to 3 TiB of DDR5 memory and up to 192 vCPUs. The X8g instances are our most energy efficient to date, with the best price performance and scale-up capability of any comparable EC2 Graviton instance. With a 16 to 1 ratio of memory to vCPU, these instances are designed for Electronic Design Automation, in-memory databases & caches, relational databases, real-time analytics, and memory-constrained microservices. The instances fully encrypt all high-speed physical hardware interfaces and also include additional AWS Nitro System and Graviton4 security features.

Over 50K AWS customers already make use of the existing roster of over 150 Graviton-powered instances. They run a wide variety of applications including Valkey, Redis, Apache Spark, Apache Hadoop, PostgreSQL, MariaDB, MySQL, and SAP HANA Cloud. Because they are available in twelve sizes, the new X8g instances are an even better host for these applications by allowing you to choose between scaling up (using a bigger instance) and scaling out (using more instances), while also providing additional flexibility for existing memory-bound workloads that are currently running on distinct instances.

The Instances
When compared to the previous generation (X2gd) instances, the X8g instances offer 3x more memory, 3x more vCPUs, more than twice as much EBS bandwidth (40 Gbps vs 19 Gbps), and twice as much network bandwidth (50 Gbps vs 25 Gbps).

The Graviton4 processors inside the X8g instances have twice as much L2 cache per core as the Graviton2 processors in the X2gd instances (2 MiB vs 1 MiB) along with 160% higher memory bandwidth, and can deliver up to 60% better compute performance.

The X8g instances are built using the 5th generation of AWS Nitro System and Graviton4 processors, which incorporates additional security features including Branch Target Identification (BTI) which provides protection against low-level attacks that attempt to disrupt control flow at the instruction level. To learn more about this and Graviton4’s other security features, read How Amazon’s New CPU Fights Cybersecurity Threats and watch the re:Invent 2023 AWS Graviton session.

Here are the specs:

Instance Name    vCPUs  Memory (DDR5)  EBS Bandwidth  Network Bandwidth
x8g.medium           1         16 GiB  Up to 10 Gbps    Up to 12.5 Gbps
x8g.large            2         32 GiB  Up to 10 Gbps    Up to 12.5 Gbps
x8g.xlarge           4         64 GiB  Up to 10 Gbps    Up to 12.5 Gbps
x8g.2xlarge          8        128 GiB  Up to 10 Gbps      Up to 15 Gbps
x8g.4xlarge         16        256 GiB  Up to 10 Gbps      Up to 15 Gbps
x8g.8xlarge         32        512 GiB        10 Gbps            15 Gbps
x8g.12xlarge        48        768 GiB        15 Gbps          22.5 Gbps
x8g.16xlarge        64      1,024 GiB        20 Gbps            30 Gbps
x8g.24xlarge        96      1,536 GiB        30 Gbps            40 Gbps
x8g.48xlarge       192      3,072 GiB        40 Gbps            50 Gbps
x8g.metal-24xl      96      1,536 GiB        30 Gbps            40 Gbps
x8g.metal-48xl     192      3,072 GiB        40 Gbps            50 Gbps

The instances support ENA, ENA Express, and EFA Enhanced Networking. As you can see from the table above, they provide a generous amount of EBS bandwidth, and they support all EBS volume types, including io2 Block Express, EBS General Purpose SSD, and EBS Provisioned IOPS SSD.
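Because the memory-to-vCPU ratio is fixed at 16 GiB per vCPU, picking the smallest X8g size that fits a workload is simple arithmetic. A small sketch using the vCPU counts from the table above (the helper name is mine, not an AWS API):

```python
# Pick the smallest X8g size that satisfies a memory requirement, using the
# fixed 16 GiB-per-vCPU ratio and the vCPU counts from the spec table.
X8G_VCPUS = {"medium": 1, "large": 2, "xlarge": 4, "2xlarge": 8,
             "4xlarge": 16, "8xlarge": 32, "12xlarge": 48, "16xlarge": 64,
             "24xlarge": 96, "48xlarge": 192}

def smallest_x8g(memory_gib):
    for size, vcpus in X8G_VCPUS.items():  # dicts preserve insertion order
        if vcpus * 16 >= memory_gib:
            return f"x8g.{size}"
    return None  # beyond 3 TiB: scale out with more instances instead

print(smallest_x8g(200))   # x8g.4xlarge (256 GiB)
print(smallest_x8g(2048))  # x8g.48xlarge (3,072 GiB)
```

This is the scale-up half of the scale-up versus scale-out choice mentioned earlier; a `None` result is the signal to shard the workload across instances.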

X8g Instances in Action
Let’s take a look at some applications and use cases that can make use of 16 GiB of memory per vCPU and/or up to 3 TiB per instance:

Databases – X8g instances allow SAP HANA and SAP Data Analytics Cloud to handle larger and more ambitious workloads than before. Running on Graviton4 powered instances, SAP has measured up to 25% better performance for analytical workloads and up to 40% better performance for transactional workloads in comparison to the same workloads running on Graviton3 instances. X8g instances allow SAP to expand their Graviton-based usage to even larger memory bound solutions.

Electronic Design Automation – EDA workloads are central to the process of designing, testing, verifying, and taping out new generations of chips, including Graviton, Trainium, Inferentia, and those that form the building blocks for the Nitro System. AWS and many other chip makers have adopted the AWS Cloud for these workloads, taking advantage of scale and elasticity to supply each phase of the design process with the appropriate amount of compute power. This allows engineers to innovate faster because they are not waiting for results. Here’s a long-term snapshot from one of the clusters that was used to support development of Graviton4 in late 2022 and early 2023. As you can see this cluster runs at massive scale, with peaks as high as 5x normal usage:

You can see bursts of daily and weekly activity, and then a jump in overall usage during the tape-out phase. The instances in the cluster are on the large end of the size spectrum so the peaks represent several hundred thousand cores running concurrently. This ability to spin up compute when we need it and down when we don’t gives us access to unprecedented scale without a dedicated investment in hardware.

The new X8g instances will allow us and our EDA customers to run even more workloads on Graviton processors, reducing costs and decreasing energy consumption, while also helping to get new products to market faster than ever.

Available Now
X8g instances are available today in the US East (N. Virginia), US West (Oregon), and Europe (Frankfurt) AWS Regions in On Demand, Spot, Reserved Instance, Savings Plan, Dedicated Instance, and Dedicated Host form. To learn more, visit the X8g page.



from AWS News Blog https://ift.tt/HLmpMcI
via IFTTT

Data engineering professional certificate: New hands-on specialization by DeepLearning.AI and AWS

Data engineers play a crucial role in the modern data-driven landscape, managing essential tasks from data ingestion and processing to transformation and serving. Their expertise is particularly valuable in the era of generative AI, where harnessing the value of vast datasets is paramount.

To empower aspiring and experienced data professionals, DeepLearning.AI and Amazon Web Services (AWS) have partnered to launch the Data Engineering Specialization, an advanced professional certificate on Coursera. This comprehensive program covers a wide range of data engineering concepts, tools, and techniques relevant to modern organizations. It’s designed for learners with some experience working with data who are interested in learning the fundamentals of data engineering. The specialization comprises four hands-on courses, each culminating in a Coursera course certificate upon completion.

Specialization overview

This Data Engineering Specialization is a joint initiative by AWS and DeepLearning.AI, a leading provider of world-class AI education founded by renowned machine learning (ML) pioneer Andrew Ng.

Joe Reis, a prominent figure in data engineering and coauthor of the bestselling book Fundamentals of Data Engineering, leads the program as a primary instructor. By providing a foundational framework, the curriculum ensures learners gain a holistic understanding of the data engineering lifecycle, while covering key aspect such as data architecture, orchestration, DataOps, and data management.

Further enhancing the learning experience, the program features hands-on labs and technical assessments hosted on the AWS Cloud. These practical, cloud-based exercises were designed in partnership with AWS technical experts, including Gal Heyne, Navnit Shukla, and Morgan Willis. Learners will apply theoretical concepts using AWS services and tools, such as Amazon Kinesis, AWS Glue, Amazon Simple Storage Service (Amazon S3), and Amazon Redshift, equipping them with hands-on skill and experience.

Specialization highlights

Participants will be introduced to several key learning opportunities.

Acquisition of core skills and strategies

The specialization equips data engineers with the ability to design data engineering solutions for various use cases, select the right technologies for their data architecture, and avoid common pitfalls. The skills gained apply universally across platforms and technologies, making the program versatile.

Unparalleled approach to data engineering education

Unlike conventional courses focused on specific technologies, this specialization provides a comprehensive understanding of data engineering fundamentals. It emphasizes the importance of aligning data engineering strategies with broader business goals, fostering a more integrated and effective approach to building and maintaining data solutions.

Holistic understanding of data engineering

By drawing on insights from the Fundamentals of Data Engineering book, the curriculum offers a well-rounded education that prepares professionals for success in data-driven industries.

Practical skills through AWS cloud labs

The hands-on labs hosted by AWS Partner Vocareum let learners apply the techniques directly in an AWS environment provided with the course. This practical experience is crucial for mastering the intricacies of data engineering and developing the skills needed to excel in the industry.

Why choose this specialization?

  • Structured learning path–The specialization is thoughtfully structured to provide a step-by-step learning journey, from foundational concepts to advanced applications.
  • Expert insights–Gain insights from the authors of Fundamentals of Data Engineering and other industry experts. Learn how to apply practical knowledge to build modern data architecture on the cloud, using cloud services for data engineering.
  • Hands-on experience–Engage in hands-on labs in the AWS Cloud, where you not only learn but also apply the knowledge in real-world scenarios.
  • Comprehensive curriculum–This program encompasses all aspects of the data engineering lifecycle, including data generation in source systems, ingestion, transformation, storage, and serving. It also addresses key undercurrents of data engineering, such as security, data management, and orchestration.

At the end of this specialization, learners will be well-equipped with the necessary skills and expertise to embark on a career in data engineering, an in-demand role at the core of any organization that is looking to use data to create value. Data-centric ML and analytics would not be possible without the foundation of data engineering.

Course modules

The Data Engineering Specialization comprises four courses:

  • Course 1–Introduction to Data Engineering–This foundational module explores the collaborative nature of data engineering, identifying key stakeholders and understanding their requirements. The course delves into a mental framework for building data engineering solutions, emphasizing holistic ecosystem understanding, critical factors like data quality and scalability, and effective requirements gathering. The course then examines the data engineering lifecycle, illustrating interconnections between stages. By showcasing the AWS data engineering stack, the course teaches how to use the right technologies. By the end of this course, learners will have the skills and mindset to tackle data engineering challenges and make informed decisions.
  • Course 2–Source Systems, Data Ingestion, and Pipelines–In this course, data engineers dive deep into the practical aspects of working with diverse data sources, ingestion patterns, and pipeline construction. Learners explore the characteristics of different data formats and the appropriate source systems for generating each type of data, equipping them with the knowledge to design effective data pipelines. The course covers the fundamentals of relational and NoSQL databases, including ACID compliance and CRUD operations, so that engineers learn to interact with a wide range of data source systems. The course covers the significance of cloud networking, resolving database connection issues, and using message queues and streaming platforms—crucial skills for creating strong and scalable data architectures. By mastering the concepts in this course, data engineers will be able to automate data ingestion processes, optimize connectivity, and establish the foundation for successful data engineering projects.
  • Course 3–Data Storage and Queries–This course equips data engineers with principles and best practices for designing robust, efficient data storage and querying solutions. Learners explore the data lake house concept, implementing a medallion-like architecture and using open table formats to build transactional data lakes. The course enhances SQL proficiency by teaching advanced queries, such as aggregations and joins on streaming data, while also exploring data warehouse and data lake capabilities. Learners compare storage performance and discover optimization strategies, like indexing. Data engineers can achieve high performance and scalability in data services by comprehending query execution and processing.
  • Course 4–Data Modeling, Transformation, and Serving–In this capstone course, data engineers explore advanced data modeling techniques, including data vault and star schemas. Learners differentiate between modeling approaches like Inmon and Kimball, gaining the ability to transform data and structure it for optimal analytical and ML use cases. The course equips data engineers with preprocessing skills for textual, image, and tabular data. Learners understand the distinctions between supervised and unsupervised learning, as well as classification and regression tasks, empowering them to design data solutions supporting a range of predictive applications. By mastering these data modeling, transformation, and serving concepts, data engineers can build robust, scalable, and business-aligned data architectures to deliver maximum value.

Enrollment

Whether you’re new to data engineering or looking to enhance your skills, this specialization provides a balanced mix of theory and hands-on experience through 4 courses, each culminating in a Coursera course certificate.

Embark on your data engineering journey from here:

By enrolling in these courses, you’ll also earn the DeepLearning.AI Data Engineering Professional Certificate upon completing all four courses.

Enroll now and take the first step towards mastering data engineering with this comprehensive and practical program, built on the foundation of Fundamentals of Data Engineering and powered by AWS.



from AWS News Blog https://ift.tt/dtrf7AG
via IFTTT

Tuesday, September 17, 2024

Amazon S3 Express One Zone now supports AWS KMS with customer managed keys

Amazon S3 Express One Zone, a high-performance, single-Availability Zone (AZ) S3 storage class, now supports server-side encryption with AWS Key Management Service (KMS) keys (SSE-KMS). S3 Express One Zone already encrypts all objects stored in S3 directory buckets with Amazon S3 managed keys (SSE-S3) by default. Starting today, you can use AWS KMS customer managed keys to encrypt data at rest, with no impact on performance. This new encryption capability gives you an additional option to meet compliance and regulatory requirements when using S3 Express One Zone, which is designed to deliver consistent single-digit millisecond data access for your most frequently accessed data and latency-sensitive applications.

S3 directory buckets allow you to specify only one customer managed key per bucket for SSE-KMS encryption. Once the customer managed key is added, you cannot edit it to use a new key. On the other hand, with S3 general purpose buckets, you can use multiple KMS keys either by changing the default encryption configuration of the bucket or during S3 PUT requests. When using SSE-KMS with S3 Express One Zone, S3 Bucket Keys are always enabled. S3 Bucket Keys are free and reduce the number of requests to AWS KMS by up to 99%, optimizing both performance and costs.

Using SSE-KMS with Amazon S3 Express One Zone
To show you this new capability in action, I first create an S3 directory bucket in the Amazon S3 console, following the steps to create an S3 directory bucket and using apne1-az4 as the Availability Zone. In Base name, I enter s3express-kms; a suffix that includes the Availability Zone ID is automatically added to create the final name. Then, I select the checkbox to acknowledge that Data is stored in a single Availability Zone.

In the Default encryption section, I choose Server-side encryption with AWS Key Management Service keys (SSE-KMS). Under AWS KMS Key I can Choose from your AWS KMS keys, Enter AWS KMS key ARN, or Create a KMS key. For this example, I previously created an AWS KMS key, which I selected from the list, and then choose Create bucket.

Now, any new object I upload to this S3 directory bucket will be automatically encrypted using my AWS KMS key.
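For reference, the default-encryption settings chosen in the console above correspond to a configuration shaped like the following. The key ARN is a placeholder, and actually applying the configuration (for example, via the PutBucketEncryption API) is outside this sketch:

```python
# Sketch: the default-encryption configuration implied by the console steps,
# in the shape the S3 PutBucketEncryption API expects. Key ARN is a placeholder.
KMS_KEY_ARN = "arn:aws:kms:ap-northeast-1:111122223333:key/EXAMPLE-KEY-ID"

encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": KMS_KEY_ARN,
            },
            # S3 Bucket Keys are always enabled for S3 Express One Zone.
            "BucketKeyEnabled": True,
        }
    ]
}
rule = encryption_config["Rules"][0]
print(rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])  # aws:kms
```

Remember the constraint noted earlier: a directory bucket accepts only one customer managed key, and it cannot be swapped for a different key later.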

SSE-KMS with Amazon S3 Express One Zone in action
To use SSE-KMS with S3 Express One Zone via the AWS Command Line Interface (AWS CLI), you need an AWS Identity and Access Management (IAM) user or role with the following policy. This policy allows the CreateSession API operation, which is necessary to successfully upload and download encrypted files to and from your S3 directory bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3express:CreateSession"
            ],
            "Resource": [
                "arn:aws:s3express:*:<account>:bucket/s3express-kms--apne1-az4--x-s3"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:GenerateDataKey"
            ],
            "Resource": [
                "arn:aws:kms:*:<account>:key/<keyId>"
            ]
        }
    ]
}

With the PutObject command, I upload a new file named confidential-doc.txt to my S3 directory bucket.

aws s3api put-object --bucket s3express-kms--apne1-az4--x-s3 \
--key confidential-doc.txt \
--body confidential-doc.txt

When the command succeeds, I receive the following output:

{
    "ETag": "\"664469eeb92c4218bbdcf92ca559d03b\"",
    "ChecksumCRC32": "0duteA==",
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "arn:aws:kms:ap-northeast-1:<accountId>:key/<keyId>",
    "BucketKeyEnabled": true
}

Checking the object’s properties with the HeadObject command, I see that it’s encrypted using SSE-KMS with the key that I created before:

aws s3api head-object --bucket s3express-kms--apne1-az4--x-s3 \
--key confidential-doc.txt

I get the following output:

{
    "AcceptRanges": "bytes",
    "LastModified": "2024-08-21T15:29:22+00:00",
    "ContentLength": 5,
    "ETag": "\"664469eeb92c4218bbdcf92ca559d03b\"",
    "ContentType": "binary/octet-stream",
    "ServerSideEncryption": "aws:kms",
    "Metadata": {},
    "SSEKMSKeyId": "arn:aws:kms:ap-northeast-1:<accountId>:key/<keyId>",
    "BucketKeyEnabled": true,
    "StorageClass": "EXPRESS_ONEZONE"
}

I download the encrypted object with GetObject:

aws s3api get-object --bucket s3express-kms--apne1-az4--x-s3 \
--key confidential-doc.txt output-confidential-doc.txt

As my session has the necessary permissions, the object is downloaded and decrypted automatically.

{
    "AcceptRanges": "bytes",
    "LastModified": "2024-08-21T15:29:22+00:00",
    "ContentLength": 5,
    "ETag": "\"664469eeb92c4218bbdcf92ca559d03b\"",
    "ContentType": "binary/octet-stream",
    "ServerSideEncryption": "aws:kms",
    "Metadata": {},
    "SSEKMSKeyId": "arn:aws:kms:ap-northeast-1:<accountId>:key/<keyId>",
    "BucketKeyEnabled": true,
    "StorageClass": "EXPRESS_ONEZONE"
}
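To confirm the round trip, I can compare the downloaded file with the original; identical bytes confirm the object was encrypted and decrypted transparently. A small sketch, assuming both files from the previous steps are present:

```shell
# Compare the original file with the decrypted download byte-for-byte.
if cmp -s confidential-doc.txt output-confidential-doc.txt; then
  echo "files match"
else
  echo "files differ"
fi
```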

For this second test, I use a different IAM user whose policy does not grant the necessary KMS key permissions to download the object. The attempt fails with an AccessDenied error, demonstrating that the SSE-KMS encryption is functioning as intended.

An error occurred (AccessDenied) when calling the CreateSession operation: Access Denied

This demonstration shows how SSE-KMS works seamlessly with S3 Express One Zone, providing an additional layer of security while maintaining ease of use for authorized users.

Things to know
Getting started – You can enable SSE-KMS for S3 Express One Zone using the Amazon S3 console, AWS CLI, or AWS SDKs. Set the default encryption configuration of your S3 directory bucket to SSE-KMS and specify your AWS KMS key. Remember, you can only use one customer managed key per S3 directory bucket for its lifetime.

Regions – S3 Express One Zone support for SSE-KMS using customer managed keys is available in all AWS Regions where S3 Express One Zone is currently available.

Performance – Using SSE-KMS with S3 Express One Zone does not impact request latency. You’ll continue to experience the same single-digit millisecond data access.

Pricing – You pay AWS KMS charges to generate and retrieve data keys used for encryption and decryption. Visit the AWS KMS pricing page for more details. In addition, when using SSE-KMS with S3 Express One Zone, S3 Bucket Keys are enabled by default for all data plane operations except for CopyObject and UploadPartCopy, and can’t be disabled. This reduces the number of requests to AWS KMS by up to 99%, optimizing both performance and costs.

AWS CloudTrail integration – You can audit SSE-KMS actions on S3 Express One Zone objects using AWS CloudTrail. Learn more about that in my previous blog post.

– Eli.


