Wednesday, July 31, 2024
Plan your advertising campaigns with Amazon Marketing Cloud on AWS Clean Rooms, now generally available
Today, we are announcing the general availability of Amazon Marketing Cloud (AMC) on AWS Clean Rooms, which helps advertisers combine their first-party signals with unique signals from Amazon Ads. Through this collaboration, advertisers can generate differentiated insights, discover new audiences, and enable advertising campaign planning, activation, and measurement use cases, all without having to move their underlying signals outside of their AWS account. With AMC on AWS Clean Rooms, customers can easily prepare their data, match and create audiences, use custom insights to activate more relevant advertising campaigns with Amazon Ads, and measure return on ad spend. All of this can be accomplished from the most secure cloud computing environment available today.
Advertisers continually strive to reach new audiences and deliver relevant marketing campaigns to better engage their customers. Yet the advertising and marketing landscape is undergoing a fundamental shift with signal loss and fragmentation. As such, advertisers and their partners need to collaborate using signals that are stored across many applications to personalize their advertising campaigns. However, to work with one another to gather insights, companies typically need to share a copy of their signals with their partners, which is often not aligned with the policies of their data governance, security and privacy, IT, and legal teams. As a result, many businesses miss opportunities to fully maximize the value of their first-party signals and improve planning, activation, and measurement outcomes for their campaigns.
AMC on AWS Clean Rooms makes it easier and more scalable for advertisers to use their first-party signals with Amazon Ads, including collaborating across event-level signals and modeling unique audiences to help improve media planning, activation, and outcomes, all without having to move underlying signals outside their cloud environment.
AMC on AWS Clean Rooms prerequisites (environment setup)
To get started with AMC on AWS Clean Rooms, the advertiser needs an AWS account and a dataset that contains user population and event-level data stored in open data formats (CSV, Parquet, or Iceberg) in an Amazon Simple Storage Service (Amazon S3) bucket. The next step is to send an email to the Amazon Ads team to request the creation of an AMC instance. Once an instance has been created, the Amazon Ads team will create an AWS Clean Rooms collaboration and invite the advertiser to join the collaboration.
How it works
1. Join an AWS Clean Rooms collaboration and create an ID namespace.
2. Configure and associate tables to an AMC collaboration.
3. Run an ID mapping workflow to create and populate the ID mapping table.
4. Run a query in AMC.
Walkthrough
1. Join an AWS Clean Rooms collaboration and create an ID namespace.
The advertiser will accept the collaboration invite by creating a membership in their AWS account. Once in the collaboration, the advertiser will access the AWS Clean Rooms console and then select the AWS Entity Resolution ID namespace generated when the collaboration was created to start the process of using their data for matching and collaboration in AWS Clean Rooms. Next, the advertiser will specify the AWS Glue table and the associated schema mapping, and choose an S3 bucket in the same AWS Region as the collaboration for temporarily storing the data while it is processed. Lastly, the advertiser will grant AWS Clean Rooms permission to read the data input from AWS Glue and write to Amazon S3 on their behalf.
In the AirportLink collaboration shown in the following screenshot, the advertiser (member AirportLink2) accepts a collaboration invite sent by member AirportLink1.
2. Configure and associate tables to an AMC collaboration.
After joining the collaboration, the advertiser will create configured tables for their purchase data, add a custom analysis rule, and associate the configured tables with the collaboration.
Within the collaboration, the advertiser will set up a collaboration analysis rule to control which party can receive the result of a query run on the associated table.
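For advertisers who prefer to script this step, here is a minimal sketch using the AWS SDK for Python (Boto3); the table name, columns, account ID, membership ID, and role ARN are hypothetical placeholders.

```python
import boto3

cleanrooms = boto3.client("cleanrooms", region_name="us-east-1")

# Register the advertiser's AWS Glue table as a configured table.
table = cleanrooms.create_configured_table(
    name="advertiser_purchases",
    tableReference={"glue": {"databaseName": "ads_db", "tableName": "purchases"}},
    allowedColumns=["hashed_email", "order_id", "order_date"],
    analysisMethod="DIRECT_QUERY",
)
table_id = table["configuredTable"]["id"]

# Attach a custom analysis rule that controls how the table can be queried.
cleanrooms.create_configured_table_analysis_rule(
    configuredTableIdentifier=table_id,
    analysisRuleType="CUSTOM",
    analysisRulePolicy={
        "v1": {
            "custom": {
                "allowedAnalyses": ["ANY_QUERY"],
                "allowedAnalysisProviders": ["111122223333"],  # hypothetical Amazon Ads account
            }
        }
    },
)

# Associate the configured table with the collaboration membership.
cleanrooms.create_configured_table_association(
    name="advertiser_purchases",
    membershipIdentifier="membership-id",
    configuredTableIdentifier=table_id,
    roleArn="arn:aws:iam::999999999999:role/CleanRoomsAccessRole",
)
```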
3. Run an ID mapping workflow to create and populate the ID mapping table.
Now that the ID namespace is associated with the collaboration, the Amazon Ads team will create an ID mapping table in the AWS Clean Rooms console. This step requires both the advertiser (source) and the Amazon Ads team (target) to associate their ID namespace resources with the collaboration. Amazon Ads will provide the mapping method and configuration, add the query details and a name for the ID mapping table, and grant AWS Clean Rooms permission to run and track the ID mapping workflow job on their behalf. Finally, the Amazon Ads team will select Create and Populate to start the mapping workflow and generate an ID mapping table that captures a common user cohort matched using the rules provided in Step 2.
4. Run a query in AMC.
Advertisers can either use templates or write their own SQL queries to run analyses and get results for further insights. They can run SQL queries in the following ways:
- Run a SQL query with AMC data and the advertiser’s data that returns the results to the advertiser’s S3 bucket using aggregate analysis (a sketch of such a query follows this list). An example question is “How many of the customers who are registered for my email list saw the ads I’m running on Amazon?”
- Run a SQL query that creates an audience from the advertiser’s data or its overlap with AMC signals and returns the results to the S3 bucket of Amazon Ads. An example is generating an audience to target in an ad campaign.
- Run an AWS Clean Rooms ML lookalike modeling job where Amazon Ads contributes the configured model and the advertiser contributes a seed audience. The resulting segment (list of user ad IDs) is sent to Amazon Ads.
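As referenced in the first item above, here is a hedged sketch of what an aggregate overlap query could look like when submitted through the AWS Clean Rooms StartProtectedQuery API; the membership ID, table and column names, and results bucket are hypothetical placeholders.

```python
import boto3

cleanrooms = boto3.client("cleanrooms", region_name="us-east-1")

# Count how many customers on the advertiser's email list saw an ad;
# both table names and the join column are hypothetical.
sql = """
SELECT COUNT(DISTINCT a.user_id) AS matched_customers
FROM advertiser_email_list a
INNER JOIN amc_impressions i ON a.user_id = i.user_id
"""

response = cleanrooms.start_protected_query(
    type="SQL",
    membershipIdentifier="membership-id",  # from the accepted invite
    sqlParameters={"queryString": sql},
    resultConfiguration={
        "outputConfiguration": {
            "s3": {
                "resultFormat": "CSV",
                "bucket": "advertiser-results-bucket",
                "keyPrefix": "amc-results/",
            }
        }
    },
)
print(response["protectedQuery"]["id"])
```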
After running the query, the advertiser can create an audience using a rule-based audience or a similar audience by navigating to the Audience tab in AMC. The output of the audience query will be sent directly to Amazon Demand Side Platform (DSP). The following table shows the options available to you when creating the audience:
| If you want to | Then |
| --- | --- |
| Use pre-built audience templates | Select Create with instructional query from the dropdown list |
| Create custom audience queries | Select Create new query from the dropdown list |
When creating a new query, the advertiser will configure various options such as name, description, and date adjustments. Additionally, the advertiser can choose from the two following audience types:
– Rule-based audience – Create an audience based on the audience query.
– Similar audience – Create machine learning (ML)-based audiences from the seed audience outputs of the audience query.
Now available
AMC on AWS Clean Rooms is available in the US East (N. Virginia) Region. Be sure to check the full Region list for future updates. Learn more about AMC on AWS Clean Rooms in the AWS documentation.
Give it a try by emailing the Amazon Ads team to get started and send feedback to the AWS re:Post for AWS Clean Rooms or through your usual AWS Support contacts.
— Veliswa
from AWS News Blog https://ift.tt/lbZvzLn
via IFTTT
Monday, July 29, 2024
AWS and Multicloud: Existing capabilities & continued enhancements
When I speak to large-scale AWS customers about their challenges and concerns, the conversation often turns to the topic of multicloud. Whether by intent or by accident, these customers sometimes choose to make use of services from more than one cloud provider, sometimes in conjunction with applications or services that are still hosted on-premises. In some cases they made early, bottom-up choices at the team and division level, choosing cloud offerings from multiple vendors in the absence of a top-down mandate. In others, they acquired or merged with another organization and discovered a similar multi-vendor situation.
Regardless of the path, these customers tell me that they want to simplify and centralize their oversight and management of this diverse portfolio of cloud and on-premises resources. It is sometimes the case that the “multi” situation is time-bound, with a plan in place to ultimately consolidate operations in one place. It is also sometimes the case that the customer plans to retain their diverse portfolio.
AWS and multicloud
Our goal with AWS is to make you successful no matter what architectural choices you have made. In this post I want to outline our approach, share some capabilities that our customers have been using over the years, and provide you with an update on some of the more recent service announcements and content that we have created to give you guidance that will help you to succeed.
Our approach is to extend existing AWS operational and management capabilities to work in multicloud and hybrid environments. Because we extend existing capabilities, your investment in training, development, scripting, and runbooks is preserved, and actually becomes even more worthwhile since it applies to your other (non-AWS) resources. For example, you can use the same service (AWS Systems Manager) to patch and update Amazon Elastic Compute Cloud (Amazon EC2) instances, servers running on-premises, and servers provided by other cloud providers. Similarly, you can use Amazon CloudWatch to monitor applications, compute resources, and other cloud resources in all of those environments. These are two examples of how we are putting our approach into practice for you.
The AWS Solutions for Hybrid and Multicloud page contains additional examples of our extension-based approach to adding new capabilities, along with some success stories from customers who have put the capabilities to use, including Phillips 66 and Deutsche Börse.
Whether you choose to operate entirely on AWS or in multicloud and hybrid environments, one of the primary reasons to adopt AWS is the broad choice of services we offer, enabling you to innovate, build, deploy, and monitor your workloads. Just as we recently launched free data transfer out to the internet (DTO) when you want to move outside of AWS, we are committed to helping you be successful regardless of your approach.
Now that I have explained our approach and highlighted some of the principal multicloud service offerings, let’s take a look at a few of the newest multicloud and hybrid capabilities.
Multicloud launches
Since the beginning of 2023 we have launched 18 new multicloud capabilities for existing AWS services: 15 for data & analytics, one for security, and two for identity. Many of these launches add to the existing multicloud capabilities of the respective services:
AWS DataSync – This service transfers data between storage services. In addition to existing support for Google Cloud Storage, Azure Files, and Azure Blob Storage, we added support for five additional cloud service providers and storage services including Oracle Cloud Storage and DigitalOcean Spaces (full list). To learn more about this service, read What is AWS DataSync. To get started, I create a source location:
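As a code-level alternative to the console, here is a minimal sketch using the AWS SDK for Python (Boto3), assuming an Azure Blob container as the source; the container URL, SAS token, and agent ARN are hypothetical placeholders.

```python
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# Register an Azure Blob container as a DataSync source location.
source = datasync.create_location_azure_blob(
    ContainerUrl="https://myaccount.blob.core.windows.net/my-container",
    AuthenticationType="SAS",
    SasConfiguration={"Token": "sv=2024-01-01&ss=..."},  # hypothetical SAS token
    AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"],
)
print(source["LocationArn"])
```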
AWS Glue – This data integration service helps you to discover, prepare, and integrate all of your data at any scale. You can use it to connect to more than 80 different data sources, including cloud databases and analytics services. In October 2023, we introduced additional new connectors that allow you to move data bidirectionally between Amazon Simple Storage Service (Amazon S3), and either Azure Blob Storage or Azure Data Lake Storage (full list). We also launched six database connectors for AWS Glue for Apache Spark, including Teradata, SAP HANA, Azure SQL, Azure Cosmos DB, Vertica, and MongoDB (full list). To learn more about AWS Glue, read What is AWS Glue. I create a visual job flow to get started:
Amazon Athena – This serverless analytics service lets you use interactive SQL queries to analyze petabyte-scale data where it lives (more than 25 external data sources, including other cloud data stores), without copying or transforming it. Last year we added a new data source connector that allows you to query data in Google Cloud Storage. To learn more about Amazon Athena, read What is Amazon Athena.
Amazon AppFlow – You can take advantage of data and analytics in Google BigQuery using a connector available in Amazon AppFlow. To get started with Amazon AppFlow I create a flow and configure a data source:
Amazon Security Lake – This service helps you to achieve a more complete, organization-wide view of your security posture. It centralizes security data from your AWS environments, SaaS providers, on-premises environments, and cloud sources (Azure and GCP) into a purpose-built data lake. It became generally available last year, and now supports collection and analysis of security data from sources that support the Open Cybersecurity Schema Framework (OCSF) standard—more than 80 sources (full list).
AWS Secrets Manager – This service centrally manages secrets such as database credentials and API keys. Secrets are securely encrypted and can be centrally audited, with support for replication to support disaster recovery and multi-region applications. Last year we announced that you can Use AWS Secrets Manager to store and manage secrets in on-premises or multicloud workloads. To learn more, read What is AWS Secrets Manager.
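To illustrate, here is a minimal sketch of a workload (on premises or in another cloud) reading a secret with the AWS SDK for Python (Boto3); the secret name is a hypothetical placeholder, and the workload is assumed to have AWS credentials, for example through IAM Roles Anywhere.

```python
import boto3

# Read a database credential at startup instead of hardcoding it.
secrets = boto3.client("secretsmanager", region_name="us-east-1")
value = secrets.get_secret_value(SecretId="prod/payments/db-credentials")
db_credentials = value["SecretString"]
```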
AWS Identity and Access Management (IAM) – AWS IAM Identity Center now supports automated user provisioning from Google Workspace. The integration helps administrators simplify AWS access management across multiple accounts while maintaining familiar Google Workspace experiences for end users as they sign in.
Amazon CloudWatch – This service lets you query, visualize, and alarm on metrics of all sorts: application, AWS, on-premises, and multicloud. At re:Invent 2023 we added even more support for consolidation of hybrid, multicloud, and on-premises metrics. This new feature allows you to select and configure connectors that pull data from Amazon Managed Service for Prometheus, generic Prometheus, Amazon OpenSearch Service, Amazon RDS for MySQL, Amazon RDS for PostgreSQL, CSV files stored in Amazon Simple Storage Service (Amazon S3), and Microsoft Azure Monitor.
Multicloud content and guidance
Now that you know about some of our latest multicloud launches, let’s take a look at some of the blog posts and other content that my colleagues have created.
First, some blog posts:
- Proven Practices for Developing a Multicloud Strategy
- Observe your Azure and AWS workloads simultaneously with Amazon CloudWatch
- Get custom data into Amazon Security Lake through ingesting Azure activity logs
- Set up AWS Private Certificate Authority to issue certificates for use with IAM Roles Anywhere
- Simplify data transfer: Google BigQuery to Amazon S3 using Amazon AppFlow
- Use AWS Secrets Manager to store and manage secrets in on-premises or multicloud workloads
- Enable external pipeline deployments to AWS Cloud by using IAM Roles Anywhere
- Train and deploy ML models in a multicloud environment using Amazon SageMaker
- How to view Azure costs using Amazon QuickSight
- Using AWS CloudFormation and AWS Cloud Development Kit to provision multicloud resources
- How to copy data from Azure Blob Storage to Amazon S3 using code
- Monitor hybrid and multicloud environments using AWS Systems Manager and Amazon CloudWatch
- Multicloud data lake analytics with Amazon Athena
Next, some of the most popular multicloud videos from AWS re:Invent 2023:
- Centralize your operations (COP320)
- Centralize hybrid & multicloud management with AWS (COP324)
- Strategies for navigating multicloud decisions and difficulties (ENT217)
And finally, be sure to bookmark the AWS Solutions for Hybrid and Multicloud page.
We’re here to help
If you are running in a multicloud environment and are ready to simplify and centralize, be sure to reach out to your AWS Account Manager (AM) or Technical Account Manager (TAM). Both will be happy to help!
— Jeff;
from AWS News Blog https://ift.tt/bipXIsa
via IFTTT
AWS Weekly Roundup: Llama 3.1, Mistral Large 2, AWS Step Functions, AWS Certifications update, and more (July 29, 2024)
I’m always amazed by the talent and passion of our Amazon Web Services (AWS) community members, especially in their efforts to increase diversity, equity, and inclusion in the tech community.
Last week, I had the honor of speaking at the AWS User Group Women Bay Area meetup, led by Natalie. This group is dedicated to empowering and connecting women, providing a supportive environment to explore cloud computing. In Latin America, we recently had the privilege of supporting 12 women-led AWS User Groups from 10 countries in organizing two regional AWSome Women Community Summits, reaching over 800 women builders. There’s still more work to be done, but initiatives like these highlight the power of community in fostering an inclusive and diverse tech environment.
Now, let’s turn our attention to other exciting news in the AWS universe from last week.
Last week’s launches
Here are some launches that got my attention:
Meta Llama 3.1 models – The Llama 3.1 models are Meta’s most advanced and capable models to date. The Llama 3.1 models are a collection of 8B, 70B, and 405B parameter size models that demonstrate state-of-the-art performance on a wide range of industry benchmarks and offer new capabilities for your generative artificial intelligence (generative AI) applications. Llama 3.1 models are now available in Amazon Bedrock (see Announcing Llama 3.1 405B, 70B, and 8B models from Meta in Amazon Bedrock) and Amazon SageMaker JumpStart (see Llama 3.1 models are now available in Amazon SageMaker JumpStart).
My colleagues Tiffany and Mike explored Llama 3.1 in last week’s episode of the weekly Build On Generative AI live stream. You can watch the full episode here!
Mistral Large 2 model – Mistral Large 2 is the newest version of Mistral Large, and according to Mistral AI, it offers significant improvements across multilingual capabilities, math, reasoning, coding, and much more. Mistral AI’s Mistral Large 2 foundation model (FM) is now available in Amazon Bedrock. See Mistral Large 2 is now available in Amazon Bedrock for all the details. You can find code examples in the Mistral-on-AWS repo and the Amazon Bedrock User Guide.
Faster auto scaling for generative AI models – This new capability in Amazon SageMaker inference can help you reduce the time it takes for your generative AI models to scale automatically. You can now use sub-minute metrics and significantly reduce overall scaling latency for generative AI models. With this enhancement, you can improve the responsiveness of your generative AI applications as demand fluctuates. For more details, check out Amazon SageMaker inference launches faster auto scaling for generative AI models.
AWS Step Functions now supports customer managed keys – AWS Step Functions now supports the use of customer managed keys with AWS Key Management Service (AWS KMS) to encrypt Step Functions state machine and activity resources. This new capability lets you encrypt your workflow definitions and execution data using your own encryption keys. Visit the AWS Step Functions documentation and the AWS KMS documentation to learn more.
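As an illustration, here is a minimal sketch of creating a state machine encrypted with a customer managed key using the AWS SDK for Python (Boto3); the key ARN, role ARN, and workflow definition are hypothetical placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

# A trivial pass-through workflow definition, for illustration only.
definition = {
    "StartAt": "HelloWorld",
    "States": {"HelloWorld": {"Type": "Pass", "End": True}},
}

sfn.create_state_machine(
    name="encrypted-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsExecutionRole",
    encryptionConfiguration={
        "type": "CUSTOMER_MANAGED_KMS_KEY",
        "kmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        "kmsDataKeyReusePeriodSeconds": 300,  # how long Step Functions may cache data keys
    },
)
```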
For a full list of AWS announcements, be sure to keep an eye on the What's New at AWS page.

Other AWS news
Here are some additional news items and posts that you might find interesting:
AWS Certification: Addition of new exam question types – If you are planning to take the AWS Certified AI Practitioner or AWS Certified Machine Learning Engineer – Associate exam anytime soon, check out AWS Certification: Addition of new exam question types. These exams will be the first to include three new question types: ordering, matching, and case study. The post shares insights about the new question types and offers information to help you prepare.
Amazon’s exabyte-scale migration from Apache Spark to Ray on Amazon EC2 – The Business Data Technologies (BDT) team at Amazon Retail has just flipped the switch to start quietly moving management of some of their largest production business intelligence (BI) datasets from Apache Spark over to Ray to help reduce both data processing time and cost. They’ve also contributed a critical component of their work (The Flash Compactor) back to Ray’s open source DeltaCAT project. Find the full story at Amazon’s Exabyte-Scale Migration from Apache Spark to Ray on Amazon EC2.
From community.aws
Here are my top three personal favorite posts from community.aws:
- Travel Support Agent Powered With RAG PostgreSQL by Elizabeth Fuentes
- Save time reading Hacker News comments using Converse API by Ricardo Sueiras
- Getting Started with Amazon Q Developer Customizations by Ricardo Ferreira
Upcoming AWS events
Check your calendars and sign up for these AWS events:
AWS Summits – The 2024 AWS Summit season is wrapping up! Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Mexico City (August 7), São Paulo (August 15), and Jakarta (September 5).
AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: New Zealand (August 15), Colombia (August 24), New York (August 28), Belfast (September 6), and Bay Area (September 13).
You can browse all upcoming in-person and virtual events.
That’s all for this week. Check back next Monday for another Weekly Roundup!
— Antje
This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!
from AWS News Blog https://ift.tt/ETl3cBg
via IFTTT
Tuesday, July 23, 2024
Announcing Llama 3.1 405B, 70B, and 8B models from Meta in Amazon Bedrock
Today, we are announcing the availability of Llama 3.1 models in Amazon Bedrock. The Llama 3.1 models are Meta’s most advanced and capable models to date. The Llama 3.1 models are a collection of 8B, 70B, and 405B parameter size models that demonstrate state-of-the-art performance on a wide range of industry benchmarks and offer new capabilities for your generative artificial intelligence (generative AI) applications.
All Llama 3.1 models support a 128K context length (an increase of 120K tokens over Llama 3, or 16 times the capacity of Llama 3 models) and offer improved reasoning for multilingual dialogue use cases in eight languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
You can now use three new Llama 3.1 models from Meta in Amazon Bedrock to build, experiment, and responsibly scale your generative AI ideas:
- Llama 3.1 405B (preview) is the world’s largest publicly available large language model (LLM), according to Meta. The model sets a new standard for AI and is ideal for enterprise-level applications and research and development (R&D). It is well suited for tasks like synthetic data generation, where the outputs of the model can be used to improve smaller Llama models, and for model distillation, which transfers knowledge from the 405B model to smaller models. This model excels at general knowledge, long-form text generation, multilingual translation, machine translation, coding, math, tool use, enhanced contextual understanding, and advanced reasoning and decision-making. To learn more, visit the AWS Machine Learning Blog about using Llama 3.1 405B to generate synthetic data for model distillation.
- Llama 3.1 70B is ideal for content creation, conversational AI, language understanding, R&D, and enterprise applications. The model excels at text summarization and accuracy, text classification, sentiment analysis and nuance reasoning, language modeling, dialogue systems, code generation, and following instructions.
- Llama 3.1 8B is best suited for environments with limited computational power and resources. The model excels at text summarization, text classification, sentiment analysis, and language translation requiring low-latency inferencing.
Meta measured the performance of Llama 3.1 on over 150 benchmark datasets that span a wide range of languages, as well as with extensive human evaluations. As you can see in the following chart, Llama 3.1 outperforms Llama 3 in every major benchmarking category.
To learn more about Llama 3.1 features and capabilities, visit the Llama 3.1 Model Card from Meta and Llama models in the AWS documentation.
You can take advantage of Llama 3.1’s responsible AI capabilities, combined with the data governance and model evaluation features of Amazon Bedrock to build secure and reliable generative AI applications with confidence.
- Guardrails for Amazon Bedrock – By creating multiple guardrails with different configurations tailored to specific use cases, you can use Guardrails to promote safe interactions between users and your generative AI applications by implementing safeguards customized to your use cases and responsible AI policies. With Guardrails for Amazon Bedrock, you can continually monitor and analyze user inputs and model responses that might violate customer-defined policies, detect hallucination in model responses that are not grounded in enterprise data or are irrelevant to the user’s query, and evaluate across different models including custom and third-party models. To get started, visit Create a guardrail in the AWS documentation.
- Model evaluation on Amazon Bedrock – You can evaluate, compare, and select the best Llama models for your use case in just a few steps using either automatic evaluation or human evaluation. With model evaluation on Amazon Bedrock, you can choose automatic evaluation with predefined metrics such as accuracy, robustness, and toxicity. Alternatively, you can choose human evaluation workflows for subjective or custom metrics such as relevance, style, and alignment to brand voice. Model evaluation provides built-in curated datasets or you can bring in your own datasets. To get started, visit Get started with model evaluation in the AWS documentation.
To learn more about how to keep your data and applications secure and private in AWS, visit the Amazon Bedrock Security and Privacy page.
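To make the Guardrails flow described above concrete, here is a minimal sketch that creates a guardrail and applies its draft version to a Converse API call; the filter settings and messages are hypothetical examples, not recommendations.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-west-2")

# Create a guardrail with content filters; PROMPT_ATTACK filters apply to
# inputs only, so their output strength must be NONE.
guardrail = bedrock.create_guardrail(
    name="marketing-app-guardrail",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)

# Apply the draft guardrail to a Llama 3.1 invocation.
runtime = boto3.client("bedrock-runtime", region_name="us-west-2")
response = runtime.converse(
    modelId="meta.llama3-1-70b-instruct-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello!"}]}],
    guardrailConfig={
        "guardrailIdentifier": guardrail["guardrailId"],
        "guardrailVersion": "DRAFT",
    },
)
```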
Getting started with Llama 3.1 models in Amazon Bedrock
If you are new to using Llama models from Meta, go to the Amazon Bedrock console and choose Model access on the bottom left pane. To access the latest Llama 3.1 models from Meta, request access separately for Llama 3.1 8B Instruct, Llama 3.1 70B Instruct, or Llama 3.1 405B Instruct.
To request to be considered for access to the preview of Llama 3.1 405B in Amazon Bedrock, contact your AWS account team or submit a support ticket via the AWS Management Console. When creating the support ticket, select Amazon Bedrock as the Service and Models as the Category.
To test the Llama 3.1 models in the Amazon Bedrock console, choose Text or Chat under Playgrounds in the left menu pane. Then choose Select model and select Meta as the category and Llama 3.1 8B Instruct, Llama 3.1 70B Instruct, or Llama 3.1 405B Instruct as the model.
In the following example I selected the Llama 3.1 405B Instruct model.
By choosing View API request, you can also access the model using code examples in the AWS Command Line Interface (AWS CLI) and AWS SDKs. You can use model IDs such as meta.llama3-1-8b-instruct-v1, meta.llama3-1-70b-instruct-v1, or meta.llama3-1-405b-instruct-v1.
Here is a sample of the AWS CLI command:
```bash
aws bedrock-runtime invoke-model \
  --model-id meta.llama3-1-405b-instruct-v1:0 \
  --body "{\"prompt\":\" [INST]You are a very intelligent bot with exceptional critical thinking[/INST] I went to the market and bought 10 apples. I gave 2 apples to your friend and 2 to the helper. I then went and bought 5 more apples and ate 1. How many apples did I remain with? Let's think step by step.\",\"max_gen_len\":512,\"temperature\":0.5,\"top_p\":0.9}" \
  --cli-binary-format raw-in-base64-out \
  --region us-east-1 \
  invoke-model-output.txt
```
You can use code examples for Llama models in Amazon Bedrock using AWS SDKs to build your applications in various programming languages. The following Python code example shows how to send a text message to Llama using the Amazon Bedrock Converse API for text generation.

```python
import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Llama 3.1 405B Instruct.
model_id = "meta.llama3-1-405b-instruct-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
```
You can also use all Llama 3.1 models (8B, 70B, and 405B) in Amazon SageMaker JumpStart. You can discover and deploy Llama 3.1 models with a few clicks in Amazon SageMaker Studio or programmatically through the SageMaker Python SDK. You can operate your models with SageMaker features such as SageMaker Pipelines, SageMaker Debugger, or container logs under your virtual private cloud (VPC) controls, which help provide data security.
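For a programmatic start, here is a minimal sketch using the SageMaker Python SDK; the model_id shown is an assumption, so check the JumpStart model catalog for the exact identifier.

```python
from sagemaker.jumpstart.model import JumpStartModel

# Deploy Llama 3.1 8B Instruct from JumpStart to a real-time endpoint.
model = JumpStartModel(model_id="meta-textgeneration-llama-3-1-8b-instruct")
predictor = model.deploy(accept_eula=True)  # Meta models require accepting the EULA

response = predictor.predict(
    {"inputs": "Describe the purpose of a 'hello world' program in one line."}
)
print(response)
```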
Fine-tuning for Llama 3.1 models in Amazon Bedrock and Amazon SageMaker JumpStart is coming soon. When you build fine-tuned models in SageMaker JumpStart, you will also be able to import your custom models into Amazon Bedrock. To learn more, visit Meta Llama 3.1 models are now available in Amazon SageMaker JumpStart on the AWS Machine Learning Blog.
For customers who want to deploy Llama 3.1 models on AWS through self-managed machine learning workflows for greater flexibility and control of underlying resources, AWS Trainium and AWS Inferentia-powered Amazon Elastic Compute Cloud (Amazon EC2) instances enable high performance, cost-effective deployment of Llama 3.1 models on AWS. To learn more, visit AWS AI chips deliver high performance and low cost for Meta Llama 3.1 models on AWS in the AWS Machine Learning Blog.
To celebrate this launch, Parkin Kent, Business Development Manager at Meta, talks about the power of the Meta and Amazon collaboration, highlighting how Meta and Amazon are working together to push the boundaries of what’s possible with generative AI.
Discover how businesses are leveraging Llama models in Amazon Bedrock to harness the power of generative AI. Nomura, a global financial services group spanning 30 countries and regions, is democratizing generative AI across its organization using Llama models in Amazon Bedrock.
Now available
Llama 3.1 8B and 70B models from Meta are generally available, and the Llama 3.1 405B model is available in preview, today in Amazon Bedrock in the US West (Oregon) Region. To request to be considered for access to the preview of Llama 3.1 405B in Amazon Bedrock, contact your AWS account team or submit a support ticket. Check the full Region list for future updates. To learn more, check out the Llama in Amazon Bedrock product page and the Amazon Bedrock pricing page.
Give Llama 3.1 a try in the Amazon Bedrock console today, and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.
Visit our community.aws site to find deep-dive technical content and to discover how our Builder communities are using Amazon Bedrock in their solutions. Let me know what you build with Llama 3.1 in Amazon Bedrock!
— Channy
from AWS News Blog https://ift.tt/qGFg8Bl
via IFTTT
Monday, July 22, 2024
AWS Weekly Roundup: Global AWS Heroes Summit, AWS Lambda, Amazon Redshift, and more (July 22, 2024)
Last week, AWS Heroes from around the world gathered to celebrate the 10th anniversary of the AWS Heroes program at Global AWS Heroes Summit. This program recognizes a select group of AWS experts worldwide who go above and beyond in sharing their knowledge and making an impact within developer communities.
Matt Garman, CEO of AWS and a long-time supporter of developer communities, made a special appearance for a Q&A session with the Heroes to listen to their feedback and respond to their questions.
Here’s an epic photo from the AWS Heroes Summit:
As Matt mentioned in his Linkedin post, “The developer community has been core to everything we have done since the beginning of AWS.” Thank you, Heroes, for all you do. Wishing you all a safe flight home.
Last week’s launches
Here are some launches that caught my attention last week:
Announcing the July 2024 updates to Amazon Corretto — The latest updates for the Corretto distribution of OpenJDK are now available. These include security and critical updates for the Long-Term Supported (LTS) and Feature Release (FR) versions.
New open-source Advanced MySQL ODBC Driver now available for Amazon Aurora and RDS — The new AWS ODBC Driver for MySQL provides faster switchover and failover times, and authentication support for AWS Secrets Manager and AWS Identity and Access Management (IAM), making it a more efficient and secure option for connecting to Amazon RDS and Amazon Aurora MySQL-compatible edition databases.
Productionize Fine-tuned Foundation Models from SageMaker Canvas — Amazon SageMaker Canvas now allows you to deploy fine-tuned Foundation Models (FMs) to SageMaker real-time inference endpoints, making it easier to integrate generative AI capabilities into your applications outside the SageMaker Canvas workspace.
AWS Lambda now supports SnapStart for Java functions that use the ARM64 architecture — Lambda SnapStart for Java functions on ARM64 architecture delivers up to 10x faster function startup performance and up to 34% better price performance compared to x86, enabling the building of highly responsive and scalable Java applications using AWS Lambda.
Amazon QuickSight improves controls performance — Amazon QuickSight has improved the performance of controls, allowing readers to interact with them immediately without having to wait for all relevant controls to reload. This enhancement reduces the loading time experienced by readers.
Amazon OpenSearch Serverless levels up speed and efficiency with smart caching — The new smart caching feature for indexing in Amazon OpenSearch Serverless automatically fetches and manages data, leading to faster data retrieval, efficient storage usage, and cost savings.
Amazon Redshift Serverless with lower base capacity available in the Europe (London) Region — Amazon Redshift Serverless now allows you to start with a lower data warehouse base capacity of 8 Redshift Processing Units (RPUs) in the Europe (London) region, providing more flexibility and cost-effective options for small to large workloads.
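As an illustration, here is a minimal sketch of creating a workgroup with the lower base capacity using the AWS SDK for Python (Boto3); the namespace and workgroup names are hypothetical placeholders.

```python
import boto3

rs = boto3.client("redshift-serverless", region_name="eu-west-2")  # Europe (London)

rs.create_namespace(namespaceName="analytics-ns")
rs.create_workgroup(
    workgroupName="analytics-wg",
    namespaceName="analytics-ns",
    baseCapacity=8,  # the new lower minimum, in Redshift Processing Units (RPUs)
)
```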
AWS Lambda now supports Amazon MQ for ActiveMQ and RabbitMQ in five new regions — AWS Lambda now supports Amazon MQ for ActiveMQ and RabbitMQ in five new regions, enabling you to build serverless applications with Lambda functions that are invoked based on messages posted to Amazon MQ message brokers.
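Here is a minimal sketch of wiring a Lambda function to a RabbitMQ queue on Amazon MQ using the AWS SDK for Python (Boto3); the broker ARN, queue name, function name, and secret ARN are hypothetical placeholders.

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:mq:us-east-1:111122223333:broker:my-broker:b-1234abcd",
    FunctionName="process-orders",
    Queues=["orders-queue"],
    BatchSize=10,
    # Broker credentials stored in AWS Secrets Manager.
    SourceAccessConfigurations=[
        {
            "Type": "BASIC_AUTH",
            "URI": "arn:aws:secretsmanager:us-east-1:111122223333:secret:mq-creds",
        }
    ],
)
```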
From community.aws
Here are my top five personal favorite posts from community.aws:
- A Developer’s Guide to Advanced Chunking and Parsing with Amazon Bedrock by Suman Debnath.
- Enhancing Document Analysis with Embedding Adapters on AWS by Mehdi Nemlaghi.
- De-Bugging with Amazon Q and Generative AI by Kasun de Silva.
- Using Amazon Q Developer to update Valkey client code by Ricardo Sueiras.
- Software coding practices in an AI assistant world by Derek Bingham.
Upcoming AWS events
Check your calendars and sign up for upcoming AWS events:
AWS Summits — Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. To learn more about future AWS Summit events, visit the AWS Summit page. Register in your nearest city: AWS Summit Taipei (July 23–24), AWS Summit Mexico City (Aug. 7), and AWS Summit Sao Paulo (Aug. 15).
AWS Community Days — Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world. Upcoming AWS Community Days are in Aotearoa (Aug. 15), Nigeria (Aug. 24), New York (Aug. 28), and Belfast (Sept. 6).
You can browse all upcoming in-person and virtual events.
That’s all for this week. Check back next Monday for another Weekly Roundup!
— Donnie
This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!
from AWS News Blog https://ift.tt/0h81mXB
via IFTTT
Monday, July 15, 2024
AWS Weekly Roundup: Advanced capabilities in Amazon Bedrock and Amazon Q, and more (July 15, 2024)
As expected, there were lots of exciting launches and updates announced during the AWS Summit New York. You can quickly scan the highlights in Top Announcements of the AWS Summit in New York, 2024.
My colleagues and fellow AWS News Blog writers Veliswa Boya and Sébastien Stormacq were at the AWS Community Day Cameroon last week. They were energized to meet amazing professionals, mentors, and students – all willing to learn and exchange thoughts about cloud technologies. You can access the video replay to feel the vibes or just watch some of the talks!
Last week’s launches
In addition to the launches at the New York Summit, here are a few others that got my attention.
Advanced RAG capabilities in Knowledge Bases for Amazon Bedrock – These include custom chunking options that let customers write their own chunking code as a Lambda function; smart parsing to extract information from complex data such as tables; and query reformulation to break down queries into simpler sub-queries, retrieve relevant information for each, and combine the results into a final comprehensive answer.
Amazon Bedrock Prompt Management and Prompt Flows – This is a preview launch of Prompt Management, which helps developers and prompt engineers get the best responses from foundation models for their use cases, and Prompt Flows, which accelerates the creation, testing, and deployment of workflows through an intuitive visual builder.
Fine-tuning for Anthropic’s Claude 3 Haiku in Amazon Bedrock (preview) – By providing your own task-specific training dataset, you can fine-tune and customize Claude 3 Haiku to boost model accuracy, quality, and consistency, and further tailor generative AI for your business.
IDE workspace context awareness in Amazon Q Developer chat – Users can now add @workspace to their chat message in Q Developer to ask questions about the code in the project they currently have open in the IDE. Q Developer automatically ingests and indexes all code files, configurations, and project structure, giving the chat comprehensive context across your entire application within the IDE.
New features in Amazon Q Business – The new personalization capabilities in Amazon Q Business are automatically enabled and will use your enterprise’s employee profile data to improve their user experience. You can now get answers from text content in scanned PDFs, and images embedded in PDF documents, without having to use OCR for preprocessing and text extraction.
Amazon EC2 R8g instances powered by AWS Graviton4 are now generally available – Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances.
Vector search for Amazon MemoryDB is now generally available – Vector search for MemoryDB enables real-time machine learning (ML) and generative AI applications. It can store millions of vectors with single-digit millisecond query and update latencies at the highest levels of throughput with >99% recall.
Introducing Valkey GLIDE, an open source client library for Valkey and Redis open source – Valkey is an open source key-value data store that supports a variety of workloads such as caching and message queues. Valkey GLIDE is one of the official client libraries for Valkey, and it supports all Valkey commands. GLIDE supports Valkey 7.2 and above, and Redis open source 6.2, 7.0, and 7.2.
Amazon OpenSearch Service enhancements – Amazon OpenSearch Serverless now supports workloads up to 30TB of data for time-series collections enabling more data-intensive use cases, and an innovative caching mechanism that automatically fetches and intelligently manages data, leading to faster data retrieval, efficient storage usage, and cost savings. Amazon OpenSearch Service has now added support for AI powered Natural Language Query Generation in OpenSearch Dashboards Log Explorer so you can get started quickly with log analysis without first having to be proficient in PPL.
Open source release of Secrets Manager Agent for AWS Secrets Manager – Secrets Manager Agent is a language agnostic local HTTP service that you can install and use in your compute environments to read secrets from Secrets Manager and cache them in memory, instead of making a network call to Secrets Manager.
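Once the agent is running, reading a secret is a local HTTP call. Here is a hedged sketch in Python; the port, path, and token file below are the defaults I understand the agent to use, so verify them against the agent's README, and the secret name is a hypothetical placeholder.

```python
import requests

# The agent writes an SSRF-protection token to a local file at startup;
# each request must echo it back in a header.
with open("/var/run/awssmatoken") as f:
    token = f.read().strip()

resp = requests.get(
    "http://localhost:2773/secretsmanager/get",
    params={"secretId": "prod/app/db-credentials"},
    headers={"X-Aws-Parameters-Secrets-Token": token},
)
print(resp.json()["SecretString"])
```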
Amazon S3 Express One Zone now supports logging of all events in AWS CloudTrail – This capability lets you get details on who made API calls to S3 Express One Zone and when API calls were made, thereby enhancing data visibility for governance, compliance, and operational auditing.
Amazon CloudFront announces managed cache policies for web applications – Previously, Amazon CloudFront customers had two options for managed cache policies, and had to create custom cache policies for all other cases. With the new managed cache policies, CloudFront caches content based on the Cache-Control headers returned by the origin, and defaults to not caching when the header is not returned.
We launched existing services in additional Regions:
- Amazon Relational Database Service (RDS) Data API for Aurora PostgreSQL is now available in 10 additional AWS Regions.
- Amazon Managed Workflows for Apache Airflow (MWAA) is now available in nine new AWS Regions.
- Amazon Simple Notification Service (Amazon SNS) customers can now host their applications in the Canada West (Calgary) Region, and send text messages (SMS) to consumers in more than 200 countries and territories.
- Amazon EMR support for backup and restore for Apache HBase tables is available in the Asia Pacific (Seoul) Region.
- Amazon Cognito is now available in the Canada West (Calgary) and Asia Pacific (Hong Kong) Regions.
Other AWS news
Here are some additional projects, blog posts, and news items that you might find interesting:
Context window overflow: Breaking the barrier – This blog post dives into the intricate workings of generative artificial intelligence (AI) models and why it is crucial to understand and mitigate the limitations of context window overflow (CWO).
Using Agents for Amazon Bedrock to interactively generate infrastructure as code – This blog post explores how Agents for Amazon Bedrock can be used to generate customized, organization standards-compliant IaC scripts directly from uploaded architecture diagrams.
Automating model customization in Amazon Bedrock with AWS Step Functions workflow – This blog post covers orchestrating repeatable and automated workflows for customizing Amazon Bedrock models and how AWS Step Functions can help overcome key pain points in model customization.
AWS open source news and updates – My colleague Ricardo Sueiras writes about open source projects, tools, and events from the AWS Community; check out Ricardo’s page for the latest updates.
Upcoming AWS events
Check your calendars and sign up for upcoming AWS events:
AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. To learn more about future AWS Summit events, visit the AWS Summit page. Register in your nearest city: Bogotá (July 18), Taipei (July 23–24), AWS Summit Mexico City (Aug. 7), and AWS Summit Sao Paulo (Aug. 15).
AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world. Upcoming AWS Community Days are in Aotearoa (Aug. 15), Nigeria (Aug. 24), New York (Aug. 28), and Belfast (Sept. 6).
Browse all upcoming AWS led in-person and virtual events and developer-focused events.
That’s all for this week. Check back next Monday for another Weekly Roundup!
— Abhishek
This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!
from AWS News Blog https://ift.tt/QoSUfMD
via IFTTT
Mastering Amazon Cancelled Orders: A Comprehensive Guide for Sellers
Effectively Manage Amazon Order Cancellations, Improve Seller Metrics, and Optimize Your Amazon Business Performance
Understanding and effectively managing order cancellations ensures a healthy account and improves customer satisfaction for all Amazon sellers.
This comprehensive guide will walk you through the intricacies of Amazon's canceled orders, providing you with the knowledge and strategies to navigate this aspect of e-commerce successfully.
Understanding Amazon Order Cancellations
Order cancellations on Amazon occur when a purchase is terminated before the item is shipped. While cancellations are a normal part of e-commerce, they can impact your inventory management, seller metrics, and overall business performance if not handled properly.
The Buyer’s Perspective: When and How Customers Can Cancel Orders
Amazon provides buyers with a straightforward process for canceling orders, but the options available to them depend on the timing:
- Within the first 30 minutes: Buyers can cancel their order directly using the “Cancel Items” option in their Amazon account under “Your Orders.”
- After 30 minutes: Direct cancellation is no longer possible. Instead, buyers must submit a cancellation request for the seller to review.
Understanding this timeline is crucial for sellers, as it affects how you interact with cancellation requests and manage your inventory.
Types of Order Cancellations and Their Impact on Sellers
Amazon categorizes cancellations into several types, each with different implications for sellers:
- Official buyer-initiated cancellations: These are processed through Amazon’s system and don’t negatively impact your seller metrics.
- Unofficial buyer-initiated cancellations: When buyers request cancellations through the Buyer-Seller Messaging tool, these can affect your Cancellation Rate metric if not handled correctly.
- Seller-initiated cancellations: Occur when you can’t fulfill an order due to inventory issues, pricing errors, or other reasons. They generally impact your Cancellation Rate.
- Amazon-initiated automatic cancellations: Amazon may cancel orders automatically in certain situations, such as when a seller hasn’t confirmed shipment within seven days of the expected shipping date.
The Official Cancellation Process: Step-by-Step Guide
When a buyer submits an official cancellation request, follow these steps to process it:
- Log into your Seller Central account and go to “Manage Orders.”
- In the “Unshipped” tab, use the “Buyer Requested Cancel” filter to find relevant orders.
- Look for orders with a banner stating, “The buyer has requested that this order be canceled. Canceling this order will not affect your Cancellation Rate metric.”
- Click “Cancel order” under the Actions column.
- On the cancellation page, “Buyer canceled” will be pre-selected as the reason. This cannot be edited for official buyer-requested cancellations.
- Click “Submit” to complete the cancellation.
Processing cancellations this way ensures they won’t negatively impact your seller metrics.
Navigating Unofficial Cancellation Requests
When buyers request cancellations through the Buyer-Seller Messaging tool, it’s considered an unofficial request. These messages are typically labeled as “Inquiries from Amazon customers.” If you cancel an order based on these messages, it will count against your Cancellation Rate metric.
To handle these situations:
- Respond to the buyer’s message.
- Request that they submit an official cancellation through their Amazon account.
- Provide instructions: “You can cancel the order in your Amazon account at Your Account > Your Orders > Request Cancellation.”
This approach helps maintain your metrics while still addressing the buyer’s needs.
Impact of Cancellations on Seller Metrics
Amazon uses a Cancellation Rate metric to assess seller performance. Not all cancellations affect this metric equally:
- Official buyer-requested cancellations don’t impact your rate.
- Unofficial cancellations (via messaging) and seller-initiated cancellations do count against you.
- Some Amazon-initiated automatic cancellations may affect your rate, while others (like fraudulent buyer detection) don’t.
Maintaining a low Cancellation Rate is crucial for your account health and selling privileges.
Partial Cancellations and Refunds: What Sellers Need to Know
Amazon doesn’t currently support partial order cancellations. However, you can issue full or partial refunds for individual items in an order using the Refund Calculator in Seller Central. Remember, to initiate a refund, you must have already confirmed the shipment for the order.
Leveraging Technology for Efficient Cancellation Management
Amazon provides several tools to help sellers manage cancellations more effectively:
- Order Reports: Your order reports include a field called “is-buyer-requested-cancellation.” This field shows “TRUE” for orders with buyer cancellation requests.
- APIs for tracking cancellations: The Selling Partner API (SP-API) includes cancellation information in order item responses, including an “isBuyerRequestedCancel” flag and a “buyerCancelReason” string.
Utilizing this data can streamline cancellation management and help you stay on top of order status changes.
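For example, here is a minimal sketch of flagging buyer-requested cancellations in a downloaded order report; the file name and the order-id column are assumptions based on standard order report layouts.

```python
import pandas as pd

# Order reports download from Seller Central as tab-delimited text files;
# the file name here is a hypothetical placeholder.
orders = pd.read_csv("order-report.txt", sep="\t", dtype=str)

# Keep only orders where the buyer has requested cancellation.
requested = orders[orders["is-buyer-requested-cancellation"] == "TRUE"]
print(requested["order-id"])
```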
Best Practices for Minimizing Cancellations
While some cancellations are inevitable, you can take steps to minimize their occurrence:
1. Maintain accurate inventory levels to avoid stockouts.
2. Provide clear, detailed product descriptions to set accurate expectations.
3. Price your items competitively and accurately.
4. Process and ship orders promptly to reduce the window for cancellations.
5. Use vacation settings when you cannot fulfill orders to prevent unwanted orders and subsequent cancellations.
How to be a Data-Driven Advertiser with Amazon Cancelled Orders
Understanding and analyzing your canceled order data can provide valuable insights for your Amazon business. Here’s how you can leverage this information:
Manual Data Access Process
1. Log into your Amazon Seller Central account.
2. Navigate to the Reports section.
3. Generate order reports, including those with cancellation data.
4. Download these reports as CSV or Excel files for analysis.
While this process provides useful information, it can be time-consuming and may not offer real-time insights for frequent analysis.
Amazon Order Data Automation
To truly harness the power of your Amazon data, including insights from canceled orders, consider leveraging data automation solutions like Openbridge.
No more manual file downloads. Get code-free, fully automated Amazon Selling Partner API data pipelines for orders, inventory, traffic, fulfillment, finance, and more.
- FBA Fees
- Order Reports
- Finance API
- Fulfillment
- Inventory
- Sales & Traffic
- Orders API
- FBM/MFN PII
- Settlement
- Returns
Openbridge’s unified data approach can significantly enhance your reporting and analytics capabilities, powering tools like Google Data Studio, Tableau, Microsoft Power BI, Looker, Amazon QuickSight, SAP, Alteryx, dbt, Azure Data Factory, Qlik Sense, and many others.
This integration creates an analytics-ready single source of truth, enabling more effective decision-making across various aspects of your Amazon selling strategy.
Mastering Cancellations for Amazon Selling Success
Managing Amazon's canceled orders effectively is crucial for any successful seller. By understanding the different types of cancellations, following the correct procedures, and leveraging data and technology, you can minimize the negative impact of cancellations on your business.
Remember, while cancellations are a normal part of e-commerce, how you handle them can set you apart as a top-performing Amazon seller.
Getting Started with Amazon Cancelled Orders Automation
Ready to take your Amazon business to the next level? Ditch the messy, manual reporting for Amazon Cancelled Orders.
Get a 30-day free trial to try Amazon Orders automation and see how it can transform your approach to Amazon selling.
from Openbridge - Medium https://ift.tt/RjIcHpN
via Openbridge
Thursday, July 11, 2024
Essential Security for Amazon Seller Accounts: Two-Step Verification (2FA)
Shielding Your Amazon Business from Bots, Backdoors and Breaches
Recent revelations about the serious risks that data scraping bots pose to Amazon sellers, and the data theft and extortion suffered by Snowflake users, make two-step verification critical for Amazon seller accounts. In fact, Amazon has required two-factor authentication (2FA) for all logins since March 28, 2024.
A two-step verification security measure safeguards against specific, documented threats to your Amazon business operations and financial stability. Leveraging 2FA requires minimal time investment, but its protection is substantial.
The Business Case for Two-Step Verification
Two-step verification, or two-factor authentication (2FA), requires two forms of identification to access your account: your password and a verification code sent to a designated device.
For Amazon sellers, implementing this security measure is crucial for several reasons:
- Financial Protection: Your Amazon seller account is directly linked to your business’s revenue stream. Unauthorized access could lead to financial losses through fraudulent transactions or redirected funds.
- Data Breach Prevention: Seller accounts contain sensitive customer information. A breach could result in significant legal and financial liabilities under data protection regulations like GDPR.
- Operational Continuity: Account compromise can lead to business disruptions, affecting your ability to process orders and manage inventory.
- Brand Reputation: Security incidents can damage your reputation with customers and partners, potentially leading to long-term business impacts.
Specific Threats Mitigated by Two-Step Verification
Recent security research has uncovered several specific threats that Two-Step Verification helps mitigate:
- Credential Stuffing Attacks: Cybercriminals use stolen username/password combinations from other breaches to attempt access to Amazon seller accounts. Two-step verification renders these attacks ineffective.
- Phishing Campaigns: Sophisticated phishing attempts target Amazon sellers to steal login credentials. Even if credentials are compromised, Two-Step Verification provides an additional layer of defense.
- Data Scraping Bots: Some third-party software providers use unauthorized data scraping bots to access seller accounts programmatically. These bots bypass Amazon’s official APIs and security protocols, potentially exposing sellers to:
  - Unauthorized access to customer PII (Personally Identifiable Information)
  - Increased risk of financial fraud
  - Potential compliance violations
- Supply Chain Attacks: As recent high-profile incidents have demonstrated, attackers may target software providers or contractors with access to multiple seller accounts. Two-step verification adds a crucial layer of protection against such broad-scale compromises.
Implementation of Two-Step Verification
Enabling Two-Step Verification on your Amazon seller account is a straightforward process:
- Log in to Seller Central
- Navigate to Account Settings
- Select “Login Settings” and click “Edit” next to Two-Step Verification
- Follow the on-screen instructions to set up your preferred verification method
Amazon offers multiple options for receiving verification codes:
- SMS text message
- Voice call
- Authenticator app (recommended for enhanced security)
Best Practices for Two-Step Verification
To maximize the effectiveness of Two-Step Verification:
- Use Authenticator Apps: These provide superior security compared to SMS or voice calls and don’t require network access.
- Never allow bots to access your account for data scraping, as they can provide a backdoor despite 2FA protections.
- Implement Multiple Verification Methods: Set up at least two methods to ensure account access if one method becomes unavailable.
- Regularly Update Recovery Methods: Maintain current backup phone numbers and email addresses.
- Limit Use of Trusted Devices: While Amazon allows marking devices as trusted to skip verification, this should be done judiciously. Regular verification is often safer.
- Enforce Company-Wide Adoption: If multiple employees access the seller account, mandate Two-Step Verification use for all users.
Integration with Broader Security Strategy
While Two-Step Verification is crucial, it should be part of a comprehensive security approach:
- Implement robust password policies, including regular updates and unique passwords for each system.
- Never grant user-level access to your accounts for bots to perform data scraping. You can spot programmatic bot access requests because they ask you to create user-level credentials like client-[brandname]-[marketplace]@domain.com.
- Conduct regular security awareness training for all staff, focusing on phishing detection and safe browsing practices.
- Perform routine security audits of your Amazon seller account and associated systems.
- Carefully vet any third-party tools or services before granting access to your seller account. Never allow bots access to your account.
Openbridge has a bot-free policy, and we only leverage official, approved APIs for account authorizations (Login with Amazon, or LWA), SP-API (Seller Central and Vendor Central), and Amazon Advertising.
Why do we have a bot-free policy? See Why A Bot-Free Policy Is Good For Security.
Activate Two-Step Verification Today!
All Amazon sellers should review their two-step verification immediately to ensure it is properly configured. The potential business risks of account compromise far outweigh the minimal inconvenience of this additional security step.
It’s not just about protecting your business — it’s about safeguarding your customers’ data and maintaining the integrity of your operations on the Amazon platform.
from Openbridge - Medium https://ift.tt/eGn3CAa
via Openbridge