{"id":951,"date":"2021-09-28T06:43:22","date_gmt":"2021-09-28T06:43:22","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2021\/09\/28\/detect-anomalies-using-amazon-lookout-for-metrics-and-review-inference-through-amazon-a2i\/"},"modified":"2021-09-28T06:43:22","modified_gmt":"2021-09-28T06:43:22","slug":"detect-anomalies-using-amazon-lookout-for-metrics-and-review-inference-through-amazon-a2i","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2021\/09\/28\/detect-anomalies-using-amazon-lookout-for-metrics-and-review-inference-through-amazon-a2i\/","title":{"rendered":"Detect anomalies using Amazon Lookout for Metrics and review inference through Amazon A2I"},"content":{"rendered":"<div id=\"\">\n<p>Proactively detecting unusual or unexpected variances in your business metrics and reducing false alarms can help you stay on top of sudden changes and improve your business performance. Accurately identifying the root cause of deviation from normal business metrics and taking immediate steps to remediate an anomaly can not only boost user engagement but also improve customer experience.<\/p>\n<p>As the volume of data monitored by your business grows, detecting anomalies gets challenging. On March 25, 2021, AWS announced the general availability of <a href=\"https:\/\/aws.amazon.com\/lookout-for-metrics\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Lookout for Metrics<\/a>, a service that uses machine learning (ML) to automatically detect anomalies that are most important to businesses with greater speed and accuracy, and identifies their root cause.<\/p>\n<p>ML models often need human oversight to retrain and continuously improve model accuracy. In this post, we show how you can set up Lookout for Metrics to train a model to detect anomalies. 
We then use a human-in-the-loop workflow to review the predictions using <a href=\"https:\/\/aws.amazon.com\/augmented-ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Augmented AI<\/a> (Amazon A2I), and use the feedback to improve model accuracy.<\/p>\n<h2>Solution overview<\/h2>\n<p>Lookout for Metrics uses ML to automatically detect and diagnose anomalies (outliers from the norm) in business and operational data, such as a sudden dip in sales revenue or customer acquisition rates. In a couple of clicks, you can connect Lookout for Metrics to popular data stores like <a href=\"http:\/\/aws.amazon.com\/s3\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Simple Storage Service<\/a> (Amazon S3), <a href=\"http:\/\/aws.amazon.com\/redshift\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Redshift<\/a>, and <a href=\"http:\/\/aws.amazon.com\/rds\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Relational Database Service<\/a> (Amazon RDS), as well as third-party software as a service (SaaS) applications such as Salesforce, ServiceNow, Zendesk, and Marketo, and start monitoring metrics that are important to your business.<\/p>\n<p>Lookout for Metrics automatically inspects and prepares the data from these sources to detect anomalies with greater speed and accuracy than traditional methods used for anomaly detection. You can also provide feedback on detected anomalies to tune the results and improve accuracy over time. Lookout for Metrics makes it easy to diagnose detected anomalies by grouping together anomalies that are related to the same event and sending an alert that includes a summary of the potential root cause. 
It also ranks anomalies in order of severity so you can prioritize your attention to what matters most to your business.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/07\/ML-3507-image001-1.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-27691\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/07\/ML-3507-image001-1.jpg\" alt=\"\" width=\"2142\" height=\"812\"><\/a><\/p>\n<p>Amazon A2I is an ML service that makes it easy to build the workflows required for human review. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers, whether running on AWS or not.<\/p>\n<p>To get started with Lookout for Metrics, we create a detector, create and attach a dataset with metrics that you want to train on and monitor, activate the detector, and view anomalies. Following these steps, we show how you can set up a human review workflow using Amazon A2I. 
Finally, we update the detector with the human review feedback, which helps retrain the model and further improve accuracy.<\/p>\n<h2>Architecture overview<\/h2>\n<p>The following diagram illustrates the solution architecture.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/07\/ML-3507-image003.png\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-27693\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/07\/ML-3507-image003.png\" alt=\"\" width=\"1442\" height=\"702\"><\/a><\/p>\n<p>The solution has the following workflow:<\/p>\n<ol>\n<li>Upload data from your source to Amazon S3.<\/li>\n<li>We run Lookout for Metrics in continuous mode to process data from the Amazon S3 path as it arrives.<\/li>\n<li>Inference results are stored in Amazon S3.<\/li>\n<li>When Lookout for Metrics detects anomalies, the inference inputs and outputs are presented to the private workforce for validation via Amazon A2I.<\/li>\n<li>A private workforce investigates and validates the detected anomalies and provides feedback.<\/li>\n<li>We update the results with the corresponding feedback from the human loop through Amazon A2I.<\/li>\n<li>The updated feedback improves the accuracy of future training.<\/li>\n<\/ol>\n<p>In the accompanying Jupyter notebook, downloadable from <a href=\"https:\/\/github.com\/aws-samples\/amazon-lookout-for-metrics-a2i-integration.git\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub<\/a>, we walk you through the following steps:<\/p>\n<ol>\n<li>Generate a synthetic dataset.<\/li>\n<li>Create a detector and map measures and dimensions to metrics.<\/li>\n<li>Activate the detector.<\/li>\n<li>Detect anomalies.<\/li>\n<li>Set up Amazon A2I to review predictions from Lookout for Metrics.<\/li>\n<li>Update the model based on output from the human loop through Amazon A2I.<\/li>\n<\/ol>\n<h2>Prerequisites<\/h2>\n<p>Before you get started, complete 
the following steps to set up the Jupyter notebook:<\/p>\n<ol>\n<li><a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/howitworks-create-ws.html\" target=\"_blank\" rel=\"noopener noreferrer\">Create a notebook instance<\/a> in <a href=\"https:\/\/aws.amazon.com\/sagemaker\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon SageMaker<\/a>.<\/li>\n<li>When the notebook is active, choose <strong>Open Jupyter<\/strong>.<\/li>\n<li>On the Jupyter dashboard, choose <strong>New<\/strong>, and choose <strong>Terminal<\/strong>.<\/li>\n<li>In the terminal, enter the following code:<\/li>\n<\/ol>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">git clone https:\/\/github.com\/aws-samples\/amazon-lookout-for-metrics-a2i-integration.git<\/code><\/pre>\n<\/p><\/div>\n<ol start=\"5\">\n<li>Open the notebook for this post from next_steps\/A2I_Integration:<\/li>\n<\/ol>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">Amazon_L4M_Detector_A2I.ipynb<\/code><\/pre>\n<\/p><\/div>\n<p>You\u2019re now ready to run the notebook cells. Run the environment setup step to set up the necessary Python SDKs and libraries, the S3 bucket, and the Region that we use throughout the notebook. Make sure that the SageMaker Studio IAM role has the necessary permissions.<\/p>\n<p>NOTE: The code uses Python 3.7. Please use the Python 3 (Data Science) kernel for this notebook.<\/p>\n<h2>Generate synthetic data<\/h2>\n<p>In this section, we generate synthetic data for the detector and for predicting anomalies. The metrics data is aggregated on an hourly basis, and the detector runs in continuous mode in real time, every hour. Starting from the current date, we generate data for 6 months in the past and 3 days in the future. The historical data is used for training the model, and we use the current and future data for predicting anomalies on an ongoing basis. 
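To make the dataset concrete before running the notebook, here is a minimal, self-contained sketch of the kind of hourly data it generates. The platform and marketplace values and the value distributions are illustrative assumptions, not the notebook's exact generator:

```python
# Minimal sketch of the synthetic hourly ecommerce data described above.
# The platform/marketplace values and distributions are illustrative
# assumptions; the notebook's own generator differs in detail.
import csv
import random
from datetime import datetime, timedelta

random.seed(42)
end = datetime.now().replace(minute=0, second=0, microsecond=0)
hours = [end - timedelta(hours=i) for i in range(24 * 180)][::-1]  # ~6 months, hourly

rows = []
for ts in hours:
    for platform in ("pc_web", "mobile_web", "mobile_app"):
        for marketplace in ("us", "uk", "de"):
            views = max(0, int(random.gauss(1000, 100)))
            rows.append({
                "platform": platform,
                "marketplace": marketplace,
                "timestamp": ts.strftime("%Y-%m-%d %H:%M:%S"),
                "views": views,
                "revenue": round(views * random.uniform(0.5, 1.5), 2),
            })

# The historical (backtest) portion goes into a single CSV file
with open("input.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
```

The live data uses the same schema but is split into one small CSV per hour rather than one large file.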
Note the following configuration:<\/p>\n<ul>\n<li>Historical data is created as a CSV file called <code>.\/data\/ecommerce\/backtest\/input.csv<\/code><\/li>\n<li>Hourly data files are stored in the folder <code>.\/data\/ecommerce\/live\/&lt;yyyyMMdd&gt;\/&lt;HH:mm&gt;\/&lt;yyyyMMdd_HH:mm:ss&gt;.csv<\/code><\/li>\n<li>Complete data along with the anomaly labels is available in <code>.\/data\/ecommerce\/label.csv<\/code><\/li>\n<\/ul>\n<p>Access the notebook section <strong>generate synthetic data<\/strong> and run the cells. Then inspect the DataFrame:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">backtest_df = pd.read_csv('data\/ecommerce\/backtest\/input.csv')\nbacktest_df.head()<\/code><\/pre>\n<\/p><\/div>\n<p>Data points are generated in a random manner, so your response might look different from the following screenshot.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/07\/ML-3507-image005-1.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-27694\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/07\/ML-3507-image005-1.jpg\" alt=\"\" width=\"964\" height=\"290\"><\/a><\/p>\n<p>Save the data to the S3 bucket previously created:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">%%time\n!aws s3 sync {DIR_PATH}\/{DATASET_NAME}\/ s3:\/\/{bucket_name}\/{DATASET_NAME}\/ --quiet --delete<\/code><\/pre>\n<\/p><\/div>\n<h2>Create a detector and map measures and dimensions to metrics<\/h2>\n<p>A detector is a Lookout for Metrics resource that monitors a dataset and identifies anomalies. To detect outliers, Lookout for Metrics builds an ML model that is trained with your source data. This model is automatically trained with the ML algorithm that best fits your data and use case. 
Access the notebook section <strong>Create Lookout for Metrics Detector<\/strong> and run the cell to create a detector:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">if len(ecom_anomaly_detector_arn) == 0:\n    # Detector for the ecommerce example does not exist. Create the anomaly detector.\n    create_anomaly_detector_response = L4M.create_anomaly_detector(\n        AnomalyDetectorName=ecom_anomaly_detector_name,\n        AnomalyDetectorDescription=\"Anomaly detection on a sample ecommerce dataset.\",\n        AnomalyDetectorConfig={\n            \"AnomalyDetectorFrequency\": FREQUENCY,\n        },\n    )<\/code><\/pre>\n<\/p><\/div>\n<p>You get the following response:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">Anomaly Detector ARN: arn:aws:lookoutmetrics:<span>[REGION]<\/span>:<span>[ACCOUNT NUMBER]<\/span>:AnomalyDetector:ecommerce-continuous-detector<\/code><\/pre>\n<\/p><\/div>\n<p><em>Measures<\/em> are the primary fields that the detector monitors. You can also configure up to five additional fields as <em>dimensions<\/em>. Dimensions are secondary fields that create subgroups of measures based on their value.<\/p>\n<p>In this ecommerce example, views and revenue are our measures, and platform and marketplace are our dimensions. You may want to monitor your data for anomalies in the number of views or revenue for every platform, every marketplace, and every combination of the two.<\/p>\n<p>Each combination of measure and dimension is called a <em>metric<\/em>. Measures, dimensions, and metrics map to datasets, which also contain the Amazon S3 locations of your source data, an <a href=\"http:\/\/aws.amazon.com\/iam\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Identity and Access Management<\/a> (IAM) role that has both read and write permissions to those Amazon S3 locations, and the rate at which data should be ingested from the source location (the upload frequency and data ingestion delay). 
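To make this mapping concrete, here is a hedged sketch of the kind of parameter dictionary the notebook assembles for the metric set. The field names follow the boto3 `lookoutmetrics` `create_metric_set` API; the ARNs, bucket name, and path template are placeholders, not values from the notebook:

```python
# Sketch of metric-set parameters tying measures, dimensions, and the S3
# source together. Field names follow the boto3 "lookoutmetrics"
# create_metric_set API; ARNs, bucket, and paths are placeholders.
params = {
    "AnomalyDetectorArn": "arn:aws:lookoutmetrics:us-east-1:123456789012:AnomalyDetector:ecommerce-continuous-detector",
    "MetricSetName": "ecommerce-metric-set-1",
    # Measures: the primary fields the detector monitors
    "MetricList": [
        {"MetricName": "views", "AggregationFunction": "SUM"},
        {"MetricName": "revenue", "AggregationFunction": "SUM"},
    ],
    # Dimensions: secondary fields that subgroup each measure
    "DimensionList": ["platform", "marketplace"],
    "TimestampColumn": {"ColumnName": "timestamp", "ColumnFormat": "yyyy-MM-dd HH:mm:ss"},
    "MetricSetFrequency": "PT1H",  # ingest and detect hourly
    "MetricSource": {
        "S3SourceConfig": {
            "RoleArn": "arn:aws:iam::123456789012:role/L4MRole",  # needs read/write on the bucket
            "TemplatedPathList": ["s3://your-bucket/ecommerce/live/{{yyyyMMdd}}/{{HH:mm}}"],
            "FileFormatDescriptor": {"CsvFormatDescriptor": {"ContainsHeader": True, "Delimiter": ","}},
        }
    },
}
# L4M.create_metric_set(**params)  # L4M = boto3.client("lookoutmetrics")
```

Every measure is monitored per dimension value, so this configuration yields one time series (metric) per platform-marketplace combination for each of views and revenue.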
Run the cells in the <strong>Measures and Dimensions<\/strong> section to create a metric set for your detector that points to the live data in Amazon S3:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">create_metric_set_response = L4M.create_metric_set(**params)\necom_metric_set_arn = create_metric_set_response[\"MetricSetArn\"]<\/code><\/pre>\n<\/p><\/div>\n<p>You get the following response:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">Metric Set ARN: arn:aws:lookoutmetrics:<span>&lt;REGION&gt;<\/span>:<span>&lt;ACCOUNT-NUMBER&gt;<\/span>:MetricSet\/ecommerce-continuous-detector\/ecommerce-metric-set-1<\/code><\/pre>\n<\/p><\/div>\n<h2>Activate your detector<\/h2>\n<p>Now it\u2019s time to activate your detector. During activation, the model is trained with historical data that was generated in a previous cell and stored in the <code>.\/data\/ecommerce\/backtest<\/code> folder. Run the cells under <strong>Activate the Detector<\/strong>:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">if ecom_detector_status in [\"INACTIVE\", \"ACTIVATING\"]:\n    \n    # Activate the detector\n    if ecom_detector_status == \"INACTIVE\":\n        L4M.activate_anomaly_detector(AnomalyDetectorArn=ecom_anomaly_detector_arn)\n    \n        print(\"\\nActivating ecommerce example Detector.\")<\/code><\/pre>\n<\/p><\/div>\n<p>You get the following response:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">Activating ecommerce example Detector.\nDetector status: ACTIVATING\n...\nDetector status: ACTIVATING\n...\nDetector status: LEARNING\n...\nDetector status: ACTIVE<\/code><\/pre>\n<\/p><\/div>\n<p>You can also check the status of your detector on the Lookout for Metrics console.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/07\/ML-3507-image007.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-27695\" 
src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/07\/ML-3507-image007.jpg\" alt=\"\" width=\"1424\" height=\"574\"><\/a><\/p>\n<h2>Detect anomalies<\/h2>\n<p>In this section, you review the anomalies found by the detector. We have created a continuous detector that operates on live data and expects to receive input data every hour. We already generated some data into the future, which is in the <code>.\/data\/ecommerce\/live<\/code> folder. Run the cell from the notebook section <strong>Fetch Anomalies<\/strong> to detect anomalies:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">....\n    if next_token:\n        params[\"NextToken\"] = next_token\n\n    response = L4M.list_anomaly_group_summaries(**params)\n    \n    print(\"Anomaly group summaries:\\n {}\".format(response))\n\n    anomaly_groups += response[\"AnomalyGroupSummaryList\"]\n    print('\\ntype of AnomalyGroupSummaryList: {}'.format(type(anomaly_groups)))<\/code><\/pre>\n<\/p><\/div>\n<p>You get the following response:<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/07\/ML-3507-image009.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-27696\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/07\/ML-3507-image009.jpg\" alt=\"\" width=\"1808\" height=\"1208\"><\/a><\/p>\n<p>The detector runs at the top of each hour, so you may have to wait for the next run before anomalies are detected. If no anomalies are found when you run the next cell, come back later and run it again.<\/p>\n<h2>Set up Amazon A2I to review predictions from Lookout for Metrics<\/h2>\n<p>In this section, you set up a human review loop for low-confidence detection in Amazon A2I. 
It includes the following steps:<\/p>\n<ol>\n<li>Create a private workforce.<\/li>\n<li>Create a human task UI.<\/li>\n<li>Create a human task workflow.<\/li>\n<li>Send predictions to Amazon A2I human loops.<\/li>\n<li>Complete your review and check the human loop status.<\/li>\n<\/ol>\n<h3>Create a private workforce<\/h3>\n<p>You must create a <a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/sms-workforce-create-private-console.html\" target=\"_blank\" rel=\"noopener noreferrer\">private workforce<\/a> on the SageMaker console. For instructions, see <a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/sms-workforce-create-private-console.html#create-workforce-sm-console\" target=\"_blank\" rel=\"noopener noreferrer\">Create an Amazon Cognito Workforce Using the Labeling Workforces Page<\/a>. After the workforce is created, note the <a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/sms-workforce-management-private-api.html\" target=\"_blank\" rel=\"noopener noreferrer\">ARN of the workforce<\/a> and enter its value in the notebook cell:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">WORKTEAM_ARN = '<span>your private workforce team ARN<\/span>'<\/code><\/pre>\n<\/p><\/div>\n<h3>Create a human task UI<\/h3>\n<p>You now create a human task UI resource, providing a UI template written in Liquid HTML. This HTML page is rendered to the human workers whenever a human loop is required. 
For over 70 pre-built UIs, see the <a href=\"https:\/\/github.com\/aws-samples\/amazon-a2i-sample-task-uis\" target=\"_blank\" rel=\"noopener noreferrer\">amazon-a2i-sample-task-uis<\/a> GitHub repo.<\/p>\n<p>Follow the steps provided in the notebook section <strong>Create a human task UI<\/strong> to create the web form and initialize the Amazon A2I APIs:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">...\nif not describe_human_task_ui_response:\n    # Create the human task UI\n    create_human_task_ui_response = sagemaker_client.create_human_task_ui(\n        HumanTaskUiName=l4m_taskUIName,\n        UiTemplate={'Content': ecom_a2i_template})\n\n    print(\"\\nCreate human task UI response: \")\n    pprint.pprint(create_human_task_ui_response, width=2)\n...\n<\/code><\/pre>\n<\/p><\/div>\n<p>You get the following response:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">Human task UI ARN: arn:aws:sagemaker:<span>[REGION]<\/span>:<span>[ACCOUNT NUMBER]<\/span>:human-task-ui\/l4m-ecommerce-ui<\/code><\/pre>\n<\/p><\/div>\n<h3>Create a human task workflow<\/h3>\n<p>Workflow definitions allow you to specify the following:<\/p>\n<ul>\n<li>The worker template or human task UI you created in the previous step.<\/li>\n<li>The workforce that your tasks are sent to. For this post, it\u2019s the private workforce you created in the prerequisite steps.<\/li>\n<li>The instructions that your workforce receives.<\/li>\n<\/ul>\n<p>This post uses the <code>CreateFlowDefinition<\/code> API to create a workflow definition. The results of the human review are stored in an S3 bucket, which can be accessed by the client application. 
Run the cell <strong>Create a Human task Workflow<\/strong> in the notebook:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">...\nif not describe_flow_definition_response:\n    create_workflow_definition_response = sagemaker_client.create_flow_definition(\n        FlowDefinitionName=l4m_flowDefinitionName,\n        RoleArn=sagemaker_role_arn,\n        HumanLoopConfig={\n            \"WorkteamArn\": workteam_ARN,\n            \"HumanTaskUiArn\": l4m_review_ui_arn,\n            \"TaskCount\": 1,\n            \"TaskDescription\": \"Review the anomalies detected by Amazon Lookout for Metrics\",\n            \"TaskTitle\": \"Ecommerce Anomalies Review\"\n        },\n        OutputConfig={\n            \"S3OutputPath\": s3_output_path\n        }\n    )\n...<\/code><\/pre>\n<\/p><\/div>\n<p>You get the following response:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">S3 output path: s3:\/\/<span>[ACCOUNT NUMBER]<\/span>-<span>[REGION]<\/span>-lookoutmetrics-lab\/ecommerce\/a2i-results\nFlow definition Arn: arn:aws:sagemaker:<span>[REGION]<\/span>:<span>[ACCOUNT NUMBER]<\/span>:flow-definition\/l4m-ecommerce-workflow<\/code><\/pre>\n<\/p><\/div>\n<h3>Send predictions to Amazon A2I human loops<\/h3>\n<p>Run the cells in the <strong>start human review loop<\/strong> section to find the URL of the portal for providing feedback on anomalies. Open the URL in a browser and log in with the credentials of a human review worker. An invitation email should have been sent to the worker when you created the work team on the SageMaker console. 
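Each detected anomaly is serialized into the JSON string that `start_human_loop` passes as `InputContent` and that the task UI renders. A hedged sketch of this payload follows; the keys inside `Results` are hypothetical and must match whatever variables your Liquid template references:

```python
import json

# Sketch of the InputContent payload for start_human_loop. The keys inside
# "Results" are hypothetical; they must match the variables referenced by
# your Liquid task-UI template.
ip_content = {
    "Results": [
        {
            "timestamp": "2021-09-28 06:00:00",
            "platform": "mobile_web",
            "marketplace": "us",
            "views": 412,
            "revenue": 198.5,
            "anomaly_score": 78.6,  # from the anomaly group summary
        }
    ]
}
payload = json.dumps(ip_content)
# a2i_client.start_human_loop(
#     HumanLoopName="l4m-review-001",            # must be unique per loop
#     FlowDefinitionArn=flowDefinitionArn,
#     HumanLoopInput={"InputContent": payload},  # A2I expects a JSON string
# )
```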
Complete the review from the portal and inspect the output:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">....\nstart_human_loop_response = a2i_client.start_human_loop(\n            HumanLoopName=humanLoopName,\n            FlowDefinitionArn=flowDefinitionArn,\n            HumanLoopInput={\n                \"InputContent\": json.dumps(ip_content)\n            }\n        )\n\nprint(\"\\nStart human loop response: \")\npprint.pprint(start_human_loop_response, width=2)\n<\/code><\/pre>\n<\/p><\/div>\n<h3>Complete the review and check the human loop status<\/h3>\n<p>Complete your review and check the human loop status with the following code:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">...\ntry:\n    describe_human_loop_response = a2i_client.describe_human_loop(HumanLoopName=humanLoopName)\n    print(\"\\nDescribe human loop response: \")\n    pprint.pprint(describe_human_loop_response, width=2)\n\n    completed_human_loops_s3_output = describe_human_loop_response[\"HumanLoopOutput\"][\"OutputS3Uri\"]\n    print(\"HumanLoop Status: {}\".format(describe_human_loop_response[\"HumanLoopStatus\"]))\nexcept Exception:\n    print(\"Error getting human loop\")\n\nprint(\"\\nOutput in S3 at: \\n{}\".format(describe_human_loop_response[\"HumanLoopOutput\"][\"OutputS3Uri\"]))<\/code><\/pre>\n<\/p><\/div>\n<p>You get the following response:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">HumanLoop Status: Completed\n\nOutput in S3 at: \ns3:\/\/[ACCOUNT NAME]-[REGION]-lookoutmetrics-lab-2\/ecommerce\/a2i-results\/l4m-ecommerce-workflow\/2021\/06\/22\/05\/35\/13\/51e352ff-38f3-4154-a162-1fb6661462da\/output.json<\/code><\/pre>\n<\/p><\/div>\n<h2>Update the detector based on human feedback from Amazon A2I<\/h2>\n<p>In this section, we review our results and update the detector to 
improve prediction accuracy. Refer to the accompanying notebook in <a href=\"https:\/\/github.com\/aws-samples\/amazon-lookout-for-metrics-a2i-integration\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub<\/a> for detailed steps to add the human loop. We check whether the human review detected an anomaly:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">while anomaly_col_name in review_result:\n\n    #tseriesid = review_result['tseriesid-' + str(col_name_suffix)]\n\n    print(\"\\n{}\".format(str(col_name_suffix)))\n\n    is_anomaly = review_result[anomaly_col_name]['on']\n    print(\"Is Anomaly: {}\".format(is_anomaly))<\/code><\/pre>\n<\/p><\/div>\n<p>If the review results indicate an anomaly, we get the corresponding time series ID and anomaly group ID from the DataFrame and update the training set using the <a href=\"https:\/\/docs.aws.amazon.com\/lookoutmetrics\/latest\/api\/API_PutFeedback.html\" target=\"_blank\" rel=\"noopener noreferrer\">PutFeedback API<\/a> call:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">    row_value = df_anomalies_by_ts.loc[(df_anomalies_by_ts['timestamp'] == timestamp) &amp; (df_anomalies_by_ts['marketplace'] == marketplace) &amp; (df_anomalies_by_ts['platform'] == platform)]\n    #print(\"Row :{}\".format(row_value))\n\n    ....\n    ....\n    put_feedback_response = L4M.put_feedback(\n            AnomalyDetectorArn=ecom_anomaly_detector_arn,\n            AnomalyGroupTimeSeriesFeedback={\n                'AnomalyGroupId': anomaly_group_id,\n                'TimeSeriesId': tseriesid,\n                'IsAnomaly': is_anomaly}\n    )<\/code><\/pre>\n<\/p><\/div>\n<p>You can now retrain your model using the updated dataset to improve your model accuracy.<\/p>\n<h2>Clean up<\/h2>\n<p>Run the <strong>clean-up resources<\/strong> cell to clean up the resources that you created. 
Because we created a continuous detector, it continues to run one time every hour and incur charges until it is deleted.<\/p>\n<h2>Conclusion<\/h2>\n<p>In this post, we walked you through how to use Lookout for Metrics to train a model to detect anomalies, review diagnostics from the trained model, review the predictions from the model with a human in the loop using Amazon A2I, augment our original training dataset, and retrain our model with the feedback from the human reviews.<\/p>\n<p>With Lookout for Metrics and Amazon A2I, you can set up a continuous prediction, review, train, and feedback loop to audit predictions and improve the accuracy of your models. Use the <a href=\"https:\/\/github.com\/aws-samples\/amazon-lookout-for-metrics-a2i-integration\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub repo<\/a> to access the notebook used in this post.<\/p>\n<hr>\n<h3>About the Authors<\/h3>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/01\/Neel-Sendas.png\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-27644 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/01\/Neel-Sendas.png\" alt=\"\" width=\"100\" height=\"115\"><\/a><strong>Neel Sendas<\/strong> is a Senior Technical Account Manager at Amazon Web Services. Neel works with enterprise customers to design, deploy, and scale cloud applications to achieve their business goals. He has worked on various ML use cases, ranging from anomaly detection to predictive product quality for manufacturing and logistics optimization. 
When he isn\u2019t helping customers, he dabbles in golf and salsa dancing.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/01\/Rawat.png\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-27645 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/09\/01\/Rawat.png\" alt=\"\" width=\"100\" height=\"122\"><\/a><strong>Ashish Rawat<\/strong> is a Senior Solutions Architect at Amazon Web Services, based in Atlanta, Georgia. Ashish provides architecture guidance to enterprise customers and helps them implement strategic industry solutions on AWS. He is passionate about AI\/ML and Internet of Things.<\/p>\n<p>       <!-- '\"` -->\n      <\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/aws.amazon.com\/blogs\/machine-learning\/detect-anomalies-using-amazon-lookout-for-metrics-and-review-inference-through-amazon-a2i\/<\/p>\n","protected":false},"author":0,"featured_media":952,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/951"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=951"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/951\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/952"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=951"}],"wp:term":[{"tax
onomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=951"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=951"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}