{"id":414,"date":"2020-10-16T07:21:07","date_gmt":"2020-10-16T07:21:07","guid":{"rendered":"https:\/\/machine-learning.webcloning.com\/2020\/10\/16\/detecting-playful-animal-behavior-in-videos-using-amazon-rekognition-custom-labels\/"},"modified":"2020-10-16T07:21:07","modified_gmt":"2020-10-16T07:21:07","slug":"detecting-playful-animal-behavior-in-videos-using-amazon-rekognition-custom-labels","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2020\/10\/16\/detecting-playful-animal-behavior-in-videos-using-amazon-rekognition-custom-labels\/","title":{"rendered":"Detecting playful animal behavior in videos using Amazon Rekognition Custom Labels"},"content":{"rendered":"<div id=\"\">\n<p>Historically, humans have observed animal behaviors and applied them for different purposes. For example, behavioral observation is important in animal ecology, such as how often the behaviors are, when the behaviors occur, or whether there is individual difference or not. However, identifying and monitoring these behaviors and movements can be hard and can take a long time. To provide an automation for this workflow, a team from the agile members of pharmaceutical customer (Sumitomo Dainippon Pharma Co., Ltd.) and AWS Solutions Architects created a solution with <a href=\"https:\/\/aws.amazon.com\/rekognition\/custom-labels-features\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Rekognition Custom Labels<\/a>. Amazon Rekognition Custom Labels makes it easy to label specific movements in images, and train and build a model that detects these movements.<\/p>\n<p>In this post, we show you how machine learning (ML) can help automate this workflow in a fun and simple way. We trained a custom model that detects playful behaviors of cats in a video using Amazon Rekognition Custom Labels. 
We hope to contribute to the aforementioned fields, biology and others, by publicizing the architecture, our building process, and the source code for this solution.<\/p>\n<h2>About Amazon Rekognition Custom Labels<\/h2>\n<p>Amazon Rekognition Custom Labels is an automated ML feature that enables you to quickly train your own custom models for detecting business-specific objects and scenes from images\u2014no ML experience required. For example, you can train a custom model to find your company logos in social media posts, identify your products on store shelves, or classify unique machine parts in an assembly line.<\/p>\n<p>Amazon Rekognition Custom Labels builds off the existing capabilities of <a href=\"http:\/\/aws.amazon.com\/rekognition\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Rekognition<\/a>, which is already trained on tens of millions of images across many categories. Instead of thousands of images, you simply need to upload a small set of training images (typically a few hundred images or fewer) that are specific to your use case. If your images are already labeled, Amazon Rekognition Custom Labels can begin training in just a few clicks. If not, you can label them directly within the Amazon Rekognition Custom Labels labeling interface, or use <a href=\"https:\/\/aws.amazon.com\/sagemaker\/groundtruth\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon SageMaker Ground Truth<\/a> to label them for you.<\/p>\n<p>After Amazon Rekognition begins training from your image set, it can produce a custom image analysis model for you in just a few hours. Amazon Rekognition Custom Labels automatically loads and inspects the training data, selects the right ML algorithms, trains a model, and provides model performance metrics. You can then use your custom model via the Amazon Rekognition Custom Labels API and integrate it into your applications.<\/p>\n<h2>Solution overview<\/h2>\n<p>The following diagram shows the architecture of the solution. 
When you have the model in place, the whole process of detecting specific behaviors in a video is automated; all you need to do is upload a video file (.mp4).<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17015 size-full\" title=\"Solution architecture\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/13\/1-Diagram.jpg\" alt=\"\" width=\"900\" height=\"471\"><\/p>\n<p>The workflow contains the following steps:<\/p>\n<ol>\n<li>You upload a video file (.mp4) to <a href=\"https:\/\/aws.amazon.com\/s3\/?nc=sn&amp;loc=1\">Amazon Simple Storage Service<\/a> (Amazon S3), which invokes <a href=\"https:\/\/aws.amazon.com\/lambda\/?nc2=type_a\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Lambda<\/a>, which in turn calls an Amazon Rekognition Custom Labels inference endpoint and <a href=\"https:\/\/aws.amazon.com\/sqs\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Simple Queue Service<\/a> (Amazon SQS). It takes about 10 minutes to launch the inference endpoint, so we use a delayed Amazon SQS message to wait before checking its status.<\/li>\n<li>Amazon SQS invokes a Lambda function to check the status of the inference endpoint, and launches <a href=\"https:\/\/aws.amazon.com\/ec2\/?nc2=type_a\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Elastic Compute Cloud<\/a> (Amazon EC2) if the status is <code>Running<\/code>.<\/li>\n<li>\n<a href=\"https:\/\/aws.amazon.com\/cloudwatch\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon CloudWatch Events<\/a> detects the <code>Running<\/code> status of Amazon EC2 and invokes a Lambda function, which runs a script on Amazon EC2 using <a href=\"https:\/\/aws.amazon.com\/systems-manager\/?nc2=type_a\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Systems Manager<\/a> Run Command.<\/li>\n<li>On Amazon EC2, the script calls the inference endpoint of Amazon Rekognition Custom Labels to detect specific behaviors in the video uploaded to Amazon S3 and writes the inferred 
results to the video on Amazon S3.<\/li>\n<li>When the inferred result file is uploaded to Amazon S3, a Lambda function launches to stop Amazon EC2 and the Amazon Rekognition Custom Labels inference endpoint.<\/li>\n<\/ol>\n<h2>Prerequisites<\/h2>\n<p>For this walkthrough, you should have the following prerequisites:<\/p>\n<ul>\n<li>\n<strong>An AWS account<\/strong> \u2013 You can <a href=\"https:\/\/portal.aws.amazon.com\/billing\/signup#\/start\" target=\"_blank\" rel=\"noopener noreferrer\">create a new account<\/a> if you don\u2019t have one yet.<\/li>\n<li>\n<strong>A key pair<\/strong> \u2013 You need a key pair to log in to the EC2 instance that uses Amazon Rekognition Custom Labels to detect specific behaviors. You can either use an existing key pair or create a new key pair. For more information, see <a href=\"https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/ec2-key-pairs.html\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon EC2 key pairs and Linux instances<\/a>.<\/li>\n<li>\n<strong>A video for inference<\/strong> \u2013 This solution uses a video (.mp4 format) for inference. 
You can use your own video or the one we provide in this post.<\/li>\n<\/ul>\n<h2>Launching your AWS CloudFormation stack<\/h2>\n<p>Launch the provided <a href=\"http:\/\/aws.amazon.com\/cloudformation\" target=\"_blank\" rel=\"noopener noreferrer\">AWS CloudFormation<\/a> template:<\/p>\n<p><a href=\"https:\/\/console.aws.amazon.com\/cloudformation\/home?region=us-east-1#\/stacks\/create\/review?templateURL=https:\/\/aws-ml-blog.s3.amazonaws.com\/artifacts\/Detecting-Animal-Behavior\/cfn_template.yaml&amp;stackName=RekognitionCustomLabel\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-16174\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/09\/22\/LaunchStack.jpg\" alt=\"\" width=\"144\" height=\"27\"><\/a><\/p>\n<p>After you launch the template, you\u2019re prompted to enter the following parameters:<\/p>\n<ul>\n<li>\n<strong>KeyPair<\/strong> \u2013 The name of the key pair used to connect to the EC2 instance<\/li>\n<li>\n<strong>ModelName<\/strong> \u2013 The model name used for Amazon Rekognition Custom Labels<\/li>\n<li>\n<strong>ProjectARN<\/strong> \u2013 The project ARN used for Amazon Rekognition Custom Labels<\/li>\n<li>\n<strong>ProjectVersionARN<\/strong> \u2013 The model version name used for Amazon Rekognition Custom Labels<\/li>\n<li>\n<strong>YourCIDR<\/strong> \u2013 The CIDR including your public IP address<\/li>\n<\/ul>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17016 size-full\" title=\"Entering parameters\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/13\/2-Specify-Stack-Details.jpg\" alt=\"\" width=\"900\" height=\"589\"><\/p>\n<p>For this post, we use the following <a href=\"https:\/\/mixkit.co\/free-stock-video\/pet-owner-playing-with-a-cute-cat-1779\/\" target=\"_blank\" rel=\"noopener noreferrer\">video<\/a> to detect whether a cat is punching 
or not. For our object detection model, we prepared an annotated dataset and trained it in advance, as shown in the following section.<\/p>\n<p>This solution uses the US East (N. Virginia) Region, so make sure to work in that Region when following along with this post.<\/p>\n<h2>Adding annotations to images from the video<\/h2>\n<p>To annotate your images, complete the following steps:<\/p>\n<ol>\n<li>To create images that the model uses for learning, you need to split the video into a series of still images. For this post, we prepared 377 images (the ratio of normal images to punching images is about 2:1) and annotated them.<\/li>\n<li>Store the series of still images in Amazon S3 and annotate them. You can <a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/sms-getting-started.html\" target=\"_blank\" rel=\"noopener noreferrer\">use Ground Truth to annotate them<\/a>.<\/li>\n<li>Because we\u2019re creating an object detection model, select <strong>Bounding box<\/strong> for the <strong>Task type<\/strong>.<\/li>\n<li>For our use case, we want to tell if a cat is punching or not in the video, so we create a labeling job using two labels: <code>normal<\/code> to define basic sitting behavior, and <code>punch<\/code> to define playful behavior.<\/li>\n<li>For annotation, surround the cat with a <code>normal<\/code> bounding box when it isn\u2019t punching, and with a <code>punch<\/code> bounding box when it is.<\/li>\n<\/ol>\n<p>When the cat is punching, its paws appear blurred in the image, so you can use the degree of blur to decide whether the cat is punching and annotate the image accordingly.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17017 size-full\" title=\"Cat punching\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/13\/3-Cats.jpg\" alt=\"\" width=\"900\" 
height=\"264\"><\/p>\n<h2>Training a custom ML model<\/h2>\n<p>To start training your model, complete the following steps:<\/p>\n<ol>\n<li>Create an object detection model using Amazon Rekognition Custom Labels. For instructions, see <a href=\"https:\/\/docs.aws.amazon.com\/rekognition\/latest\/customlabels-dg\/gs-introduction.html\" target=\"_blank\" rel=\"noopener noreferrer\">Getting Started with Amazon Rekognition Custom Labels<\/a>.<\/li>\n<li>When you create a dataset, choose <strong>Import images labeled by SageMaker Ground Truth<\/strong> for <strong>Image location<\/strong>\n<\/li>\n<li>Set the <code>output.manifest<\/code> file path that was output by the Ground Truth labeling job.<\/li>\n<\/ol>\n<p>To find the path out the <code>output.manifest<\/code> file, on the Amazon SageMaker console, on the <strong>Labeling jobs <\/strong>page, choose your video. The information is located on the <strong>Labeling job summary <\/strong>page.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17018 size-full\" title=\"Labeling job summary page\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/13\/4-Movie-cats.jpg\" alt=\"\" width=\"900\" height=\"480\"><\/p>\n<ol start=\"4\">\n<li>When the model has finished learning, save the ARN listed in the <strong>Use your model<\/strong> section at the bottom of the model details page. 
We use this ARN later on.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17019 size-full\" title=\"Use your model section\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/13\/5-Use-your-model.jpg\" alt=\"\" width=\"900\" height=\"225\"><\/p>\n<p>For reference, the F1 score for <code>normal<\/code> and <code>punch<\/code> was above 0.9 in our use case.<\/p>\n<h2>Uploading a video for inference on Amazon S3<\/h2>\n<p>You can now upload your video for inference.<\/p>\n<ol>\n<li>On the Amazon S3 console, navigate to the bucket you created with the CloudFormation stack (it should include <code>rekognition<\/code> in the name).<\/li>\n<li>Choose <strong>Create folder<\/strong>.<\/li>\n<li>Create the folder <code>inputMovie<\/code>.<\/li>\n<li>Upload the video file you want to run inference on.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17020 size-full\" title=\"Uploading the files you want to infer\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/13\/6-1-3.jpg\" alt=\"\" width=\"900\" height=\"278\"><\/p>\n<h2>Setting up a script on Amazon EC2<\/h2>\n<p>This solution calls the Amazon Rekognition API from Amazon EC2 to run inference on the video, so you need to set up a script on Amazon EC2.<\/p>\n<ol>\n<li>Log in to Amazon EC2 via SSH with the following code and the key pair you created:<\/li>\n<\/ol>\n<div class=\"hide-language\">\n<pre class=\"unlimited-height-code\"><code class=\"lang-bash\">ssh -i &lt;<em><span>Your key Pair<\/span><\/em>&gt; ubuntu@&lt;EC2 IPv4 Public IP&gt;\r\nAre you sure you want to continue connecting (yes\/no)? 
yes\r\nWelcome to Ubuntu 18.04.4 LTS (GNU\/Linux 4.15.0-1065-aws x86_64)\r\nubuntu@ip-10-0-0-207:~$ cd code\/\r\nubuntu@ip-10-0-0-207:~\/code$ vi rekognition.py\r\n<\/code><\/pre>\n<\/div>\n<p>It takes approximately 30 minutes to install and build the necessary libraries.<\/p>\n<ol start=\"2\">\n<li>Copy the following code to <code>rekognition.py<\/code> and replace <em><span>&lt;BucketName&gt;<\/span><\/em> with your S3 bucket name created by AWS CloudFormation. This code uses OpenCV to split the video into frames and sends each frame to the inference endpoint of Amazon Rekognition Custom Labels to perform behavior detection. It merges the inferred behavior detection result with each frame and puts the frames together to reconstruct a video.<\/li>\n<\/ol>\n<div class=\"hide-language\">\n<pre class=\"unlimited-height-code\"><code class=\"lang-python\">import boto3\r\nimport cv2\r\nimport json\r\nimport os\r\nimport ffmpeg\r\n\r\ndef get_parameters(param_key):\r\n    ssm = boto3.client('ssm', region_name='us-east-1')\r\n    response = ssm.get_parameters(\r\n        Names=[\r\n            param_key,\r\n        ]\r\n    )\r\n    return response['Parameters'][0]['Value']\r\n\r\ndef analyzeVideo():\r\n    s3 = boto3.resource('s3')\r\n    rekognition = boto3.client('rekognition', 'us-east-1')\r\n\r\n    parameter_value = get_parameters('\/Movie\/&lt;BucketName&gt;')\r\n    dirname, video = os.path.split(parameter_value)\r\n    bucket = s3.Bucket('&lt;BucketName&gt;')\r\n    bucket.download_file(parameter_value, video)\r\n\r\n    customLabels = []\r\n    cap = cv2.VideoCapture(video)\r\n    frameRate = cap.get(cv2.CAP_PROP_FPS)\r\n    width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)\r\n    height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)\r\n    fourcc = cv2.VideoWriter_fourcc(*'XVID')\r\n    writer = cv2.VideoWriter(video + '-output.avi', fourcc, 18, (int(width), int(height)))\r\n\r\n    
while(cap.isOpened()):\r\n        frameId = cap.get(cv2.CAP_PROP_POS_FRAMES)\r\n        print(\"Processing frame id: {}\".format(frameId))\r\n        ret, frame = cap.read()\r\n        if not ret:\r\n            break\r\n        hasFrame, imageBytes = cv2.imencode(\".jpg\", frame)\r\n\r\n        if(hasFrame):\r\n            response = rekognition.detect_custom_labels(\r\n                Image={\r\n                    'Bytes': imageBytes.tobytes(),\r\n                },\r\n                ProjectVersionArn = get_parameters('ProjectVersionArn')\r\n            )\r\n\r\n            for output in response[\"CustomLabels\"]:\r\n                Name = output['Name']\r\n                Confidence = str(output['Confidence'])\r\n                w = output['Geometry']['BoundingBox']['Width']\r\n                h = output['Geometry']['BoundingBox']['Height']\r\n                left = output['Geometry']['BoundingBox']['Left']\r\n                top = output['Geometry']['BoundingBox']['Top']\r\n                w = int(w * width)\r\n                h = int(h * height)\r\n                left = int(left*width)\r\n                top = int(top*height)\r\n\r\n                output[\"Timestamp\"] = (frameId\/frameRate)*1000\r\n                customLabels.append(output)\r\n                # Draw punch detections in red and all other labels in green\r\n                if Name == 'punch':\r\n                    cv2.rectangle(frame,(left,top),(left+w,top+h),(0,0,255),2)\r\n                    cv2.putText(frame,Name + \":\" +Confidence +\"%\",(left,top),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0, 0, 255), 1, cv2.LINE_AA)\r\n                else:\r\n                    cv2.rectangle(frame,(left,top),(left+w,top+h),(0,255,0),2)\r\n                    cv2.putText(frame,Name + \":\" +Confidence +\"%\",(left,top),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0, 255, 0), 1, cv2.LINE_AA)\r\n\r\n        writer.write(frame)\r\n    print(customLabels)\r\n\r\n    with open(video + \".json\", \"w\") as f:\r\n        f.write(json.dumps(customLabels))\r\n    
bucket.upload_file(video + \".json\",'output-json\/ec2-output.json')\r\n\r\n    # Release the writer before re-encoding so the .avi file is fully flushed\r\n    writer.release()\r\n    cap.release()\r\n\r\n    stream = ffmpeg.input(video + '-output.avi')\r\n    stream = ffmpeg.output(stream, video + '-output.mp4', pix_fmt='yuv420p', vcodec='libx264')\r\n    stream = ffmpeg.overwrite_output(stream)\r\n    ffmpeg.run(stream)\r\n    bucket.upload_file(video + '-output.mp4', 'output\/' + video + '-output.mp4')\r\n\r\nanalyzeVideo()\r\n<\/code><\/pre>\n<\/div>\n<h2>Stopping the EC2 instance<\/h2>\n<p>Stop the EC2 instance after you create the script on it. The EC2 instance is automatically launched when a video file is uploaded to Amazon S3.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17021 size-full\" title=\"Stopping the EC2 instance\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/13\/7-1-2.jpg\" alt=\"\" width=\"900\" height=\"369\"><\/p>\n<p>The solution is now ready for use.<\/p>\n<h2>Detecting movement in the video<\/h2>\n<p>To run the solution, upload a video file (.mp4) to the <code>inputMovie<\/code> folder you created. This launches the endpoint for Amazon Rekognition Custom Labels.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17022 size-full\" title=\"Launching the endpoint for Amazon Rekognition Custom Labels\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/13\/8-1-2.jpg\" alt=\"\" width=\"900\" height=\"148\"><\/p>\n<p>When the status of the endpoint changes to <code>Running<\/code>, Amazon EC2 launches and performs behavior detection. 
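Once a run finishes, the JSON file the script writes (one entry per detection, with `Name`, `Confidence`, and a millisecond `Timestamp`) can be post-processed locally. As a minimal sketch, a small helper of our own (`summarize_behavior` is not part of the solution's code) can list when a given behavior was detected above a confidence threshold:

```python
def summarize_behavior(labels, name="punch", min_confidence=50.0):
    """Return the sorted millisecond timestamps at which `name` was
    detected with at least `min_confidence`, given the list of
    detection dicts that rekognition.py dumps to <video>.json."""
    return sorted(
        entry["Timestamp"]
        for entry in labels
        if entry["Name"] == name and entry["Confidence"] >= min_confidence
    )

# Example records in the shape emitted by the script:
records = [
    {"Name": "normal", "Confidence": 97.1, "Timestamp": 0.0},
    {"Name": "punch", "Confidence": 88.4, "Timestamp": 500.0},
    {"Name": "punch", "Confidence": 42.0, "Timestamp": 1000.0},
]
print(summarize_behavior(records))  # [500.0]
```

The low-confidence punch at 1000 ms is filtered out, which is useful for ignoring borderline detections when counting behaviors.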
A video containing behavior detection data is uploaded to the <code>output<\/code> folder in Amazon S3.<\/p>\n<p>When you log in to Amazon EC2, you can see that a video file merging the inferred results was created under the <code>code<\/code> folder.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17023 size-full\" title=\"Merged inferred results video file \" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/13\/9-1-3.jpg\" alt=\"\" width=\"900\" height=\"247\"><\/p>\n<p>The video file is stored in the <code>output<\/code> folder created in Amazon S3. This causes the endpoint for Amazon Rekognition Custom Labels and Amazon EC2 to stop.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17024 size-full\" title=\"Output folder in Amazon S3\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/13\/10-rekognition-custom-label.jpg\" alt=\"\" width=\"900\" height=\"272\"><\/p>\n<p>The following video is the result of detecting a specific movement (punch) of the cat:<\/p>\n<h2>Cleaning up<\/h2>\n<p>To avoid incurring future charges, delete the resources you created.<\/p>\n<h2>Conclusion and next steps<\/h2>\n<p>This solution automates the detection of specific actions in a video. In this post, we created a model to detect specific cat behaviors using Amazon Rekognition Custom Labels, but you can also use custom labels to identify cell images (such data is abundant in the research field). For example, the following screenshot shows the inferred results of a model that learned leukocytes, erythrocytes, and platelets. We had the model learn from 20 datasets, and it can now detect cells with distinctive features that are identifiable to the human eye. 
Its accuracy can increase as more high-resolution data is added and as annotations are done more carefully.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17025 size-full\" title=\"Inferred results of a model that learned leukocytes\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/13\/11-1-1.jpg\" alt=\"\" width=\"900\" height=\"368\"><\/p>\n<p>Amazon Rekognition Custom Labels has a wide range of use cases in the research field. If you want to try this in your organization and have any questions, please reach out to us or your Solutions Architects team and they will be excited to assist you.<\/p>\n<hr>\n<h3>About the Authors<\/h3>\n<p><strong><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-17026 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/13\/Hidenori.jpg\" alt=\"\" width=\"100\" height=\"134\"> Hidenori Koizumi<\/strong> is a Solutions Architect in Japan\u2019s Healthcare and Life Sciences team. He is good at developing solutions in the research field based on his scientific background (biology, chemistry, and more). His specialty is machine learning, and he has recently been developing applications using React and TypeScript. His hobbies are traveling and photography.<\/p>\n<p>\u00a0<\/p>\n<p>\u00a0<\/p>\n<p>\u00a0<\/p>\n<p>\u00a0<\/p>\n<p><strong><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-17028 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/13\/Mari.jpg\" alt=\"\" width=\"100\" height=\"121\">Mari Ohbuchi<\/strong> is a Machine Learning Solutions Architect at Amazon Web Services Japan. She worked on developing image processing algorithms for about 10 years at a manufacturing company before joining AWS. 
In her current role, she supports the implementation of machine learning solutions and the creation of prototypes for manufacturing and ISV\/SaaS customers. She is a cat lover and has published blog posts, hands-on content, and other content that involves both AWS AI\/ML services and cats.<\/p>\n<p>\u00a0<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/aws.amazon.com\/blogs\/machine-learning\/detecting-playful-animal-behavior-in-videos-using-amazon-rekognition-custom-labels\/<\/p>\n","protected":false},"author":0,"featured_media":415,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/414"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=414"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/414\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/415"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=414"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=414"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=414"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}