{"id":1377,"date":"2021-12-15T21:00:55","date_gmt":"2021-12-15T21:00:55","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2021\/12\/15\/build-a-computer-vision-model-using-amazon-rekognition-custom-labels-and-compare-the-results-with-a-custom-trained-tensorflow-model\/"},"modified":"2021-12-15T21:00:55","modified_gmt":"2021-12-15T21:00:55","slug":"build-a-computer-vision-model-using-amazon-rekognition-custom-labels-and-compare-the-results-with-a-custom-trained-tensorflow-model","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2021\/12\/15\/build-a-computer-vision-model-using-amazon-rekognition-custom-labels-and-compare-the-results-with-a-custom-trained-tensorflow-model\/","title":{"rendered":"Build a computer vision model using Amazon Rekognition Custom Labels and compare the results with a custom trained TensorFlow model"},"content":{"rendered":"<div id=\"\">\n<p>Building accurate computer vision models to detect objects in images requires deep knowledge of each step in the process\u2014from labeling, processing, and preparing the training and validation data, to making the right model choice and tuning the model\u2019s hyperparameters adequately to achieve the maximum accuracy. Fortunately, these complex steps are simplified by <a href=\"https:\/\/aws.amazon.com\/rekognition\/custom-labels-features\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Rekognition Custom Labels<\/a>, a service of <a href=\"https:\/\/aws.amazon.com\/rekognition\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Rekognition<\/a> that enables you to build your own custom computer vision models for image classification and object detection tasks without requiring any prior computer vision expertise or advanced programming skills.<\/p>\n<p>In this post, we showcase how we can train a model to detect bees in images using Amazon Rekognition Custom Labels. 
We also compare these results against a custom-trained TensorFlow model (DIY model). We use <a href=\"https:\/\/aws.amazon.com\/sagemaker\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon SageMaker<\/a> as the platform to develop and train our model. Finally, we demonstrate how to build a serverless architecture to process new images using Amazon Rekognition APIs.<\/p>\n<h2>When and where to use each model<\/h2>\n<p>Before diving deeper, it is important to understand the use cases that drive the decision of which model to use, whether it\u2019s an Amazon Rekognition Custom Labels model or a DIY model.<\/p>\n<p>Amazon Rekognition Custom Labels models are a great choice when the goal is to achieve maximum-quality results quickly. These models are heavily optimized and fine-tuned to perform with high accuracy and recall. This is a cloud service, so after the model is trained, images must be uploaded to the cloud to be analyzed. A great advantage of this service is that you don\u2019t need prior machine learning expertise to run the training pipeline. You can do it on the <a href=\"http:\/\/aws.amazon.com\/console\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Management Console<\/a> with just a few clicks, and the service takes care of the heavy lifting of training and fine-tuning the model for you. It then offers a simple set of API calls, tailored to your specific model, for you to use when needed.<\/p>\n<p>DIY models are the choice for advanced users with expertise in machine learning (ML). They allow you to control every aspect of the model, and to tune the training data and the necessary parameters as needed. This requires advanced coding skills. These models trade accuracy for latency: you can run them faster, at the expense of lower predictive performance. This lower latency suits low-bandwidth scenarios where the model needs to be deployed on the edge. 
For instance, IoT devices that support these models can host and run them and upload only the inference results to the cloud, which reduces the amount of data sent upstream.<\/p>\n<h2>Overview of solution<\/h2>\n<p>To build our DIY model, we follow the solution from the GitHub repo <a href=\"https:\/\/github.com\/aws-samples\/amazon-sagemaker-tensorflow-object-detection-api\" target=\"_blank\" rel=\"noopener noreferrer\">TensorFlow 2 Object Detection API SageMaker<\/a>, which consists of these steps:<\/p>\n<ol>\n<li>Download and prepare our bee dataset.<\/li>\n<li>Train the model using a SageMaker custom container instance.<\/li>\n<li>Test the model using a SageMaker model endpoint.<\/li>\n<\/ol>\n<p>After we have our DIY model, we can proceed with the steps to build our bee detection model using Amazon Rekognition Custom Labels:<\/p>\n<ol>\n<li>Deploy a serverless architecture using <a href=\"http:\/\/aws.amazon.com\/cloudformation\" target=\"_blank\" rel=\"noopener noreferrer\">AWS CloudFormation<\/a>.<\/li>\n<li>Download and prepare our bee dataset.<\/li>\n<li>Create a project in Amazon Rekognition Custom Labels and import the dataset.<\/li>\n<li>Train the Amazon Rekognition Custom Labels model.<\/li>\n<li>Test the Amazon Rekognition Custom Labels model through its automatically generated API endpoint, triggered by <a href=\"https:\/\/aws.amazon.com\/s3\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Simple Storage Service<\/a> (Amazon S3) events.<\/li>\n<\/ol>\n<p>Amazon Rekognition Custom Labels lets you manage the ML model training process on the <a href=\"https:\/\/console.aws.amazon.com\/rekognition\/custom-labels#\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Rekognition console<\/a>, which simplifies the end-to-end process. 
After we train both models, we can compare them.<\/p>\n<h2>Set up the environment<\/h2>\n<p>We prepare our serverless environment using the CloudFormation template on <a href=\"https:\/\/github.com\/aws-samples\/serverless-rekognition-custom-labels\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub<\/a>. On the AWS CloudFormation console, we create a new stack and use the <code>template.yaml<\/code> file present in the root folder of our code repository. We provide a unique <a href=\"http:\/\/aws.amazon.com\/s3\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon S3<\/a> bucket name when prompted; this is where our images are downloaded for further processing. We also provide a name for the inference processing <a href=\"https:\/\/aws.amazon.com\/sqs\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Simple Queue Service<\/a> (Amazon SQS) queue, as well as an <a href=\"https:\/\/aws.amazon.com\/kms\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Key Management Service<\/a> (AWS KMS) alias to securely encrypt the inference pipeline.<\/p>\n<p>The following architecture diagram shows the pipeline that detects objects in new images as they are uploaded to our bucket.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image001.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31694\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image001.jpg\" alt=\"\" width=\"701\" height=\"371\"><\/a><\/p>\n<p>Following the first notebook (<code>1_prepare_data<\/code>), we download and store our images in a bucket in Amazon S3. The dataset is already curated and annotated, and the images used have been licensed under <a href=\"https:\/\/creativecommons.org\/share-your-work\/public-domain\/cc0\/\" target=\"_blank\" rel=\"noopener noreferrer\">CC0<\/a>. 
For convenience, the dataset is stored in a single .zip archive: <a href=\"https:\/\/tf2-object-detection.s3-eu-west-1.amazonaws.com\/data\/bees\/input\/dataset.zip\" target=\"_blank\" rel=\"noopener noreferrer\">dataset.zip<\/a>.<\/p>\n<p>Inside the dataset folder, the manifest file <code>output.manifest<\/code> contains the bounding box annotations of the dataset. The Amazon S3 references of these images belong to a different S3 bucket where the images were annotated originally. To import this manifest in Amazon Rekognition Custom Labels, the notebook rewrites the manifest file according to the bucket name we chose.<\/p>\n<h2>Train your DIY model<\/h2>\n<p>To establish a comparison between a DIY and Amazon Rekognition Custom Labels model, we follow the steps in the following <a href=\"https:\/\/github.com\/aws-samples\/amazon-sagemaker-tensorflow-object-detection-api\" target=\"_blank\" rel=\"noopener noreferrer\">public repository<\/a> that demonstrates how to train a TensorFlow2 model using the same dataset.<\/p>\n<p>We follow the steps described in this repository to train an EfficientNet object detector using our bee dataset. We modify the training notebook so that it runs for 10,000 steps. 
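The manifest rewrite performed by the first notebook (1_prepare_data) can be sketched as follows. Each line of output.manifest is a JSON record whose source-ref points at the bucket where the images were originally annotated, and we repoint it at our own bucket. This is a minimal sketch: the bucket names are placeholders, and the real manifest records carry additional annotation fields that pass through unchanged.

```python
import json

def rewrite_manifest(lines, old_bucket, new_bucket):
    """Repoint each record's source-ref at our own S3 bucket, keeping object keys intact."""
    rewritten = []
    for line in lines:
        record = json.loads(line)
        record["source-ref"] = record["source-ref"].replace(
            f"s3://{old_bucket}/", f"s3://{new_bucket}/", 1
        )
        rewritten.append(json.dumps(record))
    return rewritten
```

The rewritten lines are then written back to output.manifest before importing the dataset into Amazon Rekognition Custom Labels.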
The model trains for about 2 hours, achieving an average precision of 83% and a recall of 56%.<\/p>\n<h2>Create your Amazon Rekognition Custom Labels project<\/h2>\n<p>To create your bee detection project, complete the following steps:<\/p>\n<ol>\n<li>On the Amazon Rekognition console, choose <strong>Amazon Rekognition Custom Labels<\/strong>.<\/li>\n<li>Choose <strong>Get Started<\/strong>.<\/li>\n<li>For Project name, enter <code>bee-detection<\/code>.<\/li>\n<li>Choose <strong>Create project<\/strong>.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image002.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31695\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image002.jpg\" alt=\"\" width=\"1197\" height=\"541\"><\/a><\/li>\n<\/ol>\n<h2>Import your dataset<\/h2>\n<p>We created a manifest using the first notebook (<code>1_prepare_data<\/code>) that contains the Amazon S3 URIs of our image annotations. 
We follow these steps to import our manifest into Amazon Rekognition Custom Labels:<\/p>\n<ol>\n<li>On the Amazon Rekognition Custom Labels console, choose <strong>Create dataset<\/strong>.<\/li>\n<li>Select <strong>Import images labeled by Amazon SageMaker Ground Truth<\/strong>.<\/li>\n<li>Name your dataset (for example, <code>bee_dataset<\/code>).<\/li>\n<li>Enter the <strong>Amazon S3 URI<\/strong> of the manifest file that we created.<\/li>\n<li>Copy the bucket policy that appears on the console.<\/li>\n<li>Open the Amazon S3 console in a new tab and access the bucket where the images are stored.<\/li>\n<li>On the <strong>Permissions<\/strong> tab, paste the bucket policy to allow Amazon Rekognition Custom Labels to access the dataset.<\/li>\n<li>Go back to the dataset creation console and choose <strong>Submit<\/strong>.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image003.png\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31696\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image003.png\" alt=\"\" width=\"879\" height=\"470\"><\/a><\/li>\n<\/ol>\n<h2>Train your model<\/h2>\n<p>After the dataset is imported into Amazon Rekognition Custom Labels, we can train a model immediately.<\/p>\n<ol>\n<li>Choose <strong>Train Model<\/strong> from the dataset page.<\/li>\n<li>For <strong>Choose project<\/strong>, choose your <code>bee-detection<\/code> project.<\/li>\n<li>For <strong>Choose training dataset<\/strong>, choose your <code>bee_dataset<\/code> dataset.<\/li>\n<\/ol>\n<p>As part of model training, Amazon Rekognition Custom Labels requires a labeled test dataset to validate the training. It uses the test dataset to verify how well your trained model predicts the correct labels and to generate evaluation metrics. 
Images in the test dataset are not used to train your model and should represent the same types of images you use your model to analyze.<\/p>\n<ol start=\"4\">\n<li>For <strong>Create test set<\/strong>, select how you want to provide your test dataset.<\/li>\n<\/ol>\n<p>Amazon Rekognition Custom Labels provides three options:<\/p>\n<ul>\n<li>Choose an existing test dataset<\/li>\n<li>Create a new test dataset<\/li>\n<li>Split training dataset<\/li>\n<\/ul>\n<p>For this post, we choose to split our training dataset, which sets aside 20% of our dataset for testing the model.<\/p>\n<ol start=\"5\">\n<li>Select <strong>Split training dataset.<\/strong><\/li>\n<li>Choose <strong>Train<\/strong>.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image004.png\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31697\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image004.png\" alt=\"\" width=\"879\" height=\"414\"><\/a><\/li>\n<\/ol>\n<p>Our model took approximately 1.5 hours to train. The model achieved an average precision of 99% with a recall of 90% on the test data. The training time required for your model depends on many factors, including the number of images provided in the dataset and the complexity of the model. When training is complete, Amazon Rekognition Custom Labels outputs key quality metrics including F1 score, precision, recall, and the assumed threshold for each label. For more information about metrics, see <a href=\"https:\/\/docs.aws.amazon.com\/rekognition\/latest\/customlabels-dg\/tr-metrics-use.html\" target=\"_blank\" rel=\"noopener noreferrer\">Metrics for evaluating your model<\/a>.<\/p>\n<h2>Serverless inference architecture<\/h2>\n<p>After our model is trained, Amazon Rekognition Custom Labels provides the API calls for starting, using, and stopping your model. 
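As a quick check on the reported metrics, the F1 score that Amazon Rekognition Custom Labels outputs is the harmonic mean of precision and recall. A small sketch computing it for the figures reported above (99% precision / 90% recall for Custom Labels, 83% average precision / 56% recall for the DIY model):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.99, 0.90), 3))  # Custom Labels model: 0.943
print(round(f1_score(0.83, 0.56), 3))  # DIY model: 0.669
```

The harmonic mean penalizes imbalance, so the DIY model's low recall drags its F1 well below its precision.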
In the environment setup section, we set up a serverless architecture to process test images that are uploaded to our S3 bucket via Amazon S3 events. It uses an <a href=\"https:\/\/aws.amazon.com\/lambda\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Lambda<\/a> function to call the inference API, and manages these API calls using Amazon SQS.<\/p>\n<p>We\u2019re now ready to start applying our trained model to new images. We first need to start the project model version via the Amazon Rekognition Custom Labels console.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image005.png\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31698\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image005.png\" alt=\"\" width=\"1525\" height=\"420\"><\/a><\/p>\n<p>We take note of our model\u2019s ARN and update the Lambda function <code>bee-detection-inference<\/code> with it. This tells the function which endpoint to invoke to retrieve the object detection results. We can also change the assumed threshold to accept or reject results with a low confidence score.<\/p>\n<p>Now it\u2019s time to start uploading our test images to our S3 bucket prefix (<code>s3:\/\/your-bucket\/test_images<\/code>). We can use either the Amazon S3 console or the <a href=\"http:\/\/aws.amazon.com\/cli\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Command Line Interface<\/a> (AWS CLI). We choose some test images from our bee detection dataset and upload them using the console. 
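The Lambda function's call into the model can be sketched as follows. This is a minimal sketch, not the function shipped in the repository: the model ARN, bucket, and key are placeholders, MinConfidence plays the role of the threshold discussed above, and filter_detections is our own illustrative helper.

```python
def detect_bees(bucket, key, model_arn, min_confidence=90.0):
    """Run the Custom Labels model on one S3 image and return its detections."""
    import boto3  # imported here so filter_detections stays usable without AWS credentials

    rekognition = boto3.client("rekognition")
    response = rekognition.detect_custom_labels(
        ProjectVersionArn=model_arn,
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    return response["CustomLabels"]

def filter_detections(custom_labels, threshold):
    """Keep only detections at or above a confidence threshold (in percent)."""
    return [label for label in custom_labels if label["Confidence"] >= threshold]
```

Raising or lowering the threshold passed to filter_detections changes how many bees survive into the final result, which is exactly the trade-off explored in the images that follow.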
As the images are uploaded, they\u2019re queued in Amazon SQS and then processed by our Lambda function, which leaves the result under the same file name, plus the .json suffix.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image006.png\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31699\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image006.png\" alt=\"\" width=\"879\" height=\"443\"><\/a><\/p>\n<p>We visualize the results of the JSON response from our Amazon Rekognition Custom Labels model using the second notebook (<code>2_visualize_images<\/code>). The following is an example of a response output:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">{'CustomLabels': [{'Name': 'bee',\n   'Confidence': 99.9679946899414,\n   'Geometry': {'BoundingBox': {'Width': 0.17472000420093536,\n     'Height': 0.23267999291419983,\n     'Left': 0.34907999634742737,\n     'Top': 0.36125999689102173}}}],\n 'ResponseMetadata': {'RequestId': '4f98fdc8-a7d3-4251-b21e-484baf958efb',\n  'HTTPStatusCode': 200,\n  'HTTPHeaders': {'content-type': 'application\/x-amz-json-1.1',\n   'date': 'Thu, 11 Mar 2021 15:23:39 GMT',\n   'x-amzn-requestid': '4f98fdc8-a7d3-4251-b21e-484baf958efb',\n   'content-length': '202',\n   'connection': 'keep-alive'},\n  'RetryAttempts': 0}}<\/code><\/pre>\n<\/div>\n<p>This bee is detected with a confidence of 99.97%.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image007.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31700\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image007.jpg\" alt=\"\" width=\"500\" height=\"333\"><\/a><\/p>\n<p>In the following image on the left, we 
find six bees above 99.4% confidence, which is our optimal threshold. The image on the right shows the same result with a threshold of 90% (15 bees).<\/p>\n<table width=\"696\">\n<tbody>\n<tr>\n<td><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image008.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31701\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image008.jpg\" alt=\"\" width=\"499\" height=\"373\"><\/a><\/td>\n<td><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image009.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31702\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image009.jpg\" alt=\"\" width=\"499\" height=\"373\"><\/a><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Clean up<\/h2>\n<p>When you\u2019re done, remember to follow these steps to avoid incurring unnecessary charges:<\/p>\n<ol>\n<li>Stop the model version on the Amazon Rekognition Custom Labels console.<\/li>\n<li>Empty the S3 bucket that was created for the uploaded images.<\/li>\n<li>Delete the CloudFormation stack to remove all provisioned resources.<\/li>\n<\/ol>\n<h2>Comparison with a custom DIY model<\/h2>\n<p>The performance of our Amazon Rekognition Custom Labels model is quantitatively better than that of our DIY model, achieving almost perfect precision (99%). It also avoids many false negatives, yielding a very robust recall of 90%, far above the 56% recall of our DIY model. 
This is partly due to the optimized tuning that Amazon Rekognition Custom Labels applies to the model, and to the assumed thresholds it yields after training to achieve the best performance at test time.<\/p>\n<p>For the first example, the DIY model detects our single bee at a much lower confidence score (64%), with a rather large bounding box that doesn\u2019t accurately reflect the size of the bee.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image010.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31703\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image010.jpg\" alt=\"\" width=\"556\" height=\"371\"><\/a><\/p>\n<p>For the more challenging picture, we must lower our threshold to 81% to find the very first detection (left), and lower it even more to 50% to find seven bees (right).<\/p>\n<table width=\"718\">\n<tbody>\n<tr>\n<td><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image011.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31704\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image011.jpg\" alt=\"\" width=\"557\" height=\"418\"><\/a><\/td>\n<td><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image012.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31705\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/ML-3316-image012.jpg\" alt=\"\" width=\"557\" height=\"418\"><\/a><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Playing with this threshold can be risky. 
Setting a very low threshold detects more bees (better recall), but also introduces false detections, lowering our model\u2019s precision. However, Amazon Rekognition Custom Labels detects bees with much higher confidence, which allows us to set a higher threshold for much better overall performance.<\/p>\n<h2>Conclusion<\/h2>\n<p>In this post, we showed you how to create a computer vision object detection model with Amazon Rekognition Custom Labels using annotated data, and compared the results with a custom DIY model. Amazon Rekognition Custom Labels brings a great advantage over building your own models: it enables you to build and optimize your own specialized computer vision models to detect unique objects without the need for advanced programming knowledge.<\/p>\n<p>With more experimentation on model architectures and hyperparameters, an ML scientist can improve the DIY model we tested in this post. The Amazon Rekognition Custom Labels value proposition is that it runs these experiments on your behalf, reducing both the time to get a usable model and its development costs. 
Finally, we also showed how to set up a minimal serverless architecture to process new images using our trained model.<\/p>\n<p>For more information about using custom labels, see <a href=\"https:\/\/docs.aws.amazon.com\/rekognition\/latest\/customlabels-dg\/what-is.html\" target=\"_blank\" rel=\"noopener noreferrer\">What Is Amazon Rekognition Custom Labels?<\/a><\/p>\n<hr>\n<h3>About the Author<\/h3>\n<p><strong><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/garczrau.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-31713 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/10\/garczrau.jpg\" alt=\"\" width=\"100\" height=\"133\"><\/a>Ra\u00fal D\u00edaz Garc\u00eda<\/strong> is a Sr Data Scientist in the EMEA SDT IoT Team. Ra\u00fal works with customers across the EMEA region, where he helps them enable solutions related to Computer Vision and Machine Learning in the IoT space.<\/p>\n
<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/aws.amazon.com\/blogs\/machine-learning\/build-a-computer-vision-model-using-amazon-rekognition-custom-labels-and-compare-the-results-with-a-custom-trained-tensorflow-model\/<\/p>\n","protected":false},"author":0,"featured_media":1378,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1377"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=1377"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1377\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/1378"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=1377"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=1377"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=1377"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}