{"id":1355,"date":"2021-12-14T00:02:09","date_gmt":"2021-12-14T00:02:09","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2021\/12\/14\/amazon-lookout-for-vision-now-supports-visual-inspection-of-product-defects-at-the-edge\/"},"modified":"2021-12-14T00:02:09","modified_gmt":"2021-12-14T00:02:09","slug":"amazon-lookout-for-vision-now-supports-visual-inspection-of-product-defects-at-the-edge","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2021\/12\/14\/amazon-lookout-for-vision-now-supports-visual-inspection-of-product-defects-at-the-edge\/","title":{"rendered":"Amazon Lookout for Vision now supports visual inspection of product defects at the edge"},"content":{"rendered":"<div id=\"\">\n<p>Discrete and continuous manufacturing lines generate a high volume of products at low latency, ranging from milliseconds to a few seconds. To identify defects at the same throughput as production, camera image streams must be processed at low latency. Additionally, factories may have low network bandwidth or intermittent cloud connectivity. In such scenarios, you may need to run the defect detection system on your on-premises compute infrastructure and upload the processed results to the AWS Cloud for further development and monitoring. This hybrid approach, combining local edge hardware with the cloud, can address the low-latency requirements and help reduce storage and network transfer costs to the cloud. 
This may also fulfill your data privacy and other regulatory requirements.<\/p>\n<p>In this post, we show you how to detect defective parts using <a href=\"https:\/\/aws.amazon.com\/lookout-for-vision\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Lookout for Vision<\/a> machine learning (ML) models running on your on-premises edge appliance.<\/p>\n<p>Lookout for Vision is an ML service that helps spot product defects using computer vision to automate the quality inspection process in your manufacturing lines, with no ML expertise required. The fully managed service enables you to build, train, optimize, and deploy the models in the AWS Cloud or at the edge. You can use the <a href=\"https:\/\/aws.amazon.com\/lookout-for-vision\/resources\/\" target=\"_blank\" rel=\"noopener noreferrer\">cloud APIs<\/a> or deploy Amazon Lookout for Vision models on any NVIDIA Jetson edge appliance or x86 compute platform running Linux with an NVIDIA GPU accelerator. You can use <a href=\"https:\/\/aws.amazon.com\/greengrass\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS IoT Greengrass<\/a> to deploy and manage your edge-compatible customized models on your fleet of devices.<\/p>\n<h2>Solution overview<\/h2>\n<p>In this post, we use a <a href=\"https:\/\/github.com\/aws-samples\/amazon-lookout-for-vision\/tree\/main\/circuitboard\" target=\"_blank\" rel=\"noopener noreferrer\">printed circuit board dataset<\/a> composed of normal and defective images, with defects such as scratches, solder blobs, and damaged components on the board. We train a Lookout for Vision model in the cloud to identify defective and normal printed circuit boards. We compile the model to a target ARM architecture, package the trained Lookout for Vision model as an <a href=\"https:\/\/aws.amazon.com\/greengrass\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS IoT Greengrass<\/a> component, and deploy the model to an NVIDIA Jetson edge device using the AWS IoT Greengrass console. 
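The packaging step can also be scripted with the boto3 `lookoutvision` client instead of the console. The following is a minimal sketch that builds a `StartModelPackagingJob` request for a Jetson Xavier target; the project name, component details, bucket, and compiler options are illustrative assumptions, not values from this walkthrough:

```python
import json


def build_packaging_request(project, model_version, bucket):
    """Build a StartModelPackagingJob request for a Jetson Xavier target.

    The compiler options below correspond to JetPack 4.5.1 (TensorRT 7.1.3,
    CUDA 10.2); adjust them for your device. All names are illustrative.
    """
    return {
        "ProjectName": project,
        "ModelVersion": model_version,
        "JobName": f"{project}-packaging-job",
        "Configuration": {
            "Greengrass": {
                # Jetson Xavier GPUs are compute capability 7.2 (sm_72)
                "TargetDevice": "jetson_xavier",
                "CompilerOptions": json.dumps(
                    {"gpu-code": "sm_72", "trt-ver": "7.1.3", "cuda-ver": "10.2"}
                ),
                "S3OutputLocation": {"Bucket": bucket, "Prefix": "components/"},
                "ComponentName": "ComponentCircuitBoard",
                "ComponentVersion": "1.0.0",
            }
        },
    }


request = build_packaging_request("circuitboard", "1", "my-l4v-artifacts-bucket")
# Submitting the job requires AWS credentials:
# import boto3
# boto3.client("lookoutvision").start_model_packaging_job(**request)
```

The request is kept separate from the (commented-out) API call so the configuration can be reviewed or version-controlled before submission.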
Finally, we demonstrate a Python-based sample application running on the NVIDIA Jetson edge device that sources the printed circuit board image from the edge device file system, runs the inference on the Lookout for Vision model using the <a href=\"https:\/\/grpc.io\/docs\/what-is-grpc\/introduction\/\" target=\"_blank\" rel=\"noopener noreferrer\">gRPC<\/a> interface, and sends the inference data to an MQTT topic in the AWS Cloud.<\/p>\n<p>The following diagram illustrates the solution architecture.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31188\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/1-6752-Architecture.jpg\" alt=\"\" width=\"800\" height=\"311\"><\/p>\n<p>The solution has the following workflow:<\/p>\n<ol>\n<li>Upload a training dataset to <a href=\"http:\/\/aws.amazon.com\/s3\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Simple Storage Service<\/a> (Amazon S3).<\/li>\n<li>Train a Lookout for Vision model in the cloud.<\/li>\n<li>Compile the model to the target architecture (ARM) and deploy the model to the NVIDIA Jetson edge device using the AWS IoT Greengrass console.<\/li>\n<li>Source images from local disk.<\/li>\n<li>Run inferences on the deployed model via the gRPC interface.<\/li>\n<li>Post the inference results to an MQTT client running on the edge device.<\/li>\n<li>Receive the MQTT message on a topic in <a href=\"https:\/\/aws.amazon.com\/iot-core\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS IoT Core<\/a> in the AWS Cloud for further monitoring and visualization.<\/li>\n<\/ol>\n<p>Steps 4, 5, and 6 are coordinated by the sample Python application.<\/p>\n<div class=\"hide-language\">\n<h2>Prerequisites<\/h2>\n<p>Before you get started, complete the following prerequisites:<\/p>\n<ol>\n<li><a href=\"https:\/\/aws.amazon.com\/premiumsupport\/knowledge-center\/create-and-activate-aws-account\/\">Create an AWS 
account<\/a>.<\/li>\n<li>On your NVIDIA Jetson edge device, complete the following:\n<ol type=\"a\">\n<li><a href=\"https:\/\/docs.aws.amazon.com\/lookout-for-vision\/latest\/developer-guide\/models-devices-setup-core-device.html\">Set up your edge device<\/a> (we set the IoT THING_NAME to <code>l4vJetsonXavierNx<\/code> when installing AWS IoT Greengrass V2).<\/li>\n<li>Clone the sample project containing the Python-based sample application (<code>warmup-model.py<\/code> to load the model, and <code>sample-client-file-mqtt.py<\/code> to run inferences). Install the required Python modules. See the following code:<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<div class=\"hide-language\">\n<pre class=\"unlimited-height-code\"><code class=\"lang-bash\">git clone https:\/\/github.com\/aws-samples\/ds-peoplecounter-l4v-workshop.git\ncd ds-peoplecounter-l4v-workshop \npip3 install -r requirements.txt\ncd lab2\/inference_client  \n# Replace the ENDPOINT variable in sample-client-file-mqtt.py with the \n# value on the AWS console under AWS IoT-&gt;Things-&gt;l4vJetsonXavierNx-&gt;Interact, \n# under HTTPS. It will be of the form &lt;name&gt;-ats.iot.&lt;region&gt;.amazonaws.com \n<\/code><\/pre>\n<\/p><\/div>\n<\/p><\/div>\n<h2>Dataset and model training<\/h2>\n<p>We use the <a href=\"https:\/\/github.com\/aws-samples\/amazon-lookout-for-vision\/tree\/main\/circuitboard\" target=\"_blank\" rel=\"noopener noreferrer\">printed circuit board dataset<\/a> to demonstrate the solution. The dataset contains normal and anomalous images. 
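Lookout for Vision can label images automatically on import when they are uploaded under <code>normal\/<\/code> and <code>anomaly\/<\/code> folder prefixes in S3 (workflow step 1). The following sketch maps a local dataset to such S3 keys; the bucket, prefix, and local layout are illustrative assumptions:

```python
from pathlib import Path


def s3_keys_for_dataset(local_root, prefix="circuitboard/train"):
    """Map local dataset files to S3 keys, keeping the normal/ and anomaly/
    folder names so Lookout for Vision can auto-label the images on import.

    local_root is expected to contain normal/ and anomaly/ subfolders of JPEGs.
    """
    root = Path(local_root)
    keys = {}
    for label in ("normal", "anomaly"):
        for image in sorted((root / label).glob("*.jpg")):
            keys[str(image)] = f"{prefix}/{label}/{image.name}"
    return keys


# Uploading the mapped files requires AWS credentials:
# import boto3
# s3 = boto3.client("s3")
# for local_path, key in s3_keys_for_dataset("circuitboard_dataset").items():
#     s3.upload_file(local_path, "my-l4v-dataset-bucket", key)
```

Keeping the label in the key path means no separate manifest is needed when you choose automatic labeling during dataset creation.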
Here are a few sample images from the dataset.<\/p>\n<p>The following image shows a normal printed circuit board.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31189\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/2-6752A.jpg\" alt=\"\" width=\"272\" height=\"184\"><\/p>\n<p>The following image shows a printed circuit board with scratches.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31190\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/2-6752B.jpg\" alt=\"\" width=\"250\" height=\"167\"><\/p>\n<p>The following image shows a printed circuit board with a soldering defect.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31191\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/3-6752C.jpg\" alt=\"\" width=\"251\" height=\"168\"><\/p>\n<p>To train a Lookout for Vision model, we follow the steps outlined in <a href=\"https:\/\/aws.amazon.com\/blogs\/aws\/amazon-lookout-for-vision-new-machine-learning-service-that-simplifies-defect-detection-for-manufacturing\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Lookout for Vision \u2013 New ML Service Simplifies Defect Detection for Manufacturing<\/a>. After you complete these steps, you can navigate to the project and the <strong>Models<\/strong> page to check the performance of the trained model. 
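The same performance metrics can be read programmatically with the boto3 `lookoutvision` client's `describe_model` call, for example to gate the export step in a pipeline. A sketch follows; the project name, model version, and F1 threshold are assumptions for illustration:

```python
def summarize_performance(describe_model_response, min_f1=0.9):
    """Extract precision/recall/F1 from a DescribeModel response and flag
    whether the model clears a minimum F1 bar before exporting to the edge."""
    perf = describe_model_response["ModelDescription"]["Performance"]
    summary = {
        "f1": perf["F1Score"],
        "precision": perf["Precision"],
        "recall": perf["Recall"],
    }
    summary["ready_for_edge"] = summary["f1"] >= min_f1
    return summary


# Fetching the real response requires AWS credentials:
# import boto3
# resp = boto3.client("lookoutvision").describe_model(
#     ProjectName="circuitboard", ModelVersion="1")
# print(summarize_performance(resp))

# Illustrative response shape (values are made up):
sample = {"ModelDescription": {"Performance": {
    "F1Score": 0.93, "Precision": 0.95, "Recall": 0.91}}}
print(summarize_performance(sample))
```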
You can start the process of exporting the model to the target edge device any time after the model is trained.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31192\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/4-6752-Console.jpg\" alt=\"\" width=\"800\" height=\"304\"><\/p>\n<h2>Compile and package the model as an AWS IoT Greengrass component<\/h2>\n<p>In this section, we walk through the steps to compile the printed circuit board model to our target edge device and package the model as an AWS IoT Greengrass component.<\/p>\n<ol>\n<li>On the Lookout for Vision console, choose your project.<\/li>\n<li>In the navigation pane, choose <strong>Edge model packages<\/strong>.<\/li>\n<li>Choose <strong>Create model packaging job<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31193\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/5-6752.jpg\" alt=\"\" width=\"800\" height=\"269\"><\/p>\n<ol start=\"4\">\n<li>For <strong>Job name<\/strong>, enter a name.<\/li>\n<li>For <strong>Job description<\/strong>, enter an optional description.<\/li>\n<li>Choose <strong>Browse models<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31194\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/6-6752.jpg\" alt=\"\" width=\"800\" height=\"338\"><\/p>\n<ol start=\"7\">\n<li>Select the model version (the printed circuit board model built in the previous section).<\/li>\n<li>Choose <strong>Choose<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31195\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/7-6752-Choose-model.jpg\" alt=\"\" width=\"800\" 
height=\"354\"><\/p>\n<ol start=\"9\">\n<li>Select <strong>Target device<\/strong> and enter the compiler options.<\/li>\n<\/ol>\n<p>Our target device is on JetPack 4.5.1. See <a href=\"https:\/\/docs.aws.amazon.com\/lookout-for-vision\/latest\/developer-guide\/models-devices-setup-requirements.html\" target=\"_blank\" rel=\"noopener noreferrer\">this page<\/a> for additional details on supported platforms. You can find the supported compiler options such as <code>trt-ver<\/code> and <code>cuda-ver<\/code> in the <a href=\"https:\/\/developer.nvidia.com\/jetpack-sdk-451-archive\" target=\"_blank\" rel=\"noopener noreferrer\">NVIDIA JetPack 4.5.1 archive<\/a>.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31196\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/8-6752-target.jpg\" alt=\"\" width=\"800\" height=\"191\"><\/p>\n<ol start=\"10\">\n<li>Enter the details for <strong>Component name<\/strong>, <strong>Component description<\/strong> (optional), <strong>Component version<\/strong>, and <strong>Component location<\/strong>.<\/li>\n<\/ol>\n<p>Amazon Lookout for Vision stores the component recipes and artifacts in this Amazon S3 location.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31197\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/9-6752-AWS-IoT.jpg\" alt=\"\" width=\"800\" height=\"335\"><\/p>\n<ol start=\"11\">\n<li>Choose <strong>Create model packaging job<\/strong>.<\/li>\n<\/ol>\n<p><img 
decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31198\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/10-6752-Tags.jpg\" alt=\"\" width=\"800\" height=\"165\"><\/p>\n<p>You can see your job name and status showing as <code>In progress<\/code>. The model packaging job may take a few minutes to complete.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31199\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/11-6752.jpg\" alt=\"\" width=\"800\" height=\"278\"><\/p>\n<p>When the model packaging job is complete, the status shows as <code>Success<\/code>.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31200\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/12-6752.jpg\" alt=\"\" width=\"800\" height=\"196\"><\/p>\n<ol start=\"12\">\n<li>Choose your job name (in our case it\u2019s <code>ComponentCircuitBoard<\/code>) to see the job details.<\/li>\n<\/ol>\n<p>The Greengrass component and model artifacts have been created in your AWS account.<\/p>\n<ol start=\"13\">\n<li>Choose <strong>Continue deployment to Greengrass<\/strong> to deploy the component to the target edge device.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31201\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/13-6752-ComponentCircuitBoard.jpg\" alt=\"\" width=\"800\" height=\"356\"><\/p>\n<h3>Deploy the model<\/h3>\n<p>In this section, we walk through the steps to deploy the printed circuit board model to the edge device using the AWS IoT Greengrass console.<\/p>\n<ol>\n<li>Choose <strong>Deploy<\/strong> to initiate the deployment steps.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full 
wp-image-31202\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/14-6752.jpg\" alt=\"\" width=\"800\" height=\"334\"><\/p>\n<ol start=\"2\">\n<li>Select <strong>Core device<\/strong> (because the deployment is to a single device) and enter a name for <strong>Target name<\/strong>.<\/li>\n<\/ol>\n<p>The target name is the same name you used to name the core device during the AWS IoT Greengrass V2 installation process.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31203\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/15-6752-Specify-Target.jpg\" alt=\"\" width=\"800\" height=\"443\"><\/p>\n<ol start=\"3\">\n<li>Choose your component. In our case, the component name is <code>ComponentCircuitBoard<\/code>, which contains the circuit board model.<\/li>\n<li>Choose <strong>Next<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31204\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/16-6752-Select-components.jpg\" alt=\"\" width=\"800\" height=\"433\"><\/p>\n<ol start=\"5\">\n<li>Configure the component (optional).<\/li>\n<li>Choose <strong>Next<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31205\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/17-6752-Configure-components.jpg\" alt=\"\" width=\"800\" height=\"320\"><\/p>\n<ol start=\"7\">\n<li>Expand <strong>Deployment policies.<\/strong><\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31206\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/18-6752-Configure-advanced.jpg\" alt=\"\" width=\"800\" height=\"291\"><\/p>\n<ol start=\"8\">\n<li>For 
<strong>Component update policy<\/strong>, select <strong>Notify components<\/strong>.<\/li>\n<\/ol>\n<p>This allows an already deployed component (a prior version of the component) to defer the update until it is ready to update.<\/p>\n<ol start=\"9\">\n<li>For <strong>Failure handling policy<\/strong>, select <strong>Don\u2019t roll back<\/strong>.<\/li>\n<\/ol>\n<p>If the deployment fails, this option lets us investigate the deployment errors.<\/p>\n<ol start=\"10\">\n<li>Choose <strong>Next<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31207\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/19-6752-Deployment-policies.jpg\" alt=\"\" width=\"800\" height=\"609\"><\/p>\n<ol start=\"11\">\n<li>Review the list of components that will be deployed on the target (edge) device.<\/li>\n<li>Choose <strong>Next<\/strong>.<\/li>\n<\/ol>\n<p>You should see the message <code>Deployment successfully created<\/code>.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31208\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/20-6752-Step2.jpg\" alt=\"\" width=\"800\" height=\"567\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31209\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/21-6752.jpg\" alt=\"\" width=\"800\" height=\"435\"><\/p>\n<ol start=\"13\">\n<li>To validate that the model deployment was successful, run the following command on your edge device:<\/li>\n<\/ol>\n<div class=\"hide-language\">\n<pre class=\"unlimited-height-code\"><code class=\"lang-bash\">sudo \/greengrass\/v2\/bin\/greengrass-cli component list<\/code><\/pre>\n<\/p><\/div>\n<p>You should see similar-looking output after running the <code>ComponentCircuitBoard<\/code> lifecycle startup 
script:<\/p>\n<div class=\"hide-language\">\n<div class=\"hide-language\">\n<pre class=\"unlimited-height-code\"><code class=\"lang-code\"> Components currently running in Greengrass:\n \n Component Name: aws.iot.lookoutvision.EdgeAgent\n    Version: 0.1.34\n    State: RUNNING\n    Configuration: {\"Socket\":\"unix:\/\/\/tmp\/aws.iot.lookoutvision.EdgeAgent.sock\"}\n Component Name: ComponentCircuitBoard\n    Version: 1.0.0\n    State: RUNNING\n    Configuration: {\"Autostart\":false}\n<\/code><\/pre>\n<\/p><\/div>\n<\/p><\/div>\n<h3>Run inferences on the model<\/h3>\n<p>We\u2019re now ready to run inferences on the model. On your edge device, run the following command to load the model:<\/p>\n<div class=\"hide-language\">\n<pre class=\"unlimited-height-code\"><code class=\"lang-bash\"># Run this command to load the model\n# This will put the model into the running state \npython3 warmup-model.py\n<\/code><\/pre>\n<\/p><\/div>\n<p>To generate inferences, run the following command with the path to the source images:<\/p>\n<div class=\"hide-language\">\n<pre class=\"unlimited-height-code\"><code class=\"lang-bash\">python3 sample-client-file-mqtt.py \/path\/to\/images<\/code><\/pre>\n<\/p><\/div>\n<p>The following screenshot shows that the model correctly predicts the image as anomalous (bent pin) with a confidence score of 0.999766.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31210\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/22-6752.jpg\" alt=\"\" width=\"800\" height=\"245\"><\/p>\n<p>The following screenshot shows that the model correctly predicts the image as anomalous (solder blob) with a confidence score of 0.7701461.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31211\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/23-6752.jpg\" alt=\"\" width=\"800\" 
height=\"248\"><\/p>\n<p>The following screenshot shows that the model correctly predicts the image as normal with a confidence score of 0.9568462.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31212\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/24-6752.jpg\" alt=\"\" width=\"800\" height=\"245\"><\/p>\n<p>The following screenshot shows the inference data posted to an MQTT topic in AWS IoT Core.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31213\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/25-6752.jpg\" alt=\"\" width=\"800\" height=\"488\"><\/p>\n<h2>Customer stories<\/h2>\n<p>With AWS IoT Greengrass and Amazon Lookout for Vision, you can now automate visual inspection with computer vision for processes like quality control and defect assessment \u2013 all on the edge and in real time. You can proactively identify problems such as parts damage (like dents, scratches, or poor welding), missing product components, or defects with repeating patterns, on the production line itself \u2013 saving you time and money. Customers like Tyson and Baxter are discovering the power of Amazon Lookout for Vision to increase quality and reduce operational costs by automating visual inspection.<\/p>\n<blockquote>\n<p><em>\u201cOperational excellence is a key priority at Tyson Foods. Predictive maintenance is an essential asset for achieving this objective by continuously improving overall equipment effectiveness (OEE). In 2021, Tyson Foods launched a machine learning based computer vision project to identify failing product carriers during production to prevent them from impacting Team Member safety, operations, or product quality.<\/em><\/p>\n<p><em>The models trained using Amazon Lookout for Vision performed well. The pin detection model achieved 95% accuracy across both classes. 
The Amazon Lookout for Vision model was tuned to perform at 99.1% accuracy for failing pin detection. By far the most exciting result of this project was the speedup in development time. Although this project utilizes two models and a more complex application code, it took 12% less developer time to complete. This project for monitoring the condition of the product carriers at Tyson Foods was completed in record time using AWS managed services such as Amazon Lookout for Vision.\u201d<\/em><\/p>\n<p><strong>Audrey Timmerman, Sr Applications Developer, Tyson Foods.<\/strong><\/p>\n<\/blockquote>\n<blockquote>\n<p><em>\u201cWe use Amazon Lookout for Vision to automate inspection tasks and solve complex process management problems that can\u2019t be addressed by manual inspection or traditional machine vision alone. Lookout for Vision\u2019s cloud and edge capabilities provide us the ability to leverage computer vision and AI\/ML-based solutions at scale in a rapid and agile manner, helping us to drive efficiencies on the manufacturing shop floor and enhance our operator\u2019s productivity and experience.\u201d<\/em><\/p>\n<p><strong>K. Karan, Global Senior Director \u2013 Digital Transformation, Integrated Supply Chain, Baxter International Inc.<\/strong><\/p>\n<\/blockquote>\n<h2>Conclusion<\/h2>\n<p>In this post, we described a typical scenario for industrial defect detection at the edge. We walked through the key components of the cloud and edge lifecycle with an end-to-end example using Lookout for Vision and AWS IoT Greengrass. With Lookout for Vision, we trained an anomaly detection model in the cloud using the printed circuit board dataset, compiled the model to a target architecture, and packaged the model as an AWS IoT Greengrass component. With AWS IoT Greengrass, we deployed the model to an edge device. 
We demonstrated a Python-based sample application that sources printed circuit board images from the edge device local file system, runs the inferences on the Lookout for Vision model at the edge using the gRPC interface, and sends the inference data to an MQTT topic in the AWS Cloud.<\/p>\n<p>In a future post, we will show how to run inferences on a real-time stream of images using a GStreamer media pipeline.<\/p>\n<p>Start your journey towards industrial anomaly detection and identification by visiting the <a href=\"https:\/\/aws.amazon.com\/lookout-for-vision\/resources\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Lookout for Vision<\/a> and <a href=\"https:\/\/aws.amazon.com\/greengrass\/resources\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS IoT Greengrass<\/a> resource pages.<\/p>\n<hr>\n<h3>About the Authors<\/h3>\n<p><strong><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-24592 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/05\/17\/Amit-Gupta.jpg\" alt=\"\" width=\"100\" height=\"120\">Amit Gupta<\/strong> is an AI Services Solutions Architect at AWS. He is passionate about enabling customers with well-architected machine learning solutions at scale.<\/p>\n<p><strong>\u00a0<img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-31217 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/11\/23\/Ryan-Vanderwerf.jpg\" alt=\"\" width=\"100\" height=\"129\">Ryan Vanderwerf<\/strong> is a partner solutions architect at Amazon Web Services. He previously provided Java virtual machine-focused consulting and project development as a software engineer at OCI on the Grails and Micronaut team. He was chief architect\/director of products at ReachForce, with a focus on software and system architecture for AWS Cloud SaaS solutions for marketing data management. 
Ryan has built several SaaS solutions in several domains such as financial, media, telecom, and e-learning companies since 1996.<\/p>\n<p><strong><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-31733 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/13\/Usha-Cheruku.jpg\" alt=\"\" width=\"100\" height=\"136\">Prathyusha Cheruku<\/strong> is an AI\/ML Computer Vision Product Manager at AWS. She focuses on building powerful, easy-to-use, no code\/ low code deep learning-based image and video analysis services for AWS customers.<\/p>\n<p>       <!-- '\"` -->\n      <\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/aws.amazon.com\/blogs\/machine-learning\/amazon-lookout-for-vision-now-supports-visual-inspection-of-product-defects-at-the-edge\/<\/p>\n","protected":false},"author":0,"featured_media":1356,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1355"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=1355"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1355\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/1356"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=1355"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories
?post=1355"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=1355"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}