{"id":716,"date":"2020-12-19T00:51:34","date_gmt":"2020-12-19T00:51:34","guid":{"rendered":"https:\/\/machine-learning.webcloning.com\/2020\/12\/19\/building-and-deploying-an-object-detection-computer-vision-application-at-the-edge-with-aws-panorama\/"},"modified":"2020-12-19T00:51:34","modified_gmt":"2020-12-19T00:51:34","slug":"building-and-deploying-an-object-detection-computer-vision-application-at-the-edge-with-aws-panorama","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2020\/12\/19\/building-and-deploying-an-object-detection-computer-vision-application-at-the-edge-with-aws-panorama\/","title":{"rendered":"Building and deploying an object detection computer vision application at the edge with AWS Panorama"},"content":{"rendered":"<div id=\"\">\n<p>Computer vision (CV) is sought after technology among companies looking to take advantage of machine learning (ML) to improve their business processes. Enterprises have access to large amounts of video assets from their existing cameras, but the data remains largely untapped without the right tools to gain insights from it. CV provides the tools to unlock opportunities with this data, so you can automate processes that typically require visual inspection, such as evaluating manufacturing quality or identifying bottlenecks in industrial processes. You can take advantage of CV models running in the cloud to automate these inspection tasks, but there are circumstances when relying exclusively on the cloud isn\u2019t optimal due to latency requirements or intermittent connectivity that make a round trip to the cloud infeasible.<\/p>\n<p><a href=\"https:\/\/aws.amazon.com\/panorama\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Panorama<\/a> enables you to bring CV to on-premises cameras and make predictions locally with high accuracy and low latency. 
On the AWS Panorama console, you can easily bring custom-trained models to the edge and build applications that integrate with custom business logic. You can then deploy these applications on the AWS Panorama Appliance, which auto-discovers existing IP cameras and runs the applications on video streams to make real-time predictions. You can easily integrate the inference results with other AWS services such as <a href=\"https:\/\/aws.amazon.com\/quicksight\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon QuickSight<\/a> to derive ML-powered business intelligence (BI) or route the results to your on-premises systems to trigger an immediate action.<\/p>\n<p><a href=\"https:\/\/console.aws.amazon.com\/panorama\/home\" target=\"_blank\" rel=\"noopener noreferrer\">Sign up for the preview<\/a> to learn more and start building your own CV applications.<\/p>\n<p>In this post, we look at how you can use AWS Panorama to build and deploy a parking lot car counter application.<\/p>\n<h2>Parking lot car counter application<\/h2>\n<p>Parking facilities, like the one in the image below, need to know how many cars are parked in a given facility at any point in time, to assess vacancy and take in more customers. You also want to keep track of the number of cars that enter and exit your facility during any given period. You can use this information to improve operations, such as adding more parking payment centers, optimizing price, directing cars to different floors, and more. 
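<\/p>
<p>The occupancy bookkeeping described above can be sketched in a few lines of Python; <code>lot_status<\/code> and <code>total_vacancy<\/code> are hypothetical helper names for illustration, not part of any AWS Panorama API:<\/p>

```python
# Illustrative only: derive occupancy and vacancy for one facility
# from running entry/exit counts produced by a car-counting camera.
def lot_status(entries: int, exits: int, capacity: int) -> dict:
    parked = max(0, entries - exits)  # cars currently inside
    return {"parked": parked, "vacant": max(0, capacity - parked)}

# Aggregate vacancy across facilities to decide where to route traffic.
def total_vacancy(lots: list) -> int:
    return sum(lot_status(**lot)["vacant"] for lot in lots)
```

<p>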
Parking center owners typically operate more than one facility and are looking for real-time aggregate details of vacancy in order to direct traffic to less-populated facilities and offer real-time discounts.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20054\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/1-Parking-Lot.jpg\" alt=\"\" width=\"800\" height=\"449\"><\/p>\n<p>To achieve these goals, parking centers sometimes manually count the cars to provide a tally. This inspection can be error-prone and isn\u2019t optimal for capturing real-time data. Some parking facilities install sensors that give the number of cars in a particular lot, but these sensors are typically not integrated with analytics systems to derive actionable insights.<\/p>\n<p>With the AWS Panorama Appliance, you can get a real-time count of the number of cars, collect metrics across sites, and correlate them to improve your operations. Let\u2019s see how we can solve this once-manual (and expensive) problem using CV at the edge. We go through the details of the trained model and the business logic code, and walk through the steps to create and deploy an application on your AWS Panorama Appliance Developer Kit so you can view the inferences on a connected HDMI screen.<\/p>\n<h2>Computer vision model<\/h2>\n<p>A CV model helps us extract useful information from images and video frames. We can detect and localize objects in a scene, identify and classify images, and recognize actions. 
You can choose from a variety of frameworks such as TensorFlow, MXNet, and PyTorch to build your CV models, or you can choose from a variety of pre-trained models available from AWS or from third parties such as ISVs.<\/p>\n<p>For this example, we use a pre-trained GluonCV model downloaded from the GluonCV <a href=\"https:\/\/cv.gluon.ai\/model_zoo\/detection.html\" target=\"_blank\" rel=\"noopener noreferrer\">model zoo<\/a>.<\/p>\n<p>The model we use is the <a href=\"https:\/\/cv.gluon.ai\/model_zoo\/detection.html#ssd\" target=\"_blank\" rel=\"noopener noreferrer\">ssd_512_resnet50_v1_voc<\/a> model. It\u2019s trained on the very popular <a href=\"https:\/\/pjreddie.com\/projects\/pascal-voc-dataset-mirror\/\" target=\"_blank\" rel=\"noopener noreferrer\">PASCAL VOC<\/a> dataset. It has 20 classes of objects annotated and labeled for a model to be trained on. The following code shows the classes and their indexes.<\/p>\n<div class=\"hide-language\">\n<div class=\"hide-language\">\n<pre class=\"unlimited-height-code\"><code class=\"lang-json\">voc_classes = {\r\n\t'aeroplane'\t\t: 0,\r\n\t'bicycle'\t\t: 1,\r\n\t'bird'\t\t\t: 2,\r\n\t'boat'\t\t\t: 3,\r\n\t'bottle'\t\t: 4,\r\n\t'bus'\t\t\t: 5,\r\n\t'car'\t\t\t: 6,\r\n\t'cat'\t\t\t: 7,\r\n\t'chair'\t\t\t: 8,\r\n\t'cow'\t\t\t: 9,\r\n\t'diningtable'\t: 10,\r\n\t'dog'\t\t\t: 11,\r\n\t'horse'\t\t\t: 12,\r\n\t'motorbike'\t\t: 13,\r\n\t'person'\t\t: 14,\r\n\t'pottedplant'\t: 15,\r\n\t'sheep'\t\t\t: 16,\r\n\t'sofa'\t\t\t: 17,\r\n\t'train'\t\t\t: 18,\r\n\t'tvmonitor'\t\t: 19\r\n}<\/code><\/pre>\n<\/div>\n<p class=\"unlimited-height-code\"><span><br \/>For our use case, we\u2019re detecting and counting cars. Because we\u2019re talking about cars, we use class 6 as the index in our business logic later in this post.<\/span><\/p>\n<\/div>\n<p>Our input image shape is [1, 3, 512, 512]. 
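<\/p>
<p>As a quick sanity check (plain Python for illustration, not AWS Panorama code), the car index and the expected input layout can be pinned down from the class table above:<\/p>

```python
# PASCAL VOC classes in the order used by the GluonCV model zoo.
VOC_CLASSES = [
    "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car",
    "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike",
    "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor",
]
CAR_INDEX = VOC_CLASSES.index("car")  # the index the business logic filters on

# The SSD model expects NCHW input: (batch, channels, height, width).
INPUT_SHAPE = (1, 3, 512, 512)
```

<p>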
These are the dimensions of the input image the model expects:<\/p>\n<ul>\n<li>\n<strong>Batch size<\/strong> \u2013 1<\/li>\n<li>\n<strong>Number of channels<\/strong> \u2013 3<\/li>\n<li>\n<strong>Width and height of the input image<\/strong> \u2013 512, 512<\/li>\n<\/ul>\n<h2>Uploading the model artifacts<\/h2>\n<p>We need to upload the model artifacts to an <a href=\"http:\/\/aws.amazon.com\/s3\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Simple Storage Service<\/a> (Amazon S3) bucket. The bucket name must begin with <code>aws-panorama-<\/code>. After downloading the model artifacts, we upload the <code>ssd_512_resnet50_v1_voc.tar.gz<\/code> file to the S3 bucket. To create your bucket, complete the following steps:<\/p>\n<ol>\n<li>Download the <a href=\"https:\/\/panorama-starter-kit.s3.amazonaws.com\/public\/v1\/Models\/Models.zip\" target=\"_blank\" rel=\"noopener noreferrer\">model artifacts<\/a>.<\/li>\n<li>On the Amazon S3 console, choose <strong>Create bucket<\/strong>.<\/li>\n<li>For <strong>Bucket name<\/strong>, enter a name starting with <code>aws-panorama-<\/code>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20055\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/2-Create-Bucket.jpg\" alt=\"\" width=\"800\" height=\"402\"><\/p>\n<ol start=\"4\">\n<li>Choose <strong>Create bucket<\/strong>.<\/li>\n<\/ol>\n<p>You can view the object details in the <strong>Object overview<\/strong> section. 
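<\/p>
<p>The artifact location can also be assembled and validated in code; <code>model_s3_uri<\/code> is a hypothetical helper shown for illustration, and the commented <code>boto3<\/code> call is a sketch that assumes AWS credentials are configured:<\/p>

```python
def model_s3_uri(bucket: str, key: str) -> str:
    """Build the S3 URI entered on the AWS Panorama console for the model artifact."""
    # AWS Panorama (preview) expects the bucket name to begin with "aws-panorama-".
    if not bucket.startswith("aws-panorama-"):
        raise ValueError('bucket name must begin with "aws-panorama-"')
    return f"s3://{bucket}/{key}"

# Upload sketch (requires AWS credentials; shown as comments only):
#   import boto3
#   boto3.client("s3").upload_file(
#       "ssd_512_resnet50_v1_voc.tar.gz",   # local file
#       "aws-panorama-models-bucket",       # bucket
#       "ssd_512_resnet50_v1_voc.tar.gz",   # key
#   )
```

<p>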
The model URI is <code>s3:\/\/aws-panorama-models-bucket\/ssd_512_resnet50_v1_voc.tar.gz<\/code>.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20056\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/3-Object-Overview.jpg\" alt=\"\" width=\"800\" height=\"361\"><\/p>\n<h2>The business logic code<\/h2>\n<p>After we upload the model artifacts to an S3 bucket, let\u2019s turn our attention to the business logic code. For more information about the sample developer code, see <a href=\"https:\/\/docs.aws.amazon.com\/panorama\/latest\/dev\/gettingstarted-code.html\" target=\"_blank\" rel=\"noopener noreferrer\">Sample application code<\/a>. For a comparative example of code samples, see <a href=\"https:\/\/github.com\/aws-samples\/aws-panorama-samples\/tree\/main\/PeopleCounter\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Panorama People Counter Example<\/a> on GitHub.<\/p>\n<p>Before we look at the full code, let\u2019s look at a skeleton of the business logic code we use:<\/p>\n<div class=\"hide-language\">\n<pre class=\"unlimited-height-code\"><code class=\"lang-python\">### Lambda skeleton\r\n\r\nclass car_counter(object):\r\n    def interface(self):\r\n        # defines the parameters that interface with other services from Panorama\r\n        return\r\n\r\n    def init(self, parameters, inputs, outputs):\r\n        # defines the attributes such as arrays and model objects that will be used in the application\r\n        return\r\n\r\n    def entry(self, inputs, outputs):\r\n        # defines the application logic responsible for predicting using the inputs and handles what to do\r\n        # with the outputs\r\n        return\r\n<\/code><\/pre>\n<\/div>\n<p>The business logic code and <a href=\"http:\/\/aws.amazon.com\/lambda\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Lambda<\/a> function expect to have at least the interface method, init method, and the 
entry method.<\/p>\n<p>Let\u2019s go through the python business logic code next.<\/p>\n<div class=\"hide-language\">\n<pre class=\"unlimited-height-code\"><code class=\"lang-python\">import panoramasdk\r\nimport cv2\r\nimport numpy as np\r\nimport time\r\nimport boto3\r\n\r\n# Global Variables \r\n\r\nHEIGHT = 512\r\nWIDTH = 512\r\n\r\nclass car_counter(panoramasdk.base):\r\n    \r\n    def interface(self):\r\n        return {\r\n                \"parameters\":\r\n                (\r\n                    (\"float\", \"threshold\", \"Detection threshold\", 0.10),\r\n                    (\"model\", \"car_counter\", \"Model for car counting\", \"ssd_512_resnet50_v1_voc\"), \r\n                    (\"int\", \"batch_size\", \"Model batch size\", 1),\r\n                    (\"float\", \"car_index\", \"car index based on dataset used\", 6),\r\n                ),\r\n                \"inputs\":\r\n                (\r\n                    (\"media[]\", \"video_in\", \"Camera input stream\"),\r\n                ),\r\n                \"outputs\":\r\n                (\r\n                    (\"media[video_in]\", \"video_out\", \"Camera output stream\"),\r\n                    \r\n                ) \r\n            }\r\n    \r\n            \r\n    def init(self, parameters, inputs, outputs):  \r\n        try:  \r\n            \r\n            print('Loading Model')\r\n            self.model = panoramasdk.model()\r\n            self.model.open(parameters.car_counter, 1)\r\n            print('Model Loaded')\r\n            \r\n            # Detection probability threshold.\r\n            self.threshold = parameters.threshold\r\n            # Frame Number Initialization\r\n            self.frame_num = 0\r\n            # Number of cars\r\n            self.number_cars = 0\r\n            # Bounding Box Colors\r\n            self.colours = np.random.rand(32, 3)\r\n            # Car Index for Model from parameters\r\n            self.car_index = parameters.car_index\r\n            # Set 
threshold for model from parameters \r\n            self.threshold = parameters.threshold\r\n                        \r\n            class_info = self.model.get_output(0)\r\n            prob_info = self.model.get_output(1)\r\n            rect_info = self.model.get_output(2)\r\n\r\n            self.class_array = np.empty(class_info.get_dims(), dtype=class_info.get_type())\r\n            self.prob_array = np.empty(prob_info.get_dims(), dtype=prob_info.get_type())\r\n            self.rect_array = np.empty(rect_info.get_dims(), dtype=rect_info.get_type())\r\n\r\n            return True\r\n        \r\n        except Exception as e:\r\n            print(\"Exception: {}\".format(e))\r\n            return False\r\n\r\n    def preprocess(self, img, size):\r\n        \r\n        resized = cv2.resize(img, (size, size))\r\n        mean = [0.485, 0.456, 0.406]  # RGB\r\n        std = [0.229, 0.224, 0.225]  # RGB\r\n        \r\n        # converting array of ints to floats\r\n        img = resized.astype(np.float32) \/ 255. 
\r\n        img_a = img[:, :, 0]\r\n        img_b = img[:, :, 1]\r\n        img_c = img[:, :, 2]\r\n        \r\n        # Extracting single channels from 3 channel image\r\n        # The above code could also be replaced with cv2.split(img)\r\n        # normalizing per channel data:\r\n        \r\n        img_a = (img_a - mean[0]) \/ std[0]\r\n        img_b = (img_b - mean[1]) \/ std[1]\r\n        img_c = (img_c - mean[2]) \/ std[2]\r\n        \r\n        # putting the 3 channels back together:\r\n        x1 = [[[], [], []]]\r\n        x1[0][0] = img_a\r\n        x1[0][1] = img_b\r\n        x1[0][2] = img_c\r\n        x1 = np.asarray(x1)\r\n        \r\n        return x1\r\n    \r\n    def get_number_cars(self, class_data, prob_data):\r\n        \r\n        # get indices of car detections in class data\r\n        car_indices = [i for i in range(len(class_data)) if int(class_data[i]) == self.car_index]\r\n        # use these indices to filter out anything that is less than self.threshold\r\n        prob_car_indices = [i for i in car_indices if prob_data[i] &gt;= self.threshold]\r\n        return prob_car_indices\r\n\r\n    \r\n    def entry(self, inputs, outputs):        \r\n        for i in range(len(inputs.video_in)):\r\n            stream = inputs.video_in[i]\r\n            car_image = stream.image\r\n\r\n            # Pre Process Frame\r\n            x1 = self.preprocess(car_image, 512)\r\n                                    \r\n            # Do inference on the new frame.\r\n            \r\n            self.model.batch(0, x1)        \r\n            self.model.flush()\r\n            \r\n            # Get the results.            
\r\n            resultBatchSet = self.model.get_result()\r\n            class_batch = resultBatchSet.get(0)\r\n            prob_batch = resultBatchSet.get(1)\r\n            rect_batch = resultBatchSet.get(2)\r\n\r\n            class_batch.get(0, self.class_array)\r\n            prob_batch.get(1, self.prob_array)\r\n            rect_batch.get(2, self.rect_array)\r\n\r\n            class_data = self.class_array[0]\r\n            prob_data = self.prob_array[0]\r\n            rect_data = self.rect_array[0]\r\n            \r\n            \r\n            # Get Indices of classes that correspond to Cars\r\n            car_indices = self.get_number_cars(class_data, prob_data)\r\n            \r\n            try:\r\n                self.number_cars = len(car_indices)\r\n            except:\r\n                self.number_cars = 0\r\n            \r\n            # Visualize with Opencv or stream.(media) \r\n            \r\n            # Draw Bounding boxes on HDMI output\r\n            if self.number_cars &gt; 0:\r\n                for index in car_indices:\r\n                    \r\n                    left = np.clip(rect_data[index][0] \/ np.float(HEIGHT), 0, 1)\r\n                    top = np.clip(rect_data[index][1] \/ np.float(WIDTH), 0, 1)\r\n                    right = np.clip(rect_data[index][2] \/ np.float(HEIGHT), 0, 1)\r\n                    bottom = np.clip(rect_data[index][3] \/ np.float(WIDTH), 0, 1)\r\n                    \r\n                    stream.add_rect(left, top, right, bottom)\r\n                    stream.add_label(str(prob_data[index][0]), right, bottom) \r\n                    \r\n            stream.add_label('Number of Cars : {}'.format(self.number_cars), 0.8, 0.05)\r\n        \r\n            self.model.release_result(resultBatchSet)            \r\n            outputs.video_out[i] = stream\r\n        return True\r\n\r\n\r\ndef main():\r\n        \r\n    car_counter().run()\r\nmain()<\/code><\/pre>\n<\/div>\n<p>For a full explanation of the code and 
the methods used, see the <a href=\"https:\/\/docs.aws.amazon.com\/panorama\/latest\/dev\/panorama-welcome.html\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Panorama Developer Guide<\/a>.<\/p>\n<p>The code has the following notable features:<\/p>\n<ul>\n<li>\n<strong>car_index<\/strong> \u2013 <code>6<\/code>\n<\/li>\n<li>\n<strong>model_used<\/strong><strong> \u2013<\/strong> <code>ssd_512_resnet50_v1_voc (parameters.car_counter)<\/code>\n<\/li>\n<li>\n<strong>add_label<\/strong> \u2013 Adds text to the HDMI output<\/li>\n<li>\n<strong>add_rect <\/strong>\u2013 Adds bounding boxes around the object of interest<\/li>\n<li>\n<strong> Image <\/strong>\u2013 Gets the NumPy array of the frame read from the camera<\/li>\n<\/ul>\n<p>Now that we have the code ready, we need to create a Lambda function with the preceding code.<\/p>\n<ol>\n<li>On the Lambda console, choose <strong>Functions<\/strong>.<\/li>\n<li>Choose <strong>Create function<\/strong>.<\/li>\n<li>For <strong>Function name<\/strong>, enter a name.<\/li>\n<li>Choose <strong>Create function<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20057\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/4-Basic-Info.jpg\" alt=\"\" width=\"800\" height=\"332\"><\/p>\n<ol start=\"5\">\n<li>Rename the Python file to <code>car_counter.py<\/code>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20058\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/5-Function-Code.jpg\" alt=\"\" width=\"800\" height=\"166\"><\/p>\n<ol start=\"6\">\n<li>Change the handler to <code>car_counter_main<\/code>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20059\" 
src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/6-Runtime-Settings.jpg\" alt=\"\" width=\"800\" height=\"89\"><\/p>\n<ol start=\"7\">\n<li>In the <strong>Basic settings <\/strong>section, confirm that the memory is 2048 MB and the timeout is 2 minutes.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20060\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/7-Basic-Settings.jpg\" alt=\"\" width=\"800\" height=\"293\"><\/p>\n<ol start=\"8\">\n<li>On the <strong>Actions<\/strong> menu, choose <strong>Publish new version<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20061\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/8-Throtte.jpg\" alt=\"\" width=\"800\" height=\"452\"><\/p>\n<p>We\u2019re now ready to create our application and deploy to the device. 
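<\/p>
<p>Before deploying, the core counting step (filter detections by class index, then by confidence) can be exercised off-device with synthetic model outputs; this is a standalone recreation of the logic for illustration, not the <code>panoramasdk<\/code> code itself:<\/p>

```python
CAR_INDEX = 6     # "car" in the PASCAL VOC class list
THRESHOLD = 0.10  # same default as the interface() parameters

def get_number_cars(class_data, prob_data, car_index=CAR_INDEX, threshold=THRESHOLD):
    # Keep detection indices whose class is "car" and whose score clears the threshold.
    car_indices = [i for i in range(len(class_data)) if int(class_data[i]) == car_index]
    return [i for i in car_indices if prob_data[i] >= threshold]

# Synthetic output: a confident car, a person, and a low-confidence car.
classes = [6, 14, 6]
probs = [0.92, 0.88, 0.05]
num_cars = len(get_number_cars(classes, probs))  # counts only the confident car
```

<p>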
We use the model we uploaded and the Lambda function we created in the subsequent steps.<\/p>\n<h2>Creating the application<\/h2>\n<p>To create your application, complete the following steps:<\/p>\n<ol>\n<li>On the AWS Panorama console, choose <strong>My applications<\/strong>.<\/li>\n<li>Choose <strong>Create application<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20062\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/9-Applications.jpg\" alt=\"\" width=\"800\" height=\"115\"><\/p>\n<ol start=\"3\">\n<li>Choose <strong>Begin creation<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20063\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/10-Application-Building-Blocks.jpg\" alt=\"\" width=\"800\" height=\"534\"><\/p>\n<ol start=\"4\">\n<li>For <strong>Name<\/strong>, enter <code>car_counter<\/code>.<\/li>\n<li>For <strong>Description<\/strong>, enter an optional description.<\/li>\n<li>Choose <strong>Next<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20064\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/11-Naming-our-App.jpg\" alt=\"\" width=\"800\" height=\"539\"><\/p>\n<ol start=\"7\">\n<li>Click <strong>Choose model<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20065\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/12-Choose-a-Model.jpg\" alt=\"\" width=\"800\" height=\"539\"><\/p>\n<ol start=\"8\">\n<li>For <strong>Model artifact path<\/strong>, enter the model S3 URI.<\/li>\n<li>For <strong>Model name<\/strong>\u00b8 enter the same name that you used in the business logic code.<\/li>\n<li>In the 
<strong>Input configuration<\/strong> section, choose <strong>Add input<\/strong>.<\/li>\n<li>For <strong>Input name<\/strong>, enter the input Tensor name (for this post, data).<\/li>\n<li>For <strong>Shape<\/strong>, enter the frame shape (for this post, <code>1<\/code>, <code>3<\/code>, <code>512<\/code>, <code>512<\/code>).<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20066\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/13-External-Model.jpg\" alt=\"\" width=\"800\" height=\"599\"><\/p>\n<ol start=\"13\">\n<li>Choose <strong>Next<\/strong>.<\/li>\n<li>Under <strong>Lambda functions<\/strong>, select your function (<code>CarCounter<\/code>).<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20067\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/14-Create-Application-Choose-Lambda.jpg\" alt=\"\" width=\"800\" height=\"328\"><\/p>\n<ol start=\"15\">\n<li>Choose <strong>Next<\/strong>.<\/li>\n<li>Choose <strong>Proceed to deployment<\/strong>.<\/li>\n<\/ol>\n<h2>Deploying your application<\/h2>\n<p>To deploy your new application, complete the following steps:<\/p>\n<ol>\n<li>Choose <strong>Choose appliance<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20068\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/15-Choose-Appliance.jpg\" alt=\"\" width=\"800\" height=\"540\"><\/p>\n<ol start=\"2\">\n<li>Choose the appliance you created.<\/li>\n<li>Choose <strong>Choose camera streams<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20069\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/16-Lambda-functions-use-camera.jpg\" alt=\"\" 
width=\"800\" height=\"531\"><\/p>\n<ol start=\"4\">\n<li>Select your camera stream.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-20101 size-full\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/19\/Choose-Camera-Stream.jpg\" alt=\"\" width=\"800\" height=\"319\"><\/p>\n<ol start=\"5\">\n<li>Choose <strong>Deploy<\/strong>.<\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-20100 size-full\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/19\/App-Ready-to-Deploy.jpg\" alt=\"\" width=\"800\" height=\"538\"><\/p>\n<h2>Checking the output<\/h2>\n<p>After we deploy the application, we can check the output HDMI output or use <a href=\"http:\/\/aws.amazon.com\/cloudwatch\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon CloudWatch Logs<\/a>. For more information, see <a href=\"https:\/\/docs.aws.amazon.com\/panorama\/latest\/dev\/gettingstarted-setup.html\" target=\"_blank\" rel=\"noopener noreferrer\">Setting up the AWS Panorama Appliance Developer Kit<\/a> or <a href=\"https:\/\/docs.aws.amazon.com\/panorama\/latest\/dev\/monitoring-logging.html\" target=\"_blank\" rel=\"noopener noreferrer\">Viewing AWS Panorama event logs in CloudWatch Logs<\/a>, respectively.<\/p>\n<p>If we have an HDMI output connected to the device, we should see the output from the device on the HDMI screen, as in the following screenshot.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-20072\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/19-Parking-Lot.jpg\" alt=\"\" width=\"800\" height=\"449\"><\/p>\n<p>And that\u2019s it. 
We have successfully deployed a car counting use case to the AWS Panorama Appliance.<\/p>\n<h2>Extending the solution<\/h2>\n<p>We can do so much more with this application and extend it to other parking-related use cases, such as the following:<\/p>\n<ul>\n<li>\n<strong>Parking lot routing<\/strong> \u2013 Where are the vacant parking spots?<\/li>\n<li>\n<strong>Parking lot monitoring<\/strong> \u2013 Are cars parked in appropriate spots? Are they too close to each other?<\/li>\n<\/ul>\n<p>You can integrate these use cases with other AWS services like QuickSight, S3 buckets, and MQTT, just to name a few, and get real-time inference data for monitoring cars in a parking lot.<\/p>\n<p>You can adapt this example and build other object detection applications for your use case. We will also continue to share more examples with you so you can build, develop, and test with the AWS Panorama Appliance Developer Kit.<\/p>\n<h2>Conclusion<\/h2>\n<p>The applications of computer vision at the edge are only now being imagined and built out. As a data scientist, I\u2019m very excited to be innovating in lockstep with AWS Panorama customers to help you ideate and build CV models that are uniquely tailored to solve your problems.<\/p>\n<p>And we\u2019re just scratching the surface of what\u2019s possible with CV at the edge and the AWS Panorama ecosystem.<\/p>\n<h2>Resources<\/h2>\n<p>For more information about using AWS Panorama, see the following resources:<\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<hr>\n<h3><strong>About the Author<\/strong><\/h3>\n<p><strong><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-20076 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/12\/18\/Surya-Kari.jpg\" alt=\"\" width=\"100\" height=\"137\">Surya Kari<\/strong> is a Data Scientist who works on AI devices within AWS. 
His interests lie in computer vision and autonomous systems.<\/p>\n<\/p><\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/aws.amazon.com\/blogs\/machine-learning\/building-and-deploying-an-object-detection-computer-vision-application-at-the-edge-with-aws-panorama\/<\/p>\n","protected":false},"author":0,"featured_media":717,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/716"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=716"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/716\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/717"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=716"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=716"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=716"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}