{"id":1866,"date":"2022-03-01T17:50:03","date_gmt":"2022-03-01T17:50:03","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2022\/03\/01\/ml-inferencing-at-the-edge-with-amazon-sagemaker-edge-and-ambarella-cv25\/"},"modified":"2022-03-01T17:50:03","modified_gmt":"2022-03-01T17:50:03","slug":"ml-inferencing-at-the-edge-with-amazon-sagemaker-edge-and-ambarella-cv25","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2022\/03\/01\/ml-inferencing-at-the-edge-with-amazon-sagemaker-edge-and-ambarella-cv25\/","title":{"rendered":"ML inferencing at the edge with Amazon SageMaker Edge and Ambarella CV25"},"content":{"rendered":"<div id=\"\">\n<p>Ambarella builds computer vision SoCs (system on chips) based on a very efficient AI chip architecture and CVflow that provides the Deep Neural Network (DNN) processing required for edge inferencing use cases like intelligent home monitoring and smart surveillance cameras. Developers convert models trained with frameworks (such as TensorFlow or MXNET) to Ambarella CVflow format to be able to run these models on edge devices. <a href=\"https:\/\/aws.amazon.com\/sagemaker\/edge\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon SageMaker Edge<\/a> has integrated the Ambarella toolchain into its workflow, allowing you to easily convert and optimize your models for the platform.<\/p>\n<p>In this post, we show how to set up model optimization and conversion with SageMaker Edge, add the model to your edge application, and deploy and test your new model in an Ambarella CV25 device to build a smart surveillance camera application running on the edge.<\/p>\n<h2>Smart camera use case<\/h2>\n<p>Smart security cameras have use case-specific machine learning (ML) enabled features like detecting vehicles and animals, or identifying possible suspicious behavior, parking, or zone violations. These scenarios require ML models run on the edge computing unit in the camera with the highest possible performance.<\/p>\n<p>Ambarella\u2019s CVx processors, based on the company\u2019s proprietary CVflow architecture, provide high DNN inference performance at very low power. This combination of high performance and low power makes them ideal for devices that require intelligence at the edge. ML models need to be optimized and compiled for the target platform to run on the edge. SageMaker Edge plays a key role in optimizing and converting ML models to the most popular frameworks to be able to run on the edge device.<\/p>\n<h2>Solution overview<\/h2>\n<p>Our smart security camera solution implements ML model optimization and compilation configuration, runtime operation, inference testing, and evaluation on the edge device. SageMaker Edge provides model optimization and conversion for edge devices to run faster with no loss in accuracy. The ML model can be in any framework that SageMaker Edge supports. For more information, see <a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/neo-supported-devices-edge.html\" target=\"_blank\" rel=\"noopener noreferrer\">Supported Frameworks, Devices, Systems, and Architectures<\/a>.<\/p>\n<p>The SageMaker Edge integration of Ambarella CVflow tools provides additional advantages to developers using Ambarella SoCs:<\/p>\n<ul>\n<li>Developers don\u2019t need to deal with updates and maintenance of the compiler toolchain, because the toolchain is integrated and opaque to the user<\/li>\n<li>Layers that CVflow doesn\u2019t support are automatically compiled to run on the ARM by the SageMaker Edge compiler<\/li>\n<\/ul>\n<p>The following diagram illustrates the solution architecture:<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image001.png\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-33268\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image001.png\" alt=\"\" width=\"1232\" height=\"571\"><\/a><\/p>\n<p>The steps to implement the solution are as follows:<\/p>\n<ol>\n<li>Prepare the model package.<\/li>\n<li>Configure and start the model\u2019s compilation job for Ambarella CV25.<\/li>\n<li>Place the packaged model artifacts on the device.<\/li>\n<li>Test the inference on the device.<\/li>\n<\/ol>\n<h2>Prepare the model package<\/h2>\n<p>For Ambarella targets, SageMaker Edge requires a model package that contains a model configuration file called <code>amba_config.json<\/code>, calibration images, and a trained ML model file. This model package file is a compressed TAR file (*.tar.gz). You can use an <a href=\"https:\/\/aws.amazon.com\/sagemaker\/edge\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Sagemaker<\/a> notebook instance to train and test ML models and to prepare the model package file. To create a notebook instance, complete the following steps:<\/p>\n<ol>\n<li>On the SageMaker console, under <strong>Notebook <\/strong>in the navigation pane, choose <strong>Notebook instances<\/strong>.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image003.png\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-33269\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image003.png\" alt=\"\" width=\"1257\" height=\"465\"><\/a><\/li>\n<li>Choose <strong>Create notebook instance<\/strong>.<\/li>\n<li>Enter a name for your instance and choose <strong>ml.t2.medium<\/strong> as the instance type.<\/li>\n<\/ol>\n<p>This instance is enough for testing and model preparation purposes.<\/p>\n<ol start=\"4\">\n<li>For <strong>IAM role<\/strong>, create a new <a href=\"http:\/\/aws.amazon.com\/iam\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Identity and Access Management<\/a> (IAM) role to allow access to <a href=\"http:\/\/aws.amazon.com\/s3\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Simple Storage Service<\/a> (Amazon S3) buckets, or choose an existing role.<\/li>\n<li>Keep other configurations as default and choose <strong>Create notebook instance<\/strong>.<\/li>\n<\/ol>\n<p>When the status is <code>InService<\/code>, you can start using your new Sagemaker notebook instance.<\/p>\n<ol start=\"6\">\n<li>Choose <strong>Open JupyterLab<\/strong> to access your workspace.<\/li>\n<\/ol>\n<p>For this post, we use a pre-trained TFLite model to compile and deploy to the edge device. The chosen model is a pre-trained SSD object detection model from the TensorFlow model zoo on the COCO dataset.<\/p>\n<ol start=\"7\">\n<li><a href=\"https:\/\/aws-machine-learning-blog.s3.amazonaws.com\/artifacts\/ML-7753-Amazon-SageMaker-Edge-and-Ambarella-CV25\/ssd_mobilenet_v1_coco_2018_01_28.tflite\" target=\"_blank\" rel=\"noopener noreferrer\">Download<\/a> the converted TFLite model.<\/li>\n<\/ol>\n<p>Now you\u2019re ready to download, test, and prepare the model package.<\/p>\n<ol start=\"8\">\n<li>Create a new notebook with kernel <code>conda_tensorflow2_p36<\/code> on the launcher view.<\/li>\n<li>Import the required libraries as follows:\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">import cv2\nimport numpy as np\nfrom tensorflow.lite.python.interpreter import Interpreter<\/code><\/pre>\n<\/p><\/div>\n<\/li>\n<li>Save the following example image as <code>street-frame.jpg<\/code>, create a folder called calib_img in the workspace folder, and upload the image to the current folder.<\/li>\n<li>Upload the downloaded model package contents to the current folder.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image005.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-33270\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image005.jpg\" alt=\"\" width=\"640\" height=\"360\"><\/a><\/li>\n<li>Run the following command to load your pre-trained TFLite model and print its parameters, which we need to configure our model for compilation:\n<div class=\"hide-language\">\n<pre><code class=\"lang-python\">interpreter = Interpreter(model_path='ssd_mobilenet_v1_coco_2018_01_28.tflite')\ninterpreter.allocate_tensors()\n\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\nheight = input_details[0]['shape'][1]\nwidth = input_details[0]['shape'][2]\n\nprint(\"Input name: '{}'\".format(input_details[0]['name']))\nprint(\"Input Shape: {}\".format(input_details[0]['shape'].tolist()))<\/code><\/pre>\n<p>The output contains the input name and input shape:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-bash\">Input name: 'normalized_input_image_tensor'\nInput Shape: [1, 300, 300, 3]<\/code><\/pre>\n<\/p><\/div>\n<\/p><\/div>\n<\/li>\n<li>Use following code to load the test image and run inference:\n<div class=\"hide-language\">\n<pre><code class=\"lang-python\">image = cv2.imread(\"calib_img\/street-frame.jpg\")\nimage_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\nimH, imW, _ = image.shape\nimage_resized = cv2.resize(image_rgb, (width, height))\ninput_data = np.expand_dims(image_resized, axis=0)\n\ninput_data = (np.float32(input_data) - 127.5) \/ 127.5\n\ninterpreter.set_tensor(input_details[0]['index'], input_data)\ninterpreter.invoke()\n\nboxes = interpreter.get_tensor(output_details[0]['index'])[0]\nclasses = interpreter.get_tensor(output_details[1]['index'])[0]\nscores = interpreter.get_tensor(output_details[2]['index'])[0]\nnum = interpreter.get_tensor(output_details[3]['index'])[0]<\/code><\/pre>\n<\/p><\/div>\n<\/li>\n<li>Use the following code to visualize the detected bounding boxes on the image and save the result image as <code>street-frame_results.jpg<\/code>:\n<div class=\"hide-language\">\n<pre><code class=\"lang-python\">with open('labelmap.txt', 'r') as f:\n    labels = [line.strip() for line in f.readlines()]\n\nfor i in range(len(scores)):\n    if ((scores[i] &gt; 0.1) and (scores[i] &lt;= 1.0)):\n        ymin = int(max(1, (boxes[i][0] * imH)))\n        xmin = int(max(1, (boxes[i][1] * imW)))\n        ymax = int(min(imH, (boxes[i][2] * imH)))\n        xmax = int(min(imW, (boxes[i][3] * imW)))\n\n        cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (10, 255, 0), 2)\n\n        object_name = labels[int(classes[i])]\n        label = '%s: %d%%' % (object_name, int(scores[i]*100))\n        labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2)\n        label_ymin = max(ymin, labelSize[1] + 10)\n        cv2.rectangle(image, (xmin, label_ymin-labelSize[1]-10), (xmin + labelSize[0], label_ymin+baseLine-10), (255, 255, 255), cv2.FILLED)\n        cv2.putText(image, label, (xmin, label_ymin-7), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2)\n\ncv2.imwrite('street-frame_results.jpg', image)<\/code><\/pre>\n<\/p><\/div>\n<\/li>\n<li>Use following command to show the result image:\n<div class=\"hide-language\">\n<pre><code class=\"lang-python\">Image(filename='street-frame_results.jpg') <\/code><\/pre>\n<\/p><\/div>\n<\/li>\n<\/ol>\n<p>You get an inference result like the following image.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image007.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-33271\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image007.jpg\" alt=\"\" width=\"640\" height=\"360\"><\/a><\/p>\n<p>Our pre-trained TFLite model detects the car object from a security camera frame.<\/p>\n<p>We\u2019re done with testing the model; now let\u2019s package the model and configuration files that Amazon Sagemaker Neo requires for Ambarella targets.<\/p>\n<ol start=\"16\">\n<li>Create an empty text file called <code>amba_config.json<\/code> and use the following content for it:\n<div class=\"hide-language\">\n<pre><code class=\"lang-json\">{\n    \"inputs\": {\n        \"normalized_input_image_tensor\": {\n            \"shape\": \"1, 300, 300, 3\",\n            \"filepath\": \"calib_img\/\"\n        }\n    }\n}<\/code><\/pre>\n<\/p><\/div>\n<\/li>\n<\/ol>\n<p>This file is the compilation configuration file for Ambarella CV25. The <code>filepath<\/code> value inside <code>amba_config.json<\/code> should match the <code>calib_img<\/code> folder name; a mismatch may cause a failure.<\/p>\n<p>The model package contents are now ready.<\/p>\n<ol start=\"17\">\n<li>Use the following commands to compress the package as a .tar.gz file:\n<div class=\"hide-language\">\n<pre><code class=\"lang-python\">import tarfile\nwith tarfile.open('ssd_mobilenet_v1_coco_2018_01_28.tar.gz', 'w:gz') as f:\n    f.add('calib_img\/')\n    f.add('amba_config.json')\n    f.add('ssd_mobilenet_v1_coco_2018_01_28.tflite')<\/code><\/pre>\n<\/p><\/div>\n<\/li>\n<li>Upload the file to the SageMaker auto-created S3 bucket to use in the compilation job (or your designated S3 bucket):\n<div class=\"hide-language\">\n<pre><code class=\"lang-python\">import sagemaker\nsess = sagemaker.Session()\nbucket = sess.default_bucket() \nprint(\"S3 bucket: \"+bucket)\nprefix = 'raw-models'\nmodel_path = sess.upload_data(path='ssd_mobilenet_v1_coco_2018_01_28.tar.gz', key_prefix=prefix)\nprint(\"S3 uploaded model path: \"+model_path)<\/code><\/pre>\n<\/p><\/div>\n<\/li>\n<\/ol>\n<p>The model package file contains calibration images, the compilation config file, and model files. After you upload the file to Amazon S3, you\u2019re ready to start the compilation job.<\/p>\n<h2>Compile the model for Ambarella CV25<\/h2>\n<p>To start the compilation job, complete the following steps:<\/p>\n<ol>\n<li>On the SageMaker console, under <strong>Inference<\/strong> in the navigation pane, choose <strong>Compilation jobs<\/strong>.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image009.png\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-33272\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image009.png\" alt=\"\" width=\"1256\" height=\"626\"><\/a><\/li>\n<li>Choose <strong>Create compilation job<\/strong>.<\/li>\n<li>For <strong>Job name<\/strong>, enter a name.<\/li>\n<li>For <strong>IAM role<\/strong>, create a role or choose an existing role to give Amazon S3 read and write permission for the model files.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image011.png\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-33273\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image011.png\" alt=\"\" width=\"861\" height=\"401\"><\/a><\/li>\n<li>In the <strong>Input configuration<\/strong> section, for <strong>Location of model artifacts<\/strong>, enter the S3 path of your uploaded model package file.<\/li>\n<li>For <strong>Data input configuration<\/strong>, enter <code>{\"normalized_input_image_tensor\":[1, 300, 300, 3]}<\/code>, which is the model\u2019s input data shape obtained in previous steps.<\/li>\n<li>For <strong>Machine learning framework<\/strong>, choose <strong>TFLite<\/strong>.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image013.png\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-33274\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image013.png\" alt=\"\" width=\"1866\" height=\"916\"><\/a><\/li>\n<li>In the <strong>Output configuration<\/strong> section, for <strong>Target device<\/strong>, choose your device (<code>amba_cv25<\/code>).<\/li>\n<li>For <strong>S3 Output location<\/strong>, enter a folder in your S3 bucket for the compiled model to be saved in.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image015.png\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-33276\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/ML-7753-image015.png\" alt=\"\" width=\"879\" height=\"623\"><\/a><\/li>\n<li>Choose <strong>Submit<\/strong> to start the compilation process.<\/li>\n<\/ol>\n<p>The compilation time depends on your model size and architecture. When your compiled model is ready in Amazon S3, the <strong>Status<\/strong> column shows as <code>COMPLETED<\/code>.<\/p>\n<p>If the compilation status shows <code>FAILED<\/code>, refer to <a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/neo-troubleshooting-target-devices-ambarella.html\" target=\"_blank\" rel=\"noopener noreferrer\">Troubleshoot Ambarella Errors<\/a> to debug compilation errors.<\/p>\n<h2>Place the model artifacts on the device<\/h2>\n<p>When the compilation job is complete, Neo saves the compiled package to the provided output location in the S3 bucket. The compiled model package file contains the converted and optimized model files, their configuration, and runtime files.<\/p>\n<p>On the Amazon S3 console, download the compiled model package, then extract and transfer the model artifacts to your device to start using it with your edge ML inferencing app.<\/p>\n<h2>Test the ML inference on the device<\/h2>\n<p>Navigate to your Ambarella device\u2019s terminal and run the inferencing application binary on the device. The compiled and optimized ML model runs for the specified video source. You can observe detected bounding boxes on the output stream, as shown in the following screenshot.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full\" src=\"https:\/\/s3.amazonaws.com\/aws-machine-learning-blog\/artifacts\/ml-7553-optimized-amba-footage\/optimized_amba-footage.gif\" width=\"800\" height=\"450\"><\/p>\n<h2>Conclusion<\/h2>\n<p>In this post, we accomplished ML model preparation and conversion to Ambarella targets with SageMaker Edge, which has integrated the Ambarella toolchain. Optimizing and deploying high-performance ML models to Ambarella\u2019s low-power edge devices unlocks intelligent edge solutions like smart security cameras.<\/p>\n<p>As a next step, you can get started with SageMaker Edge and Ambarella CV25 to enable ML for edge devices. You can extend this use case with Sagemaker ML development features to build an end-to-end pipeline that includes edge processing and deployment.<\/p>\n<hr>\n<h3>About the Authors<\/h3>\n<p><strong><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/Emir-Ayar.png\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-33267 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/Emir-Ayar.png\" alt=\"\" width=\"100\" height=\"131\"><\/a> Emir Ayar<\/strong>\u00a0is an Edge Prototyping Lead Architect on the AWS Prototyping team. He specializes in helping customers build IoT, Edge AI, and Industry 4.0 solutions and implement architectural best practices. He lives in Luxembourg and enjoys playing synthesizers.<\/p>\n<p><strong><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/Dinesh-Balasubramaniam.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-33266 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2022\/02\/18\/Dinesh-Balasubramaniam.jpg\" alt=\"\" width=\"100\" height=\"133\"><\/a>Dinesh Balasubramaniam <\/strong>is responsible for marketing and customer support for Ambarella\u2019s family of security SoCs, with expertise in systems engineering, software development, video compression, and product design. Dinesh He earned an MS EE degree from the University of Texas at Dallas with a focus on signal processing.<\/p>\n<p>       <!-- '\"` -->\n      <\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/aws.amazon.com\/blogs\/machine-learning\/ml-inferencing-at-the-edge-with-amazon-sagemaker-edge-and-ambarella-cv25\/<\/p>\n","protected":false},"author":0,"featured_media":1867,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1866"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=1866"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1866\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/1867"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=1866"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=1866"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=1866"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}