{"id":1091,"date":"2021-10-28T08:40:17","date_gmt":"2021-10-28T08:40:17","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2021\/10\/28\/how-iproperty-com-my-accelerates-property-based-ml-model-delivery-with-amazon-sagemaker\/"},"modified":"2021-10-28T08:40:17","modified_gmt":"2021-10-28T08:40:17","slug":"how-iproperty-com-my-accelerates-property-based-ml-model-delivery-with-amazon-sagemaker","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2021\/10\/28\/how-iproperty-com-my-accelerates-property-based-ml-model-delivery-with-amazon-sagemaker\/","title":{"rendered":"How iProperty.com.my accelerates property-based ML model delivery with Amazon SageMaker"},"content":{"rendered":"<div id=\"\">\n<p><em>This post was created in collaboration with Mohammed Alauddin, Data Engineering and Data Science Regional Manager, and Kamal Hossain, Lead Data Scientist at iProperty.com.my, now part of PropertyGuru Group. <\/em><\/p>\n<p><em>iProperty.com.my<\/em> is the market-leading property portal in Malaysia and is now part of the PropertyGuru Group. <em>iProperty.com.my<\/em> offers a search experience that enables property seekers to go through thousands of property listings available in the market. Although the search function already serves its purpose in narrowing down potential properties for consumers, <em>iProperty.com.my <\/em>continues relentlessly to look for new ways to improve the consumer search experience.<\/p>\n<p>The major driving force of reinvention for consumers within <em>iProperty.com.my<\/em> is anchored on data and machine learning (ML), with ML models being trained, retrained, and deployed for their consumers almost on a daily basis. These innovations include property viewing and location-based recommendations, which display a set of listings based on the search behavior and user profiles.<\/p>\n<p>However, with more ML workloads deployed, challenges associated with scale began to surface. 
In this post, we discuss those challenges and how the <em>iProperty.com.my<\/em> Data Science team automated their workflows using <a href=\"https:\/\/aws.amazon.com\/sagemaker\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon SageMaker<\/a>.<\/p>\n<h2>Challenges running ML projects at scale<\/h2>\n<p>When the <em>iProperty.com.my<\/em> Data Science team started their ML journey, the team\u2019s primary focus was identifying and rolling out ML features that would benefit their consumers. In <em>iProperty.com.my<\/em>, experimenting on and validating newly defined hypotheses quickly is common practice. However, as their ML footprint grew, the team\u2019s focus gradually shifted from discovering new experiences to undifferentiated heavy lifting. The following are some of the challenges they encountered:<\/p>\n<ul>\n<li><strong>Operational overhead<\/strong> \u2013 Over time, they realized they had a variety of tools and frameworks to maintain, such as scikit-learn, TensorFlow, and PyTorch, because different ML frameworks were used for different use cases. The team resorted to managing these framework updates via multiple self-managed container images, which was highly time-consuming. Keeping up with the latest release of each ML framework required frequent updates to these container images. This resulted in higher levels of maintenance, taking the team\u2019s focus away from building new experiences for their consumers.<\/li>\n<li><strong>Lack of automation and self-service capabilities<\/strong> \u2013 ML projects involved multiple teams, such as data engineering, data science, platform engineering, and product teams. Without end-to-end automation, completing the tasks required to launch a feature took more time, especially for tasks that had to be processed by multiple teams. As more projects came in, the wait time between teams increased, delaying the delivery of features to market. 
The lack of self-service capabilities also meant teams spent more time waiting on one another.<\/li>\n<li><strong>High cost<\/strong> \u2013 ML is an iterative process that requires retraining to keep models relevant. Depending on the use case and the volume of data, training can be costly because it requires powerful virtual machines. Another issue was that every deployed ML model had its own inference instance, which meant that as more ML models were deployed, the cost went up linearly.<\/li>\n<\/ul>\n<p>In light of these challenges, the team concluded they needed to rethink their process to build, train, and deploy models. They also identified the need to reevaluate their tooling to improve operational efficiency and manage their costs effectively.<\/p>\n<h2>Automating ML delivery with SageMaker<\/h2>\n<p>After much research, the team concluded that Amazon SageMaker was the most comprehensive ML platform for addressing their challenges. With SageMaker, data scientists and developers can quickly build and train ML models, and then directly deploy them into a production-ready hosted environment. It provides self-service, integrated Jupyter notebooks with easy access to data sources for exploration and analysis, without the need to manage servers. With native, prebuilt support for ML frameworks such as PyTorch, TensorFlow, and MXNet, SageMaker offers flexible distributed training options that adjust to specific workflows. Training and hosting are billed by the second, with no minimum fees and no upfront commitments. 
SageMaker also offers other attractive cost-optimization features such as <a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/model-managed-spot-training.html\" target=\"_blank\" rel=\"noopener noreferrer\">managed spot training<\/a>, which can reduce training costs by up to 90%, <a href=\"https:\/\/aws.amazon.com\/savingsplans\/ml-pricing\/\" target=\"_blank\" rel=\"noopener noreferrer\">SageMaker Savings Plans<\/a>, and <a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/multi-model-endpoints.html\" target=\"_blank\" rel=\"noopener noreferrer\">multi-model endpoints<\/a>, which enable a single host to serve multiple models.<\/p>\n<p>The final piece that wrapped everything together was the integration of SageMaker with <em>iProperty.com.my<\/em>\u2019s continuous integration and continuous delivery (CI\/CD) tooling.<\/p>\n<p>To automate their ML delivery, the team redesigned their ML workflows with SageMaker as the underlying service for model development, training, and hosting, coupled with <em>iProperty.com.my<\/em>\u2019s CI\/CD tooling to automate the steps required to release new ML application updates. In the following sections, we discuss the redesigned workflows.<\/p>\n<h2>Data preparation workflow<\/h2>\n<p>With the introduction of SageMaker, SageMaker notebooks provided self-service environments with access to preprocessed data, which allowed data scientists to move faster with the CPU or GPU resources they needed.<\/p>\n<p>The team relied on the service for data preparation and curation. It provided a unified, web-based visual interface with complete access, control, and visibility into each step required to build, train, and deploy models, without the need to set up compute instances and file storage.<\/p>\n<p>The team also uses Apache Airflow as the workflow engine to schedule and run their complex data pipelines. 
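The pipeline code itself is not shown in the post; purely as an illustration, a preprocessing task of the kind these pipelines run (joins, filtering, and enrichment, with hypothetical dataset and column names) could be sketched in pandas as follows:

```python
import pandas as pd


def preprocess_engagement(listings: pd.DataFrame, events: pd.DataFrame) -> pd.DataFrame:
    """Join, filter, and enrich raw engagement events; all column names are hypothetical."""
    # Join raw engagement events onto the property listings they refer to
    joined = events.merge(listings, on="listing_id", how="inner")
    # Filter out listings that are no longer active
    active = joined[joined["status"] == "active"]
    # Enrich each listing with a simple aggregate engagement feature
    return (
        active.groupby("listing_id")
        .agg(views=("event_type", lambda s: int((s == "view").sum())),
             price=("price", "first"))
        .reset_index()
    )
```

The output of such a task would then be written as Parquet (for example, with DataFrame.to_parquet) for the downstream workflow. 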
They use Apache Airflow to automate the initial data preprocessing workflow that makes preprocessed data available for downstream exploration.<\/p>\n<p>The following diagram illustrates the updated workflow.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/10\/07\/ML-4574-image001.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-29054\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/10\/07\/ML-4574-image001.png\" alt=\"\" width=\"721\" height=\"421\"><\/a><\/p>\n<p>The data preparation workflow has the following steps:<\/p>\n<ol>\n<li>The Data Science team inspects sample data (on their laptops) from the data lake and builds extract, transform, and load (ETL) scripts to prepare the data for downstream exploration. These scripts are uploaded to Apache Airflow.<\/li>\n<li>Multiple datasets extracted from <em>iProperty.com.my<\/em>\u2019s data lake go through multiple steps of data transformation (including joins, filtering, and enrichment). 
The initial data preprocessing workflow is orchestrated and run by Apache Airflow.<\/li>\n<li>The preprocessed data, in Parquet format, is stored and made available in an <a href=\"http:\/\/aws.amazon.com\/s3\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Simple Storage Service<\/a> (Amazon S3) engagement data bucket.<\/li>\n<li>On the SageMaker notebook instance, the Data Science team downloads the data from the engagement data S3 bucket into <a href=\"https:\/\/aws.amazon.com\/efs\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Elastic File System<\/a> (Amazon EFS) to perform local exploration and testing.<\/li>\n<li>More data exploration and data preprocessing activities take place to transform the engagement data into features that better represent the underlying problems to the predictive models.<\/li>\n<li>The curated data is stored in the curated data S3 bucket.<\/li>\n<li>After the data is prepared, the team performs <a href=\"https:\/\/aws.amazon.com\/blogs\/machine-learning\/use-the-amazon-sagemaker-local-mode-to-train-on-your-notebook-instance\/\" target=\"_blank\" rel=\"noopener noreferrer\">local ML training and inference testing<\/a> on the SageMaker notebook instance. A subset of the curated data is used during this phase.<\/li>\n<li>Steps 5, 6, and 7 are repeated iteratively until satisfactory results are achieved.<\/li>\n<\/ol>\n<h2>ML model training and deployment workflow<\/h2>\n<p>The ML model training and deployment workflow relied on the team\u2019s private Git repository to trigger the workflow implemented on the CI\/CD pipeline.<\/p>\n<p>Git served as the single source of truth for configuration settings and source code. This approach required the desired state of the system to be stored in version control, which allowed anyone to view the entire audit trail of changes. 
All changes to the desired state are captured as fully traceable commits, each associated with committer information, a commit ID, and a timestamp. This means that both the application and the infrastructure are versioned through code and can be audited using standard software development and delivery methodology.<\/p>\n<p>The following diagram illustrates this workflow.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/10\/07\/ML-4574-image003.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-29055\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/10\/07\/ML-4574-image003.png\" alt=\"\" width=\"1047\" height=\"796\"><\/a><\/p>\n<p>The workflow has the following steps:<\/p>\n<ol>\n<li>With the data curated from the data preparation workflow, local training and inference testing is performed iteratively on the SageMaker notebook instance.<\/li>\n<li>When the desired results are achieved, the Data Science team commits the configuration settings into Git. The configuration includes the following:\n<ol type=\"a\">\n<li>Data source location<\/li>\n<li>Cluster instance type and size<\/li>\n<li><a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/pre-built-containers-frameworks-deep-learning.html\" target=\"_blank\" rel=\"noopener noreferrer\">SageMaker prebuilt container image<\/a> to select the ML framework, such as PyTorch, TensorFlow, or scikit-learn<\/li>\n<li>Pricing model to select either Spot or On-Demand Instances<\/li>\n<\/ol>\n<\/li>\n<li>The Git commit triggers the CI\/CD pipeline. The CI\/CD pipeline runs a Python Boto3 script to provision the SageMaker infrastructure.<\/li>\n<li>In the development AWS account, a new SageMaker training job is provisioned with the committed configuration settings on Spot Instances. 
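The provisioning script itself is not shown in the post; as a hedged sketch, assembling and submitting such a managed spot training job with Boto3 might look like the following, where the job name, container image, role ARN, instance type, and S3 paths are all placeholders rather than the team\u2019s actual settings:

```python
def build_training_job_request(job_name: str, image_uri: str, role_arn: str,
                               input_s3: str, output_s3: str,
                               use_spot: bool = True) -> dict:
    """Assemble a CreateTrainingJob request; every value here is an
    illustrative placeholder, not iProperty.com.my's actual configuration."""
    request = {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            # A SageMaker prebuilt framework container would be selected here
            "TrainingImage": image_uri,
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,
        "InputDataConfig": [{
            "ChannelName": "training",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": input_s3,
                "S3DataDistributionType": "FullyReplicated",
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": output_s3},
        "ResourceConfig": {"InstanceType": "ml.m5.xlarge",
                           "InstanceCount": 1,
                           "VolumeSizeInGB": 50},
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }
    if use_spot:
        # Managed spot training; MaxWaitTimeInSeconds must be >= MaxRuntimeInSeconds
        request["EnableManagedSpotTraining"] = True
        request["StoppingCondition"]["MaxWaitTimeInSeconds"] = 7200
    return request


def submit_training_job(request: dict) -> None:
    # boto3 is imported here so the request builder above stays testable offline
    import boto3
    boto3.client("sagemaker").create_training_job(**request)
```

With a builder like this, switching between the Spot and On-Demand pricing models committed in Git amounts to toggling the use_spot flag. 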
The dataset is downloaded from the curated data S3 bucket into the training cluster, and training starts immediately.<\/li>\n<li>After the ML training job is complete, a model artifact is created and stored in Amazon S3. Every epoch, evaluation metric, and log entry from the training job is stored in <a href=\"http:\/\/aws.amazon.com\/cloudwatch\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon CloudWatch Logs<\/a>.<\/li>\n<li>When a model artifact is stored in Amazon S3, it triggers an event that invokes an <a href=\"http:\/\/aws.amazon.com\/lambda\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Lambda<\/a> function to send a Slack notification that the training job is complete. The notification includes a link to the training job\u2019s CloudWatch Logs for review.<\/li>\n<li>If the Data Science team is satisfied with the evaluation report, the team unblocks the CI\/CD pipeline via an approval step and kicks off a Python Boto3 script to deploy the ML model onto the SageMaker hosting infrastructure for further inference testing.<\/li>\n<li>After validation, the team raises a Git pull request to have <em>iProperty.com.my<\/em>\u2019s ML engineers perform the final review. The ML engineers may run more tests against the development environment\u2019s SageMaker inference endpoint to validate the results.<\/li>\n<li>If everything works as expected, the ML engineer merges the pull request, which triggers the CI\/CD pipeline to deploy the new model into the data production environment. The CI\/CD pipeline runs a Python script to deploy the model on the SageMaker multi-model endpoint. 
However, if there are issues with the inference results, the pull request is declined with feedback provided.<\/li>\n<li>The SageMaker hosting infrastructure is provisioned, and the CI\/CD workflow runs a health check script against the SageMaker inference endpoint to validate the endpoint\u2019s health.<\/li>\n<\/ol>\n<h2>ML model serving and API layer workflow<\/h2>\n<p>For any ML use case, before a model is served to consumers, appropriate business logic must be applied to it. The business logic wraps the ML inference output (from SageMaker) with the calculations and computations needed to meet the use case requirements. In <em>iProperty.com.my<\/em>\u2019s case, the business logic is hosted on AWS Lambda, with a separate Lambda function for every ML use case. AWS Lambda was chosen because of its simplicity and cost-effectiveness.<\/p>\n<p>Lambda allows you to run code without provisioning or managing servers, with scaling and availability handled by the service. You pay only for the compute time you consume, and there is no charge when the code isn\u2019t running.<\/p>\n<p>To manage serverless application development, <em>iProperty.com.my<\/em> uses the Serverless Framework (SLS) to develop and maintain their business logic on Lambda. 
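The business logic functions themselves are not shown in the post; a minimal, hypothetical sketch of one such Lambda function, which queries a SageMaker multi-model endpoint and applies a simple ranking step (the endpoint name, model artifact name, and payload shape are placeholders), might look like this:

```python
import json


def rank_listings(predictions: list) -> list:
    """Illustrative business logic: order listing predictions by score, highest first."""
    return sorted(predictions, key=lambda p: p["score"], reverse=True)


def lambda_handler(event, context):
    # boto3 is imported lazily so the pure business logic above stays testable offline
    import boto3
    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName="recommendations-endpoint",    # placeholder endpoint name
        TargetModel="location-recommender.tar.gz",  # placeholder artifact on the multi-model endpoint
        ContentType="application/json",
        Body=json.dumps(event["features"]),
    )
    predictions = json.loads(response["Body"].read())
    return {"statusCode": 200, "body": json.dumps(rank_listings(predictions))}
```

A function of this shape would be packaged in the Serverless Framework project and rolled out by the CI\/CD pipeline. 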
The CI\/CD pipeline deploys new updates to Lambda.<\/p>\n<p>The Lambda functions are exposed to consumers via GraphQL APIs built on <a href=\"https:\/\/aws.amazon.com\/eks\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Elastic Kubernetes Service<\/a> (Amazon EKS) with <a href=\"https:\/\/aws.amazon.com\/fargate\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Fargate<\/a>.<\/p>\n<p>The following diagram illustrates this workflow.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/10\/07\/ML-4574-image005.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-29056\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/10\/07\/ML-4574-image005.png\" alt=\"\" width=\"846\" height=\"599\"><\/a><\/p>\n<p>The workflow includes the following steps:<\/p>\n<ol>\n<li>Continuing from the ML training and deployment workflow, multiple ML models may be deployed on the SageMaker hosting infrastructure. The ML models are overlaid with relevant business logic (implemented on Lambda) before being served to consumers.<\/li>\n<li>If there are any updates to the business logic, the data scientist updates the source code in the Serverless Framework project and commits it to the Git repository.<\/li>\n<li>The Git commit triggers the CI\/CD pipeline to replace the Lambda function with the latest updates. This activity runs on the development account and is validated before being repeated on the production account.<\/li>\n<li>Multiple Lambda functions, each with its associated business logic, are deployed to query the SageMaker inference endpoints.<\/li>\n<\/ol>\n<p>For every API request made to the API layer, the GraphQL API processes the request and forwards it to the corresponding Lambda function. 
An invoked function may query one or more SageMaker inference endpoints and applies the business logic before returning a response to the requester.<\/p>\n<p>To evaluate the effectiveness of the ML models deployed, a dashboard was created that tracks metrics (such as clickthrough rate and open rate) for every ML model to visualize model performance in production. These metrics serve as a guiding light as <em>iProperty.com.my<\/em> continues to iterate on and improve the ML models.<\/p>\n<h2>Business results<\/h2>\n<p>The <em>iProperty.com.my<\/em> team observed valuable results from the improved workflows.<\/p>\n<p>\u201cBy implementing our data science workflows across SageMaker and our existing CI\/CD tools, the automation and reduction in operational overhead enabled us to focus on ML model enhancement activities, accelerating our ML models\u2019 time to market by 60%,\u201d says Mohammad Alauddin, Head of Data Science and Engineering. \u201cNot only that, with SageMaker Spot Instances, enabled with a simple switch, we were also able to reduce our data science infrastructure cost by 75%. Finally, by improving our ML models\u2019 time to market, the ability to gather our consumers\u2019 feedback was also accelerated, enabling us to tweak and improve our listing recommendations clickthrough rate by 250%.\u201d<\/p>\n<h2>Summary and next steps<\/h2>\n<p>Although the team was deeply encouraged by the business results, there is still plenty of room to improve their consumers\u2019 experience. 
They have plans to further enhance the ML model serving workflow, including <a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/model-ab-testing.html\" target=\"_blank\" rel=\"noopener noreferrer\">A\/B testing<\/a> and <a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/model-monitor.html\" target=\"_blank\" rel=\"noopener noreferrer\">model monitoring<\/a> features.<\/p>\n<p>To further reduce undifferentiated work, the team is also exploring <a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/sagemaker-projects-whatis.html\" target=\"_blank\" rel=\"noopener noreferrer\">SageMaker projects<\/a> to simplify management and maintenance of their ML workflows, and <a href=\"https:\/\/aws.amazon.com\/sagemaker\/pipelines\/\" target=\"_blank\" rel=\"noopener noreferrer\">SageMaker Pipelines<\/a> to automate steps such as data loading, data transformation, training and tuning, and deployment at scale.<\/p>\n<h2><strong>About PropertyGuru Group &amp; iProperty.com.my<\/strong><\/h2>\n<p><a href=\"https:\/\/www.iproperty.com.my\/\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>iProperty.com.my<\/strong><\/a> is headquartered in Kuala Lumpur, Malaysia and has over 200 employees. iProperty.com.my is\u00a0the market-leading property portal, offering a search experience in both English and Bahasa Malaysia. 
iProperty.com.my also provides consumer solutions such as\u00a0<a href=\"https:\/\/www.iproperty.com.my\/home-loan-eligibility\/\" target=\"_blank\" rel=\"noopener noreferrer\">LoanCare<\/a>\u00a0\u2013 a home loan eligibility indicator,\u00a0<a href=\"https:\/\/www.iproperty.com.my\/news\/\" target=\"_blank\" rel=\"noopener noreferrer\">News &amp; Lifestyle channel<\/a>\u00a0\u2013 content to enhance consumers\u2019 property journey,\u00a0<a href=\"https:\/\/www.iproperty.com.my\/events\/\" target=\"_blank\" rel=\"noopener noreferrer\">events<\/a>\u00a0\u2013 to connect property seekers with agents and developers offline, and much more.\u00a0The company is part of\u00a0PropertyGuru\u00a0Group, Southeast Asia\u2019s leading property technology company<sup>1<\/sup>.<\/p>\n<p><a href=\"https:\/\/www.propertyguru.com.my\/\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>PropertyGuru<\/strong><\/a> is Southeast Asia\u2019s leading property technology company. Founded in 2007, PropertyGuru has grown to become Southeast Asia\u2019s #1 digital property marketplace with leading positions in Singapore, Vietnam, Malaysia and Thailand. 
The Company currently hosts more than 2.8 million monthly real estate listings and serves over 50 million monthly property seekers and over 50,000 active property agents across the five largest economies in Southeast Asia \u2013 Indonesia, Malaysia, Singapore, Thailand and Vietnam.<\/p>\n<p><sup>1<\/sup>\u00a0In terms of relative engagement market share based on SimilarWeb data.<\/p>\n<p><em>The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.<\/em><\/p>\n<hr>\n<h3>About the Authors<\/h3>\n<p><strong><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/06\/11\/Mohammad-Alauddin-100.jpg\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-12905 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/06\/11\/Mohammad-Alauddin-100.jpg\" alt=\"\" width=\"100\" height=\"103\"><\/a>Mohammad Alauddin<\/strong> is the Engineering Manager for Data at PropertyGuru Group. Over the last 15 years, he\u2019s contributed to data analytics, data engineering, and machine learning projects in the Telco, Airline, and PropTech Digital Industry. He also speaks at Data &amp; AI public events. In his spare time, he enjoys indoor activities with family, reading, and watching TV.<\/p>\n<p><strong><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/10\/15\/Kamal-Headshot.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-29376 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/10\/15\/Kamal-Headshot.png\" alt=\"\" width=\"100\" height=\"84\"><\/a>Md Kamal Hossain<\/strong> is the Lead Data Scientist at PropertyGuru Group. 
He leads the Data Science Centre of Excellence (DS CoE) for ideation, design, and productionizing of end-to-end AI\/ML solutions using cloud services. Kamal has a particular interest in reinforcement learning and cognitive science. In his spare time, he likes reading and tries to keep up with his kids.<\/p>\n<p><strong><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/06\/08\/fabian-tan-100.jpg\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-12788 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/06\/08\/fabian-tan-100.jpg\" alt=\"\" width=\"100\" height=\"134\"><\/a>Fabian Tan<\/strong> is a Principal Solutions Architect at Amazon Web Services. He has a strong passion for software development, databases, data analytics, and machine learning. He works closely with the Malaysian developer community to help them bring their ideas to life.<\/p>\n
<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/aws.amazon.com\/blogs\/machine-learning\/how-iproperty-com-my-accelerates-property-based-ml-model-delivery-with-amazon-sagemaker\/<\/p>\n","protected":false},"author":0,"featured_media":1092,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1091"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=1091"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1091\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/1092"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=1091"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=1091"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=1091"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}