{"id":1031,"date":"2021-10-15T08:38:20","date_gmt":"2021-10-15T08:38:20","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2021\/10\/15\/looks-can-be-perceiving-startups-build-highly-accurate-perception-software-on-nvidia-drive\/"},"modified":"2021-10-15T08:38:20","modified_gmt":"2021-10-15T08:38:20","slug":"looks-can-be-perceiving-startups-build-highly-accurate-perception-software-on-nvidia-drive","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2021\/10\/15\/looks-can-be-perceiving-startups-build-highly-accurate-perception-software-on-nvidia-drive\/","title":{"rendered":"Looks Can Be Perceiving: Startups Build Highly Accurate Perception Software on NVIDIA DRIVE"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2021\/10\/14\/startups-perception-software-nvidia-drive\/\" data-title=\"Looks Can Be Perceiving: Startups Build Highly Accurate Perception Software on NVIDIA DRIVE\">\n<p>For autonomous vehicles, what\u2019s on the surface matters.<\/p>\n<p>While humans are taught to avoid snap judgments, self-driving cars must be able to see, detect and act quickly and accurately to operate safely. This capability requires a robust perception software stack that can comprehensively identify and track the vehicle\u2019s surrounding environment.<\/p>\n<p>Startups worldwide are developing these perception stacks, using the high-performance, energy-efficient compute of <a href=\"https:\/\/www.nvidia.com\/en-us\/self-driving-cars\/drive-platform\/hardware\/\">NVIDIA DRIVE AGX <\/a>to deliver highly accurate object detection to autonomous vehicle manufacturers.<\/p>\n<p>The NVIDIA DRIVE AGX platform processes data from a range of sensors \u2014 including camera, radar, lidar and ultrasonic \u2014 to help autonomous vehicles perceive the surrounding environment, localize to a map, then plan and execute a safe path forward. 
This AI supercomputer enables autonomous driving, in-cabin functions and driver monitoring, plus other safety features \u2014 all in a compact package.<\/p>\n<p>Hundreds of automakers, suppliers and startups are building self-driving vehicles on NVIDIA DRIVE AGX, which is why perception stack developers across the globe have chosen to develop their solutions on the AI platform.<\/p>\n<h2><b>Development From Day One<\/b><\/h2>\n<p>NVIDIA DRIVE is the place to start for companies looking to get their perception solutions up and running.<\/p>\n<p><b>aiMotive<\/b>, a self-driving startup based out of Hungary, has built a modular software stack, called aiDrive, that delivers comprehensive perception capabilities for automated driving solutions.<\/p>\n<p>The company first began building out its solution on a compute platform with NVIDIA DRIVE in 2016. With high-performance, energy-efficient compute, aiDrive can perform perception using mono, stereo and fisheye cameras, as well as fuse data from radar, lidar and other sensors for a flexible and scalable solution.<\/p>\n<figure id=\"attachment_53243\" aria-describedby=\"caption-attachment-53243\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2021\/10\/aimotive.png\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2021\/10\/aimotive-672x353.png\" alt=\"\" width=\"672\" height=\"353\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-53243\" class=\"wp-caption-text\">aiDrive fuses data from vehicle sensors for flexible, scalable perception.<\/figcaption><\/figure>\n<p>\u201cWe\u2019ve been using NVIDIA DRIVE from day one,\u201d said P\u00e9ter Kov\u00e1cs, aiMotive\u2019s senior vice president of aiDrive. 
\u201cThe platforms work turnkey and support easy cross-target development \u2014 it\u2019s also a technology ecosystem that developers are familiar with.\u201d<\/p>\n<p><b>StradVision<\/b>, a startup based in South Korea, was formed by perception specialists in 2014 to build advanced driver assistance systems at scale. By developing on NVIDIA DRIVE AGX, the company has deployed a robust perception deep neural network for AI-assisted driving platforms.<\/p>\n<figure id=\"attachment_53246\" aria-describedby=\"caption-attachment-53246\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2021\/10\/3.jpg\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2021\/10\/3-672x380.jpg\" alt=\"\" width=\"672\" height=\"380\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-53246\" class=\"wp-caption-text\">StradVision\u2019s robust perception solution operates in poor weather conditions, such as snow.<\/figcaption><\/figure>\n<p>The DNN, known as SVNet, is one of the few networks that meet the accuracy and computational requirements of production vehicles.<\/p>\n<h2><b>Performance at Every Level<\/b><\/h2>\n<p>Even for lower levels of autonomy, such as ADAS or AI-assisted driving, a robust perception stack is critical for safety.<\/p>\n<figure id=\"attachment_53249\" aria-describedby=\"caption-attachment-53249\" class=\"wp-caption alignleft\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2021\/10\/PhantomVision_Final.png\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2021\/10\/PhantomVision_Final-400x274.png\" alt=\"\" width=\"400\" height=\"274\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-53249\" class=\"wp-caption-text\">PhantomVision uses cameras around the vehicle for complete, 360-degree coverage.<\/figcaption><\/figure>\n<p>Silicon Valley startup <b>Phantom AI <\/b>has leveraged years of 
automotive and tech industry experience to develop an intelligent perception stack that can predict object motion. The computer vision solution, known as PhantomVision, covers a 360-degree view around the vehicle with a combination of front-, side- and rear-view cameras.<\/p>\n<p>The real-time detection and target tracking on the bird\u2019s-eye view provide an accurate motion estimate of road objects. The high-performance processing of DRIVE AGX enables the software to perform live perception functions.<\/p>\n<figure id=\"attachment_53255\" aria-describedby=\"caption-attachment-53255\" class=\"wp-caption alignright\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2021\/10\/CalmCar.jpg\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2021\/10\/CalmCar.jpg\" alt=\"\" width=\"417\" height=\"266\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-53255\" class=\"wp-caption-text\">CalmCar\u2019s perception solution emphasizes safety, with automotive-grade compute at its core.<\/figcaption><\/figure>\n<p>With the mission of creating a safer road environment for all users, Chinese startup <b>CalmCar<\/b> has built a multi-camera active surround view perception system. 
With the automotive-grade <a href=\"https:\/\/www.nvidia.com\/en-us\/self-driving-cars\/drive-platform\/hardware\/\">NVIDIA DRIVE Xavier<\/a> at its core, CalmCar\u2019s solution enables <a href=\"https:\/\/blogs.nvidia.com\/blog\/2019\/02\/06\/what-is-level-2-automated-driving\/\">Level 2+ driving<\/a>, valet parking and mapping.<\/p>\n<p>By developing comprehensive solutions on NVIDIA DRIVE, these startups are delivering accurate and robust perception to AI-assisted and autonomous vehicles around the world.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>http:\/\/feedproxy.google.com\/~r\/nvidiablog\/~3\/HrKwcyeIzMY\/<\/p>\n","protected":false},"author":0,"featured_media":1032,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1031"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=1031"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1031\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/1032"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=1031"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=1031"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=1031"}],"curies":[{"name":"wp","href":"https:\/\
/api.w.org\/{rel}","templated":true}]}}