{"id":3765,"date":"2024-10-22T08:43:56","date_gmt":"2024-10-22T08:43:56","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2024\/10\/22\/nvidia-brings-generative-ai-tools-simulation-and-perception-workflows-to-ros-developer-ecosystem\/"},"modified":"2024-10-22T08:43:56","modified_gmt":"2024-10-22T08:43:56","slug":"nvidia-brings-generative-ai-tools-simulation-and-perception-workflows-to-ros-developer-ecosystem","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2024\/10\/22\/nvidia-brings-generative-ai-tools-simulation-and-perception-workflows-to-ros-developer-ecosystem\/","title":{"rendered":"NVIDIA Brings Generative AI Tools, Simulation and Perception Workflows to ROS Developer Ecosystem"},"content":{"rendered":"<div>\n\t\t<span class=\"bsf-rt-reading-time\"><span class=\"bsf-rt-display-label\"><\/span> <span class=\"bsf-rt-display-time\"><\/span> <span class=\"bsf-rt-display-postfix\"><\/span><\/span><\/p>\n<p>At ROSCon in Odense, one of Denmark\u2019s oldest cities and a hub of automation, NVIDIA and its robotics ecosystem partners announced generative AI tools ,simulation, and perception workflows for Robot Operating System (ROS) developers.<\/p>\n<p>Among the reveals were new generative AI nodes and workflows for ROS developers deploying to the <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/bringing-generative-ai-to-life-with-jetson\/\" rel=\"noopener\">NVIDIA Jetson platform<\/a> for edge AI and robotics. 
<a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/generative-ai\/\" rel=\"noopener\">Generative AI<\/a> enables robots to perceive and understand the context of their surroundings, communicate naturally with humans and make adaptive decisions autonomously.<\/p>\n<h2><b>Generative AI Comes to ROS Community<\/b><\/h2>\n<p><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/using-generative-ai-to-enable-robots-to-reason-and-act-with-remembr\/\" rel=\"noopener\">ReMEmbR<\/a>, built on ROS 2, uses generative AI to enhance robotic reasoning and action. It combines <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/large-language-models\/\" rel=\"noopener\">large language models<\/a> (LLMs), vision language models (VLMs) and <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/retrieval-augmented-generation\/\" rel=\"noopener\">retrieval-augmented generation<\/a> to allow robots to build and query long-term semantic memories and improve their ability to navigate and interact with their environments.<\/p>\n<p>The speech recognition capability is powered by the <a target=\"_blank\" href=\"https:\/\/github.com\/NVIDIA-AI-IOT\/whisper_trt\/tree\/main\/examples\/ros2\" rel=\"noopener\">WhisperTRT ROS 2 node<\/a>. 
This node uses <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/tensorrt\" rel=\"noopener\">NVIDIA TensorRT<\/a> to optimize <a target=\"_blank\" href=\"https:\/\/openai.com\/index\/whisper\/\" rel=\"noopener\">OpenAI\u2019s Whisper model<\/a> for low-latency inference on NVIDIA Jetson, enabling responsive human-robot interaction.<\/p>\n<p>The <a target=\"_blank\" href=\"https:\/\/forums.developer.nvidia.com\/t\/jetbot-voice-activated-copilot-tools-with-nvidia-riva-and-nanollm-container-for-ros2-robot-version-2-0\/307490\" rel=\"noopener\">ROS 2 robots with voice control<\/a> project uses the <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/riva\" rel=\"noopener\">NVIDIA Riva<\/a> ASR-TTS service to make robots understand and respond to spoken commands. The NASA Jet Propulsion Laboratory independently demonstrated <a target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2410.06472v1\" rel=\"noopener\">ROSA<\/a>, an AI-powered agent for ROS, operating on its <a target=\"_blank\" href=\"https:\/\/www.jpl.nasa.gov\/robotics-at-jpl\/nebula-spot\/\" rel=\"noopener\">Nebula-SPOT robot<\/a> and the <a target=\"_blank\" href=\"https:\/\/docs.omniverse.nvidia.com\/isaacsim\/latest\/landing_pages\/nova_carter_landing_page.html\" rel=\"noopener\">NVIDIA Nova Carter<\/a> robot in <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/isaac\/sim\" rel=\"noopener\">NVIDIA Isaac Sim<\/a>.<\/p>\n<p>At ROSCon, <a target=\"_blank\" href=\"https:\/\/canonical.com\/blog\/roscon24\" rel=\"noopener\">Canonical<\/a> is demonstrating <a target=\"_blank\" href=\"https:\/\/github.com\/NVIDIA-AI-IOT\/ROS2-NanoOWL\" rel=\"noopener\">NanoOWL<\/a>, a zero-shot object detection model running on the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/autonomous-machines\/embedded-systems\/jetson-orin\/\" rel=\"noopener\">NVIDIA Jetson Orin Nano<\/a> system-on-module. 
It allows robots to identify a broad range of objects in real time, without relying on predefined categories.<\/p>\n<p>Developers can get started today with <a target=\"_blank\" href=\"https:\/\/www.jetson-ai-lab.com\/ros.html\" rel=\"noopener\">ROS 2 Nodes for Generative AI<\/a>, which bring NVIDIA Jetson-optimized LLMs and VLMs to enhance robot capabilities.<\/p>\n<h2><b>Enhancing ROS Workflows With a \u2018Sim-First\u2019 Approach<\/b><\/h2>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-medium wp-image-74715\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/10\/roscon24-sim-ros-blog-1920x1080-1-960x540.jpg\" alt=\"\" width=\"960\" height=\"540\"><\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/use-cases\/robotics-simulation\/\" rel=\"noopener\">Simulation<\/a> is critical to safely test and validate AI-enabled robots before deployment. NVIDIA Isaac Sim, a robotics simulation platform built on <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\" rel=\"noopener\">OpenUSD<\/a>, provides ROS developers a virtual environment to test robots by easily connecting them to their ROS packages. 
A new <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/a-beginners-guide-to-simulating-and-testing-robots-with-ros-2-and-nvidia-isaac-sim\/\" rel=\"noopener\">Beginner\u2019s Guide to ROS 2 Workflows With Isaac Sim<\/a>, which illustrates the end-to-end workflow for robot simulation and testing, is now available.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/foxglove.dev\/\" rel=\"noopener\">Foxglove<\/a>, a member of the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/startups\/\" rel=\"noopener\">NVIDIA Inception<\/a> program for startups, <a target=\"_blank\" href=\"https:\/\/foxglove.dev\/blog\/realtime-isaac-sim-data-visualization-using-foxglove\" rel=\"noopener\">demonstrated an integration<\/a> that helps developers visualize and debug simulation data in real time using Foxglove\u2019s custom extension, built on Isaac Sim.<\/p>\n<h2><b>New Capabilities for Isaac ROS 3.2<\/b><\/h2>\n<p><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/isaac\/ros\" rel=\"noopener\">NVIDIA Isaac ROS<\/a>, built on the open-source <a target=\"_blank\" href=\"https:\/\/www.ros.org\/\" rel=\"noopener\">ROS 2<\/a> software framework, is a suite of accelerated computing packages and AI models for robotics development. 
The upcoming 3.2 release enhances robot perception, manipulation and environment mapping.<\/p>\n<p>Key improvements to <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/isaac\/manipulator\" rel=\"noopener\">NVIDIA Isaac Manipulator<\/a> include new reference workflows that integrate FoundationPose and cuMotion to accelerate development of pick-and-place and object-following pipelines in robotics.<\/p>\n<p>Improvements also come to <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/isaac\/perceptor\" rel=\"noopener\">NVIDIA Isaac Perceptor<\/a>, which adds a new visual SLAM reference workflow along with enhanced multi-camera detection and 3D reconstruction to improve an autonomous mobile robot\u2019s (AMR) environmental awareness and performance in dynamic settings like warehouses.<\/p>\n<h2><b>Partners Adopting NVIDIA Isaac<\/b><\/h2>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-medium wp-image-74718\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/10\/IMTS_AI_demo-7-960x540.jpg\" alt=\"\" width=\"960\" height=\"540\"><\/p>\n<p>Robotics companies are integrating NVIDIA Isaac accelerated libraries and AI models into their platforms.<\/p>\n<ul>\n<li><b>Universal Robots, a Teradyne Robotics company,<\/b> launched a <a target=\"_blank\" href=\"https:\/\/www.universal-robots.com\/about-universal-robots\/news-centre\/universal-robots-unveils-its-ai-accelerator\" rel=\"noopener\">new AI Accelerator toolkit<\/a> to enable the development of AI-powered cobot applications.<\/li>\n<li><b>Miso Robotics<\/b> is using Isaac ROS to speed up its AI-powered <a target=\"_blank\" href=\"https:\/\/misorobotics.com\/newsroom\/miso-to-enhance-kitchen-automation-using-nvidia-isaac-robotics-platform-and-ai-powered-vision-technology\/\" rel=\"noopener\">robotic french fry-making Flippy Fry Station<\/a> and drive advances in efficiency and accuracy in food service automation.<\/li>\n<li><a target=\"_blank\" 
href=\"https:\/\/www.wheel.me\/en-us\/solutions\" rel=\"noopener\"><b>Wheel.me<\/b><\/a> is partnering with <b>RGo Robotics<\/b> and NVIDIA to create a <a target=\"_blank\" href=\"https:\/\/www.rgorobotics.ai\/post\/evolutionizing-autonomous-mobile-robots-rgo-nvidia\" rel=\"noopener\">production-ready AMR<\/a> using Isaac Perceptor.<\/li>\n<li><b>Main Street Autonomy<\/b> is using Isaac Perceptor to <a target=\"_blank\" href=\"https:\/\/mainstreetautonomy.com\/blog\/2024-10-22-how-to-calibrate-sensors-with-msa-calibration-anywhere-for-nvidia-isaac-perceptor\/\" rel=\"noopener\">streamline sensor calibration<\/a>.<\/li>\n<li><b>Orbbec<\/b> announced its Perceptor Developer Kit, an out-of-the-box AMR solution for Isaac Perceptor.<\/li>\n<li><b>LIPS Corporation<\/b> has introduced a <a target=\"_blank\" href=\"https:\/\/bit.ly\/3UCCj39\" rel=\"noopener\">multi-camera perception devkit<\/a> for improved AMR navigation.<\/li>\n<li><b>Canonical<\/b> highlighted a fully certified Ubuntu environment for ROS developers, offering long-term support out of the box.<br \/>\n<h2><\/h2>\n<h2>Connecting With Partners at ROSCon<\/h2>\n<p>ROS community members and partners, including Canonical, Ekumen, Foxglove, Intrinsic, Open Navigation, Siemens and Teradyne Robotics, will be in Denmark presenting workshops, talks, booth demos and sessions. 
Highlights include:<\/p>\n<\/li>\n<li>\u201cNav2 User Meetup\u201d Birds of a Feather session with Steve Macenski from Open Navigation LLC<\/li>\n<li>\u201cROS in Large-Scale Factory Automation\u201d with Michael Gentner from BMW AG and Carsten Braunroth from Siemens AG<\/li>\n<li>\u201cIntegrating AI in Robot Manipulation Workflows\u201d Birds of a Feather session with Kalyan Vadrevu from NVIDIA<\/li>\n<li>\u201cAccelerating Robot Learning at Scale in Simulation\u201d Birds of a Feather session with Markus Wuensch from NVIDIA<\/li>\n<li>\u201cOn Use of Nav2 Docking\u201d with Open Navigation\u2019s Macenski<\/li>\n<\/ul>\n<p>Additionally, <a href=\"https:\/\/blogs.nvidia.com\/blog\/nvidia-teradyne-siemens-robotics-autonomous-machines-ai\/\">Teradyne Robotics<\/a> and NVIDIA are co-hosting a lunch and evening reception on Tuesday, Oct. 22, in Odense, Denmark.<\/p>\n<p>The Open Source Robotics Foundation (OSRF) organizes ROSCon. NVIDIA is a supporter of <a target=\"_blank\" href=\"https:\/\/www.openrobotics.org\/blog\/2024\/10\/21\/roscon2024pr\" rel=\"noopener\">Open Robotics<\/a>, the umbrella organization for OSRF and all its initiatives.<\/p>\n<p>For the latest updates, visit the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/events\/roscon\/\" rel=\"noopener\">ROSCon 
page<\/a>.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/generative-ai-simulation-roscon\/<\/p>\n","protected":false},"author":0,"featured_media":3766,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3765"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3765"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3765\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3766"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3765"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3765"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3765"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}