{"id":3827,"date":"2024-12-11T14:43:59","date_gmt":"2024-12-11T14:43:59","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2024\/12\/11\/into-the-omniverse-how-openusd-based-simulation-and-synthetic-data-generation-advance-robot-learning\/"},"modified":"2024-12-11T14:43:59","modified_gmt":"2024-12-11T14:43:59","slug":"into-the-omniverse-how-openusd-based-simulation-and-synthetic-data-generation-advance-robot-learning","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2024\/12\/11\/into-the-omniverse-how-openusd-based-simulation-and-synthetic-data-generation-advance-robot-learning\/","title":{"rendered":"Into the Omniverse: How OpenUSD-Based Simulation and Synthetic Data Generation Advance Robot Learning"},"content":{"rendered":"<div>\n<p><i>Editor\u2019s note: This post is part of <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/news\/\" rel=\"noopener\"><i>Into the Omniverse<\/i><\/a><i>, a series focused on how developers, 3D practitioners, and enterprises can transform their workflows using the latest advances in <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\" rel=\"noopener\"><i>OpenUSD<\/i><\/a><i> and <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/\" rel=\"noopener\"><i>NVIDIA Omniverse<\/i><\/a><i>.<\/i><\/p>\n<p>Scalable simulation technologies are driving the future of autonomous robotics by reducing development time and costs.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\" rel=\"noopener\">Universal Scene Description (OpenUSD)<\/a> provides a scalable and interoperable data framework for developing virtual worlds where robots can learn how to be robots. 
With <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/omniverse\/simready-assets\" rel=\"noopener\">SimReady<\/a> OpenUSD-based simulations, developers can create limitless scenarios based on the physical world.<\/p>\n<p>And <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/isaac\/sim\" rel=\"noopener\">NVIDIA Isaac Sim<\/a> is advancing perception AI-based <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/use-cases\/robotics-simulation\/\" rel=\"noopener\">robotics simulation<\/a>. Isaac Sim is a reference application built on the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/\" rel=\"noopener\">NVIDIA Omniverse<\/a> platform for developers to simulate and test AI-driven robots in physically based virtual environments.<\/p>\n<p>At AWS re:Invent, NVIDIA announced that Isaac Sim is <a href=\"https:\/\/blogs.nvidia.com\/blog\/physical-ai-robotics-isaac-sim-aws\/\">now available<\/a> on Amazon EC2 G6e instances powered by <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/l40s\/\" rel=\"noopener\">NVIDIA L40S GPUs<\/a>. These powerful instances enhance the performance and accessibility of Isaac Sim, making high-quality robotics simulations more scalable and efficient.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-76254 aligncenter\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/12\/aws-24-kv-blog-robotics_v004.png\" alt=\"\" width=\"649\" height=\"365\"><\/p>\n<p>These advancements in Isaac Sim mark a significant leap for robotics development. 
By enabling realistic testing and AI model training in virtual environments, companies can reduce time to deployment and improve robot performance across a variety of use cases.<\/p>\n<h2><b>Advancing Robotics Simulation With Synthetic Data Generation<\/b><\/h2>\n<p>Robotics companies like Cobot, Field AI and Vention are using Isaac Sim to simulate and validate robot performance while others, such as SoftServe and Tata Consultancy Services, use <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/use-cases\/synthetic-data\/\" rel=\"noopener\">synthetic data<\/a> to bootstrap AI models for diverse robotics applications.<\/p>\n<p>The evolution of <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/robot-learning\/\" rel=\"noopener\">robot learning<\/a> has been deeply intertwined with simulation technology. Early experiments in robotics relied heavily on labor-intensive, resource-heavy trials. Simulation is a crucial tool for the creation of physically accurate environments where robots can learn through trial and error, refine algorithms and even train AI models using synthetic data.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/generative-physical-ai\/\" rel=\"noopener\">Physical AI<\/a> describes AI models that can understand and interact with the physical world. 
It embodies the next wave of <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/autonomous-machines\/\" rel=\"noopener\">autonomous machines and robots<\/a>, such as self-driving cars, industrial manipulators, mobile robots, humanoids and even robot-run infrastructure like factories and warehouses.<\/p>\n<p><a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-robotics-simulation\/\">Robotics simulation<\/a>, which forms the second computer in the <a href=\"https:\/\/blogs.nvidia.com\/blog\/three-computers-robotics\/\">three computer solution<\/a>, is a cornerstone of physical AI development that lets engineers and researchers design, test and refine systems in a controlled virtual environment.<\/p>\n<p>A simulation-first approach significantly reduces the cost and time associated with physical prototyping while enhancing safety by allowing robots to be tested in scenarios that might otherwise be impractical or hazardous in real life.<\/p>\n<p>With a <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/how-to-build-a-generative-ai-enabled-synthetic-data-pipeline-with-openusd\/\" rel=\"noopener\">new reference workflow<\/a>, developers can accelerate the generation of <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/use-cases\/synthetic-data\/\" rel=\"noopener\">synthetic 3D datasets<\/a> with generative AI using <a target=\"_blank\" href=\"https:\/\/build.nvidia.com\/explore\/simulation\" rel=\"noopener\">OpenUSD NIM microservices<\/a>. 
This integration streamlines the pipeline from scene creation to data augmentation, enabling faster and more accurate training of perception AI models.<\/p>\n<p>Synthetic data can help address the challenge of limited, restricted or unavailable data needed to train various types of AI models, especially in computer vision. Developing action recognition models is a common use case that can benefit from synthetic data generation.<\/p>\n<p>To learn how to create a human action recognition video dataset with Isaac Sim, check out the technical blog on <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/scaling-action-recognition-models-with-synthetic-data\/\" rel=\"noopener\">Scaling Action Recognition Models With Synthetic Data<\/a>. 3D simulations offer developers precise control over image generation, eliminating hallucinations.<\/p>\n<h2><b>Robotic Simulation for Humanoids<\/b><\/h2>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/humanoid-robot\/\" rel=\"noopener\">Humanoid robots<\/a> are the next wave of embodied AI, but they present a challenge at the intersection of mechatronics, control theory and AI. Simulation is crucial to solving this challenge by providing a safe, cost-effective and versatile platform for training and testing humanoids.<\/p>\n<p>With <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/isaac\/lab\" rel=\"noopener\">NVIDIA Isaac Lab<\/a>, an open-source unified framework for robot learning built on top of Isaac Sim, developers can train humanoid robot policies at scale via simulations. 
<a href=\"https:\/\/blogs.nvidia.com\/blog\/robot-learning-humanoid-development\/\">Leading commercial robot makers<\/a> are adopting Isaac Lab to handle increasingly complex movements and interactions.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/project-gr00t\" rel=\"noopener\">NVIDIA Project GR00T<\/a>, an active research initiative to enable the humanoid robot ecosystem of builders, is pioneering workflows such as <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/advancing-humanoid-robot-sight-and-skill-development-with-nvidia-project-gr00t\/\" rel=\"noopener\">GR00T-Gen<\/a> to generate robot tasks and simulation-ready environments in OpenUSD. These can be used for training generalist robots to perform manipulation, locomotion and navigation.<\/p>\n<p>Recently published research from Project GR00T also shows how advanced simulation can be used to train interactive humanoids. Using Isaac Sim, the researchers developed a single unified controller for <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/unified-whole-body-control-for-physically-simulated-humanoids\/\" rel=\"noopener\">physically simulated humanoids called MaskedMimic<\/a>. 
The system is capable of generating a wide range of motions across diverse terrains from intuitive user-defined intents.<\/p>\n<h2><b>Physics-Based Digital Twins Simplify AI Training<\/b><\/h2>\n<p>Partners across industries are using Isaac Sim, Isaac Lab, Omniverse, and OpenUSD to design, simulate and deploy <a href=\"https:\/\/blogs.nvidia.com\/blog\/robot-learning-humanoid-development\/\">smarter, more capable autonomous machines<\/a>:<\/p>\n<ul>\n<li><a target=\"_blank\" href=\"https:\/\/agilityrobotics.com\/content\/crossing-sim2real-gap-with-isaaclab\" rel=\"noopener\"><b>Agility<\/b><\/a> uses Isaac Lab to create simulations that let simulated robot behaviors transfer directly to the robot, making it more intelligent, agile and robust when deployed in the real world.<\/li>\n<li><a target=\"_blank\" href=\"https:\/\/www.co.bot\/news\/cobot-develops-ai-driven-collaborative-robots-with-nvidia-isaac\" rel=\"noopener\"><b>Cobot<\/b><\/a> uses Isaac Sim with its AI-powered cobot, Proxie, to optimize logistics in warehouses, hospitals, manufacturing sites and more.<\/li>\n<li><a target=\"_blank\" href=\"https:\/\/www.cohesiverobotics.com\/blog\/cohesive-robotics-supercharges-high-mix-manufacturing-automation-with-nvidia-isaac-sim\" rel=\"noopener\"><b>Cohesive Robotics<\/b><\/a> has integrated Isaac Sim into its software framework called Argus OS for developing and deploying robotic workcells used in high-mix manufacturing environments.<\/li>\n<li><a target=\"_blank\" href=\"https:\/\/fieldai.com\/news\/field-ai-nvidia-partnership\" rel=\"noopener\"><b>Field AI<\/b><\/a>, a builder of robot foundation models, uses Isaac Sim and Isaac Lab to evaluate the performance of its models in complex, unstructured environments across industries such as construction, manufacturing, oil and gas, mining, and more.<\/li>\n<li><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/spotlight-fourier-trains-humanoid-robots-for-real-world-roles-using-nvidia-isaac-gym\/\" 
rel=\"noopener\"><b>Fourier<\/b><\/a> uses <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/isaac-gym\" rel=\"noopener\">NVIDIA Isaac Gym<\/a> and Isaac Lab to train its GR-2 humanoid robot, using reinforcement learning and advanced simulations to accelerate development, enhance adaptability and improve real-world performance.<\/li>\n<li><a target=\"_blank\" href=\"https:\/\/www.youtube.com\/live\/pztkN1RFLKU?si=g9EIrVlQH2YeFTX6\" rel=\"noopener\"><b>Foxglove<\/b><\/a> integrates Isaac Sim and Omniverse to enable efficient robot testing, training and sensor data analysis in realistic 3D environments.<\/li>\n<li><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/spotlight-galbot-builds-a-large-scale-dexterous-hand-dataset-for-humanoid-robots-using-nvidia-isaac-sim\/\" rel=\"noopener\"><b>Galbot<\/b><\/a> used Isaac Sim to verify the data generation of DexGraspNet, a large-scale dataset of 1.32 million ShadowHand grasps, advancing robotic hand functionality by enabling scalable validation of diverse object interactions across 5,355 objects and 133 categories.<\/li>\n<li><a target=\"_blank\" href=\"https:\/\/standardbots.com\/blog\/standard-bots-adopts-nvidia-isaac-sim-to-advance-ai-powered-robotics\" rel=\"noopener\"><b>Standard Bots<\/b><\/a> is simulating and validating the performance of its R01 robot used in manufacturing and machining setups.<\/li>\n<li><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/building-custom-robot-simulations-with-wandelbots-nova-and-nvidia-isaac-sim\/\" rel=\"noopener\"><b>Wandelbots<\/b><\/a> integrates its NOVA platform with Isaac Sim to create physics-based digital twins and intuitive training environments, simplifying robot interaction and enabling seamless testing, validation and deployment of robotic systems in real-world scenarios.<\/li>\n<\/ul>\n<p>Learn more about how Wandelbots is advancing robot learning with NVIDIA technology in this livestream recording.<\/p>\n<h2><b>Get 
Plugged Into the World of OpenUSD<\/b><\/h2>\n<p>NVIDIA experts and Omniverse Ambassadors are hosting <a target=\"_blank\" href=\"https:\/\/www.addevent.com\/calendar\/ae483892\" rel=\"noopener\">livestream office hours and study groups<\/a> to provide robotics developers with technical guidance and troubleshooting support for Isaac Sim and Isaac Lab. Learn how to get started simulating robots in Isaac Sim with <a target=\"_blank\" href=\"https:\/\/learn.nvidia.com\/courses\/course-detail?course_id=course-v1:DLI+S-OV-27+V1\" rel=\"noopener\">this new, free course<\/a> on NVIDIA Deep Learning Institute (DLI).<\/p>\n<p>For more on optimizing OpenUSD workflows, explore the new self-paced <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/learn\/learning-path\/openusd\/\" rel=\"noopener\">Learn OpenUSD<\/a> training curriculum that includes free DLI courses for 3D practitioners and developers. For more resources on OpenUSD, explore the <a target=\"_blank\" href=\"https:\/\/forum.aousd.org\/\" rel=\"noopener\">Alliance for OpenUSD forum<\/a> and the <a target=\"_blank\" href=\"https:\/\/aousd.org\/\" rel=\"noopener\">AOUSD website<\/a>.<\/p>\n<p>Don\u2019t miss the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/events\/ces\/\" rel=\"noopener\">CES keynote<\/a> delivered by NVIDIA founder and CEO Jensen Huang live in Las Vegas on Monday, Jan. 6, at 6:30 p.m. 
PT for more on the future of AI and graphics.<\/p>\n<p><i>Stay up to date by subscribing to<\/i> <a target=\"_blank\" href=\"https:\/\/nvda.ws\/3u5KPv1\" rel=\"noopener\"><i>NVIDIA news<\/i><\/a><i>, joining the <\/i><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/omniverse\/community\" rel=\"noopener\"><i>community<\/i><\/a><i>, and following NVIDIA Omniverse on <\/i><a target=\"_blank\" href=\"https:\/\/www.instagram.com\/nvidiaomniverse\/\" rel=\"noopener\"><i>Instagram<\/i><\/a><i>, <\/i><a target=\"_blank\" href=\"https:\/\/www.linkedin.com\/showcase\/nvidia-omniverse\/\" rel=\"noopener\"><i>LinkedIn<\/i><\/a><i>, <\/i><a target=\"_blank\" href=\"https:\/\/medium.com\/@nvidiaomniverse\" rel=\"noopener\"><i>Medium<\/i><\/a><i> and <\/i><a target=\"_blank\" href=\"https:\/\/twitter.com\/nvidiaomniverse\" rel=\"noopener\"><i>X<\/i><\/a><i>.<\/i><\/p>\n<p><i>Featured image courtesy of <\/i><b><i>Fourier<\/i><\/b><i>.<\/i><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/openusd-sdg-advance-robot-learning\/<\/p>\n","protected":false},"author":0,"featured_media":3828,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3827"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3827"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3827\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3828"}],"wp:attachment"
:[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3827"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3827"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3827"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}