{"id":4291,"date":"2025-09-30T13:41:47","date_gmt":"2025-09-30T13:41:47","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2025\/09\/30\/into-the-omniverse-open-source-physics-engine-and-openusd-advance-robot-learning\/"},"modified":"2025-09-30T13:41:47","modified_gmt":"2025-09-30T13:41:47","slug":"into-the-omniverse-open-source-physics-engine-and-openusd-advance-robot-learning","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2025\/09\/30\/into-the-omniverse-open-source-physics-engine-and-openusd-advance-robot-learning\/","title":{"rendered":"Into the Omniverse: Open-Source Physics Engine and OpenUSD Advance Robot Learning"},"content":{"rendered":"<div>\n<p><i>Editor\u2019s note: This blog is a part of <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/news\/\" rel=\"noopener\"><i>Into the Omniverse<\/i><\/a><i>, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.<\/i><\/p>\n<p>Building robots that can effectively operate alongside human workers in factories, hospitals and public spaces presents an enormous technical challenge. 
These robots require humanlike dexterity, perception, cognition and whole-body coordination to navigate unpredictable real-world environments in real time.<\/p>\n<p>A <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/use-cases\/robotics-simulation\/\" rel=\"noopener\">\u201csim-first\u201d approach<\/a> unlocks these critical skills by enabling parallel training of hundreds or thousands of robot instances using real robot-captured data and <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/synthetic-data-generation\/\" rel=\"noopener\">synthetic data<\/a> in simulation environments. <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\" rel=\"noopener\">Universal Scene Description<\/a> (OpenUSD) provides the foundational framework for this advanced robot development, serving as a scalable, interoperable data standard that enables developers to build physically accurate virtual worlds where robots can practice and perfect their skills before transferring them to real-world applications.<\/p>\n<h2><b>Accelerating Physical AI Development<\/b><\/h2>\n<p>This week at the Conference on Robot Learning, NVIDIA announced groundbreaking advances in open-source physics simulation, open foundation models and development frameworks, including:<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/newton-physics\" rel=\"noopener\"><b>Newton Physics Engine<\/b><\/a>: While robots learn faster and more safely in <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/use-cases\/robotics-simulation\/\" rel=\"noopener\">simulation<\/a>, <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/humanoid-robot\/\" rel=\"noopener\">humanoid robots<\/a> \u2014 with complex joints, balance and movements \u2014 are pushing today\u2019s physics engines to the limit.<\/p>\n<p>Codeveloped by Google DeepMind, Disney Research and NVIDIA, and <a target=\"_blank\" 
href=\"https:\/\/www.linuxfoundation.org\/press\/linux-foundation-announces-contribution-of-newton-by-disney-research-google-deepmind-and-nvidia-to-accelerate-open-robot-learning\" rel=\"noopener\">managed by the Linux Foundation<\/a>, <a target=\"_blank\" href=\"https:\/\/github.com\/newton-physics\" rel=\"noopener\">Newton<\/a> is an open-source, GPU-accelerated physics engine built to advance robot learning.<\/p>\n<p>Built on <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/warp-python\" rel=\"noopener\">NVIDIA Warp<\/a> and <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\" rel=\"noopener\">OpenUSD<\/a>, Newton enables robots to learn complex tasks more precisely while working seamlessly with <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/robot-learning\/\" rel=\"noopener\">robot learning<\/a> frameworks like MuJoCo Playground and <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/isaac\/lab\" rel=\"noopener\">NVIDIA Isaac Lab<\/a>.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-medium wp-image-85389\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2025\/09\/into-the-omniverse-robot-960x502.jpg\" alt=\"\" width=\"960\" height=\"502\"><\/p>\n<p><i>Tune in to an upcoming <\/i><a target=\"_blank\" href=\"https:\/\/www.addevent.com\/event\/pi26720934\" rel=\"noopener\"><i>livestream for a Newton beta demonstration<\/i><\/a><i> to learn about Newton\u2019s core features and how to get started using it with NVIDIA Isaac Lab.<\/i><\/p>\n<p><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/isaac\/gr00t\" rel=\"noopener\"><b>Isaac GR00T N1.6<\/b><\/a>: To perform humanlike tasks in the physical world, humanoids must understand ambiguous instructions and navigate unforeseen scenarios. 
The latest Isaac GR00T N1.6 open robot foundation model, available soon on Hugging Face, integrates <a target=\"_blank\" href=\"https:\/\/github.com\/nvidia-cosmos\/cosmos-reason1\" rel=\"noopener\">NVIDIA Cosmos Reason<\/a>, an open reasoning vision language model built for <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/generative-physical-ai\/\" rel=\"noopener\">physical AI<\/a>. Cosmos Reason serves as the robot\u2019s deep-thinking brain and transforms vague instructions into step-by-step action plans using prior knowledge, common sense and physics understanding.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/github.com\/isaac-sim\/IsaacLab\/tree\/release\/2.3.0\" rel=\"noopener\"><b>NVIDIA Isaac Lab<\/b><\/a><b>:<\/b> The latest version of Isaac Lab, an open-source, modular robot learning framework built on NVIDIA Isaac Sim and OpenUSD, is now available as an early developer release. Version 2.3 brings a host of new features to robotics researchers and developers, including advanced whole-body control and expanded teleoperation for data collection.<\/p>\n<p>OpenUSD\u2019s interoperability ensures these advanced physics simulations, foundation models and learning frameworks work together seamlessly, enabling developers to build unified <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/use-cases\/robot-learning\/\" rel=\"noopener\">robot learning<\/a> pipelines that scale across different platforms and deployment scenarios.<\/p>\n<h2><b>See How Developers Are Accelerating Robot Learning<\/b><\/h2>\n<p>Leading humanoid and robotics developers, including Agility Robotics, Lightwheel, Mentee Robotics and Universal Robots, are adopting simulation technologies and libraries to accelerate physical AI development and deployment.<\/p>\n<ul>\n<li><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/customer-stories\/agility-robotics-digit-humanoid-robot\/\" rel=\"noopener\"><b>Agility Robotics<\/b><\/a> uses NVIDIA Isaac Lab to train a 
whole-body control foundation model for its Digit robot. Isaac Sim and OpenUSD enable the creation of precise digital twins of customer facilities, delivering a scalable way to optimize the robot\u2019s operation before deployment.<\/li>\n<li><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/customer-stories\/lightwheel\/\" rel=\"noopener\"><b>Lightwheel<\/b><\/a> developed the Lightwheel Simulation Platform, built on <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/\" rel=\"noopener\">NVIDIA Omniverse<\/a>. Lightwheel is also building <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/simready\/\" rel=\"noopener\">simulation-ready assets<\/a> that use the NVIDIA USD Search application programming interface to streamline asset discovery and assemble accurate digital twins, helping robotics developers accelerate their training and simulation workflows.<\/li>\n<li><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/customer-stories\/mentee-robotics\/\" rel=\"noopener\"><b>Mentee Robotics<\/b><\/a> harnesses <a href=\"https:\/\/blogs.nvidia.com\/blog\/three-computers-robotics\/\">NVIDIA\u2019s three-computer architecture<\/a> to develop MenteeBot\u2019s sophisticated learning capabilities, using OpenUSD as the foundation for its synthetic data generation pipelines in Isaac Sim.<\/li>\n<li><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/customer-stories\/universal-robots-accelerates-cobot-development-with-nvidia\/\" rel=\"noopener\"><b>Universal Robots<\/b><\/a> uses the NVIDIA Isaac platform for comprehensive robot simulation and learning, tapping OpenUSD to create interoperable digital twins of manufacturing environments that validate cobot safety protocols and optimize human-robot interaction across diverse industrial settings. 
Inbolt, a partner in the Universal Robots ecosystem, delivers dynamic vision guidance systems that enable robots to adapt to their environment on the fly, handling production variations with ease.<\/li>\n<li><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/customer-stories\/volkswagen-with-wandelbots-nova-and-isaac-sim\/\" rel=\"noopener\"><b>Wandelbots<\/b><\/a>, a German robotics software company, is helping Volkswagen shorten automation projects at its Transparent Factory in Dresden. Using Wandelbots NOVA \u2014 an Isaac Sim\u2013integrated, no-code teaching platform \u2014 assembly workers can train robots to perform pick-and-place tasks in a virtual twin before deployment.<\/li>\n<\/ul>\n<p>NVIDIA\u2019s open frameworks and libraries are also being adopted by developers within the robotics community. Community member and NVIDIA Omniverse ambassador Dylan Tobin <a target=\"_blank\" href=\"https:\/\/www.linkedin.com\/posts\/dylan-tobin313_nvidiaomniverse-nvidiarobotics-physicalai-activity-7363201316753444864-PrfD?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAo-ztUBZUDYK16N3mS3qkG-ohtZe11AbGc\" rel=\"noopener\">created an AI chatbot<\/a> trained on Isaac Sim workflows to help developers navigate Omniverse more efficiently.<\/p>\n<p>See how other developers in the community are using Isaac Sim and Isaac Lab for innovation in robotics navigation, control and reinforcement learning by watching this livestream replay:<\/p>\n<p>Plus, an NVIDIA Robotics office hours session demonstrates how Brev makes it easy to run Isaac Sim and Isaac Lab on Omniverse:<\/p>\n<h2><b>Get Plugged Into the World of OpenUSD<\/b><\/h2>\n<p>To learn more about robot learning with OpenUSD and NVIDIA\u2019s latest robotics technologies, explore these resources:<\/p>\n<p><i>Stay up to date by subscribing to<\/i> <a target=\"_blank\" href=\"https:\/\/nvda.ws\/3u5KPv1\" rel=\"noopener\"><i>NVIDIA Omniverse news<\/i><\/a><i>, joining the Omniverse <\/i><a 
target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/omniverse\/community\" rel=\"noopener\"><i>community<\/i><\/a><i> and following Omniverse on<\/i> <a target=\"_blank\" href=\"https:\/\/discord.com\/channels\/827959428476174346\/828737081479004230\" rel=\"noopener\"><i>Discord<\/i><\/a><i>,<\/i> <a target=\"_blank\" href=\"https:\/\/www.instagram.com\/nvidiaomniverse\/\" rel=\"noopener\"><i>Instagram<\/i><\/a><i>, <\/i><a target=\"_blank\" href=\"https:\/\/www.linkedin.com\/showcase\/71986325\/admin\/dashboard\/\" rel=\"noopener\"><i>LinkedIn<\/i><\/a><i>, <\/i><a target=\"_blank\" href=\"https:\/\/www.threads.com\/@nvidiaomniverse\" rel=\"noopener\"><i>Threads<\/i><\/a><i>,<\/i> <a target=\"_blank\" href=\"https:\/\/twitter.com\/nvidiaomniverse\" rel=\"noopener\"><i>X<\/i><\/a><i> and<\/i> <a target=\"_blank\" href=\"https:\/\/www.youtube.com\/channel\/UCSKUoczbGAcMld7HjpCR8OA\" rel=\"noopener\"><i>YouTube<\/i><\/a><i>.<\/i><\/p>\n<p><i>Explore the <\/i><a target=\"_blank\" href=\"https:\/\/forum.aousd.org\/\" rel=\"noopener\"><i>Alliance for OpenUSD forum<\/i><\/a><i> and the <\/i><a target=\"_blank\" href=\"https:\/\/aousd.org\/\" rel=\"noopener\"><i>AOUSD 
website<\/i><\/a><i>.<\/i><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/newton-physics-engine-openusd\/<\/p>\n","protected":false},"author":0,"featured_media":4292,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/4291"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=4291"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/4291\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/4292"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=4291"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=4291"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=4291"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}