{"id":4395,"date":"2025-12-17T17:39:58","date_gmt":"2025-12-17T17:39:58","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2025\/12\/17\/into-the-omniverse-openusd-and-nvidia-halos-accelerate-safety-for-robotaxis-physical-ai-systems\/"},"modified":"2025-12-17T17:39:58","modified_gmt":"2025-12-17T17:39:58","slug":"into-the-omniverse-openusd-and-nvidia-halos-accelerate-safety-for-robotaxis-physical-ai-systems","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2025\/12\/17\/into-the-omniverse-openusd-and-nvidia-halos-accelerate-safety-for-robotaxis-physical-ai-systems\/","title":{"rendered":"Into the Omniverse: OpenUSD and NVIDIA Halos Accelerate Safety for Robotaxis, Physical AI Systems"},"content":{"rendered":"<div>\n<p><i>Editor\u2019s note: This post is part of <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/news\/\" rel=\"noopener\"><i>Into the Omniverse<\/i><\/a><i>, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advancements in <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\" rel=\"noopener\"><i>OpenUSD<\/i><\/a><i> and <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\" rel=\"noopener\"><i>NVIDIA Omniverse<\/i><\/a><i>.<\/i><\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/generative-physical-ai\/\" rel=\"noopener\">Physical AI<\/a> is moving from research labs into the real world, powering intelligent robots and <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/autonomous-vehicles\/\" rel=\"noopener\">autonomous vehicles (AVs)<\/a> \u2014 such as robotaxis \u2014 that must reliably sense, 
reason and act amid unpredictable conditions.<\/p>\n<p>To safely scale these systems, developers need workflows that connect real-world data, high-fidelity simulation and robust AI models atop the common foundation provided by the <a target=\"_blank\" href=\"https:\/\/docs.nvidia.com\/learn-openusd\/latest\/glossary.html\" rel=\"noopener\">OpenUSD<\/a> framework.<\/p>\n<p>With the recently published <a target=\"_blank\" href=\"https:\/\/aousd.org\/uncategorized\/core-spec-announcement\/\" rel=\"noopener\">OpenUSD Core Specification 1.0<\/a>, OpenUSD \u2014 aka Universal Scene Description \u2014 now defines standard data types, file formats and composition behaviors, giving developers predictable, interoperable USD pipelines as they scale autonomous systems.<\/p>\n<p>Powered by OpenUSD, <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/omniverse?sortBy=developer_learning_library%2Fsort%2Ffeatured_in.omniverse%3Adesc%2Ctitle%3Aasc&amp;hitsPerPage=6\" rel=\"noopener\">NVIDIA Omniverse libraries<\/a> combine <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/rtx\/ray-tracing?sortBy=developer_learning_library%2Fsort%2Ftitle%3Aasc\" rel=\"noopener\">NVIDIA RTX<\/a> rendering, physics simulation and efficient runtimes to create digital twins and simulation-ready (<a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/simready\/\" rel=\"noopener\">SimReady<\/a>) assets that accurately reflect real-world environments for synthetic data generation and testing.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai\/cosmos\/\" rel=\"noopener\">NVIDIA Cosmos<\/a> world foundation models can run on top of these simulations to amplify data variation, generating new weather, lighting and terrain conditions from the same scenes so teams can safely cover rare and challenging edge cases.<\/p>\n<p><i>Learn more by watching the OpenUSD livestream today at 11 a.m. 
PT or in replay, part of the NVIDIA Omniverse OpenUSD Insiders series:<\/i><\/p>\n<p>In addition, advancements in <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/use-cases\/synthetic-data-physical-ai\/\" rel=\"noopener\">synthetic data generation<\/a>, multimodal datasets and SimReady workflows are now converging with the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai-trust-center\/halos\/autonomous-vehicles\/\" rel=\"noopener\">NVIDIA Halos<\/a> framework for AV safety, creating a standards-based path to safer, faster, more cost-effective deployment of next-generation autonomous machines.<\/p>\n<h2><b>Building the Foundation for Safe Physical AI<\/b><\/h2>\n<p><b>Open Standards and SimReady Assets<\/b><\/p>\n<p>The OpenUSD <a target=\"_blank\" href=\"https:\/\/aousd.org\/uncategorized\/core-spec-announcement\/\" rel=\"noopener\">Core Specification 1.0<\/a> establishes the standard data models and behaviors that underpin SimReady assets, enabling developers to build interoperable simulation pipelines for AI factories and robotics on <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/usd\" rel=\"noopener\">OpenUSD<\/a>.<\/p>\n<p>Built on this foundation, SimReady 3D assets can be reused across tools and teams and loaded directly into <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/isaac\/sim\" rel=\"noopener\">NVIDIA Isaac Sim<\/a>, where USDPhysics colliders, rigid body dynamics and composition-arc\u2013based variants let teams test robots in virtual facilities that closely mirror real operations.<\/p>\n<p><b>Open-Source Learning<\/b><\/p>\n<p>The <a target=\"_blank\" href=\"https:\/\/docs.nvidia.com\/learn-openusd\/latest\/index.html\" rel=\"noopener\">Learn OpenUSD<\/a> curriculum is now open source and available on GitHub, enabling contributors to localize and adapt templates, exercises and content for different audiences, languages and use cases. 
This gives educators a ready-made foundation to onboard new teams into OpenUSD-centric simulation workflows.\u200b<\/p>\n<p><b>Generative Worlds as Safety Multiplier<\/b><\/p>\n<p>Gaussian splatting \u2014 a technique that uses editable 3D elements to render environments quickly and with high fidelity \u2014 and world models are accelerating simulation pipelines for safe robotics testing and validation.<\/p>\n<p>At SIGGRAPH Asia, the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/research\/\" rel=\"noopener\">NVIDIA Research<\/a> team introduced <a target=\"_blank\" href=\"https:\/\/research.nvidia.com\/publication\/2025-12_play4d-accelerated-and-interactive-free-viewpoint-video-streaming-virtual\" rel=\"noopener\">Play4D<\/a>, a streaming pipeline that enables 4D Gaussian splatting to accurately render dynamic scenes and improve realism.<\/p>\n<p>Spatial intelligence company <a target=\"_blank\" href=\"https:\/\/www.worldlabs.ai\/\" rel=\"noopener\">World Labs<\/a> is using its <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/simulate-robotic-environments-faster-with-nvidia-isaac-sim-and-world-labs-marble\/\" rel=\"noopener\">Marble generative world model with NVIDIA Isaac Sim<\/a> and <a target=\"_blank\" href=\"https:\/\/docs.nvidia.com\/nurec\/index.html\" rel=\"noopener\">Omniverse NuRec<\/a> so researchers can turn text prompts and sample images into photorealistic, Gaussian-based physics-ready 3D environments in hours instead of weeks.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-88411\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2025\/12\/WorldLabs_IsaacSim_Clip.gif\" alt=\"\" width=\"600\" height=\"338\"><\/p>\n<p>Those worlds can then be used for physical AI training, testing and sim-to-real transfer. 
This high-fidelity simulation workflow expands the range of scenarios robots can practice in while keeping experimentation safely in simulation.<\/p>\n<p><b>Lightwheel Helps Teams Scale Robot Training With SimReady Assets<\/b><\/p>\n<p>Powered by OpenUSD, <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/customer-stories\/lightwheel\/\" rel=\"noopener\">Lightwheel<\/a>\u2019s SimReady asset library includes a common scene description layer, making it easy to assemble high-fidelity digital twins for robots. The SimReady assets are embedded with precise geometry, materials and validated physical properties, which can be loaded directly into NVIDIA Isaac Sim and Isaac Lab for robot training. This allows robots to experience realistic contacts, dynamics and sensor feedback as they learn.<\/p>\n<h2><b>End-to-End Autonomous Vehicle Safety<\/b><\/h2>\n<p>End-to-end autonomous vehicle safety advancements are accelerating with new research, open frameworks and inspection services that make validation more rigorous and scalable.<\/p>\n<p>NVIDIA researchers, with collaborators at Harvard University and Stanford University, recently introduced the <a target=\"_blank\" href=\"https:\/\/www.arxiv.org\/pdf\/2506.20553\" rel=\"noopener\">Sim2Val framework<\/a> to statistically combine real-world and simulated test results, reducing AV developers\u2019 need for costly physical mileage while demonstrating how robotaxis and AVs can behave safely across rare and safety-critical scenarios.<\/p>\n<p>Learn more by watching NVIDIA\u2019s \u201cSafety in the Loop\u201d livestream:<\/p>\n<p>These innovations are complemented by a new, open-source NVIDIA Omniverse NuRec Fixer, a Cosmos-based model trained on AV data that removes artifacts in neural reconstructions to produce higher-quality SimReady assets.<\/p>\n<p>To align these advances with rigorous global standards, the <a target=\"_blank\" 
href=\"https:\/\/www.nvidia.com\/en-us\/ai-trust-center\/physical-ai\/safety-certification\/\" rel=\"noopener\">NVIDIA Halos AI Systems Inspection Lab<\/a> \u2014 accredited by ANAB \u2014 provides impartial inspection and certification of Halos elements across robotaxi fleets, AV stacks, sensors and manufacturer platforms through the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai-trust-center\/physical-ai\/safety-certification\/\" rel=\"noopener\">Halos Certification Program<\/a>.<\/p>\n<p><strong>AV Ecosystem Leaders Putting Physical AI Safety to Work<\/strong><\/p>\n<p><a target=\"_blank\" href=\"https:\/\/us.bosch-press.com\/pressportal\/us\/en\/press-release-28736.html\" rel=\"noopener\">Bosch<\/a>, <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/solutions\/autonomous-vehicles\/partners\/nuro\/\" rel=\"noopener\">Nuro<\/a> and <a target=\"_blank\" href=\"https:\/\/wayve.ai\/thinking\/wayve-gen-3\/\" rel=\"noopener\">Wayve<\/a> are among the first participants in the NVIDIA Halos AI Systems Inspection Lab, which aims to accelerate the safe, large-scale deployment of robotaxi fleets. 
Onsemi, which makes sensor systems for AVs, industrial automation and medical applications, recently became the first company to pass inspection through the NVIDIA Halos AI Systems Inspection Lab.<\/p>\n<p>The open-source <a target=\"_blank\" href=\"https:\/\/carla.org\/\" rel=\"noopener\">CARLA<\/a> simulator integrates <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/accelerating-av-simulation-with-neural-reconstruction-and-world-foundation-models\/\" rel=\"noopener\">NVIDIA NuRec and Cosmos Transfer<\/a> to generate reconstructed drives and diverse scenario variations, while <a target=\"_blank\" href=\"https:\/\/voxel51.com\/\" rel=\"noopener\">Voxel51<\/a>\u2019s FiftyOne engine, linked to Cosmos Dataset Search, NuRec and Cosmos Transfer, helps teams curate, annotate and evaluate multimodal datasets across the AV pipeline.<\/p>\n<p>Mcity at the University of Michigan is enhancing the digital twin of its <a target=\"_blank\" href=\"https:\/\/mcity.umich.edu\/mcity-enhances-digital-twin-of-av-test-facility-with-nvidia-omniverse\/\" rel=\"noopener\">32-acre AV test facility<\/a> using Omniverse libraries and technologies. 
The team is integrating the NVIDIA Blueprint for AV simulation and Omniverse Sensor RTX application programming interfaces to create physics-based models of camera, lidar, radar and ultrasonic sensors.<\/p>\n<p>By aligning real sensor recordings with high-fidelity simulated data and sharing assets openly, Mcity enables safe, repeatable testing of rare and hazardous driving scenarios before vehicles operate on public roads.<\/p>\n<h2><b>Get Plugged Into the World of OpenUSD and Physical AI Safety<\/b><\/h2>\n<p>Learn more about OpenUSD, NVIDIA Halos and physical AI safety by exploring these resources:<\/p>\n<p><i>Stay up to date by subscribing to<\/i> <a target=\"_blank\" href=\"https:\/\/nvda.ws\/3u5KPv1\" rel=\"noopener\"><i>NVIDIA news<\/i><\/a><i>, joining the <\/i><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/omniverse\/community\" rel=\"noopener\"><i>community<\/i><\/a><i> and following NVIDIA Omniverse on <\/i><a target=\"_blank\" href=\"https:\/\/www.instagram.com\/nvidiaomniverse\/\" rel=\"noopener\"><i>Instagram<\/i><\/a><i>, <\/i><a target=\"_blank\" href=\"https:\/\/www.linkedin.com\/showcase\/nvidia-omniverse\/\" rel=\"noopener\"><i>LinkedIn<\/i><\/a><i>, <\/i><a target=\"_blank\" href=\"https:\/\/medium.com\/@nvidiaomniverse\" rel=\"noopener\"><i>Medium<\/i><\/a><i> and <\/i><a target=\"_blank\" href=\"https:\/\/twitter.com\/nvidiaomniverse\" 
rel=\"noopener\"><i>X<\/i><\/a><i>.<\/i><\/p>\n<\/p><\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/openusd-halos-safety-robotaxi-physical-ai\/<\/p>\n","protected":false},"author":0,"featured_media":4396,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/4395"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=4395"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/4395\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/4396"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=4395"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=4395"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=4395"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}