{"id":4049,"date":"2025-06-26T14:48:43","date_gmt":"2025-06-26T14:48:43","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2025\/06\/26\/into-the-omniverse-world-foundation-models-advance-autonomous-vehicle-simulation-and-safety\/"},"modified":"2025-06-26T14:48:43","modified_gmt":"2025-06-26T14:48:43","slug":"into-the-omniverse-world-foundation-models-advance-autonomous-vehicle-simulation-and-safety","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2025\/06\/26\/into-the-omniverse-world-foundation-models-advance-autonomous-vehicle-simulation-and-safety\/","title":{"rendered":"Into the Omniverse: World Foundation Models Advance Autonomous Vehicle Simulation and Safety"},"content":{"rendered":"<div>\n<p><i>Editor\u2019s note: This blog is a part of <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/news\/\" rel=\"noopener\"><i>Into the Omniverse<\/i><\/a><i>, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advances in <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\" rel=\"noopener\"><i>OpenUSD<\/i><\/a><i> and <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/\" rel=\"noopener\"><i>NVIDIA Omniverse<\/i><\/a><i>.<\/i><\/p>\n<p>Simulated driving environments enable engineers to safely and efficiently train, test and validate <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/autonomous-vehicles\/\" rel=\"noopener\">autonomous vehicles<\/a> (AVs) across countless real-world and edge-case scenarios without the risks and costs of physical testing.<\/p>\n<p>These simulated environments can be created through <a target=\"_blank\" 
href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/3d-reconstruction\/\" rel=\"noopener\">neural reconstruction<\/a> of real-world data from AV fleets or generated with <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/world-models\/\" rel=\"noopener\">world foundation models (WFMs)<\/a> \u2014 neural networks that understand physics and real-world properties. WFMs can be used to generate <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/synthetic-data-generation\/\" rel=\"noopener\">synthetic datasets<\/a> for enhanced <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/accelerating-av-simulation-with-neural-reconstruction-and-world-foundation-models\/\" rel=\"noopener\">AV simulation<\/a>.<\/p>\n<p>To help <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/generative-physical-ai\/\" rel=\"noopener\">physical AI<\/a> developers build such simulated environments, NVIDIA unveiled major advances in WFMs at the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/gtc\/paris\/\" rel=\"noopener\">GTC Paris<\/a> and <a target=\"_blank\" href=\"https:\/\/cvpr.thecvf.com\/\" rel=\"noopener\">CVPR<\/a> conferences earlier this month. 
These new capabilities enhance <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai\/cosmos\/\" rel=\"noopener\">NVIDIA Cosmos<\/a> \u2014 a platform of generative WFMs, advanced tokenizers, guardrails and accelerated data processing tools.<\/p>\n<p>Key innovations like <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/develop-custom-physical-ai-foundation-models-with-nvidia-cosmos-predict-2\/\" rel=\"noopener\">Cosmos Predict-2<\/a>, the <a target=\"_blank\" href=\"https:\/\/build.nvidia.com\/nvidia\/cosmos-transfer1-7b\" rel=\"noopener\">Cosmos Transfer-1 NVIDIA preview NIM microservice<\/a> and <a target=\"_blank\" href=\"https:\/\/research.nvidia.com\/labs\/dir\/cosmos-reason1\/\" rel=\"noopener\">Cosmos Reason<\/a> are improving how AV developers <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/use-cases\/synthetic-data\/\" rel=\"noopener\">generate synthetic data<\/a>, build realistic simulated environments and validate safety systems at unprecedented scale.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\" rel=\"noopener\">Universal Scene Description (OpenUSD)<\/a>, a unified data framework and standard for physical AI applications, enables seamless integration and interoperability of simulation assets across the development pipeline. 
OpenUSD standardization plays a critical role in ensuring 3D pipelines are built to scale.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/\" rel=\"noopener\">NVIDIA Omniverse<\/a>, a platform of application programming interfaces, software development kits and services for building OpenUSD-based physical AI applications, enables simulations from WFMs and neural reconstruction at world scale.<\/p>\n<p>Leading AV organizations \u2014 including Foretellix, Mcity, Oxa, Parallel Domain, Plus AI and Uber \u2014 are among the first to adopt Cosmos models.<\/p>\n<h2><b>Foundations for Scalable, Realistic Simulation<\/b><\/h2>\n<p><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/develop-custom-physical-ai-foundation-models-with-nvidia-cosmos-predict-2\/\" rel=\"noopener\">Cosmos Predict-2<\/a>, NVIDIA\u2019s latest WFM, generates high-quality <a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-synthetic-data\/\">synthetic data<\/a> by predicting future world states from multimodal inputs like text, images and video. 
This capability is critical for creating temporally consistent, realistic scenarios that <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/simplify-end-to-end-autonomous-vehicle-development-with-new-nvidia-cosmos-world-foundation-models\/\" rel=\"noopener\">accelerate training and validation<\/a> of AVs and robots.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-82728 aligncenter\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2025\/06\/task945-ezgif.com-video-to-gif-converter.gif\" alt=\"\" width=\"800\" height=\"440\"><\/p>\n<p>In addition, <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai\/cosmos\/\" rel=\"noopener\">Cosmos Transfer<\/a>, a control model that adds variations in weather, lighting and terrain to existing scenarios, will soon be <a target=\"_blank\" href=\"https:\/\/carla.org\/2025\/06\/11\/release-0.9.16-pre\/\" rel=\"noopener\">available<\/a> to 150,000 developers on <a target=\"_blank\" href=\"https:\/\/carla.org\/\" rel=\"noopener\">CARLA<\/a>, a leading open-source AV simulator. This greatly expands the broad AV developer community\u2019s access to advanced AI-powered simulation tools.<\/p>\n<p>Developers can start integrating synthetic data into their own pipelines using the <a target=\"_blank\" href=\"https:\/\/huggingface.co\/collections\/nvidia\/physical-ai-67c643edbb024053dcbcd6d8\" rel=\"noopener\">NVIDIA Physical AI Dataset<\/a>. 
The latest release includes 40,000 clips <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/simplify-end-to-end-autonomous-vehicle-development-with-new-nvidia-cosmos-world-foundation-models\/\" rel=\"noopener\">generated using Cosmos<\/a>.<\/p>\n<p>Building on these foundations, the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/use-cases\/autonomous-vehicle-simulation\/\" rel=\"noopener\">Omniverse Blueprint for AV simulation<\/a> provides a standardized, API-driven workflow for constructing rich digital twins, replaying real-world sensor data and generating new ground-truth data for closed-loop testing.<\/p>\n<p>The blueprint taps into OpenUSD\u2019s layer-stacking and composition arcs, which enable developers to collaborate asynchronously and modify scenes nondestructively. This helps create modular, reusable scenario variants to efficiently generate different weather conditions, traffic patterns and edge cases.<\/p>\n<h2><b>Driving the Future of AV Safety<\/b><\/h2>\n<p>To bolster the operational safety of AV systems, NVIDIA earlier this year introduced <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/trust-center\/halos\/autonomous-vehicles\/\" rel=\"noopener\">NVIDIA Halos<\/a> \u2014 a comprehensive safety platform that integrates the company\u2019s full automotive hardware and software stack with AI research focused on AV safety.<\/p>\n<p>The new Cosmos models \u2014 Cosmos Predict-2, Cosmos Transfer-1 NIM and Cosmos Reason \u2014 deliver further safety enhancements to the Halos platform, enabling developers to create diverse, controllable and realistic scenarios for training and validating AV systems.<\/p>\n<p>These models, trained on massive multimodal datasets including driving data, amplify the breadth and depth of simulation, allowing for robust scenario coverage \u2014 including rare and safety-critical events \u2014 while supporting post-training customization for specialized AV tasks.<\/p>\n<p>At 
CVPR, NVIDIA was recognized as an <a href=\"https:\/\/blogs.nvidia.com\/blog\/auto-research-cvpr-2025\/\">Autonomous Grand Challenge winner<\/a>, highlighting its leadership in advancing end-to-end AV workflows. The challenge used OpenUSD\u2019s robust metadata and interoperability to simulate sensor inputs and vehicle trajectories in semi-reactive environments, achieving state-of-the-art results in safety and compliance.<\/p>\n<p>Learn more about how developers are leveraging tools like CARLA, Cosmos and Omniverse to advance AV simulation in this livestream replay:<\/p>\n<p>Hear NVIDIA Director of Autonomous Vehicle Research Marco Pavone <a target=\"_blank\" href=\"https:\/\/open.spotify.com\/episode\/2vTdw9f0NZrwI7GEkaPlP7\" rel=\"noopener\">on the NVIDIA AI Podcast<\/a> share how digital twins and high-fidelity simulation are improving vehicle testing, accelerating development and reducing real-world risks.<\/p>\n<h2><b>Get Plugged Into the World of OpenUSD<\/b><\/h2>\n<p>Learn more about what\u2019s next for AV simulation with OpenUSD by watching the replay of NVIDIA founder and CEO Jensen Huang\u2019s <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-eu\/gtc\/keynote\/?regcode=so-nvsh-668890-vt20&amp;ncid=so-nvsh-668890-vt20\" rel=\"noopener\">GTC Paris keynote<\/a>.<\/p>\n<p>Looking for more live opportunities to learn about OpenUSD? 
Don\u2019t miss sessions and labs happening at <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/events\/siggraph\/\" rel=\"noopener\">SIGGRAPH 2025<\/a>, August 10\u201314.<\/p>\n<p>Discover <a target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=riqp4_eZa2Y\" rel=\"noopener\">why developers and 3D practitioners are using OpenUSD<\/a> and learn how to optimize 3D workflows with the self-paced \u201c<a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/learn\/learning-path\/openusd\/\" rel=\"noopener\">Learn OpenUSD<\/a>\u201d curriculum for 3D developers and practitioners, available for free through the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/training\/\" rel=\"noopener\">NVIDIA Deep Learning Institute<\/a>.<\/p>\n<p>Explore the <a target=\"_blank\" href=\"https:\/\/forum.aousd.org\/\" rel=\"noopener\">Alliance for OpenUSD forum<\/a> and the <a target=\"_blank\" href=\"https:\/\/aousd.org\/\" rel=\"noopener\">AOUSD website<\/a>.<\/p>\n<p><i>Stay up to date by subscribing to <\/i><a target=\"_blank\" href=\"https:\/\/nvda.ws\/3u5KPv1\" rel=\"noopener\"><i>NVIDIA Omniverse news<\/i><\/a><i>, joining the <\/i><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/omniverse\/community\" rel=\"noopener\"><i>community<\/i><\/a><i> and following NVIDIA Omniverse on <\/i><a target=\"_blank\" href=\"https:\/\/www.instagram.com\/nvidiaomniverse\/\" rel=\"noopener\"><i>Instagram<\/i><\/a><i>, <\/i><a target=\"_blank\" href=\"https:\/\/www.linkedin.com\/showcase\/nvidia-omniverse\/\" rel=\"noopener\"><i>LinkedIn<\/i><\/a><i>, <\/i><a target=\"_blank\" href=\"https:\/\/medium.com\/@nvidiaomniverse\" rel=\"noopener\"><i>Medium<\/i><\/a><i> and <\/i><a target=\"_blank\" href=\"https:\/\/twitter.com\/nvidiaomniverse\" 
rel=\"noopener\"><i>X<\/i><\/a><i>.<\/i><\/p>\n<\/p><\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/wfm-advance-av-sim-safety\/<\/p>\n","protected":false},"author":0,"featured_media":4050,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/4049"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=4049"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/4049\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/4050"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=4049"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=4049"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=4049"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}