{"id":3785,"date":"2024-11-06T16:42:42","date_gmt":"2024-11-06T16:42:42","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2024\/11\/06\/nvidia-advances-robot-learning-and-humanoid-development-with-new-ai-and-simulation-tools\/"},"modified":"2024-11-06T16:42:42","modified_gmt":"2024-11-06T16:42:42","slug":"nvidia-advances-robot-learning-and-humanoid-development-with-new-ai-and-simulation-tools","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2024\/11\/06\/nvidia-advances-robot-learning-and-humanoid-development-with-new-ai-and-simulation-tools\/","title":{"rendered":"NVIDIA Advances Robot Learning and Humanoid Development With New AI and Simulation Tools"},"content":{"rendered":"<div>\n\t\t<span class=\"bsf-rt-reading-time\"><span class=\"bsf-rt-display-label\"><\/span> <span class=\"bsf-rt-display-time\"><\/span> <span class=\"bsf-rt-display-postfix\"><\/span><\/span><\/p>\n<p>Robotics developers can greatly accelerate their work on AI-enabled robots, including <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/humanoid-robot\/\" rel=\"noopener\">humanoids<\/a>, using new AI and simulation tools and workflows that NVIDIA revealed this week at the Conference for Robot Learning (<a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/events\/corl\/\" rel=\"noopener\">CoRL<\/a>) in Munich, Germany.<\/p>\n<p>The lineup includes the general availability of the <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/isaac\/lab\" rel=\"noopener\">NVIDIA Isaac Lab<\/a> robot learning framework; six new humanoid robot learning workflows for <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/project-gr00t\" rel=\"noopener\">Project GR00T<\/a>, an initiative to accelerate humanoid robot development; and new world-model development tools for video data curation and processing, including the <a target=\"_blank\" href=\"http:\/\/github.com\/NVIDIA\/Cosmos-Tokenizer\" 
rel=\"noopener\">NVIDIA Cosmos tokenizer<\/a> and <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai-data-science\/products\/nemo\/\" rel=\"noopener\">NVIDIA NeMo Curator<\/a> for video processing.<\/p>\n<p>The open-source Cosmos tokenizer provides robotics developers superior visual tokenization by breaking down images and videos into high-quality tokens with exceptionally high compression rates. It runs up to 12x faster than current tokenizers, while NeMo Curator provides video processing curation up to 7x faster than unoptimized pipelines.<\/p>\n<p>Also timed with CoRL, NVIDIA presented 23 papers and nine workshops related to robot learning and released training and workflow guides for developers. Further, <a href=\"https:\/\/blogs.nvidia.com\/blog\/hugging-face-lerobot-open-source-robotics\">Hugging Face and NVIDIA announced<\/a> they\u2019re collaborating to accelerate open-source robotics research with LeRobot, NVIDIA Isaac Lab and <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/autonomous-machines\/embedded-systems\/?srsltid=AfmBOopRhPuQ-hi1834FOZZDwqICu01kyrYJdHVvJfa-1EJkFheDj9R5\" rel=\"noopener\">NVIDIA Jetson<\/a> for the developer community.<\/p>\n<h2><b>Accelerating Robot Development With Isaac Lab\u00a0<\/b><\/h2>\n<p>NVIDIA Isaac Lab is an <a target=\"_blank\" href=\"https:\/\/github.com\/isaac-sim\/IsaacLab\" rel=\"noopener\">open-source<\/a>, <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/robot-learning\/\" rel=\"noopener\">robot learning<\/a> framework built on <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/\" rel=\"noopener\">NVIDIA Omniverse<\/a>, a platform for developing <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\" rel=\"noopener\">OpenUSD<\/a> applications for industrial digitalization and <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/generative-physical-ai\/\" rel=\"noopener\">physical AI<\/a> 
simulation.<\/p>\n<p>Developers can use Isaac Lab to train robot policies at scale. This open-source unified robot learning framework applies to any embodiment \u2014 from humanoids to quadrupeds to collaborative robots \u2014 to handle increasingly complex movements and interactions.<\/p>\n<p>Leading commercial robot makers, robotics application developers and robotics research entities around the world are adopting Isaac Lab, including 1X, <a target=\"_blank\" href=\"https:\/\/agilityrobotics.com\/content\/crossing-sim2real-gap-with-isaaclab\" rel=\"noopener\">Agility Robotics<\/a>, The AI Institute, <a target=\"_blank\" href=\"https:\/\/berkeley-humanoid.com\" rel=\"noopener\">Berkeley Humanoid<\/a>, <a target=\"_blank\" href=\"https:\/\/support.bostondynamics.com\/s\/article\/Get-Started-with-Reinforcement-Learning-for-Spot-49966\" rel=\"noopener\">Boston Dynamics<\/a>, <a target=\"_blank\" href=\"https:\/\/fieldai.com\/news\/field-ai-nvidia-partnership\" rel=\"noopener\">Field AI<\/a>, <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/spotlight-fourier-trains-humanoid-robots-for-real-world-roles-using-nvidia-isaac-gym\/\" rel=\"noopener\">Fourier<\/a>, <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/spotlight-galbot-builds-a-large-scale-dexterous-hand-dataset-for-humanoid-robots-using-nvidia-isaac-sim\/\" rel=\"noopener\">Galbot<\/a>, <a target=\"_blank\" href=\"https:\/\/menteebot.com\/blog\/#shopping-companion-2024\" rel=\"noopener\">Mentee Robotics<\/a>, Skild AI, Swiss-Mile, Unitree Robotics and XPENG Robotics.<\/p>\n<h2><b>Project GR00T: Foundations for General-Purpose Humanoid Robots\u00a0<\/b><\/h2>\n<p>Building advanced humanoids is extremely difficult, demanding multilayer technological and interdisciplinary approaches to make the robots perceive, move and learn skills effectively for human-robot and robot-environment interactions.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/project-gr00t\" 
rel=\"noopener\">Project GR00T<\/a> is an initiative to develop accelerated libraries, foundation models and data pipelines to accelerate the global humanoid robot developer ecosystem.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-75324\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/11\/projectgrootcorl.gif\" alt=\"\" width=\"800\" height=\"450\"><\/p>\n<p><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/advancing-humanoid-robot-sight-and-skill-development-with-nvidia-project-gr00t\/\" rel=\"noopener\">Six new Project GR00T workflows<\/a> provide humanoid developers with blueprints to realize the most challenging humanoid robot capabilities. They include:<i><\/i><\/p>\n<ul>\n<li><b>GR00T-Gen<\/b> for building generative AI-powered, OpenUSD-based 3D environments<\/li>\n<li><b>GR00T-Mimic<\/b> for robot motion and trajectory generation<\/li>\n<li><b>GR00T-Dexterity<\/b> for robot dexterous manipulation<\/li>\n<li><b>GR00T-Control<\/b> for whole-body control<\/li>\n<li><b>GR00T-Mobility<\/b> for robot locomotion and navigation<\/li>\n<li><b>GR00T-Perception<\/b> for multimodal sensing<\/li>\n<\/ul>\n<p>\u201cHumanoid robots are the next wave of embodied AI,\u201d said Jim Fan, senior research manager of embodied AI at NVIDIA. \u201cNVIDIA research and engineering teams are collaborating across the company and our developer ecosystem to build Project GR00T to help advance the progress and development of global humanoid robot developers.\u201d<\/p>\n<h2><b>New Development Tools for World Model Builders<\/b><\/h2>\n<p>Today, robot developers are building world models \u2014 AI representations of the world that can predict how objects and environments respond to a robot\u2019s actions. 
Building these world models is incredibly compute- and data-intensive, with models requiring thousands of hours of real-world, curated image or video data.<\/p>\n<p>NVIDIA Cosmos tokenizers provide efficient, high-quality encoding and decoding to simplify the development of these world models. They set a new standard for minimizing distortion and temporal instability, enabling high-quality video and image reconstructions.<\/p>\n<p>Providing high-quality compression and up to 12x faster visual reconstruction, the Cosmos tokenizer paves the way for scalable, robust and efficient development of generative applications across a broad spectrum of visual domains.<\/p>\n<p>1X, a humanoid robot company, has updated the <a target=\"_blank\" href=\"https:\/\/www.1x.tech\/discover\/1x-world-model-sampling-challenge\" rel=\"noopener\">1X World Model Challenge dataset<\/a> to use the Cosmos tokenizer.<\/p>\n<p>\u201cNVIDIA Cosmos tokenizer achieves really high temporal and spatial compression of our data while still retaining visual fidelity,\u201d said Eric Jang, vice president of AI at 1X Technologies. \u201cThis allows us to train world models with long horizon video generation in an even more compute-efficient manner.\u201d<\/p>\n<p>Other humanoid and general-purpose robot developers, including XPENG Robotics and Hillbot, are developing with the NVIDIA Cosmos tokenizer to manage high-resolution images and videos.<\/p>\n<p>NeMo Curator now includes a video processing pipeline, enabling robot developers to improve their world-model accuracy by processing large-scale text, image and video data.<\/p>\n<p>Curating video data poses challenges due to its massive size, requiring scalable pipelines and efficient orchestration for load balancing across GPUs.
Additionally, models for filtering, captioning and embedding need optimization to maximize throughput.<\/p>\n<p>NeMo Curator overcomes these challenges by streamlining data curation with automatic pipeline orchestration, significantly reducing processing time. It supports linear scaling across multi-node, multi-GPU systems, efficiently handling over 100 petabytes of data. This simplifies AI development, reduces costs and accelerates time to market.<\/p>\n<h2><b>Advancing the Robot Learning Community at CoRL<\/b><\/h2>\n<p>The nearly two dozen research papers the NVIDIA robotics team released at CoRL cover breakthroughs in integrating vision language models for improved environmental understanding and task execution, enabling temporal robot navigation, developing long-horizon planning strategies for complex multistep tasks and using human demonstrations for skill acquisition.<\/p>\n<p>Groundbreaking papers for humanoid robot control and synthetic data generation include <a target=\"_blank\" href=\"https:\/\/skillgen.github.io\/\" rel=\"noopener\">SkillGen<\/a>, a system based on <a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-synthetic-data\/\">synthetic data generation<\/a> for training robots with minimal human demonstrations, and <a target=\"_blank\" href=\"https:\/\/hover-versatile-humanoid.github.io\/\" rel=\"noopener\">HOVER<\/a>, a robot foundation model for controlling humanoid robot locomotion and manipulation.<\/p>\n<p>NVIDIA researchers will also participate in nine workshops at the conference. <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/events\/corl\/\" rel=\"noopener\">Learn more<\/a> about the full schedule of events.<\/p>\n<h2><b>Availability<\/b><\/h2>\n<p>NVIDIA Isaac Lab 1.2 is available now and is open source on GitHub.
NVIDIA Cosmos tokenizer is available now on <a target=\"_blank\" href=\"http:\/\/github.com\/NVIDIA\/cosmos-tokenizer\" rel=\"noopener\">GitHub<\/a> and <a target=\"_blank\" href=\"https:\/\/huggingface.co\/nvidia\/Cosmos-Tokenizer-CV8x8x8\" rel=\"noopener\">Hugging Face<\/a>. NeMo Curator for video processing will be available at the end of the month.<\/p>\n<p>The new NVIDIA Project GR00T workflows are coming soon to help robot companies build humanoid robot capabilities with greater ease. Read more about the workflows on the <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/advancing-humanoid-robot-sight-and-skill-development-with-nvidia-project-gr00t\/\" rel=\"noopener\">NVIDIA Technical Blog<\/a>.<\/p>\n<p>Researchers and developers learning to use Isaac Lab can now access <a target=\"_blank\" href=\"https:\/\/isaac-sim.github.io\/IsaacLab\/main\/source\/tutorials\/index.html#\" rel=\"noopener\">developer guides and tutorials<\/a>, including an Isaac Gym to Isaac Lab <a target=\"_blank\" href=\"https:\/\/isaac-sim.github.io\/IsaacLab\/main\/source\/migration\/migrating_from_isaacgymenvs.html\" rel=\"noopener\">migration guide<\/a>.<\/p>\n<p>Discover the latest in robot learning and simulation in an <a target=\"_blank\" href=\"https:\/\/www.addevent.com\/event\/GA23422424\" rel=\"noopener\">upcoming OpenUSD insider livestream<\/a> on Nov.
13, and attend the <a target=\"_blank\" href=\"https:\/\/www.addevent.com\/event\/Uz23738360\" rel=\"noopener\">NVIDIA Isaac Lab office hours<\/a> for hands-on support and insights.<\/p>\n<p>Developers can apply to join the <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/humanoid-robot-program\" rel=\"noopener\">NVIDIA Humanoid Robot Developer Program<\/a>.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/robot-learning-humanoid-development\/<\/p>\n","protected":false},"author":0,"featured_media":3786,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3785"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3785"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3785\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3786"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3785"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3785"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3785"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}