{"id":3847,"date":"2024-12-30T15:41:49","date_gmt":"2024-12-30T15:41:49","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2024\/12\/30\/research-galore-from-2024-recapping-ai-advancements-in-3d-simulation-climate-science-and-audio-engineering\/"},"modified":"2024-12-30T15:41:49","modified_gmt":"2024-12-30T15:41:49","slug":"research-galore-from-2024-recapping-ai-advancements-in-3d-simulation-climate-science-and-audio-engineering","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2024\/12\/30\/research-galore-from-2024-recapping-ai-advancements-in-3d-simulation-climate-science-and-audio-engineering\/","title":{"rendered":"Research Galore From 2024: Recapping AI Advancements in 3D Simulation, Climate Science and Audio Engineering"},"content":{"rendered":"<div>\n\t\t<span class=\"bsf-rt-reading-time\"><span class=\"bsf-rt-display-label\"><\/span> <span class=\"bsf-rt-display-time\"><\/span> <span class=\"bsf-rt-display-postfix\"><\/span><\/span><\/p>\n<p>The pace of technology innovation has accelerated in the past year, most dramatically in AI. And in 2024, there was no better place to be a part of creating those breakthroughs than <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/research\/\" rel=\"noopener\">NVIDIA Research<\/a>.<\/p>\n<p>NVIDIA Research is comprised of hundreds of extremely bright people pushing the frontiers of knowledge, not just in AI, but across many areas of technology.<\/p>\n<p>In the past year, NVIDIA Research laid the groundwork for future improvements in GPU performance with major research discoveries in circuits, memory architecture and sparse arithmetic. The team\u2019s invention of novel graphics techniques continues to raise the bar for real-time rendering. 
And we developed new methods for improving the efficiency of AI \u2014 requiring less energy, taking fewer GPU cycles and delivering even better results.<\/p>\n<p>But the most exciting developments of the year have been in generative AI.<\/p>\n<p>We\u2019re now able to generate not just images and text, but also 3D models, music and sounds. We\u2019re also developing better control over what is generated, such as producing realistic humanoid motion and sequences of images with consistent subjects.<\/p>\n<p>The application of generative AI to science has resulted in high-resolution weather forecasts that are more accurate than conventional numerical weather models. AI models have given us the ability to accurately predict how blood glucose levels respond to different foods. Embodied generative AI is being used to develop autonomous vehicles and robots.<\/p>\n<p>And that was just this year. What follows is a deeper dive into some of NVIDIA Research\u2019s greatest generative AI work in 2024. 
Of course, we continue to develop new models and methods for AI, and expect even more exciting results next year.<\/p>\n<h2><b>ConsiStory: AI-Generated Images With Main Character Energy<\/b><\/h2>\n<p><a target=\"_blank\" href=\"https:\/\/research.nvidia.com\/labs\/par\/consistory\/\" rel=\"noopener\">ConsiStory<\/a>, a collaboration between researchers at NVIDIA and Tel Aviv University, makes it easier to generate multiple images with a consistent main character \u2014 an essential capability for storytelling use cases such as illustrating a comic strip or developing a storyboard.<\/p>\n<p>The researchers\u2019 approach introduced a technique called subject-driven shared attention, which reduces the time it takes to generate consistent imagery from 13 minutes to around 30 seconds.<\/p>\n<p>Read the <a target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2402.03286\" rel=\"noopener\">ConsiStory paper<\/a>.<\/p>\n<figure id=\"attachment_72922\" aria-describedby=\"caption-attachment-72922\" class=\"wp-caption aligncenter\"><img decoding=\"async\" loading=\"lazy\" class=\" wp-image-72922\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/07\/ConsiStory.jpeg\" alt=\"Panels of multiple AI-generated images featuring the same character\" width=\"1038\" height=\"583\"><figcaption id=\"caption-attachment-72922\" class=\"wp-caption-text\">ConsiStory is capable of generating a series of images featuring the same character.<\/figcaption><\/figure>\n<h2><b>Edify 3D: Generative AI Enters a New Dimension<\/b><\/h2>\n<p><a target=\"_blank\" href=\"https:\/\/research.nvidia.com\/labs\/dir\/edify-3d\/\" rel=\"noopener\">NVIDIA Edify 3D<\/a> is a foundation model that enables developers and content creators to quickly generate 3D objects that can be used to prototype ideas and populate virtual worlds.<\/p>\n<p>Edify 3D helps creators quickly ideate, lay out and conceptualize immersive environments with AI-generated assets. 
Novice and experienced content creators can use text and image prompts to harness the model, which is now part of the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/gpu-cloud\/edify\/\" rel=\"noopener\">NVIDIA Edify<\/a> multimodal architecture for developing visual generative AI.<\/p>\n<p>Read the <a target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2411.07135\" rel=\"noopener\">Edify 3D paper<\/a> and watch the <a target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=ROqB8xhKZ6U\" rel=\"noopener\">video on YouTube<\/a>.<\/p>\n<h2><b>Fugatto: Flexible AI Sound Machine for Music, Voices and More<\/b><\/h2>\n<p>A team of NVIDIA researchers recently unveiled Fugatto, a foundational generative AI model that can create or transform any mix of music, voices and sounds based on text or audio prompts.<\/p>\n<p>The model can, for example, create music snippets based on text prompts, add or remove instruments from existing songs, modify the accent or emotion in a voice recording, or generate completely novel sounds. 
It could be used by music producers, ad agencies, video game developers or creators of language learning tools.<\/p>\n<p>Read the <a target=\"_blank\" href=\"https:\/\/research.nvidia.com\/publication\/2024-11_fugatto-1-foundational-generative-audio-transformer-opus-1\" rel=\"noopener\">Fugatto paper<\/a>.<\/p>\n<h2><b>GluFormer: AI Predicts Blood Sugar Levels Four Years Out<\/b><\/h2>\n<p>Researchers from the Weizmann Institute of Science, Tel Aviv-based startup Pheno.AI and NVIDIA led the development of <a target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2408.11876\" rel=\"noopener\">GluFormer<\/a>, an AI model that can predict an individual\u2019s future glucose levels and other health metrics based on past glucose monitoring data.<\/p>\n<p>The researchers showed that, after adding dietary intake data into the model, GluFormer can also predict how a person\u2019s glucose levels will respond to specific foods and dietary changes, enabling precision nutrition. The research team validated GluFormer across 15 other datasets and found it generalizes well to predict health outcomes for other groups, including those with prediabetes, type 1 and type 2 diabetes, gestational diabetes and obesity.<\/p>\n<p>Read the <a target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2408.11876\" rel=\"noopener\">GluFormer paper<\/a>.<\/p>\n<h2><b>LATTE3D: Enabling Near-Instant Generation, From Text to 3D Shape<\/b><\/h2>\n<p>Another 3D generator released by NVIDIA Research this year is <a target=\"_blank\" href=\"https:\/\/research.nvidia.com\/labs\/toronto-ai\/LATTE3D\/\" rel=\"noopener\">LATTE3D<\/a>, which converts text prompts into 3D representations within a second \u2014 like a speedy, virtual 3D printer. 
Crafted in a popular format used for standard rendering applications, the generated shapes can be easily served up in virtual environments for developing video games, ad campaigns, design projects or virtual training grounds for robotics.<\/p>\n<p>Read the <a target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2403.15385\" rel=\"noopener\">LATTE3D paper<\/a>.<\/p>\n<h2><b>MaskedMimic: Reconstructing Realistic Movement for Humanoid Robots<\/b><\/h2>\n<p>To advance the development of humanoid robots, NVIDIA researchers introduced <a target=\"_blank\" href=\"https:\/\/research.nvidia.com\/labs\/par\/maskedmimic\/\" rel=\"noopener\">MaskedMimic<\/a>, an AI framework that applies inpainting \u2014 the process of reconstructing complete data from an incomplete, or masked, view \u2014 to descriptions of motion.<\/p>\n<p>Given partial information, such as a text description of movement, or head and hand position data from a virtual reality headset, MaskedMimic can fill in the blanks to infer full-body motion. It\u2019s become part of <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/project-gr00t\" rel=\"noopener\">NVIDIA Project GR00T<\/a>, a research initiative to accelerate humanoid robot development.<\/p>\n<p>Read the <a target=\"_blank\" href=\"https:\/\/arxiv.org\/html\/2409.14393v1\" rel=\"noopener\">MaskedMimic paper<\/a>.<\/p>\n<h2><b>StormCast: Boosting Weather Prediction, Climate Simulation<\/b><\/h2>\n<p>In the field of climate science, NVIDIA Research announced <a target=\"_blank\" href=\"https:\/\/research.nvidia.com\/publication\/2024-08_kilometer-scale-convection-allowing-model-emulation-using-generative-diffusion\" rel=\"noopener\">StormCast<\/a>, a generative AI model for emulating atmospheric dynamics. 
While other machine learning models trained on global data have a spatial resolution of about 30 kilometers and a temporal resolution of six hours, StormCast achieves a 3-kilometer, hourly scale.<\/p>\n<p>The researchers trained StormCast on approximately three and a half years of NOAA climate data from the central U.S. When applied with precipitation radars, StormCast offers forecasts with lead times of up to six hours that are up to 10% more accurate than the U.S. National Oceanic and Atmospheric Administration\u2019s state-of-the-art 3-kilometer regional weather prediction model.<\/p>\n<p>Read the <a target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2408.10958\" rel=\"noopener\">StormCast paper<\/a>, written in collaboration with researchers from Lawrence Berkeley National Laboratory and the University of Washington.<\/p>\n<h2><b>NVIDIA Research Sets Records in AI, Autonomous Vehicles, Robotics<\/b><\/h2>\n<p>Through 2024, models that originated in NVIDIA Research set records across benchmarks for AI training and inference, route optimization, autonomous driving and more.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai-data-science\/products\/cuopt\/\" rel=\"noopener\">NVIDIA cuOpt<\/a>, an optimization AI microservice used for logistics improvements, has <a href=\"https:\/\/blogs.nvidia.com\/blog\/cuopt-route-optimization-metropolis-omniverse\/\">23 world-record benchmarks<\/a>. 
The NVIDIA Blackwell platform demonstrated world-class performance on <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/resources\/mlperf-benchmarks\/\" rel=\"noopener\">MLPerf<\/a> industry benchmarks for AI <a href=\"https:\/\/blogs.nvidia.com\/blog\/mlperf-training-blackwell\/\">training<\/a> and <a href=\"https:\/\/blogs.nvidia.com\/blog\/mlperf-inference-benchmark-blackwell\/\">inference<\/a>.<\/p>\n<p>In the field of autonomous vehicles, <a target=\"_blank\" href=\"https:\/\/opendrivelab.github.io\/Challenge%202024\/e2e_Team%20NVIDIA.pdf\" rel=\"noopener\">Hydra-MDP<\/a>, an end-to-end autonomous driving framework by NVIDIA Research, achieved first place on the End-To-End Driving at Scale track of the <a target=\"_blank\" href=\"https:\/\/opendrivelab.com\/challenge2024\/#end_to_end_driving_at_scale\" rel=\"noopener\">Autonomous Grand Challenge at CVPR 2024<\/a>.<\/p>\n<p>In robotics, <a target=\"_blank\" href=\"https:\/\/nvlabs.github.io\/FoundationPose\/\" rel=\"noopener\">FoundationPose<\/a>, a unified foundation model for 6D object pose estimation and tracking, obtained first place on the <a target=\"_blank\" href=\"https:\/\/bop.felk.cvut.cz\/leaderboards\/pose-estimation-unseen-bop23\/core-datasets\/\" rel=\"noopener\">BOP leaderboard<\/a> for model-based pose estimation of unseen objects.<\/p>\n<p><i>Learn more about <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/research\/\" rel=\"noopener\"><i>NVIDIA Research<\/i><\/a><i>, which has hundreds of scientists and engineers worldwide. NVIDIA Research teams are focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics. 
<\/i><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/ai-research-2024\/<\/p>\n","protected":false},"author":0,"featured_media":3848,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3847"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3847"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3847\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3848"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3847"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3847"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3847"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}