{"id":2487,"date":"2022-08-09T17:42:11","date_gmt":"2022-08-09T17:42:11","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2022\/08\/09\/at-siggraph-nvidia-ceo-jensen-huang-illuminates-three-forces-sparking-graphics-revolution\/"},"modified":"2022-08-09T17:42:11","modified_gmt":"2022-08-09T17:42:11","slug":"at-siggraph-nvidia-ceo-jensen-huang-illuminates-three-forces-sparking-graphics-revolution","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2022\/08\/09\/at-siggraph-nvidia-ceo-jensen-huang-illuminates-three-forces-sparking-graphics-revolution\/","title":{"rendered":"At SIGGRAPH, NVIDIA CEO Jensen Huang Illuminates Three Forces Sparking Graphics Revolution"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2022\/08\/09\/siggraph-huang-metaverse-ai\/\" data-title=\"At SIGGRAPH, NVIDIA CEO Jensen Huang Illuminates Three Forces Sparking Graphics Revolution\" data-hashtags=\"\">\n<p>In a swift, eye-popping special address at SIGGRAPH, NVIDIA execs described the forces driving the next era in graphics, and the company\u2019s expanding range of tools to accelerate them.<\/p>\n<p>\u201cThe combination of AI and computer graphics will power the <a href=\"https:\/\/blogs.nvidia.com\/blog\/2021\/08\/10\/what-is-the-metaverse\/\">metaverse<\/a>, the next evolution of the internet,\u201d said Jensen Huang, founder and CEO of NVIDIA, kicking off the 45-minute talk.<\/p>\n<p>It will be home to connected virtual worlds and <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/solutions\/digital-twins\/\">digital twins<\/a>, a place for real work as well as play. 
And, Huang said, it will be vibrant with what will become one of the most popular forms of robots: digital human avatars.<\/p>\n<p>With 45 demos and slides, five NVIDIA speakers announced:<\/p>\n<ul>\n<li>A new platform for creating avatars, NVIDIA Omniverse Avatar Cloud Engine (<a href=\"https:\/\/nvidianews.nvidia.com\/news\/virtual-assistants-and-digital-humans-on-pace-to-ace-turing-test-with-new-nvidia-omniverse-avatar-cloud-engine\">ACE<\/a>).<\/li>\n<li>Plans to build out Universal Scene Description (<a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-and-partners-build-out-universal-scene-description-to-accelerate-industrial-metaverse-and-next-wave-of-ai\">USD<\/a>), the language of the metaverse.<\/li>\n<li>Major extensions to <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-announces-major-release-of-omniverse-with-new-usd-connectors-and-tools-simulation-technologies-and-developer-frameworks\">NVIDIA Omniverse<\/a>, the computing platform for creating virtual worlds and digital twins.<\/li>\n<li>Tools to supercharge graphics workflows with machine learning.<\/li>\n<\/ul>\n<p>\u201cThe announcements we made today further advance the metaverse, a new computing platform with new programming models, new architectures and new standards,\u201d he said.<\/p>\n<p>Metaverse applications are already here.<\/p>\n<p>Huang pointed to consumers trying out virtual 3D products with augmented reality, telcos creating digital twins of their radio networks to optimize and deploy radio towers and companies creating digital twins of warehouses and factories to optimize their layout and logistics.<\/p>\n<h2><b>Enter the Avatars<\/b><\/h2>\n<p>The metaverse will come alive with virtual assistants, avatars we interact with as naturally as talking to another person. 
They\u2019ll work in digital factories, play in online games and provide customer service for e-tailers.<\/p>\n<p>\u201cThere will be billions of avatars,\u201d said Huang, calling them \u201cone of the most widely used kinds of robots\u201d that will be designed, trained and operated in Omniverse.<\/p>\n<p>Digital humans and avatars require natural language processing, computer vision, complex facial and body animations and more. To move and speak in realistic ways, this suite of complex technologies must be synced to the millisecond.<\/p>\n<p>It\u2019s hard work that NVIDIA aims to simplify and accelerate with Omniverse Avatar Cloud Engine. <a href=\"https:\/\/developer.nvidia.com\/nvidia-omniverse-platform\/ace\">ACE<\/a> is a collection of AI models and services that build on NVIDIA\u2019s work spanning everything from conversational AI to animation tools like <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/apps\/audio2face\/\">Audio2Face<\/a> and Audio2Emotion.<\/p>\n<p>\u201cWith Omniverse ACE, developers can build, configure and deploy their avatar application across any engine in any public or private cloud,\u201d said Simon Yuen, a senior director of graphics and AI at NVIDIA. \u201cWe want to democratize building interactive avatars for every platform.\u201d<\/p>\n<p>ACE will be available early next year, running on embedded systems and all major cloud services.<\/p>\n<p>Yuen also demonstrated the latest version of Omniverse Audio2Face, an AI model that can create facial animation directly from voices.<\/p>\n<p>\u201cWe just added more features to analyze and automatically transfer your emotions to your avatar,\u201d he said.<\/p>\n<p>Future versions of Audio2Face will create avatars from a single photo, applying textures automatically and generating animation-ready 3D meshes. 
They\u2019ll sport high-fidelity simulations of muscle movements an AI can learn from watching a video \u2014 even lifelike hair that responds as expected to virtual grooming.<\/p>\n<h2><b>USD, a Foundation for the 3D Internet<\/b><\/h2>\n<p>Many superpowers of the metaverse will be grounded in USD, a foundation for the 3D internet.<\/p>\n<p>The metaverse \u201cneeds a standard way of describing all things within 3D worlds,\u201d said Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA.<\/p>\n<p>\u201cWe believe Universal Scene Description, invented and open sourced by Pixar, is the standard scene description for the next era of the internet,\u201d he added, comparing USD to HTML in the 2D web.<\/p>\n<p>Lebaredian described NVIDIA\u2019s vision for <a href=\"http:\/\/usd.nvidia.com\">USD<\/a> as a key to opening even more opportunities than those in the physical world.<\/p>\n<p>\u201cOur next milestones aim to make USD performant for real-time, large-scale virtual worlds and industrial digital twins,\u201d he said, noting NVIDIA\u2019s plans to help build out support in USD for international character sets, geospatial coordinates and real-time streaming of IoT data.<\/p>\n<figure id=\"attachment_58848\" aria-describedby=\"caption-attachment-58848\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/08\/Nvidia-plans-for-USD-scaled.jpg\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/08\/Nvidia-plans-for-USD-672x359.jpg\" alt=\"NVIDIA's planned investments in USD\" width=\"672\" height=\"359\"><\/a><figcaption id=\"caption-attachment-58848\" class=\"wp-caption-text\">Examples of NVIDIA\u2019s planned investments in USD<\/figcaption><\/figure>\n<p>To further accelerate USD adoption, NVIDIA will release a compatibility testing and certification suite for USD. 
It lets developers know their custom USD components produce an expected result.<\/p>\n<p>In addition, NVIDIA announced a set of simulation-ready USD assets, designed for use in industrial digital twins and AI training workflows. They join a wealth of <a href=\"https:\/\/developer.nvidia.com\/blog\/universal-scene-description-as-the-language-of-the-metaverse\/\">USD resources<\/a> available online for free including USD-ready scenes, on-demand tutorials, documentation and instructor-led courses.<\/p>\n<p>\u201cWe want everyone to help build and advance USD,\u201d said Lebaredian.<\/p>\n<h2><b>Omniverse Expands Its Palette<\/b><\/h2>\n<p>One of the biggest announcements of the special address was a major new release of <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/\">NVIDIA Omniverse<\/a>, a platform that\u2019s been downloaded nearly 200,000 times.<\/p>\n<p>Huang called Omniverse \u201ca USD platform, a toolkit for building metaverse applications, and a compute engine to run virtual worlds.\u201d<\/p>\n<p>The latest version packs several upgraded core technologies and more connections to popular tools.<\/p>\n<p>The links, called <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/08\/09\/omniverse-siggraph\/\">Omniverse Connectors<\/a>, are now in development for Unity, Blender, Autodesk Alias, Siemens JT, SimScale, the Open Geospatial Consortium and more. Connectors are now available in beta for PTC Creo, Visual Components and SideFX Houdini. 
These new developments join <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/digital-twins\/siemens\/\">Siemens Xcelerator<\/a>, now part of the Omniverse network, welcoming more industrial customers into the era of digital twins.<\/p>\n<p>Like the internet itself, Omniverse is \u201ca network of networks,\u201d connecting users across industries and disciplines, said Steve Parker, NVIDIA\u2019s vice president of professional graphics.<\/p>\n<figure id=\"attachment_58851\" aria-describedby=\"caption-attachment-58851\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/08\/New-in-OV-scaled.jpg\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/08\/New-in-OV-672x357.jpg\" alt=\"New features in NVIDIA Omniverse\" width=\"672\" height=\"357\"><\/a><figcaption id=\"caption-attachment-58851\" class=\"wp-caption-text\">Examples of new features in NVIDIA Omniverse.<\/figcaption><\/figure>\n<p>Nearly a dozen leading companies will showcase Omniverse capabilities at SIGGRAPH, including hardware, software and cloud-service vendors ranging from AWS and Adobe to Dell, Epic and Microsoft. A half dozen companies will conduct NVIDIA-powered sessions on topics such as AI and virtual worlds.<\/p>\n<h2><b>Speeding Physics, Animating Animals<\/b><\/h2>\n<p>Parker detailed several technology upgrades in Omniverse. 
They span enhancements for simulating physically accurate materials with the Material Definition Language (<a href=\"https:\/\/www.nvidia.com\/en-us\/design-visualization\/technologies\/material-definition-language\/\">MDL<\/a>), real-time physics with <a href=\"https:\/\/developer.nvidia.com\/physx-sdk\">PhysX<\/a> and the hybrid rendering and AI system, RTX.<\/p>\n<p>\u201cThese core technology pillars are powered by NVIDIA high performance computing from the edge to the cloud,\u201d Parker said.<\/p>\n<p>For example, PhysX now supports soft-body and particle-cloth simulation, bringing more physical accuracy to virtual worlds in real time. And NVIDIA is fully open sourcing MDL so it can readily support graphics API standards like OpenGL or Vulkan, making the materials standard more broadly available to developers.<\/p>\n<p>Omniverse also will include neural graphics capabilities developed by NVIDIA Research that combine RTX graphics and AI. For example:<\/p>\n<ul>\n<li>Animal Modelers let artists iterate on an animal\u2019s form with point clouds, then automatically generate a 3D mesh.<\/li>\n<li>GauGAN360, the next evolution of <a href=\"https:\/\/blogs.nvidia.com\/blog\/2021\/11\/22\/gaugan2-ai-art-demo\/\">NVIDIA GauGAN<\/a>, generates 8K, 360-degree panoramas that can easily be loaded into an Omniverse scene.<\/li>\n<li>Instant NeRF creates 3D objects and scenes from 2D images.<\/li>\n<\/ul>\n<p>An <a href=\"https:\/\/developer.nvidia.com\/blog\/visualizing-interactive-simulations-with-omniverse-extension-for-nvidia-modulus\/\">Omniverse Extension<\/a> for <a href=\"https:\/\/developer.nvidia.com\/modulus\">NVIDIA Modulus<\/a>, a machine learning framework, will let developers use AI to speed simulations of real-world physics up to 100,000x, so the metaverse looks and feels like the physical world.<\/p>\n<p>In addition, <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/apps\/machinima\/\">Omniverse Machinima<\/a> \u2014 subject of a <a 
href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/08\/09\/in-the-nvidia-studio-august-9\/\">lively contest<\/a> at SIGGRAPH \u2014 now sports content from <i>Post Scriptum<\/i>, <i>Beyond the Wire<\/i> and <i>Shadow Warrior 3<\/i> as well as new AI animation tools like Audio2Gesture.<\/p>\n<p>A demo from Industrial Light &amp; Magic showed another new feature. <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/08\/09\/ilm-omniverse-deepsearch\/\">Omniverse DeepSearch<\/a> uses AI to help teams intuitively search through massive databases of untagged assets, bringing up accurate results for terms even when they\u2019re not specifically listed in metadata.<\/p>\n<h2><b>Graphics Get Smart<\/b><\/h2>\n<p>One of the essential pillars of the emerging metaverse is <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/08\/09\/neural-graphics-sdk-metaverse-content\/\">neural graphics<\/a>. It\u2019s a hybrid discipline that harnesses neural network models to accelerate and enhance computer graphics.<\/p>\n<p>\u201cNeural graphics intertwines AI and graphics, paving the way for a future graphics pipeline that is amenable to learning from data,\u201d said Sanja Fidler, vice president of AI at NVIDIA. \u201cNeural graphics will redefine how virtual worlds are created, simulated and experienced by users,\u201d she added.<\/p>\n<p>AI will help artists spawn the massive amount of 3D content needed to create the metaverse. For example, they can use neural graphics to capture objects and behaviors in the physical world quickly.<\/p>\n<p>Fidler described NVIDIA software to do just that, <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/03\/25\/instant-nerf-research-3d-ai\/\">Instant NeRF<\/a>, a tool to create a 3D object or scene from 2D images. It\u2019s the subject of one of NVIDIA\u2019s two best paper awards at SIGGRAPH.<\/p>\n<p>In the other best paper award, neural graphics powers a model that can predict and reduce reaction latencies in esports and AR\/VR applications. 
The two best papers are among 16 total that NVIDIA researchers are presenting this week at SIGGRAPH.<\/p>\n<figure id=\"attachment_58857\" aria-describedby=\"caption-attachment-58857\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/08\/Neural-graphics-scaled.jpg\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/08\/Neural-graphics-672x356.jpg\" alt=\"neural graphics\" width=\"672\" height=\"356\"><\/a><figcaption id=\"caption-attachment-58857\" class=\"wp-caption-text\">Neural graphics blends AI into the graphics pipeline.<\/figcaption><\/figure>\n<p>Designers and researchers can apply neural graphics and other techniques to create their own award-winning work using <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/08\/09\/neural-graphics-sdk-metaverse-content\/\">new software development kits<\/a> NVIDIA unveiled at the event.<\/p>\n<p>Fidler described one of them, <a href=\"https:\/\/github.com\/NVIDIAGameWorks\/kaolin-wisp\">Kaolin Wisp<\/a>, a suite of tools to create neural fields \u2014 AI models that represent a 3D scene or object \u2014 with just a few lines of code.<\/p>\n<p>Separately, NVIDIA announced <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/08\/09\/neuralvdb-ai\/\">NeuralVDB<\/a>, the next evolution of the open-sourced standard OpenVDB that industries from visual effects to scientific computing use to simulate and render water, fire, smoke and clouds.<\/p>\n<p>NeuralVDB uses neural models and GPU optimization to dramatically reduce memory requirements so users can interact with extremely large and complex datasets in real time and share them more efficiently.<\/p>\n<p>\u201cAI, the most powerful technology force of our time, will revolutionize every field of computer science, including computer graphics, and NVIDIA RTX is the engine of neural graphics,\u201d Huang said.<\/p>\n<p>Watch the full special address at 
NVIDIA\u2019s SIGGRAPH <a href=\"https:\/\/www.nvidia.com\/en-us\/events\/siggraph\/\">event site<\/a>. That\u2019s where you\u2019ll also find details of labs, presentations and the debut of a behind-the-scenes documentary on how we created our latest GTC keynote.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/2022\/08\/09\/siggraph-huang-metaverse-ai\/<\/p>\n","protected":false},"author":0,"featured_media":2488,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2487"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=2487"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2487\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/2488"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=2487"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=2487"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=2487"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}