{"id":3103,"date":"2023-08-08T18:50:43","date_gmt":"2023-08-08T18:50:43","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2023\/08\/08\/siggraph-special-address-nvidia-ceo-brings-generative-ai-to-la-show\/"},"modified":"2023-08-08T18:50:43","modified_gmt":"2023-08-08T18:50:43","slug":"siggraph-special-address-nvidia-ceo-brings-generative-ai-to-la-show","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2023\/08\/08\/siggraph-special-address-nvidia-ceo-brings-generative-ai-to-la-show\/","title":{"rendered":"SIGGRAPH Special Address: NVIDIA CEO Brings Generative AI to LA Show"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2023\/08\/08\/siggraph-2023-special-address\/\" data-title=\"SIGGRAPH Special Address: NVIDIA CEO Brings Generative AI to LA Show\" data-hashtags=\"\">\n<p>As generative AI continues to sweep an increasingly digital, hyperconnected world, NVIDIA founder and CEO Jensen Huang made a thunderous return to SIGGRAPH, the world\u2019s premier computer graphics conference.<\/p>\n<p>\u201cThe generative AI era is upon us, the iPhone moment if you will,\u201d Huang told an audience of thousands Tuesday during an in-person <a href=\"https:\/\/www.nvidia.com\/en-us\/events\/siggraph\/\">special address<\/a> in Los Angeles.<\/p>\n<p>News highlights include the next-generation <a href=\"https:\/\/nvidianews.nvidia.com\/news\/gh200-grace-hopper-superchip-with-hbm3e-memory\">GH200 Grace Hopper Superchip platform<\/a>, <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-ai-workbench-speeds-adoption-of-custom-generative-ai-for-worlds-enterprises\">NVIDIA AI Workbench<\/a> \u2014 a new unified toolkit that introduces simplified model tuning and deployment on NVIDIA AI platforms \u2014 and <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-omniverse-opens-portals-to-vast-worlds-of-openusd\">a major upgrade to NVIDIA Omniverse with generative AI and 
OpenUSD<\/a>.<\/p>\n<p>The announcements are about bringing all of the past decade\u2019s innovations \u2014 AI, virtual worlds, acceleration, simulation, collaboration and more \u2014 together.<\/p>\n<p>\u201cGraphics and artificial intelligence are inseparable, graphics needs AI, and AI needs graphics,\u201d Huang said, explaining that AI will learn skills in virtual worlds, and that AI will help create virtual worlds.<\/p>\n<figure id=\"attachment_66035\" aria-describedby=\"caption-attachment-66035\" class=\"wp-caption aligncenter\">\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/sigg-23-photography-social-size-keynote-image-3-1200x628-1.png\" alt=\"\" width=\"1200\" height=\"628\"><figcaption id=\"caption-attachment-66035\" class=\"wp-caption-text\">A packed house at the SIGGRAPH professional graphics conference attended NVIDIA founder and CEO Jensen Huang\u2019s keynote address.<\/figcaption><\/figure>\n<h2>Fundamental to AI, Real-Time Graphics<\/h2>\n<p>Five years ago at SIGGRAPH, NVIDIA reinvented graphics by bringing AI and real-time ray tracing to GPUs. But \u201cwhile we were reinventing computer graphics with artificial intelligence, we were reinventing the GPU altogether for artificial intelligence,\u201d Huang said.<\/p>\n<p>The result: increasingly powerful systems such as the NVIDIA HGX H100, which harnesses eight GPUs\u00a0 \u2014 and a total of 1 trillion transistors \u2014 that offer dramatic acceleration over CPU-based systems.<\/p>\n<p>\u201cThis is the reason why the world\u2019s data centers are rapidly transitioning to accelerated computing,\u201d Huang told the audience. 
\u201cThe more you buy, the more you save.\u201d<\/p>\n<p>To continue AI\u2019s momentum, NVIDIA created the Grace Hopper Superchip, the NVIDIA GH200, which combines a 72-core Grace CPU with a Hopper GPU, and which went into full production in May.<\/p>\n<p>Huang announced that the NVIDIA GH200, already in production, will be complemented with an additional version featuring cutting-edge HBM3e memory.<\/p>\n<p>He followed up by announcing the next-generation GH200 Grace Hopper Superchip platform, with the ability to connect multiple GPUs for exceptional performance and an easily scalable server design.<\/p>\n<p>Built to handle the world\u2019s most complex generative workloads, spanning large language models, recommender systems and vector databases, the new platform will be available in a wide range of configurations.<\/p>\n<p>The dual configuration \u2014 which delivers up to 3.5x more memory capacity and 3x more bandwidth than the current-generation offering \u2014 comprises a single server with 144 Arm Neoverse cores, eight petaflops of AI performance, and 282GB of the latest HBM3e memory technology.<\/p>\n<p>Leading system manufacturers are expected to deliver systems based on the platform in the second quarter of 2024.<\/p>\n<h2>NVIDIA AI Workbench Speeds Adoption of Custom Generative AI<\/h2>\n<p>To speed adoption of custom generative AI for the world\u2019s enterprises, Huang announced NVIDIA AI Workbench. It provides developers with a unified, easy-to-use toolkit to quickly create, test and fine-tune generative AI models on a PC or workstation \u2014 then scale them to virtually any data center, public cloud or <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/dgx-cloud\/\">NVIDIA DGX Cloud<\/a>.<\/p>\n<p>AI Workbench removes the complexity of getting started with an enterprise AI project. 
Accessed through a simplified interface running on a local system, it allows developers to fine-tune models from popular repositories such as Hugging Face, GitHub and NGC using custom data. The models can then be shared easily across multiple platforms.<\/p>\n<p>While hundreds of thousands of pretrained models are now available, customizing them with the many open-source tools available can be challenging and time consuming.<\/p>\n<p>\u201cIn order to democratize this ability, we have to make it possible to run pretty much everywhere,\u201d Huang said.<\/p>\n<p>With AI Workbench, developers can customize and run generative AI in just a few clicks. It allows them to pull together all necessary enterprise-grade models, frameworks, software development kits and libraries into a unified developer workspace.<\/p>\n<p>\u201cEverybody can do this,\u201d Huang said.<\/p>\n<p>Leading AI infrastructure providers \u2014 including Dell Technologies, Hewlett Packard Enterprise, HP Inc., Lambda, Lenovo and Supermicro \u2014 are embracing AI Workbench for its ability to bring enterprise generative AI capability to wherever developers want to work \u2014 including a local device.<\/p>\n<p><a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-and-hugging-face-to-connect-millions-of-developers-to-generative-ai-supercomputing\">Huang also announced a partnership between NVIDIA and startup Hugging Face<\/a>, which has 2 million users, that will put generative AI supercomputing at the fingertips of millions of developers building large language models and other advanced AI applications.<\/p>\n<p>Developers will be able to access NVIDIA DGX Cloud AI supercomputing within the Hugging Face platform to train and tune advanced AI models.<\/p>\n<p>\u201cThis is going to be a brand new service to connect the world\u2019s largest AI community to the world\u2019s best training and infrastructure,\u201d Huang said.<\/p>\n<p>In a video, Huang showed how AI Workbench and ChatUSD bring it all 
together: allowing a user to start a project on a GeForce RTX 4090 laptop and scale it seamlessly to a workstation or the data center as the project grows more complex.<\/p>\n<p>Using Jupyter Notebook, a user can prompt the model to generate a picture of Toy Jensen in space. When the model provides a result that doesn\u2019t work because it has never seen Toy Jensen, the user can fine-tune the model with eight images of Toy Jensen and then prompt it again to get a correct result.<\/p>\n<p>Then, with AI Workbench, the new model can be deployed to an enterprise application.<\/p>\n<h2>New NVIDIA AI Enterprise 4.0 Software Advances AI Deployment<\/h2>\n<p>In a further step to accelerate the adoption of generative AI, NVIDIA announced the latest version of its enterprise software suite, <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/products\/ai-enterprise\/\">NVIDIA AI Enterprise 4.0<\/a>.<\/p>\n<p>NVIDIA AI Enterprise gives businesses access to the tools needed to adopt generative AI, while also offering the security and API stability required for large-scale enterprise deployments.<\/p>\n<h2>Major Omniverse Release Converges Generative AI, OpenUSD for Industrial Digitalization<\/h2>\n<p>Offering new foundation applications and services for developers and industrial enterprises to optimize and enhance their 3D pipelines with the <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\">OpenUSD<\/a> framework and <a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/data-science\/generative-ai\/\">generative AI<\/a>, Huang announced a major release of NVIDIA Omniverse, an OpenUSD-native development platform for building, simulating, and collaborating across tools and virtual worlds.<\/p>\n<p>He also announced NVIDIA\u2019s contributions to OpenUSD, the framework and universal interchange for describing, simulating and collaborating across 3D tools. 
<\/p>\n<p>Updates to the Omniverse platform include advancements to Omniverse Kit \u2014 the engine for developing native OpenUSD applications and extensions \u2014 as well as to the <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/apps\/audio2face\/\">NVIDIA Omniverse Audio2Face<\/a> foundation app and <a href=\"https:\/\/developer.nvidia.com\/blog\/rtx-powered-spatial-framework-delivers-full-ray-tracing-with-usd-for-xr-pipelines\">spatial-computing capabilities<\/a>.<\/p>\n<p>Cesium, Convai, Move AI, SideFX Houdini and Wonder Dynamics are now connected to Omniverse via OpenUSD.<\/p>\n<p>And expanding their collaboration across Adobe Substance 3D, generative AI and OpenUSD initiatives, Adobe and NVIDIA announced plans to make Adobe Firefly \u2014 Adobe\u2019s family of creative generative AI models \u2014 available as APIs in Omniverse.<\/p>\n<p>Omniverse users can now <a href=\"https:\/\/developer.nvidia.com\/blog\/rtx-powered-spatial-framework-delivers-full-ray-tracing-with-usd-for-xr-pipelines\">build content, experiences and applications<\/a> that are compatible with other OpenUSD-based spatial computing platforms such as ARKit and RealityKit.<\/p>\n<p>Huang <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-omniverse-opens-portals-to-vast-worlds-of-openusd\">announced<\/a> a broad range of frameworks, resources and services for developers and companies to accelerate the adoption of Universal Scene Description, known as <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\">OpenUSD<\/a>, including contributions such as geospatial data models, metrics assembly and simulation-ready, or <a href=\"https:\/\/developer.nvidia.com\/omniverse\/simready-assets\">SimReady<\/a>, specifications for OpenUSD.<\/p>\n<p>Huang also announced four new <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/cloud\/\">Omniverse Cloud<\/a> APIs built by NVIDIA for developers to more seamlessly implement and 
deploy OpenUSD pipelines and applications.<\/p>\n<ul>\n<li>ChatUSD \u2014 Assisting developers and artists working with OpenUSD data and scenes, ChatUSD is a large language model (LLM) agent for generating Python-USD code scripts from text and answering USD knowledge questions.<\/li>\n<li><a href=\"https:\/\/developer.nvidia.com\/usd\/validator\">RunUSD<\/a> \u2014 a cloud API that translates OpenUSD files into fully path-traced rendered images by checking compatibility of the uploaded files against versions of OpenUSD releases, and generating renders with Omniverse Cloud.<\/li>\n<li>DeepSearch \u2014 an LLM agent enabling fast semantic search through massive databases of untagged assets.<\/li>\n<li>USD-GDN Publisher \u2014 a one-click service that enables enterprises and software makers to publish high-fidelity, OpenUSD-based experiences to the Omniverse Cloud <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/solutions\/stream-3d-apps\/\">Graphics Delivery Network (GDN)<\/a> from an Omniverse-based application such as <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/apps\/create\/\">USD Composer<\/a>, as well as stream in real time to web browsers and mobile devices.<\/li>\n<\/ul>\n<p>These contributions are an evolution of last week\u2019s announcement of NVIDIA\u2019s co-founding of the Alliance for OpenUSD along with Pixar, Adobe, Apple and Autodesk.<\/p>\n<h2>Powerful New Desktop Systems, Servers<\/h2>\n<p>Providing more computing power for all of this, <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-global-workstation-manufacturers-to-launch-powerful-systems-for-generative-ai-and-llm-development-content-creation-data-science\">Huang said NVIDIA and global workstation manufacturers<\/a> are announcing powerful new RTX workstations for development and content creation in the age of generative AI and digitization.<\/p>\n<p>The systems, including those from BOXX, Dell Technologies, HP and Lenovo, are based on <a 
href=\"https:\/\/www.nvidia.com\/en-us\/design-visualization\/rtx-6000\/\">NVIDIA RTX 6000 Ada Generation GPUs<\/a> and incorporate <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/products\/ai-enterprise\/\">NVIDIA AI Enterprise<\/a> and <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/enterprise\/\">NVIDIA Omniverse Enterprise<\/a> software.<\/p>\n<p>Separately, NVIDIA released three new desktop workstation Ada Generation GPUs \u2014 the <a href=\"http:\/\/www.nvidia.com\/rtx-5000\">NVIDIA RTX 5000<\/a>, <a href=\"http:\/\/www.nvidia.com\/rtx-4500\">RTX 4500<\/a> and <a href=\"http:\/\/www.nvidia.com\/rtx-4000\">RTX 4000<\/a> \u2014 to deliver the latest AI, graphics and real-time rendering technology to professionals worldwide.<\/p>\n<p><a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-global-data-center-system-manufacturers-to-supercharge-generative-ai-and-industrial-digitalization\">Huang also detailed how, together with global data center system manufacturers, NVIDIA is continuing to supercharge generative AI and industrial digitalization<\/a> with new NVIDIA OVX servers featuring the new NVIDIA L40S GPU, a powerful, universal data center processor.<\/p>\n<p>The powerful new systems will accelerate the most compute-intensive, complex applications, including AI training and inference, 3D design and visualization, video processing and industrial digitalization with the NVIDIA Omniverse platform.<\/p>\n<h2>NVIDIA Research Bringing New Capabilities<\/h2>\n<p>More innovations are coming, thanks to NVIDIA Research.<\/p>\n<p><a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/08\/08\/siggraph-research-generative-ai-materials-3d-scenes\/\">At the show\u2019s Real Time Live Event, NVIDIA researchers will demonstrate a generative AI workflow<\/a> that helps artists rapidly create and iterate on materials for 3D scenes, using text or image prompts to generate custom textured materials faster and with finer creative control.<\/p>\n<p>And NVIDIA Research also 
demoed how AI can take video conferencing to the next level with new 3D features. NVIDIA Research recently published a <a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3588037.3595385\">paper<\/a> demonstrating how AI could power a 3D video-conferencing system with minimal capture equipment.<\/p>\n<p>The production version of Maxine, now available in NVIDIA AI Enterprise, allows professionals, teams, creators and others to tap into the power of AI to create high-quality audio and video effects, even when using standard microphones and webcams.<\/p>\n<p><i>Watch Huang\u2019s full special address at NVIDIA\u2019s SIGGRAPH <\/i><a href=\"https:\/\/www.nvidia.com\/en-us\/events\/siggraph\/\"><i>event site<\/i><\/a><i>, where there are also details of labs, presentations and more happening throughout the show.<\/i><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/2023\/08\/08\/siggraph-2023-special-address\/<\/p>\n","protected":false},"author":0,"featured_media":3104,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3103"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3103"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3103\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3104"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/me
dia?parent=3103"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3103"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3103"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}