{"id":3623,"date":"2024-06-02T14:48:03","date_gmt":"2024-06-02T14:48:03","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2024\/06\/02\/accelerate-everything-nvidia-ceo-says-ahead-of-computex\/"},"modified":"2024-06-02T14:48:03","modified_gmt":"2024-06-02T14:48:03","slug":"accelerate-everything-nvidia-ceo-says-ahead-of-computex","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2024\/06\/02\/accelerate-everything-nvidia-ceo-says-ahead-of-computex\/","title":{"rendered":"\u2018Accelerate Everything,\u2019 NVIDIA CEO Says Ahead of COMPUTEX"},"content":{"rendered":"<div>\n<p>\u201cGenerative AI is reshaping industries and opening new opportunities for innovation and growth,\u201d NVIDIA founder and CEO Jensen Huang said in an address ahead of this week\u2019s COMPUTEX technology conference in Taipei.<\/p>\n<p>\u201cToday, we\u2019re at the cusp of a major shift in computing,\u201d Huang told the audience, clad in his trademark black leather jacket. 
\u201cThe intersection of AI and accelerated computing is set to redefine the future.\u201d<\/p>\n<p>Huang spoke ahead of one of the world\u2019s premier technology conferences to an audience of more than 6,500 industry leaders, press, entrepreneurs, gamers, creators and AI enthusiasts gathered at the glass-domed National Taiwan University Sports Center set in the verdant heart of Taipei.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-large wp-image-71980\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/06\/crowd-computex-2024-672x464.jpg\" alt=\"\" width=\"672\" height=\"464\"><\/p>\n<p>The theme: NVIDIA accelerated platforms are in full production, whether through AI PCs and consumer devices featuring a host of NVIDIA RTX-powered capabilities or enterprises building and deploying AI factories with NVIDIA\u2019s full-stack computing platform.<\/p>\n<p>\u201cThe future of computing is accelerated,\u201d Huang said. \u201cWith our innovations in AI and accelerated computing, we\u2019re pushing the boundaries of what\u2019s possible and driving the next wave of technological advancement.\u201d<\/p>\n<h2>\u2018One-Year Rhythm\u2019<\/h2>\n<p>More\u2019s coming, with Huang revealing a roadmap for new semiconductors that will arrive on a one-year rhythm. Revealed for the first time, the Rubin platform will succeed the upcoming Blackwell platform, featuring new GPUs, a new Arm-based CPU \u2014 Vera \u2014 and advanced networking with NVLink 6, CX9 SuperNIC and the X1600 converged InfiniBand\/Ethernet switch.<\/p>\n<p>\u201cOur company has a one-year rhythm. 
Our basic philosophy is very simple: build the entire data center scale, disaggregate and sell to you parts on a one-year rhythm, and push everything to technology limits,\u201d Huang explained.<\/p>\n<p>NVIDIA\u2019s creative team used AI tools from members of the <a href=\"https:\/\/www.nvidia.com\/en-us\/startups\/\">NVIDIA Inception<\/a> startup program, built on <a href=\"https:\/\/www.nvidia.com\/en-us\/ai\/\">NVIDIA NIM<\/a> and NVIDIA\u2019s accelerated computing, to create the COMPUTEX keynote. Packed with demos, this showcase highlighted these innovative tools and the transformative impact of NVIDIA\u2019s technology.<\/p>\n<h2>\u2018Accelerated Computing Is Sustainable Computing\u2019<\/h2>\n<p>NVIDIA is driving down the cost of turning data into intelligence, Huang explained as he began his talk.<\/p>\n<p>\u201cAccelerated computing is sustainable computing,\u201d he emphasized, outlining how the combination of GPUs and CPUs can deliver up to a 100x speedup while only increasing power consumption by a factor of three, achieving 25x more performance per Watt over CPUs alone.<\/p>\n<p>\u201cThe more you buy, the more you save,\u201d Huang noted, highlighting this approach\u2019s significant cost and energy savings.<\/p>\n<h2>Industry Joins NVIDIA to Build AI Factories to Power New Industrial Revolution<\/h2>\n<p>Leading computer manufacturers, particularly from Taiwan, the global IT hub, have embraced NVIDIA GPUs and networking solutions. 
Top companies include <a href=\"https:\/\/www.asrockrack.com\/general\/news.asp?id=239\">ASRock Rack<\/a>, <a href=\"https:\/\/servers.asus.com\/NEWS\/ASUS-Presents-ESC-AI-POD-with-NVIDIA-GB200-NVL72-at-Computex-2024\">ASUS<\/a>, <a href=\"https:\/\/www.gigabyte.com\/Press\/News\/2168\">GIGABYTE<\/a>, Ingrasys, Inventec, <a href=\"https:\/\/svr.pegatroncorp.com\/News\/6\">Pegatron<\/a>, QCT, Supermicro, Wistron and Wiwynn, which are creating cloud, on-premises and edge AI systems.<\/p>\n<p>The NVIDIA MGX modular reference design platform now supports Blackwell, including the GB200 NVL2 platform, designed for optimal performance in large language model inference, retrieval-augmented generation and data processing.<\/p>\n<p>AMD and Intel are supporting the MGX architecture with plans to deliver, for the first time, their own CPU host processor module designs. Any server system builder can use these reference designs to save development time while ensuring consistency in design and performance.<\/p>\n<h2>Next-Generation Networking with Spectrum-X<\/h2>\n<p>In networking, Huang unveiled plans for the <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-supercharges-ethernet-networking-for-generative-ai\">annual release of Spectrum-X products<\/a> to cater to the growing demand for high-performance Ethernet networking for AI.<\/p>\n<p>NVIDIA Spectrum-X, the first Ethernet fabric built for AI, delivers 1.6x the network performance of traditional Ethernet fabrics. 
It accelerates the processing, analysis and execution of AI workloads and, in turn, the development and deployment of AI solutions.<\/p>\n<p>CoreWeave, GMO Internet Group, Lambda, Scaleway, STPX Global and Yotta are among the first AI cloud service providers embracing Spectrum-X to bring extreme networking performance to their AI infrastructures.<\/p>\n<h2>NVIDIA NIM to Transform Millions Into Gen AI Developers<\/h2>\n<p>With <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-nim-model-deployment-generative-ai-developers\">NVIDIA NIM<\/a>, the world\u2019s 28 million developers can now easily create generative AI applications. NIM \u2014 inference microservices that provide models as optimized containers \u2014 can be deployed on clouds, data centers or workstations.<\/p>\n<p>NIM also enables enterprises to maximize their infrastructure investments. For example, running Meta Llama 3-8B in a NIM produces up to 3x more generative AI tokens on accelerated infrastructure than without NIM.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-large wp-image-71989\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/06\/2G8A4248-672x448.jpg\" alt=\"\" width=\"672\" height=\"448\"><br \/>Nearly <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-nim-model-deployment-generative-ai-developers\">200 technology partners<\/a> \u2014 including Cadence, Cloudera, <a href=\"https:\/\/www.cohesity.com\/press\/unlock-gen-ai-capabilities-via-nvidia-collaboration\/\">Cohesity<\/a>, <a href=\"https:\/\/www.datastax.com\/press-release\/datastax-to-deliver-high-performance-rag-solution-with-20x-faster-embeddings-and-indexing-at-80-lower-cost-using-nvidia-microservices\">DataStax<\/a>, <a href=\"https:\/\/www.netapp.com\/newsroom\/press-releases\/news-rel-20240514-813887\/\">NetApp<\/a>, Scale AI, and <a 
href=\"https:\/\/news.synopsys.com\/2024-03-18-Synopsys-Showcases-EDA-Performance-and-Next-Gen-Capabilities-with-NVIDIA-Accelerated-Computing,-Generative-AI-and-Omniverse\">Synopsys <\/a>\u2014 are integrating NIM into their platforms to speed generative AI deployments for domain-specific applications, such as copilots, code assistants, digital human avatars and more. <a href=\"https:\/\/huggingface.co\/blog\/train-dgx-cloud\">Hugging Face<\/a> is now offering NIM \u2014 starting with <a href=\"https:\/\/ai.meta.com\/blog\/meta-llama-3\/\">Meta Llama 3<\/a>.<\/p>\n<p>\u201cToday we just posted up in Hugging Face the Llama 3 fully optimized, it\u2019s available there for you to try. You can even take it with you,\u201d Huang said. \u201cSo you could run it in the cloud, run it in any cloud, download this container, put it into your own data center, and you can host it to make it available for your customers.\u201d<\/p>\n<h2>NVIDIA Brings AI Assistants to Life With GeForce RTX AI PCs<\/h2>\n<p><a href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/news\/computex-2024-nvidia-geforce-announcements\">NVIDIA\u2019s RTX AI PCs<\/a>, powered by RTX technologies, are set to revolutionize consumer experiences with over 200 RTX AI laptops and more than 500 AI-powered apps and games.<\/p>\n<p>The <a href=\"https:\/\/developer.nvidia.com\/blog\/streamline-ai-powered-app-development-with-nvidia-rtx-ai-toolkit-for-windows-rtx-pcs\/\">RTX AI Toolkit<\/a> and newly available PC-based NIM inference microservices for the <a href=\"https:\/\/nvidianews.nvidia.com\/news\/digital-humans-ace-generative-ai-microservices\">NVIDIA ACE digital human platform<\/a> underscore NVIDIA\u2019s commitment to AI accessibility.<\/p>\n<p><a href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/news\/g-assist-ai-assistant\">Project G-Assist, an RTX-powered AI assistant technology demo<\/a>, was also announced, showcasing context-aware assistance for PC games and apps.<\/p>\n<p>And Microsoft and NVIDIA are 
collaborating to help developers bring new generative AI capabilities to their native Windows and web apps, with easy API access to RTX-accelerated small language models (SLMs) that enable retrieval-augmented generation (RAG) running on-device as part of Windows Copilot Runtime.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-large wp-image-71968\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/06\/2G8A4569-672x448.jpg\" alt=\"\" width=\"672\" height=\"448\"><\/p>\n<h2>NVIDIA Robotics Adopted by Industry Leaders<\/h2>\n<p>NVIDIA is spearheading the $50 trillion industrial digitization shift, with sectors embracing autonomous operations and digital twins \u2014 virtual models that enhance efficiency and cut costs. Through its Developer Program, NVIDIA offers access to NIM, fostering AI innovation.<\/p>\n<p>Taiwanese manufacturers are transforming their factories using NVIDIA\u2019s technology, with Huang showcasing Foxconn\u2019s use of NVIDIA Omniverse, Isaac and Metropolis to create digital twins, combining vision AI and robot development tools for enhanced robotic facilities.<\/p>\n<p>\u201cThe next wave of AI is physical AI. AI that understands the laws of physics, AI that can work among us,\u201d Huang said, emphasizing the importance of robotics and AI in future developments.<\/p>\n<p>The <a href=\"https:\/\/www.nvidia.com\/en-us\/industries\/robotics\/\">NVIDIA Isaac platform<\/a> provides a robust toolkit for developers to build AI robots, including AMRs, industrial arms and humanoids, powered by AI models and supercomputers like Jetson Orin and Thor.<\/p>\n<p>\u201cRobotics is here. Physical AI is here. This is not science fiction, and it\u2019s being used all over Taiwan. 
It\u2019s just really, really exciting,\u201d Huang added.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-71986 size-large\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/06\/robots-computex-2024-672x449.jpg\" alt=\"\" width=\"672\" height=\"449\"><\/p>\n<p>Global electronics giants are integrating NVIDIA\u2019s autonomous robotics into their factories, leveraging simulation in Omniverse to test and validate this new wave of AI for the physical world. This includes over 5 million preprogrammed robots worldwide.<\/p>\n<p>\u201cAll the factories will be robotic. The factories will orchestrate robots, and those robots will be building products that are robotic,\u201d Huang explained.<\/p>\n<p>Huang emphasized NVIDIA Isaac\u2019s role in boosting factory and warehouse efficiency, with global leaders like BYD Electronics, Siemens, Teradyne Robotics and Intrinsic adopting its advanced libraries and AI models.<\/p>\n<p><a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/products\/ai-enterprise\/\">NVIDIA AI Enterprise<\/a> on the IGX platform, with partners like ADLINK, Advantech and ONYX, delivers edge AI solutions meeting strict regulatory standards, essential for medical technology and other industries.<\/p>\n<p>Huang ended his keynote on the same note he began it on, paying tribute to Taiwan and NVIDIA\u2019s many partners there. \u201cThank you,\u201d Huang said. 
\u201cI love you guys.\u201d<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/computex-2024-jensen-huang\/<\/p>\n","protected":false},"author":0,"featured_media":3624,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3623"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3623"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3623\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3624"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3623"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3623"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3623"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}