{"id":3857,"date":"2025-01-07T13:43:45","date_gmt":"2025-01-07T13:43:45","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2025\/01\/07\/ces-2025-ai-advancing-at-incredible-pace-nvidia-ceo-says\/"},"modified":"2025-01-07T13:43:45","modified_gmt":"2025-01-07T13:43:45","slug":"ces-2025-ai-advancing-at-incredible-pace-nvidia-ceo-says","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2025\/01\/07\/ces-2025-ai-advancing-at-incredible-pace-nvidia-ceo-says\/","title":{"rendered":"CES 2025: AI Advancing at \u2018Incredible Pace,\u2019 NVIDIA CEO Says"},"content":{"rendered":"<div>\n<p>NVIDIA founder and CEO Jensen Huang kicked off CES 2025 with a 90-minute keynote that included new products to advance gaming, autonomous vehicles, robotics, and agentic AI.<\/p>\n<p>AI has been \u201cadvancing at an incredible pace,\u201d he said before an audience of more than 6,000 packed into the Michelob Ultra Arena in Las Vegas.<\/p>\n<p>\u201cIt started with perception AI \u2014 understanding images, words, and sounds. Then generative AI \u2014 creating text, images and sound,\u201d Huang said. Now, we\u2019re entering the era of \u201cphysical AI, AI that can perceive, reason, plan and act.\u201d<\/p>\n<p>NVIDIA GPUs and platforms are at the heart of this transformation, Huang explained, enabling breakthroughs across industries, including gaming, robotics and autonomous vehicles (AVs).<\/p>\n<p>Huang\u2019s keynote showcased how NVIDIA\u2019s latest innovations are enabling this new era of AI, with several groundbreaking announcements.<\/p>\n<p>Huang started off his talk by reflecting on NVIDIA\u2019s three-decade journey. In 1999, NVIDIA invented the programmable GPU. 
Since then, modern AI has fundamentally changed how computing works, he said. \u201cEvery single layer of the technology stack has been transformed, an incredible transformation, in just 12 years.\u201d<\/p>\n<p><strong>Revolutionizing Graphics With GeForce RTX 50 Series<\/strong><\/p>\n<p>\u201cGeForce enabled AI to reach the masses, and now AI is coming home to GeForce,\u201d Huang said.<\/p>\n<p>With that, he introduced <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/news\/rtx-50-series-graphics-cards-gpu-laptop-announcements\" rel=\"noopener\">the NVIDIA GeForce RTX 5090 GPU<\/a>, the most powerful GeForce RTX GPU so far, with 92 billion transistors and delivering 3,352 trillion AI operations per second (TOPS).<\/p>\n<p>\u201cHere it is \u2014 our brand-new GeForce RTX 50 series, Blackwell architecture,\u201d Huang said, holding the blacked-out GPU aloft and noting how it\u2019s able to harness advanced AI to enable breakthrough graphics. \u201cThe GPU is just a beast.\u201d<\/p>\n<p>\u201cEven the mechanical design is a miracle,\u201d Huang said, noting that the graphics card has two cooling fans.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-medium wp-image-77057\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2025\/01\/GDOH4366-960x640.jpg\" alt=\"\" width=\"960\" height=\"640\"><\/p>\n<p>More variations in the GPU series are coming. The GeForce RTX 5090 and GeForce RTX 5080 desktop GPUs are scheduled to be available Jan. 30. The GeForce RTX 5070 Ti and the GeForce RTX 5070 desktop GPUs are slated to be available starting in February. Laptop GPUs are expected in March.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/news\/dlss4-multi-frame-generation-ai-innovations\" rel=\"noopener\">DLSS 4 introduces<\/a> Multi Frame Generation, working in unison with the complete suite of DLSS technologies to boost performance by up to 8x. 
<a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/news\/reflex-2-even-lower-latency-gameplay-with-frame-warp\/\" rel=\"noopener\">NVIDIA also unveiled NVIDIA Reflex 2<\/a>, which can reduce PC latency by up to 75%.<\/p>\n<p>The latest generation of DLSS can generate three additional frames for every frame we calculate, Huang explained. \u201cAs a result, we\u2019re able to render at incredibly high performance, because AI does a lot less computation.\u201d<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/rtx-kit\" rel=\"noopener\">RTX Neural Shaders<\/a> use small neural networks to improve textures, materials and lighting in real-time gameplay. RTX Neural Faces and RTX Hair advance real-time face and hair rendering, using generative AI to animate the most realistic digital characters ever. RTX Mega Geometry increases the number of ray-traced triangles by up to 100x, providing more detail.<\/p>\n<p><strong>Advancing Physical AI With Cosmos<\/strong><\/p>\n<p>In addition to advancements in graphics, Huang introduced the <a target=\"_blank\" href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-launches-cosmos-world-foundation-model-platform-to-accelerate-physical-ai-development\" rel=\"noopener\">NVIDIA Cosmos<\/a> world foundation model platform, describing it as a game-changer for robotics and industrial AI.<\/p>\n<p>The next frontier of AI is physical AI, Huang explained. 
He likened this moment to the transformative impact of large language models on generative AI.<\/p>\n<p>\u201cThe ChatGPT moment for general robotics is just around the corner,\u201d he explained.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-medium wp-image-77063\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2025\/01\/JL1_0575-960x640.jpg\" alt=\"\" width=\"960\" height=\"640\"><\/p>\n<p>Like large language models, world foundation models are fundamental to advancing robot and AV development, yet not all developers have the expertise and resources to train their own, Huang said.<\/p>\n<p>Cosmos integrates generative models, tokenizers, and a video processing pipeline to power physical AI systems like AVs and robots.<\/p>\n<p>Cosmos aims to bring the power of foresight and multiverse simulation to AI models, enabling them to simulate every possible future and select optimal actions.<\/p>\n<p>Cosmos models ingest text, image or video prompts and generate virtual world states as videos, Huang explained. 
\u201cCosmos generations prioritize the unique requirements of AV and robotics use cases like real-world environments, lighting and object permanence.\u201d<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-launches-cosmos-world-foundation-model-platform-to-accelerate-physical-ai-development\" rel=\"noopener\">Leading robotics and automotive companies, including<\/a> 1X, Agile Robots, Agility, Figure AI, Foretellix, Fourier, <a target=\"_blank\" href=\"https:\/\/medium.com\/@xk.li\/galbot-leverages-nvidia-cosmos-to-accelerate-humanoid-development-35be9f8a54d9\" rel=\"noopener\">Galbot<\/a>, <a target=\"_blank\" href=\"https:\/\/hillbot.ai\/blog\/hillbot-nvidia-cosmos\" rel=\"noopener\">Hillbot<\/a>, <a target=\"_blank\" href=\"https:\/\/blog.intbot.ai\/intbot-nvidia-cosmos-service-robotics\" rel=\"noopener\">IntBot<\/a>, <a target=\"_blank\" href=\"https:\/\/neura-robotics.com\/nvidia-cosmos-neura-robotics-ai-cognitive-robots-platform\" rel=\"noopener\">Neura Robotics<\/a>, Skild AI, Virtual Incision, Waabi and XPENG, along with ridesharing giant Uber, are among the first to adopt Cosmos.<\/p>\n<p><a href=\"https:\/\/blogs.nvidia.com\/blog\/hyundai-motor-group-ces\/\">In addition, Hyundai Motor Group is adopting NVIDIA AI and Omniverse<\/a> to create safer, smarter vehicles, supercharge manufacturing and deploy cutting-edge robotics.<\/p>\n<p>Cosmos is openly licensed and available on GitHub.<\/p>\n<p><strong>Empowering Developers With AI Foundation Models<\/strong><\/p>\n<p>Beyond robotics and autonomous vehicles, NVIDIA is empowering developers and creators with AI foundation models.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-launches-ai-foundation-models-for-rtx-ai-pcs\" rel=\"noopener\">Huang introduced AI foundation models for RTX PCs<\/a> that supercharge digital humans, content creation, productivity and development.<\/p>\n<p>\u201cThese AI models run in every single cloud because NVIDIA 
GPUs are now available in every single cloud,\u201d Huang said. \u201cIt\u2019s available in every single OEM, so you could literally take these models, integrate them into your software packages, create AI agents and deploy them wherever the customers want to run the software.\u201d<\/p>\n<p>These models \u2014 offered as <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai\/\" rel=\"noopener\">NVIDIA NIM<\/a> microservices \u2014 are accelerated by the new <a target=\"_blank\" href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-blackwell-geforce-rtx-50-series-opens-new-world-of-ai-computer-graphics\" rel=\"noopener\">GeForce RTX 50 Series GPUs<\/a>.<\/p>\n<p>The GPUs have what it takes to run these swiftly, adding support for FP4 computing, boosting AI inference by up to 2x and enabling generative AI models to run locally in a smaller memory footprint compared with previous-generation hardware.<\/p>\n<p>Huang explained the potential of new tools for creators: \u201cWe\u2019re creating a whole bunch of blueprints that our ecosystem could take advantage of. All of this is completely open source, so you could take it and modify the blueprints.\u201d<\/p>\n<p>Top PC manufacturers and system builders are launching NIM-ready RTX AI PCs with GeForce RTX 50 Series GPUs. 
\u201cAI PCs are coming to a home near you,\u201d Huang said.<\/p>\n<p>While these tools bring AI capabilities to personal computing, NVIDIA is also advancing AI-driven solutions in the automotive industry, where safety and intelligence are paramount.<\/p>\n<p><strong>Innovations in Autonomous Vehicles<\/strong><\/p>\n<p>Huang announced the <a target=\"_blank\" href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-drive-hyperion-platform-achieves-critical-automotive-safety-and-cybersecurity-milestones-for-av-development\" rel=\"noopener\">NVIDIA DRIVE Hyperion AV platform<\/a>, built on the new NVIDIA AGX Thor system-on-a-chip (SoC), designed for generative AI models and delivering advanced functional safety and autonomous driving capabilities.<\/p>\n<p>\u201cThe autonomous vehicle revolution is here,\u201d Huang said. \u201cBuilding autonomous vehicles, like all robots, requires three computers: NVIDIA DGX to train AI models, Omniverse to test drive and generate synthetic data, and DRIVE AGX, a supercomputer in the car.\u201d<\/p>\n<p>DRIVE Hyperion, the first end-to-end AV platform, integrates advanced SoCs, a full sensor suite, and an active safety and level 2 driving stack for next-generation vehicles, with adoption by automotive safety pioneers such as Mercedes-Benz, JLR and Volvo Cars.<\/p>\n<p>Huang highlighted the critical role of synthetic data in advancing autonomous vehicles. 
Real-world data is limited, so synthetic data is essential for the autonomous vehicle data factory, he explained.<\/p>\n<p>Powered by NVIDIA Omniverse AI models and Cosmos, this approach \u201cgenerates synthetic driving scenarios that enhance training data by orders of magnitude.\u201d<\/p>\n<p>Using Omniverse and Cosmos, NVIDIA\u2019s AI data factory can scale \u201chundreds of drives into billions of effective miles,\u201d Huang said, dramatically increasing the datasets needed for safe and advanced autonomous driving.<\/p>\n<p>\u201cWe are going to have mountains of training data for autonomous vehicles,\u201d he added.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/nvidianews.nvidia.com\/news\/toyota-aurora-continental-nvidia-drive\" rel=\"noopener\">Toyota, the world\u2019s largest automaker, will build its next-generation vehicles on the NVIDIA DRIVE AGX Orin<\/a>, running the safety-certified NVIDIA DriveOS operating system, Huang said.<\/p>\n<p>\u201cJust as computer graphics was revolutionized at such an incredible pace, you\u2019re going to see the pace of AV development increasing tremendously over the next several years,\u201d Huang said. 
These vehicles will offer functionally safe, advanced driving assistance capabilities.<\/p>\n<p><b>Agentic AI and Digital Manufacturing<\/b><\/p>\n<p>NVIDIA and its partners have launched AI <a href=\"https:\/\/blogs.nvidia.com\/blog\/agentic-ai-blueprints\">Blueprints for agentic AI<\/a>, including PDF-to-podcast for efficient research and video search and summarization for analyzing large quantities of video and images \u2014 enabling developers to build, test and run AI agents anywhere.<\/p>\n<p>AI Blueprints empower developers to deploy custom agents for automating enterprise workflows. This new category of partner blueprints integrates NVIDIA AI Enterprise software, including NVIDIA NIM microservices and NVIDIA NeMo, with platforms from leading providers like CrewAI, Daily, LangChain, LlamaIndex and Weights &amp; Biases.<\/p>\n<p>Additionally, Huang announced new <a href=\"https:\/\/blogs.nvidia.com\/blog\/nemotron-model-families\">Llama Nemotron<\/a> models. Available as NVIDIA NIM microservices, they can supercharge AI agents on any accelerated system, and developers can use them to build agents for tasks like customer support, fraud detection and supply chain optimization. NIM microservices also streamline video content management, boosting efficiency and audience engagement in the media industry.<\/p>\n<p>Moving beyond digital applications, NVIDIA\u2019s innovations are paving the way for AI to revolutionize the physical world with robotics.<\/p>\n<p>\u201cAll of the enabling technologies that I\u2019ve been talking about are going to make it possible for us in the next several years to see very rapid breakthroughs, surprising breakthroughs, in general robotics.\u201d<\/p>\n<p>In manufacturing, the <a href=\"https:\/\/blogs.nvidia.com\/blog\/isaac-gr00t-blueprint-humanoid-robotics\/\">NVIDIA Isaac GR00T Blueprint<\/a> for synthetic motion generation will help developers generate exponentially large synthetic 
motion data to train their humanoids using imitation learning.<\/p>\n<p>Huang emphasized the importance of training robots efficiently, using NVIDIA\u2019s Omniverse to generate millions of synthetic motions for humanoid training.<\/p>\n<p>The Mega blueprint enables large-scale simulation of robot fleets, adopted by leaders like Accenture and KION for warehouse automation.<\/p>\n<p>These AI tools set the stage for NVIDIA\u2019s latest innovation: a personal AI supercomputer called Project DIGITS.<\/p>\n<p><strong>NVIDIA Unveils Project DIGITS<\/strong><\/p>\n<p>Putting NVIDIA Grace Blackwell on every desk and at every AI developer\u2019s fingertips, Huang unveiled <a target=\"_blank\" href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-puts-grace-blackwell-on-every-desk-and-at-every-ai-developers-fingertips\" rel=\"noopener\">NVIDIA Project DIGITS<\/a>.<\/p>\n<p>\u201cI have one more thing that I want to show you,\u201d Huang said. \u201cNone of this would be possible if not for this incredible project that we started about a decade ago. Inside the company, it was called Project DIGITS \u2014 deep learning GPU intelligence training system.\u201d<\/p>\n<p>Huang highlighted the legacy of NVIDIA\u2019s AI supercomputing journey, telling the story of how in 2016 he delivered the first NVIDIA DGX system to OpenAI. \u201cAnd obviously, it revolutionized artificial intelligence computing.\u201d<\/p>\n<p>The new Project DIGITS takes this mission further. \u201cEvery software engineer, every engineer, every creative artist \u2014 everybody who uses computers today as a tool \u2014 will need an AI supercomputer,\u201d Huang said.<\/p>\n<p>Huang revealed that Project DIGITS, powered by the GB10 Grace Blackwell Superchip, represents NVIDIA\u2019s smallest yet most powerful AI supercomputer. \u201cThis is NVIDIA\u2019s latest AI supercomputer,\u201d Huang said, showcasing the device. \u201cIt runs the entire NVIDIA AI stack \u2014 all of NVIDIA software runs on this. 
DGX Cloud runs on this.\u201d<\/p>\n<p>The compact yet powerful Project DIGITS is expected to be available in May.<\/p>\n<p><strong>A Year of Breakthroughs<\/strong><\/p>\n<p>\u201cIt\u2019s been an incredible year,\u201d Huang said as he wrapped up the keynote. He highlighted NVIDIA\u2019s major achievements: Blackwell systems, physical AI foundation models, and breakthroughs in agentic AI and robotics.<\/p>\n<p>\u201cI want to thank all of you for your partnership,\u201d Huang said.<\/p>\n<p><i>See <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-eu\/about-nvidia\/terms-of-service\/\" rel=\"noopener\"><i>notice<\/i><\/a><i> regarding software product information.<\/i><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/ces-2025-jensen-huang\/<\/p>\n","protected":false},"author":0,"featured_media":3858,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3857"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3857"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3857\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3858"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3857"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3857"},{"taxonomy":"po
st_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3857"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}