{"id":3283,"date":"2023-12-04T16:43:36","date_gmt":"2023-12-04T16:43:36","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2023\/12\/04\/why-gpus-are-great-for-ai\/"},"modified":"2023-12-04T16:43:36","modified_gmt":"2023-12-04T16:43:36","slug":"why-gpus-are-great-for-ai","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2023\/12\/04\/why-gpus-are-great-for-ai\/","title":{"rendered":"Why GPUs Are Great for AI"},"content":{"rendered":"<div id=\"bsf_rt_marker\">\n<p>GPUs have been called the rare Earth metals \u2014 even the gold \u2014 of artificial intelligence, because they\u2019re foundational for today\u2019s <a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/data-science\/generative-ai\/\">generative AI<\/a> era.<\/p>\n<p>Three technical reasons, and many stories, explain why that\u2019s so. Each reason has multiple facets well worth exploring, but at a high level:<\/p>\n<ul>\n<li>GPUs employ parallel processing.<\/li>\n<li>GPU systems scale up to supercomputing heights.<\/li>\n<li>The GPU software stack for AI is broad and deep.<\/li>\n<\/ul>\n<p>The net result is GPUs perform technical calculations faster and with greater <a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/energy-efficiency\/\">energy efficiency<\/a> than CPUs. That means they deliver leading performance for AI training and inference as well as gains across a wide array of applications that use <a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-accelerated-computing\/\">accelerated computing<\/a>.<\/p>\n<p>In its <a href=\"https:\/\/aiindex.stanford.edu\/wp-content\/uploads\/2023\/04\/HAI_AI-Index-Report_2023.pdf\">recent report<\/a> on AI, Stanford\u2019s Human-Centered AI group provided some context. 
GPU performance \u201chas increased roughly 7,000 times\u201d since 2003 and price per performance is \u201c5,600 times greater,\u201d it reported.<\/p>\n<figure id=\"attachment_68465\" aria-describedby=\"caption-attachment-68465\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/12\/Stanford-2023-AI-report-GPU-performance-final.jpg\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/12\/Stanford-2023-AI-report-GPU-performance-final-672x329.jpg\" alt=\"Stanford report on GPU performance increases\" width=\"672\" height=\"329\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-68465\" class=\"wp-caption-text\">A 2023 report captured the steep rise in GPU performance and price\/performance.<\/figcaption><\/figure>\n<p>The report also cited analysis from Epoch, an independent research group that measures and forecasts AI advances.<\/p>\n<p>\u201cGPUs are the dominant computing platform for accelerating machine learning workloads, and most (if not all) of the biggest models over the last five years have been trained on GPUs \u2026 [they have] thereby centrally contributed to the recent progress in AI,\u201d Epoch said on <a href=\"https:\/\/epochai.org\/blog\/trends-in-gpu-price-performance\">its site<\/a>.<\/p>\n<p>A <a href=\"https:\/\/cset.georgetown.edu\/wp-content\/uploads\/AI-Chips%E2%80%94What-They-Are-and-Why-They-Matter.pdf\">2020 study<\/a> assessing AI technology for the U.S. 
government drew similar conclusions.<\/p>\n<p>\u201cWe expect [leading-edge] AI chips are one to three orders of magnitude more cost-effective than leading-node CPUs when counting production and operating costs,\u201d it said.<\/p>\n<p>NVIDIA GPUs have increased performance on AI inference 1,000x in the last ten years, said Bill Dally, the company\u2019s chief scientist in a <a href=\"https:\/\/blogs.nvidia.com\/blog\/hot-chips-dally-research\/\">keynote<\/a> at Hot Chips, an annual gathering of semiconductor and systems engineers.<\/p>\n<h2><b>ChatGPT Spread the News<\/b><\/h2>\n<p>ChatGPT provided a powerful example of how GPUs are great for AI. The <a href=\"https:\/\/blogs.nvidia.com\/blog\/what-are-large-language-models-used-for\/\">large language model<\/a> (LLM), trained and run on thousands of NVIDIA GPUs, runs generative AI services used by more than 100 million people.<\/p>\n<p>Since its 2018 launch, <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/resources\/mlperf-benchmarks\/\">MLPerf<\/a>, the industry-standard benchmark for AI, has provided numbers that detail the leading performance of NVIDIA GPUs on both AI training and inference.<\/p>\n<p>For example, <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/grace-hopper-superchip\/\">NVIDIA Grace Hopper Superchips<\/a> swept the <a href=\"https:\/\/blogs.nvidia.com\/blog\/grace-hopper-inference-mlperf\/\">latest round<\/a> of inference tests. <a href=\"https:\/\/developer.nvidia.com\/blog\/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus\/\">NVIDIA TensorRT-LLM<\/a>, inference software released since that test, delivers up to an 8x boost in performance and more than a 5x reduction in energy use and total cost of ownership. 
Indeed, NVIDIA GPUs have won every round of MLPerf training and inference tests since the benchmark was released in 2018.<\/p>\n<p>In February, NVIDIA GPUs <a href=\"https:\/\/blogs.nvidia.com\/blog\/stac-ml-inference-gpu\/\">delivered leading results<\/a> for inference, serving up thousands of inferences per second on the most demanding models in the STAC-ML Markets benchmark, a key technology performance gauge for the financial services industry.<\/p>\n<p>A Red Hat software engineering team put it succinctly in <a href=\"https:\/\/developers.redhat.com\/articles\/2022\/11\/21\/why-gpus-are-essential-computing\">a blog<\/a>: \u201cGPUs have become the foundation of artificial intelligence.\u201d<\/p>\n<h2><b>AI Under the Hood<\/b><\/h2>\n<p>A brief look under the hood shows why GPUs and AI make a powerful pairing.<\/p>\n<p>An AI model, also called a neural network, is essentially a mathematical lasagna, made from layer upon layer of linear algebra equations. Each equation represents the likelihood that one piece of data is related to another.<\/p>\n<p>For their part, GPUs pack thousands of cores, tiny calculators working in parallel to slice through the math that makes up an AI model. This, at a high level, is how <a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-ai-computing\/\">AI computing<\/a> works.<\/p>\n<h2><b>Highly Tuned Tensor Cores<\/b><\/h2>\n<p>Over time, NVIDIA\u2019s engineers have tuned GPU cores to the evolving needs of AI models. 
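<\/p>\n<p>The \u201cmathematical lasagna\u201d described above can be sketched in a few lines. The toy example below is this editor\u2019s illustration, not NVIDIA code: it shows that a forward pass through a small network is layer after layer of matrix multiplication, the operation GPU cores grind through in parallel.<\/p>

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: each layer is one matrix multiply
# (linear algebra) followed by a simple nonlinearity.
# Toy sizes for illustration only.
x = rng.standard_normal((1, 64))     # one input sample with 64 features
W1 = rng.standard_normal((64, 128))  # layer-1 weights
W2 = rng.standard_normal((128, 10))  # layer-2 weights

h = np.maximum(x @ W1, 0.0)          # layer 1: matmul, then ReLU
y = h @ W2                           # layer 2: matmul
print(y.shape)                       # (1, 10)
```

<p>Every matrix multiply here is independent work that a GPU can spread across its cores; real models simply stack far more, and far larger, layers of the same operation.<\/p>
<p>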
The latest GPUs include <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/tensor-cores\/\">Tensor Cores<\/a> that are 60x more powerful than the first-generation designs for processing the matrix math neural networks use.<\/p>\n<p>In addition, <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/h100\/\">NVIDIA Hopper Tensor Core GPUs<\/a> include a <a href=\"https:\/\/blogs.nvidia.com\/blog\/h100-transformer-engine\/\">Transformer Engine<\/a> that can automatically adjust to the optimal precision needed to process <a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-a-transformer-model\/\">transformer models<\/a>, the class of neural networks that spawned generative AI.<\/p>\n<p>Along the way, each GPU generation has packed more memory and optimized techniques to store an entire AI model in a single GPU or set of GPUs.<\/p>\n<h2><b>Models Grow, Systems Expand<\/b><\/h2>\n<p>The complexity of AI models is expanding a whopping 10x a year.<\/p>\n<p>The current state-of-the-art LLM, GPT-4, packs more than a trillion parameters, a metric of its mathematical density. 
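<\/p>\n<p>A quick back-of-the-envelope calculation shows why parameter counts at that scale matter for hardware, and why the precision formats mentioned above matter too. The sketch below is illustrative arithmetic only, counting just the weights; activations, optimizer state and other buffers would add more.<\/p>

```python
# Memory needed just to store the weights of a trillion-parameter
# model at common numeric precisions (illustrative; weights only).
PARAMS = 1_000_000_000_000  # one trillion parameters

BYTES_PER_PARAM = {"FP32": 4, "FP16": 2, "FP8": 1}

for fmt, nbytes in BYTES_PER_PARAM.items():
    terabytes = PARAMS * nbytes / 1e12
    print(f"{fmt}: {terabytes:.0f} TB of weights")
# FP32: 4 TB, FP16: 2 TB, FP8: 1 TB
```

<p>Dropping to lower precision cuts the footprint proportionally, which is one reason adjusting precision per layer pays off at this scale.<\/p>
<p>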
That\u2019s up from less than 100 million parameters for a popular LLM in 2018.<\/p>\n<figure id=\"attachment_68468\" aria-describedby=\"caption-attachment-68468\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/12\/1000x-AI-inferrence-gain-in-10-years-scaled.jpg\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/12\/1000x-AI-inferrence-gain-in-10-years-672x380.jpg\" alt=\"Chart shows 1,000x performance improvement on AI inference over a decade for single GPUs\" width=\"672\" height=\"380\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-68468\" class=\"wp-caption-text\">In a recent talk at Hot Chips, NVIDIA Chief Scientist Bill Dally described how single-GPU performance on AI inference expanded 1,000x in the last decade.<\/figcaption><\/figure>\n<p>GPU systems have kept pace by ganging up on the challenge. They scale up to supercomputers, thanks to their fast <a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-nvidia-nvlink\/\">NVLink<\/a> interconnects and <a href=\"https:\/\/www.nvidia.com\/en-us\/networking\/quantum2\/\">NVIDIA Quantum InfiniBand networks<\/a>.<\/p>\n<p>For example, the <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/dgx-gh200\/\">DGX GH200<\/a>, a large-memory AI supercomputer, combines up to 256 NVIDIA GH200 Grace Hopper Superchips into a single data-center-sized GPU with 144 terabytes of shared memory.<\/p>\n<p>Each GH200 superchip is a single server with 72 Arm Neoverse CPU cores and four petaflops of AI performance. 
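<\/p>\n<p>The figures above compose directly. The sketch below is illustrative arithmetic from the numbers quoted in the text, not a spec sheet: it totals what 256 GH200 superchips add up to inside a DGX GH200.<\/p>

```python
# Aggregate capability of a DGX GH200, computed from the
# per-superchip figures quoted above (illustrative arithmetic).
SUPERCHIPS = 256
PETAFLOPS_EACH = 4       # AI performance per GH200 superchip
CPU_CORES_EACH = 72      # Arm Neoverse cores per superchip
SHARED_MEMORY_TB = 144   # shared memory across the whole system

total_petaflops = SUPERCHIPS * PETAFLOPS_EACH
total_cpu_cores = SUPERCHIPS * CPU_CORES_EACH
memory_per_chip_gb = SHARED_MEMORY_TB * 1000 / SUPERCHIPS

print(total_petaflops)     # 1024 petaflops, about an exaflop of AI compute
print(total_cpu_cores)     # 18432 Arm cores
print(memory_per_chip_gb)  # 562.5 GB of shared memory per superchip
```

<p>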
A new <a href=\"https:\/\/blogs.nvidia.com\/blog\/gh200-grace-hopper-superchip-powers-ai-supercomputers\/\">four-way Grace Hopper systems configuration<\/a> puts a whopping 288 Arm cores and 16 petaflops of AI performance in a single compute node, with up to 2.3 terabytes of high-speed memory.<\/p>\n<p>And <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/h200\/\">NVIDIA H200 Tensor Core GPUs<\/a> announced in November pack up to 141 gigabytes of the latest HBM3e memory technology.<\/p>\n<h2><b>Software Covers the Waterfront<\/b><\/h2>\n<p>An expanding ocean of GPU software has evolved since 2007 to enable every facet of AI, from deep-tech features to high-level applications.<\/p>\n<p>The NVIDIA AI platform includes hundreds of software libraries and apps. The CUDA programming language and the cuDNN-X library for deep learning provide a base on top of which developers have created software like <a href=\"https:\/\/www.nvidia.com\/en-us\/ai-data-science\/generative-ai\/nemo-framework\/\">NVIDIA NeMo<\/a>, a framework to let users build, customize and run inference on their own generative AI models.<\/p>\n<p>Many of these elements are available as open-source software, the grab-and-go staple of software developers. More than a hundred of them are packaged into the <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/products\/ai-enterprise\/\">NVIDIA AI Enterprise<\/a> platform for companies that require full security and support. 
Increasingly, they\u2019re also available from major cloud service providers as APIs and services on <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/dgx-cloud\/\">NVIDIA DGX Cloud<\/a>.<\/p>\n<p><a href=\"https:\/\/blogs.nvidia.com\/blog\/customize-ai-models-steerlm\/\">SteerLM<\/a>, one of the latest AI software updates for NVIDIA GPUs, lets users fine tune models during inference.<\/p>\n<h2><b>A 70x Speedup in 2008<\/b><\/h2>\n<p>Success stories date back to a <a href=\"http:\/\/robotics.stanford.edu\/~ang\/papers\/icml09-LargeScaleUnsupervisedDeepLearningGPU.pdf\">2008 paper<\/a> from AI pioneer Andrew Ng, then a Stanford researcher. Using two NVIDIA GeForce GTX 280 GPUs, his three-person team achieved a 70x speedup over CPUs processing an AI model with 100 million parameters, finishing work that used to require several weeks in a single day.<\/p>\n<p>\u201cModern graphics processors far surpass the computational capabilities of multicore CPUs, and have the potential to revolutionize the applicability of deep unsupervised learning methods,\u201d they reported.<\/p>\n<figure id=\"attachment_68471\" aria-describedby=\"caption-attachment-68471\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/12\/Andrew-Ng-GTC-2015-scaling-scaled.jpg\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/12\/Andrew-Ng-GTC-2015-scaling-672x266.jpg\" alt=\"Picture of Andrew Ng showing slide in a talk on GPU performance for AI\" width=\"672\" height=\"266\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-68471\" class=\"wp-caption-text\">Andrew Ng described his experiences using GPUs for AI in a GTC 2015 talk.<\/figcaption><\/figure>\n<p>In a <a href=\"https:\/\/video.ibm.com\/recorded\/60113824\/highlight\/619422\">2015 talk<\/a> at NVIDIA GTC, Ng described how he continued using more GPUs to scale up his work, running larger models at Google Brain and Baidu. 
Later, he helped found Coursera, an online education platform where he taught hundreds of thousands of AI students.<\/p>\n<p>Ng counts Geoff Hinton, one of the godfathers of modern AI, among the people he influenced. \u201cI remember going to Geoff Hinton saying check out CUDA, I think it can help build bigger neural networks,\u201d he said in the GTC talk.<\/p>\n<p>The University of Toronto professor spread the word. \u201cIn 2009, I remember giving a talk at NIPS [now NeurIPS], where I told about 1,000 researchers they should all buy GPUs because GPUs are going to be the future of machine learning,\u201d Hinton said in a <a href=\"https:\/\/venturebeat.com\/ai\/how-nvidia-dominated-ai-and-plans-to-keep-it-that-way-as-generative-ai-explodes\/\">press report<\/a>.<\/p>\n<h2><b>Fast Forward With GPUs<\/b><\/h2>\n<p>AI\u2019s gains are expected to ripple across the global economy.<\/p>\n<p>A <a href=\"https:\/\/www.mckinsey.com\/capabilities\/mckinsey-digital\/our-insights\/the-economic-potential-of-generative-ai-the-next-productivity-frontier#key-insights\">McKinsey report<\/a> in June estimated that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases it analyzed in industries like banking, healthcare and retail. So, it\u2019s no surprise Stanford\u2019s 2023 AI report said that a majority of business leaders expect to increase their investments in AI.<\/p>\n<p>Today, more than 40,000 companies use NVIDIA GPUs for AI and accelerated computing, attracting a global community of 4 million developers. Together they\u2019re advancing science, healthcare, finance and virtually every industry.<\/p>\n<p>Among the latest achievements, NVIDIA described a whopping 700,000x speedup using AI to ease climate change by keeping carbon dioxide out of the atmosphere (see video below). 
It\u2019s one of many ways NVIDIA is applying the performance of GPUs to AI and beyond.<\/p>\n<p>Learn how <a href=\"https:\/\/www.nvidia.com\/en-us\/lp\/ai-data-science\/how-to-get-started-with-ai-inference-series\/\">GPUs put AI into production<\/a>.<\/p>\n<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/why-gpus-are-great-for-ai\/<\/p>\n","protected":false},"author":0,"featured_media":3284,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3283"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3283"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3283\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3284"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3283"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3283"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3283"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}