{"id":3135,"date":"2023-08-29T20:01:53","date_gmt":"2023-08-29T20:01:53","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2023\/08\/29\/wide-horizons-nvidia-keynote-points-way-to-further-ai-advances\/"},"modified":"2023-08-29T20:01:53","modified_gmt":"2023-08-29T20:01:53","slug":"wide-horizons-nvidia-keynote-points-way-to-further-ai-advances","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2023\/08\/29\/wide-horizons-nvidia-keynote-points-way-to-further-ai-advances\/","title":{"rendered":"Wide Horizons: NVIDIA Keynote Points Way to Further AI Advances"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2023\/08\/29\/hot-chips-dally-research\/\" data-title=\"Wide Horizons: NVIDIA Keynote Points Way to Further AI Advances\" data-hashtags=\"\">\n<p>Dramatic gains in hardware performance have spawned <a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/data-science\/generative-ai\/\">generative AI<\/a>, and a rich pipeline of ideas for future speedups that will drive machine learning to new heights, Bill Dally, NVIDIA\u2019s chief scientist and senior vice president of research, said today in a keynote.<\/p>\n<p>Dally described a basket of techniques in the works \u2014 some already showing impressive results \u2014 in a talk at Hot Chips, an annual event for processor and systems architects.<\/p>\n<p>\u201cThe progress in AI has been enormous, it\u2019s been enabled by hardware and it\u2019s still gated by deep learning hardware,\u201d said Dally, one of the world\u2019s foremost computer scientists and former chair of Stanford University\u2019s computer science department.<\/p>\n<p>He showed, for example, how ChatGPT, the large language model (<a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/data-science\/large-language-models\/\">LLM<\/a>) used by millions, could suggest an outline for his talk. 
Such capabilities owe much of their power to gains from GPUs in AI inference performance over the last decade, he said.<\/p>\n<figure id=\"attachment_66499\" aria-describedby=\"caption-attachment-66499\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/New-Single-GPU-advances-final-scaled.jpg\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/New-Single-GPU-advances-final-672x376.jpg\" alt=\"Chart of single GPU performance advances\" width=\"672\" height=\"376\"><\/a><figcaption id=\"caption-attachment-66499\" class=\"wp-caption-text\">Gains in single-GPU performance are just part of a larger story that includes million-x advances in scaling to data-center-sized supercomputers.<\/figcaption><\/figure>\n<h2><b>Research Delivers 100 TOPS\/Watt<\/b><\/h2>\n<p>Researchers are readying the next wave of advances. Dally described <a href=\"https:\/\/research.nvidia.com\/publication\/2022-06_17-956-topsw-deep-learning-inference-accelerator-vector-scaled-4-bit\">a test chip<\/a> that demonstrated nearly 100 tera operations per watt on an LLM.<\/p>\n<p>The experiment showed an energy-efficient way to further accelerate the <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/03\/25\/what-is-a-transformer-model\/\">transformer models<\/a> used in generative AI. 
It applied four-bit arithmetic, one of several simplified numeric approaches that promise future gains.<\/p>\n<figure id=\"attachment_66493\" aria-describedby=\"caption-attachment-66493\" class=\"wp-caption alignleft\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/Dally-closeup-2.jpg\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/Dally-closeup-2-373x400.jpg\" alt=\"closeup of Bill Dally\" width=\"373\" height=\"400\"><\/a><figcaption id=\"caption-attachment-66493\" class=\"wp-caption-text\">Bill Dally<\/figcaption><\/figure>\n<p>Looking further out, Dally discussed ways to speed calculations and save energy using logarithmic math, an approach NVIDIA detailed in a 2021 patent.<\/p>\n<h2><b>Tailoring Hardware for AI<\/b><\/h2>\n<p>He explored a half dozen other techniques for tailoring hardware to specific AI tasks, often by defining new data types or operations.<\/p>\n<p>Dally described ways to simplify neural networks, pruning synapses and neurons in an approach called structural sparsity, first adopted in <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/a100\/\">NVIDIA A100 Tensor Core GPUs<\/a>.<\/p>\n<p>\u201cWe\u2019re not done with sparsity,\u201d he said. \u201cWe need to do something with activations and can have greater sparsity in weights as well.\u201d<\/p>\n<p>Researchers need to design hardware and software in tandem, making careful decisions on where to spend precious energy, he said. 
Memory and communications circuits, for instance, need to minimize data movements.<\/p>\n<p>\u201cIt\u2019s a fun time to be a computer engineer because we\u2019re enabling this huge revolution in AI, and we haven\u2019t even fully realized yet how big a revolution it will be,\u201d Dally said.<\/p>\n<h2><b>More Flexible Networks<\/b><\/h2>\n<p>In a separate talk, Kevin Deierling, NVIDIA\u2019s vice president of networking, described the unique flexibility of <a href=\"https:\/\/www.nvidia.com\/en-us\/networking\/products\/data-processing-unit\/\">NVIDIA BlueField DPUs<\/a> and <a href=\"https:\/\/www.nvidia.com\/en-us\/networking\/products\/ethernet\/\">NVIDIA Spectrum<\/a> networking switches for allocating resources based on changing network traffic or user rules.<\/p>\n<p>The chips\u2019 ability to dynamically shift hardware acceleration pipelines in seconds enables load balancing with maximum throughput and gives core networks a new level of adaptability. That\u2019s especially useful for defending against cybersecurity threats.<\/p>\n<p>\u201cToday with generative AI workloads and cybersecurity, everything is dynamic, things are changing constantly,\u201d Deierling said. \u201cSo we\u2019re moving to runtime programmability and resources we can change on the fly.\u201d<\/p>\n<p>In addition, NVIDIA and Rice University researchers are developing ways for users to take advantage of this runtime flexibility using the popular P4 programming language.<\/p>\n<h2><b>Grace Leads Server CPUs<\/b><\/h2>\n<p>A talk by Arm on its Neoverse V2 cores included an update on the performance of the <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/grace-cpu-superchip\/\">NVIDIA Grace CPU Superchip<\/a>, the first processor implementing them.<\/p>\n<p>Tests show that, at the same power, Grace systems deliver up to 2x more throughput than current x86 servers across a variety of CPU workloads. 
In addition, Arm\u2019s SystemReady Program certifies that Grace systems will run existing Arm operating systems, containers and applications with no modification.<\/p>\n<figure id=\"attachment_66490\" aria-describedby=\"caption-attachment-66490\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/Chart-of-Grace-efficiency-and-performance-gains-scaled.jpg\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/Chart-of-Grace-efficiency-and-performance-gains-672x365.jpg\" alt=\"Chart of Grace efficiency and performance gains\" width=\"672\" height=\"365\"><\/a><figcaption id=\"caption-attachment-66490\" class=\"wp-caption-text\">Grace gives data center operators a choice to deliver more performance or use less power.<\/figcaption><\/figure>\n<p>Grace uses an ultra-fast fabric to connect 72 Arm Neoverse V2 cores in a single die, then a version of <a href=\"https:\/\/blogs.nvidia.com\/blog\/2023\/03\/06\/what-is-nvidia-nvlink\/\">NVLink<\/a> connects two of those dies in a package, delivering 900 GB\/s of bandwidth. It\u2019s the first data center CPU to use server-class LPDDR5X memory, delivering 50% more memory bandwidth at similar cost but one-eighth the power of typical server memory.<\/p>\n<p>Hot Chips kicked off Aug. 
27 with a full day of tutorials, including talks from NVIDIA experts on AI inference and protocols for chip-to-chip interconnects, and runs through today.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/2023\/08\/29\/hot-chips-dally-research\/<\/p>\n","protected":false},"author":0,"featured_media":3136,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3135"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3135"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3135\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3136"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3135"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3135"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3135"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}