{"id":3659,"date":"2024-07-11T11:00:32","date_gmt":"2024-07-11T11:00:32","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2024\/07\/11\/japan-enhances-ai-sovereignty-with-advanced-abci-3-0-supercomputer\/"},"modified":"2024-07-11T11:00:32","modified_gmt":"2024-07-11T11:00:32","slug":"japan-enhances-ai-sovereignty-with-advanced-abci-3-0-supercomputer","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2024\/07\/11\/japan-enhances-ai-sovereignty-with-advanced-abci-3-0-supercomputer\/","title":{"rendered":"Japan Enhances AI Sovereignty With Advanced ABCI 3.0 Supercomputer"},"content":{"rendered":"<div>\n\t\t<span class=\"bsf-rt-reading-time\"><span class=\"bsf-rt-display-label\"><\/span> <span class=\"bsf-rt-display-time\"><\/span> <span class=\"bsf-rt-display-postfix\"><\/span><\/span><\/p>\n<p>Enhancing Japan\u2019s <a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-sovereign-ai\/\" target=\"_blank\" rel=\"noopener\">AI sovereignty<\/a> and strengthening its research and development capabilities, Japan\u2019s National Institute of Advanced Industrial Science and Technology (AIST) will integrate thousands of <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/h200\/\" target=\"_blank\" rel=\"noopener\">NVIDIA H200<\/a> Tensor Core GPUs into its AI Bridging Cloud Infrastructure 3.0 supercomputer (ABCI 3.0). The HPE Cray XD system will feature <a href=\"https:\/\/www.nvidia.com\/en-us\/networking\/quantum2\/\" target=\"_blank\" rel=\"noopener\">NVIDIA Quantum-2<\/a> InfiniBand networking for superior performance and scalability.<\/p>\n<p>ABCI 3.0 is the latest iteration of Japan\u2019s large-scale Open AI Computing Infrastructure designed to advance AI R&amp;D. 
This collaboration underlines Japan\u2019s commitment to advancing its AI capabilities and fortifying its technological independence.<\/p>\n<p>\u201cIn August 2018, we launched ABCI, the world\u2019s first large-scale open AI computing infrastructure,\u201d said AIST Executive Officer Yoshio Tanaka. \u201cBuilding on our experience over the past several years managing ABCI, we\u2019re now upgrading to ABCI 3.0. In collaboration with NVIDIA we aim to develop ABCI 3.0 into a computing infrastructure that will advance further research and development capabilities for <a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/generative-ai\/\" target=\"_blank\" rel=\"noopener\">generative AI<\/a> in Japan.\u201d<\/p>\n<p>\u201cAs generative AI prepares to catalyze global change, it\u2019s crucial to rapidly cultivate research and development capabilities within Japan,\u201d said AIST Solutions Co. Producer and Head of ABCI Operations Hirotaka Ogawa. \u201cI\u2019m confident that this major upgrade of ABCI in our collaboration with NVIDIA and HPE will enhance ABCI\u2019s leadership in domestic industry and academia, propelling Japan towards global competitiveness in AI development and serving as the bedrock for future innovation.\u201d<\/p>\n<figure id=\"attachment_72881\" aria-describedby=\"caption-attachment-72881\" class=\"wp-caption aligncenter\"><img decoding=\"async\" loading=\"lazy\" class=\" wp-image-72881\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/07\/facility-1-400x267.png\" alt=\"\" width=\"659\" height=\"440\"><figcaption id=\"caption-attachment-72881\" class=\"wp-caption-text\">The ABCI 3.0 supercomputer will be housed in Kashiwa at a facility run by Japan\u2019s National Institute of Advanced Industrial Science and Technology. 
Credit: Courtesy of National Institute of Advanced Industrial Science and Technology.<\/figcaption><\/figure>\n<h2><strong>ABCI 3.0: A New Era for Japanese AI Research and Development<\/strong><\/h2>\n<p>ABCI 3.0 is constructed and operated by AIST, its business subsidiary, AIST Solutions, and its system integrator, Hewlett Packard Enterprise (HPE).<\/p>\n<p>The ABCI 3.0 project follows support from Japan\u2019s Ministry of Economy, Trade and Industry, known as METI, for strengthening its computing resources through the Economic Security Fund and is part of a broader $1 billion initiative by METI that includes both ABCI efforts and investments in cloud AI computing.<\/p>\n<p>NVIDIA is closely <a href=\"https:\/\/blogs.nvidia.com\/blog\/japan-sovereign-ai\/\" target=\"_blank\" rel=\"noopener\">collaborating with METI<\/a> on research and education following a visit last year by company founder and CEO, Jensen Huang, who met with political and business leaders, including Japanese Prime Minister Fumio Kishida, to discuss the future of AI.<\/p>\n<h2><strong>NVIDIA\u2019s Commitment to Japan\u2019s Future<\/strong><\/h2>\n<p>Huang pledged to collaborate on research, particularly in generative AI, robotics and <a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-quantum-computing\/\" target=\"_blank\" rel=\"noopener\">quantum computing<\/a>, to invest in AI startups and provide product support, training and education on AI.<\/p>\n<p>During his visit, Huang emphasized that \u201cAI factories\u201d \u2014 next-generation data centers designed to handle the most computationally intensive AI tasks \u2014 are crucial for turning vast amounts of data into intelligence.<\/p>\n<p>\u201cThe AI factory will become the bedrock of modern economies across the world,\u201d Huang said during a meeting with the Japanese press in December.<\/p>\n<p>With its ultra-high-density data center and energy-efficient design, ABCI provides a robust infrastructure for developing AI and big data 
applications.<\/p>\n<p>The system is expected to come online by the end of this year, offering state-of-the-art AI research and development resources. It will be housed in Kashiwa, near Tokyo.<\/p>\n<h2><strong>Unmatched Computing Performance and Efficiency<\/strong><\/h2>\n<p>The facility will offer:<\/p>\n<ul>\n<li>6 AI <a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-an-exaflop\/\" target=\"_blank\" rel=\"noopener\">exaflops<\/a> of computing capacity, a measure of AI-specific performance without sparsity<\/li>\n<li>410 double-precision petaflops, a measure of general computing capacity<\/li>\n<li>200 GB\/s of bisection bandwidth per node via the Quantum-2 InfiniBand platform<\/li>\n<\/ul>\n<p>NVIDIA technology forms the backbone of this initiative, with hundreds of nodes each equipped with 8 NVLink-connected H200 GPUs providing unprecedented computational performance and efficiency.<\/p>\n<p>NVIDIA H200 is the first GPU to offer over 140 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB\/s). 
The H200\u2019s larger and faster memory accelerates generative AI and LLMs, while advancing scientific computing for HPC workloads with better energy efficiency and lower total cost of ownership.<\/p>\n<p>NVIDIA H200 GPUs are 15X more energy-efficient than ABCI\u2019s previous-generation architecture for AI workloads such as LLM token generation.<\/p>\n<p>The integration of advanced NVIDIA Quantum-2 InfiniBand with In-Network computing \u2014 where networking devices perform computations on data, offloading the work from the CPU \u2014 ensures efficient, high-speed, low-latency communication, crucial for handling intensive AI workloads and vast datasets.<\/p>\n<p>ABCI boasts world-class computing and data processing power, serving as a platform to accelerate joint AI R&amp;D with industries, academia and governments.<\/p>\n<p>METI\u2019s substantial investment is a testament to Japan\u2019s strategic vision to enhance AI development capabilities and accelerate the use of generative AI.<\/p>\n<p>By subsidizing AI supercomputer development, Japan aims to reduce the time and costs of developing next-generation AI technologies, positioning itself as a leader in the global AI landscape.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/abci-aist\/<\/p>\n","protected":false},"author":0,"featured_media":3660,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3659"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3659"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3659\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3660"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3659"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3659"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3659"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}