{"id":2639,"date":"2022-11-17T17:45:46","date_gmt":"2022-11-17T17:45:46","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2022\/11\/17\/moma-installation-marks-breakthrough-for-ai-art\/"},"modified":"2022-11-17T17:45:46","modified_gmt":"2022-11-17T17:45:46","slug":"moma-installation-marks-breakthrough-for-ai-art","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2022\/11\/17\/moma-installation-marks-breakthrough-for-ai-art\/","title":{"rendered":"MoMA Installation Marks Breakthrough for AI Art"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2022\/11\/17\/moma-ai-art\/\" data-title=\"MoMA Installation Marks Breakthrough for AI Art\" data-hashtags=\"\">\n<p>AI-generated art has arrived.<\/p>\n<p>With a presentation making its debut this week at The Museum of Modern Art in New York City \u2014 perhaps the world\u2019s premier institution devoted to modern and contemporary art \u2014 the AI technologies that have upended trillion-dollar industries worldwide over the past decade will get a formal introduction.<\/p>\n<p>Created by pioneering artist Refik Anadol, the installation in the museum\u2019s soaring Gund Lobby uses a sophisticated machine-learning model to interpret the publicly available visual and informational data of MoMA\u2019s collection.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/11\/2022_ANADOL_Unsupervised-%E2%80%94-Machine-Hallucinations-%E2%80%94-MoMA-%E2%80%94-Fluid-Dreams_SDV-10-1939x2000-1-388x400.jpeg\" alt=\"\" width=\"388\" height=\"400\"><\/p>\n<p>\u201cRight now, we are in a renaissance,\u201d Anadol said of the presentation \u201cRefik Anadol: Unsupervised.\u201d \u201cHaving AI in the medium is completely and profoundly changing the profession.\u201d<\/p>\n<p>Anadol is a digital media pioneer. Throughout his career, he\u2019s been intrigued by the intersection between art and AI. 
His first encounter with AI as an artistic tool came at Google, where he used deep learning \u2014 and an NVIDIA GeForce GTX 1080 Ti \u2014 to create dynamic digital artworks.<\/p>\n<p>In 2017, he started working with one of the first generative AI tools, <a href=\"https:\/\/blogs.nvidia.com\/blog\/2020\/12\/07\/neurips-research-limited-data-gan\/\" target=\"_blank\" rel=\"noopener\">StyleGAN<\/a>, created at NVIDIA Research, which could generate incredibly realistic synthetic images of faces.<\/p>\n<p>Anadol was more intrigued by the ability to use the tool to explore abstract imagery, training StyleGAN not on images of faces but on images of modern art, and guiding the AI\u2019s synthesis with data streaming in from optical, temperature and acoustic sensors.<\/p>\n<h2><strong>Digging Deep With MoMA<\/strong><\/h2>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/11\/2022_ANADOL_Unsupervised-%E2%80%94-Machine-Hallucinations-%E2%80%94-MoMA_Render-7-2000x1125-1-672x378.jpeg\" alt=\"\" width=\"672\" height=\"378\"><\/p>\n<p>Those ideas led him to an online collaboration with The Museum of Modern Art in 2021, which was exhibited by Feral File, using more than 138,000 records from the museum\u2019s publicly available archive. The Feral File exhibit caused an online sensation, reimagining art in real time and inspiring the wave of AI-generated art that\u2019s spread quickly through social media communities on Instagram, Twitter, Discord and Reddit this year.<\/p>\n<p>He then returned to MoMA to dig even deeper, collaborating again with MoMA curators Michelle Kuo and Paola Antonelli on a major new installation. On view from Nov. 
19 through March 5, 2023, \u201cRefik Anadol: Unsupervised\u201d will use AI to interpret and transform more than 200 years of art from MoMA\u2019s collection.<\/p>\n<p>\u201cIt\u2019s an exploration not just of the world\u2019s foremost collection of modern art \u2014 pretty much every single pioneering sculptor, painter and even game designer of the past two centuries \u2014 but a look inside the mind of AI, allowing us to see the results of the algorithm processing data from MoMA\u2019s collection, as well as ambient sound, temperature and light, and \u2018dreaming,\u2019\u201d Anadol said.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/11\/2022_ANADOL_Unsupervised-%E2%80%94-Machine-Hallucinations-%E2%80%94-MoMA_SDV-A-02-388x400.jpeg\" alt=\"\" width=\"388\" height=\"400\"><\/p>\n<p>Powering the system is a full suite of NVIDIA technologies. Anadol relies on an <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/dgx-systems\/\" target=\"_blank\" rel=\"noopener\">NVIDIA DGX server<\/a> equipped with NVIDIA A100 Tensor Core GPUs to train the model in real time. Another machine equipped with an <a href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/graphics-cards\/40-series\/rtx-4090\/\" target=\"_blank\" rel=\"noopener\">NVIDIA RTX 4090 GPU<\/a> translates the model into computer graphics, driving the exhibit\u2019s display.<\/p>\n<h2><strong>\u2018Bending Data\u2019<\/strong><\/h2>\n<p>\u201cRefik is bending data \u2014 which we normally associate with rational systems \u2014 into a realm of surrealism and irrationality,\u201d Michelle Kuo, the exhibit\u2019s curator at the museum, told the <i>New York Times<\/i>. 
\u201cHis interpretation of MoMA\u2019s dataset is essentially a transformation of the history of modern art.\u201d<\/p>\n<p>The installation comes amid a wave of excitement around generative AI, a technology that\u2019s been put at the fingertips of amateur and professional artists alike with new tools such as <a href=\"https:\/\/www.midjourney.com\/\" target=\"_blank\" rel=\"noopener\">Midjourney<\/a>, OpenAI\u2019s <a href=\"https:\/\/openai.com\/dall-e-2\/\" target=\"_blank\" rel=\"noopener\">DALL\u00b7E<\/a>, and <a href=\"https:\/\/beta.dreamstudio.ai\/\" target=\"_blank\" rel=\"noopener\">DreamStudio<\/a>.<\/p>\n<p>And while Anadol\u2019s work intersects with the surge in interest in NFT art that had the world buzzing in 2021, it, like AI-generated art, goes far beyond that trend.<\/p>\n<h2><strong>Inspired by Cutting-Edge Research<\/strong><\/h2>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/11\/2022_ANADOL_Unsupervised-%E2%80%94-Machine-Hallucinations-%E2%80%94-MoMA_SDV-C-06-388x400.jpeg\" alt=\"\" width=\"388\" height=\"400\"><\/p>\n<p>Anadol\u2019s work digs deep into MoMA\u2019s archives and cutting-edge AI, relying on a technology developed at NVIDIA Research called StyleGAN. David Luebke, vice president of graphics research at NVIDIA, said he first got excited about generative AI\u2019s artistic and creative possibilities when he saw NVIDIA researcher Janne Hellsten\u2019s demo of StyleGAN2 trained on stylized artistic portraits.<\/p>\n<p>\u201cSuddenly, one could fluidly explore the content and style of a generated image or have it react to ambient effects like sound or even weather,\u201d Luebke said.<\/p>\n<p>NVIDIA Research has been pushing forward the state of the art in generative AI since at least 2017, when NVIDIA developed \u201cProgressive GANs,\u201d which used AI to synthesize highly realistic, high-resolution images of human faces for the first time. 
This was followed by StyleGAN, which achieved even higher-quality results.<\/p>\n<p>Each year after that, NVIDIA released a paper that advanced the state of the art. StyleGAN has proved to be a versatile platform, Luebke explained, enabling countless other researchers and artists like Anadol to bring their ideas to life.<\/p>\n<h2><strong>Democratizing Content Creation<\/strong><\/h2>\n<p>Much more is coming. Modern generative AI models have shown the capability to generalize beyond particular subjects, such as images of human faces or cats or cars, and to incorporate language models that let users specify the image they want in natural language or through other intuitive means, such as inpainting, Luebke explained.<\/p>\n<p>\u201cThis is exciting because it democratizes content creation,\u201d Luebke said. \u201cUltimately, generative AI has the potential to unlock the creativity of everybody from professional artists, like Refik, to hobbyists and casual artists, to school kids.\u201d<\/p>\n<p>Anadol\u2019s work at MoMA offers a taste of what\u2019s possible. \u201cRefik Anadol: Unsupervised,\u201d the Los Angeles-based artist\u2019s first U.S. solo museum presentation, features three new digital artworks that use AI to dynamically explore MoMA\u2019s collection on a vast 24-by-24-foot digital display. It\u2019s as much a work of architecture as it is one of art.<\/p>\n<p>\u201cOften, AI is used to classify, process and generate realistic representations of the world,\u201d the exhibition\u2019s organizer, Michelle Kuo, told Archinect, a leading publication covering contemporary art and architecture. \u201cAnadol\u2019s work, by contrast, is visionary: it explores dreams, hallucination and irrationality, posing an alternate understanding of modern art \u2014 and of artmaking itself.\u201d<\/p>\n<p>\u201cRefik Anadol: Unsupervised\u201d also hints at how AI will transform our future, and Anadol thinks it will be for the better. 
\u201cThis will just enhance our imagination,\u201d Anadol said. \u201cI\u2019m seeing this as an extension of our minds.\u201d<\/p>\n<p><i>For more, see our exploration of <\/i><a href=\"https:\/\/www.nvidia.com\/en-us\/research\/ai-art-gallery\/artists\/refik-anadol\/\" target=\"_blank\" rel=\"noopener\"><i>Refik Anadol\u2019s work in NVIDIA\u2019s AI Art Gallery<\/i><\/a><i>.<\/i><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/2022\/11\/17\/moma-ai-art\/<\/p>\n","protected":false},"author":0,"featured_media":2640,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2639"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=2639"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2639\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/2640"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=2639"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=2639"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=2639"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}