{"id":2091,"date":"2022-05-04T16:44:05","date_gmt":"2022-05-04T16:44:05","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2022\/05\/04\/setting-ais-on-siggraph-top-academic-researchers-collaborate-with-nvidia-to-tackle-graphics-greatest-challenges\/"},"modified":"2022-05-04T16:44:05","modified_gmt":"2022-05-04T16:44:05","slug":"setting-ais-on-siggraph-top-academic-researchers-collaborate-with-nvidia-to-tackle-graphics-greatest-challenges","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2022\/05\/04\/setting-ais-on-siggraph-top-academic-researchers-collaborate-with-nvidia-to-tackle-graphics-greatest-challenges\/","title":{"rendered":"Setting AIs on SIGGRAPH: Top Academic Researchers Collaborate With NVIDIA to Tackle Graphics\u2019 Greatest Challenges"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2022\/05\/04\/siggraph-ai-graphics-research-collaboration\/\" data-title=\"Setting AIs on SIGGRAPH: Top Academic Researchers Collaborate With NVIDIA to Tackle Graphics\u2019 Greatest Challenges\" data-hashtags=\"\">\n<p>NVIDIA\u2019s latest academic collaborations in graphics research have produced a reinforcement learning model that smoothly simulates athletic moves, ultra-thin holographic glasses for virtual reality, and a real-time rendering technique for objects illuminated by hidden light sources.<\/p>\n<p>These projects \u2014 and over a dozen more \u2014 will be on display at <a href=\"https:\/\/s2022.siggraph.org\/\" target=\"_blank\" rel=\"noopener\">SIGGRAPH 2022<\/a>, taking place Aug. 8-11 in Vancouver and online. 
NVIDIA researchers have 16 technical papers accepted at the conference, representing work with 14 universities including Dartmouth College, Stanford University, the Swiss Federal Institute of Technology Lausanne and Tel Aviv University.<\/p>\n<p>The papers span the breadth of graphics research, with advancements in neural content creation tools, display and human perception, the mathematical foundations of computer graphics and neural rendering.<\/p>\n<h2><b>Neural Tool for Multi-Skilled Simulated Characters<\/b><\/h2>\n<p>When a reinforcement learning model is used to develop a physics-based animated character, the AI typically learns just one skill at a time: walking, running or perhaps cartwheeling. But researchers from UC Berkeley, the University of Toronto and NVIDIA have created a framework that enables AI to learn a whole repertoire of skills \u2014 demonstrated with a warrior character who can wield a sword, use a shield and get back up after a fall.<\/p>\n<p>Achieving these smooth, lifelike motions for animated characters is usually tedious and labor-intensive, with developers starting from scratch to train the AI for each new task. As outlined in <a href=\"https:\/\/nv-tlabs.github.io\/ASE\/\">this paper<\/a>, the research team allowed the reinforcement learning AI to reuse previously learned skills to respond to new scenarios, improving efficiency and reducing the need for additional motion data.<\/p>\n<p>Tools like this one can be used by creators in animation, robotics, gaming and therapeutics. 
At SIGGRAPH, NVIDIA researchers will also present papers about 3D neural tools for <a href=\"https:\/\/nv-tlabs.github.io\/lip-mlp\/\">surface reconstruction from point clouds<\/a> and interactive shape editing, plus 2D tools for <a href=\"https:\/\/research.nvidia.com\/publication\/_detecting-viewer-perceived-intended-vector-sketch-connectivity\">AI to better understand gaps in vector sketches<\/a> and <a href=\"https:\/\/research.nvidia.com\/publication\/2022-07_disentangling-random-and-cyclic-effects-time-lapse-sequences\">improve the visual quality of time-lapse videos<\/a>.<\/p>\n<h2><b>Bringing Virtual Reality to Lightweight Glasses<\/b><\/h2>\n<p>Most virtual reality users access 3D digital worlds by putting on bulky head-mounted displays, but researchers are working on lightweight alternatives that resemble standard eyeglasses.<\/p>\n<p><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/05\/Holographic-Glasses-wearable-prototype-scaled.jpg\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/05\/Holographic-Glasses-wearable-prototype-400x337.jpg\" alt=\"\" width=\"400\" height=\"337\"><\/p>\n<p><\/a>A collaboration between NVIDIA and Stanford researchers has packed the technology needed for 3D holographic images into a <a href=\"https:\/\/research.nvidia.com\/publication\/2022-08_holographic-glasses-virtual-reality\">wearable display just a couple of millimeters thick<\/a>. 
The 2.5-millimeter display is less than half the size of other thin VR displays, known as pancake lenses, which use a technique called folded optics that can only support 2D images.<\/p>\n<p>The researchers accomplished this feat by approaching display quality and display size as a computational problem, and co-designing the optics with an AI-powered algorithm.<\/p>\n<p>While prior VR displays require distance between a magnifying eyepiece and a display panel to create a hologram, this new design uses a spatial light modulator, a tool that can create holograms right in front of the user\u2019s eyes, without needing this gap. Additional components \u2014 a pupil-replicating waveguide and geometric phase lens \u2014 further reduce the device\u2019s bulkiness.<\/p>\n<p>It\u2019s one of two VR collaborations between Stanford and NVIDIA at the conference, with another paper proposing a <a href=\"https:\/\/research.nvidia.com\/publication\/2022-08_time-multiplexed-neural-holography-flexible-framework-holographic-near-eye\">new computer-generated holography framework<\/a> that improves image quality while optimizing bandwidth usage. A third paper in this field of display and perception research, co-authored with New York University and Princeton University scientists, measures <a href=\"https:\/\/research.nvidia.com\/publication\/2022-08_image-features-influence-reaction-time-learned-probabilistic-perceptual-model\">how rendering quality affects the speed at which users react<\/a> to on-screen information.<\/p>\n<h2><b>Lightbulb Moment: New Levels of Real-Time Lighting Complexity<\/b><\/h2>\n<p>Accurately simulating the pathways of light in a scene in real time has always been considered the \u201choly grail\u201d of graphics. 
Work detailed in a paper by the University of Utah\u2019s School of Computing and NVIDIA is raising the bar, introducing a <a href=\"https:\/\/research.nvidia.com\/publication\/2022-07_generalized-resampled-importance-sampling-foundations-restir\">path resampling algorithm that enables real-time rendering of scenes with complex lighting<\/a>, including hidden light sources.<\/p>\n<p>Think of walking into a dim room, with a glass vase on a table illuminated indirectly by a street lamp located outside. The glossy surface creates a long light path, with rays bouncing many times between the light source and the viewer\u2019s eye. Computing these light paths is usually too complex for real-time applications like games, so it\u2019s mostly done for films or other offline rendering applications.<\/p>\n<p>This paper highlights the use of statistical resampling techniques \u2014 where the algorithm reuses computations thousands of times while tracing these complex light paths \u2014 during rendering to approximate the light paths efficiently in real time. 
The researchers applied the algorithm to a classic challenging scene in computer graphics, pictured below: an indirectly lit set of teapots made of metal, ceramic and glass.<\/p>\n<p><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/05\/ReSTIRPT.png\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/05\/ReSTIRPT-672x378.png\" alt=\"\" width=\"672\" height=\"378\"><\/p>\n<p><\/a><\/p>\n<p>Related NVIDIA-authored papers at SIGGRAPH include a new <a href=\"https:\/\/research.nvidia.com\/publication\/2022-07_unbiased-inverse-volume-rendering-differential-trackers\">sampling strategy for inverse volume rendering<\/a>, a <a href=\"https:\/\/research.nvidia.com\/publication\/_locally-uniform-possible-reshaping-vector-clip-art\">novel mathematical representation for 2D shape manipulation<\/a>, software to <a href=\"https:\/\/research.nvidia.com\/publication\/_matbuilder-mastering-sampling-uniformity-over-projections\">create samplers with improved uniformity<\/a> for rendering and other applications, and a way to <a href=\"https:\/\/research.nvidia.com\/publication\/2022-07_unbiased-and-consistent-rendering-using-biased-estimators\">turn biased rendering algorithms into more efficient unbiased ones<\/a>.<\/p>\n<h2><b>Neural Rendering: NeRFs, GANs Power Synthetic Scenes<\/b><\/h2>\n<p>Neural rendering algorithms learn from real-world data to create synthetic images \u2014 and NVIDIA research projects are developing state-of-the-art tools to do so in 2D and 3D.<\/p>\n<p>In 2D, the <a href=\"https:\/\/research.nvidia.com\/publication\/2022-05_stylegan-nada-clip-guided-domain-adaptation-image-generators\">StyleGAN-NADA model<\/a>, developed in collaboration with Tel Aviv University, generates images with specific styles based on a user\u2019s text prompts, without requiring example images for reference. 
For instance, a user could generate vintage car images, turn their dog into a painting or transform houses into huts:<\/p>\n<p><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/05\/stylegan_nada-scaled.jpg\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/05\/stylegan_nada-672x336.jpg\" alt=\"\" width=\"672\" height=\"336\"><\/p>\n<p><\/a><\/p>\n<p>And in 3D, researchers at NVIDIA and the University of Toronto are developing tools that can support the creation of large-scale virtual worlds. <a href=\"https:\/\/research.nvidia.com\/publication\/2022-07_instant-neural-graphics-primitives-multiresolution-hash-encoding\">Instant neural graphics primitives<\/a>, the NVIDIA paper behind the popular <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/03\/25\/instant-nerf-research-3d-ai\/\">Instant NeRF<\/a> tool, will be presented at SIGGRAPH.<\/p>\n<p>NeRFs, which reconstruct 3D scenes from collections of 2D images, are just one capability of the neural graphics primitives technique. It can be used to represent any complex spatial information, with applications including image compression, highly accurate representations of 3D shapes and ultra-high-resolution images.<\/p>\n<p>This work pairs with a University of Toronto collaboration that <a href=\"https:\/\/research.nvidia.com\/vbnf\">compresses 3D neural graphics primitives<\/a> just as JPEG is used to compress 2D images. This can help users store and share 3D maps and entertainment experiences across small devices like phones and robots.<\/p>\n<p>There are more than 300 NVIDIA researchers around the globe, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics. 
Learn more about <a href=\"https:\/\/www.nvidia.com\/en-us\/research\/\">NVIDIA Research<\/a>.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/2022\/05\/04\/siggraph-ai-graphics-research-collaboration\/<\/p>\n","protected":false},"author":0,"featured_media":2092,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2091"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=2091"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2091\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/2092"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=2091"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=2091"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=2091"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}