{"id":2483,"date":"2022-08-05T15:41:27","date_gmt":"2022-08-05T15:41:27","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2022\/08\/05\/nvidia-instant-nerf-wins-best-paper-at-siggraph-inspires-creative-wave-amid-tens-of-thousands-of-downloads\/"},"modified":"2022-08-05T15:41:27","modified_gmt":"2022-08-05T15:41:27","slug":"nvidia-instant-nerf-wins-best-paper-at-siggraph-inspires-creative-wave-amid-tens-of-thousands-of-downloads","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2022\/08\/05\/nvidia-instant-nerf-wins-best-paper-at-siggraph-inspires-creative-wave-amid-tens-of-thousands-of-downloads\/","title":{"rendered":"NVIDIA Instant NeRF Wins Best Paper at SIGGRAPH, Inspires Creative Wave Amid Tens of Thousands of Downloads"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2022\/08\/05\/instant-nerf-creators-siggraph\/\" data-title=\"NVIDIA Instant NeRF Wins Best Paper at SIGGRAPH, Inspires Creative Wave Amid Tens of Thousands of Downloads\" data-hashtags=\"\">\n<p>3D content creators are clamoring for <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/03\/25\/instant-nerf-research-3d-ai\/\">NVIDIA Instant NeRF<\/a>, an inverse rendering tool that turns a set of static images into a realistic 3D scene.<\/p>\n<p>Since its debut earlier this year, tens of thousands of developers around the world have downloaded <a href=\"https:\/\/github.com\/NVlabs\/instant-ngp\" target=\"_blank\" rel=\"noopener\">the source code<\/a> and used it to render spectacular scenes, sharing eye-catching results on social media.<\/p>\n<p>The research behind Instant NeRF is being honored as a <a href=\"https:\/\/blog.siggraph.org\/2022\/07\/siggraph-2022-technical-papers-awards-best-papers-and-honorable-mentions.html\/\" target=\"_blank\" rel=\"noopener\">best paper at SIGGRAPH<\/a> \u2014 which runs Aug. 
8-11 in Vancouver and online \u2014 for its contribution to the future of computer graphics research. One of just five papers selected for this award, it\u2019s among 17 <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/05\/04\/siggraph-ai-graphics-research-collaboration\/\">papers and workshops with NVIDIA authors<\/a> that are being presented at the conference, covering topics spanning neural rendering, 3D simulation, holography and more.<\/p>\n<p>NVIDIA recently held an <a href=\"https:\/\/www.nvidia.com\/en-us\/research\/nerf-sweepstakes\/\">Instant NeRF sweepstakes<\/a>, asking developers to share 3D scenes created with the software for a chance to win a high-end NVIDIA GPU. Hundreds participated, posting 3D scenes of landmarks like Stonehenge, their backyards and even their pets.<\/p>\n<p>Among the creators using Instant NeRF are:<\/p>\n<h2><b>Through the Looking Glass: Karen X. Cheng and James Perlman<\/b><\/h2>\n<p>San Francisco-based creative director <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/05\/10\/in-the-nvidia-studio-may-10\/\">Karen X. Cheng<\/a> is working with software engineer James Perlman to render 3D scenes that test the boundaries of what Instant NeRF can create.<\/p>\n<p>The duo has used Instant NeRF to create scenes that explore reflections within a mirror (shown above) and handle complex environments with multiple people \u2014 like a group enjoying ramen at a restaurant.<\/p>\n<p>\u201cThe algorithm itself is groundbreaking \u2014 the fact that you can render a physical scene with higher fidelity than normal photogrammetry techniques is just astounding,\u201d Perlman said. \u201cIt\u2019s incredible how accurately you can reconstruct lighting, color differences or other tiny details.\u201d<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/08\/ramen-672x457.jpg\" alt=\"\" width=\"672\" height=\"457\"><\/p>\n<p>\u201cIt even makes mistakes look artistic,\u201d said Cheng. 
\u201cWe really lean into that, and play with training a scene less sometimes, experimenting with 1,000, 5,000 or 50,000 iterations. Sometimes I\u2019ll prefer the ones trained less because the edges are softer and you get an oil-painting effect.\u201d<\/p>\n<p>With prior tools, training a \u201cdecent-quality\u201d scene would take them three or four days. With Instant NeRF, the pair can churn out about 20 a day, using an <a href=\"https:\/\/www.nvidia.com\/en-us\/design-visualization\/rtx-a6000\/\">NVIDIA RTX A6000 GPU<\/a> to render, train and preview their 3D scenes.<\/p>\n<p>With rapid rendering comes faster iteration.<\/p>\n<p>\u201cBeing able to render quickly is very necessary for the creative process. We\u2019d meet up and shoot 15 or 20 different versions, run them overnight and then see what\u2019s working,\u201d said Cheng. \u201cEverything we\u2019ve published has been shot and reshot a dozen times, which is only possible when you can run several scenes a day.\u201d<\/p>\n<h2><b>Preserving Moments in Time: Hugues Bruy\u00e8re<\/b><\/h2>\n<p>Hugues Bruy\u00e8re, partner and chief of innovation at Dpt., a Montreal-based creative studio, uses Instant NeRF daily.<\/p>\n<p>\u201c3D captures have always been of strong interest to me because I can go back to those volumetric reconstructions and move in them, adding an extra dimension of meaning to them,\u201d he said.<\/p>\n<p>Bruy\u00e8re <a href=\"https:\/\/vimeo.com\/733819779\/bd3b674659\" target=\"_blank\" rel=\"noopener\">rendered 3D scenes<\/a> with Instant NeRF using data he\u2019d previously captured for traditional photogrammetry with mirrorless digital cameras, smartphones, 360 cameras and drones. 
He uses an NVIDIA GeForce RTX 3090 GPU to render his Instant NeRF scenes.<\/p>\n<p>Bruy\u00e8re believes Instant NeRF could be a powerful tool to help preserve and share cultural artifacts through online libraries, museums, virtual-reality experiences and heritage-conservation projects.<\/p>\n<p>\u201cThe aspect of capturing itself is being democratized, as camera and software solutions become cheaper,\u201d he said. \u201cIn a few months or years, people will be able to capture objects, places, moments and memories and have them volumetrically rendered in real time, shareable and preserved forever.\u201d<\/p>\n<p>Using pictures taken with a smartphone, Bruy\u00e8re created an Instant NeRF render of an ancient marble statue of Zeus from an exhibition at Toronto\u2019s Royal Ontario Museum.<\/p>\n<h2><b>Stepping Into Remote Scenes: Jonathan Stephens<\/b><\/h2>\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=z3-fjYzd0BA\" target=\"_blank\" rel=\"noopener\">Jonathan Stephens<\/a>, chief evangelist for spatial computing company EveryPoint, has been exploring Instant NeRF for both creative and practical applications.<\/p>\n<p>EveryPoint reconstructs 3D scenes such as stockpiles, railyards and quarries to help businesses manage their resources. With Instant NeRF, Stephens can capture a scene more completely, allowing clients to explore it freely. He uses an NVIDIA GeForce RTX 3080 GPU to run scenes rendered with Instant NeRF.<\/p>\n<p>\u201cWhat I really like about Instant NeRF is that you quickly know if your render is working,\u201d Stephens said. \u201cWith a large photogrammetry set, you could be waiting hours or days. Here, I can test out a bunch of different datasets and know within minutes.\u201d<\/p>\n<p>He\u2019s also experimented with making NeRFs using footage from lightweight devices like smart glasses. 
Instant NeRF could turn the low-resolution, bumpy footage of Stephens walking down the street into a smooth 3D scene.<\/p>\n<h2><b>Find NVIDIA at SIGGRAPH<\/b><\/h2>\n<p><a href=\"https:\/\/www.addevent.com\/event\/Ex14445230\" target=\"_blank\" rel=\"noopener\">Tune in for a special address<\/a> by NVIDIA CEO Jensen Huang and other senior leaders on Tuesday, Aug. 9, at 9 a.m. PT to hear about the research and technology behind AI-powered virtual worlds.<\/p>\n<p>NVIDIA is also presenting a score of in-person and virtual sessions for SIGGRAPH attendees, including:<\/p>\n<p>Learn how to create with Instant NeRF in the hands-on demo, <a href=\"https:\/\/s2022.siggraph.org\/presentation\/?id=exf_160&amp;sess=sess513\" target=\"_blank\" rel=\"noopener\"><i>NVIDIA Instant NeRF \u2014 Getting Started With Neural Radiance Fields<\/i><\/a>. Instant NeRF will also be part of SIGGRAPH\u2019s <a href=\"https:\/\/s2022.siggraph.org\/session\/?sess=sess205\" target=\"_blank\" rel=\"noopener\">\u201cReal-Time Live\u201d showcase<\/a> \u2014 where in-person attendees can vote for a winning project.<\/p>\n<p>For more interactive sessions, the NVIDIA Deep Learning Institute is offering free <a href=\"https:\/\/www.nvidia.com\/en-us\/events\/siggraph\/#labs\">hands-on training<\/a> with NVIDIA Omniverse and other 3D graphics technologies for in-person conference attendees.<\/p>\n<p>And peek behind the scenes of NVIDIA GTC in the documentary premiere, <a href=\"https:\/\/www.youtube.com\/watch?v=2EBWXhI67Jk\" target=\"_blank\" rel=\"noopener\"><i>The Art of Collaboration: NVIDIA, Omniverse, and GTC<\/i><\/a>, taking place <a href=\"https:\/\/www.addevent.com\/event\/WU14445238\" target=\"_blank\" rel=\"noopener\">Aug. 10 at 10 a.m. 
PT<\/a>, to learn how NVIDIA\u2019s creative, engineering and research teams used the company\u2019s technology to deliver the visual effects in the latest GTC keynote address.<\/p>\n<p>Find out more about <a href=\"https:\/\/www.nvidia.com\/en-us\/events\/siggraph\/\">NVIDIA at SIGGRAPH<\/a>, and see a full schedule of events and sessions in <a href=\"https:\/\/www.nvidia.com\/content\/dam\/en-zz\/Solutions\/events\/siggraph\/2022\/siggraph-2022-nv-show-guide-print-2392502-r7.2.pdf\">this show guide<\/a>.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/2022\/08\/05\/instant-nerf-creators-siggraph\/<\/p>\n","protected":false},"author":0,"featured_media":2484,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2483"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=2483"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2483\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/2484"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=2483"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=2483"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=2483"}],"curies":[{"name":"wp","href":"https:\/\/ap
i.w.org\/{rel}","templated":true}]}}