{"id":486,"date":"2020-11-02T20:24:16","date_gmt":"2020-11-02T20:24:16","guid":{"rendered":"https:\/\/machine-learning.webcloning.com\/2020\/11\/02\/marbles-at-night-illuminates-future-of-graphics-in-nvidia-omniverse\/"},"modified":"2020-11-02T20:24:16","modified_gmt":"2020-11-02T20:24:16","slug":"marbles-at-night-illuminates-future-of-graphics-in-nvidia-omniverse","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2020\/11\/02\/marbles-at-night-illuminates-future-of-graphics-in-nvidia-omniverse\/","title":{"rendered":"\u2018Marbles at Night\u2019 Illuminates Future of Graphics in NVIDIA Omniverse"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2020\/11\/02\/marbles-at-night\/\" data-title=\"\u2018Marbles at Night\u2019 Illuminates Future of Graphics in NVIDIA Omniverse\">\n<p>Reflections have never looked so good.<\/p>\n<p>Artists are using NVIDIA RTX GPUs to take real-time graphics to the next level, creating visuals with rendered surfaces and light reflections to produce incredible photorealistic details.<\/p>\n<p>The <a href=\"https:\/\/www.youtube.com\/watch?v=H0_NZDSqR3Y\"><i>Marbles <\/i>RTX technology demo<\/a>, first previewed at GTC in March, ran on a single <a href=\"https:\/\/www.nvidia.com\/en-us\/design-visualization\/quadro\/rtx-8000\/\">NVIDIA RTX 8000 GPU<\/a>. 
It showcased how complex physics can be simulated in a real-time, ray-traced world.<\/p>\n<p>During the <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-delivers-greatest-ever-generational-leap-in-performance-with-geforce-rtx-30-series-gpus\">GeForce RTX 30 Series launch event<\/a> in September, NVIDIA CEO Jensen Huang unveiled a more challenging take on the NVIDIA Marbles RTX project: staging the scene at night and illustrating the effect of hundreds of dynamic, animated lights.<\/p>\n<p><i>Marbles at Night<\/i> is a physics-based demo created with dynamic, ray-traced lights and over 100 million polygons. Built in <a href=\"https:\/\/www.nvidia.com\/en-us\/design-visualization\/omniverse\/\">NVIDIA Omniverse<\/a> and running on a single GeForce RTX 3090 GPU, the demo showed hundreds of different light sources at night, with each marble reflecting the lights differently, all in real time.<\/p>\n<p>Beyond demonstrating the latest technologies for content creation, <i>Marbles at Night<\/i> showed how creative professionals can now seamlessly collaborate and design simulations with incredible lighting, accurate reflections and real-time ray tracing with path tracing.<\/p>\n<h2>Pushing the Limits of Creativity<\/h2>\n<p>A team of artists from NVIDIA collaborated to build the project in NVIDIA Omniverse, the real-time graphics and simulation platform based on NVIDIA RTX GPUs and Pixar\u2019s <a href=\"https:\/\/blogs.nvidia.com\/blog\/2020\/10\/05\/usd-ecosystem-omniverse\/\">Universal Scene Description<\/a>.<\/p>\n<p>Working in Omniverse, the artists were able to upload, store and access all the assets in the cloud, making it easy to share files across teams: they could send a link, open the file and work on the assets at the same time.<\/p>\n<p>Every single asset in <i>Marbles at Night<\/i> was handmade, modeled and textured from scratch. 
Marbles RTX Creative Director Gavriil Klimov bought over 200 art supplies and took reference photos of each to capture realistic details, from paint splatter to wear and tear. Texturing, in which artists bake and paint surface detail from a reference model onto the final asset, was done entirely in Substance Painter, with multiple variations for each asset.<\/p>\n<p><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2020\/11\/lantern1.jpg\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2020\/11\/lantern1-672x373.jpg\" alt=\"\" width=\"672\" height=\"373\"><\/a><\/p>\n<p>In Omniverse, the artists manually crafted everything in the <i>Marbles<\/i> project using the RTX Renderer and a variety of creative applications such as 3ds Max, Maya, Cinema 4D, ZBrush and Blender. The platform let the creative team view all content at the highest possible quality in real time, resulting in shorter cycles and more iterations.<\/p>\n<p>Nearly a dozen people worked on the project remotely, from locations as far afield as California, New York, Australia and Russia. Despite the distance, Omniverse allowed them to work on scenes simultaneously thanks to <a href=\"https:\/\/developer.nvidia.com\/nvidia-omniverse-platform\">Omniverse Nucleus<\/a>. 
Running on premises or in the cloud, Nucleus enabled the team to collaborate in real time across vast distances.<\/p>\n<p>The collaboration-based workflow, combined with the fact that the project\u2019s assets were stored in the cloud, made it easier for everyone to access the files and edit in real time.<\/p>\n<p>The final technology demo completed in Omniverse comprised more than 500GB of texture data, over 100 unique objects, more than 5,000 meshes and about 100 million polygons.<\/p>\n<p><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2020\/11\/Spray_Can_omni_01.jpg\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2020\/11\/Spray_Can_omni_01-672x364.jpg\" alt=\"\" width=\"672\" height=\"364\"><\/a><\/p>\n<h2>The Research Behind the Project<\/h2>\n<p><a href=\"https:\/\/www.nvidia.com\/en-us\/research\/\">NVIDIA Research<\/a> recently <a href=\"https:\/\/research.nvidia.com\/publication\/2020-07_Spatiotemporal-reservoir-resampling\">released a paper<\/a> on the <a href=\"https:\/\/www.youtube.com\/watch?v=HiSexy6eoy8&amp;feature=youtu.be\">reservoir-based spatiotemporal importance resampling (ReSTIR) technique<\/a>, which details how to render dynamic direct lighting and shadows from millions of area lights in real time. Inspired by this technique, the NVIDIA rendering team, led by distinguished engineer Ignacio Llamas, implemented an algorithm that allowed Klimov and his team to place as many lights as they wanted for the <i>Marbles<\/i> demo, without being constrained by lighting limits.<\/p>\n<p>\u201cBefore, we were limited to using less than 10 lights. But today with Omniverse capabilities using RTX, we were able to place as many lights as we wanted,\u201d said Klimov. 
\u201cThat\u2019s the beauty of it \u2014 you can creatively decide what the limit is that works for you.\u201d<\/p>\n<p>Traditionally, artists and developers achieved complex lighting using baked solutions. NVIDIA Research, in collaboration with the <a href=\"http:\/\/vcl.cs.dartmouth.edu\/\">Visual Computing Lab<\/a> at <a href=\"https:\/\/home.dartmouth.edu\/\">Dartmouth College<\/a>, produced the research paper that dives into how artists can enable direct lighting from millions of moving lights.<\/p>\n<p>The approach requires no complex light structure, no baking and no global scene parameterization. All the lights can cast shadows, everything can move arbitrarily and new emitters can be added dynamically. This technique is implemented using <a href=\"https:\/\/developer.nvidia.com\/rtxdi\">DirectX Ray Tracing<\/a> accelerated by NVIDIA RTX and NVIDIA RT Cores.<\/p>\n<p>Get more insights into the <a href=\"https:\/\/www.nvidia.com\/en-us\/research\/\">NVIDIA Research<\/a> that\u2019s helping professionals simplify complex design workflows, and learn about the latest announcement of <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-announces-omniverse-open-beta-letting-designers-collaborate-in-real-time-from-home-or-around-the-world#:~:text=The%20open%20beta%20of%20Omniverse,to%20the%20NVIDIA%20engineering%20team.\">Omniverse, now in open beta<\/a>.<\/p>\n
<\/div>\n","protected":false},"excerpt":{"rendered":"<p>http:\/\/feedproxy.google.com\/~r\/nvidiablog\/~3\/ZOlMdhU_0Vo\/<\/p>\n","protected":false},"author":0,"featured_media":487,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/486"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=486"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/486\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/487"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=486"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=486"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=486"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}