{"id":2169,"date":"2022-06-21T13:39:10","date_gmt":"2022-06-21T13:39:10","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2022\/06\/21\/ai-in-the-big-easy-nvidia-research-lets-content-creators-improvise-with-3d-objects\/"},"modified":"2022-06-21T13:39:10","modified_gmt":"2022-06-21T13:39:10","slug":"ai-in-the-big-easy-nvidia-research-lets-content-creators-improvise-with-3d-objects","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2022\/06\/21\/ai-in-the-big-easy-nvidia-research-lets-content-creators-improvise-with-3d-objects\/","title":{"rendered":"AI in the Big Easy: NVIDIA Research Lets Content Creators Improvise With 3D Objects"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2022\/06\/21\/inverse-rendering-3d-research-cvpr\/\" data-title=\"AI in the Big Easy: NVIDIA Research Lets Content Creators Improvise With 3D Objects\" data-hashtags=\"\">\n<p>Jazz is all about improvisation \u2014 and NVIDIA is paying tribute to the genre with AI research that could one day enable graphics creators to improvise with 3D objects created in the time it takes to hold a jam session.<\/p>\n<p>The method, NVIDIA 3D MoMa, could empower architects, designers, concept artists and game developers to quickly import an object into a graphics engine to start working with it, modifying scale, changing the material or experimenting with different lighting effects.<\/p>\n<p>NVIDIA Research showcased this technology in a video celebrating jazz and its birthplace, New Orleans, where the <a href=\"https:\/\/nvlabs.github.io\/nvdiffrec\/\">paper behind 3D MoMa<\/a> will be presented this week at the <a href=\"https:\/\/www.nvidia.com\/en-us\/events\/cvpr\/\">Conference on Computer Vision and Pattern Recognition<\/a>.<\/p>\n<\/p>\n<h2><b>Extracting 3D Objects From 2D Images<\/b><\/h2>\n<p>Inverse rendering, a technique to reconstruct a series of still photos into a 3D model of an object or scene, \u201chas long been a holy grail unifying computer vision and computer graphics,\u201d said David Luebke, vice president of graphics research at NVIDIA.<\/p>\n<p>\u201cBy formulating every piece of the inverse rendering problem as a GPU-accelerated differentiable component, the NVIDIA 3D MoMa rendering pipeline uses the machinery of modern AI and the raw computational horsepower of NVIDIA GPUs to quickly produce 3D objects that creators can import, edit and extend without limitation in existing tools,\u201d he said.<\/p>\n<p>To be most useful for an artist or engineer, a 3D object should be in a form that can be dropped into widely used tools such as game engines, 3D modelers and film renderers. That form is a triangle mesh with textured materials, the common language used by such 3D tools.<\/p>\n<figure id=\"attachment_57780\" aria-describedby=\"caption-attachment-57780\" class=\"wp-caption alignright\">\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/06\/trumpet-mesh.jpg\" alt=\"trumpet mesh\" width=\"468\" height=\"210\"><figcaption id=\"caption-attachment-57780\" class=\"wp-caption-text\">Triangle meshes are the underlying frames used to define shapes in 3D graphics and modeling.<\/figcaption><\/figure>\n<p>Game studios and other creators would traditionally create 3D objects like these with complex photogrammetry techniques that require significant time and manual effort. <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/03\/25\/instant-nerf-research-3d-ai\/\">Recent work in neural radiance fields<\/a> can rapidly generate a 3D representation of an object or scene, but not in a triangle mesh format that can be easily edited.<\/p>\n<p>NVIDIA 3D MoMa generates triangle mesh models within an hour on a single <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/tensor-cores\/\">NVIDIA Tensor Core GPU<\/a>. The pipeline\u2019s output is directly compatible with the 3D graphics engines and modeling tools that creators already use.<\/p>\n<p>The pipeline\u2019s reconstruction includes three features: a 3D mesh model, materials and lighting. The mesh is like a papier-m\u00e2ch\u00e9 model of a 3D shape built from triangles. With it, developers can modify an object to fit their creative vision. Materials are 2D textures overlaid on the 3D meshes like a skin. And NVIDIA 3D MoMa\u2019s estimate of how the scene is lit allows creators to later modify the lighting on the objects.<\/p>\n<h2><b>Tuning Instruments for Virtual Jazz Band<\/b><\/h2>\n<p>To showcase the capabilities of NVIDIA 3D MoMa, NVIDIA\u2019s research and creative teams started by collecting around 100 images each of five jazz band instruments \u2014 a trumpet, trombone, saxophone, drum set and clarinet \u2014 from different angles.<\/p>\n<p>NVIDIA 3D MoMa reconstructed these 2D images into 3D representations of each instrument, represented as meshes. The NVIDIA team then took the instruments out of their original scenes and imported them into the <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/\">NVIDIA Omniverse<\/a> 3D simulation platform to edit.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/06\/trumpet-omniverse.jpg\" alt=\"editing the 3D trumpet in NVIDIA Omniverse\" width=\"624\" height=\"344\"><\/p>\n<p>In any traditional graphics engine, creators can easily swap out the material of a shape generated by NVIDIA 3D MoMa, as if dressing the mesh in different outfits. The team did this with the trumpet model, for example, instantly converting its original plastic to gold, marble, wood or cork.<\/p>\n<p>Creators can then place the newly edited objects into any virtual scene. The NVIDIA team dropped the instruments into a Cornell box, a classic graphics test for rendering quality. They demonstrated that the virtual instruments react to light just as they would in the physical world, with the shiny brass instruments reflecting brightly, and the matte drum skins absorbing light.<\/p>\n<p>These new objects, generated through inverse rendering, can be used as building blocks for a complex animated scene \u2014 showcased in the video\u2019s finale as a virtual jazz band.<\/p>\n<p>The <a href=\"https:\/\/nvlabs.github.io\/nvdiffrec\/\">paper behind NVIDIA 3D MoMa<\/a> will be presented in a <a href=\"https:\/\/cvpr2022.thecvf.com\/overview\" target=\"_blank\" rel=\"noopener\">session at CVPR<\/a> on June 22 at 1:30 p.m. Central time. It\u2019s one of 38 papers with NVIDIA authors at the conference. Learn more about <a href=\"https:\/\/www.nvidia.com\/en-us\/events\/cvpr\/\" target=\"_blank\" rel=\"noopener\">NVIDIA Research at CVPR<\/a>.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/2022\/06\/21\/inverse-rendering-3d-research-cvpr\/<\/p>\n","protected":false},"author":0,"featured_media":2170,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2169"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=2169"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2169\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/2170"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=2169"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=2169"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=2169"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}