{"id":3109,"date":"2023-08-08T18:50:45","date_gmt":"2023-08-08T18:50:45","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2023\/08\/08\/content-creation-in-the-nvidia-studio-gets-boost-from-new-professional-gpus-ai-tools-omniverse-and-openusd-collaboration-features\/"},"modified":"2023-08-08T18:50:45","modified_gmt":"2023-08-08T18:50:45","slug":"content-creation-in-the-nvidia-studio-gets-boost-from-new-professional-gpus-ai-tools-omniverse-and-openusd-collaboration-features","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2023\/08\/08\/content-creation-in-the-nvidia-studio-gets-boost-from-new-professional-gpus-ai-tools-omniverse-and-openusd-collaboration-features\/","title":{"rendered":"Content Creation \u2018In the NVIDIA Studio\u2019 Gets Boost From New Professional GPUs, AI Tools, Omniverse and OpenUSD Collaboration Features"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2023\/08\/08\/siggraph-studio-rtx-omniverse-openusd\/\" data-title=\"Content Creation \u2018In the NVIDIA Studio\u2019 Gets Boost From New Professional GPUs, AI Tools, Omniverse and OpenUSD Collaboration Features\" data-hashtags=\"\">\n<p>AI and accelerated computing were in the spotlight at SIGGRAPH \u2014 the world\u2019s largest gathering of computer graphics experts \u2014 as NVIDIA founder and CEO Jensen Huang announced during his <a href=\"https:\/\/www.nvidia.com\/en-us\/events\/siggraph\/\">keynote address<\/a> updates to <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/creators\/\">NVIDIA Omniverse<\/a>, a platform for building and connecting 3D tools and applications, as well as acceleration for <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\">Universal Scene Description<\/a> (known as OpenUSD), the open and extensible ecosystem for 3D worlds.<\/p>\n<p>This follows the <a href=\"https:\/\/blogs.nvidia.com\/blog\/2023\/08\/01\/openusd-alliance-3d-standard\/\">recent 
announcement<\/a> of NVIDIA joining Pixar, Adobe, Apple and Autodesk to form the <a href=\"https:\/\/www.linuxfoundation.org\/press\/announcing-alliance-for-open-usd-aousd\">Alliance for OpenUSD<\/a>. It marks a major leap toward unlocking the next era of 3D graphics, design and simulation by ensuring compatibility in 3D tools and content for digitalization across industries.<\/p>\n<p>NVIDIA launched three new desktop workstation Ada Generation GPUs \u2014 the <a href=\"http:\/\/www.nvidia.com\/rtx-5000\">NVIDIA RTX 5000<\/a>, <a href=\"http:\/\/www.nvidia.com\/rtx-4500\">RTX 4500<\/a> and <a href=\"http:\/\/www.nvidia.com\/rtx-4000\">RTX 4000<\/a> \u2014 which deliver the latest AI, graphics and real-time rendering technology to professionals worldwide.<\/p>\n<p>Shutterstock is bringing <a href=\"https:\/\/blogs.nvidia.com\/blog\/2023\/08\/08\/shutterstock-generative-ai-picasso-360-hdri\">generative AI to 3D scene backgrounds<\/a> with a <a href=\"https:\/\/blogs.nvidia.com\/blog\/2023\/03\/13\/what-are-foundation-models\/\">foundation model<\/a> trained using <a href=\"https:\/\/www.nvidia.com\/en-us\/gpu-cloud\/picasso\/\">NVIDIA Picasso<\/a>, a cloud-based foundry for building visual generative AI models. Picasso-trained models can now generate photorealistic, 8K, 360-degree high-dynamic-range imaging (HDRi) environment maps for quicker scene development. Autodesk will also integrate generative AI content-creation services \u2014 developed using foundation models in Picasso \u2014 with its popular Autodesk Maya software.<\/p>\n<p>Each month, NVIDIA Studio Driver releases provide artists, creators and 3D developers with the best performance and reliability when working with creative applications. Available today, the August NVIDIA Studio Driver gives creators peak reliability for using their favorite creative apps. 
It includes support for updates to Omniverse, XSplit Broadcaster and Reallusion iClone.<\/p>\n<p>Plus, this week\u2019s featured <i>In the NVIDIA Studio<\/i> artist Andrew Averkin shows how AI influenced his process in building a delightful cup of joe for his <i>Natural Coffee <\/i>piece.<\/p>\n<h2><b>Omniverse Expands<\/b><\/h2>\n<p>Omniverse received a <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-releases-major-omniverse-upgrade-with-generative-ai-and-openusd\">major upgrade<\/a>, bringing new connectors and advancements to the platform.<\/p>\n<p>These updates are showcased in Omniverse foundation applications, which are fully customizable reference applications that creators, enterprises and developers can copy, extend or enhance.<\/p>\n<p>Upgraded Omniverse applications include <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/apps\/create\/\">Omniverse USD Composer<\/a>, which lets 3D users assemble large-scale, OpenUSD-based scenes. <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/apps\/audio2face\/\">Omniverse Audio2Face<\/a> \u2014 which provides generative AI application programming interfaces that create realistic facial animations and gestures from only an audio file \u2014 now includes multilingual support and a new female base model.<\/p>\n<p>The update brings boosted efficiency and an improved user experience. New rendering optimizations take full advantage of the NVIDIA Ada Lovelace architecture enhancements in NVIDIA RTX GPUs with <a href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/technologies\/dlss\/\">DLSS 3<\/a> technology fully integrated into the Omniverse RTX Renderer. 
In addition, a new AI denoiser enables real-time 4K <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/03\/23\/what-is-path-tracing\/\">path tracing<\/a> of massive industrial scenes.<\/p>\n<p>New application and experience templates give developers getting started with OpenUSD and Omniverse a major head start with minimal coding.<\/p>\n<p>A new Omniverse Kit Extension Registry, a central repository for accessing, sharing and managing Omniverse extensions, lets developers easily turn functionality on and off in their applications, making it easier than ever to build custom apps from over 500 core Omniverse extensions provided by NVIDIA.<\/p>\n<p>New <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/05\/20\/what-is-extended-reality\/\">extended-reality<\/a> developer tools let users <a href=\"https:\/\/developer.nvidia.com\/blog\/rtx-powered-spatial-framework-delivers-full-ray-tracing-with-usd-for-xr-pipelines\">build spatial-computing options natively into their Omniverse-based applications<\/a>, giving them the flexibility to experience their 3D projects and virtual worlds however they like.<\/p>\n<p>Expanding their collaboration across Adobe Substance 3D, generative AI and OpenUSD initiatives, Adobe and NVIDIA announced plans to make Adobe Firefly, Adobe\u2019s family of creative generative AI models, available as APIs in Omniverse, enabling developers and creators to enhance their design processes.<\/p>\n<p>Developers and industrial enterprises have new foundation apps and services to optimize and enhance 3D pipelines with the OpenUSD framework and generative AI.<\/p>\n<p>Studio professionals can connect the world of generative AI to their workflows to accelerate entire projects \u2014 from environment creation and character animation to scene-setting and more. 
With Kit AI Agent, OpenUSD Connectors and extensions to prompt top generative AI tools and APIs, Omniverse can aggregate the final result in a unified viewport \u2014 collectively reducing the time from conception to creation.<\/p>\n<h2><b>RTX: The Next Generation<\/b><\/h2>\n<p>The <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-global-workstation-manufacturers-to-launch-powerful-systems-for-generative-ai-and-llm-development-content-creation-data-science\">new<\/a> NVIDIA <a href=\"https:\/\/www.nvidia.com\/en-us\/design-visualization\/rtx-5000\/\">RTX 5000<\/a>, <a href=\"https:\/\/www.nvidia.com\/en-us\/design-visualization\/rtx-4500\/\">RTX 4500 <\/a>and <a href=\"https:\/\/www.nvidia.com\/en-us\/design-visualization\/rtx-4000\/\">RTX 4000<\/a> Ada Generation professional desktop GPUs feature the latest NVIDIA Ada Lovelace architecture technologies, including DLSS 3, for smoother rendering and real-time interactivity in 3D applications such as Unreal Engine.<\/p>\n<p>These workstation-class GPUs feature third-generation RT Cores with up to 2x the throughput of the previous generation. This enables users to work with larger, higher-fidelity images in real time, helping artists and designers maintain their creative flow.<\/p>\n<p>Fourth-generation Tensor Cores deliver up to 2x the AI performance of the previous generation for AI training and development as well as inferencing and generative AI workloads. \u200cLarge GPU memory enables AI-augmented multi-application workflows with the latest generative AI-enabled tools and applications.<\/p>\n<p>The Ada architecture provides these new GPUs with up to twice the video encode and decode capability of the previous generation, encoding up to 8K60 video in real time, with support for AV1 encode and decode. 
Combined with next-generation AI performance, these capabilities make the new professional GPUs ideal for multi-stream video editing workflows with high-resolution content using\u00a0 AI-augmented video editing applications such as Adobe Premiere and DaVinci Resolve.<\/p>\n<p>Designed for high-end creative, multi-application professional workflows that require large models and datasets, these new GPUs provide large GDDR6 memory: 20GB for the RTX 4000, 24GB for the RTX 4500 and 32GB for the RTX 5000 \u2014 all supporting error-correcting code for error-free computing.<\/p>\n<h2><b>A Modern-Day Picasso<\/b><\/h2>\n<p>3D artists regularly face the monumental task of bringing scenes to life by artistically mixing hero assets with props, materials, backgrounds and lighting. Generative AI technologies can help streamline this workflow by generating secondary assets, like environment maps that light the scene.<\/p>\n<p>At SIGGRAPH, Shutterstock announced that it\u2019s tapping into NVIDIA Picasso to train a generative AI model that can create 360 HDRi photorealistic environment maps. The model is built using Shutterstock\u2019s responsibly licensed libraries.<\/p>\n<figure id=\"attachment_65999\" aria-describedby=\"caption-attachment-65999\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/picasso-week69-composer-1280w.png\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/picasso-week69-composer-1280w-672x378.png\" alt=\"\" width=\"672\" height=\"378\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-65999\" class=\"wp-caption-text\">Shutterstock using NVIDIA Picasso to create 360 HDRi photorealistic environment maps.<\/figcaption><\/figure>\n<p>Previously, artists needed to use expensive 360-degree cameras to create backgrounds and environment maps from scratch, or choose from fixed options that may not precisely match their 3D scene. 
Now, from simple prompts or using their desired background as a reference, the Shutterstock generative AI feature will quickly generate custom 360-degree, 8K-resolution, HDRi environment maps, which artists can use to set a background and light a scene. This allows more time to work on hero 3D assets, which are the primary assets of a 3D scene that viewers will focus on.<\/p>\n<p>Autodesk also announced that it will integrate generative AI content-creation services \u2014 developed using foundation models in Picasso \u2014 with its popular 3D software Autodesk Maya.<\/p>\n<figure id=\"attachment_66002\" aria-describedby=\"caption-attachment-66002\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/autodesk-week69-composer-1280w.png\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/autodesk-week69-composer-1280w-672x360.png\" alt=\"\" width=\"672\" height=\"360\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-66002\" class=\"wp-caption-text\">Autodesk Maya generative AI content-creation services developed using foundation models in Picasso.<\/figcaption><\/figure>\n<h2><b>August Studio Driver Delivers<\/b><\/h2>\n<p>The August Studio Driver supports these updates and more, including the latest release of XSplit Broadcaster, the popular streaming software that lets users simultaneously stream to multiple platforms.<\/p>\n<p>XSplit Broadcaster 4.5 introduces NVIDIA Encoder (NVENC) AV1 support. 
GeForce and NVIDIA RTX 40 Series GPU users can now stream high-quality 4K at 60 frames per second directly to YouTube Live, dramatically improving video quality.<\/p>\n<figure id=\"attachment_66005\" aria-describedby=\"caption-attachment-66005\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/XSplit-Live-Production-Capabilities-Deck.png\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/XSplit-Live-Production-Capabilities-Deck-672x378.png\" alt=\"\" width=\"672\" height=\"378\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-66005\" class=\"wp-caption-text\">XSplit Broadcaster 4.5 adds AV1 livestreaming support for YouTube.<\/figcaption><\/figure>\n<p>Streaming in AV1 with RTX GPUs provides 40% better efficiency than H.264, reducing bandwidth requirements for livestreaming or reducing file size for high-quality local captures.<\/p>\n<figure id=\"attachment_66008\" aria-describedby=\"caption-attachment-66008\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/av1-week69-composer-1280w.png\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/av1-week69-composer-1280w-672x360.png\" alt=\"\" width=\"672\" height=\"360\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-66008\" class=\"wp-caption-text\">H.264 vs. AV1: 4K60 source encoded at 8 Mbps.<\/figcaption><\/figure>\n<p>An update to the Reallusion iClone <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/ecosystem\/\">Omniverse Connector<\/a> includes new features such as real-time synchronization of projects, as well as enhanced import functionality for OpenUSD. 
This makes work between iClone and Omniverse quicker, smoother and more efficient.<\/p>\n<h2><b>Brew-tiful Artwork<\/b><\/h2>\n<p>Words can\u2019t espresso the stunning 3D scene <i>Natural Coffee<\/i>.<\/p>\n<figure id=\"attachment_66011\" aria-describedby=\"caption-attachment-66011\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/week69-final-image-photoshop-1280w.png\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/week69-final-image-photoshop-1280w-672x378.png\" alt=\"\" width=\"672\" height=\"378\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-66011\" class=\"wp-caption-text\">Do they accept reservations?<\/figcaption><\/figure>\n<p>NVIDIA artist Andrew Averkin has over 15 years of experience in the creative field. He finds joy in a continuous journey \u2014 blending art and technology \u2014 to bring his vivid imagination to life.<\/p>\n<p>His work, <i>Natural Coffee<\/i>, has a compelling origin story. Once upon a time, in a bustling office, there was a cup of \u201cnatural coffee\u201d known for its legendary powers. It gave artists nerves of steel at work, improved performance across the board and, as a small bonus, offered magical music therapy.<\/p>\n<p>Averkin used an image generator to quickly cycle through visual ideas created from simple text-based prompts. 
Using AI to brainstorm imagery at the beginning of creative workflows is becoming increasingly popular among artists looking to save time on iteration.<\/p>\n<figure id=\"attachment_66014\" aria-describedby=\"caption-attachment-66014\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/week69-midjourney-03-1280w.png\"><img decoding=\"async\" loading=\"lazy\" class=\"size-large wp-image-66014\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/week69-midjourney-03-1280w-327x500.png\" alt=\"\" width=\"327\" height=\"500\"><\/a><figcaption id=\"caption-attachment-66014\" class=\"wp-caption-text\">Averkin iterates for inspiration.<\/figcaption><\/figure>\n<p>With a visual foundation in place, Averkin sped up the process by acquiring 3D assets from online stores to quickly build a 3D blockout of the scene, a rough draft built from simple 3D shapes without fine detail or polish.<\/p>\n<p>Next, Averkin polished individual assets in Autodesk 3ds Max, sculpting models with fine detail, testing and applying different textures and materials. 
His GeForce RTX 4090 GPU unlocked RTX-accelerated AI denoising \u2014 with the default Autodesk Arnold renderer \u2014 delivering interactive 3D modeling, which helped tremendously while composing the scene.<\/p>\n<figure id=\"attachment_66017\" aria-describedby=\"caption-attachment-66017\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/week69-3dsmax-1280w_.png\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/week69-3dsmax-1280w_-672x378.png\" alt=\"\" width=\"672\" height=\"378\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-66017\" class=\"wp-caption-text\">Averkin working in Autodesk 3ds Max.<\/figcaption><\/figure>\n<p>\u201cI chose a GeForce RTX graphics card for quality, speed and safety, plain and simple,\u201d said Averkin.<\/p>\n<p>Averkin then exported <i>Natural Coffee<\/i> to the <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/apps\/create\/\">NVIDIA Omniverse USD Composer app<\/a> via the Autodesk 3ds Max Connector. \u201cInside USD Composer I added more details, played a lot with a built-in collection of materials, plus did a lot of lighting work to make composition look more realistic,\u201d he explained.<\/p>\n<figure id=\"attachment_66020\" aria-describedby=\"caption-attachment-66020\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/week69-composer-1280w.png\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/week69-composer-1280w-672x353.png\" alt=\"\" width=\"672\" height=\"353\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-66020\" class=\"wp-caption-text\">Real-time rendering in Omniverse USD Composer.<\/figcaption><\/figure>\n<p>One of the biggest benefits in USD Composer is the ability to review scenes rendering in real time with photorealistic light, shadows, textures and more. 
This dramatically improves the process of editing massive 3D scenes, making it quicker and easier. Averkin was even able to add a camera fly animation, further elevating the scene.<\/p>\n<p>The final step was to add a few touch-ups in Adobe Photoshop. Over 30 GPU-accelerated features gave Averkin plenty of options for playing with colors and contrast, and making final image adjustments smoothly and quickly.<\/p>\n<p>Averkin encourages advanced 3D artists to experiment with the OpenUSD framework. \u201cI use it a lot in my work at NVIDIA and in personal projects,\u201d he said. \u201cOpenUSD is very powerful. It helps with work in multiple creative apps in a non-destructive way, and other great features make the entire process easier and more flexible.\u201d<\/p>\n<figure id=\"attachment_66023\" aria-describedby=\"caption-attachment-66023\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/andrew-averkin-artist-feature-gallery-1280w.png\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/08\/andrew-averkin-artist-feature-gallery-1280w-672x311.png\" alt=\"\" width=\"672\" height=\"311\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-66023\" class=\"wp-caption-text\">NVIDIA artist Andrew Averkin.<\/figcaption><\/figure>\n<p>Check out Averkin\u2019s portfolio on <a href=\"https:\/\/www.artstation.com\/averkin\">ArtStation<\/a>.<\/p>\n<p><i>Follow NVIDIA Studio on <\/i><a href=\"https:\/\/www.instagram.com\/nvidiastudio\/\"><i>Instagram<\/i><\/a><i>, <\/i><a href=\"https:\/\/twitter.com\/NVIDIAStudio\"><i>Twitter<\/i><\/a><i> and <\/i><a href=\"https:\/\/www.facebook.com\/NVIDIAStudio\/\"><i>Facebook<\/i><\/a><i>. 
Access tutorials on the <\/i><a href=\"https:\/\/www.youtube.com\/channel\/UCDeQdW6Lt6nhq3mLM4oLGWw\"><i>Studio YouTube channel<\/i><\/a><i> and get updates directly in your inbox by subscribing to the <\/i><a href=\"https:\/\/www.nvidia.com\/en-us\/studio\/?nvmid=subscribe-creators-mail-icon\"><i>Studio newsletter<\/i><\/a><i>.<\/i><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/2023\/08\/08\/siggraph-studio-rtx-omniverse-openusd\/<\/p>\n","protected":false},"author":0,"featured_media":3110,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3109"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3109"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3109\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3110"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3109"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3109"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3109"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}