{"id":3639,"date":"2024-06-17T14:10:43","date_gmt":"2024-06-17T14:10:43","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2024\/06\/17\/seamless-in-seattle-nvidia-research-showcases-advancements-in-visual-generative-ai-at-cvpr\/"},"modified":"2024-06-17T14:10:43","modified_gmt":"2024-06-17T14:10:43","slug":"seamless-in-seattle-nvidia-research-showcases-advancements-in-visual-generative-ai-at-cvpr","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2024\/06\/17\/seamless-in-seattle-nvidia-research-showcases-advancements-in-visual-generative-ai-at-cvpr\/","title":{"rendered":"Seamless in Seattle: NVIDIA Research Showcases Advancements in Visual Generative AI at CVPR"},"content":{"rendered":"<div>\n<p>NVIDIA researchers are at the forefront of the rapidly advancing field of visual generative AI, developing new techniques to create and interpret images, videos and 3D environments.<\/p>\n<p>More than 50 of these projects will be showcased at the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/events\/cvpr\/\" rel=\"noopener\">Computer Vision and Pattern Recognition (CVPR)<\/a> conference, taking place June 17-21 in Seattle. 
Two of the papers \u2014 one on the <a target=\"_blank\" href=\"https:\/\/github.com\/NVlabs\/edm2\" rel=\"noopener\">training dynamics of diffusion models<\/a> and another on <a target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2403.16439\" rel=\"noopener\">high-definition maps for autonomous vehicles<\/a> \u2014 are finalists for CVPR\u2019s Best Paper Awards.<\/p>\n<p>NVIDIA is also the <a href=\"https:\/\/blogs.nvidia.com\/blog\/auto-research-cvpr-2024\/\">winner of the CVPR Autonomous Grand Challenge\u2019s End-to-End Driving at Scale<\/a> track \u2014 a significant milestone that demonstrates the company\u2019s use of generative AI for comprehensive self-driving models. The winning submission, which outperformed more than 450 entries worldwide, also received CVPR\u2019s Innovation Award.<\/p>\n<p>NVIDIA\u2019s research at CVPR includes a text-to-image model that can be easily customized to depict a specific object or character, a new model for object pose estimation, a technique to edit neural radiance fields (<a href=\"https:\/\/blogs.nvidia.com\/blog\/ai-decoded-instant-nerf\/\">NeRFs<\/a>) and a visual language model that can understand memes. Additional papers introduce domain-specific innovations for industries including automotive, healthcare and robotics.<\/p>\n<p>Collectively, the work introduces powerful AI models that could enable creators to more quickly bring their artistic visions to life, accelerate the training of autonomous robots for manufacturing, and support healthcare professionals by helping process radiology reports.<\/p>\n<p>\u201cArtificial intelligence, and generative AI in particular, represents a pivotal technological advancement,\u201d said Jan Kautz, vice president of learning and perception research at NVIDIA. 
\u201cAt CVPR, NVIDIA Research is sharing how we\u2019re pushing the boundaries of what\u2019s possible \u2014 from powerful image generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars.\u201d<\/p>\n<p>At CVPR, NVIDIA also announced <a target=\"_blank\" href=\"https:\/\/nvidianews.nvidia.com\/news\/omniverse-microservices-physical-ai\" rel=\"noopener\">NVIDIA Omniverse Cloud Sensor RTX<\/a>, a set of microservices that enable physically accurate sensor simulation to accelerate the development of fully autonomous machines of every kind.<\/p>\n<h2><b>Forget Fine-Tuning: JeDi Simplifies Custom Image Generation<\/b><\/h2>\n<p>Creators harnessing diffusion models, the most popular method for generating images based on text prompts, often have a specific character or object in mind \u2014 they may, for example, be developing a storyboard around an animated mouse or brainstorming an ad campaign for a specific toy.<\/p>\n<p>Prior research has enabled these creators to personalize the output of diffusion models to focus on a specific subject using fine-tuning \u2014 where a user trains the model on a custom dataset \u2014 but the process can be time-consuming and inaccessible for general users.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/research.nvidia.com\/labs\/dir\/jedi\/\" rel=\"noopener\">JeDi<\/a>, a paper by researchers from Johns Hopkins University, Toyota Technological Institute at Chicago and NVIDIA, proposes a new technique that allows users to easily personalize the output of a diffusion model within a couple of seconds using reference images. 
The team found that the model achieves state-of-the-art quality, significantly outperforming existing fine-tuning-based and fine-tuning-free methods.<\/p>\n<p>JeDi can also be combined with <a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-retrieval-augmented-generation\/\">retrieval-augmented generation<\/a>, or RAG, to generate visuals specific to a database, such as a brand\u2019s product catalog.<\/p>\n<h2><b>New Foundation Model Perfects the Pose<\/b><\/h2>\n<p>NVIDIA researchers at CVPR are also presenting <a target=\"_blank\" href=\"https:\/\/nvlabs.github.io\/FoundationPose\/\" rel=\"noopener\">FoundationPose<\/a>, a <a href=\"https:\/\/blogs.nvidia.com\/blog\/what-are-foundation-models\/\">foundation model<\/a> for object pose estimation and tracking that can be instantly applied to new objects during inference, without the need for fine-tuning.<\/p>\n<p>The model, which <a target=\"_blank\" href=\"https:\/\/bop.felk.cvut.cz\/leaderboards\/pose-estimation-unseen-bop23\/core-datasets\/\" rel=\"noopener\">set a new record<\/a> on a popular benchmark for object pose estimation, uses either a small set of reference images or a 3D representation of an object to understand its shape. It can then identify and track how that object moves and rotates in 3D across a video, even in poor lighting conditions or complex scenes with visual obstructions.<\/p>\n<p>FoundationPose could be used in industrial applications to help autonomous robots identify and track the objects they interact with. It could also be used in augmented reality applications where an AI model is used to overlay visuals on a live scene.<\/p>\n<h2><b>NeRFDeformer Transforms 3D Scenes With a Single Snapshot<\/b><\/h2>\n<p>A NeRF is an AI model that can render a 3D scene based on a series of 2D images taken from different positions in the environment. 
In fields like robotics, NeRFs can be used to generate immersive 3D renders of complex real-world scenes, such as a cluttered room or a construction site. However, to make any changes, developers would need to manually define how the scene has transformed \u2014 or remake the NeRF entirely.<\/p>\n<p>Researchers from the University of Illinois Urbana-Champaign and NVIDIA have simplified the process with NeRFDeformer. The method, being presented at CVPR, can successfully transform an existing NeRF using a single RGB-D image, which is a combination of a normal photo and a depth map that captures how far each object in a scene is from the camera.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-full wp-image-72222\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/06\/Screenshot-2024-06-05-at-5.05.51-PM.png\" alt=\"\" width=\"1316\" height=\"794\"><\/p>\n<h2><b>VILA Visual Language Model Gets the Picture<\/b><\/h2>\n<p>A CVPR research collaboration between NVIDIA and the Massachusetts Institute of Technology is advancing the state of the art for vision language models, which are generative AI models that can process videos, images and text.<\/p>\n<p>The group developed <a target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2312.07533\" rel=\"noopener\">VILA<\/a>, a family of open-source visual language models that outperforms prior neural networks <a target=\"_blank\" href=\"https:\/\/mmmu-benchmark.github.io\/#leaderboard\" rel=\"noopener\">on key benchmarks<\/a> that test how well AI models answer questions about images. 
VILA\u2019s unique pretraining process unlocked new model capabilities, including enhanced world knowledge, stronger in-context learning and the ability to reason across multiple images.<\/p>\n<figure id=\"attachment_72225\" aria-describedby=\"caption-attachment-72225\" class=\"wp-caption aligncenter\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-72225\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/06\/VILA.png\" alt=\"figure showing how VILA can reason based on multiple images\" width=\"1999\" height=\"809\"><figcaption id=\"caption-attachment-72225\" class=\"wp-caption-text\">VILA can understand memes and reason based on multiple images or video frames.<\/figcaption><\/figure>\n<p>The VILA model family can be optimized for inference using the <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/optimizing-inference-on-llms-with-tensorrt-llm-now-publicly-available\/\" rel=\"noopener\">NVIDIA TensorRT-LLM<\/a> open-source library and can be deployed on NVIDIA GPUs in data centers, workstations and even <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/visual-language-intelligence-and-edge-ai-2-0\/\" rel=\"noopener\">edge devices<\/a>.<\/p>\n<p>Read more about VILA on the <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/visual-language-models-on-nvidia-hardware-with-vila\/\" rel=\"noopener\">NVIDIA Technical Blog<\/a> and <a target=\"_blank\" href=\"https:\/\/github.com\/NVlabs\/VILA\" rel=\"noopener\">GitHub<\/a>.<\/p>\n<h2><b>Generative AI Fuels Autonomous Driving, Smart City Research<\/b><\/h2>\n<p>A dozen of the NVIDIA-authored CVPR papers focus on autonomous vehicle research. 
Other AV-related highlights include:<\/p>\n<p>Also at CVPR, NVIDIA contributed the largest-ever indoor synthetic dataset to the <a href=\"https:\/\/blogs.nvidia.com\/blog\/ai-city-challenge-omniverse-cvpr\/\">AI City Challenge<\/a>, helping researchers and developers advance solutions for smart cities and industrial automation. The challenge\u2019s datasets were generated using <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/\" rel=\"noopener\">NVIDIA Omniverse<\/a>, a platform of APIs, SDKs and services that enable developers to build <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\/\" rel=\"noopener\">Universal Scene Description (OpenUSD)<\/a>-based applications and workflows.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/research\/\" rel=\"noopener\">NVIDIA Research<\/a> has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics. 
Learn more about <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/events\/cvpr\/\" rel=\"noopener\">NVIDIA Research at CVPR<\/a>.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/visual-generative-ai-cvpr-research\/<\/p>\n","protected":false},"author":0,"featured_media":3640,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3639"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3639"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3639\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3640"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3639"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3639"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3639"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}