{"id":2513,"date":"2022-08-19T16:51:35","date_gmt":"2022-08-19T16:51:35","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2022\/08\/19\/meet-the-omnivore-startup-in3d-turns-selfies-into-talking-dancing-avatars-with-nvidia-omniverse\/"},"modified":"2022-08-19T16:51:35","modified_gmt":"2022-08-19T16:51:35","slug":"meet-the-omnivore-startup-in3d-turns-selfies-into-talking-dancing-avatars-with-nvidia-omniverse","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2022\/08\/19\/meet-the-omnivore-startup-in3d-turns-selfies-into-talking-dancing-avatars-with-nvidia-omniverse\/","title":{"rendered":"Meet the Omnivore: Startup in3D Turns Selfies Into Talking, Dancing Avatars With NVIDIA Omniverse"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2022\/08\/19\/omniverse-developer-in3d\/\" data-title=\"Meet the Omnivore: Startup in3D Turns Selfies Into Talking, Dancing Avatars With NVIDIA Omniverse\" data-hashtags=\"\">\n<p><i>Editor\u2019s note: This post is a part of our <\/i><a href=\"https:\/\/blogs.nvidia.com\/blog\/tag\/meet-the-omnivore\/\" target=\"_blank\" rel=\"noopener\"><i>Meet the Omnivore<\/i><\/a><i> series, which features individual creators and developers who use <\/i><a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/\" target=\"_blank\" rel=\"noopener\"><i>NVIDIA Omniverse<\/i><\/a><i> to accelerate their 3D workflows and create virtual worlds.<\/i><\/p>\n<p>Imagine taking a selfie and using it to get a moving, talking, customizable 3D avatar of yourself in just seconds.<\/p>\n<p>A new extension for <a href=\"https:\/\/developer.nvidia.com\/nvidia-omniverse-platform\" target=\"_blank\" rel=\"noopener\">NVIDIA Omniverse<\/a>, a design collaboration and world simulation platform, enables just that.<\/p>\n<p>Created by developers at software startup <a href=\"https:\/\/in3d.io\/\" target=\"_blank\" rel=\"nofollow noopener\">in3D<\/a>, the <a 
href=\"https:\/\/in3d-1.gitbook.io\/product-docs\/intergation-guides\/omniverse-extension\" target=\"_blank\" rel=\"nofollow noopener\">extension<\/a> lets people instantly import 3D avatars of themselves into virtual environments using their smartphones. <a href=\"https:\/\/docs.omniverse.nvidia.com\/prod_extensions\/prod_extensions\/overview\" target=\"_blank\" rel=\"noopener\">Omniverse Extensions<\/a> are the core building blocks that let anyone create and extend functions of <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/apps\/\" target=\"_blank\" rel=\"noopener\">Omniverse Apps<\/a>.<\/p>\n<p>The in3D app can now bring people, in their digital forms, into Omniverse. It helps <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/creators\/\" target=\"_blank\" rel=\"noopener\">creators build engaging virtual worlds<\/a> and use these avatars as heroes, actors or spectators in their stories. The app works on any phone with a camera, recreating a user\u2019s full geometry and texture based on a video selfie.<\/p>\n<p>The avatars can even be added into 3D worlds with animations and a customizable wardrobe.<\/p>\n<p>In3D is a member of <a href=\"https:\/\/www.nvidia.com\/en-us\/startups\/\" target=\"_blank\" rel=\"noopener\">NVIDIA Inception<\/a>, a free, global program that nurtures cutting-edge startups.<\/p>\n<h2><b>Simple and Scalable Avatar Creation<\/b><\/h2>\n<p>Creating a <a href=\"https:\/\/developer.nvidia.com\/nvidia-omniverse-platform\/ace\" target=\"_blank\" rel=\"noopener\">photorealistic 3D avatar<\/a> has traditionally taken up to several months, with costs reaching up to <a href=\"https:\/\/in3d.io\/blog_page\/realistic-avatar-creator-3d-scanning-tools\/\" target=\"_blank\" rel=\"nofollow noopener\">tens of thousands of dollars<\/a>. 
Photogrammetry, a standard approach to creating 3D references of humans from images, is extremely costly, requires a digital studio and lacks scalability.<\/p>\n<p>With in3D, the process of creating 3D avatars is simple and scalable. The app understands the geometry, texture, depth and various vectors of a person via a mobile scan \u2014 and uses this information to replicate lifelike detail and create predictive animations for avatars.<\/p>\n<p>Dmitry Ulyanov, CEO of in3D, which is based in Tel Aviv, Israel, said the app captures even small details with centimeter-grade accuracy and automatically fixes lighting. This allows for precise head geometry from a single selfie, as well as estimation of a user\u2019s exact body shape.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2022\/08\/in3d-astronaut-672x420.png\" alt=\"\" width=\"672\" height=\"420\"><\/p>\n<p>For creators building 3D worlds, in3D software can save countless hours, increase productivity and result in substantial cost savings, Ulyanov said.<\/p>\n<p>\u201cManually creating one avatar can take up to months,\u201d he added. 
\u201cWith in3D\u2019s scanning app and software development kit, a user can scan and upload 21,000 people with a single GPU and mobile phone in the same amount of time.\u201d<\/p>\n<h2><b>Connecting to Omniverse<\/b><\/h2>\n<p>Ulyanov said that using in3D\u2019s extension with <a href=\"https:\/\/nvidianews.nvidia.com\/news\/virtual-assistants-and-digital-humans-on-pace-to-ace-turing-test-with-new-nvidia-omniverse-avatar-cloud-engine\" target=\"_blank\" rel=\"noopener\">NVIDIA Omniverse Avatar Cloud Engine (ACE)<\/a> opens up many possibilities for avatar building, as users can easily customize imported avatars from in3D to engage and interact with their virtual worlds \u2014 in real time and at scale.<\/p>\n<p>In3D uses <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/usd\" target=\"_blank\" rel=\"noopener\">Universal Scene Description (USD)<\/a>, an open-source, extensible file format, to seamlessly integrate its high-fidelity avatars into Omniverse. All avatar data is contained in a USD file, removing the need for complex shaders or embeddings. And bringing the avatars into Omniverse only requires a simple drag and drop.<\/p>\n<p>Once imported into Omniverse via USD, the avatars can be used in apps like <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/apps\/create\/\" target=\"_blank\" rel=\"noopener\">Omniverse Create<\/a> and <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/apps\/audio2face\/\" target=\"_blank\" rel=\"noopener\">Audio2Face<\/a>. Users have a complete toolset within Omniverse to support holistic content creation, whether animating avatars\u2019 bodies with the retargeting tool or crafting their facial expressions with Audio2Face.<\/p>\n<p>To build the Omniverse Extension, in3D used <a href=\"https:\/\/docs.omniverse.nvidia.com\/prod_kit\/prod_kit\/overview.html\" target=\"_blank\" rel=\"noopener\">Omniverse Kit<\/a> and followed the development flow using Visual Studio Code (VS Code). 
Being able to put a breakpoint anywhere in the code made VSCode an easy-to-use, convenient, out-of-the-box solution for connecting in3D to Omniverse, Ulyanov said.<\/p>\n<p>\u201cThe ability to centralize our SDK alongside other software for 3D developers is game changing,\u201d he said. \u201cWith our Omniverse Extension now available, we\u2019re looking to expand the base of developers who use our avatars.\u201d<\/p>\n<p>\u201cHaving the ability to upload our SDK and connect it with all the tools that 3D developers use has made in3D a tangible solution to deploy across all 3D development environments,\u201d said Sergei Sherman, chief marketing officer at in3D. \u201cThis was something we wouldn\u2019t have been able to achieve on our own in such a short amount of time.\u201d<\/p>\n<h2><b>Join In on the Creation<\/b><\/h2>\n<p>Creators and developers across the world can download <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/\" target=\"_blank\" rel=\"noopener\">NVIDIA Omniverse for free<\/a>, and <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/enterprise\/\" target=\"_blank\" rel=\"noopener\">enterprise teams<\/a> can use the platform for their 3D projects.<\/p>\n<p>Learn how to connect and create virtual worlds with Omniverse at NVIDIA GTC, the design and simulation conference for the era of AI and the metaverse, running online Sept. 19-22. <a href=\"https:\/\/register.nvidia.com\/events\/widget\/nvidia\/gtcfall2022\/1658260015496001Ylbn\" target=\"_blank\" rel=\"noopener\">Registration is free<\/a> and offers access to dozens of sessions and special events.<\/p>\n<p>Developers can use <a href=\"https:\/\/developer.nvidia.com\/nvidia-omniverse-platform\/code-app\" target=\"_blank\" rel=\"noopener\">Omniverse Code<\/a> to create their own Omniverse Extension for the inaugural <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/apps\/code\/developer-contest\/\" target=\"_blank\" rel=\"noopener\">#ExtendOmniverse contest<\/a> by Friday, Sept. 
9, at 5 p.m. PT, for a chance to win an NVIDIA RTX GPU. The winners will be announced in the <a href=\"https:\/\/www.nvidia.com\/gtc\/session-catalog\/?tab.catalogallsessionstab=16566177511100015Kus&amp;search=SE41388#\/\" target=\"_blank\" rel=\"noopener\">NVIDIA Omniverse User Group<\/a> at GTC.<\/p>\n<p>Find additional documentation and tutorials in the <a href=\"https:\/\/developer.nvidia.com\/nvidia-omniverse-developer-resource-center\" target=\"_blank\" rel=\"noopener\">Omniverse Resource Center<\/a>, which details how developers like Ulyanov can build custom USD-based applications and extensions for the platform.<\/p>\n<p><i>Follow NVIDIA Omniverse on <\/i><a href=\"https:\/\/www.instagram.com\/nvidiaomniverse\/\" target=\"_blank\" rel=\"noopener\"><i>Instagram<\/i><\/a><i>, <\/i><a href=\"https:\/\/medium.com\/@nvidiaomniverse\" target=\"_blank\" rel=\"noopener\"><i>Medium<\/i><\/a><i>, <\/i><a href=\"https:\/\/twitter.com\/nvidiaomniverse\" target=\"_blank\" rel=\"noopener\"><i>Twitter<\/i><\/a><i> and <\/i><a href=\"https:\/\/www.youtube.com\/channel\/UCSKUoczbGAcMld7HjpCR8OA\" target=\"_blank\" rel=\"noopener\"><i>YouTube<\/i><\/a><i> for additional resources and inspiration. 
Check out the Omniverse <\/i><a href=\"https:\/\/forums.developer.nvidia.com\/c\/omniverse\/300\" target=\"_blank\" rel=\"noopener\"><i>forums<\/i><\/a><i>, and join our <\/i><a href=\"https:\/\/discord.com\/invite\/XWQNJDNuaC\" target=\"_blank\" rel=\"noopener\"><i>Discord server<\/i><\/a> <i>and <\/i><a href=\"https:\/\/www.twitch.tv\/nvidiaomniverse\" target=\"_blank\" rel=\"noopener\"><i>Twitch <\/i><\/a><i>channel<\/i><i> to chat with the community.<\/i><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/2022\/08\/19\/omniverse-developer-in3d\/<\/p>\n","protected":false},"author":0,"featured_media":2514,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2513"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=2513"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2513\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/2514"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=2513"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=2513"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=2513"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}