{"id":2903,"date":"2023-03-07T15:46:38","date_gmt":"2023-03-07T15:46:38","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2023\/03\/07\/ai-before-you-buy-israeli-startup-renders-3d-product-models-for-top-retailers\/"},"modified":"2023-03-07T15:46:38","modified_gmt":"2023-03-07T15:46:38","slug":"ai-before-you-buy-israeli-startup-renders-3d-product-models-for-top-retailers","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2023\/03\/07\/ai-before-you-buy-israeli-startup-renders-3d-product-models-for-top-retailers\/","title":{"rendered":"AI Before You Buy: Israeli Startup Renders 3D Product Models for Top Retailers"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2023\/03\/07\/ai-startup-renders-3d-product-models-for-retailers\/\" data-title=\"AI Before You Buy: Israeli Startup Renders 3D Product Models for Top Retailers\" data-hashtags=\"\">\n<p>Preparing a retailer\u2019s online catalog once required expensive physical photoshoots to capture products from every angle. A Tel Aviv startup is saving brands time and money by transforming these camera clicks into mouse clicks.<\/p>\n<p>Hexa uses GPU-accelerated computing to help companies turn their online inventory into 3D renders that shoppers can view in 360 degrees, animate or even try on virtually to help their buying decisions. The company, which recently announced a <a href=\"https:\/\/www.prnewswire.com\/news-releases\/hexa-raises-20-5m-in-series-a-funding-round-301759411.html\" target=\"_blank\" rel=\"noopener\">$20.5 million funding round<\/a>, is working with brands in fashion, furniture, consumer electronics and more.<\/p>\n<p>\u201cThe world is going 3D,\u201d said Yehiel Atias, CEO of Hexa. \u201cJust a few years ago, the digital infrastructure to do this was still so expensive that it was more affordable to arrange a photographer, models and lighting. 
But with the advancements of AI and NVIDIA GPUs, it\u2019s now feasible for retailers to use <a href=\"https:\/\/blogs.nvidia.com\/blog\/2021\/06\/08\/what-is-synthetic-data\/\">synthetic data<\/a> to replace physical photoshoots.\u201d<\/p>\n<p>Hexa\u2019s 3D renders are used on major retail websites such as Amazon, Crate &amp; Barrel and Macy\u2019s. The company creates thousands of renders each month, reducing the need for physical photoshoots of every product in a retailer\u2019s catalog. Hexa estimates that it can save customers up to 300 pounds of carbon emissions for each product imaged digitally instead of physically.<\/p>\n<h2><b>From Physical Photoshoots to AI-Accelerated Renders<\/b><\/h2>\n<p>Hexa can reconstruct a single 2D image, or a set of low-quality 2D images, into a high-fidelity 3D asset. The company uses differing levels of automation for its renders depending on the complexity of the shape, the amount of visual data that needs to be reconstructed, and the similarity of the object to Hexa\u2019s existing dataset.<\/p>\n<p>To automate elements of its workflow, the team uses dozens of AI algorithms that were developed using the PyTorch deep learning framework and run on NVIDIA Tensor Core GPUs in the cloud. If one of Hexa\u2019s artists is reconstructing a 3D toaster, for example, one algorithm can identify similar geometries the team has created in the past to give the creator a head start.<\/p>\n<p>Another neural network can scan a retailer\u2019s website to identify how many of its products Hexa can support with 3D renders. The company\u2019s entire rendering pipeline, too, runs on NVIDIA GPUs <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/gpu-cloud-computing\/amazon-web-services\/\">available through Amazon Web Services<\/a>.<\/p>\n<p>\u201cAccessing compute resources through AWS gives us the option to use thousands of NVIDIA GPUs at a moment\u2019s notice,\u201d said Segev Nahari, lead technical artist at Hexa. 
\u201cIf I need 10,000 frames to be ready by a certain time, I can request the hardware I need to meet the deadline.\u201d<\/p>\n<p>Nahari estimates that rendering on NVIDIA GPUs is up to 3x faster than relying on CPUs.<\/p>\n<h2><b>Broadening Beyond Retail, Venturing Into Omniverse<\/b><\/h2>\n<p>Hexa developers are continually experimenting with new methods for 3D rendering \u2014 looking for workflow improvements in preprocessing, object reconstruction and post-processing. The team recently began working with <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/09\/23\/3d-generative-ai-research-virtual-worlds\/\">NVIDIA GET3D<\/a>, a generative AI model by <a href=\"https:\/\/www.nvidia.com\/en-us\/research\/\">NVIDIA Research<\/a> that generates high-fidelity, three-dimensional shapes based on a training dataset of 2D images.<\/p>\n<figure id=\"attachment_62810\" aria-describedby=\"caption-attachment-62810\" class=\"wp-caption alignright\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-62810\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/03\/get3d_sneaker.gif\" alt=\"sneaker generated by GET3D\" width=\"300\" height=\"300\"><figcaption id=\"caption-attachment-62810\" class=\"wp-caption-text\">By training GET3D on Hexa\u2019s dataset of shoes, the team was able to generate 3D models of novel shoes not part of the training data.<\/figcaption><\/figure>\n<p>In addition to its work in ecommerce, Hexa\u2019s research and development team is investigating new applications for the company\u2019s AI software.<\/p>\n<p>\u201cIt doesn\u2019t stop at retail,\u201d Atias said. 
\u201cIndustries from gaming to fashion and healthcare are finding out that synthetic data and 3D technology is a more efficient way to do things like digitize inventory, create digital twins and train robots.\u201d<\/p>\n<p>The team credits its membership in <a href=\"https:\/\/www.nvidia.com\/en-us\/startups\/\">NVIDIA Inception<\/a>, a global program that supports cutting-edge startups, as a \u201chuge advantage\u201d in leveling up the technology Hexa uses.<\/p>\n<p>\u201cBeing part of Inception opens doors that outsiders don\u2019t have,\u201d Atias said. \u201cFor a small company trying to navigate the massive range of NVIDIA hardware and software offerings, it\u2019s a door-opener to all the cool tools we wanted to experiment with and understand the potential they could bring to Hexa.\u201d<\/p>\n<p>Hexa is testing the <a href=\"https:\/\/www.nvidia.com\/en-us\/omniverse\/enterprise\/\">NVIDIA Omniverse Enterprise<\/a> platform \u2014 an end-to-end platform for building and operating metaverse applications \u2014 as a tool to unify its annotating and rendering workflows, which are used by dozens of 3D artists around the globe. Omniverse Enterprise enables geographically dispersed teams of creators to customize their rendering pipelines and collaborate to build 3D assets.<\/p>\n<p>\u201cEach of our 3D artists has a different software workflow that they\u2019re used to \u2014 so it can be tough to get a unified output while still being flexible about the tools each artist uses,\u201d said Jonathan Clark, Hexa\u2019s CTO. \u201cOmniverse is an ideal candidate in that respect, with huge potential for Hexa. 
The platform will allow our artists to use the rendering software they\u2019re comfortable with, while also allowing our team to visualize the final product in one place.\u201d<\/p>\n<p><i>To learn more about <\/i><a href=\"https:\/\/register.nvidia.com\/events\/widget\/nvidia\/gtcspring2023\/1674761436378001P3OK\"><i>NVIDIA Omniverse and next-generation content creation<\/i><\/a><i>, register free for <\/i><a href=\"https:\/\/www.nvidia.com\/gtc\/\"><i>NVIDIA GTC<\/i><\/a><i>, a global conference for the era of AI and the metaverse, taking place online March 20-23.<\/i><\/p>\n<p><em>Images and videos courtesy of Hexa<\/em><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/2023\/03\/07\/ai-startup-renders-3d-product-models-for-retailers\/<\/p>\n","protected":false},"author":0,"featured_media":2904,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2903"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=2903"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/2903\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/2904"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=2903"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=2903"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=2903"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}