{"id":983,"date":"2021-10-02T08:48:37","date_gmt":"2021-10-02T08:48:37","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2021\/10\/02\/ai-offers-versatile-general-robots-on-horizon-says-robotics-visionary\/"},"modified":"2021-10-02T08:48:37","modified_gmt":"2021-10-02T08:48:37","slug":"ai-offers-versatile-general-robots-on-horizon-says-robotics-visionary","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2021\/10\/02\/ai-offers-versatile-general-robots-on-horizon-says-robotics-visionary\/","title":{"rendered":"AI Offers Versatile General Robots on Horizon, Says Robotics Visionary"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2021\/10\/01\/ai-offers-versatile-general-robots-on-horizon-says-robotics-visionary\/\" data-title=\"AI Offers Versatile General Robots on Horizon, Says Robotics Visionary\">\n<p>Robots require huge training leaps in AI to become versatile general helpers, says robotics pioneer Pieter Abbeel, who laid out a vision for such neural network brains of the future.<\/p>\n<p>Home robots are electronically and mechanically possible today, Abbeel said, but they lack the AI know-how to navigate a wide variety of situations.<\/p>\n<p>\u201cIt\u2019s just our software, our AI, that hasn\u2019t been good enough to make this a reality in our home,\u201d he said.<\/p>\n<p>Abbeel spoke Wednesday at NTECH 2021, an annual internal engineering conference at NVIDIA, drawing hundreds of online viewers.<\/p>\n<p>Abbeel is a professor of electrical engineering and computer science at the University of California, Berkeley. He is also director of the university\u2019s Robot Learning Lab and co-director of the Berkeley Artificial Intelligence Research (BAIR) Lab. 
With all of that going on, the soft-spoken Belgian engineer also hosts <a href=\"https:\/\/therobotbrains.ai\/\">The Robot Brains Podcast<\/a>.<\/p>\n<p>While juggling his roles, Abbeel also spent nearly two years at OpenAI, the nonprofit formed in 2015 by tech luminaries including <a href=\"https:\/\/blogs.nvidia.com\/blog\/2018\/09\/14\/reinforcement-learning-openai-pro-gamers-ilya-sutskeverai-elon-musk-gpu-dgx\/\">Ilya Sutskever<\/a> \u2014 who is speaking at <a href=\"https:\/\/www.nvidia.com\/gtc\/\">GTC 2021<\/a> \u2014 to develop and release artificial general intelligence aimed at benefiting humanity.<\/p>\n<p>He left OpenAI in 2017 to start <a href=\"https:\/\/covariant.ai\/\">Covariant<\/a>, a developer of AI for robotic automation of warehouses and factories that has attracted $147 million in funding. Before that he co-founded AI-assisted grading startup Gradescope, which was acquired in 2018.<\/p>\n<p>NVIDIA CEO Jensen Huang spoke with Abbeel after the robotics talk, calling him \u201cone of the brightest minds on the planet.\u201d<\/p>\n<h2><b>Talking Big Robot Brains<\/b><\/h2>\n<p>Abbeel\u2019s talk laid out what he called a \u201cnice starting point\u201d for what could be done to create more capable robots, ones with big brains that could learn new tasks on their own. The talk mentioned the work of many AI researchers, such as Geoffrey Hinton, as stepping stones to get there.<\/p>\n<p>Crediting deep learning pioneer Yann LeCun, he said the idea of training a large neural network for robots with internet videos to do prediction was promising. 
Such training would force robots to learn how the world works, filling in a big piece of the robot AI puzzle.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2021\/10\/AbbeelDiagram.jpg\" alt=\"\" width=\"624\" height=\"312\"><\/p>\n<p>Robots might learn entire behavioral representations from videos \u2014 a sequence of things to do \u2014 versus the limitations of using images alone. \u201cVideo prediction is possibly the biggest missing piece to have a good pretrained neural network that can be quickly used for other things for real-world robotic tasks,\u201d he said.<\/p>\n<p>Training on text could be important for robots as well. A robot might learn to carry out an entire sequence of events on command: delivering a car to an auto mechanic for its owner, for example, handling everything from the driving to arranging a pickup time with the mechanic, he suggested.<\/p>\n<h2><b>Train \u2018Mostly in Simulation\u2019<\/b><\/h2>\n<p>Most robot training should be done in simulation \u2014 it\u2019s impractical to slowly train robots in the real world, bumping into things to learn by trial and error, said Abbeel.<\/p>\n<p>Only a small amount of real-world training would then be needed to confirm that what robots learn in simulation carries over to reality. 
\u201cMillions of scenes become very hard to collect in the real world,\u201d he said.<\/p>\n<p>Running thousands \u2014 even millions \u2014 of simulations can train neural networks in far less time, he said.<\/p>\n<p>\u201cIt\u2019s more affordable, and you can scale it up and parallelize many simulation runs,\u201d he said.<\/p>\n<h2><b>Bridging Research to Commercial<\/b><\/h2>\n<p>In a post-presentation talk with Huang, Abbeel discussed bridging university research to commercial applications.<\/p>\n<p>\u201cYou both do groundbreaking research in the university, and you are also practicing that art and the industry of robotics \u2014 you\u2019re putting it to work in a real company (Covariant),\u201d Huang said. \u201cSolving the 99 percent problem is essential.\u201d<\/p>\n<p>Abbeel agreed that it\u2019s important to know, with confidence, that your network\u2019s performance will satisfy the customer you\u2019re selling it to. He said that\u2019s largely a matter of knowing the network you\u2019ve developed: running a lot of tests and gathering the statistics.<\/p>\n<p>At Covariant, he said, the current applications are pick-and-place use cases in warehouses \u2014 for the <a href=\"https:\/\/covariant.ai\/solutions\">Covariant Brain<\/a> \u2014 and hitting end-to-end performance targets is what matters.<\/p>\n<p>\u201cIf you don\u2019t make your customers happy, what\u2019s the point in selling to them?\u201d said 
Abbeel.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>http:\/\/feedproxy.google.com\/~r\/nvidiablog\/~3\/GHKrSjtRaH4\/<\/p>\n","protected":false},"author":0,"featured_media":984,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/983"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=983"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/983\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/984"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=983"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=983"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=983"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}