{"id":792,"date":"2021-09-04T13:59:08","date_gmt":"2021-09-04T13:59:08","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2021\/09\/04\/all-the-feels-nvidia-shares-expressive-speech-synthesis-research-at-interspeech\/"},"modified":"2021-09-04T13:59:08","modified_gmt":"2021-09-04T13:59:08","slug":"all-the-feels-nvidia-shares-expressive-speech-synthesis-research-at-interspeech","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2021\/09\/04\/all-the-feels-nvidia-shares-expressive-speech-synthesis-research-at-interspeech\/","title":{"rendered":"All the Feels: NVIDIA Shares Expressive Speech Synthesis Research at Interspeech"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2021\/08\/31\/conversational-ai-research-speech-synthesis-interspeech\/\" data-title=\"All the Feels: NVIDIA Shares Expressive Speech Synthesis Research at Interspeech\">\n<p>AI has transformed synthesized speech from the monotone of robocalls and decades-old GPS navigation systems to the polished tone of virtual assistants in smartphones and smart speakers.<\/p>\n<p>But there\u2019s still a gap between AI-synthesized speech and the human speech we hear in daily conversation and in the media. That\u2019s because people speak with complex rhythm, intonation and timbre that\u2019s challenging for AI to emulate.<\/p>\n<p>The gap is closing fast: NVIDIA researchers are building models and tools for high-quality, controllable speech synthesis that capture the richness of human speech, without audio artifacts. Their latest projects are now on display in sessions at the <a href=\"https:\/\/www.nvidia.com\/en-us\/events\/interspeech\/\">Interspeech 2021<\/a> conference, which runs through Sept. 
3.<\/p>\n<p>These models can help voice automated customer service lines for banks and retailers, bring video-game or book characters to life, and provide real-time speech synthesis for digital avatars.<\/p>\n<p>NVIDIA\u2019s in-house creative team even uses the technology to produce expressive narration for a video series on the power of AI.<\/p>\n<p>Expressive speech synthesis is just one element of NVIDIA Research\u2019s work in <a href=\"https:\/\/blogs.nvidia.com\/blog\/2021\/02\/25\/what-is-conversational-ai\/\">conversational AI<\/a> \u2014 a field that also encompasses natural language processing, automated speech recognition, keyword detection, audio enhancement and more.<\/p>\n<p>Optimized to run efficiently on NVIDIA GPUs, some of this cutting-edge work has been made open source through the <a href=\"https:\/\/developer.nvidia.com\/nvidia-nemo\">NVIDIA NeMo toolkit<\/a>, available on our <a href=\"https:\/\/ngc.nvidia.com\/catalog\/containers\/nvidia:nemo\">NGC hub<\/a> of containers and other software.<\/p>\n<h2><b>Behind the Scenes of I AM AI<\/b><\/h2>\n<p>NVIDIA researchers and creative professionals don\u2019t just talk the conversational AI talk. They walk the walk, putting groundbreaking speech synthesis models to work in our <a href=\"https:\/\/www.youtube.com\/playlist?list=PLZHnYvH1qtObE_PjzaAFqS_CpmumGx5cW\" target=\"_blank\" rel=\"noopener\">I AM AI video series<\/a>, which features global AI innovators reshaping just about every industry imaginable.<\/p>\n<p>But until recently, these videos were narrated by a human. 
Previous speech synthesis models offered limited control over a synthesized voice\u2019s pacing and pitch, so attempts at AI narration didn\u2019t evoke the emotional response in viewers that a talented human speaker could.<\/p>\n<p>That changed over the past year when NVIDIA\u2019s text-to-speech research team developed more powerful, controllable speech synthesis models like RAD-TTS, used in our <a href=\"https:\/\/blogs.nvidia.com\/blog\/2021\/08\/10\/siggraph-real-time-live-demo\/\">winning demo at the SIGGRAPH Real-Time Live<\/a> competition. By training the text-to-speech model with audio of an individual\u2019s speech, RAD-TTS can convert any text prompt into the speaker\u2019s voice.<\/p>\n<p>Another of its features is voice conversion, where one speaker\u2019s words (or even singing) are delivered in another speaker\u2019s voice. Inspired by the idea of the human voice as a musical instrument, the RAD-TTS interface gives users fine-grained, frame-level control over the synthesized voice\u2019s pitch, duration and energy.<\/p>\n<p>With this interface, our video producer could record himself reading the video script, and then use the AI model to convert his speech into the female narrator\u2019s voice. 
Using this baseline narration, the producer could then direct the AI like a voice actor \u2014 tweaking the synthesized speech to emphasize specific words, and modifying the pacing of the narration to better express the video\u2019s tone.<\/p>\n<p>The AI model\u2019s capabilities go beyond voiceover work: text-to-speech can be used in gaming, to aid individuals with vocal disabilities or to help users translate between languages in their own voice. It can even recreate the performances of iconic singers, matching not only the melody of a song, but also the emotional expression behind the vocals.<\/p>\n<h2><b>Giving Voice to AI Developers, Researchers<\/b><\/h2>\n<p>With <a href=\"https:\/\/github.com\/NVIDIA\/NeMo\" target=\"_blank\" rel=\"noopener\">NVIDIA NeMo<\/a> \u2014 an open-source Python toolkit for GPU-accelerated conversational AI \u2014 researchers, developers and creators gain a head start in experimenting with, and fine-tuning, speech models for their own applications.<\/p>\n<p>Easy-to-use APIs and models pretrained in NeMo help researchers develop and customize models for text-to-speech, natural language processing and real-time automated speech recognition. Several of the models are trained with tens of thousands of hours of audio data on <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/dgx-systems\/\">NVIDIA DGX systems<\/a>. 
Developers can fine-tune any model for their use cases, speeding up training using <a href=\"https:\/\/blogs.nvidia.com\/blog\/2019\/11\/15\/whats-the-difference-between-single-double-multi-and-mixed-precision-computing\/\">mixed-precision computing<\/a> on <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/tensor-cores\/\">NVIDIA Tensor Core GPUs<\/a>.<\/p>\n<p>Through <a href=\"https:\/\/ngc.nvidia.com\/catalog\/containers\/nvidia:nemo\">NGC<\/a>, NVIDIA NeMo also offers models trained on <a href=\"https:\/\/developer.nvidia.com\/blog\/nvidia-and-mozilla-release-common-voice-dataset-surpassing-13000-hours-for-the-first-time\/\">Mozilla Common Voice<\/a>, a dataset with nearly 14,000 hours of crowd-sourced speech data in 76 languages. Supported by NVIDIA, the project aims to democratize voice technology with the world\u2019s largest open-data voice dataset.<\/p>\n<h2><b>Voice Box: NVIDIA Researchers Unpack AI Speech<\/b><\/h2>\n<p>Interspeech brings together more than 1,000 researchers to showcase groundbreaking work in speech technology. 
At this week\u2019s conference, NVIDIA Research is presenting conversational AI model architectures as well as fully formatted speech datasets for developers.<\/p>\n<p>Catch the following sessions led by NVIDIA speakers:<\/p>\n<p><i>Find NVIDIA NeMo models in the <\/i><a href=\"https:\/\/ngc.nvidia.com\/catalog\/containers\/nvidia:nemo\"><i>NGC catalog<\/i><\/a><i>, and tune into talks by <\/i><a href=\"https:\/\/www.nvidia.com\/en-us\/events\/interspeech\/\"><i>NVIDIA researchers at Interspeech<\/i><\/a><i>.\u00a0<\/i><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>http:\/\/feedproxy.google.com\/~r\/nvidiablog\/~3\/cEsxLYZwSLg\/<\/p>\n","protected":false},"author":0,"featured_media":793,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/792"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=792"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/792\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/793"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=792"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=792"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=792"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{
rel}","templated":true}]}}