{"id":1391,"date":"2021-12-18T00:39:43","date_gmt":"2021-12-18T00:39:43","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2021\/12\/18\/post-call-analytics-for-your-contact-center-with-amazon-language-ai-services\/"},"modified":"2021-12-18T00:39:43","modified_gmt":"2021-12-18T00:39:43","slug":"post-call-analytics-for-your-contact-center-with-amazon-language-ai-services","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2021\/12\/18\/post-call-analytics-for-your-contact-center-with-amazon-language-ai-services\/","title":{"rendered":"Post call analytics for your contact center with Amazon language AI services"},"content":{"rendered":"<div id=\"\">\n<p>Your contact center connects your business to your community, enabling customers to order products, callers to request support, clients to make appointments, and much more. Each conversation with a caller is an opportunity to learn more about that caller\u2019s needs, and how well those needs were addressed during the call. You can uncover insights from these conversations that help you manage script compliance and find new opportunities to satisfy your customers, perhaps by expanding your services to address reported gaps, improving the quality of reported problem areas, or by elevating the customer experience delivered by your contact center agents.<\/p>\n<p><a href=\"https:\/\/aws.amazon.com\/connect\/contact-lens\/\" target=\"_blank\" rel=\"noopener noreferrer\">Contact Lens for Amazon Connect<\/a> provides call transcriptions with rich analytics capabilities that can provide these kinds of insights, but you may not currently be using <a href=\"https:\/\/aws.amazon.com\/connect\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Connect<\/a>. 
You need a solution that works with your existing contact center call recordings.<\/p>\n<p>Amazon Machine Learning (ML) services like <a href=\"https:\/\/aws.amazon.com\/transcribe\/call-analytics\/?nc=sn&amp;loc=2&amp;dn=1\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Transcribe Call Analytics<\/a> and <a href=\"https:\/\/aws.amazon.com\/comprehend\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Comprehend<\/a> provide feature-rich APIs that you can use to transcribe and extract insights from your contact center audio recordings at scale. Although you could build your own custom call analytics solution using these services, that requires time and resources. In this post, we introduce our new sample solution for post call analytics.<\/p>\n<h2>Solution overview<\/h2>\n<p>Our new sample solution, Post Call Analytics (PCA), does most of the heavy lifting associated with providing an end-to-end solution that can process call recordings from your existing contact center. PCA provides actionable insights to spot emerging trends, identify agent coaching opportunities, and assess the general sentiment of calls.<\/p>\n<p>You provide your call recordings, and PCA automatically processes them using Transcribe Call Analytics and other AWS services to extract valuable intelligence such as customer and agent sentiment, call drivers, entities discussed, and conversation characteristics such as non-talk time, interruptions, loudness, and talk speed. Transcribe Call Analytics detects issues using built-in ML models that have been trained using thousands of hours of conversations. With the automated call categorization capability, you can also tag conversations based on keywords or phrases, sentiment, and non-talk time. 
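Although PCA orchestrates all of this for you, it can help to see the shape of the underlying API call. The following is a minimal, illustrative boto3 sketch (not PCA's actual source) of starting a Transcribe Call Analytics batch job; the job name, S3 URIs, and IAM role ARN are placeholders:

```python
# Illustrative sketch (not PCA's actual code): assemble and start an
# Amazon Transcribe Call Analytics batch job with boto3.
# The job name, S3 URIs, and role ARN below are placeholders.

def build_call_analytics_request(job_name, media_uri, output_uri, role_arn):
    """Build the kwargs for transcribe.start_call_analytics_job()."""
    return {
        "CallAnalyticsJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "OutputLocation": output_uri,
        "DataAccessRoleArn": role_arn,
        # Stereo convention used in this post: customer on channel 0,
        # agent on channel 1.
        "ChannelDefinitions": [
            {"ChannelId": 0, "ParticipantRole": "CUSTOMER"},
            {"ChannelId": 1, "ParticipantRole": "AGENT"},
        ],
        # Optional PII redaction of the transcript.
        "Settings": {
            "ContentRedaction": {
                "RedactionType": "PII",
                "RedactionOutput": "redacted",
            }
        },
    }

if __name__ == "__main__":
    import boto3  # requires AWS credentials with Transcribe permissions
    transcribe = boto3.client("transcribe")
    request = build_call_analytics_request(
        job_name="example-call-001",
        media_uri="s3://my-input-bucket/originalAudio/example-call-001.wav",
        output_uri="s3://my-output-bucket/analytics/",
        role_arn="arn:aws:iam::123456789012:role/TranscribeDataAccessRole",
    )
    transcribe.start_call_analytics_job(**request)
```

PCA builds and submits a request like this for each recording, so you never have to call the API yourself; the sketch is only to show what the service expects.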
And you can optionally redact sensitive customer data such as names, addresses, credit card numbers, and social security numbers from both transcript and audio files.<\/p>\n<p>PCA\u2019s web user interface has a home page showing all your calls, as shown in the following screenshot.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image001.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" class=\"alignnone size-full wp-image-31766\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image001.png\" alt=\"\" height=\"297\"><\/a><\/p>\n<p>You can choose a record to see the details of the call, such as speech characteristics.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image003.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31767\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image003.png\" alt=\"\" width=\"708\" height=\"604\"><\/a><\/p>\n<p>You can also scroll down to see annotated turn-by-turn call details.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image004.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31768\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image004.png\" alt=\"\" width=\"811\" height=\"594\"><\/a><\/p>\n<p>You can search for calls based on dates, entities, or sentiment characteristics.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image006.png\" target=\"_blank\" 
rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31769\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image006.png\" alt=\"\" width=\"723\" height=\"528\"><\/a><\/p>\n<p>You can also search your call transcriptions.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image007.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31770\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image007.png\" alt=\"\" width=\"1178\" height=\"539\"><\/a><\/p>\n<p>Lastly, you can query detailed call analytics data from your preferred business intelligence (BI) tool.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image009.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31771\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image009.png\" alt=\"\" width=\"568\" height=\"504\"><\/a><\/p>\n<p>PCA currently supports the following features:<\/p>\n<ul>\n<li><strong>Transcription<\/strong>\n          <\/li>\n<li><strong>Analytics<\/strong>\n<ul>\n<li>Caller and agent sentiment details and trends<\/li>\n<li>Talk and non-talk time for both caller and agent<\/li>\n<li>Configurable Transcribe Call Analytics categories based on the presence or absence of keywords or phrases, sentiment, and non-talk time<\/li>\n<li>Detects callers\u2019 main issues using built-in ML models in Transcribe Call Analytics<\/li>\n<li>Discovers entities referenced in the call using Amazon Comprehend standard or custom entity detection models, or simple configurable string 
matching<\/li>\n<li>Detects when caller and agent interrupt each other<\/li>\n<li>Speaker loudness<\/li>\n<\/ul>\n<\/li>\n<li><strong>Search<\/strong>\n<ul>\n<li>Search on call attributes such as time range, sentiment, or entities<\/li>\n<li>Search transcriptions<\/li>\n<\/ul>\n<\/li>\n<li><strong>Other<\/strong>\n<ul>\n<li>Detects metadata from audio file names, such as call GUID, agent\u2019s name, and call date time<\/li>\n<li>Scales automatically to handle variable call volumes<\/li>\n<li>Bulk loads large archives of older recordings while maintaining capacity to process new recordings as they arrive<\/li>\n<li>Sample recordings so you can quickly try out PCA for yourself<\/li>\n<li>It\u2019s easy to install with a single <a href=\"https:\/\/aws.amazon.com\/cloudformation\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS CloudFormation<\/a> template<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>This is just the beginning! We expect to add many more exciting features over time, based on your feedback.<\/p>\n<h2>Deploy the CloudFormation stack<\/h2>\n<p>Start your PCA experience by using AWS CloudFormation to deploy the solution with sample recordings loaded.<\/p>\n<ol>\n<li>Use the following <strong>Launch Stack<\/strong> button to deploy the PCA solution in the <code>us-east-1<\/code> (N. 
Virginia) AWS Region.<br \/><a href=\"https:\/\/us-east-1.console.aws.amazon.com\/cloudformation\/home?region=us-east-1#\/stacks\/create\/review?templateURL=https:\/\/s3.us-east-1.amazonaws.com\/aws-ml-blog-us-east-1\/artifacts\/pca\/pca-main.yaml&amp;stackName=PostCallAnalytics\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-15948 size-full\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/09\/16\/2-LaunchStack.jpg\" alt=\"\" width=\"107\" height=\"20\"><\/a><\/li>\n<\/ol>\n<p>The source code is available in our <a href=\"https:\/\/github.com\/aws-samples\/amazon-transcribe-post-call-analytics\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub repository<\/a>. Follow the directions in the <a href=\"https:\/\/github.com\/aws-samples\/amazon-transcribe-post-call-analytics\/blob\/main\/README.md\" target=\"_blank\" rel=\"noopener noreferrer\">README<\/a> to deploy PCA to <a href=\"https:\/\/aws.amazon.com\/about-aws\/global-infrastructure\/regional-product-services\/\" target=\"_blank\" rel=\"noopener noreferrer\">additional Regions supported by Amazon Transcribe<\/a>.<\/p>\n<ol start=\"2\">\n<li>For <strong>Stack name<\/strong>, use the default value, <code>PostCallAnalytics<\/code>.<\/li>\n<li>For <strong>AdminUsername<\/strong>, use the default value, <code>admin<\/code>.<\/li>\n<li>For <strong>AdminEmail<\/strong>, use a valid email address\u2014your temporary password is emailed to this address during the deployment.<\/li>\n<li>For <strong>loadSampleAudioFiles<\/strong>, change the value to <code>true<\/code>.<\/li>\n<li>For <strong>EnableTranscriptKendraSearch<\/strong>, change the value to <code>Yes, create new Kendra Index (Developer Edition)<\/code>.<\/li>\n<\/ol>\n<p>If you have previously used your <a href=\"https:\/\/aws.amazon.com\/kendra\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Kendra<\/a> Free Tier allowance, you incur an hourly cost for this index (more information on cost later in this post). Amazon Kendra transcript search is an optional feature, so if you don\u2019t need it and are concerned about cost, use the default value of <code>No<\/code>.<\/p>\n<ol start=\"7\">\n<li>For all other parameters, use the default values.<\/li>\n<\/ol>\n<p>If you want to customize the settings later, for example to apply custom vocabulary to improve accuracy, or to customize entity detection, you can update the stack to set these parameters.<\/p>\n<ol start=\"8\">\n<li>Select the two acknowledgement boxes, and choose <strong>Create stack<\/strong>.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image011.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31772\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image011.png\" alt=\"\" width=\"599\" height=\"229\"><\/a><\/li>\n<\/ol>\n<p>The main CloudFormation stack uses nested stacks to create the required resources in your AWS account.<\/p>\n<p>The stacks take about 20 minutes to deploy. 
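If you script your deployment, you can wait for the stack to finish and read its outputs programmatically. Here is a small illustrative boto3 sketch (it assumes your AWS credentials and Region are configured; the helper function is ours, not part of PCA):

```python
# Illustrative sketch: wait for the PCA stack to finish deploying and
# print its outputs with boto3 (assumes configured AWS credentials).

def outputs_to_dict(outputs):
    """Convert CloudFormation's list of stack outputs to a simple dict."""
    return {o["OutputKey"]: o["OutputValue"] for o in outputs}

if __name__ == "__main__":
    import boto3
    cfn = boto3.client("cloudformation", region_name="us-east-1")
    # Block until the stack reaches CREATE_COMPLETE (raises on failure).
    cfn.get_waiter("stack_create_complete").wait(StackName="PostCallAnalytics")
    stack = cfn.describe_stacks(StackName="PostCallAnalytics")["Stacks"][0]
    outputs = outputs_to_dict(stack.get("Outputs", []))
    print("Web UI:", outputs.get("WebAppURL"))
```

The same `describe_stacks` call returns every output referenced later in this post, such as `WebAppURL` and `InputBucket`.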
The main stack status shows as CREATE_COMPLETE when everything is deployed.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image012.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31773\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image012.png\" alt=\"\" width=\"252\" height=\"89\"><\/a><\/p>\n<h2>Set your password<\/h2>\n<p>After you deploy the stack, you need to open the PCA web user interface and set your password.<\/p>\n<ol>\n<li>On the AWS CloudFormation console, choose the main stack, <code>PostCallAnalytics<\/code>, and choose the <strong>Outputs<\/strong> tab.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image013.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31774\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image013.png\" alt=\"\" width=\"986\" height=\"455\"><\/a><\/li>\n<li>Open your web browser to the URL shown as <code>WebAppURL<\/code> in the outputs.<\/li>\n<\/ol>\n<p>You\u2019re redirected to a login page.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image015.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31775\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image015.png\" alt=\"\" width=\"373\" height=\"350\"><\/a><\/p>\n<ol start=\"3\">\n<li>Open the email you received at the email address you provided, with the subject \u201cWelcome to the Amazon Transcribe Post Call 
Analytics (PCA) Solution!\u201d<\/li>\n<\/ol>\n<p>This email contains a generated temporary password that you can use to log in (as user <code>admin<\/code>) and create your own password.<\/p>\n<ol start=\"4\">\n<li>Set a new password.<\/li>\n<\/ol>\n<p>Your new password must have a length of at least eight characters, and contain uppercase and lowercase characters, plus numbers and special characters.<\/p>\n<p>You\u2019re now logged in to PCA. Because you set <code>loadSampleAudioFiles<\/code> to <code>true<\/code>, your PCA deployment now has three sample calls pre-loaded for you to explore.<\/p>\n<h2>Optional: Open the transcription search web UI and set your permanent password<\/h2>\n<p>Follow these additional steps to log in to the companion transcript search web app, which is deployed only when you set <code>EnableTranscriptKendraSearch<\/code> when you launch the stack.<\/p>\n<ol>\n<li>On the AWS CloudFormation console, choose the main stack, <code>PostCallAnalytics<\/code>, and choose the <strong>Outputs<\/strong> tab.<\/li>\n<li>Open your web browser to the URL shown as <code>TranscriptionMediaSearchFinderURL<\/code> in the outputs.<\/li>\n<\/ol>\n<p>You\u2019re redirected to the login page.<\/p>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image017.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31776\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image017.png\" alt=\"\" width=\"345\" height=\"317\"><\/a><\/p>\n<ol start=\"3\">\n<li>Open the email you received at the email address you provided, with the subject \u201cWelcome to Finder Web App.\u201d<\/li>\n<\/ol>\n<p>This email contains a generated temporary password that you can use to log in (as user <code>admin<\/code>).<\/p>\n<ol start=\"4\">\n<li>Create your own password, just like you already did for the PCA web 
application.<\/li>\n<\/ol>\n<p>As before, your new password must have a length of at least eight characters, and contain uppercase and lowercase characters, plus numbers and special characters.<\/p>\n<p>You\u2019re now logged in to the transcript search Finder application. The sample audio files are indexed already, and ready for search.<\/p>\n<h2>Explore post call analytics features<\/h2>\n<p>Now, with PCA successfully installed, you\u2019re ready to explore the call analysis features.<\/p>\n<h3>Home page<\/h3>\n<p>To explore the home page, open the PCA web UI using the URL shown as <code>WebAppURL<\/code> in the main stack outputs (bookmark this URL, you\u2019ll use it often!)<\/p>\n<p>You already have three calls listed on the home page, sorted in descending time order (most recent first). These are the sample audio files.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image019.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31777\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image019.png\" alt=\"\" width=\"982\" height=\"296\"><\/a><\/p>\n<p>The calls have the following key details:<\/p>\n<ul>\n<li><strong>Job Name<\/strong> \u2013 Is assigned from the recording audio file name, and serves as a unique job name for this call<\/li>\n<li><strong>Timestamp<\/strong> \u2013 Is parsed from the audio file name if possible, otherwise it\u2019s assigned the time when the recording is processed by PCA<\/li>\n<li><strong>Customer Sentiment and Customer Sentiment Trend<\/strong> \u2013 Show the overall caller sentiment and, importantly, whether the caller was more positive at the end of the call than at the beginning<\/li>\n<li><strong>Language Code <\/strong>\u2013 Shows the specified language or the automatically detected dominant language of the 
call<\/li>\n<\/ul>\n<h3>Call details<\/h3>\n<p>Choose the most recently received call to open and explore the call detail page. You can review the call information and analytics such as sentiment, talk time, interruptions, and loudness.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image021.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31778\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image021.png\" alt=\"\" width=\"709\" height=\"605\"><\/a><\/p>\n<p>Scroll down to see the following details:<\/p>\n<ul>\n<li>Entities grouped by entity type. Entities are detected by Amazon Comprehend and the sample entity recognizer string map.<\/li>\n<li>Categories detected by Transcribe Call Analytics. By default, there are no categories; see <a href=\"https:\/\/docs.aws.amazon.com\/transcribe\/latest\/dg\/call-analytics-categorization.html\" target=\"_blank\" rel=\"noopener noreferrer\">Call categorization<\/a> for more information.<\/li>\n<li>Issues detected by the Transcribe Call Analytics built-in ML model. Issues succinctly capture the main reasons for the call. 
For more information, see <a href=\"https:\/\/docs.aws.amazon.com\/transcribe\/latest\/dg\/call-analytics-issue-detection.html\" target=\"_blank\" rel=\"noopener noreferrer\">Issue detection<\/a>.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image022.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31779\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image022.png\" alt=\"\" width=\"902\" height=\"349\"><\/a><\/p>\n<p>Scroll further to see the turn-by-turn transcription for the call, with annotations for speaker, time marker, sentiment, interruptions, issues, and entities.<\/p>\n<p>Use the embedded media player to play the call audio from any point in the conversation. Set the position by choosing the time marker annotation on the transcript or by using the player time control. The audio player remains visible as you scroll down the page.<\/p>\n<p>PII is redacted from both transcript and audio\u2014redaction is enabled using the CloudFormation stack parameters.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image024.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31780\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image024.png\" alt=\"\" width=\"914\" height=\"510\"><\/a><\/p>\n<h3>Search based on call attributes<\/h3>\n<p>To try PCA\u2019s built-in search, choose <strong>Search<\/strong> at the top of the screen. 
Under <strong>Sentiment<\/strong>, choose <strong>Average<\/strong>, <strong>Customer<\/strong>, and <strong>Negative<\/strong> to select the calls that had average negative customer sentiment.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image026.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31781\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image026.png\" alt=\"\" width=\"787\" height=\"577\"><\/a><\/p>\n<p>Choose <strong>Clear<\/strong> to try a different filter. For <strong>Entities<\/strong>, enter <code>Hyundai<\/code> and then choose <strong>Search<\/strong>. Select the call from the search results and verify from the transcript that the customer was indeed calling about their Hyundai.<\/p>\n<h3>Search call transcripts<\/h3>\n<p>Transcript search is an experimental, optional, add-on feature powered by Amazon Kendra.<\/p>\n<p>Open the transcript web UI using the URL shown as <code>TranscriptionMediaSearchFinderURL<\/code> in the main stack outputs. To find a recent call, enter the search query <code>customer hit the wall<\/code>.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image028.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31782\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image028.png\" alt=\"\" width=\"878\" height=\"277\"><\/a><\/p>\n<p>The results show transcription extracts from relevant calls. Use the embedded audio player to play the associated section of the call recording.<\/p>\n<p>You can expand <strong>Filter search results <\/strong>to refine the search results with additional filters. 
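Behind this web app, queries are served by the Amazon Kendra index that PCA populates. If you want to query the index directly from your own tools, a rough boto3 sketch looks like this (the index ID is a placeholder; find yours on the Amazon Kendra console, and note the helper function is ours, not part of PCA):

```python
# Illustrative sketch (placeholder index ID): query the PCA transcript
# Kendra index directly with boto3, similar to what the Finder app does.

def top_excerpts(result_items, limit=3):
    """Pull the first few document excerpts out of a Kendra query response."""
    excerpts = []
    for item in result_items[:limit]:
        excerpts.append(item["DocumentExcerpt"]["Text"])
    return excerpts

if __name__ == "__main__":
    import boto3  # requires AWS credentials with Kendra permissions
    kendra = boto3.client("kendra")
    response = kendra.query(
        IndexId="00000000-0000-0000-0000-000000000000",  # placeholder
        QueryText="customer hit the wall",
    )
    for text in top_excerpts(response["ResultItems"]):
        print(text)
```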
Choose <strong>Open Call Analytics<\/strong> to open the PCA call detail page for this call.<\/p>\n<h3>Query call analytics using SQL<\/h3>\n<p>You can integrate PCA call analytics data into a reporting or BI tool such as <a href=\"https:\/\/aws.amazon.com\/quicksight\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon QuickSight<\/a> by using <a href=\"http:\/\/aws.amazon.com\/athena\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Athena<\/a> SQL queries. To try it, open the Athena query editor. For <strong>Database<\/strong>, choose <strong>pca<\/strong>.<\/p>\n<p>Observe the table <code>parsedresults<\/code>. This table contains all the turn-by-turn transcriptions and analysis for each call, using nested structures.<\/p>\n<p>You can also review flattened result sets, which are simpler to integrate into your reporting or analytics application. Use the query editor to preview the data.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image030.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31783\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image030.png\" alt=\"\" width=\"974\" height=\"501\"><\/a><\/p>\n<h2>Processing flow overview<\/h2>\n<p>How did PCA transcribe and analyze your phone call recordings? 
Let\u2019s take a quick look at how it works.<\/p>\n<p>The following diagram shows the main data processing components and how they fit together at a high level.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image032.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31784\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image032.png\" alt=\"\" width=\"857\" height=\"583\"><\/a><\/p>\n<p>Call recording audio files are uploaded to the S3 bucket and folder, identified in the main stack outputs as <code>InputBucket<\/code> and <code>InputBucketPrefix<\/code>, respectively. The sample call recordings are automatically uploaded because you set the parameter <code>loadSampleAudioFiles<\/code> to true when you deployed PCA.<\/p>\n<p>As each recording file is added to the input bucket, an S3 Event Notification triggers a Lambda function that initiates a workflow in Step Functions to process the file. The workflow orchestrates the steps to start an Amazon Transcribe batch job and process the results by doing entity detection and additional preparation of the call analytics results. Processed results are stored as JSON files in another S3 bucket and folder, identified in the main stack outputs as <code>OutputBucket<\/code> and <code>OutputBucketPrefix<\/code><strong>.<\/strong><\/p>\n<p>As the Step Functions workflow creates each JSON results file in the output bucket, an S3 Event Notification triggers a Lambda function, which loads selected call metadata into a DynamoDB table.<\/p>\n<p>The PCA UI web app queries the DynamoDB table to retrieve the list of processed calls to display on the home page. 
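The S3-triggered ingestion step described above follows a common serverless pattern. As a rough sketch (not PCA's actual source), a Lambda handler that receives the S3 Event Notification and starts a Step Functions execution per new recording might look like this; the `STATE_MACHINE_ARN` environment variable is a placeholder:

```python
# Illustrative sketch (not PCA's actual source): a Lambda handler that
# reacts to an S3 Event Notification by starting a Step Functions
# execution for each new recording. STATE_MACHINE_ARN is a placeholder.
import json
import os
from urllib.parse import unquote_plus

def extract_recordings(event):
    """Pull (bucket, key) pairs out of an S3 Event Notification payload."""
    recordings = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        # Object keys arrive URL-encoded (for example, spaces become '+').
        key = unquote_plus(s3["object"]["key"])
        recordings.append((s3["bucket"]["name"], key))
    return recordings

def lambda_handler(event, context):
    import boto3  # provided by the Lambda runtime
    sfn = boto3.client("stepfunctions")
    for bucket, key in extract_recordings(event):
        sfn.start_execution(
            stateMachineArn=os.environ["STATE_MACHINE_ARN"],
            input=json.dumps({"bucket": bucket, "key": key}),
        )
```

The same event-plus-Lambda pattern drives the output side too, where newly written JSON results are loaded into DynamoDB.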
The call detail page reads additional detailed transcription and analytics from the JSON results file for the selected call.<\/p>\n<p>Amazon S3 Lifecycle policies delete recordings and JSON files from both input and output buckets after a configurable retention period, defined by the deployment parameter <code>RetentionDays<\/code>. S3 Event Notifications and Lambda functions keep the DynamoDB table synchronized as files are both created and deleted.<\/p>\n<p>When the <code>EnableTranscriptKendraSearch<\/code> parameter is <code>true<\/code>, the Step Functions workflow also adds time markers and metadata attributes to the transcription, which are loaded into an Amazon Kendra index. The transcription search web application is used to search call transcriptions. For more information on how this works, see <a href=\"http:\/\/www.amazon.com\/mediasearch\" target=\"_blank\" rel=\"noopener noreferrer\">Make your audio and video files searchable using Amazon Transcribe and Amazon Kendra<\/a>.<\/p>\n<h2>Monitoring and troubleshooting<\/h2>\n<p>AWS CloudFormation reports deployment failures and causes on the stack <strong>Events<\/strong> tab. See <a href=\"https:\/\/docs.aws.amazon.com\/AWSCloudFormation\/latest\/UserGuide\/troubleshooting.html\" target=\"_blank\" rel=\"noopener noreferrer\">Troubleshooting CloudFormation<\/a> for help with common deployment problems.<\/p>\n<p>PCA provides runtime monitoring and logs for each component using CloudWatch:<\/p>\n<ul>\n<li><strong>Step Functions workflow<\/strong> \u2013 On the Step Functions console, open the workflow <code><a href=\"https:\/\/console.aws.amazon.com\/states\/home?region=us-east-1#\/statemachines\/view\/arn:aws:states:us-east-1:912625584728:stateMachine:PostCallAnalyticsWorkflow\" target=\"_blank\" rel=\"noopener noreferrer\">PostCallAnalyticsWorkflow<\/a><\/code>. The <strong>Executions<\/strong> tab shows the status of each workflow run. Choose any run to see details. 
Choose <strong>CloudWatch Logs<\/strong> from the <strong>Execution event history<\/strong> to examine logs for any Lambda function that was invoked by the workflow.<\/li>\n<li><strong>PCA server and UI Lambda functions<\/strong> \u2013 On the Lambda console, filter by <code>PostCallAnalytics<\/code> to see all the PCA-related Lambda functions. Choose your function, and choose the <strong>Monitor<\/strong> tab to see function metrics. Choose <strong>View logs in CloudWatch<\/strong> to inspect function logs.<\/li>\n<\/ul>\n<h2>Cost assessment<\/h2>\n<p>For pricing information, see the pricing pages for the main services used by PCA.<\/p>\n<p>When transcription search is enabled, you incur an hourly cost for the Amazon Kendra index: $1.125\/hour for the Developer Edition (first 750 hours are free), or $1.40\/hour for the Enterprise Edition (recommended for production workloads).<\/p>\n<p>All other PCA costs are incurred based on usage, and are Free Tier eligible. After the Free Tier allowance is consumed, usage costs add up to about $0.15 for a 5-minute call recording.<\/p>\n<p>To explore PCA costs for yourself, use <a href=\"https:\/\/aws.amazon.com\/aws-cost-management\/aws-cost-explorer\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Cost Explorer<\/a> or choose <a href=\"https:\/\/console.aws.amazon.com\/billing\/home#\/bills\" target=\"_blank\" rel=\"noopener noreferrer\">Bill Details<\/a> on the <a href=\"https:\/\/console.aws.amazon.com\/billing\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Billing Dashboard<\/a> to see your month-to-date spend by service.<br \/><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image034.png\" target=\"_blank\" rel=\"noopener noreferrer\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-31785\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/ML-5919-image034.png\" 
alt=\"\" width=\"776\" height=\"398\"><\/a><\/p>\n<h2>Integrate with your contact center<\/h2>\n<p>You can configure your contact center to enable call recording. If possible, configure recordings for two channels (stereo), with customer audio on one channel (for example, channel 0) and the agent audio on the other channel (channel 1).<\/p>\n<p>Via the <a href=\"http:\/\/aws.amazon.com\/cli\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Command Line Interface<\/a> (AWS CLI) or SDK, copy your contact center recording files to the PCA input bucket folder, identified in the main stack outputs as <code>InputBucket<\/code> and <code>InputBucketPrefix<\/code>. Alternatively, if you already save your call recordings to Amazon S3, use deployment parameters <code>InputBucketName<\/code> and <code>InputBucketRawAudio<\/code> to configure PCA to use your existing S3 bucket and prefix, so you don\u2019t have to copy the files again.<\/p>\n<h2>Customize your deployment<\/h2>\n<p>Use the following CloudFormation template parameters when creating or updating your stack to customize your PCA deployment:<\/p>\n<ul>\n<li>To enable or disable the optional (experimental) transcription search feature, use <code>EnableTranscriptKendraSearch<\/code>.<\/li>\n<li>To use your existing S3 bucket for incoming call recordings, use <code>InputBucket<\/code> and <code>InputBucketPrefix<\/code>.<\/li>\n<li>To configure automatic deletion of recordings and call analysis data when using auto-provisioned S3 input and output buckets, use <code>RetentionDays<\/code>.<\/li>\n<li>To detect call timestamp, agent name, or call identifier (GUID) from the recording file name, use <code>FilenameDatetimeRegex<\/code>, <code>FilenameDatetimeFieldMap<\/code>, <code>FilenameGUIDRegex<\/code><strong>, <\/strong>and <code>FilenameAgentRegex<\/code>.<\/li>\n<li>To use the standard Amazon Transcribe API instead of the default call analytics API, use TranscribeApiMode. 
PCA automatically reverts to the standard mode API for audio recordings that aren\u2019t compatible with the call analytics API (for example, mono-channel recordings). When using the standard API, some call analytics metrics, such as issue detection and speaker loudness, aren\u2019t available.<\/li>\n<li>To set the list of supported audio languages, use <code>TranscribeLanguages<\/code>.<\/li>\n<li>To mask unwanted words, use <code>VocabFilterMode<\/code> and set <code>VocabFilterName<\/code> to the name of a vocabulary filter that you already created in Amazon Transcribe. See <a href=\"https:\/\/docs.aws.amazon.com\/transcribe\/latest\/dg\/vocabulary-filtering.html\" target=\"_blank\" rel=\"noopener noreferrer\">Vocabulary filtering<\/a> for more information.<\/li>\n<li>To improve transcription accuracy for technical and domain-specific acronyms and jargon, set <code>VocabularyName<\/code> to the name of a custom vocabulary that you already created in Amazon Transcribe. See <a href=\"https:\/\/docs.aws.amazon.com\/transcribe\/latest\/dg\/custom-vocabulary.html\" target=\"_blank\" rel=\"noopener noreferrer\">Custom vocabularies<\/a> for more information.<\/li>\n<li>To configure PCA to use single-channel audio by default, and to identify speakers using <a href=\"https:\/\/docs.aws.amazon.com\/transcribe\/latest\/dg\/diarization.html\" target=\"_blank\" rel=\"noopener noreferrer\">speaker diarization<\/a> rather than channel identification, use <code>SpeakerSeparationType<\/code> and <code>MaxSpeakers<\/code>. The default is channel identification with stereo files, using the Transcribe Call Analytics APIs to generate the richest analytics and most accurate speaker labeling.<\/li>\n<li>To redact PII from the transcriptions or from the audio, set <code>CallRedactionTranscript<\/code> or <code>CallRedactionAudio<\/code> to true. 
See <a href=\"https:\/\/docs.aws.amazon.com\/transcribe\/latest\/dg\/pii-redaction.html\" target=\"_blank\" rel=\"noopener noreferrer\">Redaction<\/a> for more information.<\/li>\n<li>To customize entity detection using Amazon Comprehend, or to provide your own CSV file to define entities, use the <strong>Entity detection<\/strong> parameters.<\/li>\n<\/ul>\n<p>See the <a href=\"https:\/\/github.com\/aws-samples\/amazon-transcribe-post-call-analytics\/blob\/main\/README.md\" target=\"_blank\" rel=\"noopener noreferrer\">README on GitHub<\/a> for more details on configuration options and operations for PCA.<\/p>\n<p>PCA is an open-source project. You can fork the <a href=\"https:\/\/github.com\/aws-samples\/amazon-transcribe-post-call-analytics\/\" target=\"_blank\" rel=\"noopener noreferrer\">PCA GitHub repository<\/a>, enhance the code, and send us pull requests so we can incorporate and share your improvements!<\/p>\n<h2>Clean up<\/h2>\n<p>When you\u2019re finished experimenting with this solution, clean up your resources by opening the AWS CloudFormation console and deleting the <code>PostCallAnalytics<\/code> stacks that you deployed. This deletes the resources that were created when you deployed the solution. S3 buckets containing your audio recordings and analytics, along with CloudWatch log groups, are retained after the stack is deleted so that your data is preserved.<\/p>\n<h2>Live Call Analytics: Companion solution<\/h2>\n<p>Our companion solution, Live Call Analytics (LCA), offers real-time transcription and analytics capabilities by using the Amazon Transcribe and Amazon Comprehend real-time APIs. Unlike PCA, which transcribes and analyzes recorded audio after the call has ended, LCA transcribes and analyzes your calls as they are happening and provides real-time updates to supervisors and agents. You can configure LCA to store call recordings in PCA\u2019s ingestion S3 bucket, and use the two solutions together to get the best of both worlds. 
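To make the filename-metadata parameters from the customization list above concrete, here is a minimal sketch of regex-based parsing of a recording name. The filename layout, regex values, and field map below are hypothetical examples chosen for illustration; they are not PCA\u2019s shipped defaults, so adapt them to your own recording naming convention.

```python
import re

# Hypothetical filename layout: GUID_AGENT_YYYY-MM-DD_HH-MM-SS.wav
# These values illustrate FilenameDatetimeRegex, FilenameDatetimeFieldMap,
# FilenameGUIDRegex, and FilenameAgentRegex -- they are examples, not defaults.
FILENAME_DATETIME_REGEX = r"(\d{4})-(\d{2})-(\d{2})_(\d{2})-(\d{2})-(\d{2})"
FILENAME_DATETIME_FIELDMAP = "%Y %m %d %H %M %S"
FILENAME_GUID_REGEX = r"^([0-9a-f-]{36})_"
FILENAME_AGENT_REGEX = r"_([A-Za-z]+)_\d{4}"

def parse_recording_name(name):
    """Extract call GUID, agent name, and timestamp fields from a filename."""
    meta = {}
    m = re.search(FILENAME_GUID_REGEX, name)
    if m:
        meta["guid"] = m.group(1)
    m = re.search(FILENAME_AGENT_REGEX, name)
    if m:
        meta["agent"] = m.group(1)
    m = re.search(FILENAME_DATETIME_REGEX, name)
    if m:
        # Pair each captured group with its strftime-style field name
        fields = FILENAME_DATETIME_FIELDMAP.split()
        meta["timestamp"] = dict(zip(fields, m.groups()))
    return meta

print(parse_recording_name(
    "0a1b2c3d-4e5f-6a7b-8c9d-0e1f2a3b4c5d_Alice_2021-12-16_09-33-05.wav"))
```

If a regex doesn\u2019t match, the corresponding field is simply omitted, which mirrors the optional nature of these deployment parameters.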
See <a href=\"https:\/\/www.amazon.com\/live-call-analytics\" target=\"_blank\" rel=\"noopener noreferrer\">Live call analytics for your contact center with Amazon language AI services<\/a> for more information.<\/p>\n<h2>Conclusion<\/h2>\n<p>The Post Call Analytics solution offers a scalable, cost-effective approach to providing call analytics, with features that help improve your callers\u2019 experience. It uses Amazon ML services like Transcribe Call Analytics and Amazon Comprehend to transcribe and extract rich insights from your customer conversations.<\/p>\n<p>The sample PCA application is provided as open source\u2014use it as a starting point for your own solution, and help us make it better by contributing back fixes and features via GitHub pull requests. For expert assistance, <a href=\"https:\/\/aws.amazon.com\/professional-services\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Professional Services<\/a> and other <a href=\"https:\/\/aws.amazon.com\/machine-learning\/contact-center-intelligence\/partners\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Partners<\/a> are here to help.<\/p>\n<p>We\u2019d love to hear from you. 
Let us know what you think in the comments section, or use the issues forum in the <a href=\"https:\/\/github.com\/aws-samples\/amazon-transcribe-post-call-analytics\" target=\"_blank\" rel=\"noopener noreferrer\">PCA GitHub repository<\/a>.<\/p>\n<hr>\n<h3>About the Authors<\/h3>\n<p><strong><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/02\/10\/Bob-Strahan-p.png\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-21654 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/02\/10\/Bob-Strahan-p.png\" alt=\"Bob Strahan\" width=\"100\" height=\"133\"><\/a>Bob Strahan<\/strong> is a Principal Solutions Architect in the AWS Language AI Services team.<\/p>\n<p><strong><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/08\/03\/andrew-kane.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-26761 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/08\/03\/andrew-kane.jpg\" alt=\"\" width=\"100\" height=\"134\"><\/a>Dr. Andrew Kane<\/strong> is an AWS Principal WW Tech Lead (AI Language Services) based out of London. He focuses on the AWS Language and Vision AI services, helping our customers architect multiple AI services into a single use-case driven solution. Before joining AWS at the beginning of 2015, Andrew spent two decades working in the fields of signal processing, financial payments systems, weapons tracking, and editorial and publishing systems. 
He is a keen karate enthusiast (just one belt away from Black Belt) and is also an avid home-brewer, using automated brewing hardware and other IoT sensors.<\/p>\n<p><strong> <a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/Steve-Engledow.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-31788 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/Steve-Engledow.jpg\" alt=\"\" width=\"100\" height=\"133\"><\/a>Steve Engledow<\/strong> is a Solutions Engineer working with internal and external AWS customers to build reusable solutions to common problems.<\/p>\n<p><strong><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/15\/ckp.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-17135 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2020\/10\/15\/ckp.jpg\" alt=\"\" width=\"101\" height=\"136\"><\/a>Connor Kirkpatrick<\/strong> is an AWS Solutions Engineer based in the UK. Connor works with the AWS Solution Architects to create standardised tools, code samples, demonstrations, and quickstarts. He is an enthusiastic rower, wobbly cyclist, and occasional baker.<\/p>\n<p><strong><a href=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/Franco-Rezabek.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-31787 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2021\/12\/16\/Franco-Rezabek.jpg\" alt=\"\" width=\"100\" height=\"133\"><\/a>Franco Rezabek<\/strong> is an AWS Solutions Engineer based in London, UK. 
Franco works with AWS Solution Architects to create standardized tools, code samples, demonstrations, and quick starts.<\/p>\n<p>       <!-- '\"` -->\n      <\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/aws.amazon.com\/blogs\/machine-learning\/post-call-analytics-for-your-contact-center-with-amazon-language-ai-services\/<\/p>\n","protected":false},"author":0,"featured_media":1392,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1391"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=1391"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1391\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/1392"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=1391"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=1391"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=1391"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}