Automating Agent Actions with Cognitive Services Speech Recognition in USD


[The following article is cross-posted from Geoff Innis’ blog]

Microsoft’s Cognitive Services offer an array of intelligent APIs that allow developers to build new apps, and enhance existing ones, with the power of machine-based AI. These services enable users to interact with applications and tools in a natural and contextual manner. These types of intelligent application interactions have significant applicability in the realm of Service, to augment the experiences of both customers and agents.

In this post, we will focus on the Bing Speech API, and how it can be leveraged within the agent’s experience, to transcribe the spoken conversation with customers, and automate agent activities within the Unified Service Desk. We will build a custom Unified Service Desk hosted control that will allow us to capture real-time spoken text, and automatically trigger a search for authoritative knowledge based on the captured text.

 

Concept and Value

As agents interact with customers by telephone, they are engaged in a conversation in which the details of the customer’s need will be conveyed by voice during the call. Agents will typically need to listen to the information being conveyed, then take specific actions within the agent desktop to progress the case or incident for the customer. In many instances, agents will need to refer to authoritative knowledgebase articles to guide them, and this will often require them to type queries into knowledgebase searches, to find appropriate articles. By transcribing the speech during the conversation, we can not only capture a text-based record of some or all of the conversation, but we can also use the captured text to automatically perform tasks such as presenting contextually relevant knowledge, without requiring the agent to type. Benefits include:

  • Quicker access to relevant knowledge, reducing customer wait time, and overall call time
  • Improved maintenance of issue context, through ongoing access to the previously transcribed text, both during the call, and after the call concludes
  • Improved agent experience by interacting in a natural and intuitive manner

This 30-second video shows the custom control in action (with sound):

 

Real-time recognition and transcribing of speech within the agent desktop opens up a wide array of possibilities for adding value during customer interactions. Extensions of this functionality could include:

  • Speech-driven interactions on other channels such as chat and email
  • Other voice-powered automation of the agent desktop, aided by the Language Understanding Intelligent Service (LUIS), which is integrated into the Speech API and facilitates understanding the intent of commands; as an example, this could allow the agent to say “send an email”, or “start a funds transfer”
  • Saving of the captured transcript in an Activity on the associated case, for a record of the conversation, in situations where automated call transcription is not done through the associated telephony provider
  • Extending the voice capture to both agent and customer, through the Speech API’s ability to transcribe speech from live audio sources other than the microphone

The custom control we build in this post will allow us to click a button to initiate the capture of voice from the user’s microphone, transcribe the speech to text, present the captured speech in a textbox within the custom USD control, and trigger the search for knowledge in the Unified Service Desk KM Control by raising an event.

In this post, we will build on the sample code available in these two resources:

 

Pre-requisites

The pre-requisites for building and deploying our custom hosted control include:

 

Building our Custom Control

In Visual Studio 2015, we will first create a new project using the Visual C# USD Custom Hosted Control template, and will name our project SpeechSearchUSD.

[Screenshot: NewProject]

 

We will use the instructions found on NuGet for our respective build to add the Speech-To-Text API client library to our project. For example, the x64 instructions can be found here.

We also set the Platform Target for our project build to the appropriate CPU (x64 or x86), to correspond with our Speech-to-Text client library.
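As a point of reference, the Platform Target setting corresponds to an entry like the following in the project's .csproj file (shown here for an x64 build; this is an illustrative fragment, and the actual file will contain additional settings):

```xml
<!-- Illustrative .csproj fragment; your project file will contain additional settings -->
<PropertyGroup>
  <PlatformTarget>x64</PlatformTarget>
</PropertyGroup>
```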

 

Laying Out our UI

For our custom control, we will design it to sit in the LeftPanelFill display group of the default USD layout. In our XAML code, we add:

  • A Button to initiate speech capture from the microphone, in both Long Dictation mode, and Short Phrase mode
  • A Button to clear any previously captured speech transcripts
  • Textboxes to present:
    • the partial results as they are captured during utterances
    • the final, best-choice results of individual utterances
  • RadioButtons, to allow the agent to specify Short Phrase mode, or Long Dictation mode
    • Long Dictation mode can capture utterances of up to 2 minutes in length, with partial results being captured during the utterances, and final results being captured based on where the sentence pauses are determined to be
    • Short Phrase mode can capture utterances of up to 15 seconds long, with partial results being captured during utterances, and one final result set of n-best choices
  • A Checkbox, to allow the agent to choose whether knowledgebase searches are automatically conducted as final results of utterances are captured

 

Our final markup should appear as shown below:

<USD:DynamicsBaseHostedControl x:Class="SpeechSearchUSD.USDControl"
    x:Name="_usdHostedControl"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008" 
    xmlns:USD="clr-namespace:Microsoft.Crm.UnifiedServiceDesk.Dynamics;assembly=Microsoft.Crm.UnifiedServiceDesk.Dynamics"
    mc:Ignorable="d" 
    d:DesignHeight="359" d:DesignWidth="370">
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="auto"/>
            <RowDefinition Height="*"/>
        </Grid.RowDefinitions>
        <ToolBarTray Name="ProgrammableToolbarTray" Grid.Row="0" Focusable="false"/>
        <Label x:Name="lbl_SpeechControlTitle" Content="Call Transcript Knowledge Search" 
               HorizontalAlignment="Left" Margin="10,-2,0,0" Grid.Row="1" VerticalAlignment="Top"/>
        <Button x:Name="btn_CaptureSpeech" Content="Capture" HorizontalAlignment="Left" Margin="10,29,0,0" 
                Grid.Row="1" VerticalAlignment="Top" Width="69" Click="btn_CaptureSpeech_Click"  Height="35px" />
        <Button x:Name="btn_Clear" Content="Clear" HorizontalAlignment="Left" Margin="276,29,0,0" 
                Grid.Row="1" VerticalAlignment="Top" Width="84" Click="btn_Clear_Click" Height="35px" />
        <CheckBox x:Name="chk_AutoSearch" Content="Auto-search" HorizontalAlignment="Left" Margin="276,68,0,0" 
                  Grid.Row="1" VerticalAlignment="Top" 
                  IsChecked="{Binding ElementName=_usdHostedControl, Path=IsAutoSearchEnabled}" />
        <Label x:Name="lbl_Results" Content="Transcript" HorizontalAlignment="Left" Margin="10,61,0,0" 
               Grid.Row="1" VerticalAlignment="Top"/>
        <TextBox x:Name="txtbx_Result" HorizontalAlignment="Left" Height="135" Margin="10,84,0,0" Grid.Row="1" 
                 TextWrapping="Wrap" VerticalAlignment="Top" Width="350" 
                 PreviewMouseLeftButtonUp="txtbx_Result_PreviewMouseLeftButtonUp" VerticalScrollBarVisibility="Visible" />
        <Label x:Name="lbl_CaptureStream" Content="Capture Stream" HorizontalAlignment="Left" Margin="10,224,0,0" 
               Grid.Row="1" VerticalAlignment="Top"/>
        <TextBox x:Name="txtbx_Stream" HorizontalAlignment="Left" Height="91" Margin="10,250,0,10" Grid.RowSpan="2" 
                 TextWrapping="Wrap" VerticalAlignment="Top" Width="350" VerticalScrollBarVisibility="Visible"/>
        <RadioButton x:Name="rad_Short" 
                     IsChecked="{Binding ElementName=_usdHostedControl, Path=IsMicrophoneClientShortPhrase}" 
                     Click="radBtn_Click" Content="Short" HorizontalAlignment="Left" Margin="88,29,0,0" Grid.Row="1" 
                     VerticalAlignment="Top"/>
        <RadioButton x:Name="rad_Dictation" 
                     IsChecked="{Binding ElementName=_usdHostedControl, Path=IsMicrophoneClientDictation}" 
                     Click="radBtn_Click" Content="Dictation" HorizontalAlignment="Left" Margin="88,49,0,0" 
                     Grid.Row="1" VerticalAlignment="Top"/>
    </Grid>
</USD:DynamicsBaseHostedControl>

Our custom control should appear as shown below:
[Screenshot: CustomControlLayout]

 

Adding Our C# Code

In our USDControl.xaml.cs code-behind, we add several using statements. We also:

  • use a private variable for our Bing Speech API Subscription Key that we obtained as one of our pre-requisites
  • set a variable for the language we will use when recognizing speech; several languages are supported, but we will use English in our sample
  • set several other variables and events that we will use in our code

using System;
using System.Collections.Generic;
using System.Globalization;
using System.Windows;
using System.Windows.Input;
using Microsoft.Crm.UnifiedServiceDesk.CommonUtility;
using Microsoft.Crm.UnifiedServiceDesk.Dynamics;
using Microsoft.Crm.UnifiedServiceDesk.Dynamics.Utilities;
using Microsoft.Uii.Desktop.SessionManager;

using Microsoft.ProjectOxford.SpeechRecognition;
using System.ComponentModel;
using System.Runtime.CompilerServices;

namespace SpeechSearchUSD
{
    /// <summary>
    /// Interaction logic for USDControl.xaml
    /// See USD API documentation for full API Information available via this control.
    /// </summary>
    public partial class USDControl : DynamicsBaseHostedControl, INotifyPropertyChanged
    {
        #region Vars
        /// <summary>
        /// Log writer for USD 
        /// </summary>
        private TraceLogger LogWriter = null;

        // Speech API variables:
        // You could also pull this from USD Options:
        //   _subscriptionKey = base.CRMWindowRouter.GlobalSettings["MySpeechSubscriptionKey"];
        private string _subscriptionKey = "InsertYourSubscriptionKey";

        // recognition language:
        string _recoLanguage = "en-us";

        // Speech Capture microphone recognition client
        private MicrophoneRecognitionClient _micClient;

        // gets/sets whether this is a short-phrase capture:
        public bool IsMicrophoneClientShortPhrase { get; set; }
        // gets/sets whether this is a dictation capture:
        public bool IsMicrophoneClientDictation { get; set; }
        // gets/sets whether we will automatically trigger a search on captured phrase:
        public bool IsAutoSearchEnabled { get; set; }

To our UII Constructor, we add a call to our InitUIBinding method, which configures some initial bound variables:
public USDControl(Guid appID, string appName, string initString)
    : base(appID, appName, initString)
{
    InitializeComponent();
    InitUIBinding();

    // This will create a log writer with the default provider for Unified Service desk
    LogWriter = new TraceLogger();

}
private void InitUIBinding()
{
    IsMicrophoneClientShortPhrase = true;
    IsMicrophoneClientDictation = false;
    IsAutoSearchEnabled = true;

    // set default speech recognition and auto-search mode in UI:
    rad_Short.IsChecked = true;
    chk_AutoSearch.IsChecked = true;

}

We now add in our click event handler for our Capture button. In it, we set the SpeechRecognitionMode, depending on the checkbox selection of the agent. We temporarily disable our input buttons, and call our CreateMicrophoneRecognitionClient method with our capture mode, recognition language, and subscription key as parameters, and get a MicrophoneRecognitionClient object returned, which we then use to start recognition, with the StartMicAndRecognition method:
private void btn_CaptureSpeech_Click(object sender, RoutedEventArgs e)
{

    SpeechRecognitionMode mode = IsMicrophoneClientDictation
        ? SpeechRecognitionMode.LongDictation
        : SpeechRecognitionMode.ShortPhrase;

    // Disable buttons
    btn_CaptureSpeech.IsEnabled = false;
    rad_Short.IsEnabled = false;
    rad_Dictation.IsEnabled = false;

    if (_micClient == null)
    {
        _micClient = CreateMicrophoneRecognitionClient(mode, _recoLanguage, _subscriptionKey);
    }
    _micClient.StartMicAndRecognition();
}

We create our instance of the MicrophoneRecognitionClient class using the CreateMicrophoneRecognitionClient method. In this method, we use SpeechRecognitionServiceFactory to create our MicrophoneRecognitionClient. We then set event handlers as appropriate, depending on whether we are in Short Phrase, or Long Dictation mode:
MicrophoneRecognitionClient CreateMicrophoneRecognitionClient(SpeechRecognitionMode recoMode, string language, string subscriptionKey)
{
    MicrophoneRecognitionClient micClient = SpeechRecognitionServiceFactory.CreateMicrophoneClient(
        recoMode,
        language,
        subscriptionKey,
        subscriptionKey);

    // Add event handlers for events in speech recognition process:
    micClient.OnPartialResponseReceived += OnPartialResponseReceivedHandler;
    if (recoMode == SpeechRecognitionMode.ShortPhrase)
    {
        micClient.OnResponseReceived += OnMicShortPhraseResponseReceivedHandler;
    }
    else if (recoMode == SpeechRecognitionMode.LongDictation)
    {
        micClient.OnResponseReceived += OnMicDictationResponseReceivedHandler;
    }

    return micClient;
}

We adapt our event handlers from those included in the Windows sample application in the Cognitive Services SDK. For the partial response event handler, we will write the result to our Capture Stream Textbox. For the final response event handlers for Long Dictation and Short Phrase modes, we end the recognition capture and re-enable our UI elements, as well as writing our final response results to the Transcript Textbox:
void OnPartialResponseReceivedHandler(object sender, PartialSpeechResponseEventArgs e)
{
    WriteStream(e.PartialResult);
}

void OnMicShortPhraseResponseReceivedHandler(object sender, SpeechResponseEventArgs e)
{
    Dispatcher.Invoke((Action)(() =>
    {

        // End Microphone Recognition, as final response has been received
        _micClient.EndMicAndRecognition();

        WriteResponseResult(e);

        // Re-enable the UI elements
        btn_CaptureSpeech.IsEnabled = true;
        rad_Short.IsEnabled = true;
        rad_Dictation.IsEnabled = true;
    }));
}

void OnMicDictationResponseReceivedHandler(object sender, SpeechResponseEventArgs e)
{

    if (e.PhraseResponse.RecognitionStatus == RecognitionStatus.EndOfDictation ||
        e.PhraseResponse.RecognitionStatus == RecognitionStatus.DictationEndSilenceTimeout)
    {
        Dispatcher.Invoke((Action)(() =>
        {

            // End Microphone Recognition, as final response has been received
            _micClient.EndMicAndRecognition();

            btn_CaptureSpeech.IsEnabled = true;
            rad_Short.IsEnabled = true;
            rad_Dictation.IsEnabled = true;

        }));
    }
    WriteResponseResult(e);
}

When writing our final response after speech capture, we loop through the response results, and build a string containing each result, along with its corresponding Confidence, which we display in our Capture Stream Textbox. For the top result returned, we also display it in our Transcript Textbox, and if auto-searching is selected, we call our TriggerSearch method:
private void WriteResponseResult(SpeechResponseEventArgs e)
{
    if (e.PhraseResponse.Results.Length == 0)
    {
        // no result available
    }
    else
    {
        string FinalResult = "";
        for (int i = 0; i < e.PhraseResponse.Results.Length; i++)
        {
            FinalResult += "[" + i + "]: " + e.PhraseResponse.Results[i].DisplayText;
            if (e.PhraseResponse.Results[i].Confidence != Confidence.None)
            {
                FinalResult += " (Confidence: " + e.PhraseResponse.Results[i].Confidence + ")\n";
            }
            if (i == 0)
            {
                // for the top result, output it to the result/transcript textbox:
                WriteResponse(e.PhraseResponse.Results[i].DisplayText);

                // If autosearch is enabled, trigger a search now:
                if (IsAutoSearchEnabled == true)
                {
                    // Add context item to the current session. 
                    TriggerSearch(e.PhraseResponse.Results[i].DisplayText);
                }
            }
        }
        // Show final results in the stream textbox:
        WriteStream(FinalResult);
    }
}

In our TriggerSearch method, we adapt the sample included in the custom hosted control template to add our captured text string to the current USD context. We then fire an event named QueryTriggered:
private void TriggerSearch(string searchTerm)
{
    // add search string to the current context:
    Dictionary<string, CRMApplicationData> myQuery = new Dictionary<string, CRMApplicationData>(); 
    myQuery.Add("SpeechSearchTerm", new CRMApplicationData() { name = "SpeechSearchTerm", type = "string", value = searchTerm.Replace("'", @"\'") });
    ((DynamicsCustomerRecord)((AgentDesktopSession)localSessionManager.ActiveSession).Customer.DesktopCustomer).MergeReplacementParameter(this.ApplicationName, myQuery, true);

    //raise event in USD, to allow us to conduct our KB Search, or take another action with the data:
    this.FireEvent("QueryTriggered");
}

We also add in an event handler to allow a text selection in the Transcript Textbox to trigger a search via the TriggerSearch method:
private void txtbx_Result_PreviewMouseLeftButtonUp(object sender, MouseButtonEventArgs e)
{
    if (!string.IsNullOrEmpty(txtbx_Result.SelectedText))
    {
        TriggerSearch(txtbx_Result.SelectedText);
    }
}

In our code-behind, we add in some additional methods to write text to our textboxes, handle button clicks, and other basics. These additional code snippets are found in the downloadable solution at the end of this post.
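For reference, the WriteResponse and WriteStream helpers used in the handlers above simply marshal text onto the UI thread and write it to the appropriate textbox, and the Clear button handler empties both textboxes. The actual implementations ship with the downloadable solution; a minimal sketch might look like this (the method names match those referenced earlier, but the bodies shown here are illustrative):

```csharp
// Illustrative sketch; see the downloadable solution for the actual implementations.
private void WriteResponse(string text)
{
    // Append the final recognition result to the Transcript textbox on the UI thread
    Dispatcher.Invoke((Action)(() =>
    {
        txtbx_Result.Text += text + "\n";
        txtbx_Result.ScrollToEnd();
    }));
}

private void WriteStream(string text)
{
    // Show the in-progress (partial or final) recognition text in the Capture Stream textbox
    Dispatcher.Invoke((Action)(() =>
    {
        txtbx_Stream.Text = text;
    }));
}

private void btn_Clear_Click(object sender, RoutedEventArgs e)
{
    // Clear any previously captured transcript and stream text
    txtbx_Result.Clear();
    txtbx_Stream.Clear();
}
```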

 

We can now Build our solution in Visual Studio.

Upon successful build, we can retrieve the resulting DLL, which should be available in a subdirectory of our project folder:

.\SpeechSearchUSD\bin\Debug\SpeechSearchUSD.dll

We can copy this DLL into the USD directory on our client machine, which by default will be:

C:\Program Files\Microsoft Dynamics CRM USD\USD

 

We also copy the SpeechClient DLL that makes up the Cognitive Services Speech-To-Text API client library that we retrieved from GitHub earlier, into our USD directory.

Note that instead of copying the files manually, we could also take advantage of the Customization Files feature, as outlined here, to manage the deployment of our custom DLLs.

 

Configuring Unified Service Desk

We now need to configure Unified Service Desk, by adding our custom hosted control, and the associated actions and events to allow it to display and interact with the KM control.

To start, we log in to the CRM web client as a user with System Administrator role.

Next, we will add a new Hosted Control, which is our new custom control that we have created. By navigating to Settings > Unified Service Desk > Hosted Controls, we add in a Hosted Control with inputs as shown below. Note that the USD Component Type is USD Hosted Control, and the Assembly URI and Assembly Type values incorporate the name of our DLL.

[Screenshot: NewHostedControl]

 

Next, we will add a new Action Call, by navigating to Settings > Unified Service Desk > Action Calls, and adding a new one, as shown below. We use the default action of our newly added hosted control, which will instantiate the control when the action is called:

[Screenshot: ActionCall]

 

Next, we will add our new Action Call to the SessionNew event of the CRM Global Manager, which will cause our control to be instantiated when a new session is initiated. We can do this by navigating to Settings > Unified Service Desk > Events, and opening the SessionNew event of the CRM Global Manager. We add our new action to the Active Actions for the event:

[Screenshot: SessionNewActions]

 

Next, we will add configurations to execute a knowledgebase search when appropriate, using the data from the control. We navigate to Settings > Unified Service Desk > Events, and add a New Event. Our Event is called QueryTriggered, to match the name of the event our custom USD control raises. We add this event to the SpeechCaptureandSearch Hosted Control:

[Screenshot: QueryTriggered]

 

Next, we create a new Action Call for the QueryTriggered event, by clicking the + to the top-right of the sub-grid, searching, then clicking the ‘+New’ button, as shown below:

[Screenshot: NewActionCall]

 

Our new Action Call will execute the Search action of the KB Search control, which is our name for our KM Control. We pass the search term data that our custom control added to the context, by adding a Data value of query=[[SpeechCaptureandSearch.SpeechSearchTerm]] .

[Screenshot: SpeechSearchActionCall]

 

We can also add a Condition to our Action Call, so that the search is only conducted when we have a non-empty search term, with a condition like this: "[[SpeechCaptureandSearch.SpeechSearchTerm]]" != ""

[Screenshot: Condition]

 

If desired, we can also add some sub-actions to our Search KB with Speech Data Action Call by selecting the chevron in the top navigation, and choosing Sub Action Calls. For example, we may wish to ensure the right panel is expanded, and that the KB Search hosted control is being shown:

[Screenshot: Sub Action Calls]

 

Testing our Custom Hosted Control

We are now ready to launch the Unified Service Desk on our client machine, and test the hosted control. After we log into USD, and start a session, our Speech Search control will appear, and we can start capturing our speech, and searching for related knowledge:

[Screenshot: FinalShot]

 

Some things to try, if starting with the sample data currently deployed in a CRM Trial, include:

  • With the Short radio button selected, Auto-search checkbox checked, and your microphone enabled, click the Capture button, and say “Yes, I can help you with Order Shipping Times”, and see the corresponding KB article automatically surface in the KM Control
  • Highlight the phrase “Order Shipping Times” from the Transcript Textbox, to see how you can automatically search on key phrases from the past conversation

 

The solution source code can be downloaded here. You will need to insert your Speech API subscription key into the placeholder in the code-behind.
