Teaching Computers to Detect Sentiment

{Image: Machine Learning Cartoon}

“Sentiment detection enriches the customer insights you get from interaction analytics, and provides value far beyond the contact center, informing strategic decisions across your entire enterprise.”

In the last blog, I talked about the benefits your organization can gain by using sentiment detection to reveal customer attitudes toward your company and products.

This week, let’s take a look “behind the screen” at the technology that powers sentiment detection, to get a better understanding of how it interprets attitudes and predicts what those might mean for your business.

Teach Me How You’re Feeling
You may recall from the last blog that a person’s tone of voice, their volume of speech, and the speed at which they are talking all provide insights into their feelings about a subject. Loud, fast speech patterns typically mean anger or excitement.  Laughter usually means happiness. And of course, the words people use most strongly convey the meaning they are trying to express.

Humans learn to interpret these clues through a lifetime of conversations.  But today, companies often rely on computers and other technology to interact with their customers.  How can we expect computers to sense how a person is feeling – much less identify their underlying attitudes?  The answer:  machine learning.  Your computer, or interaction analytics platform, can be trained to listen for the myriad clues that might indicate a speaker’s sentiment and then interpret them in the context of your business.  The goal is to ensure the sentiment measurements are not generic but specific to your business and your customers, so they are as useful as possible.

Training Model Students
Starting with a training corpus, a set of examples that represent the concepts you want to convey, a computer can learn to notice similar patterns in new data that is fed into it.  As new data is ingested, a background algorithm objectively compares it to your examples, and assigns it a probability score based on how closely it matches the original data patterns. This process is called machine learning, and it is step one of training the computer to “think” for itself.
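To make that concrete, here is a minimal sketch of the idea in Python – not any vendor's actual system. It trains a tiny Naive Bayes text classifier on a handful of labeled example phrases (all invented for illustration) and then assigns probability scores to new text based on how closely it matches each label's patterns:

```python
from collections import Counter
import math

# Invented training corpus: (text, label) examples standing in for
# your real, business-specific interactions.
training_corpus = [
    ("thank you so much this was great", "positive"),
    ("I love the new plan it works perfectly", "positive"),
    ("this is terrible I want to cancel", "negative"),
    ("I am angry the bill is wrong again", "negative"),
]

def train(corpus):
    """Count word frequencies per label: the 'patterns' the machine learns."""
    word_counts = {}          # label -> Counter of words
    label_counts = Counter()  # label -> number of examples
    for text, label in corpus:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    return word_counts, label_counts

def score(text, word_counts, label_counts):
    """Assign new text a probability score per label (Naive Bayes)."""
    vocab = {w for c in word_counts.values() for w in c}
    total = sum(label_counts.values())
    log_probs = {}
    for label, counts in word_counts.items():
        lp = math.log(label_counts[label] / total)
        denom = sum(counts.values()) + len(vocab)
        for word in text.lower().split():
            lp += math.log((counts[word] + 1) / denom)  # Laplace smoothing
        log_probs[label] = lp
    # Convert log-probabilities into normalized probabilities
    m = max(log_probs.values())
    exp = {k: math.exp(v - m) for k, v in log_probs.items()}
    z = sum(exp.values())
    return {k: v / z for k, v in exp.items()}

model = train(training_corpus)
probs = score("this plan is great", *model)  # higher "positive" probability
```

A production system would use far richer features than bare word counts, but the shape is the same: learn patterns from labeled examples, then score new data by how closely it matches them.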

It’s imperative the examples in a training corpus are specific to your organization and industry, or the model won’t be useful.  A computer trained to identify specific patterns in healthcare interactions probably wouldn’t be much use to a telecommunications contact center.

For your sentiment detection model, you’ll want to use your own customer interactions and customer data, such as your after-call survey results, to define what good, bad, or neutral sentiment looks like in your own environment.

For example, feed the system 20,000 calls and their associated Net Promoter (NPS®) scores from the after-call surveys and the machine will learn which words, tones of voice, and amounts of laughter and interruption tend to lead to a high NPS score versus a low one.  It does this by using those NPS scores to separate all of the calls into buckets for “good,” “bad,” and “neutral.”
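That bucketing step can be sketched in a few lines. The cutoffs below follow the standard Net Promoter convention (9–10 promoter, 7–8 passive, 0–6 detractor); the call records themselves are invented for illustration:

```python
def nps_bucket(score):
    """Map a 0-10 NPS survey score to a sentiment training bucket."""
    if score >= 9:
        return "good"     # promoter
    if score >= 7:
        return "neutral"  # passive
    return "bad"          # detractor

# Invented call records standing in for your 20,000 surveyed calls
calls = [
    {"id": "call-001", "nps": 10},
    {"id": "call-002", "nps": 3},
    {"id": "call-003", "nps": 8},
]

buckets = {}
for call in calls:
    buckets.setdefault(nps_bucket(call["nps"]), []).append(call["id"])
# buckets now groups call IDs under "good", "bad", and "neutral"
```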

Once the calls are in these buckets, the machine learning algorithm crunches and crunches through all of those examples, comparing and contrasting them across thousands and thousands, sometimes millions of dimensions or “features” of the calls.  The vast majority of those features won’t have any value at all for determining what leads to positive or negative sentiment, but enough of them will.  This is a true “needle in a haystack” search, one a human could never perform at this scale.  Aren’t computers grand?
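Here is a toy illustration of that feature search, using word occurrence as the feature and invented transcripts. For each word, it compares the (smoothed) rate at which the word appears in “good” versus “bad” calls; a large log-odds value in either direction means the feature discriminates, while a value near zero means it carries no signal and can be ignored:

```python
from collections import Counter
import math

# Invented transcripts standing in for the "good" and "bad" buckets
good_calls = ["thanks that fixed it", "great service thanks"]
bad_calls = ["still broken and I am upset", "this service is broken"]

def log_odds(good_texts, bad_texts):
    """Score each word by how strongly it separates good from bad calls."""
    good = Counter(w for t in good_texts for w in t.lower().split())
    bad = Counter(w for t in bad_texts for w in t.lower().split())
    vocab = set(good) | set(bad)
    g_total, b_total = sum(good.values()), sum(bad.values())
    scores = {}
    for w in vocab:
        # Laplace-smoothed rate in each bucket, then the log of their ratio
        g_rate = (good[w] + 1) / (g_total + len(vocab))
        b_rate = (bad[w] + 1) / (b_total + len(vocab))
        scores[w] = math.log(g_rate / b_rate)
    return scores

scores = log_odds(good_calls, bad_calls)
# "thanks" leans positive, "broken" leans negative,
# and a neutral word like "service" sits close to zero
```

A real platform does this over millions of features, acoustic as well as linguistic, but the principle is the same: most features wash out, and the few that discriminate become the model.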

More is More
The more learning examples you provide a machine learning system, the more precise the predictive model becomes; with more examples in your training set, the machine has more information to “weigh” the impact of each feature on the probability scores.

Here’s where the importance of being able to capture both acoustic and linguistic speech components comes into play.  By combining the linguistic clues of actual words and phrases with acoustic clues like customer emotion, cross talk, and laughter, the machine can weigh the importance of what the customer said and how the customer said it against the customer’s own statement of satisfaction with your products and services.
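A minimal sketch of that combination step: merge linguistic features (here, simple word counts) and acoustic measurements into one flat feature vector the learning algorithm can weigh uniformly. The feature names and values below are invented; a real platform would extract the acoustic measurements from the audio itself:

```python
from collections import Counter

def linguistic_features(transcript):
    """Word counts from the transcript, prefixed so names can't collide."""
    return {f"word:{w}": c
            for w, c in Counter(transcript.lower().split()).items()}

def combine(transcript, acoustics):
    """One flat feature vector mixing linguistic and acoustic clues."""
    features = linguistic_features(transcript)
    features.update({f"acoustic:{k}": v for k, v in acoustics.items()})
    return features

vector = combine(
    "thank you that really helped",
    {"mean_volume_db": 62.0,   # how loudly the customer spoke
     "speech_rate_wps": 2.4,   # words per second
     "laughter_events": 1,     # detected laughter
     "crosstalk_seconds": 0.0} # time both parties talked over each other
)
```

Because both kinds of clues live in the same vector, the training step can discover, for example, that laughter plus certain phrases predicts a high NPS score while the same phrases spoken loudly and quickly do not.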

This contributes to your ability to “tune the model” to produce the most revealing results across the widest swaths of customer populations, providing the granularity of detail that enables your business to control for personal temperament and external factors even as you get to the heart of how your customers feel about you.

Graduation Day
Once the machine has learned the data patterns from all of your examples, it can apply this acquired knowledge to the rest of the interactions you need to analyze.   Like a college student who’s passed the freshman 101 courses, your computer has mastered the fundamentals of the subject matter and can apply what it’s learned to more advanced material.   The machine takes its first steps into lifelong learning, and it’s poised to help your organization take on the challenge of sentiment detection.

{Photo Credit:  jesadaphorn/FreeDigitalPhotos.net}

Categories: Best Practices