This health insurer is committed to putting health first – for its teammates, its customers, and the company itself. Through its insurance services, it makes it easier for the millions of people it serves to achieve their best health – delivering the care and service they need, when they need it. These efforts lead to a better quality of life for people with Medicare or Medicaid, families, individuals, military service personnel, and communities at large.
Calls coming into the contact center are directed to a voice self-service application that attempts to handle patient claims, provider benefit inquiries, and insurance premium quotes for both service providers and the individuals (members) insured under its plans. The application receives more than 10,000 calls per day.
Since members call the application relatively infrequently, they tend to be less skilled at navigating the call script and less inclined to learn how to use the IVR. Many opt for a human agent as soon as they encounter reprompt/retry messages, find the prompts cognitively challenging, or feel that self-service is becoming unproductive.
Providers, on the other hand, call the application several times per day, generally for a specific, well-defined purpose such as benefits coverage or a claims inquiry. They know from past experience that the IVR is the fastest way to answer their inquiries and that dealing with an agent may actually take longer.

Part of the challenge this customer faced was handling the wide range of caller types that makes up its daily call volume.
During the initial proof of concept (PoC) with the customer, we implemented our technology to determine what effect dynamically and automatically adjusting the audio playback rate of voice prompts in their IVR would have on voice self-service performance. We ran A/B tests on more than 20,000 phone calls over a one-week period and used Gyst Analytics to collect data on caller behavior as it relates to engagement within the voice application. Existing voice prompts were speed-adjusted in direct relation to individual caller skill.
During the trial, audio playback speed levels of 100, 110, 114, 117, and 119 percent were used; we subsequently tried altering the speeds to 106, 112, 115, 118, and 121 percent. A playback level of 100 indicates the normal playback rate of the audio, 110 represents 110 percent of normal, and so forth. Audio was adjusted in accordance with the detected skill level of each caller at each conversation turn in the voice application.
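To illustrate per-turn adjustment, the sketch below maps a detected skill level to one of the trial's first set of playback tiers and nudges the estimate after each turn. The skill-scoring heuristic and the function names here are our illustrative assumptions, not Gyst's actual model:

```python
# Hypothetical sketch: pick a playback rate per conversation turn from a
# detected caller skill level. Tier values mirror the trial's first set of
# speeds (100, 110, 114, 117, 119 percent); the scoring rules are assumed.

PLAYBACK_TIERS = [1.00, 1.10, 1.14, 1.17, 1.19]  # multiple of normal speed

def playback_rate(skill_level: int) -> float:
    """Map a skill level (0 = novice .. 4 = expert) to an audio rate."""
    skill_level = max(0, min(skill_level, len(PLAYBACK_TIERS) - 1))
    return PLAYBACK_TIERS[skill_level]

def update_skill(skill_level: int, had_error: bool, barged_in: bool) -> int:
    """Nudge the estimate after each turn: an input error suggests a less
    skilled caller; barging in over a prompt suggests a more skilled one."""
    if had_error:
        return max(0, skill_level - 1)
    if barged_in:
        return min(len(PLAYBACK_TIERS) - 1, skill_level + 1)
    return skill_level
```

A caller who barges in repeatedly would climb toward the 119 percent tier, while one who triggers reprompts would drift back toward normal speed.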
In summary, the results indicated:
Callers hearing speed-adjusted audio had 36.9% more engagement (Conversation Turns) with the IVR than callers hearing standard audio. They also encountered 12.5% fewer error messages and thus had to reenter information 12.5% fewer times.
The difference in cost between a call handled by voice self-service and a call handled by an agent can range from $2 to $6 or more, depending on the length of the call, the agent's knowledge and training, onshore/offshore sourcing, and other factors. For the calculations below, we assume a cost differential of $4 per call.
Standard calls averaged 3.2 Conversation Turns. Had the 36.9% increase in engagement experienced by the adjusted-audio callers been handled by agents instead, the additional cost for a contact center handling 10,000 self-service calls per day would be $4 × (10,000 × 0.369) / 3.2 ≈ $4,612 per day. Put another way, replacing standard audio with speed-adjusted audio in this particular voice application generates roughly $1,683,380 in annual cost savings for the contact center.
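The arithmetic can be reproduced in a few lines. All inputs are the figures stated above; note that the quoted annual total rounds the daily savings to a whole dollar before annualizing, so the unrounded figure lands slightly higher (about $1,683,563):

```python
# Reproduce the savings estimate from the measured engagement lift.
cost_differential = 4.00   # $ saved per call kept in self-service
daily_calls = 10_000       # self-service calls handled per day
engagement_lift = 0.369    # 36.9% more conversation turns
turns_per_call = 3.2       # average turns in a standard call

extra_turns = daily_calls * engagement_lift            # extra turns/day
equivalent_calls = extra_turns / turns_per_call        # ~1,153 calls/day
daily_savings = cost_differential * equivalent_calls   # ~$4,612.50/day
annual_savings = daily_savings * 365                   # ~$1.68M/year
```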
Direct cost savings aside, fewer error messages and requests to reenter information, inquiries resolved on first contact, and agents freed up for less mundane calls all contribute to improved customer service and brand image.
We've made it easy for you to give Gyst a test drive with your own callers and your own voice applications.
Via our Proof of Concept (PoC) plan, we first request a cloned version of your voice application from you. We then enhance the clone with Gyst by inserting calls to our web API. Next, we run both versions of the voice application side by side in limited production, starting with only a trickle of phone calls to the Gyst-enhanced version.
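The side-by-side rollout can be pictured as a simple weighted router. The names and the 2% starting weight below are illustrative assumptions, not part of Gyst's service:

```python
import random

def route_call(enhanced_fraction: float, rng: random.Random) -> str:
    """Send a given fraction of calls to the Gyst-enhanced clone and the
    rest to the original application (hypothetical routing sketch)."""
    return "enhanced" if rng.random() < enhanced_fraction else "standard"

rng = random.Random(42)  # seeded so the split is reproducible
counts = {"standard": 0, "enhanced": 0}
for _ in range(10_000):
    counts[route_call(0.02, rng)] += 1  # start with a 2% trickle
```

Raising `enhanced_fraction` over time is all it takes to grow the trickle into a volume large enough for meaningful A/B comparison.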
As call volume to the Gyst-enhanced version is slowly increased, we use A/B tests to compare metrics such as call handle time, voice self-service rate, caller input errors, IVR usage/engagement, and goal completion. This will clearly demonstrate the technology's benefits for your organization and your callers as they engage with your voice application. As an added bonus, we'll perform a highly detailed analysis of your current voice application's efficiency and performance using our Gyst Analytics software. You can see the level of detail you'll receive in this report here.
The PoC usually takes about two weeks to accumulate enough phone calls for A/B testing, though we can adjust this period to suit your daily call volume and sampling-rate preferences. If you have an existing Amazon Web Services account, the PoC can usually be administered and paid for via your existing contracts and volume agreements with AWS. While the PoC is provided as a fully implemented turnkey service, you can read more about the technical details of how Gyst is implemented in AWS here.