How Contextual Answers Transforms Customer Support
We've identified gaps where Generative AI can create fast, tangible impact for businesses, starting with their customer support teams. Learn how our Contextual Answers system reduces the need for large support teams, while improving customer satisfaction.
Delivering exceptional customer support is more challenging than ever before. Customers expect fast, comprehensive solutions to increasingly complex issues, putting immense pressure on support teams. Yet sifting through vast knowledge bases to find relevant answers is inefficient and wastes precious time.
AI21 Labs has identified gaps where Generative AI can create fast, tangible impact for businesses, starting with their customer support teams. ‘Contextual Answers’ is an AI system based on question-answering technology that allows support agents to search their company’s extensive knowledge bases to rapidly provide customers with accurate, personalized solutions. It can help businesses boost agent productivity, reduce handling times, and increase their first-call resolution rates – all the while enhancing customer satisfaction.
This article will explore how Contextual Answers integrates into existing workflows to boost productivity. You’ll learn how it empowers support teams to deliver the responsive, satisfying experiences customers demand in today's highly competitive landscape.
The Growing Pains of Customer Support
Studies show that increasing customer retention by just 5% can boost profits by 25%-95%. Repurchases and renewals also increase by 82% when customers receive excellent, prompt service. Yet despite the fact that high-quality customer service boosts profits and loyalty, achieving it remains a formidable challenge.
One major pain point is the substantial cost of staffing large, round-the-clock support teams to meet today's expectations of instant, reliable responses. Even then, delivering personalized, researched answers quickly strains resources.
Even large teams struggle to resolve queries quickly and correctly. Inquiries often require extensive research and tailored responses, lengthening wait times and frustrating customers.
Despite readily available website information, users often escalate straightforward questions to agents. This overburdens staff and disappoints customers. Improving accessibility is key for satisfaction and efficiency.
Using Contextual Answers to Overcome Support Challenges
What is ‘Contextual Answers’?
Contextual Answers is a task-specific Generative AI system that uses large language models (LLMs) to provide accurate responses to questions based on a company's data. It lets users ask natural-language questions and receive answers grounded in the documents and information fed into the LLM. This prevents the fabricated responses, known as hallucinations, that are common with generative AI.
At AI21 Labs, we have built a comprehensive end-to-end API solution that allows companies to effortlessly upload their documents and let their employees or customers ask questions in natural language. The responses rely solely on the uploaded information. In addition, the Contextual Answers tool provides a reference to the source from which each answer was taken, ensuring an unparalleled level of credibility.
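The grounding principle is easy to sketch in code. The toy retriever below is purely illustrative (it is not the AI21 API): it answers only from supplied documents, returns the source alongside the answer, and returns nothing at all when the documents don't cover the question.

```python
# Toy illustration of grounded QA: answers come only from the supplied
# documents, each answer carries its source, and out-of-scope questions
# get no answer instead of a fabricated one. Naive keyword overlap
# stands in for real retrieval.

def answer_from_documents(question, documents):
    """Return (answer, source) drawn only from `documents`, or (None, None)."""
    q_terms = {w.lower().strip("?.,") for w in question.split()}
    best_answer, best_source, best_score = None, None, 0
    for source, text in documents.items():
        for sentence in text.split(". "):
            overlap = len(q_terms & {w.lower().strip(".,") for w in sentence.split()})
            if overlap > best_score:
                best_answer, best_source, best_score = sentence.strip("."), source, overlap
    return best_answer, best_source  # (None, None) means "not in my documents"

docs = {
    "refund_policy.txt": "Refunds are issued within 14 days. Shipping fees are non-refundable.",
    "warranty.txt": "All devices carry a two-year warranty.",
}
answer, source = answer_from_documents("How long is the warranty?", docs)
```

A production system would use semantic retrieval and an LLM to phrase the answer, but the contract is the same: an answer, a source reference, or an explicit "no answer".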
How Contextual Answers Can Improve Customer Support
There are two main ways to use Contextual Answers for customer support:

1. Agent assist. The Contextual Answers system gives support agents quick access to relevant information, significantly cutting research time when customers ask questions. The result is shorter wait times for customers – a key driver of customer satisfaction.

2. Customer self-service. The system can be built into the company's website as a dynamic chatbot or refined search bar, answering common customer questions instantly and accurately. Questions already answered on the site never reach the support team, which frees agents to focus on complex issues and improves team efficiency.
Companies can use one or both approaches. Either way, the Contextual Answers system reduces the need for large support teams while improving customer satisfaction. The outcome is a two-fold benefit: a significant reduction in support staffing costs, along with higher profits from the ongoing retention of satisfied customers.
How to Implement Contextual Answers in Your Company
1. Make a Plan
The first step is to make a plan for what you want to achieve. Decide if Contextual Answers will be used internally, externally, or both. At this point your company will also need to choose an LLM provider, weighing factors such as price, latency, throughput (how many questions the system can handle at once), API quality, and ease of use.
You’ll also have to decide if you need multiple languages. If so, you’ll need to add a translation step to the process, and pick which languages to include.
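The translation step can be sketched as a thin wrapper around the QA call. Everything below is a placeholder: in production the two translator callables would call a real translation service, and the dictionaries are toy stand-ins so the sketch runs end to end.

```python
# Hedged sketch of the optional translation step: translate the question
# into English, run the QA call, translate the answer back.

def multilingual_answer(question, source_lang, qa_fn, to_english, from_english):
    q_en = question if source_lang == "en" else to_english(question, source_lang)
    a_en = qa_fn(q_en)  # the grounded QA call happens in English
    return a_en if source_lang == "en" else from_english(a_en, source_lang)

# Toy stand-ins for a translation service:
to_en = {("¿Cuál es la tarifa?", "es"): "What is the fee?"}
from_en = {("The fee is 5 dollars.", "es"): "La tarifa es de 5 dólares."}
ans = multilingual_answer(
    "¿Cuál es la tarifa?", "es",
    qa_fn=lambda q: "The fee is 5 dollars.",
    to_english=lambda text, lang: to_en[(text, lang)],
    from_english=lambda text, lang: from_en[(text, lang)],
)
```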
For example, an online bank approached us with the need to improve their customer support.
Being primarily online, one of their main features is having available support 24/7, and Contextual Answers was a perfect fit for this use case.
During the initial planning phase, this digital bank determined that, for its specific use case, an externally facing customer solution best suited its needs. A user-friendly chatbot was developed to field customer inquiries and provide relevant answers. Fast, reliable responses grounded in the company's data were the priority.
It was equally important for them to maintain consistency in tone, format, and response length during customer interactions. They also included a translation component into the process, recognizing the multilingual nature of their clientele.
During the planning phase, these decisions were used as guides to streamline our processes and align them with the bank's specifications.
2. Curate and Label the Data
Once your company has its plan in place, relevant data must be compiled and curated.
If the system is for external use, and customers will be interacting with it, the required data should also be publicly available to customers. For internal use by the support team, the company can also incorporate non-public policies and guidelines.
After the data has been collected, it needs to be labeled accordingly.
For instance, a SaaS company might have premium, gold, and platinum tiers of customers. These tiers use different product features and need different answers to their questions. It’s important for the company to tag each data item with the tier to which it belongs in order to provide the right answers. So when a customer asks a question, Contextual Answers will know which data to extract the information from, and customers will get answers that match their product features and tier.
You can even go further, and personalize the answer according to the customer's profile, giving them a customized and accurate response.
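Tier-aware labeling can be as simple as attaching metadata to each document and filtering at query time. The schema below is hypothetical:

```python
# Hypothetical labeling schema: each document is tagged with the customer
# tiers it applies to, and retrieval only considers documents matching
# the asking customer's tier.

documents = [
    {"id": "export-guide", "tiers": {"gold", "platinum"},
     "text": "Gold and platinum plans include CSV export."},
    {"id": "basics", "tiers": {"premium", "gold", "platinum"},
     "text": "All plans include email support."},
]

def docs_for_tier(docs, tier):
    """Restrict the QA context to documents labeled for this tier."""
    return [d for d in docs if tier in d["tiers"]]

premium_context = docs_for_tier(documents, "premium")
```

With this in place, a premium customer asking about CSV export never sees an answer drawn from gold-tier documentation.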
In the case of the digital bank that we worked with, they created a database of extensive support documents that included all of the information that they were authorized to provide to their customers. This database was the LLM’s single source of truth. It was the context used to answer customer questions, and unanswerable questions were escalated to human representatives. They also aggregated previously asked questions with ideal responses to refine answer formats.
This shows how thoughtful data gathering, organization and labeling ensures Contextual Answers has the appropriate contextual information to deliver precise, personalized responses. This strengthens accuracy while mirroring company guidelines.
3. Develop & Deploy Contextual Answers
In order for Contextual Answers to work, the company must synchronize the relevant, curated documents with its chosen AI21 large language model. At this stage, the LLM provider customizes the Contextual Answers tool to meet the company's precise needs.
When using Contextual Answers for customer support, a company can adjust the tone of communication based on individual customer profiles. A customer who is older may need a more sophisticated response, while someone who is younger may need a more casual response. As well as extracting personalized information, the LLM can extract answers from distinct document sets for each query.
Additionally, for a retail company, Contextual Answers can pull from different product manuals and catalogs to find answers based on what the customer purchased. By customizing which documents are searched, it provides responses tailored to each customer's specific items.
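A minimal sketch of this profile-driven customization, with illustrative field names and an assumed age-based tone rule (neither is part of the product):

```python
# Illustrative sketch: derive per-query settings (tone plus the manuals
# to search) from a customer profile. Field names and the age threshold
# are assumptions for the example only.

def query_settings(profile, manuals_by_product):
    tone = "casual" if profile["age"] < 30 else "formal"
    documents = [manuals_by_product[p] for p in profile["purchases"]
                 if p in manuals_by_product]
    return {"tone": tone, "documents": documents}

settings = query_settings(
    {"age": 24, "purchases": ["KX-100"]},
    {"KX-100": "kx100_manual.pdf", "TO-9": "to9_manual.pdf"},
)
```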
Once the product is developed to the company's specifications, it is deployed into its operational framework. In the digital bank's case, the refined AI chatbot was integrated seamlessly into the bank's application to guarantee constant accessibility for users.
Whether you need to deploy a solution locally or globally, AI21 Labs offers an easy plug-and-play option.
4. Continuous Evaluation
In this final phase of the process, it remains vital to continuously evaluate Contextual Answers' performance, thoroughly checking the user experience for employees, customers, or both. An essential component of this evaluation is validating the accuracy and precision of the generated answers.
Ongoing monitoring of key performance indicators is also important: customer satisfaction metrics, retention rates, cost efficiency, and revenue. This evaluation framework shows how much Contextual Answers genuinely contributes to the company's growth, while revealing opportunities for improvement.
In the case of the digital bank, we created a grading system that ranked each chatbot answer from 1 to 5, where 1 was unacceptable and 5 was ideal. This scoring ensured responses stayed aligned with the bank's requirements, and the bank will need to continue grading its chatbot answers to maintain quality.
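The grading loop can be sketched as follows; the review threshold is illustrative:

```python
# Sketch of the 1-5 grading loop: each chatbot answer receives a human
# score; answers below the threshold are queued for review, and the
# mean score tracks overall quality over time.

def review_queue(scored_answers, threshold=3):
    """Return (answers scoring below threshold, mean score)."""
    flagged = [answer for answer, score in scored_answers if score < threshold]
    mean = sum(score for _, score in scored_answers) / len(scored_answers)
    return flagged, round(mean, 2)

flagged, mean = review_queue([("a1", 5), ("a2", 2), ("a3", 4)])
```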
As a result of maintaining the chatbot's high quality, the bank has seen some positive results, including better customer satisfaction and a smaller customer support team, which reduces costs.
As customers demand ever-faster and more personalized support, Contextual Answers represents a powerful opportunity. This AI-powered solution streamlines workflows while automating repetitive tasks. Support agents can rapidly address inquiries by tapping into collective knowledge. Meanwhile, self-service options reassure customers their needs are heard.
For organizations seeking to strengthen support and boost loyalty, Contextual Answers is a strategic investment. Don't leave gains on the table – contact our experts today to explore how Contextual Answers can help your business thrive.
In August 2021 we released Jurassic-1, a 178B-parameter autoregressive language model. We’re thankful for the reception it got – over 10,000 developers signed up, and hundreds of commercial applications are in various stages of development. Mega models such as Jurassic-1, GPT-3 and others are indeed amazing, and open up exciting opportunities. But these models are also inherently limited. They can’t access your company database, don’t have access to current information (for example, latest COVID numbers or dollar-euro exchange rate), can’t reason (for example, their arithmetic capabilities don’t come close to that of an HP calculator from the 1970s), and are prohibitively expensive to update. A MRKL system such as Jurassic-X enjoys all the advantages of mega language models, with none of these disadvantages. Here’s how it works.
Composite multi-expert problem: the list of “green energy companies” is routed to the Wiki API, “last month” dates are extracted from the calendar, and “share prices” from the database. The “largest increase” is computed by the calculator, and finally the answer is formatted by the language model.
There are of course many details and challenges in making all this work - training the discrete experts, smoothing the interface between them and the neural network, routing among the different modules, and more. To get a deeper sense for MRKL systems, how they fit in the technology landscape, and some of the technical challenges in implementing them, see our MRKL paper. For a deeper technical look at how to handle one of the implementation challenges, namely avoiding model explosion, see our paper on leveraging frozen mega LMs.
A further look at the advantages of Jurassic-X
Even without diving into technical details, it’s easy to get a sense for the advantages of Jurassic-X. Here are some of the capabilities it offers, and how these can be used for practical applications.
Reading and updating your database in free language
Language models are closed boxes which you can use, but not change. However, in many practical cases you would want to use the power of a language model to analyze information you possess - the supplies in your store, your company’s payroll, the grades in your school and more. Jurassic-X can connect to your databases so that you can ‘talk’ to your data and explore what you need: “Find the cheapest shampoo that has a rosy smell”, “Which computing stock increased the most in the last week?” and more. Furthermore, our system enables joining several databases, and can update your database using free language (see figure below).
Jurassic-X enables you to plug in YOUR company's database (inventories, salary sheets, etc.) and extract information using free language
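The "talk to your data" flow can be sketched with SQLite. In Jurassic-X the language model converts the free-language request into a structured query; the hard-coded mapping below is a stand-in for that step.

```python
# Toy "talk to your data" flow: natural language in, database answer out.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (name TEXT, smell TEXT, price REAL)")
con.executemany("INSERT INTO products VALUES (?, ?, ?)",
                [("RoseSoft", "rosy", 3.5), ("PineFresh", "pine", 2.0),
                 ("RoseLux", "rosy", 5.0)])

# Stand-in for the model's text-to-query step (hypothetical mapping):
nl_to_sql = {
    "Find the cheapest shampoo that has a rosy smell":
        "SELECT name FROM products WHERE smell = 'rosy' ORDER BY price LIMIT 1",
}
row = con.execute(nl_to_sql["Find the cheapest shampoo that has a rosy smell"]).fetchone()
```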
AI-assisted text generation on current affairs
Language models can generate text, yet cannot be used to create text on current affairs, because their vast knowledge (historic dates, world leaders and more) represents the world as it was when they were trained. This is clearly (and somewhat embarrassingly) demonstrated when three of the world’s leading language models (including our own Jurassic-1) still claim Donald Trump is the US president more than a year after Joe Biden was sworn into office. Jurassic-X solves this problem by simply plugging into resources such as Wikidata, giving it continuous access to up-to-date knowledge. This opens up a new avenue for AI-assisted text generation on current affairs.
Q: Who is the president of the United States?
Jurassic-X: Joe Biden is the 46th and current president.
Jurassic-X can assist in text generation on up-to-date events by combining a powerful language model with access to Wikidata.
Performing math operations
A 6-year-old child learns math from rules, not only by memorizing examples. In contrast, language models are designed to learn from examples, and consequently can solve very basic math like 1-, 2-, and possibly 3-digit addition, but struggle with anything more complex. With increased training time, better data and larger models, the performance will improve, but will not reach the robustness of an HP calculator from the 1970s. Jurassic-X takes a different approach and calls upon a calculator whenever the router identifies a math problem. The problem can be phrased in natural language and is converted by the language model into the format the calculator requires (numbers and math operations). The computation is performed and the answer is converted back into free language. Importantly (see the example below), the process is made transparent to the user by revealing the computation performed, increasing trust in the system. In contrast, language models provide answers that might seem reasonable but are wrong, making them impractical to use.
Q: The company had 655400 shares which they divided equally among 94 employees. How many did each employee get?
Other language models: “Each employee got 7000 stocks” / (no answer provided)
Jurassic-X: 6972.3 (computation shown: X = 655400 / 94)
Jurassic-X can answer non-trivial math operations which are phrased in natural language, made possible by the combination of a language model and a calculator.
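The calculator route can be sketched in a few lines. The keyword-based parsing below is a crude stand-in for the language model's reformatting step:

```python
# Crude sketch of the calculator route: pull the numbers and operation
# out of the question, compute, and surface the computation alongside
# the answer for transparency.
import re

def calculator_route(question):
    nums = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", question)]
    if "divided" in question and len(nums) == 2:
        expr, result = f"{nums[0]:g} / {nums[1]:g}", nums[0] / nums[1]
    elif ("times" in question or "multiplied" in question) and len(nums) == 2:
        expr, result = f"{nums[0]:g} * {nums[1]:g}", nums[0] * nums[1]
    else:
        return None  # let another expert handle the question
    # Transparency: return the computation performed, not just the answer.
    return {"computation": expr, "answer": round(result, 1)}

out = calculator_route("The company had 655400 shares which they divided "
                       "equally among 94 employees. How many did each employee get?")
```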
Solving simple questions might require multiple steps, for example - “Do more people live in Tel Aviv or in Berlin?” requires answering: i. What is the population of Tel-Aviv? ii. What is the population of Berlin? iii. Which is larger? This is a highly non-trivial process for a language model, and language models fail to answer this question (see example). Moreover, the user can’t know the process leading to the answers, hence is unable to trust them. Jurassic-X can decompose such problems into the basic questions, route each to the relevant expert, and put together an answer in free language. Importantly, Jurassic-X not only provides the correct answer but also displays the steps taken to reach it, increasing the trust in the system.
Q: Do more people live in Tel Aviv or in Berlin?
Other language models: “There are more people living in Tel Aviv than in Berlin.” / “Berlin and Tel Aviv are roughly the same size”
Search engine: the first hit is a comparison between Tel Aviv and Berlin.
Jurassic-X: More people live in Berlin than in Tel Aviv.
Steps shown: [‘Return population of Tel Aviv’; ‘Return population of Berlin’; ‘Return which is bigger between #1 and #2’] Step 1: Population of Tel Aviv. Result - 451523. Step 2: Population of Berlin. Result - 3664088. Step 3: Which is bigger, #1 or #2. Result - Berlin.
Jurassic-X breaks down compositional questions, answers the basic sub-questions, and puts together the answer. Importantly, this process is transparent to the user, greatly increasing trust in the system.
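The decomposition pattern looks like this in miniature, with a lookup table standing in for the Wikidata expert (population figures taken from the example above):

```python
# Sketch of compositional decomposition: split the comparison into
# sub-questions, answer each via the relevant expert, and keep the
# intermediate steps visible for transparency.

POPULATION = {"Tel Aviv": 451_523, "Berlin": 3_664_088}  # from the example above

def compare_populations(city_a, city_b, lookup):
    steps = [f"Step 1: population of {city_a} = {lookup[city_a]}",
             f"Step 2: population of {city_b} = {lookup[city_b]}"]
    winner = city_a if lookup[city_a] > lookup[city_b] else city_b
    steps.append(f"Step 3: larger = {winner}")
    return winner, steps

winner, steps = compare_populations("Tel Aviv", "Berlin", POPULATION)
```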
Dynamic information (like weather and currency exchange rates)
Certain types of information change continuously - weather, currency exchange rates, share values and more. Such information will never be captured by language models, yet can easily be handled by Jurassic-X by integrating it with a reliable source of information. We performed a proof of concept on two such features - weather and currency exchange rates - and the design enables quick integration with more sources to solve your use case. Weather: a loosely phrased question about the weather elicits an answer from every language model, but language models always return the same answer regardless of when the question is asked (funny, right?), while Jurassic-X provides an answer based on the actual weather forecast.
Q: I’m going to be in New-York in 3 days. Should I pack my umbrella?
Other language models: “Yes, you should pack your umbrella.” / “Yes, you should. The weather forecast is rain.”
Search engine: links to weather websites.
Jurassic-X: Yes, you should pack your umbrella, because in New York in 3 days there will be broken clouds and the temperature will be -2 degrees.
Currency exchange rates change much faster than weather predictions, yet the Jurassic-X concept - a language model connected to a reliable source of information - easily solves this problem as well.
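The dynamic-information route boils down to letting a live API supply the number at question time. In the sketch below, `get_rate` is a placeholder for a real exchange-rate API; the stubbed value mirrors the example that follows.

```python
# Sketch of the dynamic-information route: the model formats the request,
# a live API supplies the rate, and the model phrases the reply.

def currency_answer(amount, src, dst, get_rate):
    rate = get_rate(src, dst)  # fetched at question time, not frozen at training time
    return f"{amount:g} {src} = {amount * rate:g} {dst}"

# Stubbed rate so the sketch runs offline:
reply = currency_answer(100, "USD", "MAD", get_rate=lambda s, d: 9.34003)
```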
Q: How much Moroccan money will I get for 100 bucks?
Other language models: “100 moroccan dirhams is about 27$.” / “How much is 100 dollars in moroccan money?” / “100 dirhams = 10.75 dollars”
Jurassic-X: 100 USD = 934.003 MAD
Jurassic-X combines a language model with access to APIs with continuously changing information. This is demonstrated for weather forecasts and currency exchange rates, and can easily be extended to other information sources
Transparency and trust
Transparency is a critical element that is lacking in language models, and its absence prevents much wider adoption of these models. The lack of transparency is demonstrated by the answers to the question “Was Clinton ever elected as president of the United States?”. The answer, of course, depends on which Clinton you have in mind, which only Jurassic-X makes clear thanks to its disambiguation component. More examples of Jurassic-X’s transparency were demonstrated above: displaying the math operation performed, and the answers to the simple sub-questions in the multi-step setting.
Q: Was Clinton ever elected president of the United States?
Other language models: “No, Clinton was never elected as president of the United States.” / “Clinton was elected president in the 1992 presidential elections…”
Jurassic-X: Bill Clinton was elected president.
Jurassic-X is designed to be more transparent by displaying which expert answered which part of the question, and by presenting the intermediate steps taken and not just the black-box response
That's it, you get the picture. The use cases above give you a sense of some things you could do with Jurassic-X, but now it's your turn. A MRKL system such as Jurassic-X is as flexible as your imagination. What do you want to accomplish? Contact us for early access.