Sariel Moshe, co-founder and CTO at xFind, delivered a thought-provoking webinar on AI’s impact on the support experience. Sariel is KCS v6 Practices certified and shared how knowledge will play a critical role in the AI-powered support experience.
Large Language Models (LLMs) enable computers to ‘imitate’ natural human language by capturing a statistical model of it. This technology is considered Generative Artificial Intelligence (GenAI) because it allows computers to generate an answer to any question in a human-like form. While powerful, Sariel shared that LLMs are tricky and don’t always follow the rules. He cautioned that LLMs:
- Have no commitment to reality
- Provide wrong answers with confidence
- Are only as smart as their training data
- Don’t necessarily meet security and privacy regulations
Sariel stated that you still need a human in the loop, as ground-truth knowledge requires human oversight.
Through the webinar polling, it was exciting to see that most participants are actively deploying AI in their support organization or plan to deploy it within the next 1-2 years. Sariel believes that in 5 years, 80% of current support work will be fully automated.
Sariel provided many insights in this webinar and concluded with three takeaways:
- Refocus your efforts on establishing good knowledge practices
- Start internally
- Reassess your KPIs and Content Standards
Recording
This webinar was packed with a wealth of information! We encourage you to access the PDF of the slides and view the chat transcript and recording below:
Resources Shared
- Slides presented by Sariel
- Upcoming Public Events
- KCS Training
- Getting Started with AI, via Consortium Members
- If you are interested in presenting at a future KCS in Action or being part of a Practitioner Panel on a topic, contact Arnfinn
- Get notified about future events by signing up for the mailing list
Chat Highlights
Answers in italics provided by Sariel
Michael Fisher: We use Genesys for our Service Desk, and are introducing a product called observe.ai that listens to the ongoing call, makes suggestions to the agent as the call proceeds, and then afterwards summarizes the call. That call summary strikes me as something that could be very well suited as input for new KCS articles. Does this make sense to you?
Absolutely, that is one of the main takeaways I was aiming for – GenAI turns customer interactions into automatic knowledge generation that can then feed self-service. BUT it requires a human in the loop to make sure the summaries and suggestions make sense.
Katie Ellis (she/her): How does Proactive-Predictive relate to the KCS feedback loop? Creating knowledge when it’s needed?
Proactive and predictive support are more related to the Evolve Loop: once you detect recurring issues and their causes, you can begin searching for ways to mitigate those causes at the source before they become an issue for your customer.
Michael Fisher: You say “Customers don’t want to interact with an agent.” That’s a very broad statement – what is your source, what are you basing that declaration on?
Arnfinn Austefjord, Consortium, Boulder CO: @Michael Fisher I should have phrased that better. Customers would like their questions answered or issues resolved as quickly as possible with the least amount of effort. There have been many studies that show customers prefer Self-Service over logging a case, as Self-Service is quicker and requires less effort.
Jason O’Donnell | Percepta | PDX: TSIA (and I believe Gartner) have reports showing this trended preference. Obviously it is not an all-or-nothing statement in day-to-day work, but definitely a trend towards one channel versus the other.
Alex van Dijk: It’s a bit of a Chinese finger puzzle in that it is interdependent. The more intuitive/easy we make self-service, the less people will want to interact with a live agent. Until then, customers will prefer talking to an agent for anything beyond the simplest questions
Justin Loera: There is the 80/20 rule as indicated but not one size fits all. The approach is more hybrid as not all content will warrant or yield a Q&A.
Lynette Ledoux | SearchUnify: What does AI do when a user question or issue description doesn’t lead to a single answer?
As I noted in the webinar there are two general approaches to the use of AI:
- Training (fine-tuning) the model – in this approach, the AI is quite capable of detecting multiple questions and answering each, but you don’t have much control over how that will be carried out, and it can also depend a lot on the wording of the question.
- 2-phased approach of question answering over retrieved documentation – in this approach, a search engine first retrieves a list of items from which to generate the answer. This engine can be built (though most aren’t) so that it retrieves separate item lists when it detects there are multiple issues to match on; if that happens, each list can feed a separate AI answer (a rough sketch of this 2-phased flow follows).
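To make the 2-phased flow concrete, here is a minimal sketch in Python, assuming a toy knowledge base and a simple TF-IDF retriever; the ARTICLES list and the prompt wording are illustrative placeholders, and in practice the assembled prompt would be sent to whatever LLM you use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical, tiny set of vetted knowledge articles (phase 1 searches these).
ARTICLES = [
    {"id": "KB-101", "text": "To reset your password, open Settings > Account and choose Reset."},
    {"id": "KB-204", "text": "A 500 error during login usually means the authentication service needs a restart."},
]

def retrieve(question, top_k=2):
    """Phase 1: rank the vetted articles against the question (plain TF-IDF here)."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([a["text"] for a in ARTICLES] + [question])
    scores = cosine_similarity(matrix[len(ARTICLES)], matrix[:len(ARTICLES)]).flatten()
    ranked = sorted(zip(scores, ARTICLES), key=lambda pair: pair[0], reverse=True)
    return [article for _, article in ranked[:top_k]]

def build_prompt(question, articles):
    """Phase 2: the generative model answers ONLY from the retrieved items and cites them."""
    context = "\n".join(f"[{a['id']}] {a['text']}" for a in articles)
    return (
        "Answer the question using ONLY the articles below, and cite the article IDs you used.\n"
        f"{context}\nQuestion: {question}"
    )

question = "How do I reset my password?"
print(build_prompt(question, retrieve(question)))  # send this prompt to your LLM of choice
```

Because the answer is generated only from the retrieved items, the same mechanism also makes it possible to show the sources behind each answer, which comes up later in the chat.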
jdtocado: An issue that exists with LLM-produced answers is the validated data issue. Unless you have a human in the middle, you cannot be sure that the answer created by the Generative AI meets all the non-technical requirements – safety, marketing, ethical, legal, cultural. The other way to do this is to have Machine Learning find already created, validated data and provide that.
That is a great summary of the main takeaway from the webinar.
Katie Ellis (she/her): How do you know the answers the AI provides are Correct vs just an answer?
As explained later in the webinar, currently the only way to know that is to have the AI retrieve its answers from existing vetted knowledge items.
andyk: I’ve experienced ChatGPT lying to me. It did admit that it was wrong once I corrected it, which is better than many humans do.
Harold Mason: This is different, I think, because the answer was previously provided by an agent.
Sara Feldman | Consortium | Las Vegas: “Confident liars” scare me! Haha (human or machine)
andyk: Never read the news! 🙂
Thomas Blackburn: I’ve seen ChatGPT provide creative answers that no one present during the demonstration had sufficient expertise to verify.
jdtocado: “…an infinitely sort of patient intern that sometimes lies to you.”
- Ethan Mollick, an associate professor of management at The Wharton School of the University of Pennsylvania on HBR Ideacast: Why You (and Your Company) Need to Experiment with ChatGPT Now
ZviEizenberg: It was not lying to you, just exercising the methodology of “if you cannot dazzle them with brilliance, baffle them with BS”
Russ Brookes, Avaya: What are your thoughts on adopting a strategy of running open-source, commercially licensed LLMs locally (to avoid data sovereignty issues)?
That is an approach xFind is actively developing, as it solves many of the security and privacy issues that existing commercial solutions create when dealing with sensitive company data.
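For readers curious what “locally” can look like in practice, here is a minimal sketch assuming the Hugging Face transformers library and a permissively licensed open model; the model name and prompt are examples only, not a specific recommendation.

```python
from transformers import pipeline

# Example of an Apache-2.0 licensed open model; swap in whatever local model
# fits your hardware and licensing requirements.
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.1",
    device_map="auto",  # load onto local GPU(s)/CPU, so no data leaves your environment
)

# Hypothetical prompt: turn a support interaction into a draft knowledge article.
prompt = "Summarize this support case as a draft KCS article:\n<case text here>"
result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])
```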
Michael Fisher: I am interested in more detail on what you mean by “promptable”
‘Promptability’ means developing knowledge with one main question in mind: will the Generative AI be able to generate an answer to a future similar question based on what is written in this item? This goes hand-in-hand with the KCS approach of “consistently capturing information in a way that is both structured enough to be useful, and dynamic enough to suit the rapidly changing environment”. In other words – don’t rely on plain product documentation to feed AI; if a human customer will have a hard time finding a specific answer to their question there, don’t think AI will have it any easier.
Matt Seaman: The Predictive Customer Engagement model, which Consortium Members have been working on for 7-ish years now, is a double loop model, where the Improve Loop should help you constantly be tuning and ensuring that the Event Loop and the usage of an AI/ML technology to create actionable outputs is accurate and not starting to go sideways based on the training data it is using.
Michael Fisher: You describe feeding GenAI a series of questions and answers. How about just pointing it at a stack of existing knowledge articles, is that a viable approach?
As noted later in the webinar, my recommended way of ‘pointing it at a stack’ is the 2-phased approach – retrieve the relevant knowledge articles and then generate a specific answer based on them.
Fine-tuning AI on a stack of existing knowledge articles is possible but does not enable full validation of the responses the AI will generate.
Kendall Brenneise | F5: Where does the stat ‘solving the 20% new incoming cases which don’t have structured knowledge’ come from?
As Matt Seaman noted later in the chat, this was based on the ‘generalized’ Consortium experience that 80% of what comes into support are known issues that have been solved.
Michael Fisher: (Wow, we are a tough room…)
Thomas Blackburn: educated skeptics
Michael Fisher: the worst kind
Matt Seaman: Not 100% sure, but I think this refers to the ‘generalized’ Consortium experience that 80% of what comes into support are known issues that have been solved.
Katie Ellis (she/her): Are LLMs able to generate info based ONLY on the provided set of documents? Can it “forget” the other realms/documents it used for prior questions?
Absolutely – that is the power of the 2-phased approach: it enables much more control over the items used to answer questions.
Justin Loera: Please quantify Structured vs Unstructured as LLMs are generally trained on both types.
- Structured – explicitly developed for consumption as a source of knowledge
- Unstructured – developed as part of an ongoing process, without future use in mind
It is true that LLMs are trained on both types of data. I made the differentiation mainly to point out that the 2-phased Question Answering over retrieved documents approach works much better with structured data than with unstructured data.
Katie Ellis (she/her): How do LLMs respond as the environment changes? We are always replacing one thing with another, so the old answers are no longer relevant and there are new answers. How good are these AIs at forgetting the older info as the environment shifts to new info?
Removing old information entirely from the reach of AI is still up to humans… This is true whether you fine-tune AI or use the 2-phased Question Answering over documents approach I promoted. The latter enables much more control, in that you can ‘fix’ the situation by simply removing the irrelevant information from the list of retrievable documentation.
Russ Brookes, Avaya: How do you recommend knowledge be structured to lend itself well to AI?
It actually mainly comes back to better enforcing KCS standards – developing knowledge that answers actual customer questions in simple, readable language. If humans are able to figure out the answer, GenAI probably will be as well.
Lynette Ledoux | SearchUnify: Structured = consistent, simple article template, metadata, state
Clint Sanderson (NetApp): Seems like a critical attribute we need for Support is for the AI tool to also provide its sources. Otherwise, how do I as a knowledge worker improve the results?
In the 2-phased Question Answering over documents approach I promoted, this is absolutely possible – the system knows and can present the exact items that were used to generate the response. BTW – you will notice that the GenAI answers in the Bing search engine provide the sources as well, as it is using this approach.
Michael Fisher: Doesn’t “developing structured knowledge” go down the path of “Just In Case” knowledge, which is anathema to KCS?
I actually think the exact opposite is true, and that is what I meant by developing ‘promptable’ knowledge items – ‘Promptability’ means developing knowledge with one main question in mind: will the Generative AI be able to generate an answer to a future similar question based on what is written in this item? This goes hand-in-hand with the KCS approach of “consistently capturing information in a way that is both structured enough to be useful, and dynamic enough to suit the rapidly changing environment”. In other words – don’t rely on plain product documentation to feed AI; if a human customer will have a hard time finding a specific answer to their question there, don’t think AI will have it any easier.
Michael Fisher: (To be clear – I maintain that there is a place and a role for “Just In Case” knowledge.)
Pawan Khatawane: There will always be.
John Coles: To summarize … Use GenAI for Knowledge Harvesting with human reconciliation. Does it require the GenAI Engine to provide the Search Results or could we just use our existing Search Engine?
Diana Sarbou: So the safest approach initially is having GenAI take on a public-viewing role of source content, so nothing internal-only is leaked out when answering an external customer issue?
Correct, that is true for both the AI fine-tuning approach and the 2-phased Question Answering over retrieved documents approach – generative AI does not currently have the ability to reliably filter out certain types of information when generating text unless explicit rules are set out, and these are not easy to build into the system.
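One way to picture this safeguard is to filter what the AI is allowed to retrieve rather than trying to filter what it generates. A minimal sketch, assuming each knowledge item carries a hypothetical audience flag in its metadata:

```python
# Hypothetical knowledge items, each tagged with an "audience" flag.
articles = [
    {"id": "KB-101", "audience": "external", "text": "Reset your password under Settings > Account."},
    {"id": "KB-987", "audience": "internal", "text": "Workaround requires direct database access (staff only)."},
]

def retrievable_for_customers(items):
    """Only externally published items are ever handed to retrieval and generation,
    so internal-only content cannot leak into a customer-facing answer."""
    return [item for item in items if item["audience"] == "external"]

print(retrievable_for_customers(articles))  # only KB-101 remains available
```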
Katie Ellis (she/her): How do you know which documentation the system used to generate its answer? Especially if it needs improvement?
In the 2-phased Question Answering over documents approach I promoted – the system knows and can present the exact items that were retrieved and then used to generate the response.
If the documentation used did not answer the question well, the question will most likely be sent to a human, who will flag the documentation as requiring improvement, and the AI-generated response using unstructured data can help fill that gap.
Lynette Ledoux | SearchUnify: So we’re not exactly in a post-search world: It’s just that the AI is searching on behalf of humans?
Non-generative AI has already been searching for humans for a while now 🙂 (most existing search engines use some form or another of advanced machine learning). The main change, and the reason I used the ‘post-search’ term, is that we’re not the ones consuming the items anymore; rather, AI will be doing it for us and generating answers.
Thomas Blackburn: With accuracy on par with page 1 of google results.
Russ Brookes, Avaya: What sort of changes do you think need to be made in content standards to have content that can be better used by AI?
It actually mainly comes back to better enforcing KCS standards – developing knowledge that answers actual customer questions in simple, readable language. If humans are able to figure out the answer, GenAI probably will be as well.
Johannes Hokamp, Waters: Is there a way for the GenAI to use unstructured knowledge (case data) to decide what structured knowledge (knowledge base) to serve without revealing the unstructured data to the external requestor?
- Unstructured knowledge can be used to enrich answers, but you run the risk of using non-validated data or including sensitive information in the answer (in the case of using past tickets, for example)
- Another way this can be done, and the one we use at xFind, is that information from unstructured data can be used to semantically focus the retrieval of the relevant structured knowledge items for generative AI to work with (by enriching the index with synonymous terms, etc.) – see the sketch below
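To illustrate the second point, here is a minimal sketch of query expansion, assuming a hypothetical synonym map mined from past tickets; only the vetted, structured articles are ever shown to the requestor.

```python
# Hypothetical synonym map mined from unstructured case data (past tickets),
# mapping customer wording to the terms used in the vetted knowledge base.
SYNONYMS = {
    "crash": ["abend", "unexpected shutdown"],
    "login": ["sign-in", "authentication"],
}

def expand_query(question):
    """Append mined synonyms so retrieval still finds the right structured article,
    without exposing any ticket text to the requestor."""
    extra_terms = []
    for term, alternatives in SYNONYMS.items():
        if term in question.lower():
            extra_terms.extend(alternatives)
    return question if not extra_terms else question + " " + " ".join(extra_terms)

print(expand_query("App crash after login"))
```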
Matt Seaman: The “3 things you can do today” seems to relate to our favorite latest saying: “AI is not a use case”
Sara Feldman | Consortium | Las Vegas: Thanks for being here – please stay in touch!
Lynette Ledoux | SearchUnify: Good, old-fashioned content management saves the day every time!
David Lane: Job security for knowledge managers!
Kudos
Nick (Shopify): Thanks so much for this.
Dave Stewart (Akamai, Ottawa): Fascinating topic – thanks for this!
Gina Groom: Thank you all
David Lane: Thanks to Consortium and Sariel. Very topical and actionable information.
Elena Forrest, Oracle, Arlington VA: thank you for the great talk!
Diana Sarbou: Thanks!
rhonda.bartlett: Thank you!
Pawan Khatawane: Thank you!!
Gowri | F5 | Seattle: Thank you!
Judy Gorz – CME Group: thank you!
Wendy Abdo: Very informative. thank you!
Libby Healy | Waters Corp | Remote (Maine, USA): Fantastic! Thank you.
Donna Knapp: Thanks all! Super interesting
Sawyer Hamilton: Thank you all!