LinkedIn is fabulous these days; lots of interesting discussions on relevant topics. I recently participated in a discussion started by Jordi Torras at Inbenta titled “7 Tips to Assess the Efficiency and Value of your Knowledge Base Search Engine” with a link to his blog post containing several suggestions to consider.
Many people contributed thoughtful comments and ideas, which I’ve consolidated here, and which we discussed on the ConversationStorm call this morning!
The overarching sentiment is that it is unlikely you will find the magic answer containing the perfect formula to measure search efficiency and value. Instead, each of us has to collect information and massage it into the formula that works for our environment. After all, this is more of an art than a science.
In an effort to home in on a starting point, I grouped the ideas below into a framework that allows you to triangulate your data. Pick one or more measures from each side of the triangle to reveal clues about the effectiveness and value of your knowledge base search experience.
Then, watch the data change over time to identify indicators of success or opportunities for improvement. If you are feeling really ambitious, you could even put this into a radar chart view. After you have established a baseline and understand what is happening, then you can identify ways to improve the search experience and add more value for your users.
Qualitative Customer Experience Indicators
- Provide the ability for visitors to rate search results or to indicate how easy it was to find the content
- On a regular basis, ask 5-10 users how the search works for them. Have them show you how they use it. Ask them what their typical experience is like. When you start to see themes emerging, you know you can stop talking and start fixing.
- Survey customers about their success using the self-service website.
- If internal and external users are on the same portal, ask internal participants to rate their search experience frequently. Use their experience as a leading indicator to identify opportunities that probably impact external users.
- Run validation exercises to verify that the returned results are the ones you expect (simulate user actions by executing searches with actual customer keywords)
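The validation exercise in the last bullet can be scripted: replay known customer keywords against the search engine and flag any case where the expected article doesn't surface near the top. This is only a sketch; the toy substring `search()` below is a stand-in for whatever search API your knowledge base actually exposes, and the article IDs are made up.

```python
def search(query, corpus):
    """Toy stand-in for the real search engine: return article ids whose
    text contains every query term (no real ranking)."""
    terms = query.lower().split()
    return [doc_id for doc_id, text in corpus.items()
            if all(t in text.lower() for t in terms)]

def validate(cases, corpus, top_n=3):
    """For each (keywords, expected_id) pair, check that the expected
    article lands in the top_n results; return the failures for review."""
    failures = []
    for query, expected in cases:
        results = search(query, corpus)[:top_n]
        if expected not in results:
            failures.append((query, expected, results))
    return failures

corpus = {
    "KB-101": "How to reset your password on the portal",
    "KB-202": "Troubleshooting login errors after a password change",
}
cases = [("reset password", "KB-101"), ("login error", "KB-202")]
print(validate(cases, corpus))  # an empty list means every case passed
```

Running a script like this on a schedule turns the validation exercise into a regression test: when content changes break a known-good search, the failure list tells you exactly which keywords to investigate.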
Quantitative Search Experience Indicators
- Increasing number of unique users
- Number of unique searches executed
- Frequency with which users log in to or return to the portal
- Search engine performance: time it takes to deliver search results after the user clicks the search button
- Reuse of solutions: number of “click-throughs” on items appearing no lower than the xth row of results on the result page
- “Zero results”: percentage of searches that return no results at all
- Number of times users take advantage of phrase matching, thesaurus support, and spelling correction when offered
- Utilization of search refinements by domain (e.g., all, images, news, etc.)
- Utilization of advanced search
- Time spent searching
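Several of the indicators above fall out of a single pass over your search log. As a minimal sketch, assume each log entry records a hypothetical `query`, `result_count`, and `click_position` (the rank of the result clicked, or `None` if nothing was clicked); your own analytics tool will use different field names.

```python
def search_metrics(log, max_position=3):
    """Compute unique searches, the zero-results rate, and the rate of
    click-throughs landing no lower than max_position in the results."""
    total = len(log)
    zero_results = sum(1 for e in log if e["result_count"] == 0)
    clicks_in_top = sum(1 for e in log
                        if e["click_position"] is not None
                        and e["click_position"] <= max_position)
    return {
        "unique_searches": len({e["query"].lower() for e in log}),
        "zero_result_rate": zero_results / total,
        "top_n_clickthrough_rate": clicks_in_top / total,
    }

log = [
    {"query": "reset password", "result_count": 12, "click_position": 1},
    {"query": "vpn setup", "result_count": 0, "click_position": None},
    {"query": "Reset Password", "result_count": 12, "click_position": 5},
    {"query": "expense report", "result_count": 4, "click_position": 2},
]
print(search_metrics(log))
# {'unique_searches': 3, 'zero_result_rate': 0.25, 'top_n_clickthrough_rate': 0.5}
```

Tracked week over week, these numbers give you the baseline trend the framework calls for: a rising zero-results rate or a falling top-n click-through rate is an early signal that content or ranking needs attention.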
Quantitative Case Volume Indicators
- Percentage of visits in which users execute a search and do not log a request for service/ticket/case
- Number of potential cases abandoned after user reviews search results during case creation process
- Percentage of assisted-service cases resolved with self-service content found on the initial search (e.g., new versus known issues)
- Number of times that users attempt to search for information prior to logging a case/service request
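The first indicator in this group is often called a deflection rate. A minimal sketch, assuming a hypothetical per-visit record with two flags, `searched` and `logged_case`:

```python
def deflection_rate(visits):
    """Share of searching visits where the user did NOT go on to log a case,
    i.e., visits plausibly resolved by self-service content."""
    searched = [v for v in visits if v["searched"]]
    if not searched:
        return 0.0
    deflected = sum(1 for v in searched if not v["logged_case"])
    return deflected / len(searched)

visits = [
    {"searched": True, "logged_case": False},   # self-served
    {"searched": True, "logged_case": True},    # search didn't resolve it
    {"searched": True, "logged_case": False},   # self-served
    {"searched": False, "logged_case": True},   # skipped search entirely
]
print(deflection_rate(visits))  # 2 of 3 searching visits avoided a case
```

Treat the number as an indicator rather than proof of deflection: some searchers leave without resolving anything, which is exactly why this quantitative measure should be triangulated against the qualitative feedback on the first side of the triangle.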
Questions? Thoughts? Feedback? I’d love to hear it! Leave comments here or on LinkedIn!