The Event Loop
The work of Consortium member companies on the Predictive Customer Engagement initiative started with a simple question: “How can we provide information that we have, that customers would value, but don’t know to ask for?” In the last post, we talked about an Analysis and Rules engine, a four-tier model that helps us think through applying Machine Learning capabilities:
This model is something member companies have started to use when looking at many applications of ML to achieve specific outcomes. The Event Loop of the Predictive Engagement model is highly dependent on the Analysis and Rules engine, but requires us to think through a broader ‘system’ and the data required to be more proactive in how we service our customers.
Looking at the four-tier model again, the inputs to the Data Repository and the outputs from the Visualization are the entry and exit points of the analysis engine. We sometimes refer to these inputs and outputs as “mining for actionable information” and “creating that action.” The ultimate question is: do we have incoming information that we can analyze, and apply rules to, in order to produce something actionable? This often requires applying other data sources, or data assets, to the model. Some common data assets are knowledge articles, people profiles, vendor offerings, and privacy laws/settings.
But what is required to think through a closed loop system? We need:
- an external data source
- the ability to listen to that data
- an output that has context
- the right communication mechanism for the action
With these components in place, we can build a closed loop predictive system.
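The four components above can be sketched as a simple pipeline. This is a minimal illustration, not an implementation of any specific product; all of the type and function names here (`Event`, `Action`, `closed_loop`, and so on) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class Event:
    source: str    # which external data source produced this data point
    payload: dict  # the raw data (e.g., a health metric or anomaly flag)

@dataclass
class Action:
    context: str    # contextual output (e.g., a matched knowledge article)
    recipient: str  # where the action should be communicated

def closed_loop(
    listen: Callable[[], Iterable[Event]],         # listen to the data
    analyze: Callable[[Event], Optional[Action]],  # Analysis and Rules engine
    communicate: Callable[[Action], None],         # communication mechanism
) -> None:
    """One pass of the loop: listen -> analyze -> act."""
    for event in listen():
        action = analyze(event)  # None means nothing actionable was found
        if action is not None:
            communicate(action)
```

Because the loop keeps running over incoming events, the same `analyze` step that flagged a problem can later confirm whether the fix had the desired effect.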
Let’s walk through an example to bring this to life.
Data Source: A corporate wide enterprise software application has monitoring capabilities built into the application, which are producing data points on its health and any anomalies that may be occurring.
Listen at Scale: The application’s infrastructure is built to “listen” to the data outputs from the software application: collecting the outputs and storing them in a data repository for processing in the Analysis and Rules Engine. (A fancy ‘phone home’ system.)
Analysis and Rules Engine: The engine connects the incoming data sources to other knowledge assets, in this case: a knowledge base. When an anomaly is detected, the engine connects that anomaly to a specific knowledge base article.
Contextual Output: Since the analysis and rules engine identified a correlation between an anomaly in the data source and the knowledge base, it can now output the specific knowledge article that has the right context.
Communication Mechanism: The knowledge article is sent via email to the onsite system administrator, notifying her of the anomaly and the steps to fix it. The system can now be updated before a major failure.
This is a closed loop process. As data continues to stream into the Analysis and Rules engine, the system will be able to monitor if fixes or updates are having the desired impact.
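To make the walkthrough concrete, here is a toy sketch of its core steps: anomaly codes come in from monitoring, a rules engine correlates each code with a knowledge base article, and the result is formatted for the administrator. Every identifier below (anomaly codes, article titles, the email address) is invented for illustration.

```python
# Hypothetical knowledge base: maps anomaly codes from the monitoring
# feed to the article that describes the fix.
KNOWLEDGE_BASE = {
    "DISK_PRESSURE": "KB-2041: Reclaiming disk space before failure",
    "MEMORY_LEAK":   "KB-1187: Restarting the cache service safely",
}

def analyze(anomaly_code: str):
    """Analysis and Rules Engine: correlate an anomaly with an article.

    Returns None when the incoming data point is not actionable."""
    return KNOWLEDGE_BASE.get(anomaly_code)

def notify(article: str, admin_email: str) -> str:
    """Communication mechanism: a real system would send an email;
    here we just format the message."""
    return f"To {admin_email}: anomaly detected, see {article}"

# Incoming monitoring stream; only actionable codes produce a message.
incoming = ["DISK_PRESSURE", "HEALTHY"]
messages = [
    notify(article, "sysadmin@example.com")
    for code in incoming
    if (article := analyze(code)) is not None
]
```

The loop stays closed: if the administrator applies the fix, subsequent monitoring data should stop matching `DISK_PRESSURE`, and `analyze` simply stops producing notifications.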
While this is a fairly simple example, Internet of Things technologies, data analytics and machine learning, robust knowledge capture best practices (KCS®), and easy communication through mobile devices or machine-to-machine channels are making systems like this capable of solving very complex scenarios.
The final piece of the Predictive Customer Engagement model is the Improve Loop. In the next post, we will look at this loop and some of the required dependencies.