I've always wondered why instructional designers appear to work in silos, building training and cranking out several hours of courses. With the advent of xAPI and data science, we can consider the possibility of linking training 'outcomes' to customer success.
New Implementation Case Study
Let's take the case of a new product release or a new implementation for a large company like Oracle or Salesforce. The partners and consultants undergo training either to use the product in their day-to-day tasks or to implement it for a customer. A 5-day training is organized and successfully completed. Now the consultants are ready to head back and actually implement the application.

After this point, I've never had visibility into 'data' about the outcomes of the training. I've never known whether they were actually able to apply that specific knowledge to use the application successfully, troubleshoot issues, and design one-off customizations. I recently read a few posts on LinkedIn that asked whether xAPI could take a step further, track more information about what users are doing in the application, and map it to learning outcomes. This gave me a few ideas...
- Can we gather meaningful data on the entire lifecycle of a project starting from 'training'?
- Can we then track whether training resulted in changed behavior, successful use of the product, and effective troubleshooting?
- Did it reduce the number of support tickets that cost the company money?
- Did it progress the learner to the next level of maturity in their product knowledge?
If we could design a program to first collect all this data and then present it to a PMO, could we incrementally focus and redesign training to achieve more specific outcomes, thereby reducing the trial and error behind so many unnecessary support tickets? I think this would be a wonderful alternative to the incomplete metrics that grill product teams alone while excluding 'training' and its expected outcomes. It could also make training an integral part of the lifecycle for addressing and enhancing customer experience and satisfaction.
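To make this a little more concrete, here is a minimal sketch of what recording an in-application action as an xAPI statement might look like, sent to a Learning Record Store (LRS). The statement structure (actor, verb, object, context) follows the xAPI specification, but the LRS endpoint, credentials, activity IDs, and extension URI below are hypothetical placeholders, not a real product's API.

```python
# Minimal sketch: recording an in-application action as an xAPI statement.
# The LRS endpoint, credentials, and activity IDs below are hypothetical.
import requests

LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"  # hypothetical LRS
LRS_AUTH = ("lrs_user", "lrs_password")                   # hypothetical credentials

statement = {
    "actor": {
        "name": "Example Consultant",
        "mbox": "mailto:consultant@example.com",
    },
    "verb": {
        # Standard ADL verb; a custom verb could capture troubleshooting instead
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        # Hypothetical activity ID for a real in-application task,
        # not just a course module
        "id": "https://product.example.com/activities/configure-security-profile",
        "definition": {"name": {"en-US": "Configured a security profile"}},
    },
    "context": {
        "extensions": {
            # Hypothetical extension linking the action back to the training event
            "https://product.example.com/xapi/extensions/training-cohort":
                "implementation-bootcamp-cohort-1"
        }
    },
}

response = requests.post(
    LRS_ENDPOINT,
    json=statement,
    auth=LRS_AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()
```

The same pattern could record troubleshooting steps or support-ticket creation, so that both the training event and post-training behavior end up in one queryable store for the PMO.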
Considerations for Metrics on Customer Success
Training is not the only factor that impacts customer success. It is key for metrics to also capture other parameters, such as why an implementation was delayed or unsuccessful. Other possible reasons that could be included in the metrics:
- Unavailability of, or issues with, the application environment, or existing bugs.
- Implementor not following the instructions or best practices suggested in training.
- Confusing design of the application where users repeatedly make mistakes.
- Incorrect prerequisite setups that impact the current tasks.
Good metrics depend on complete, good data. The data have to be assembled meaningfully to measure the business value of outcomes, clearly identify the root causes of any issues, and provide opportunities for continual improvement.
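As a rough illustration of assembling such metrics, the sketch below joins training-completion records with support-ticket counts to compare ticket volume before and after training, broken down by root cause. The column names, values, and root-cause labels are assumptions for illustration only, not a prescribed data model.

```python
# Rough sketch: relating training completion to support-ticket volume.
# All column names and values are hypothetical illustrations.
import pandas as pd

# Hypothetical training-completion records (e.g., exported from an LRS)
training = pd.DataFrame({
    "consultant_id": ["c1", "c2", "c3"],
    "completed_on": pd.to_datetime(["2024-03-01", "2024-03-01", "2024-03-15"]),
})

# Hypothetical support tickets raised by the same consultants
tickets = pd.DataFrame({
    "consultant_id": ["c1", "c1", "c2", "c3", "c3", "c3"],
    "opened_on": pd.to_datetime([
        "2024-02-10", "2024-04-02", "2024-02-20",
        "2024-02-25", "2024-04-10", "2024-04-20",
    ]),
    "root_cause": [
        "environment", "implementation", "environment",
        "prerequisite", "implementation", "product-design",
    ],
})

# Label each ticket as raised before or after that consultant's training
merged = tickets.merge(training, on="consultant_id")
merged["period"] = merged["opened_on"].gt(merged["completed_on"]).map(
    {True: "after_training", False: "before_training"}
)

# Ticket counts per consultant and period, broken down by root cause
summary = (
    merged.groupby(["consultant_id", "period", "root_cause"])
    .size()
    .rename("ticket_count")
    .reset_index()
)
print(summary)
```

Keeping the root-cause breakdown visible matters: a spike in tickets after an implementation should not automatically be blamed on training when the environment, the product's design, or prerequisite setups were the actual culprits.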