Data, data everywhere, but not a drop to drink

As large L&D departments progressively digitise their learning, they find themselves with systems capable of producing huge quantities of reporting data: learning management systems, purchasing/finance systems, HR systems, talent management systems, social learning platforms and so on.

Sitting on this great wealth of raw data, you’d expect to gain great insights from it. Yet L&D departments seem to be no better off: decision-making remains slow and based on guesswork rather than evidence. In a fast-moving economy, this is just not good enough.

Even basic metrics, like how much is spent on learning, are still quoted to the nearest million pounds. Calculations of the amount of training delivered annually are, at best, estimates. We need to get better at MI!


Why is this?


I think there are three main reasons:

1. Systems are fragmented


There is often more than one LMS, or there’s one system with data about classroom training and another with e-learning data. And there’s no easy way to join data from different systems to get the whole picture: it’s incompatible, stored in different formats, with no universal indexing field (for example, individuals in one system are identified by their email address, but elsewhere by their employee ID).

One central problem is that supplier spend data is held in finance systems, whilst the learning activity data is stored in one or more LMSs, or failing that, in a plethora of Excel spreadsheets. This means there is no way to link spend to activity, and so no way to see with any clarity where your spend is being allocated.
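To make that linkage concrete, here’s a minimal sketch of the kind of join involved, assuming you can get CSV exports from finance and the LMS plus an HR lookup that maps email addresses to employee IDs (all file and column names here are invented for illustration):

```python
# Minimal sketch: link supplier spend (keyed by email) to LMS activity
# (keyed by employee ID) via an HR identifier mapping. All names are
# hypothetical; adapt to whatever your systems actually export.
import pandas as pd

spend = pd.read_csv("finance_spend.csv")      # email, course_code, spend_gbp
activity = pd.read_csv("lms_activity.csv")    # employee_id, course_code, completions
mapping = pd.read_csv("hr_identifiers.csv")   # email, employee_id

# Normalise the one shared field before joining: email addresses are a
# common source of silent mismatches (case, stray whitespace).
spend["email"] = spend["email"].str.strip().str.lower()
mapping["email"] = mapping["email"].str.strip().str.lower()

# Translate spend records onto the LMS identifier, then join to activity.
spend = spend.merge(mapping, on="email", how="left")
combined = spend.merge(activity, on=["employee_id", "course_code"], how="outer")

# Spend with no matching activity is exactly the gap described above:
# money you cannot link to any recorded learning.
unmatched_spend = combined[combined["completions"].isna()]
print(combined.groupby("course_code")[["spend_gbp", "completions"]].sum())
```

Even a rough join like this will quickly show you which spend you can and cannot account for.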


2. Data quality is low


Data input into these systems is often inconsistent, particularly when that data is not used for subsequent reporting. Once the data starts being used, there’s a lot more interest in keying it in correctly in the first place.

Even when data entry is not involved, you can have problems. For example, when is a piece of e-learning content ‘completed’? Much of the point of e-learning is that the learner does not have to use all of the content, so unless it’s a compliance piece (which insists the learner view all the content, or tests them at the end), the concept of ‘completion’ (which is, after all, a construct carried over from ILT) is impossible to pin down. Another digital learning example concerns learning duration: just because some content is open on a learner’s screen does not mean they are reading it. As a result, recorded learning durations can be highly overstated.
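One practical response is to make your general assumptions explicit and apply them consistently. Here’s a rough sketch of what that might look like, with assumed thresholds and made-up field names: treat a module as complete once most of it has been viewed (or its test passed), and cap implausibly long recorded durations.

```python
# Sketch of assumption-based clean-up for e-learning records. The field
# names, the 80% completion threshold and the 90-minute cap are all
# assumptions for illustration, not recommendations.
from dataclasses import dataclass

MAX_PLAUSIBLE_MINUTES = 90    # assumed cap on a single module's duration
COMPLETION_THRESHOLD = 0.8    # assumed share of pages viewed to count as 'complete'

@dataclass
class ElearningRecord:
    learner_id: str
    pages_viewed: int
    pages_total: int
    passed_assessment: bool
    recorded_minutes: float

def cleaned_duration(record: ElearningRecord) -> float:
    """Clamp overstated durations (content left open on screen)."""
    return min(record.recorded_minutes, MAX_PLAUSIBLE_MINUTES)

def is_complete(record: ElearningRecord) -> bool:
    """Completion by assumption: enough content viewed, or assessment passed."""
    if record.passed_assessment:
        return True
    return record.pages_viewed / record.pages_total >= COMPLETION_THRESHOLD

example = ElearningRecord("E1234", pages_viewed=9, pages_total=10,
                          passed_assessment=False, recorded_minutes=480.0)
print(cleaned_duration(example), is_complete(example))   # 90 True
```

The point is not the particular thresholds, but that everyone reporting on the data uses the same ones.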

Unless you can improve accuracy or sidestep such problems by making some general assumptions, MI built on low-quality data can feel like a house built on sand.


3. We can have unrealistic expectations of learning data


Systems tend to capture learning activity, not the outcomes of that activity. Nowadays every Head of L&D wants to understand the impact of learning, but learning data on its own is always going to struggle to demonstrate the business impact of our learning investments. This was well illustrated by a frustrating conversation I had about Kirkpatrick Level 1 response data, and why you can’t use Level 1 data to demonstrate ROI. We risk having unrealistic expectations of our data; even so, there’s still a great deal that MI can do for you.


What is decent MI worth?


In short, it can reduce the total cost of learning by 30-40% in a large organisation. Big organisations are complex and fast-changing: you simply can’t keep track of everything just by keeping an eye on the daily workload. Good MI gives you the means to count everything that happens, and with that comes visibility and the means to control what goes on.

I find that the introduction of good MI across the L&D function can save around a third of the total cost of learning, because it gives you the ability to spot inefficiency and waste, and then work to drive them down. Here are the main areas where MI enables changes to happen:

Make sure learning effort is well prioritised (i.e. aligned to business goals). MI that shows you precisely what spend is going into which learning will show you how good you are at targeting learning at the key needs. You can spot any large investments in low-priority, low-impact learning, which will help you decide how to prevent that in future.

Reduce penalty costs. The cost of no-shows can run to 5% of your external spend; good data helps you spot the trends: higher no-shows on a Monday, perhaps; repeat offenders; learners booked a long way in advance who forgot or left the organisation. Spotting the trends tells you how to minimise the waste.
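As an illustration, here’s a rough sketch of that trend-spotting, assuming a booking export with made-up columns for attendance, booking lead time and cancellation fees:

```python
# Sketch: no-show trends from a hypothetical booking export with columns
# learner_id, event_date, attended (True/False), booked_days_ahead,
# cancellation_fee_gbp. Adapt to whatever your booking system provides.
import pandas as pd

bookings = pd.read_csv("bookings.csv", parse_dates=["event_date"])
no_shows = bookings[~bookings["attended"]]

# Are Mondays really worse than other days?
no_show_by_weekday = (bookings
    .assign(weekday=bookings["event_date"].dt.day_name())
    .groupby("weekday")["attended"]
    .apply(lambda attended: 1 - attended.mean())
    .sort_values(ascending=False))

# Repeat offenders, and bookings made so far ahead they were forgotten.
repeat_offenders = no_shows["learner_id"].value_counts().head(20)
long_lead_no_shows = no_shows[no_shows["booked_days_ahead"] > 60]

print("No-show rate by weekday:\n", no_show_by_weekday)
print("Penalty cost of no-shows: £", no_shows["cancellation_fee_gbp"].sum())
```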

Maximise your ILT event occupancy. Make best use of your trainer cost by filling the room. Fill rate analysis shows you how well you are doing (many organisations only fill 60% of their training places), so use the MI to get smarter; see the sketch after this list:

  • don’t schedule more events than you really need;
  • cancel low-fill events in advance, before penalties become due;
  • work out your trainer utilisation in terms of people trained, rather than just events delivered.
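Here’s that sketch: a rough fill-rate calculation over an assumed schedule export (the column names and the 50% cancellation threshold are illustrative only):

```python
# Sketch: fill-rate analysis from a hypothetical ILT schedule export with
# columns event_id, course, trainer, event_date, places_available, places_filled.
import pandas as pd

events = pd.read_csv("ilt_schedule.csv", parse_dates=["event_date"])
events["fill_rate"] = events["places_filled"] / events["places_available"]

print("Overall fill rate: {:.0%}".format(
    events["places_filled"].sum() / events["places_available"].sum()))

# Candidates for cancelling or merging: future events below an assumed 50%
# threshold, flagged while there is still time to avoid penalties.
upcoming = events[events["event_date"] > pd.Timestamp.today()]
low_fill = upcoming[upcoming["fill_rate"] < 0.5].sort_values("fill_rate")
print(low_fill[["event_id", "course", "event_date", "fill_rate"]])

# Trainer utilisation in people trained, not just events delivered.
print(events.groupby("trainer")["places_filled"].sum().sort_values(ascending=False))
```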

If ILT courses end up running too infrequently, think about redesigning them into a more flexible delivery mode. Lots of companies have shifted induction training from a traditional classroom event on the first Monday of each month to a blend of e-learning, webinars and social media. The results are much better.

Analyse your happy sheet data. Paper feedback sheets are next to useless if you are trying to manage learning for several thousand employees. Capture all your course feedback online, and you can quickly generate data that tells you how your courses are being received. This way you can quickly spot the good and the bad: trainers, courses, suppliers, rooms, and make decisions accordingly.

Sort out your curriculum. Learning activity data that is reliable and comprehensive lets you see who’s doing what. As a rule, I find 90% of an organisation’s learning sits with the top 10% of learning content, yet the curriculum is usually choked with thousands of items that have not been used for 2+ years. Let the data tell you what’s being used, add anything else you expect to need in the foreseeable future, and discard the rest.
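By way of illustration, here’s a rough sketch of letting the usage data make that case, assuming a catalogue export with made-up columns for recent completions and last-used dates:

```python
# Sketch: curriculum usage analysis from a hypothetical catalogue export
# with columns item_id, title, completions_last_12m, last_used.
import pandas as pd

catalogue = pd.read_csv("curriculum.csv", parse_dates=["last_used"])
catalogue = catalogue.sort_values("completions_last_12m", ascending=False)

# How concentrated is usage? (Rule of thumb above: ~90% of learning sits
# with the top ~10% of content.)
top_decile = catalogue.head(len(catalogue) // 10)
share = top_decile["completions_last_12m"].sum() / catalogue["completions_last_12m"].sum()
print(f"Top 10% of items account for {share:.0%} of completions")

# Items untouched for two or more years are candidates for retirement.
two_years_ago = pd.Timestamp.today() - pd.DateOffset(years=2)
stale = catalogue[catalogue["last_used"] < two_years_ago]
print(f"{len(stale)} items not used in the last two years")
```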

Most curricula need around 500 items: make those easy for learners to find, and don’t spend time and money maintaining any more!


What's stopping you?


Your organisation’s IT infrastructure. One global organisation said it took seven years to establish a joined-up suite of enterprise-wide HR systems. Certainly it’s a big investment, but having up-to-date technology is fast becoming an important factor in workforce productivity, and skills development is just one area of benefit.

Great systems but poor implementation. There are market-leading learning systems out there, but benefit realisation depends heavily on how well they are implemented, especially in the area of MI and reporting. Out-of-the-box reporting is basic and always needs some customising to address the organisation’s needs. Data quality also depends on well-designed processes, system customisation to support them, and a degree of discipline on the part of those responsible for data entry.

Disparate systems. Even without enterprise-wide systems, you can build data interfaces between your different learning technologies, perhaps with several data sources feeding into a single database and a Business Intelligence tool to conduct the analysis. You’ll need to be smart, so that the different interfaces populate the database with consistent data, but this is one way of getting better (if not perfect) MI.
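To illustrate the pattern, here’s a rough sketch of two small interfaces normalising different exports into one shared table that a BI tool could then sit on top of (all system, file and field names are invented for the example):

```python
# Sketch: several source systems feeding one database through small
# interfaces that map each feed onto a single consistent record shape.
# Every file and column name here is an assumption for illustration.
import sqlite3
import pandas as pd

conn = sqlite3.connect("learning_mi.db")
conn.execute("""CREATE TABLE IF NOT EXISTS learning_activity (
    employee_id TEXT, course_code TEXT, source TEXT,
    activity_date TEXT, duration_minutes REAL)""")

def load_classroom(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    return pd.DataFrame({
        "employee_id": df["staff_number"],        # the classroom system's own key
        "course_code": df["course_ref"],
        "source": "classroom_lms",
        "activity_date": df["session_date"],
        "duration_minutes": df["session_hours"] * 60,
    })

def load_elearning(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    return pd.DataFrame({
        "employee_id": df["employee_id"],
        "course_code": df["module_id"],
        "source": "elearning_platform",
        "activity_date": df["completed_on"],
        "duration_minutes": df["minutes_spent"],
    })

# Each interface maps its own field names onto the shared schema, so the
# reporting layer only ever sees one consistent table.
for frame in (load_classroom("classroom.csv"), load_elearning("elearning.csv")):
    frame.to_sql("learning_activity", conn, if_exists="append", index=False)
conn.commit()
```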

The need for learning impact data. If learning impact is what you need, then the data from learning systems will not get you far. You’ll need to look outside learning, to HR and beyond, for business data (and here you may start all over again!).

Getting your masses of learning data into shape isn’t straightforward, but once you crack it, there are big savings to be had.
