England in Euro 2016: what can we learn from England's demise?

So last night England failed yet again to make it deep into a major competition. To make matters worse, the unanimous verdict was that this was the poorest performance they have ever given! In the aftermath of the defeat, barely 20 minutes after the final whistle, the manager, Roy Hodgson, had resigned, and the Twittersphere was awash with jokes about two exits from Europe in almost as many days. So where did it all go wrong for Roy and his England team?

Fundamentally, I think it boils down to a lack of strategy on Roy's behalf. The pundits on BBC and ITV both slated Roy for having no clear idea of the team's best formation or its top players, despite having ample opportunity to work this out during the qualification process and the pre-tournament friendlies. This lack of strategy could be seen in the decisions made in each of our group games: substitutions, starting players, playing positions and even the timing of specific decisions. A common thread post-game was that we lacked purpose and looked like we had no idea what we were doing on the pitch. That simply cannot be allowed if you want to succeed.

This is also true for business. All too often decisions are made, or not made, at a strategic level that directly set the tone for how a business's L&D function will operate and the success it will have. A good example is IT infrastructure. In an age where technology is ever more integrated into daily life, it is expected that e-learning or learning platforms can be implemented to full effect. These solutions are often built with cutting-edge technology, yet they frequently struggle in a business environment because of a lack of investment in IT infrastructure. Somewhere along the line a short-term view has set the strategy for IT in learning, and it is now having dramatic consequences for how effectively L&D can deliver learning to the end user.


Another failing of the England performance, and of Roy Hodgson in particular, was the inability to recognise strengths and weaknesses and act accordingly. There was a lot of media scrutiny of Roy Hodgson's decision to take Jack Wilshere to the Euros this year: a player who had barely any league time due to injury and who could not be considered 'on form'. Roy chose to take Wilshere instead of other players who had played all season and shown ability in the England squad. This has been put down to his loyalty to his chosen few, a loyalty that would also govern a number of key decisions he made during the four matches we played at this year's competition. At its base level, Roy is probably guilty of not being able to spot a problem because of his unconsciously biased viewpoint.

As before, businesses cannot allow similar tendencies to creep into their world. Our activities, initiatives, programmes of learning, processes and so on all need to be constantly reviewed. What worked well a year ago may not work now. A process of unbiased self-reflection is required in order to truly perform at the highest level possible, and I believe this is one of the most difficult things for a business to do. Internal politics, lack of time and the day-to-day pressures of an ever-changing world make it hard, but if L&D and the wider business are to succeed then they need to be smart about where they focus their time and ensure they are leveraging their best assets and their successes.

Hindsight is a wonderful thing, and it is easy for us all to criticise the England performance after the game, especially as it is unlikely that anyone reading this is a professional footballer. However, we do understand our own world of L&D, and we should take the hard lessons learned by the England team and apply them to our own day-to-day activities. Essentially: have a well-thought-out strategy that is forward thinking and makes the best use of the assets you have, whilst constantly self-reflecting and improving on your winning formula.


A Brief History of (Formal) Training

My industrial history might be a bit patchy, but try this story out for size.

Around the start of the 20th century, pretty much all 'industrial' work was craftwork. It was practical. It was about making things, making them better and more reliable, and making more of them faster. This required practical skills, the kind an expert craftsman could show you, often over an extended period of time. 'Management' was discharged by the rich and powerful, and was pretty basic: you worked, you got paid; if you didn't do enough work, or good enough work, you got fired. No unions, limited workforce mobility, even less employment law, and no need for great management skills either.

Then came mass production and Taylorism, and with it mass employment at relatively low skill levels. With hundreds of people doing the same job, you needed them all trained up the same. So training (face to face, of course) also became a large-scale repeatable activity, and it worked, with armies of workers trained to execute practical tasks in the One Best Way.

By the early 21st century, all manner of technological advances have changed the nature of work and given rise to kinds of work that never previously existed, requiring new skills and different knowledge. Automation, enabled by IT, has profoundly changed what we do and how we do it. Telecommunications has diffused the location and timing of work. The skill levels required tend to be higher and more cerebral, and knowledge workers are commonplace. Company value, previously measured in physical assets, is now dependent on intangibles like Human Capital: it's all about what's in the employees' heads, their ability to innovate, etc.

Not only has the work changed, but management now assumes far greater importance. Human Capital depends as much on how humans work together as it does on the intrinsic level of their skills. Add to this high levels of technological change, economic turmoil, workforce mobility and skills shortages, and management and leadership emerge as vital and complex disciplines.

Given how dramatically the world of work has changed, I am surprised that our prevailing image of the main activity that equips people for work (i.e. the classroom course) persists so strongly. In passing, I'm also curious that, at the same time, on-the-job training still carries derogatory connotations, through phrases like "sitting by Nellie".

We have known for a very long time that formal instruction is just part of the development process. Medieval craftsmen knew this – it was partly why they formed Guilds. For all their faults, Guilds provided a great framework for the development of skills through the medieval forerunner of the apprenticeship.

Today, so much business success is dependent upon employees being skilfully creative, for example:

· interpreting each particular customer situation and responding accordingly;

· regularly making rapid decisions based on incomplete data;

· developing innovative approaches to maintain competitive advantage in an ever-changing marketplace.

In such an environment, how can we expect any kind of standardised formal training programme to really deliver significant improvements?

Don’t get me wrong, large-scale formal training still has a place, but it’s unlikely to deliver high levels of competence, and certainly not on its own. For more advanced skills, some kind of personalised, on-demand (and probably on-the-job) learning is vital. We need to embrace the technologies that facilitate peer-to-peer information sharing on a global basis, and give employees freedom to experiment and innovate.


Kevin Lovell


A Night at the Museum: Our View on Curation

I was working for a client, curating content for them from the internet to use as a resource for their learners. I love doing this work, but in terms of efficiency I can't claim to be making a profit. I will explain why this is the fault of the 'hyperlink' and not my fault!

To do this properly you need to assess the material for suitability, and you need criteria for matching the materials to the people. Don't be fooled into thinking this is an easy, cheap or one-off activity for your learners.

Think of curating for a museum. You may have a lot of 'stuff', but it doesn't mean a visitor leaves your museum having had a positive experience. In fact, quite the reverse: with too much to look at and no discernible connection or symbiosis between the items, the visitor leaves feeling bewildered, exhausted and frustrated. In most modern museums less is more: they leave space for thought, the footnotes are brief, and there is a pathway which tells an evolving story as the visitor progresses through the well-curated space.


We can learn a lot from how museums have changed over time. They know what works for their visitors, and their visitors, like ours, are learners.

So what are the ‘musts’ for meaningful curation?


  • Know your audience: how do they learn? What do they need to know, and what do they want to know?

  • What does your company want the learners to get out of your curated materials? It might be hard-fact absorption, it could be to build a collective baseline of understanding in the company, or it could be developing a sense of curiosity in the learners for learning outside of their job role. Is the objective to build a learning organisation?

  • Do you need to create a pathway of developmental levels? (Usually a good idea.) Be open-minded about the learner who will access the highest level first, to see if they can understand the topic at the advanced level, before dipping into the lower levels.

  • Add texture to the learning. Look for videos, blogs, articles, infographics, LinkedIn communities, research papers, free courses, published dissertations, book reviews, TED Talks and home-made videos from your own SMEs. Vary the tone from the serious to the lighthearted, and vary the length from the short hit of an infographic to a 15-hour free Open University course.

  • You might want to create 'boxed sets' of learning: perfect for the binge learner who will work through them in order, looking for a plot twist in each article, and a way to let them build their own taxonomy from the learning you put in front of them (a simple way to structure such a set is sketched after this list).

  • At all times focus on quality. It is fine to have content which looks at topics from opposing standpoints, but make sure it is correct; there is a lot of incorrect information on the internet, so stick to reliable sources.

  • Nobody likes governance, but if you are putting up information from the internet, ask people not to click on advertisements, explain that this is curation of 'free materials', tell them not to sign up for anything which costs money, and set a rule about the purchase of books following a book review.
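
For anyone who wants to make pathways and boxed sets concrete, here is a minimal sketch of one way a curated collection might be structured. Everything in it (field names, levels, the example resources and URLs) is invented purely for illustration, not a prescribed format:

```python
# Purely illustrative sketch: field names, levels and resources are invented
# examples, not a prescribed format for curated content.
from dataclasses import dataclass, field

@dataclass
class Resource:
    title: str
    url: str
    kind: str        # e.g. "video", "infographic", "article", "course"
    minutes: int     # rough time commitment, from a 2-minute hit to many hours

@dataclass
class BoxedSet:
    topic: str
    level: str       # e.g. "introductory", "intermediate", "advanced"
    resources: list[Resource] = field(default_factory=list)

coaching_basics = BoxedSet(
    topic="Coaching skills",
    level="introductory",
    resources=[
        Resource("What is coaching?", "https://example.com/intro", "video", 10),
        Resource("Coaching at a glance", "https://example.com/glance", "infographic", 2),
    ],
)

# A pathway is simply boxed sets ordered by developmental level.
pathway = [coaching_basics]  # append intermediate and advanced sets as curated
```

Even a structure this simple forces the curator to record the level, format and time commitment of every item, which is most of what a learner needs in order to choose a sensible path through the material.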



So what about the hyperlinks and my efficiency? I get so absorbed in the material itself, and in the thirst to know where things are referenced from, that I follow every hyperlink there is. Walking through my own endless museum of facts, history and futurism, without ever having to stand in line to view something, is my idea of happiness (though my MD might want to put a closing time on my museum).




For help curating materials on any subject for your organisation, contact me – it will be a pleasure to help you. Rachel.kuftinoff@knowledgepool.com




Do happy learners make performance improvements?

I think it goes without saying that a happy workforce will drive better business results, but what about when you focus on the workforce just after they have engaged in a piece of learning? In other words, when they are no longer just workers but learners.

Level 1 evaluations - AKA 'Happy Sheets'

Whenever I ask the question...

"do you evaluate the your training?"

I will more than likely get an answer of "yes" followed shortly by a statement something like...

"we send out evaluation forms with every course we run"

These level 1 evaluations, or 'happy sheets', usually provide information on the venue of the training, the facilitator, the course content, course administration, potential indications of the knowledge / skills gained, etc.  What they fail to do is identify any actual real-world implementation of the learning, performance improvement or business results.  So why is it that the majority of organisations never look beyond the 'happy sheet'?

Is it that performing any evaluation, albeit at a very high level, is enough to put a tick in a box and satisfy senior management?  Is there an acknowledgement that more detailed and worthwhile evaluation data requires a much greater amount of resource, and that the time and effort to acquire it cannot be justified?  Or perhaps, more worryingly, do people think that level 1 evaluation data provides enough of an indicator of potential improvement that any other evaluation is unnecessary?

It is this final question that I seek to answer in this post.  Can level 1 evaluation data indicate the eventual performance improvement?

The raw data

Given our broad client base and the fact that we pass roughly 700,000 learners through our systems each year, we have a wealth of data relating to level 1 evaluations.  As you would imagine, there are far fewer instances of level 3 evaluations, and this data is required to validate a real performance improvement.  The final data sample I have used consists of entries from four recent years and across all vertical markets, thereby reducing the effect of any potential outside influences.

The graph below shows the spread of data when level 1 scores are matched to their level 3 counterparts.  The level 1 scores reflect learners' immediate feedback on the training they participated in, covering admin, course content, facilitation and so on.  The level 3 data is a score based upon the extent to which the learning has contributed to their performance improvement.


What is clear is that there is little to no correlation in the data.  What is interesting is that, before sifting through the data, I myself expected there to be more correlation than there is.  I anticipated that learners who came away from the training enthused and energised about their experience would be more likely to put the learning covered on the course into practice.  It startled me to find that immediate enjoyment of the learning, and even of its content, has very little bearing on improvement in role.
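
If you want to run the same sanity check on your own evaluation data, the sketch below shows one way to do it. It is purely illustrative: the file name and column names are hypothetical stand-ins, assuming a table that pairs each learner's level 1 score with their matched level 3 score on a 0-100 scale.

```python
# Illustrative sketch only: "matched_evaluations.csv" and the column names
# are hypothetical stand-ins for our internal evaluation data.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

# Each row pairs one learner's level 1 score with their later level 3 score (0-100).
scores = pd.read_csv("matched_evaluations.csv")

pearson_r, pearson_p = pearsonr(scores["level1_score"], scores["level3_score"])
spearman_r, spearman_p = spearmanr(scores["level1_score"], scores["level3_score"])

print(f"Pearson r = {pearson_r:.2f} (p = {pearson_p:.3f})")
print(f"Spearman rho = {spearman_r:.2f} (p = {spearman_p:.3f})")
```

A correlation near zero, which is what our data showed, means a high level 1 score tells you almost nothing about the eventual level 3 score.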

After observing the lack of correlation I decided to do a deeper dive into the data, the often startling results of which are covered in the sections that follow.


Level 1 evaluations - 70% satisfaction guarantee

Unsurprisingly, level 1 evaluations lean towards awarding full marks more often than not.  What is surprising is the lack of any significant outlier data: 94% of level 1 responses come from individuals scoring training at 70% or more, and 49% of all responses were scored at 90% or better.  With only 6% of respondents scoring less than 70% in terms of course satisfaction, does this mean that all training is worthwhile, enjoyable or relevant?

Obviously the answer has to be NO.  We have to remember that these responses are taken close to, if not on, the training programme itself and as such there will likely be an abundance of positive emotions.  Things that invoke a positive reaction may include:

  • Learning something new that may impact their abilities
  • Having a fun / engaging exercise prior to completion of the evaluation
  • Spending 2 days out of the office
  • Meeting colleagues they have not seen in some time
  • Actually just attending training rather than the usual day to day work
  • etc.

This positive slant effectively means that trying to find any correlation between level 1 scores and performance impact becomes much harder.  For example, take the following relatively standard 5-point scale:

  • Strongly Disagree
  • Disagree
  • Neutral
  • Agree
  • Strongly Agree

If the overall question for a piece of training reads something like...

"Did the training deliver against your development needs?" 

94% of individuals are scoring at either "Agree" or "Strongly Agree".  This does not make for insightful data analysis relating to potential performance improvement.  However, it is a great gauge of learner satisfaction immediately post-event.
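
For the curious, here is a minimal sketch of how the headline shares above could be computed, reusing the same hypothetical table as before. The even 20-point mapping of percentage scores onto the five scale points is our assumption for illustration, not a standard.

```python
# Illustrative sketch, reusing the hypothetical matched_evaluations.csv table.
import pandas as pd

scores = pd.read_csv("matched_evaluations.csv")
l1 = scores["level1_score"]  # level 1 satisfaction scores, 0-100

print(f"{(l1 >= 70).mean():.0%} scored 70% or more")    # compare with the 94% above
print(f"{(l1 >= 90).mean():.0%} scored 90% or better")  # compare with the 49% above

# Assumed mapping: even 20-point bands onto the standard 5-point scale.
labels = ["Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"]
buckets = pd.cut(l1, bins=[0, 20, 40, 60, 80, 100],
                 labels=labels, include_lowest=True)
print(buckets.value_counts(normalize=True).sort_index())
```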


What exactly are you measuring?

Throughout this post I have mentioned that level 1 evaluation measures general satisfaction with the training, but perhaps more accurately it measures a subjective opinion or feeling in the moment.  This opinion needs no justification or evidence to support it, and as such can be easily influenced by outside factors.

Level 3 evaluations, on the other hand, are based upon more considered thinking.  Whilst these are often opinions, they are based upon information gathered over a period of time and then evaluated by the learner.  The following question is a spin on one we use in a large proportion of our level 3 evaluations:

"To what extent has the course been directly responsible for your performance improvement?"

This question seeks to establish a tangible link between the learning and performance.  Level 1 evaluations do not focus on creating any meaningful link, and in fact cannot, given their completion immediately after the learning.  Instead they focus on how suitable the content of the learning was or how much new information / knowledge you have gained, all of which is pure speculation.

The crux of it is that level 1 evaluations don't ask the right questions to be able to accurately predict eventual performance improvement.


Dissatisfied learners won't improve!

So I have already discussed the fact that 94% of responses come from people scoring 70% or more in level 1 evaluations, and that this has little to no bearing on performance improvement.  However, there is a trend when it comes to the remaining 6% of respondents: those that score below 70% on their level 1 evaluations.

Those that respond with 60% satisfaction or less at level 1 tend to see a reduced amount of performance improvement that can be related to the training.  In fact, 89% of respondents that scored 60% or less at level 1 attributed less than 50% of their performance improvement to the training they received.  Is this surprising?

In many ways this statistic makes complete sense.  If the training doesn't meet the needs of the individual, or map well to their role, then it stands to reason that any performance improvements wouldn't map to the training.  It must also be considered that even if the training is relevant, an individual who has a negative learning experience is less likely to attribute any improvements to the learning.
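
To make the 89% figure concrete, here is a sketch of the equivalent check on the hypothetical table used earlier, reading the level 3 score as the percentage of improvement the learner attributes to the training:

```python
# Illustrative sketch, reusing the hypothetical matched_evaluations.csv table.
import pandas as pd

scores = pd.read_csv("matched_evaluations.csv")

# The dissatisfied minority: level 1 satisfaction of 60% or less.
dissatisfied = scores[scores["level1_score"] <= 60]
share_of_sample = len(dissatisfied) / len(scores)

# Of those, how many attribute under half of their improvement to the training?
low_attribution = (dissatisfied["level3_score"] < 50).mean()

print(f"Dissatisfied learners: {share_of_sample:.1%} of the sample")
print(f"Of these, {low_attribution:.0%} attribute less than 50% of their "
      f"improvement to the training (compare with the 89% reported above)")
```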

The difficulty with making decisions based upon this information is that it is such a small proportion of the data available.  Discontinuing or changing training based upon this amount of information could not be recommended.  Without level 3 data relating to the success of the actual content, making such suggestions would not be wise.  


Conclusion

To answer my original question... there is no guarantee that happy learners make performance improvements.  This is not to say that level 1 evaluations are not relevant; they provide meaningful data as long as it is understood what this data measures.  What our data shows is that you cannot use level 1 data to predict performance improvement.  If your goal is to truly understand the effectiveness of training, then time and effort have to be invested into proper, detailed evaluation methods.


Our Digital Learning Infographic

We speak a lot about the need to support learners with digital, but we rarely see much about how we actually do this: what steps do we take to move towards a learning approach that supports digital? The Learning Boffins have captured in an infographic the key phases that need to be considered when going on the digital transformation journey. Take a look.


Moving from ILT to digital learning

We know that the learning world is full of discussions around using alternative methods of learning: 'The millennial is engaged like this....' and 'this piece of technology can do this....'. Clearly this is something we are becoming increasingly aware of within L&D across organisations. But a question we find ourselves answering is 'so how do we actually move towards, and use or support, alternative methods of learning in our organisations?'.

So with all the talk around how we should support, implement and explore alternative methods of learning such as social, online, bite-sized, personalised and micro-learning (to name a few), many are not nailing down exactly what phases or steps should be considered to head in the direction of a more 70:20:10 approach.

We Learning Boffins have been looking at exactly this. From our experience, we have tried to pinpoint the key phases you go through on the journey towards a more blended learning approach, so here we are:

Step 1. Business Plan - Create a business plan to facilitate your transformation. Consider the key questions here around current capabilities, future needs and potential implementation, integration and maintenance costs for new technology.

Step 2. Market Analysis - So we know our requirements; what technology is out there that can address and support them?

Step 3. SME and stakeholder buy-in - Appoint the SME for the new technology, get buy-in from your stakeholders and make sure IT have been included!

Step 4. New roles/re-shaping L&D - A new shape, new additions, new technology and new techniques mean L&D require new skills, so address the gaps here and appoint new roles accordingly.

Step 5. Content - Where is your learning content coming from? Third parties, curated internally or created internally?

Step 6. Communication and change management - This is a HUGE phase of the journey: communicate, build awareness, get end-users understanding, and make sure they get the benefits, the point of it and how they can use it to benefit their own learning. Don't leave this till the end, or all of the previous work becomes obsolete and the investment will be lost.

Step 7. Pilot/test/change/feedback - Employees need to be involved here; allow time for testing and piloting to see how the solution could be changed, improved or removed. This can be a time-consuming phase, but without buy-in from employees it is difficult for the new learning tools to be utilised and therefore for them to become a positive investment.

Step 8. Roll out and on-going management - Once the technology has been successfully rolled out, it needs to be continually assessed, ensuring that content remains relevant and that it has the desired business impact.

Every journey will have a slightly different angle, and the phases will look different from one organisation to another. It all depends on the particular requirements, purpose, type of technology and scale.

To discuss the journey further, and how the Learning Boffins are able to support you at each of these phases, tweet us @LearningBoffins; we would love to discuss your Digital Learning Journey with you!

 
