Training 1: Vast sums are committed to developing people. How much of it is wasted - and how do you find out if you are getting value for money? In the first of two articles, some of the methods of evaluation come under scrutiny.
Annual reports these days regularly record company chairmen's belief in investing in people, the 'company's greatest asset'. Moreover, the assumption seems to be that money spent on training is automatically money well spent. Yet, if there is no evaluation of the investment, how can companies be so sure?
The Industrial Society has held a number of conferences on the subject of evaluating training in recent years. The 200 or so organisations who have attended these conferences were all committed to training. But 90% of them had no system for evaluating whether their training schemes were a good investment or not.
Admittedly, most training departments distribute those so-called 'happy sheets' so that trainees can comment on the course's merits. Many departments also set tests to assess how much trainees have learned from a course. However, it is not possible to discover from these devices alone whether trainees will subsequently be better at their jobs - presumably the main aim of training courses - or whether the company as a whole has benefited from its investment. Andrew Forrest, human resources director at the Industrial Society, claims, 'The same organisations that refuse to spend £5,000 on a photocopier without a written evaluation of expected benefits will throw themselves into a grand training scheme with no evaluation at all.'
Companies have not cut back their training budgets in recent years. This has not been the case in previous economic downturns. Indeed, faith in training as intrinsically a Good Thing appears to have grown. The training budgets of organisations committed to investing in employees frequently amount to 3% or 4% of their salary bill - in other words, millions of pounds.
Nevertheless, as such large sums of money are involved at a time when firms are trying to cut costs, training departments are increasingly coming under pressure to justify their worth and the money they spend. They are also now realising that they can benefit from evaluation procedures. First, because these provide evidence which vindicates their work. Second, because information acquired on the quality of individual trainers, or on particular aspects of a training programme, means that courses can be improved, methodologies adjusted, and the relevant trainers advised or admonished. Clare Roxburgh, training evaluation officer at the Royal Bank of Scotland, believes that evaluation procedures can also enhance learning: pre-training evaluation questionnaires, for example, focus the minds of trainees on what they should be learning from the course.
What exactly is involved in evaluation? A clearer picture can be gained from looking at the procedures being used by some of the organisations who attended The Industrial Society's conferences. But to start with, there are some overall precepts. First, training exists to serve the company's business goals. 'The training department must understand these goals - whether they are to do with cutting costs, expansion or launching new products - and should cover them in order of priority,' says Malcolm Jones, training and development manager at Sun Alliance Life and Pensions. In addition, the involvement of the line managers is crucial. Their participation is needed to define the objectives of the development programme. Take ICL, for example. It is a company with a 'learning culture' which has developed a sophisticated system for evaluation. Managers attend 'training impact workshops', along with the other 'stakeholders' in the proposed training programme, and help to hammer out the objectives.
Manager involvement is also essential when staff return to their jobs. Gwyneth Henderson, head of training at the BBC World Service, tells her managers: 'We don't train your staff, you do. We give a core of training; we give people the opportunity to mix with and learn from others; but if they then go back to their departments and are allowed to become sloppy and aren't encouraged in good practice, the training has been wasted.' At the motorway service chain, Welcome Break, which ran a successful training programme with the DDI Consultancy aimed at encouraging supervisors to adopt a more managerial role, the managers were themselves sent on the programme first, to make sure they understood it.
It is crucial that evaluation runs through every stage of training, from planning, through design and delivery, to reviewing. Jones believes that evaluation is a 'philosophy' encompassing the entire training process, not a 'one-off task', dutifully tacked on at the end. Barry Kitson, training consultant at ICL, shares this view: 'In the training chain, all stages are equally important - you're only as strong as your weakest link.' Forrest describes this as the 'endless belt' of development. Any training or development scheme, he believes, must be anchored in a specific business need - improving customer service, or productivity, for example. These needs will involve development objectives, for an individual or for a team, which must be defined. As a result, front line staff, for example, could be made more confident in spotting sales leads and negotiating sales. Many courses run by the Royal Bank of Scotland are aimed at creating a change in attitude so that staff feel comfortable with the idea of selling.
Next comes the choice of learning process. Some trainees learn more from individual coaching than from group courses; some prefer reading; others would benefit from work-shadowing in a company other than their own. The Industrial Society has listed 48 ways of developing people, and would encourage managers and trainers to be imaginative in their approach to development. 'Think big, look beyond your own organisation,' is Andrew Forrest's advice.
Remember, he adds, that 'development is greater than training, which in turn is greater than courses'. Indeed, the answer, in some cases, might be that training is not the solution. Better communication across an organisation might depend instead on a change in structure. But, assuming that the employee has experienced some form of training, and that knowledge and skills have made inroads into the trainee's understanding, this new-found knowledge must be used and reinforced in the workplace. The next stage in the 'endless belt' of development is that not only the individual, but the organisation, should benefit as a result.
So much for the underlying philosophy. What about the practicalities? Researchers and practitioners in the field are generally agreed that evaluation should take place on four levels. The first two, and the most commonly used, are course-related. The second two are concerned with the effect on job performance and on the company's business results. Each level, however, has its own difficulties and risks.
The first level covers the content, materials, methods and instructors for the training and the trainee's immediate opinions of it. Methods include interviews, discussion groups, feedback sessions, surveys and, of course, questionnaires. Kitson believes that it is misleading to call the latter 'happy sheets', since they do serve the useful purpose of sending out early signals if trainers are not up to scratch. He also warns, however, of the risks of misinterpreting trainee responses to a course: 'Those who find the course challenging may give low scores; and a popular tutor, recording high scores, may not be giving long-term benefit.' Gwyneth Henderson also points out that the euphoria induced by a two-week course away from the real world may not leave the trainee in the best frame of mind for a considered judgment; and, contrariwise, that trainees might find it difficult to be critical of a trainer under whose tutelage they had spent the past two weeks.
The second level aims to measure whether learning - of concepts, skills, facts and techniques - has taken place on the course. This presupposes that one knows the degree of knowledge, competence and confidence possessed before the course, and therefore requires tests before training as well as after. 'Trainers find pre-tests time consuming,' says Jones, 'but the golden rule is that you must test the current levels of expertise before any training is delivered.' Level two evaluation can also take the form of role playing, skill practices and job simulations, with structured observation from the tutor. In all cases, however, trainers should bear in mind that what is being measured here is short-term memory, rather than the ability to apply the knowledge or skill. Kitson also points out that the score on a test depends on how challenging the course was: a score of 30% on a tough sales course might simply indicate that the salesperson had a lot more to learn on the job.
Level three evaluation also involves before and after comparisons, in this case to measure how far trainees have improved in their jobs as a result of their training. This could involve questionnaires for trainees, their line managers and their team colleagues, recording their perceptions and observations of behaviour, before and after the course, and again after three months: the differences, plotted on a graph, provide valuable information for managers and trainers, says Jones. It could also involve long-range follow-up, checking after six and 12 months whether learning is being applied in the job. Kitson at ICL has, on occasion, made use of control groups to measure the impact of training, although this has its own problems - can one afford not to train the control group, for instance?
Level four evaluation tries to measure the impact on the organisation, that is, the positive effect on business results. This could be measured in quantifiable terms - such as increased sales, cost savings, improved work output, reduced error and complaint rates or lower absenteeism - which can be compared with the full cost of training. Or the positive impact could be seen in the company's savings through being able to promote from within - Westminster Press, for example, which has a highly developed journalist training programme, can recruit all its editors from within its own ranks.
It must be said that level four evaluation is the most difficult and therefore, as yet, the rarest. Some believe in the real value of non-financial outcomes; others emphasise that in an age when senior managers are often accountants by training, 'soft', people-related or behavioural measures do not carry enough weight.
Certainly the benefits of some training programmes are not always obvious. At ICL, for example, concern that technical staff were becoming constricted by their specialist work gave rise to an educational programme. This, however, had no direct links to the work the staff were actually performing. Trainers were asked to evaluate the benefit of such education - a less tangible measure than days lost or pounds spent. Nevertheless, Kitson says that he and his colleagues were able to show the benefit of the programme - in such areas as developing people for their next career moves - simply through the weight of statistical evidence.
Another important point is that evaluation is an expense in itself. For this reason, ICL focuses its evaluation procedures only on its 'high value' courses (around 20% of the total). It chooses not to waste high-level evaluation on the smaller courses.