Those companies which truly understand the importance of satisfied - and, better still, loyal - customers are already refining their measurement methods.
We all have our horror stories on the subject of poor customer service: goods out of stock, phones left ringing, assistants either surly or oozing bonhomie but hopelessly inefficient, that sort of thing. Now there is hard evidence that companies doling out this sort of treatment stand to lose serious money. A recent report from the Henley Centre for Forecasting concludes that an individual who would usually spend £100 a year with a given supplier is likely to spend only £266 - roughly half the usual amount - over a five-year period when customer service is found wanting. And that customer is likely to pass on the bad news to friends or colleagues, making it less likely that they, in turn, will part with any money to the offending supplier.
Reinforcing the point that a satisfied and loyal customer is the best customer, Frederick Reichheld, author of The Loyalty Effect, makes the argument that, 'Good long-standing customers are worth so much that in some industries, reducing customer defections by as little as five points - from, say, 15% to 10% per year - can double profits'.
Rather than trumpeting further vain promises to delight its customers, any company intent on improvement will therefore start by measuring the satisfaction, or otherwise, of its existing customer base. This is virgin territory for the many organisations which fondly imagine they are doing well, even though 80% of customers apparently think UK companies do not value them.
So how to go about it? First, note that in leading companies the focus has shifted away from trying to measure customer satisfaction as such towards monitoring more precisely the determinants of customer loyalty.
3M, for example, has now set itself a very stiff 'Top Box' measure, identifying customers who are 'completely satisfied'; would 'definitely recommend' 3M; and would 'definitely repurchase'. The company's target is that some 50% of customers should be Top Box loyalists.
Note too, that whatever the methods of measurement used - surveys, focus groups, individual relationships, benchmarking - there are two distinct objectives. On the one hand, you are trying to discover how important particular issues (queueing time, delivery schedules, price, staff friendliness) are to your customers. And on the other hand, you need to know how well your company is doing in these critical areas, to track performance over time and to measure improvements. This applies whether you are the Royal Mail, measuring the time it takes for first and second class letters to reach their destinations, or a bank or hotel chain, tracking people's perceptions of your service. 'You must separate out these two aspects in the analysis,' stresses Rory Morgan, R&D director at Research International, the market research consultancy. There is, after all, little point in being quite perfect in activities that don't mean much to your customers, cultivating those friendly human relationships when all your customers want is swift 24-hour service best dealt with by a machine, for example.
Quadrant analysis, with 'How important?' and 'How well are we doing?' as the two axes, can be extremely revealing here.
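To make the idea concrete, here is a minimal sketch of quadrant analysis in Python. The attribute names and 1-to-5 scores are purely illustrative, and the 3.5 cut-off dividing 'high' from 'low' on each axis is an assumption for the example, not an industry standard.

```python
# Hypothetical survey results: for each service attribute, an average
# importance score and an average performance score (both on 1-5 scales).
attributes = {
    "queueing time":      (4.6, 2.8),
    "delivery schedule":  (4.2, 4.1),
    "price":              (3.9, 3.7),
    "staff friendliness": (2.5, 4.5),
}

def quadrant(importance, performance, cut=3.5):
    """Place an attribute in one of the four quadrants of the
    importance/performance grid (cut-off chosen arbitrarily here)."""
    if importance >= cut:
        # Matters to customers: are we delivering?
        return "fix urgently" if performance < cut else "keep it up"
    # Matters little: effort here may be wasted.
    return "low priority" if performance < cut else "possible overkill"

for name, (imp, perf) in attributes.items():
    print(f"{name}: {quadrant(imp, perf)}")
```

On these made-up figures, queueing time lands in the 'important but done badly' quadrant, while staff friendliness - done well but valued little - illustrates Morgan's warning about polishing things customers do not care about.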
Morgan outlines a deceptively simple formula: 'Identify what is important to your customers; formulate these issues into key performance indicators; set these as company targets; and then, measure achievements over time'.
For most companies, customer surveys will be an essential ingredient in putting these precepts into practice. (The exceptions, such as Rank Xerox, which is moving away from surveys, and 3M, which has already left them behind, fall within the business-to-business category, of which more later.) Surveys can be broad (essentially, about brand recognition, but also comparing levels of satisfaction with those accorded to competitors) or, more frequently, transaction-based (BT, for example, conducts some 13,500 telephone interviews monthly with customers who have asked for a service or fault repair, made a request or a complaint). They may be conducted by telephone or on paper, or both. But the key is that they should be couched in words that customers actually use, asking questions which make sense to them.
Only then will the scoring mean something, remarks Graham Clark, senior lecturer in operations management at Cranfield School of Management and an expert in the field. The language should be straightforward and jargon-free: bear in mind that customers are not interested in the mechanics of how you run your business, but in what it offers them. The survey should also be as concise as possible: Birmingham Midshires Building Society, for example, this year's overall winner of the Management Today/Unisys Service Excellence Award, sends out surveys of just one page: five service propositions on the front, free space for comment on the back - and, one might add, the home telephone number of the chief executive at the bottom of the page.
This last item is one way of signalling that the questionnaire is more than just another piece of junk mail, which in turn encourages people to take part in the survey far more effectively than prize draws for champagne. Customers must feel that the company will pay attention to their comments and suggestions, says Clark, who argues that companies achieving a response rate of 15% or higher have reason to feel pleased.
'You must demonstrate that you take notice,' he says, by publicising what you do in feedback newsletters, for instance.
The exemplary Birmingham Midshires telephones every single customer who fills in the questionnaire - some 36,000 people a year on average, or 25% of the sample - whether to thank them or to apologise. And where there are complaints, these are not referred to a customer complaints department but to the person responsible, who is expected to put things right.
'You need to fix issues rapidly,' advises corporate quality director John Hughes, 'to say sorry and thank you.' The company's standards mean that a satisfaction score of 1 or 2 (out of 5) is taken as a complaint to be followed up.
Asking the right questions in the right language depends upon more than intuition and common sense (although some memory of what it is like to be a customer would not go amiss). Trends or areas of concern will be discernible from the surveys themselves, but more specific clues as to what customers value will emerge from the open-ended questions. And qualitative research, through focus groups and customer panels, is particularly helpful in prising out what customers really want, and should be fed back into the next generation of surveys.
So, for example, British Airways has long conducted surveys, both in-flight and face-to-face, with the focus on two undoubtedly critical aspects of the customer experience, namely, check-in and the performance of the crew during the flight. But, reports BA research consultant Radan Payget, the company now recognises that 'the journey process actually starts from booking the ticket and ends only when the passenger has left the destination airport'. It also recognises that 'customer expectations are constantly rising', not just through their experiences on other airlines but in, say, the better banks or supermarkets. The company has therefore updated its questionnaires, following a nine-month project involving qualitative research across five countries (30 focus groups in total, among Cantonese, German, Japanese, American and UK passengers), followed by a phase of quantitative research (2,500 respondents) using what are called conjoint or trade-off techniques. These try to quantify customer preferences - very cheap flights with no extras, compared with paying 10% more for hot towels and cuddly toys - and to establish absolute measures, such as how long people will tolerate queueing.
This research has been fed into the new questionnaires (out last month), where passengers may also notice that they are no longer being asked to grade aspects of service on a scale from excellent to very poor but to agree or disagree with the description of check-in, say, as very quick; efficient; and pleasant. This approach has proved more useful in giving specific pointers to staff, indicating areas where they might need training, and has been adopted by many leading companies. Ford, for example, asks all new owners about the details of the purchase process: whether the individual salesperson at the franchise dealership was knowledgeable, and helpful; refrained from pressuring the customer; offered a test drive; whether the car was clean, the petrol tank full; and so on. These data are fed back monthly to the dealers, so that they can modify processes and train individuals. The whole point of measurement, after all, is to improve the service. So BT, for example, uses the information gleaned from its thousands of telephone interviews partly to produce case studies for use in coaching.
When it comes to the satisfaction index from 1 to 5 included in most surveys, Clark's advice is that the definition of customer loyalty should be linked with the intention to repurchase. 'A number of organisations now only count 5 (very or extremely satisfied): people scoring 4 (just plain satisfied) have been shown to be not that loyal.' He advocates a 'healthy scepticism' about interpreting scores as having any absolute meaning since they are primarily useful to indicate trends. 'You must also measure the competition,' adds Morgan of Research International, explaining that this will primarily be through questions in your survey. And companies also learn from those outside their own sector: benchmarking exercises are now widespread. BA benchmarks against Ford, BT and Thomas Cook, for example; 3M benchmarks against Hewlett-Packard, Shell and BP.
'Remember also that the world doesn't stand still,' warns Morgan. Indeed, measuring customer satisfaction is a constantly evolving practice, as the example of BAA (particularly since the advent of chief executive Sir John Egan in 1990) illustrates. The company places customer satisfaction at the core of its mission, and has 'two and a half' ways of measuring it, says research director Stan Maiden. The 'half', he explains, has been conducted for the past 25 years, and consists of measuring the availability of the various systems in the airports: elevators, and the like. This provides an 'objective' record of service delivery but is a 'fairly poor method' of measuring satisfaction: it assumes that 99% availability would meet customers' hearts' desires when their wishes might be subtly different.
The other two methods are more substantive. First is the system of feedback dispensers, namely, several hundred comment card boxes dotted around each airport. These are useful in several ways, although respondents are self-selecting. Comments may require a reply, whether apology or thanks; and clusters of complaints (on smoking, say, or slow service in clearing tables, or queueing to get through customs and immigration) provide pointers towards areas requiring management attention.
Information from the feedback dispensers is 'cheap to get, immediate, and often quite graphic', and allows customers to get complaints off their chests. But it is not a system to use for comparing one site with another, or performance over time. Over the past eight years, therefore, a team of 200 BAA interviewers has been conducting a quality of service monitor (QSM), a continuous survey involving face-to-face interviews with 250,000 arriving and departing passengers at various airports annually. Scores can be tracked with some sophistication, and from this 'statistically robust' information, says Maiden, 'we can spot very clearly when things are going wrong' (or right, of course). And face-to-face interviews, in which passengers are asked for comparisons with the airport they have just left, are better suited than readership surveys to international comparison, he says.
Again, the mix of 'good and bad news' is followed up: learning from passengers that departure flight information was better displayed elsewhere, for example, BAA sent its people to find out how this was done, and to introduce the needed improvements back home.
Customer satisfaction with the company is now directly linked to remuneration: all senior and middle managers face the 'significant erosion of their bonuses if service criteria based on what the customers think are not met'. Different airports are targeted to different levels but the threat of losing a 'good chunk' of bonus is not a hollow one, Maiden stresses.
'People do lose their bonuses ... managers' interests are aligned with those of the passengers.'
Finally, what about 3M, which has effectively abandoned surveys, certainly for its key customers? Tim Hewston, manager of customer loyalty measurement in Europe, explains. 'With key customers, we send our own people, to talk to around 40 or 50 individuals in each customer (company) around the world, for one-and-a-half hours each.' The aim is to satisfy that individual, and to sort out problems; this is an on-the-record encounter, showing strong commitment, says Hewston. 'Few customers like surveys,' he points out. 'Customers tell us more, because their expectations of action are higher.' The 3M people like the process too, he says: 'Their task is to understand what the customer wants.' They report back to each individual with an action plan after two months, which is followed up and monitored.
They also report to the customer company as a whole. Two years later, they return for a further hour and a half.
For non-key customers, the process is similar but more general. 'We're trying to find out whether we are meeting our grand promises,' says Hewston.
'We want to be absolutely sure that what we promise, we can deliver.'