Peter Drucker is possibly the world’s most quoted management consultant, widely credited with the old adage that ‘what gets measured gets managed’. The implication is clear: auditing is essential if any asset or process – branding and marketing included – is to be managed with any real conviction. The history of business in the 20th century reads like a case study in the effectiveness of a scientific approach to business. In 1903, DuPont introduced formulae for assessing return on investment. Around 1910, Henry Ford became an early practitioner of ‘Just In Time’ (JIT) production. And towards the end of the century, W. Edwards Deming created the System of Profound Knowledge, which underpinned both Total Quality Management (TQM) and Six Sigma. In 1995, Jack Welch made Six Sigma central to his business strategy at GE, and by 2001 had famously made GE the most valuable company in the world. The moral of the story seems clear: scientific methods make businesses leaner, more efficient and better organised.

Marketers were a little slow to jump aboard the measurement bandwagon. It wasn’t until 1956 that Wendell Smith published an article in The Journal of Marketing in which he introduced the concept of market segmentation as a way of maximising the sales potential of a product. In the same year, Ross Cunningham coined the term ‘brand loyalty’ in his HBR article, “Brand Loyalty – What, Where, How Much?” Since then, each successive decade has introduced new concepts and techniques for the measurement of brands. In the 1960s, Daniel Yankelovich evolved Smith’s work into a model of consumer segmentation based on lifestyle and attitude rather than demographic factors.

Yankelovich also coined the term ‘McNamara Fallacy’ – but more on that later.

Customer-based brand equity studies were created and made fashionable in the 1980s by advertising agencies keen to demonstrate the effectiveness of their work. In 1988 Interbrand created a methodology for what some people consider to be the ultimate measure of brand performance – financial brand value. The dotcom boom of the 1990s and the subsequent explosion of digital technology and social media in the 2000s have exponentially increased our capacity to measure people’s responses to brands, as well as our ability to apply statistical and econometric techniques that add depth to our understanding. In 2010, Professor Byron Sharp applied his measurement prowess to bust any remaining myths about marketing in his book, “How Brands Grow: What Marketers Don’t Know”. The book contains “scientific laws” about how to build and maintain successful brands. Marketers who fail to appreciate the wisdom of a scientific approach to marketing are likened to medieval doctors, “working on impressions and myth-based explanations”.

People like Byron Sharp make a compelling case for marketing science. If what gets measured gets managed, then the vast improvements in brand measurement we have witnessed over the past sixty or so years should have resulted in a correspondingly significant improvement in how brands are managed. But it’s difficult to find evidence for this. Banks are no more respected, trusted or liked than they were sixty years ago: according to a PwC report published in October 2014, fewer than one in three customers now trust their bank. Supermarket shoppers are no more loyal: a survey last year by retail monitoring group IGD found that the typical shopper now flits between four different supermarkets to stock up their kitchen, while the proportion of people using more than one store on a single trip has risen from 42 to 47 per cent. And our ability to measure the impact of our work has not improved how marketers are perceived by the broader business community: a Fournaise Marketing Group study in 2012 found that 80% of CEOs were not impressed by the work done by marketers and believed marketers were poor business performers.

It would be crazy to suggest that data don’t have an important role to play in making smarter decisions about brand-building. I use qualitative and quantitative data to improve the quality of my work every day, and marketers would certainly be in a far worse position if we didn’t measure brands at all. Data are extremely helpful for improving media efficiency and gauging the short-term impact of campaigns, particularly online. But we must be doing something wrong. Why do we seem to have reaped such a poor return on the $40 billion we invest globally each year in measuring our brands? And how will brand measurement evolve to create more value in future? There seem to be three possibilities:

  • there are measurable drivers of success that marketing science has so far failed to recognise, whether through lack of technology, investment, insight or imagination;
  • we are measuring all the right things, but failing to translate our knowledge into meaningful action;
  • or there are intangible drivers of marketing success that simply defy measurement.

The first option is a possibility. We are constantly creating new business models and routes to market; marketing science will almost certainly need to evolve in line with these. But this has always been a game of catch-up: first we create new routes to market, then we use marketing science to help us exploit them more efficiently and to greater effect. You can’t analyse how to improve a process or marketing channel until it has first been created. This is a critical weakness of the scientific approach: it can only identify incremental improvements in marketing mechanisms that are mature enough to provide a robust data set. But it’s impossible to create radical future business models by analysing patterns in today’s data.

The second option seems more likely than the first and evokes the parable of the seven blind men and the elephant. The story is supposed to have originated in India and relates to the difficulty of establishing truth when information is limited or difficult to obtain. Seven blind men decide to touch an elephant to learn what it is like. Each touches a different part of the elephant, and when they compare notes they find themselves in complete disagreement. The man who touched the elephant’s leg believes an elephant is like a tree. The man who felt the elephant’s tail considers an elephant to be like a rope. The man who pushed up against the elephant’s side concludes that an elephant is very much like a wall. And so on…

In the context of brand measurement, the story warns that the approach you take to measuring a brand – the assumptions you make, who you involve, how you analyse the data – has as much influence on the outcome as the state of the brand itself. BrandZ, WPP’s brand equity tool, quantifies a brand’s strength (based primarily on loyalty) and can be used to assess both the current performance and the future potential of a brand. IPSOS has its own proprietary approach to brand measurement – Perceptor®Plus – which also claims to ‘uncover the present strength of the brand as well as the future direction in which it is moving’. I’ve seen hundreds of these types of study. What’s striking is that they reveal a lot about how WPP, IPSOS and their competitors define brand success, but demonstrate little sensitivity to the specific challenges faced by the brands they seek to understand. This is not a question of scientific rigour or statistical certainty, but of pesky intangible qualities, like intelligence and empathy. What you measure should reflect the specific challenges your brand faces. It should work backwards from the decisions you need to take. Brand measurement is an art as much as it is a science, and the best marketing scientists demonstrate this in the way they design their research and the insight it yields.

Not only does marketing science itself depend on intangible qualities such as creativity, empathy, intelligence and imagination, but these unmeasurable qualities also define the limits of what marketing science is able to measure in the first place. This is where Yankelovich’s ‘McNamara Fallacy’ comes into play.

Robert McNamara was ‘the can-do man in the can-do society in the can-do era.’ He had an MBA from Harvard Business School. He was the first President of the Ford Motor Company to come from outside the Ford family. He was President of the World Bank for 13 years. These are mammoth achievements, but they are dwarfed by an even more titanic failure: he was also the US Defence Secretary who presided over America’s disastrous involvement in the Vietnam War. JFK recruited McNamara to the position with a brief to improve the efficiency and effectiveness of the armed forces. McNamara shunned the advice of experienced military leaders in favour of systems analysis to make key decisions during the war. With the benefit of hindsight, many of these data-led decisions proved to be wrong. His life reads like a case study in the potency of analytics… and in its potential to misdirect and mislead, with calamitous consequences. McNamara’s legacy is a sobering warning to anybody who wants to use scientific approaches to make decisions where unmeasurables are involved. It is encapsulated in what has come to be known as the ‘McNamara Fallacy’, the gist of which runs something like this:

You begin by seeking to measure what is important, but end up only attaching importance to the things you can readily measure.

This is a significant issue for management science in general and marketing science in particular, since much of what we seek to measure is intangible. There are plenty of attributes relevant to business processes that we can measure easily enough. But there are also plenty of unmeasurables: for example, the need for community, enterprise and adaptability within an organisation. The problem for marketing scientists is that it is frequently the unmeasurables that make all the difference. Science has played an important role in helping specific areas of marketing to work better, but it’s important to be realistic about the limits of analytical approaches. A great evaluation framework embraces its own limitations. Being clear about all the aspects of a brand you can’t measure – and acknowledging the importance of these – is vital to the responsible and prudent interpretation of marketing data. This is why so many enlightened marketers despise the cliché quoted at the beginning of this article: because ‘what gets measured gets managed… and nothing else’.

Intangible virtues such as creativity, passion, commitment and leadership play a role in brand value creation. Just because this role can’t be quantified doesn’t mean it isn’t significant. Absence of evidence isn’t the same as evidence of absence. And even if we could measure all of the intangible qualities that make brands work, it’s far from clear that this would help us to improve how we manage brands. Take creativity as an example. One potential measure of creativity is quantity of output. Another is quality of output. But the ability to measure the quality and quantity of creative output doesn’t necessarily mean that creativity can be managed any better. Back in the USSR, Central Planning was fabled to have issued a directive rewarding nail producers based on the weight of nails delivered. The result: a small number of giant nails. In response, Central Planning issued a new directive, rewarding producers on the quantity of nails produced. The new result: millions of very small nails. Economist Charles Goodhart is credited with the observation, now known as Goodhart’s Law, that ‘once a measure becomes a target, it ceases to be a good measure’.

If the McNamara fallacy explains the limits of measurement approaches, then Goodhart’s law explains the dangers of adopting a rigid approach to measuring brands, particularly in terms of setting strategy and targets. Historical relationships are likely to vanish the moment they are translated into a system of ‘best practice’ rules. You cannot brand by numbers. You cannot build rules around yesterday’s prevailing mechanisms in the hope that these will persist. You cannot enforce through a system of dictums what should be promoted through creativity and enterprise.

Art and science aren’t incompatible, in the same way that business and pleasure aren’t incompatible. Nor are they substitutable. They complement one another. It would be absurd to argue that management science doesn’t have a valuable role to play in making marketing more efficient or effective. But it is equally absurd to deny that there is an art to making brands great. Businesses rely for their success on people and personality as much as they do on protocol and process. The point of a strong brand is to give its owner authorship over the future. Marketing scientists who encourage a rule-based, law-driven, mechanistic view of the world do far more damage than good. Not everything that can be counted counts. And not everything that counts can be counted.
