A recurring theme in this series, and a rather irritating obstacle to the further development of the evidence-based management movement, is the limitation of language – especially of the one word that is meant to underpin everything: evidence. What should be the movement’s greatest strength could prove to be its biggest impediment unless we clarify it once and for all.
If that task were not difficult enough, we have the added complexity here of concentrating on evidence-based human resource management, which has its own data measurement problems. Human data is different to most other types of organisational and performance data and just calling it ‘human analytics’ doesn’t suddenly transform it into evidence. In order to break down this dual barrier there needs to be more precise use of language and clear criteria as to what constitutes valid evidence.
We should not lose sight of what we are trying to do here, though – we define EBM as ‘making managerial decisions based on the best evidence available’, accepting that we do not live in a perfect world. The best we can do, therefore, is aim for the highest probability that we are using the best evidence available. Below is one set of pragmatic, working definitions (I don’t want any of my academic colleagues bogging me down in semantics) and you can add them to your favourite management checklists if they work for you.
1. Data, information and knowledge
First, let us make a distinction between three commonly used terms, whilst adding a few human insights -
Data – straightforward facts, statistics and numbers that do not provide a basis for decision making – e.g. the population of the UK is 60 million, the earth is 93 million miles from the sun.
Human insights – some people think they can make decisions based on such raw data saying the UK is ‘overcrowded’ or ‘we have too many immigrants’. Beware those who try to promote data as knowledge. Also, human error might have got the data wrong.
Information – a human being has processed all the relevant data in their head and started to draw some conclusions e.g. the population growth trend in the UK relative to available water resources, how long it would take to get to the sun – but it is still unusable in this format.
Human insights – information only has meaning when processed by human beings, and it is always unique to the individual. Once human beings are involved you can forget trying to regard this as a science. An engagement survey score of 52% can be seen as both a success and a failure, depending on your perspective, and will not change the attitude or behaviour of the manager who does not perceive it as having value.
Knowledge – here we will use the simplest, purest definition – you actually know something to be true. We know ‘1 + 1 equals 2’ because no one disputes the basis of the calculation, it is regarded as proof or perfect knowledge. Better still, we can see, touch and experience it for ourselves.
Human insights – you are rarely, if ever, going to achieve a state of pure knowledge in EB-HR and genuine evidence-based managers do not pretend otherwise. Beware anyone working in HR holding out the promise of proof. Managers should openly admit their knowledge is imperfect and always accept they have to continuously learn (yeh, right), rather than blame, because imperfect knowledge inevitably produces imperfect decisions.
2. Defining Evidence
These definitions, on their own, might not appear particularly helpful until we use them to define what evidence means to an evidence-based manager -
Evidence = Actionable Knowledge
This is really the only pragmatic definition. Decisions have to be made, regardless of how well they are made. At any point in time we just try to ensure we get as much knowledge as we can in order to act. So, in the south east corner of the UK, an action plan is required to continue to provide water according to the population growth projections and what is ‘known’ about climate and weather pattern predictions. Over time those decisions could turn out to be unsuccessful or appear to have been ‘wrong’ but taking no action at all is not an option.
What we need to do now is look at what data we might come across in the fields of HR, human capital and learning and how we can develop our own heightened awareness of what constitutes the best knowledge and evidence available and the skills to use them effectively.
3. Criteria for ‘best evidence’
Here is a simple list of questions to help you make intelligent choices about the quality of evidence presented to you and how you are prepared to use it.
Activity – avoid activity data like the plague. Probably the best example is ‘number of training days’ or ‘annual training hours per employee’. PwC/Saratoga (see page 6) still insist that knowing the ratio of HR people to FTEs is meaningful data – it isn’t. This type of data just tells you somebody is sitting somewhere, not what they are producing. See also INPUT and CORRELATIONS below.
Causation – both Gallup’s Q12 “Proven Approach” and Watson Wyatt’s* (now Towers Watson) “Human Capital Index” refer to their statistics as PROOF that their methods work. No credible EB-HR manager would dare to suggest this without establishing CAUSATION from the beginning: statistical regression mistakenly tries to do so after the event. Regression is air-crash investigation; EB-HR is aircraft design. EB-HR managers have a much simpler, more practical and convincing way to deal with CAUSATION – they go and ask the people using Q12 or the HCI to show their original, causative hypothesis (e.g. which particular employees in this department will sell more if they become more engaged). If they do not operate in this way, they have failed to use the best evidence available.
Correlations – in the absence of CAUSATION, the purveyors of very popular, off-the-shelf HR ‘solutions’ substitute CORRELATIONS, which is the lazy statistician’s way of making up data to suit themselves in order to make a fast buck – for example, suggesting there is data showing that the more a company spends on training, the better it will perform. Even if these spurious correlations are made to look statistically valid, the dimmest first-year statistics student will remember being taught not to trust CORRELATIONS**. Instead, the evidence-based learning manager will make sure training is designed strictly in accordance with the principle of causation, by only investing in training that is designed to deal with specific business issues.
Input – generally cost, time or man-hours expended. It cannot be turned into useful INFORMATION until it is set against a corresponding OUTPUT (e.g. how much did all this produce in terms of cars, bank loans etc.?). HR departments and training managers often resort to measuring input data (happy sheets) simply because they don’t know how to measure outputs. The worst ones think measuring inputs tells their bosses something about the way they work – it does, but not what they intended it to.
Output – The only thing that matters and produces value. Goods or services actually produced and sold, their costs reduced or quality improved and any extra revenue earned. Any HR or training not focused on these outcomes is not only non-evidence-based but a waste of resources.
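The INPUT/OUTPUT pairing above reduces to simple arithmetic. A minimal sketch, in which the programme and every figure are hypothetical, chosen only to illustrate the point:

```python
# Hypothetical figures – for illustration only, not real benchmarks.
training_cost = 120_000   # INPUT: spend on a sales-skills programme
extra_revenue = 450_000   # OUTPUT: incremental sales attributed to that programme

# The input figure alone tells you nothing; set against its output
# it becomes usable information (a return per unit of spend).
return_per_pound = extra_revenue / training_cost
print(f"Each £1 of training spend returned £{return_per_pound:.2f}")
# → Each £1 of training spend returned £3.75
```

The hard part in practice is the attribution of the output to the input – which is the CAUSATION problem discussed above, not a measurement one.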
Performance – a rule-of-thumb definition of a performance measure is that you should know which way the measure should move, which is not as easy as it sounds. For example, should your staff turnover or attrition rate go up or down? Should your ratio of HR to FTEs go up or down? Well, surely that depends on what they are doing. The key is for everyone involved to understand and agree what ‘good’ looks like. Stupid measures or ratios encourage stupid behaviour: reducing your HR-to-FTE ratio could seriously damage the business if you lose people in HR who were adding a lot of value.
Qualitative – a confusing term – the best definition is to view it as the opposite of OBJECTIVE. Some people use the word QUALITATIVE to mean SUBJECTIVE data (e.g. how much do you respect your boss on a scale of 1 to 5?) while others regard it as a measure of intangibles (e.g. engagement, empowerment etc.). It only becomes tangible EVIDENCE (actionable knowledge), though, when someone has a stab at making it OBJECTIVE by putting a potential value on it.
Quantitative – the meaning should be very obvious – it is simply data that has a number attached (the population is 60 million), regardless of whether the number has any intrinsic validity, purpose or application. It is often viewed, mistakenly, as the opposite of QUALITATIVE on the basis that any number or measure is preferable to an indistinct or subjective statement. But some things are extremely important without any numbers attached – take the question ‘do we have a financial regulation SYSTEM in place?’ It is not quantitative, and it is not really qualitative either – its existence can be clearly demonstrated – yet it is probably the most valuable element in evidence-based human resource management.
*Compare Towers Watson’s ‘proof’ with a contradictory statement from their own European Survey Report in 2000 – “(HCI) Demonstrates a very strong correlation between effective people practices and shareholder value…but on its own does not prove a causal link.”
**“It is important to state 3 caveats about regression models. First, correlation does not mean “cause”. The fact that one variable is correlated to another does not necessarily mean that one variable causes another…. Generally, you should conclude that one thing causes another only if you have some other good reason besides the correlation itself to suspect a cause-and-effect relationship. Second, keep in mind that these are simple linear regressions…. Finally, in multiple regression models, you should be careful of independent variables being correlated to each other…. regression modelling …is a useful tool, but proceed with caution.”
From “How to Measure Anything: Finding the Value of ‘Intangibles’ in Business” (2nd ed.), Douglas W. Hubbard, Wiley, 2010