The level of ignorance about evaluation amongst the training community is alarming, and it is actually getting worse rather than better as more and more self-proclaimed experts come onto the scene offering unhelpful and unnecessary ‘new’ models that just add further confusion. Evaluation is a very simple subject if you keep it simple. It is not about ‘proving’ that training works or justifying training spend, but it plays a crucial part in the learning cycle, providing that all-important feedback loop. However, because most trainers misunderstand its role, they either over-complicate it or under-estimate its importance.
You don’t need to know much about training evaluation to know what happy (or smile) sheets are – they are the most simplistic and rudimentary form of evaluation, and very unreliable – but if you persist with them you really need to know what you are doing if you are to produce any meaningful evidence or learn anything in the process. Strictly speaking, happy sheets are not evaluation* at all, but let’s keep things simple for now. To illustrate my point I am going to bare my soul – or at least share with you some painfully honest ‘happy sheet’ feedback from a very short mini-workshop I ran recently; although it wasn’t me who chose to hand out happy sheets. Funnily enough, the subject was evaluation, for a group of trainers who should already have known how to use evaluation had they received the right professional development; but that is another story as well.
Now, like any ordinary, sentient human being, I find it very easy to be flattered by positive responses and reluctant to pay too much attention to negative feedback. These are very natural, understandable reactions, but there is not much point handing out happy sheets unless you are prepared to follow through on both the good and the bad. If happy sheets have any use at all, they have to be part of an evidence-based, highly professional learning methodology: so how would that be different to a conventional approach?
First, the evidence-based trainer would not bother handing out happy sheets unless they had already gauged, before the event, the learner’s own baseline – how much do they already know, and what else do they need to know? They would also need to establish the potential value of the learning experience (in $’s) beforehand, because that will significantly influence the trainee’s motivation to learn. Without this Baseline step there is no evidence base for the training and no way to gauge the trainee’s expectations.
Second, having ‘happy’ trainees at the end of a training session is not evidence of success, or of value in $’s, which is why* happy sheets are not evaluation: no professional trainer would ever pretend otherwise.
Third, a professional trainer does not regard having ‘unhappy’ trainees at the end of a training session as a failure. Any ‘session’ is only one step in the learning process. The unhappy sheet reveals only one thing (assuming they needed training in the first place): that their ‘problem’, which is also the organisation’s problem, is still to be resolved. If the trainer cannot deal with it, someone else will have to.
Now for the painful bit. In my session the feedback ranged from “excellent” to “really poor”, with one participant feeling it was “nothing new” – a criticism that was unintentionally a great compliment. So what does an evidence-based trainer make of that? How can the same session, from the trainer’s perspective, be both excellent and really poor at the same time? There is a very simple and extremely old answer to that one. As Ralph Waldo Emerson put it, “’Tis the good reader that makes the good book” – and only the receptive learner can make the good training session.