Dealing in Absolutes

I work with teachers both as initial trainees and as experienced teachers wanting support, and one of the things I encounter a lot in these contexts is the tendency, or perhaps need, that developing teachers have for a Correct Answer, and the challenges associated with this. Someone gets some feedback and is told that they should have done practice X, then they do practice X in a subsequent observation and are told that they shouldn’t have done it, often by the same observer.

Take for example the fairly straightforward classroom task of giving instructions and giving students the relevant handout, say, for a gap fill task. Which one do you do first? Give the instructions then the handout, or give the handout before the instructions?

What about learning outcomes – is it always necessary to share them at the start of the lesson, and to reflect on them at the end of the lesson?

Does an ESOL reading task always have to include a gist task followed by a reading for detail task? Is it always necessary to ensure that you pre-teach vocabulary at the beginning of a receptive skills lesson?

Is a lesson always improved by the application of digital technology? Does it always mean higher levels of engagement and motivation? Are all learners excited by digital technology?

Do learners have to interact and discuss with each other to enable learning? Could they learn equally well with a single lecture delivered in an engaging and interesting way, interspersed with questions?

There are people out there who will have a strong opinion one way or the other, and argue in absolute terms to the exclusion of other possibilities. These are also people who like the idea of best practice, and the notion that you can take a given practice and bundle it into neat nuggets for mass application. Even better if there is a piece of evidence for it somewhere, because then they can go “ah ha! Evidence based practice! I’ve got a study over here that proves, PROVES, I tell you, that this works all the time for everybody!”

I hate myself for that dig, because it sounds like the kind of comment that a certain type of stick-in-the-mud teacher would make about evidence based practice. You know the type of teacher, right? The ones who argue that “it’s worked for me so far, so why change?” This is not me, I hope, although as my fortieth birthday approaches, I can feel a kind of conservatism creeping in around the edges of my professional, moral and political sensibilities. No, the point of the dig, I think, is that even with the support of an evidence base, it is impossible to ever say that any given practice works all the time for every learner, every teacher, and in every lesson. If there is a study that stands in favour, or even a cohort of studies that demonstrate that a given practice works, it still may not work in a particular lesson. That’s not to say it isn’t something worth exploring or using, only that you may find it doesn’t work. And if it doesn’t, then it’s worth exploring, of course, why not.

There are challenges here with the nature of the observation, particularly when it is in the setting of a graded observation with high stakes consequences, and indeed this is one of the issues with this sort of observation. It creates a tension in the discussion that isn’t there for a peer developmental observation, for example. What often happens is that the feedback is assumed to be a universal statement of best practice, and the “just one lesson” element gets forgotten. But then who can blame a teacher for interpreting it in this way when it is “just” that lesson that is going to dictate whether or not they have to go through weeks, possibly months of anxiety over whether they may or may not have a job to come back to. For whatever reason, however, when they forget this they also forget that in a subsequent observed lesson the exact same advice may be inappropriate. Yes, you should have given out the handout first, but next time I come to observe you, I’m going to say the other way round, because that’s what would have worked better in that second lesson. What teachers need to do, and what high stakes grading makes difficult, is reflect on what factors contributed to that particular thing not working, and evaluate it in relative terms.

This isn’t carte blanche to reject all advice. Not at all. Teachers need to think properly and hard about what happened, and to analyse that in terms of what they do and what they want to do in the classroom. Perhaps the lesson would have been improved with the suggested change, but it’s up to you to decide what that information means for you as a teacher. Acknowledge the feedback and the ideas that the other person has suggested, and take them on board. Do something with the ideas, even if it is to go off and critically research the thing they said and see if they are right. “Research” here could be googling it, doing a bit of simple action research (essentially try it and see if it works), or a more extensive piece of study: it’s up to you. Hell, go watch someone else do it, ask the observer if you could come and see them do it, whatever. The act of reflection and of finding out for yourself is a great learning journey, and one which will bring about change to your practice, one way or the other.

This act of reflection and research is where improvement comes from. Good practice doesn’t exist in neat off the shelf “packs” and “toolkits”. This job is not a set of mechanised principles which anyone can apply and achieve the same results, else why bother with any kind of CPD activity? Why not just say “do this and lo! Learning will happen.” This is not a job of absolutes, of the application of single interventions to achieve specific outcomes (although the value and impact of different interventions is always worth exploring in terms of your own practice). Rather it is a job of grey areas and continuums, of degrees of value and variables. The only absolute is that everything we do, and every bit of feedback we receive, has to be evaluated carefully and honestly in terms of our own practice, our own settings and contexts, and against our own lessons.



  1. Hi,

    Interesting read. Do you think there are any practices which work for everyone everywhere?
    Conversely do you think anything can be useful regardless of ‘evidence’ as long as the teacher feels it’s useful?

    1. My gut instinct answer to your first question is that there are no practices which are universally “good” and that all practices are open to principled rejection (i.e. not “I don’t like it so I won’t do it.”) I think that there are things which have a good chance of working for most people most of the time, but even then all I could convincingly ever say about that is that they work for *me* most of the time. Any piece of advice I might give someone is open to interpretation and may in practice work differently for that person owing to the variables in the setting (different teacher, different students, different subject, different butterfly flapping in a different rainforest…).

The second question is the more challenging one. Partly because there are practices which I whole-heartedly reject based on evidence. Or rather reject because the evidence supports my own hunches and experiences. I think we have to think about what constitutes evidence – micro-action research and reflective practice are important, but there needs to be an analytical element which weighs up the reasons why something worked or not. To take a well-beaten example on this one, imagine a case where a teacher uses a range of different input methods in a lesson according to a putative learning styles analysis (which will almost inevitably show that there are lots of “styles” in the class). This works well and the class are fully engaged for lots of the lesson, and at the end of the lesson a short assessment activity shows that the learners did indeed learn something. What the teacher has to do is very carefully and critically analyse the lesson, incorporate a bit of research, and realise that the reason they learned lots is because they were engaged by the variety of methods used, not because the methods engaged their specific styles.

      (That final point is not a justification for learning styles being on the teacher training curriculum, mind you, because it would be much more useful and valuable for trainers and developers to simply teach “be interesting and varied”.)

      Essentially a teacher needs to look at *all* the evidence available, in terms of both their own self-evaluations and reflections, peer feedback and wider reading. A “feeling” on its own is not enough.

  2. Thanks for your reply. I see where you’re coming from in both cases. I think I disagree with the first. Would you be able to say ‘practice’ is context dependent? Students need to practise. And also ‘input’ strikes me as context free. That is, in order to learn a language, all students, everywhere need to be exposed to it and they need to practise. Can these things really be dependent on context?

  3. I’m not sure if we’re slightly at cross purposes – the practices I meant were teaching techniques and methods, rather than English language learners’ practice of language. Or I’m being a little dense in reading your reply, which is distinctly possible.
