I’d like to clarify something, just in case you hadn’t worked it out by now. I’m not opposed to observation. I think observation is a really good tool for teacher development, when there isn’t a grade getting in the way. It’s about angles and perception, you see.
I had my formal observation last week, and I was lucky (?) enough to be observed not only by an internal observer, but also by an external consultant. I also noted my own reflections on the lesson immediately afterwards, and the three commentaries that came out of it are interesting to compare.
The first interesting point is about the positives. I’m not naturally inclined to make a note of what went well. It’s like Murphy’s law: we think the toast always lands butter side down, or that the other queue in the supermarket always moves faster, but in reality the probability is pretty balanced: we just don’t notice when things go our way. It’s the same for me with a lesson: I don’t really notice the things that go well, only those that don’t. So it’s useful to have someone say which bits you are doing OK at: it makes you want to keep doing them and refining them. It was also nice to note that a particular strength this time round, targeted questioning, had been highlighted as a specific area for development in my last observation. Mind you, I have a suspicion that this has more to do with the set-up of the lesson (workshop-individualised vs whole-class teaching) than with any specific skill on my part.
All three commentaries highlighted more or less the same areas for improvement, however. What was interesting was the prioritising of those things. Although I only got written feedback from the external observer, there appeared to be a slightly different slant on which things were the priority, though there is every possibility that this lay in the phrasing of the feedback rather than in the things being said. Even so, this does highlight what is both the fatal flaw and the huge benefit of being observed: subjectivity.
Every effort is made to reduce subjectivity in lesson observation: lists of criteria are drawn up, standardisation takes place. Criteria are the backbone of most teacher training courses, for example, and these are often highly detailed. CELTA, as a good example, has detailed and fairly prescriptive observation criteria, which can sometimes make it possible for you as a trainer to pass someone’s lesson when every bone in your body is saying it should fail, as well as make it much easier to fail someone who has clearly not met those criteria.
When those criteria are loosened, for example where they need to apply to a whole range of different teaching contexts in an FE college, the influence of subjectivity becomes much more pronounced. We are human, after all. In the case of my lesson, the commentaries reflected the same points, but my own perception, for example, centred on the structure and planning of the lesson, and how this limited my ability to monitor and support in-class activity, while some of the issues around tracking and recording feedback were further down the list. This was more or less inverted in the observer feedback, which is useful. Because you dwell on the bits that went less well, you end up in a kind of internal feedback loop which emphasises those points more and more and blows them up out of all proportion. The observers’ feedback brought those issues to light a lot more clearly and directly, and I think had I been left to my own devices I would probably never have got round to them. The background and subjectivity also came up in the ideas for tracking and recording feedback. The external observer, predictably for an OFSTED-trained inspector, had a very explicit emphasis on using the VLE to do this, whereas the internal observer only suggested the VLE as a possible tool, with an openness to alternatives. (I say “predictably” because, based on the Common Inspection Framework, using a VLE is the be-all and end-all of e-learning, which pretty much says it all about how up to date OFSTED is.)
So yes, subjectivity. It’s both the potential major flaw in observation and its major benefit: it can go wrong when the criteria are loose and an observer has preconceived ideas about good and bad. That was the problem with my own perception: I don’t like workshoppy lessons. I find them hard work both to plan and to manage, while recognising that they have value. My own view was skewed by looking critically at the teaching and the learning, because I live in my head, and the world is filtered through that head. This is hard to step outside of. The observers, while looking at the learning, of course, were also looking at the learners more carefully, trying to find out in forty-five minutes things that I had, or indeed hadn’t, found out in nine months. I’m not sure about the value of the lesson observation as a proxy audit of a whole year’s teaching, but this was also present in aspects of the feedback, both positive and negative, and again, this is something I wouldn’t always pick up.
We can’t help but be subjective, but a lesson viewed from more than one angle can be useful. We get insights into things we might otherwise miss, details and ideas which might not have occurred to us. With a set of criteria as a touchstone, we can expose ourselves to the subjectivity of others, challenging our own subjective take on a lesson. And challenge is always a useful thing.