I posted last time about an action research project I’d like to do. I’ve been lucky the last few years and engaged with a couple of research projects. The first looked at teachers and CPD, and most recently I’ve been thinking about the readiness of ESOL learners for blended learning and the impact that funding-enforced blended learning may have on them. The latter is probably the closest to controversial, both questioning the increasingly sacrosanct FELTAG recommendations and casting a critical eye on a direct college policy. But as much as I want to do that research, there’s a piece that I’ve been wanting to do for years.
It’s quite simple really. I would ask for two groups of learners of a broadly similar level, with a similar range of cultural and social backgrounds within each class. I’d devise a language test and give it to all the students. Then I would apply a particular intervention to one group, and not apply it to the other. At the end of the research period, I’d test the students again to measure which group had improved the most against their original test score. It’s not perfect, and ideally I would have at least one other teacher taking part at the same time. Also, at that scale it wouldn’t be something from which you could extrapolate much by way of major claims. However, it would represent the first ever evidence one way or another for the intervention I have in mind.
The intervention, and those who know me will now groan, is the setting of SMART targets for learning. I know, I’m like a stubborn dog with a particularly juicy slipper on this, but do bear with me. You see, that research would never happen. SMART targets are an integral part of the individual learning plan, and as such are pretty much unassailable. After all, they tick every ideological and performance management box: achievement of the target, runs the argument, supplies evidence of individual student learning. Criticism of the target is seen as criticism of the concept that the learning of the individual is crucial.
I agree that we need to think about how we are meeting individual student needs. Students need to know what they need to improve. I fully endorse the concept of finding out what students need to learn (although I hate the therapeutic-deficit label of “diagnostic assessment”) and then basing a course plan on those findings. I even think that writing those things down somewhere and then thinking about them later is dead handy for students, if not the be-all and end-all. This can all be achieved without the atomistic breaking down of a learning goal like “use past simple irregular verbs” into trite, meaningless targets based on the measurable occurrence of the target language, like “write five sentences using past simple irregular verbs”.
It’s a small distinction, perhaps, but it creates an essentially false impression not only for teachers but, most damagingly, for students: “I have written my five sentences, now I know past simple irregular verbs.” Or worse, “Now I know past simple in English”. Is it really that we are only expecting them to be able to write just five sentences, or is it implied that we are expecting them to know past simple irregular verbs? If it’s the former, you have to consider the reliability of this as an assessment task: once the target has been achieved, could we say with any certainty that it could be achieved with the same result at a later date? If it’s the latter, and the target is simply an evidenceable proxy for a larger learning goal, then there are issues around validity: how does achieving the target genuinely provide evidence of that goal? Neither question can be answered cleanly, or in my view, convincingly in favour of the target. It’s problematic at best, deeply flawed at worst.
My point is this. In my putative study, the only difference between group A and group B would be the targets. My non-targets group would still be given detailed feedback on their work, would still be told which areas were problematic and this would form part of the students’ reflections on learning. (In fact, I’d probably become a darn sight better at formally recording feedback and getting students to reflect on this as a result of doing this kind of research). The only thing that would not occur would be framing those development points as SMART targets. That is all.
This research will never happen. No ESOL department in the country would currently countenance such a risk: “Ofsted could come.” “It’s best practice.” Targets are too deeply enshrined in the culture of ESOL, a fetish of the cult of the individual, for them to be questioned or challenged. ESOL is embattled, beleaguered by political unpopularity (’cos everyone loves immigrants, right?) and by funding cuts, which perhaps makes questioning accepted practices a challenge. If we challenge SMART targets, we challenge Ofsted, because ILPs are writ large over the Common Inspection Framework. ESOL inspectors will most likely have been teachers during the heyday of Skills for Life, when funding was high and class sizes small, and the rhetoric of individualisation was a practical reality. Show me an inspector who questions target setting, go on. If an inspection goes badly, we suffer. If we suffer in terms of our performance, then questions are raised about our value and our worth. Once those questions start being asked, money and support begin to dry up, diverted to other areas or other providers who don’t do things like ask awkward questions.
What is harder is that I get this. I understand the sense, almost, of fear that comes with challenging the status quo. I may be a bit of an idealist, but I’m also a lot of a realist, who recognises that sometimes there are hoops through which we must jump. There is a direct link to funding for non-accredited provision through the RARPA process, where evidence of learning is supplied in the form of targets achieved. This link means that the targets are now absolutely stuck (although the case could be made for an internal pre-test and post-test of the same set of language points drawn from the curriculum, presenting any difference as evidence of improved performance. It would be about as meaningful as the target.)
I do them, of course, don’t panic. My learners have ILPs for their ESOL classes, and there are targets on them. I don’t do it out of integrity or professionalism, however, but out of cynical pragmatism. I simply don’t believe that targets work (and this is where I talk about that), and the problem remains with SMART targets that “I believe…” is the only thing anyone can ever say, either for or against. I’m sorry, but this is not enough when you are saying that these things must be done. If it’s a requirement, then I don’t think it’s unreasonable for me to expect some references to back up the assertion. Seriously, there is nothing out there apart from a few good practice guidelines which simply assert that targets help learners. Nor is it enough to say “it works well for me”, because we are talking about something which is not optional. It’s like me saying that everyone absolutely MUST use jazz chants or suggestopedia because I use them and they work really well for me.
I am an idealist, of course. I think that one day the conditions will be right that I will be able to do this kind of study and that it might even mean something. That day, of course, isn’t any time soon: ESOL has indeed got some bigger funding fish to fry than worrying about target setting. But I can dream, right?