SMART targets


Here’s a question for you. How do you go about making an ESOL lesson “purposeful”? ESOL lessons can, indeed should, be wandering and tangential, building on opportunities that arise, but this doesn’t have to be at the expense of being purposeful.

As a starting point, let’s clarify what we mean. Oxford dictionaries give us three options:

  1. Having or showing determination or resolve
  2. Having a useful purpose
  3. Intentional

It would be fun to discuss the first of these, but I think that would be semantic nitpicking of the most irritating kind, and we would end up talking about resilience or similar. 

And I don’t really think that the second meaning is terribly pertinent. Or rather, it is pertinent, but it is sort of the whole point of language learning in a second language environment: it’s the motivational wood we can’t see for the trees. ESOL learning should have a useful purpose: it’s not academic study for the sake of it. ESOL students usually have a useful purpose behind their motivation for learning, and while humdrum daily reality shouldn’t be the only context for learning (although it’s a lazy quick win for an observation), it is the main context in which students will be using the language. 

No, I rather suspect that when you hear talk of purposeful learning, the meaning is the third: learning activities should be intentional. This suggests a couple of things: conscious engagement on the part of the students; and a clear something that the students can take away from the lesson. 

Conscious engagement, then. It’s becoming widely accepted, I think, that a lecture, if delivered interestingly with learning checked throughout, can be a damn good way of getting a stack of information across. No problems there, as long as what you are teaching can be taught using the same language as your students. But even for teachers who share a first language with all their students, then there is still a need for the students to make use of the language: theory and practical in one lesson, if you like. Engagement is crucial for production of language, that crucial stage of language learning which consolidates the learners’ understanding, tests it out, and provides you as a teacher some idea of how much or how well the students have learned. 

Which brings me to the second, and I think the most pertinent point: students taking something away from the lesson. I’m going to stick my neck right out on this one and say that in none of my lessons do I expect my students to come away with the target language or language skill fully developed. Not a single one. And neither should you. Students might be closer to full automatisation of the language point, or be better able to apply a language skill, but I would be very surprised if I taught something in lesson A and the students were able to reproduce that language point exactly, and in other contexts, in subsequent lessons. I was praised once for apparent “deep learning” when a student had a lightbulb moment about relative clauses in an observed lesson, but despite this, the student was still unable to generalise and apply the thing she had apparently “deeply” learned. 

The problem is that we aren’t dealing with knowledge as discrete from application, but rather we are dealing with knowledge and application simultaneously. It’s of limited value to ask students to tell you the rule: it’s a start, and it does have value, but I’d genuinely question “explain the rule” as a sole learning outcome. I’d be looking at application of the language: what can students do with it?

But this then raises the big question: what are the learning outcomes? The usual “SMART” definition is of no help here: the S is fine, I think, but as soon as you go down the rest of the acronym you end up with a description of the activity. But if your outcome is simply “be better able to use passive voice”, then how do you assess the learning taking place? Well, you listen to the students, you read their writing, you assess their performance in controlled and freer activities, all sorts. And different learners might demonstrate their skill in different ways, in an often unpredictable manner. Either way, they will only be a bit better able to use the language, so why pretend to anyone that “use passive voice accurately and independently in six sentences or utterances” is at all meaningful? SMART outcomes limit and restrict learning in this context, and dogged insistence on creating measurable performance is only going to lead to contextualised, limited and unrealistic performance. 

Assessment is part of the problem with this sort of atomising of language. I’ve taught enough higher level students who’ve “performed” at a particular level but have clearly not learned. I have had level 1 learners still struggling both conceptually and productively with first person present simple, and yet they and the system believe that they are “working at” entry 3. They’ve got a certificate and everything. This creates frustration all round: a student who believes they have achieved a level, and a teacher who has to cope with managing that discontent. Summative and formative assessment based on tidy outcomes too easily reduces learning into neat observable tics, when proper formative assessment is complex and ongoing. It’s listening to students and correcting spoken language, reading what they have written and telling them what needs changing (and how). Expressing these things as assessable outcomes, however, creates the false impression of achievement: take an outcome at face value and you have to say “so what?” So what if a student can use third person singular in six different sentences at entry 1: they’ll still be making mistakes with it three years later in a level 1 class. And if I say “oh, it’s OK, what I really mean is ‘know a bit more about third person singular’”, then what’s the benefit of the measurable outcome? None that I can see. What does a learner understand from that outcome? All of which assumes, of course, that we can set that outcome without teaching the language point first.

But saying, for example, a non-SMART intention like “today’s lesson will focus on passive voice, vocabulary to do with the environment, and practising reading for gist” is purposeful. For one, students have a chance of understanding what this means. They can see how the activity they are doing is likely to lead to them knowing more about the language point, or developing that skill. And as long as you are given the opportunity to listen to and carefully monitor what the students are saying and doing, and think about what they are likely to know about that language, then there should be no concerns with students being bored or lacking challenge. Setting the measurable outcome is well intentioned but deceptive at best, blatantly mendacious at worst. Purpose is perfectly achievable without specific outcomes, but it does involve being clear and honest with the students about what will be happening in the lesson. 

A Long Ramble on Evidence and Change. No, really, it’s long. 

I read with some interest a post on “Six Useless Things Language Teachers Do.” I like this sort of thing, and it’s why I read Russ Mayne’s excellent blog, not to mention several other blogs, and numerous books around a general theme of evidence based practice, and on the theme of challenging sacred cows. I particularly enjoyed the “six useless things” post because it challenged some of my own holy bovines: recasts, for example, being largely ineffective. This error correction strategy is something we teach on CELTA, although not, admittedly, as a key one, and it’s definitely one I apply. I think that if I do use it, mind you, it’s as an instinctive, automatic response to a minor error, rather than a planned or focussed technique. 

More of a challenge for me was the second point: not so much the dismissal of direct correction of written errors, as this more or less chimes with my own stance. I’m not sure it’s totally useless, as the piece suggests, but I certainly don’t think it’s much good. The challenge to indirect error correction (using marking codes, etc.) is a trickier one. I agree, for sure, that students can’t be expected to know what they have done wrong, but I wonder if there are perhaps one or two errors that a student can self-correct: slips, silly spelling mistakes, “d’oh” moments which they know on a conscious level but perhaps forget when focussing on fluency (present simple third person singular S for higher level students. I mean you). I wonder, as well, if there is a pragmatic aspect here. Most teachers are working with groups of students, not individuals on a one-to-one basis, and using an indirect marking strategy, combined with making students do something about it inside class time, means that you, as a teacher, are then freed up to go round supporting students with the mistakes that they can’t self-correct. Context also counts for a lot here: a group of beginners is radically different from a group of high intermediate students not only in their language level, but also in their meta-language level. Often, but not always, high level students have been through the language learning system a bit, have an awareness of meta-linguistic concepts, and, crucially, are used to thinking about language. 

I could go on, but this isn’t about trying to pick holes, or a fight! It’s a naturally provocative piece: with a title like that, how could it not be? It’s also, as far as I’m concerned, correct on many of the other points: learning styles, of course, learning to learn, etc., although on that latter one I’d be interested to know how much time should be spent focussing on learning strategies: I’ve got 90 hours, tops, to help my students gain a qualification. How much of that time can my students and I afford to spend on it? If a one-off session is minimally impactful, then I think I probably won’t bother.

What this shows you, and me, however, is that as a teacher I am terribly, horribly biased. I come to the job now with many years of courses, teacher training, reading, research, conference workshops, observing teachers, being observed, getting and giving feedback, in-house CPD, and, of course, a bit of classroom experience. This is bad. Bad bad. Because I have developed a set of routines, of practices, of “knowledge” which are, in fact, very hard to change. Oh, I may make lots of noise about research, about innovation, about challenges and being challenged, reflective practitioner, blah blah blah, but a lot of it, I worry, is so much hot air. 

Take one of my favourite bugbears: SMART targets for ESOL learners. Now let’s imagine that some university somewhere funded some formal research into SMART targets. And they did a massive study of second language learners in a multilingual setting which showed, without question, that students who used SMART targets to monitor their learning achieved significantly higher levels of improvement when compared to those who did not. Let’s imagine that a couple more universities did the same, and found very similar results. In fact, there developed a significant body of evidence that setting SMART targets with students was, beyond a shadow of a doubt, a good idea. Pow! 

Now, in our fictional universe, let’s also imagine that I read these reports and am struck by the convincing nature of the evidence, which runs entirely at odds with my opinions, beliefs and understanding. I have to wonder whether, even in spite of this, I would be able to make the massive mental leap of faith and accept that I am wrong and the evidence is right. Could I do it? In a similar vein, if it turned out the evidence was all in favour of learning styles; that technology is, in fact, a panacea for all educational challenges; and that there is a fixed body of objective Best Practice in Education which works for all students in all settings all the time; if all this turned out to be true, could I align myself with all this because the evidence told me so? 

Probably not. 

For one, if all these things turned out to be true, I’d probably have some sort of breakdown: you’d find me curled up in a ball in the corner of a classroom, rocking backwards and forwards muttering “it can’t be true, it can’t”. More importantly, however, what this shows is that evidence and facts can say what they want, but the pig-headed stubbornness of a working teacher is a tough nut to crack: it would take a long time for me to adjust, to take on the changes to my perceptions and to work them into what I do. It might not even happen at all: even in the best case scenario, I think I would probably want to cling on to my beliefs in the face of the evidence. 

Unless something chimes with our beliefs about our practices, unless we agree in our professional hearts that something should be true, then short of a Damascene epiphany in front of the whiteboard, it’s going to be extremely hard to embrace it. Let’s not beat ourselves up about it, mind, because that’s not going to help. And let’s not beat up others either: we are, after all, only human, and I have a suspicion that, regardless of our politics, one of the things that professional experience leads to is some form of professional conservatism. How do we get past this? 

Expectation, probably, would be a good place to start: it’s too easy for leadership and policy makers to declare that a new practice, with an evidence base, of course, is good and should be enforced. How effectively that gets taken up depends on the size and the immediate visible impact of that practice. When I am leading a training session, I start with a very simple expectation: that everyone go away with just one thing which they can use with immediate and positive impact. It’s unrealistic to expect more, and if an individual takes away more than one thing, then that’s a bonus. To expect more than this from any kind of development activity is probably unrealistic, and actually, so what? If someone takes on a new idea and puts it into place, then that’s a success, surely? We can apply this also to evidence based practice: make small changes leading up to the big change, and the big change will be much more likely to happen. This is often not good enough for some leadership mindsets, who demand quick, visible changes, but that is a whole other barrier to teacher development which I’m not going to explore. 

Time, of course, would help, but given that FE in particular is financially squeezed and performance hungry, this time will need to come at the teacher’s own expense. No time will be made for you to read, discuss and understand research (and God forbid that you attempt to try anything new during formal observations), so that time must be found elsewhere. Quite frankly, however, even I would rather watch Daredevil on Netflix of an evening than read a dry academic paper providing evidence in favour of target setting. (Actually, I think I would read that paper; so, you know, when you find the evidence, do let me know: because I’m sure that ESOL managers and inspectors have seen this evidence and are just hiding it for some random reason. After all, why would such a thing be an absolute requirement?)

Deep breath. 

I’m sorry this has been such a long post: it’s been brewing quietly while I’ve been off and I’ve been adding to it bit by bit. But there’s a lot that bothers me about evidence based practice. Things like the way learning styles hangs on in teacher training courses, and therefore refuses to die. Things like the rare and too easily tokenistic support for teachers in exploring evidence and engaging with it. Things like the complexity of applying a piece of evidence based on first language primary classrooms to second language learning in adults. Things like the way the idea of evidence based practice gets used as a stick (“You’re not doing it right, the evidence says so.”) while at the same time being cherry picked by educational leaders and policy makers to fit a specific personal or political preference. Not to mention the way that the entire concept of needing any evidence can be wholeheartedly and happily ignored by those same stick wielders and cherry pickers when it suits them. An individual teacher’s struggle with evidence which runs counter to their beliefs is a far smaller challenge than when this happens at an institutional or policy level. A far smaller challenge, and an infinitely less dangerous one. 

As SMART as riding a bike. 

English, remember, has no future tense. For example, what does the following sentence mean: a future intention, a fixed future arrangement, or a decision about the future made at the moment of speaking? 

“In September, I am cycling from Leeds to Manchester to raise money for charity.” 

It is, of course, a fixed future arrangement. I booked my place on the ride last week, so it has moved from an intention (“I am going to cycle…”), and has long since ceased to be a spontaneous decision at the moment of speaking (New Year’s Eve, slightly slurred: “F-ck it, this year I will definitely do that ride I’ve been meaning to do for ages.”). The key lexical verb does not change (as it does for past tenses, for example, or as it might in many other languages) and instead it’s all present tenses and modal verbs.

Lecture aside, what remains is that I am doing this crazy thing, which will cause much amusement for the folk of Yorkshire and Lancashire as I wobble down their roads. Now, aside from being an opportunity to patronise English language teachers, this also presents a fine opportunity to go back to my second favourite bête noire in ESOL teaching: target setting. 

Riding a bike, on an amateur level, is a fairly straightforward process. You sit on the saddle, spin the pedals and off you go. It’s an entirely artificial process (the bicycle has only really existed for 150 years or so) and therefore something which everybody consciously learns. Nobody is born a cyclist. And of course, as everyone knows, you never forget how to do it. Riding a bike over long distances is also a straightforward process: all (!) you have to do is persuade your leg muscles to keep spinning the pedals for a long time. That’s a very big all, I have to admit, but it’s fairly uncomplicated. 

Using a language, however, is terribly complicated. Look at the rules around how we talk about the future, for example, combining vocabulary and grammatical structures with subtle shades of meaning that native speakers sometimes abandon in order to avoid repetitiveness. Even the most apparently monosyllabic of language users uses a complex interaction of lexis, grammar, discourse knowledge, social awareness and paralinguistic features, an interaction which, as yet, even the best minds in the field don’t agree on. Learning a language is not much better: science has yet to comprehensively nail the processes involved, except that we do know that children are uncannily good at it, and it gets harder as you grow up. 

So, here’s a question: which one of these two processes can be most easily, meaningfully and effectively broken down into discrete stages? 

I could probably do the ride tomorrow. It would take me ages, and I’d be a total mess afterwards, but I could do it. What I want to be able to do is complete the journey in a respectable time and be able to walk when I get home. So I need to execute some sort of lifestyle change/training plan. As I am in a fairly post-beginner state, and cycling between 30 and 50 miles a week already, the training element is going to be about endurance: longer rides, gradually increasing over the weeks. This is easy to set up in terms of a target: by the end of week 1 I will ride for X hours. I can set myself meaningful goals like “ride the long commute to work at least once a week” and hopefully get a bit fitter. I also need to look at my own diet and weight loss: the cycling will take care of some of that, as will any other exercise I do. However, I suspect that my sugar/cheese/bread addiction will have to be limited, and again, a number of targets can be used to monitor this and motivate me to engage. 

So far so neat. I can identify some clear specific goals there: ride X minutes longer each week up to X hours by the end of August. Investigate potential ideas for off-bike exercise. (And start!) Reduce sugar intake by X amount each week until weaned off (or something, although I might have to get back to you on that one). 

These are clear things which are understandable to anyone who wishes to engage with a programme like this. Most of this falls within the realm of general knowledge (more exercise + better diet = improved health and athletic performance). Even something slightly more technical like following an exercise plan off the web is still fairly straightforward in terms of understanding the stages: “move body like this for this long”. If I focus those goals down a bit more and mark them off I should gain a sense of achievement to boot: they are my goals, and I fully own them. All good. 

 In theory, then, this is applicable to all areas of human development and achievement. You can apply it to a business setting very effectively: increase output X to level Y, that sort of thing. Everyone involved usually understands the process and stages, enabling them to get on board and have some sense of ownership of the goals. 

So does this work for learning? A crucial aspect of the SMART target is that it focuses on observable performance only. You can’t measure thought and understanding except through observing what an individual can do as a result of that understanding. This raises a challenging question: at what point can “use present perfect to describe my experience appropriately in 4 sentences” be said to prove anything? I could, for example, demonstrate something similar in German with only the minimal amount of effort and absolutely no learning. I’d be happy to apply this to any area of learning, I think. Evidence of this sort may mean a learner has learned how to do it to an extent that they can reproduce said act on demand, but it may just as easily mean that they will not be able to repeat that goal any time soon. 

A little of this is down to the phrasing, the insistence on the sacred SMART. To tick all five boxes, we end up with language based competency measures like “be able to write five sentences about my daily routine using present simple by the end of February.” The very specificity, measurability and relevance of this target mean that all we are measuring is not the student’s ability to use present simple for daily routine in general, nor a student’s understanding of that grammar (which is what the teacher is probably aiming at), only their ability to produce, yes, five sentences about… (Awkwardly, this also applies to SMART learning outcomes. No teacher ever believes that a learner who reads a text and answers five questions about it has actually learned how to “read a text and extract five details”, but that’s what the outcome will be for that lesson, because the product focussed quality assurance system of FE demands it. But then, no lesson observer or manager would really believe that either, which raises all sorts of tricky questions. It’s probably one of those Best Practice things.)

Setting targets for riding a bike over distance is not learning a language. The former is fairly straightforward and easily understood by the participants, meaning that ownership of the targets is possible, and the target is meaningful and therefore motivating. None of these applies to language learning. But then I’ve been saying this for years now and nobody seems to be listening or wishing to engage in dialogue. Certainly, despite a move towards a world of evidence based practice and practitioner research, being critical of the notion of target setting remains verboten because, presumably, it provides neat, trite performance data which can be presented to an auditor. 

Sighs. I suppose I’d better go for a ride. 

The Research That Never Was

I posted last time about an action research project I’d like to do. I’ve been lucky the last few years and engaged with a couple of research projects. The first looked at teachers and CPD, and most recently I’ve been thinking about the readiness of ESOL learners for blended learning and the impact that funding-enforced blended learning may have on them. The latter is probably the closest to controversial, questioning, as it does, the increasingly sacrosanct FELTAG recommendations, and casting a critical eye on a direct college policy. But as much as I want to do that research, there’s a piece that I’ve been wanting to do for years. 

It’s quite simple really. I would ask for two groups of learners of a broadly similar level, and a similar range of cultural and social backgrounds within the class. I’d devise a language test and give it to all the students. Then I would apply a particular intervention to one group, and not apply it to the other. At the end of the research period, I’d then test the students again to measure which group had improved the most based on their original test scores. It’s not perfect, and ideally I would have at least one other teacher taking part at the same time. Also, on that scale it wouldn’t be something you could necessarily extrapolate from by way of any major statement. However, it would represent the first ever evidence one way or another for the intervention I have in mind. 
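For what it’s worth, the analysis behind this design is very simple: compare each group’s average gain between the pre-test and the post-test. Purely as an illustration (the scores and group sizes below are invented, not real data), the comparison might look like this:

```python
# Illustrative sketch of the pre-test / post-test comparison described above.
# All scores are invented: group A receives the intervention, group B does not.

def mean_gain(pre, post):
    """Average improvement from pre-test to post-test for one group."""
    gains = [after - before for before, after in zip(pre, post)]
    return sum(gains) / len(gains)

group_a_pre  = [22, 30, 18, 25, 27]
group_a_post = [27, 33, 24, 29, 30]
group_b_pre  = [21, 29, 19, 26, 28]
group_b_post = [25, 31, 24, 30, 31]

print(f"Group A mean gain: {mean_gain(group_a_pre, group_a_post):.1f}")
print(f"Group B mean gain: {mean_gain(group_b_pre, group_b_post):.1f}")
```

With a real study you would of course want proper significance testing and a much larger sample, but the core of the design really is just this: same test twice, one variable changed, compare the gains.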

The intervention, and those who know me will now groan, is the setting of SMART targets for learning. I know, I’m like a stubborn dog with a particularly juicy slipper on this, but do bear with me. You see, that research would never happen. SMART targets are an integral part of the individual learning plan, and as such are pretty much unassailable. After all, they tick every ideological and performance management box: achievement of the target, runs the argument, supplies evidence of individual student learning. Criticism of the target is seen as criticism of the concept that the learning of the individual is crucial. 

I agree that we need to think about how we are meeting individual student needs. Students need to know what they need to improve. I fully endorse the concept of finding out what students need to learn (although I hate the therapeutic-deficit label of “diagnostic assessment”) and then basing a course plan on those ideas. I even think that writing those things down somewhere and then thinking about them later is dead handy for students, if not the be all and end all. This can all be achieved without the atomistic breaking down of a learning goal like “use past simple irregular verbs” into trite, meaningless targets based on the measurable occurrence of the target language, like “write five sentences using past simple irregular verbs”.  

It’s a small distinction, perhaps, but it creates an essentially false impression not only for teachers but most damagingly for students: “I have written my five sentences, now I know past simple irregular verbs.” Or worse “Now I know past simple in English”. Is it really that we are only expecting them to be able to write just five sentences, or is it implied that we are expecting them to know past simple irregular verbs? If we are expecting the former, you have to consider reliability of this as an assessment task: once the target has been achieved, could we say with any certainty that it could be achieved with the same result at a later date? If it’s the latter, and the target is simply an evidenceable proxy for a larger learning goal, then there are issues around validity: how does achieving the target genuinely provide evidence of this? Neither question can be answered cleanly, or in my view, convincingly in favour of the target. It’s problematic at best, deeply flawed at worst. 

My point is this. In my putative study, the only difference between group A and group B would be the targets. My non-targets group would still be given detailed feedback on their work, would still be told which areas were problematic and this would form part of the students’ reflections on learning. (In fact, I’d probably become a darn sight better at formally recording feedback and getting students to reflect on this as a result of doing this kind of research). The only thing that would not occur would be framing those development points as SMART targets. That is all. 

This research will never happen. No ESOL department in the country would currently countenance such a risk: “Ofsted could come.”  “It’s best practice.” Targets are too completely enshrined in the culture of ESOL, a fetish of the cult of the individual, for them to be questioned or challenged. ESOL is embattled, beleaguered by political unpopularity (‘cos everyone loves immigrants, right?) and funding cuts which perhaps creates a challenge in terms of questioning accepted practices. If we challenge SMART targets, we challenge OFSTED, because ILPs are writ large over the Common Inspection Framework. ESOL inspectors will most likely have been teachers during the high days of Skills for Life, when funding was high and the class sizes small, where the rhetoric of individualisation was a practical reality. Show me an inspector who questions target setting, go on. If an inspection goes badly, we suffer. If we suffer in terms of our performance, then questions are raised about our value and our worth. Once those questions start being asked, then money and support begins to dry up, diverted to other areas or other providers who don’t do things like ask awkward questions. 

What is harder is that I get this. I understand the sense, almost, of fear of wishing to challenge the status quo. I may be a bit of an idealist, but I’m also a lot of a realist, who recognises that sometimes there are hoops through which we must jump. There is a direct link to funding for non-accredited provision through the RARPA process, where evidence of learning is supplied in the form of targets achieved. This link means that the targets are now absolutely stuck (although the case could be made for an internal pre-test and post-test of the same set of language points drawn from the curriculum, presenting any difference as evidence of improved performance. It would be about as meaningful as the target.) 

I do them, of course, don’t panic. My learners have ILPs for their ESOL classes, and there are targets on them. I don’t do it out of integrity or professionalism, however, but out of cynical pragmatism. I simply don’t believe that targets work (and this is where I talk about that), and the problem remains with SMART targets that “I believe…” is the only thing anyone can ever say, either for or against. I’m sorry, but this is not enough when you are saying that these things must be done. If it’s a requirement, then I don’t think it’s unreasonable for me to expect some references to back up the assertion. Seriously, there is nothing out there apart from a few good practice guidelines which simply assert that targets help learners. Nor is it enough to say “it works well for me”, because we are talking about something which is not optional. It’s like me saying that everyone absolutely MUST use jazz chants or suggestopedia because I use them and they work really well for me. 

I am an idealist, of course. I think that one day the conditions will be right that I will be able to do this kind of study and that it might even mean something. That day, of course, isn’t any time soon: ESOL has indeed got some bigger funding fish to fry than worrying about target setting. But I can dream, right?

Faith and Stuff

“We weren’t supposed to be, we learned too much at school, now we can’t help but think the future that you’ve got mapped out is nothing much to shout about.”

Pulp – Mis-shapes

You will be glad to know that I have no plans to come over all Dawkins at you: your faith, like the absence of mine, is an entirely personal affair, and none of my business. That’s a hint, by the way, for anyone ready to save my soul with a comment…

No, this isn’t a post about that kind of faith. This is a post about teacher faith. You see, not so long ago I was a keen enthusiastic little trainee on a part time training course and there were all these people telling me stuff. Stuff that would make me a good teacher, stuff that I needed to do to pass the course. That sort of stuff. Then I did DELTA, and learned shitloads of stuff. And I believed every single word, referenced or not. Because they were trainers and they knew their stuff, right? And then later on I started working in the public sector and managers told me stuff, and people said that certain stuff was best practice and that I should do that stuff because OFSTED said it was good stuff. I was a true believer. I listened, I absorbed, I followed the True Path of the Righteous.

I can pinpoint, to within a few months, the arrival of my professional scepticism. It was around September 2004, and it was the insistence that setting targets helped ESOL learners learn English. It was my first encounter with the “because it’s good practice” non-argument, and when pushed for a better explanation, nobody could come up with anything. (Still waiting for the research to prove it, by the way.) Suddenly all these authority figures saying stuff began to sound, well, unauthoritative. I asked for evidence, politely, and was politely rebuffed. So I tried to find out for myself, and I found a big fat heap of nothing. Lots of “best practice guidelines”, lots of advice, and a tiny little bit of not very informative “how to” guidance, but nothing that said “this works because this research and this study said so.” I started to look around at all the other stuff I’d been told over the years, looking for evidence for all sorts, and by golly was it interesting. Some of it was there, some of it wasn’t, but an awful lot of what I’d been told was good practice had little or no evidence base. The whole house of cards started to look very rickety indeed.

We place a lot of faith and trust in our teachers. Necessarily so. ESOL students trust that we are telling the truth when we say that we use some for positive statements and any for negatives (I have some apples, I don’t have any bananas) even though it later turns out that this is not really the rule as it is used. (I like most pop music, but I don’t like some of it. I like any coffee, I’m not fussy). It’s an uncomfortable way to phrase it, but sometimes teachers lie to learners in order to make a complex thing less complex, more easily understandable, and this is what happens on initial teacher training courses: we simplify and tell lies so that some basic level understanding can be established before the teachers go off and discover that more or less everything they’ve learned is not wrong, as such, but is almost certainly not much more than a useful guideline.

Asking difficult questions like “who says so?”, however, tends not to make you very popular. Nobody likes a smart Alec, after all. You get accused of all sorts when you ask questions: accused of being disrespectful, of being a cynic, of mocking, of not being aspirational, as well as quiet but stern reminders of your place in the grand scheme of things. People who ask questions make life difficult. I have had trainee teachers in the past who asked awkward, challenging, perceptive and generally brilliant questions. So far, these trainees have been, without exception, the strongest trainees on their respective courses, and the most successful subsequently. But when they ask those questions it’s bloody annoying, and so it should be. After all, you are getting your beliefs challenged, and that’s hard, but the benefits are endless. It forces you to go away and examine your position much more carefully and thoroughly, and either you come back stronger, and with greater evidence or support, or you come back humbled and your mind broadened. The worst thing you can ever do to people in this situation is dismiss them with “it just is” statements like “it’s good practice” because that’s deeply patronising. You may as well just pat them on the bottom and say “don’t you worry your pretty little head about it”.

Like I’ve written before, there’s nothing absolute in teaching, nothing fixed, although absolutes are something new teachers might find reassuring. Perhaps atheism and religious belief are not the right parallel here: both depend on absolute beliefs. Perhaps agnosticism is the better parallel: we can never fully know for sure, and we are always learning and changing as teachers in the face of the evidence as it occurs before us. Any faith we do have must be a flexible faith, one which is open to new thoughts, new developments and interpretations. We must never assume that something is right, at least not on face value, and even where the evidence does exist we must still analyse it and think about how well it can apply to our own contexts of teaching and learning.

Technology, Bias, and Vested Interests

I promise this is my last critical post about technology and learning, I really do. But I’ve been annoyed by something, an article about a report on how FE is lagging behind in the technology stakes.

There are two issues here.

The first is the accuracy of the claim. Do a search for reports and research into the benefits of elearning and, well, you can’t move for the damn things. Technology, if you believe everything you ever read, is the magic bullet that will engage learners with learning and create a fully functioning digital universe. But hang on, if we look a little more closely at these claims, we discover that actually they are just that – claims. Claims that technology benefits learning. Not evidence. Not hard data, that is, like, say, teaching two groups of learners, one with and one without tech, and seeing who does better in a test. I hope that when you read that you will be sneering at my simplistic test description there, saying “ah but education, it’s just too big and complex to study it in that way.” (although you’d be wrong: it would be perfectly possible to devise a fairly straightforward trial of technologised learning in that way, and if anyone would like to give me some time and money, I’ll do it, because I’m interested.) The reason I hope you’ve said that is because it underlines the fatal flaw in your argument as well. If you want to argue that you can’t measure education that way, because it is complex, multi-faceted blah blah, then you also have to ask yourself the question “how do I know technology works?” Your evidence is as flimsy and as questionable as anyone else’s. That gives us a no score draw at best.

Ok, the second issue. Where evidence does exist or is claimed to exist, it is often in the reports of people with a vested interest in education taking up the use of technology. Take this one, published by Microsoft and Intel, the suppliers of much of the hardware and software used across the vast majority of FE colleges in the UK. Colleges which, perhaps, are upgrading a little less often than a few years back, buying fewer computers, that sort of thing. From a business perspective, persuading colleges and their learners to invest in their products is potentially very lucrative for two multinational corporations who care not one jot for the young people and adults of the UK, apart from their capacity as consumers. Profit is rarely a wholesome motive, even less so when dressed up as the public good.

Vested interest is a dangerous thing: in medical science, for example, research into the effectiveness of a given intervention, when funded or carried out by people with a vested interest in said intervention working, is generally peer reviewed and checked for accuracy in its modelling. It’s not a flawless process, but it does cultivate a culture of scepticism and questioning. Alas, in education, we are generally too soft and lacking in confidence for this sort of thing, especially when policy gets behind whatever random claim is being made. Indeed, claims for rigour and robustness suddenly seem laughable in education when we still, as a profession, from teachers on the ground floor all the way to senior managers, don’t engage ourselves in enough research to understand how much of what is presented to us as fact, from e-learning to Bloom’s Taxonomy to hell-in-a-handcart learning styles, is entirely open to discussion and questioning. Nothing is sacred: most scientists can tell you that. Unfortunately, education sometimes seems to be founded on the kind of journalistic psychology not unlike that found in tediously aspirational magazine articles.

I think what I would like is to find some neutral, unbiased studies of actual learners in an actual context who actually end up being more successful than others as a result of engaging with technology. I don’t see it happening any time soon, mind you, so that wish will go into my special internal box of wishes, right next to the one for solid evidence in favour of target setting for ESOL learners. Both practices are policy driven, rather than learner or indeed reality driven, but the difference for me with technology is that I do think it has some positive impact. I’d like to do that research, because I have a suspicion that the technology would be proven right. I don’t think it would be as amazingly wonderful as people would like to believe, and I certainly don’t think that teachers can be replaced with hole-in-the-wall grannies or “learning coaches”. I just think we need to redefine what a teacher is, and how they interact with their learners with technology, which is perhaps where my next post is going.