Outcomes

Purposeful

Here’s a question for you. How do you go about making an ESOL lesson “purposeful”? ESOL lessons can, indeed should, be wandering and tangential, building on opportunities that arise, but this doesn’t have to be at the expense of being purposeful.

As a starting point, let’s clarify what we mean. Oxford dictionaries give us three options:

  1. Having or showing determination or resolve
  2. Having a useful purpose
  3. Intentional

It would be fun to discuss the first of these, but I think that would be semantic nitpicking of the most irritating kind, and we would end up talking about resilience or similar. 

And I don’t really think that the second meaning is terribly pertinent. Or rather, it is pertinent, but it is sort of the whole point of language learning in a second language environment: it’s the motivational wood we can’t see for the trees. ESOL learning should have a useful purpose: it’s not academic study for the sake of it. ESOL students usually have a useful purpose behind their motivation for learning, and while humdrum daily reality shouldn’t be the only context for learning (although it’s a lazy quick win for an observation), it is the main context in which students will be using the language.

No, I rather suspect that when you hear talk of purposeful learning, the meaning is the third: learning activities should be intentional. This suggests a couple of things: conscious engagement on the part of the students; and a clear something that the students can take away from the lesson. 

Conscious engagement, then. It’s becoming widely accepted, I think, that a lecture, if delivered interestingly with learning checked throughout, can be a damn good way of getting a stack of information across. No problems there, as long as what you are teaching can be taught in a language your students share. But even for teachers who share a first language with all their students, there is still a need for the students to make use of the language: theory and practical in one lesson, if you like. Engagement is crucial for production of language, that vital stage of language learning which consolidates the learners’ understanding, tests it out, and gives you, as a teacher, some idea of how much or how well the students have learned.

Which brings me to the second, and I think the most pertinent point: students taking something away from the lesson. I’m going to stick my neck right out on this one and say that in none of my lessons do I expect my students to come away with the target language or language skill fully developed. Not a single one. And neither should you. Students might be closer to full automatisation of the language point, or better able to apply a language skill, but I would be very surprised if I taught something in lesson A and the students were able to reproduce that language point exactly, and in other contexts, in subsequent lessons. I was praised once for apparent “deep learning” when a student had a lightbulb moment about relative clauses in an observed lesson, but despite this, the student was still unable to generalise and apply the thing she had apparently “deeply” learned.

The problem is that we aren’t dealing with knowledge as discrete from application, but rather with knowledge and application simultaneously. It’s of limited value to ask students to tell you the rule: it’s a start, and it does have value, but I’d genuinely question “explain the rule” as a sole learning outcome. I’d be looking at application of the language: what can students do with it?

But this then raises the big question: what are the learning outcomes? The usual “SMART” definition is of no help here: the S is fine, I think, but as soon as you go down the rest of the acronym you end up with a description of the activity. But if your outcome is simply “be better able to use passive voice”, then how do you assess the learning taking place? Well, you listen to the students, you read their writing, you assess their performance in controlled and freer activities, all sorts. And different learners might demonstrate their skill in different ways, in an often unpredictable manner. And either way they will only be a bit better able to use the language, so why pretend to anyone that “use passive voice accurately and independently in six sentences or utterances” is at all meaningful? SMART outcomes limit and restrict learning in this context, and dogged insistence on creating measurable performance is only going to lead to contextualised, limited and unrealistic performance.

Assessment is part of the problem with this sort of atomising of language. I’ve taught enough higher level students who’ve “performed” at a particular level but have clearly not learned. I have had level 1 learners still struggling both conceptually and productively with first person present simple, and yet they and the system believe that they are “working at” entry 3. They’ve got a certificate and everything. This creates frustration all round: a student who believes they have achieved a level, and a teacher who has to cope with managing that discontent. Summative and formative assessment based on tidy outcomes too easily reduces learning to neat observable tics, when proper formative assessment is complex and ongoing. It’s listening to students and correcting spoken language, reading what they have written and telling them what needs changing (and how). Expressing these things as assessable outcomes, however, creates the false impression of achievement: take an outcome at face value and you have to say “so what?” So what if a student can use third person singular in six different sentences at entry 1: they’ll still be making mistakes with it three years later in a level 1 class. And if I say “oh, it’s OK, what I really mean is ‘know a bit more about third person singular’”, then what’s the benefit of the measurable outcome? None that I can see. What does a learner understand from that outcome? All of which assumes, of course, that we can set that outcome without teaching the language point first.

But saying, for example, a non-SMART intention like “today’s lesson will focus on passive voice, vocabulary to do with the environment, and practising reading for gist” is purposeful. For one thing, students have a chance of understanding what this means. They can see how the activity they are doing is likely to lead to them knowing more about the language point, or developing that skill. And as long as you take the opportunity to listen to and carefully monitor what the students are saying and doing, and think about what they are likely to know about that language, then there should be no concerns with students being bored or lacking challenge. Setting the measurable outcome is well intentioned but deceptive at best, blatantly mendacious at worst. Purpose is perfectly achievable without specific outcomes, but it does involve being clear and honest with the students about what will be happening in the lesson.


Planning – it’s a love/hate thing.

I like planning lessons, that is, I enjoy planning lessons and thinking about what I might do in that lesson, and coming up with interesting ways of teaching something, or practising a skill, or eliciting a language point, or whatever. I like making or finding or developing a resource. I like thinking about how I am going to make sure I can keep everyone engaged and learning. I like planning.

I hate Planning. I hate the boxes, the “have you thought about whichever governmental whim you are supposed to be embedding”, the “we don’t expect extensive planning but we expect you to show us how you will differentiate for the individual needs of your students” double standards. I hate the hair-splitting “ooh, your learning outcome isn’t SMART enough, and if you reword ‘write 5 sentences using past simple’ as ‘use past simple to write 5 sentences’ you will be fine” (because students couldn’t give a stuff: all they really understand is that they will be learning about past simple, although they can’t self-assess against that learning outcome until you teach them what it is…). I hate the stupid “assessment” box. Yes, it does look like I copy & paste, because I do, because I use checking in pairs, self-assessing against the answers on the whiteboard, teacher marking and all the rest of it most of the time. I hate the tedious, mechanistic “input > output” simplicity a lesson plan form suggests, as if achieving said learning outcomes, and assessing said learning outcomes, means something. It doesn’t. It means the student achieved that once. It is highly unlikely that the outcome is now automatically achievable in any setting.

I hate the way I find it hard to fiddle with a formal lesson plan and make changes at the last minute, even though I will happily chuck the entire lesson out at the last minute for an exciting but semi-formed idea if, and this is important, the lesson is not being observed.

But actually, of course, what I really hate is that I have an OK set of lessons for the next few days, but they are missing something and I can’t put my finger on it. And there is no form in the world that is going to help me there.

To Do

I’m wary of writing to do lists. I can just about manage to write one for a given day, particularly on days like today, when I’ve not got a lot by way of teaching but a bunch of other stuff that I need to get done. Beyond that, though, they have a tendency not to be motivating reminders of tasks but depressing records of your own personal failure, an uncrossed list mocking you with a smug reminder of what you haven’t done. Perhaps it’s the way I use them, I don’t know, but I sometimes struggle with the whole notion that life can be compressed into neat little tasks to be robotically ticked off, as if granting profound meaning to a stack of chores.

But at least a to do list works in theory. After all, you are dealing with concrete, measurable tasks leading to specific results, like “mark 15 functional ICT papers” or “email B about X” or indeed “plan lesson for tomorrow’s evening class”. This last, of course, is where I’m going with this. We generally set our students a kind of “to do” list when we plan and share learning outcomes with them, and it’s part of the teacher’s job to know what is realistically achievable in that time, and to check that the “to do” list becomes a “have done” list.

Ah no, I hear you think, the learning outcomes are not a “to do” list at all. They are a “to learn” list. Really, you think that, do you? I disagree.

It’s all in the phrasing. We refer to learning outcomes, not aims. An outcome, in all its performance management glory, is usually talked about in terms of observable behaviours, and post-observation, teachers are usually grilled with interrogatives like “ah ha, but how do you know they learned?” Because really, imaginary observer, how do you know they didn’t? A focus on observable evidence means that all I can say my students have done in a lesson is produce the evidence to meet the learning outcome, not necessarily learned the things inherent in that. I’m going to pitch this outside ESOL, too, because in ESOL pretty much any teacher in the world could tell you that “use present simple third person singular to write five sentences” is a cheap proxy for “learn present simple third person singular”, but that it is very unlikely for that student to have learned such a thing in any convincing way. So if I think of a training session I ran only recently, during which I aimed for teachers to “identify and evaluate methods of stretch and challenge”, what I actually wanted was for them to learn one of those methods and then to apply it. Achievement of the former is only going to be an educated guess, and the latter something I would struggle ever to find out, short of fitting CCTV to classrooms.

So when we share learning outcomes with students, and ask them to measure their own performance against these outcomes, what are we asking students to do? Some students, perhaps, are knowledgeable enough learners to recognise the learning subtext of an outcome; for others, presented with an outcome, that subtext is more or less meaningless, and they read the achievement at face value and wonder what they are actually going to learn. When we share those learning outcomes and ask students to self-assess against them, are we effectively peddling a lie to our students about what learning is?

Things get worse when we consider that we use the same methodology for composing individual goals on an ILP – what are we saying a student has learned if they have achieved a personal target of “use present simple third person singular in five sentences”? A student covering that language point is unlikely to be able to understand it, so we resort to making it more meaningful: “write five sentences about things my friend does every day”, or something similar. At this point, any awareness of transferable language knowledge has been well and truly lost and we are left with a task, not an outcome. And even in the bizarre world where a student could develop the language ability to meta-analyse grammar in this way while at the same time not actually knowing that language point, what model of language learning are we following? I don’t think anyone believes that learning anything happens in neat, observable, evidenceable steps, aside from auditors and similar.

All we can say for sure about learning is that it’s an internal, individual process. It’s probably not even cyclical, really – it’s not that neat. I suspect we’re dealing with a kind of complex spiralling variation of a Möbius band, where things are learned, then forgotten, then learned again. If we are using achievement of a learning outcome or individual target as a means of tracking learning, then we do have to wonder what it is we are tracking exactly: to my mind we are tracking performance, not learning. Achievement of said target or outcome is simply an example of performance, and one which fails in terms of reliability and validity when considered as assessment. And if it’s an example of performance, then a list of learning outcomes or ILP targets is indeed a simple to do list, and only loosely linked to learning.

There are other implications too. Achievement of observable behaviours in the form of learning outcomes, whether individual or classroom based, is a self-fulfilling prophecy of sorts: we use this achievement as evidence of success for all sorts of classroom practice – “in study X, teachers applied technique Y and this was a success because students achieved the lesson’s stated outcomes” – but if the measurement scale is questionable, then what does this mean for evidence? I’m personally not sure we can dismiss evidence-based practice on this justification, because something was achieved in those lessons; I just have questions as to exactly what that something was.

Even if we accept that the aim is genuine but the outcome is false, learning is not restricted to teacher-set, teacher-driven, teacher-shared learning goals. Students take all sorts from a formal lesson, and not all of it is predictable and measurable. Which makes me think. I have a lesson this week which is free from exams and the rest, so I might try something. I’m going to teach a lesson and not share the outcomes (I’m told this is bad practice, but never mind). But there will be things I have in mind for the learning in the lesson; call them outcomes if you like. Then, at the end of the lesson, I’ll ask the students to tell me what they are going to take away from the lesson, what they learned, what skills they practised, and see a) how well they can articulate these things, and b) how closely their perceived achievements marry up with my aims/outcomes/whatever.

Better put that on my to do list.

Language & the ESOL image problem

Three things this week came together quite serendipitously. First was walking past a British Sign Language class, and seeing the tutor not only teaching BSL, but also using BSL to communicate ideas. The second was a conversation with two non-ESOL teaching colleagues about the SOLO taxonomy and the notion of using “higher order” questions. The third was a tweet from Scott Thornbury, “The problem with EFL/ESL teaching is that, unlike maths, history etc, there is no subject. So the language itself becomes the subject.”

So this set me thinking. You see, I think ESOL in a further education setting has a bit of an image problem. There’s a perception in some corners that we should fit in with everything else, that something which applies to sixteen-year-old joinery apprentices can be applied without modification to a group of beginner ESOL students, and that our reluctance to do so, or the questions we ask in order to make sense of it in ESOL terms, is seen as ESOL teachers and departments being awkward, stroppy and obstructive. Don’t get me wrong, mind, because like any teacher, ESOL teachers can indeed be stroppy and obstructive, and I get that. However, there is a serious point here, and a single and profound difference between ESOL and everything else: with the exception, perhaps, of my colleague teaching BSL, every single other subject teacher in a college can communicate directly and unambiguously with their students.

Let’s take questioning as a good example of this. When teaching a subject through a shared language, one quick, effective way of challenging students is to ask questions which probe deeper into the subject, moving from straightforward knowledge of details (“Name three types of…”) to more complex, evaluative and critical questions (“What might happen if…”). This is generally seen as good practice, and, I think, quite right too. When I think of CELTA, for example, I might ask students initially to identify how to use the past simple, and then challenge them to analyse the problems faced by second language learners in using it, or what the barriers might be, or to compare how the past simple is used as a simple past reference and how it is used to describe a narrative. This sort of range of questioning or task-challenge works to push students into thinking beyond just knowing a fact. (For the record, however, you do need to know the fact before you can start to go beyond it. What is commonly referred to as “lower” order questioning is not necessarily worse or less important – if anything it is the most important type of learning, without which all the rest is impossible.)

Trouble is, all of this, every element of this, is entirely language dependent. It assumes on the part of the speaker and the listener a shared language with a fair degree of linguistic complexity. Don’t let snobbery get in your way here: my fictional joinery apprentices have access to an astonishing array of linguistic talents, even the ones who failed their GCSEs. The fact that they can understand a question like “what might happen if you used an alternative timber for this?” is a demonstration of a fair amount of language skill.

So we have to consider carefully the value of time spent in training on, or reading about, this sort of questioning once you remove that shared language. I simply cannot reliably ask my students “how would you change the verb if it is irregular?” Instead I have to get there a different way. The primary way I use questioning is not to expand in this way, but to apply successive “lower order” questions to build complex knowledge. “Read this sentence: I visited my sister. Am I visiting my sister now? Tomorrow? Before now? Good.” Then the next day I come back and start on irregular verbs, checking and eliciting concepts again using simple questions.

None of this means that ESOL students are incapable of thinking in those terms. Remember these are diverse classrooms on a scale incomparable in FE, with teachers, doctors, university lecturers and civil servants sharing a room with hitherto uneducated housewives, farmers and factory workers, none of which backgrounds can be used to make assumptions about language learning aptitude. To use terms associated with higher order thinking: synthesis, creativity, evaluation and hypothesising are required of ESOL students from the get-go when they are challenged to use language in new and unique situations. It’s just that we, as teachers, can’t use the language as a means to get there.

So we have to critically evaluate everything that a generic trainer says. Teachers are pragmatic people, after all, who would like something useful that they can use in their day-to-day classrooms, and an interesting curio like the SOLO taxonomy has limited, if any, applicability. Ditto Bloom, although it could be used for task design, perhaps. Ditto Socratic questioning, flipped learning, negotiating learning targets, sharing and self-assessing SMART lesson outcomes. These are language-dependent concepts, and this is the key to everything.

Until you’ve taught an ESOL class, none of this will make sense to you. I’ve seen it in CELTA teaching practice, where a qualified teacher in another subject tries over-complex questions with a low level class and suddenly realises that they might as well have just whistled and farted for all the good it’s done. The good trainees are the ones who realise that they do have to change their paradigm, and alter their classroom behaviours accordingly. Because that is what we are talking about: for a generically trained teacher of a vocational subject, the nature of the ESOL classroom in a UK setting is radically different.

And this can indeed make ESOL teachers seem obstructive when it comes to implementing college-wide initiatives or training opportunities, but they are simply trying to make sense of it all, to take those initiatives and challenges and make them work in their context. And that context is different, profoundly and radically. It’s also what makes ESOL such fun to teach.

By the end of the lesson…

On Monday, I delivered a reading lesson. I’m quite pleased with the materials, and with the level of analysis involved – reading a pair of texts for gist & detail, then a really meaty dig into the language used – connotation, metaphor, rhetorical questions and collocations. (By the way, the question about whether the writer was a man or a woman was just to get the students to think.)

So here’s a question. Two questions, in fact.

What, exactly, did the students learn?

How did I know this?

To put it another way, what were the learning outcomes? You can see the resources, and therefore the shape of the lesson – have a look and think about it first.

Ready?

This is what I said:

  1. read at least one text and be able to identify the gist
  2. read at least one text and be able to extract grammatical and lexical detail
  3. identify how we can use linguistic features like connotation, metaphor and rhetorical questions to achieve an effect
  4. develop a better understanding of collocations and their meaning.


On the face of it, perhaps, the first two are OK and unremarkable (we could say how many details per student, to make it more measurable), but the last two would leave me open to criticism – not particularly measurable and lacking in specificity (because when it comes to SMART, the only letters which generally count from an audit perspective are the first two).

We didn’t get round to the collocations, but the metaphors, rhetorical questions and connotations provided plenty of focus for the lesson. What was lacking, I first thought, was an opportunity to put this sort of awareness into practice – we had lots of “input” but limited practice. The question, I think, is what kind of practice would have been appropriate. One activity might have been a series of discussion questions to personalise the vocabulary uncovered in the lesson, although I’m not sure what they would have been: “Have you ever plundered a village after a battle?” “Did you slash your budget when they stopped your benefits?” “Are you moderate or fundamentalist in your religious views? Why?” etc. (I am being facetious, of course.) Perhaps something around reviewing short texts for other, similar examples. Then again, I think the lesson was about awareness raising: moving forward with new vocabulary and the skills to deal with new vocabulary and new images, awareness that a writer may well be playing games with you, applying subtle (or not so subtle) tricks to develop ideas more effectively. There was also a sense of learning about the hidden culture of a language: the use of violent and criminal imagery when discussing politics and finance, for example, or the well meaning jargon of aspiration and opportunity employed by public sector employees and journalists. Not all of this is easily “extractable”, and it is often a combination of linguistic features; perhaps there is no place here for application, not in terms of making the tasks achievable and realistic for Level 2 ESOL students, only for evaluation and analysis. Not yet, anyway.

At the end of the lesson I asked the students to think about what they had learned in the lesson. They wrote down things like these:

All good – focussing on the language we looked at. What was really striking was that, without prompting and reminding, the students didn’t consider the reading as part of the learning, despite the clear outcomes shared at the start. I could have reviewed the outcomes with the students more carefully, but I suspect they would simply have been parroted back to me in the student feedback activity.

All of which brings me back round to the learning outcomes themselves. I was tempted, in an attempt at micro-rebellion, to write on my scheme of work a super-SMART outcome for the lesson: “Students will be able to read two texts on ESOL policy and identify 8 details from those texts, and those texts alone.” This, after all, would be the only reading outcome we could honestly say had been evidenced as part of the lesson. It would not have worked had the outcomes been reviewed, though. “Too descriptive,” an observer would say. “What are the students going to be able to do as a result of that activity?” “Be able to do”? I don’t know. I know that this group of students can read a text and make sense of it already. The lesson was therefore merely giving them practice in it, so “students will have had practice in reading a text”, etc. would have been a much more honest outcome. This would have come under fire too, of course: “It’s not SMART. Where is the evidence of learning? It’s not enough just to practise – you need to supply evidence that something has been learned.”

I could have not had the reading outcome at all, perhaps, and focussed on the language analysis later on, but again, my imaginary observer would have (rightly) slammed me on this one too: “You should have had a reading outcome as well for a two-hour lesson where students spend over a third of that time reading.” And round and round we go.

My students were right, of course. The texts were primarily a way into the language, not the primary aim of the lesson. Learning outcomes are knotty like that – the quest for evidence of learning can make the expression of that learning problematic. After all, the only real evidence I have of the students’ reading skills is that they can read those two texts, not that they are capable of applying those skills universally, in any setting. And with language, particularly complex, idiomatic language, it can be hard to evidence understanding without applying that language; but sometimes application of that language can be hard, or even unrealistic, and even when it is apparently possible, it still doesn’t mean that the students have learned anything in a replicable manner.

That’s not to say a lesson shouldn’t have clear aims/objectives/outcomes. I like my unplugged lessons, and there is always room for emergent language in lessons, but there needs to be a balance between these and more clearly focussed lessons. But the semantics and purpose of outcomes need to be evaluated and considered more carefully, rather than given the usual blind acceptance we go for.