
Nobody Expects Dimples

This week I taught two lessons which reminded me of the richness that letting the students lead on the content can create. The first lesson was on Tuesday night – a Level 1/Level 2 group on the theme, broadly, of “life stages”. It was meant as a build-up to a listening activity based around this recording from the BBC’s excellent Listening Project, but took on a bit of a life of its own.

The activity was a variation on the game of consequences. At the top of the page I had printed “Be born”. Students worked in pairs and added the next thing that they thought would happen. They then passed it to the next pair along, who added another idea, and so on. I mixed things up a little, taking a lead from a chapter in 52 by Lindsay Clandfield and Luke Meddings, and every now and again I asked the students to put something bad as the next event.

Some of the language generated was fairly predictable: go to school, get a job, retire, etc. But once the class warmed up to the task, the lists became almost small, occasionally tragic, biographies: have a breakdown, have an affair, get expelled, drop out, recover, get kicked out, bring up children, and, my personal favourite, have a mid-life crisis, prompted by a pair of students talking about men of a certain age. The students were trying to express an idea, lacked the necessary language to do so, and my job was simply to fill that gap.

Something similar occurred with another class, this time Entry 1. Like the life stages task, this was intended as a precursor to something else, but also grew beyond the bounds of the planned activity. This time, students were brainstorming and researching in dictionaries vocabulary to do with physical appearance. Each group had a sheet of A3 paper – one group used this to brainstorm words to do with hair, one with body, one with face and one with skin. I was a little nervous about the last one, what with the potential for racist overtones, but in fact this provoked arguably the most useful chunks of language: greasy skin, oily skin, dry skin, and sensitive skin. The face list also took me by surprise – one student pointed at her cheeks –

“What are these?”

“Cheeks.”

“No.” Irritated by her stupid teacher. “These.”

I looked more carefully, a little nervous about staring, and the penny dropped:

Dimples!

In both cases, the new vocabulary arose from the students needing specific sets of language to express a concept. The task gave a setting for the vocabulary, and linked it all together, but ultimately the words were the students’ own words. It would never have occurred to me to teach words like sensitive skin or dimples, despite the students in question having both – sensitive skin in particular is useful for a student who is resident in the UK. Having a breakdown, an affair or a mid-life crisis is unlikely to make it into most teacher-selected syllabuses, but these nevertheless arose because the students had a need to talk about them. And again, these are not unusual or unlikely phrases – you could find them in most newspapers, magazines or online with a fairly high degree of frequency. Both groups took great delight in exploring the new language, playing with it, using it. “I read about a man in Poland, he had a mid life crisis and…” “Can you have just one dimples?” “Do you know good [indicates rubbing cream into skin] for sensitive skin?”

There is an element of luck to this, for sure: with another group on another day we might have got just the expected language. This isn’t some form of “best practice” that can be packaged up and rolled out at a training event. There is an element of skill as well. This wasn’t dogme or unplugged teaching per se: the activities limited the range of emergent language to a reasonably predictable collection of terms. We weren’t about to start talking about finding a job or how to make chapati, after all. A little control, a little setting of boundaries, if judged carefully, can make for a surprisingly productive lesson. You need to judge, as well, if the language is going down a dead end, explain well, and, arguably most importantly of all, capture what has come out. In this case I had the A3 sheets, and the lists of vocabulary (many of which were copied or photographed). For the Level 1 group we had key terms on the interactive whiteboard, which I have turned into PDF handouts and sent out. I’m in the habit of recording new words and concepts on a regular whiteboard next to the interactive one, then taking a photo with my phone. The A3 sheets were photographed, uploaded to Google Drive, and on display on the interactive whiteboard in slightly less than a minute, with virtually no impact on the flow of the lesson. The emergent language was captured and shared.

The language was practised as well. For the Level 1 group I closed with a speaking task – asking have you ever…? in pairs using the consequences sheet. For the Entry 1 group, once I had the photos on the board, I elicited the relevant structures (“I have got…” and “I am…”) and had the students tell each other about themselves before reporting back to the whole class using “he’s/she’s got” and “he/she is…”. I also think I missed an opportunity or two: the Level 1 class could have written short fictional biographies, for example. The Entry 1 class will be following it up properly: putting the descriptive language together with the work we did last week on daily routines, and creating a profile of an individual based on photographs and other images.

There is, I think, always a place for teacher-led decisions on language content in some lessons, and students like to have a little direction. They are, after all, not stupid, and can smell an unplanned, undirected lesson a mile away. But you can create an activity and the conditions for language to emerge within the lesson, while still giving the lesson a sense of structure and purpose. Sometimes, as well, specific forms are unlikely to ever simply “emerge” in this way, so a bit of teacher-led input is necessary: while we are under pressure to get students to pass exams and achieve external curriculum aims, there has to be a fair portion of teacher-selected content.

But there is still a lot of freedom there, of course, which means there is plenty of room for dimples. 

Spending Time Writing

Last night I did a writing lesson, or at least a lesson working towards the production of a piece of writing. It was based on a short video: students watched half, reporting to their partner what was happening, before switching roles. This was followed by a quick review of past tense structures and the writing of a report of what happened from the point of view of the protagonist. Traditionally, of course, the discussion, planning and perhaps drafting of the text might happen in class, with the final writing taking place for homework, but for this lesson I chose to use the last twenty minutes, or thereabouts, to ask the students to produce the final written report. This was done more or less in silence.

Setting writing for homework is always problematic. For one, not every student will complete it on time, or hand it in when you want it handed in, meaning that any follow-up activity is inevitably stymied. And even for those students who do hand it in straight away, you have no idea how long they spent on the writing, nor how much help they got from online research, books, or a family member who can write well in English. So how realistic is it as a “pure” measure of their ability?

By bringing the writing into the class you can create all sorts of extra benefits. For one, the “creative” element of the writing process is reduced: students can collaborate on the development of ideas, minimising the amount of “I don’t know what to say.” You can also control the amount of support each student gets. In my lesson, for example, I allowed collaboration and phone/dictionary use all the way up to the final version, but allowed students to decide how much they wanted to collaborate. This produced some interesting results. Some pairs worked very closely, and produced very similar pieces of writing: for these students, the key value was in the collaboration and the discussion. These students received feedback from me and from each other throughout the process, but also spent rather less time on the final draft, concentrating on spelling and punctuation rather than the lexical and grammatical elements. Those pairs who only really collaborated up to the planning stage ended up spending probably about half an hour on the drafting and writing up stages, and as a result received less peer and teacher feedback during the process. For these students, the extensive feedback will come in the next lesson, when they will get the written feedback on their work. That lesson will take a bit of management: some students have very few errors, and will need very little time to review them, but some students will have extensive questions to ask, and may need more time to check.

The fact remains, however, that all the students spent at least 20 minutes of a lesson simply sitting and writing, and I was OK with that. There are some people who would be uncomfortable with this, and there is a belief in some parts of the ESOL teaching community that the majority of time should be spent speaking wherever possible, usually cherry-picking the statement from the NRDC Effective Practice Project that “talk is work in the ESOL classroom”. Certainly an observer who had arrived some 20 minutes before the end of the lesson would not have had much to observe, and may have chosen to be critical of the fact that the students weren’t engaging in speaking; that observer and I would then have wasted significant time during feedback, with me justifying my decision against their prejudices.

In this lesson, you see, I think it worked. The silent(ish) writing was an appropriate culmination of a lesson which centred on narrative tenses and a punctuation review, giving lots of opportunity for differentiating the process of writing as well as the evaluation of and feedback on the writing. The in-class writing also allowed for a psychological closure of the topic and themes: there was little or nothing left hanging over into the next lesson, and I will have something for all students to do in terms of checking feedback in the next lesson, rather than having to take in work, negotiate deadline extensions and so on. The task is complete, the lesson done, and the learning can be carried through into the next lesson, where students are given guided feedback for self and peer correction. What could be wrong with that?

Nothing New or Innovative Here

You know, it really is very tempting to think of notions of blended learning as cobblers. Or at least as old non-cobblers rehashed as cobblers. Because if you take a careful look at it, blended learning, hardly a “new” concept at 15+ years old, is either simple, old-fashioned correspondence courses, or it’s even simpler, more old-fashioned homework.
Let me explain. I’ve been looking into what blended learning is and what it has meant, and the general consensus definition is that it’s a combination of some online learning and some face-to-face learning. Sometimes the online element is considered discrete from the face-to-face element – essentially a correspondence course by computer alongside a face-to-face course; or the online elements and the face-to-face elements are linked, perhaps after the manner of the absence of innovation that is flipped learning, in which case the online element is basically homework.

However, distance learning by correspondence and homework are, in themselves, not necessarily bad things. Lots of people have successfully learned by distance learning, and a lot of people have gained a lot from homework. All blended learning does is take these perfectly serviceable ideas and chuck them on a web server. What you end up with is the usual “it’s innovative” cry that gets attached to doing stuff on a computer. Paper-based multiple choice gap fill? Boring. Multiple choice drop-down box on a website? Innovative. Give instructions verbally? Sooooo 20th century. Send them by text? Wow!

This takes me on to thoughts of SAMR. Essentially this is the idea that technology use in education goes through distinct stages:

  • Substitution, where technology merely does the same as a non-tech method, but brings nothing to it. 
  • Augmentation, where the technology does the same as non-tech but also adds something to the process. 
  • Modification, where using the technology changes the activity. 
  • Redefinition, where the technology creates a whole new type of activity which would have been unimaginable without it. 

There’s a neat definition on this site, with some useful videos: http://www.educatorstechnology.com/2013/06/samr-model-explained-for-teachers.html – although the Google Drive example is probably not the best one, even if it is the easiest to explain.

So far so clever. It seems to suggest a link with Bloom’s Taxonomy, and you can tell that whoever thought of it clearly had the ideas first and the name second, because it hardly trips off the tongue. It’s a nice idea too, and one which should encourage us to experiment with technology more, and think about the effect it has.
However, I have to be honest and say that my initial reaction was annoyance. A little bit of that was a knee-jerk reaction to educational initialisms and acronyms. But there was more to it than that. Like Bloom’s Taxonomy as it was originally stated, SAMR seems to suggest a hierarchy of changes, where the SA stuff is somehow perceived as less valuable than the MR sections, much like the idea when discussing Bloom that somehow knowledge is less valuable than being able to synthesise and evaluate. Bloom, happily, is being presented more frequently as a wheel rather than a pyramid, although the divisive hierarchical notions of “higher order” and “lower order” thinking persist.

However, it occurred to me that I was reading SAMR wrong. It’s not meant as a goad or an encouragement, and Modification and Redefinition are not intended to be taken as better than Substitution or Augmentation, or even than not using the technology at all. No, these are descriptive terms only, and can be used just as easily to justify a technology not being used. Is there a cognitive or learning benefit to the application of technology? That’s the real question.

And thus we come back to blended learning. How does it fare under SAMR? Let’s think about the two models of blended learning. 

Is there a benefit to the technologicalisation of the distance learning model? I think there is: having the learning materials quite literally to hand at all times through your mobile devices could be a benefit to some learners. Technology lends itself to easily available multimedia: rather than films on TV restricted to weird times of the day, you can have films on demand. You can have instant feedback on certain types of task, collaboration with people all over the world and so on. Indeed, the breadth of reach and relative cheapness of digital technology means that some elements might run which might otherwise not happen. Whether students engage with this sort of thing is a whole other question: even with monitored assessment in the form of quizzes and so on, the temptation, unless it’s absolutely fascinating, and the student 100% motivated, is to try to work out how to game the system. I know that’s what I have done for every bit of mandatory online training ever. I usually start with the final assessment task, then look up (or simply Google) the bits I can’t work out, rather than actually engage with every single piece of said online training. I suspect that this doesn’t lead to brilliant learning, but I do think that a clever online learning designer would take this tendency/temptation into account. Sadly, they don’t seem to have done this yet. 

And the closely linked homework model? Crikey, yes. A web link to an interactive task which can be done on the bus or during a break, quick written feedback on digitally submitted writing (even by email!), the flexibility of being able to do homework without needing a piece of paper, the (for some) added motivation of a bit of whizzy graphics, quick right/wrong feedback on a quiz meaning that students can think about where and why they made mistakes before coming into class to discuss just those questions.

The other question to ask, however, is whether either of these models is actually better than 100% face-to-face learning. My gut feeling, and my belief, is that they aren’t. For me, face-to-face learning trumps any kind of online learning simply because of the speed, ease and naturalness of the classroom interactions, although homework can be used to augment that process. Any idea that blended learning is better is often based around assumptions that classrooms are places where teachers stand and talk at or demonstrate to students and students absorb, perhaps with a bit of questioning. My own classroom practices as an ESOL teacher aren’t based on this: rather they are based on notions of enabling and promoting spoken interaction, of discussion and questioning, and for me, the technology simply cannot replace that. Not yet, and maybe not ever.

Outcomes, Evidence and Assessment

Let me start with an apology and a clarification. First, I’ve blogged about learning outcomes quite recently, and, although I think I’ve taken a slightly different tack here, there may be some repetition. Sorry. The clarification is for those colleagues to whom I promised I wouldn’t say anything about what they said about this: I haven’t, and I promise that any resemblance is purely coincidental. No, this came out of a discussion with a colleague who was struggling with phrasing learning outcomes, and in particular trying to get past writing an outcome which was just a description of the task, towards a statement of evidence of a transferable skill: i.e. what the students had learned as a result of the task. I am starting, as well, with certain assumptions about learning outcomes. First, that they should be SMART (Specific, Measurable, Achievable, Relevant and Timed). Second, that when successfully completed, a learning outcome should demonstrate to the teacher that the learners have learned something.
Take these, for example:
1. Learners will be able to use present perfect.
2. Learners will be able to use present perfect to describe past experiences.
3. Learners will be able to use present perfect in five sentences describing their past experiences.
Outcome 1, essentially, is the thing you are trying to achieve. This is what you want students to be able to do. However, outcome 2 is better because present perfect has a number of different usages in English, and this is clearer.  God help me, but it is more Specific. 
Outcome number three, however, ticks all five of the sacred SMART requirements. It is Specific, in that it evokes a precise use of a particular grammar structure; it is Measurable, by simple inclusion of a number; it is certainly Achievable, assuming that the lesson is for Entry 3 ESOL; and it is Relevant, in that being able to describe what one has done in one’s working life is a helpful thing to be able to do – if this were an ESOL for employment class, it would easily suit. I don’t need to explain the time frame, do I?
So let’s say, for the sake of argument, that this is the outcome of a lesson. So far so good, right? I mean, it’s SMART, there is clear evidence of having achieved said outcome, it’s pretty good. The learners have demonstrated that they can do those things.
By writing those sentences, however, does this mean that the learners will have demonstrated that they know how to use the grammatical structure? Can we assume with any certainty that subsequently the learner will be able to take that point and reapply it in another context? Essentially, is the grammar a known thing? 
This is where it gets challenging. In literal terms, the writer of the outcome is only claiming that the learners can make those 5 sentences: not that the learners will know the grammar. I don’t think, as well, that anyone would really assume that they did know the grammar at this point, only that they are further along the path of being able to do it. If that’s the case, however, then what is the point of the super specific learning outcome? How does it help anyone? If we as teachers are acknowledging that this is a bit fake in terms of what the students have done, then where is the value of learners reflecting on this as a means of marking their achievement? Perhaps they, and we, might feel that they managed to do it but would like to work on the grammar a bit more, but then really we are thinking not about the evidence of the learning outcome, but rather the vaguer, more woolly aim, like outcome 2 above.
But what about skills development? In principle a learning outcome should be a statement of the transferable skill learned. And taking SMART as our touchstone, a learning outcome can be easily formed for a skill like listen for gist, or read for detail: “Read a text and extract 5 details”. Except it’s not that simple. All you can ever say for sure that the learners will have evidenced here is that they can read one text and extract 5 details from that text alone. They have certainly had practice in that skill and will be developing it, but you couldn’t say for sure that they will now be able to read any text appropriate for the level and extract 5 details. Because of the need to measure the achievement, a language skills outcome can only ever be a description of the aim of the task in one lesson, not a statement of a transferable skill. When you ask the question “has learning happened?” it all gets a bit tricky to define.
What about pure communicative outcomes like “ask for information at the bus station” or “be able to tell someone 5 things about jobs you have had”? ESOL, and indeed ELT generally, is blessed/cursed with the challenges of marrying up functions, skills, lexis and grammar in course design, and success at a communicative function can be achieved without the “right” grammar for the job. One of the challenges of getting students past the Entry 3 (B1 on the CEFR) threshold in a target language setting is dealing with the fact that they can often get by pretty well. Thus success at speaking is hard to measure without taking into account the grammar being used. An outcome like “tell someone 5 things about jobs you have had” could be achieved by pretty much any learner in any class at Entry 3 or above, but may not use present perfect, or indeed any tense structure in English which refers to the past. “I work for 5 year at Batley Beds. I work now for Batley Beds.” gets the idea across, albeit inelegantly.

So how do we write “good” learning outcomes for an ESOL class? Speaking personally, and very definitely outside my usual role, I don’t think we can. The nature of an “evidenceable” learning outcome necessarily restricts us to what is achieved in the classroom, and simply does not allow us to make general comments as to the wider learning. A very wise man once suggested to me that rather than focussing on specific measurability, and to avoid producing fake evidence like “write 5 sentences using present perfect”, a more honest and realistic phrasing would be “be better able to use present perfect to talk about past experiences”. This is perhaps an objective rather than an outcome, but it places the onus on the teacher to monitor carefully the students’ language production. This is, of course, what good teachers do: assessment of language production in a classroom is not simply through the achievement of learning outcomes, but a continual, steady process. Language teachers, or at least the decent ones, don’t issue a task and then sit back and wait for the results to roll in before providing assessment feedback, but rather tend to patrol the room, monitoring students and making suggestions. Formative assessment is not solely through quizzes or (oh God, this makes my heart sink on an ESOL lesson plan) “Q&A”. It is a process which starts at the beginning of the lesson and which continues throughout. Using a selection of learning outcomes which purport to demonstrate learning is essentially false, because in a language class you are always measuring learning in a hundred different ways. It would be impossible to realistically record all of this when planning, and very probably pointless.
Measurable learning outcomes seem based in a model of teaching where learning occurs as a result of direct input, and for me an ESOL class doesn’t work on these terms. Because the thing being taught is usually the same as the thing being used to teach it, the evidence, as suggested by Hattie, for the effectiveness of direct instruction supported by questioning is largely irrelevant: there is no point in direct instruction if the people you are instructing can’t understand what you are saying. No, rather, there is still a place in an ESOL class for collaborative work and scaffolding: development of language ability in adults comes through using that ability, in combination with finding out about it and only occasionally being told about it. Development of that ability is not always directly evidenceable, and even where it is, the reliability of that evidence is highly questionable.
Perhaps we need to turn our back on the input/output behaviourism of the learning outcome. Forget SMART and be a little more laid back. Unfortunately, this doesn’t fit in with the prevailing educational wind in post-16 learning in the UK. But then, one of the challenges of teaching ESOL in an FE context is that we are a bit of a misfit, lauded and celebrated when colleges want to brag about their diversity, but in terms of funding, timetabling and classroom practice, a bit of a pain. But then I wouldn’t have it any other way.

Questions

Sometimes I think that this teaching lark is all about questions. Take CELTA, for example: trainees (and trainers) spend hours agonising over concept checking questions, ICQs, and questions to draw out the meaning: they all cause a headache for new teachers (along with labelling the stages of a lesson, although this is like moving your head slightly to show you are using the mirrors in your driving test, and something you never ever need to do again).

So, asking questions. Arguably this is the most useful tool in the teacher’s arsenal, and for some people, the hardest to master. Given time and practice, mind you, it becomes something you do more or less out of habit.

Here’s how you do it, based on a CELTA teaching practice I saw, focussing on using “and” and, most importantly, “but” with a group of Entry 1 students:

Put up the following two sentences:

I have an iPad _______ I have an iPhone.

Then point at the first sentence. “Is this yes or no?” (or “positive and negative” if you like). Students say “Yes”.

Then repeat with the second sentence.

Indicate this on the board with a tick by each sentence.

“Are they the same?”

“Yes.”

Ask: “How can I make this one sentence?”  and/or “What word can I put here to make one sentence?”

This should elicit I have an iPad and I have an iPhone.

Then we follow more or less the same procedure but with some changes:

I have an iPad _______ I don’t have an Xbox.

Point at the first sentence. “Is this yes or no?” (or “positive and negative” if you like). Students say “Yes”.

Point at the second sentence. “Is this yes or no?” Students say “No”.

Take a deep breath: this is a crucial bit.

“Are the sentences the same?”

“No.”

Ask, again: “How can I make this one sentence?”  and/or “What word can I put here to make one sentence?”

I have an iPad but I don’t have an Xbox.

This may not work. You may want to try “Can I use and?” to guide them a little.

Obviously, the students may still not have a clue, so you may have to tell them here, although I reckon you should have got there by now.

You then follow this up with some more questions. Write up:

I have an iPad but I have a computer.

Questions are then asked more or less as above, but this time you finish with “is this right?” The real trick now comes with making sure all the students have understood, so you get all of them to answer the question, for example by saying “if you think it’s right, put up your hand” or getting them to write yes or no on a mini whiteboard or piece of paper. The point being that this questioning now has a different purpose, which is to ascertain whether or not the students have understood the language in the first place. Depending on the level, of course, you may want to ask some students to explain why.

This seems like a lot of fuss, and believe me the explaining of this takes much longer than the actual doing of it. However, the point of this kerfuffle is this – by asking questions like this, you are taking all the bits of knowledge that the students already have (in this case, notions of positive and negative, how these are formed, and the idea of joining two simple sentences using some form of conjunction) and starting to link these all together in a structured way. The fact that you are asking questions means as well that you are demanding that the students engage with you in order to do this.

Now, I’ve got to admit to a little bit of annoyance with the CELTA fetish of asking instruction checking questions, and indeed sometimes concept checking questions, when there is no need for it. I’d much rather see clear instructions and 90% of the class get on with it while the teacher helps those who are struggling, rather than ritualistically asking stuff like “can you tell me what we are going to do?” and “are you going to write in the gaps?” while the whole class sits there in a state of confusion.

It’s also this sort of thing that irritates the living shits out of me when people waffle on about higher order questioning. It’s a bugbear of mine. Higher order questioning is about meta-awareness, and sometimes, indeed very often, this is totally unnecessary and no indicator at all as to whether or not learners can actually use the language. Give me some speakers of Slavic languages in a Level 1 class, and I will give you a whole bunch of people who can explain the rules around definite and indefinite articles, discuss very clearly the reasons why we use them, hypothetical situations where we might not use them and so on, a whole stack of higher order stuff, but they still say “I went to supermarket and I bought apple.” You can wave Bloom’s Taxonomy at me all you like, but higher order questioning is not going to help those learners learn articles, because there are other things at play here, and more than simply presentation.

What they need is practice, and that, of course, is a whole other question.

Wait time: a reflection on an “innovation”?

Now there’s a pompous title to get the world going, no? Look, I can hear you all cry, he’s going to do a snidey “this is all great, but…” thing, like he usually does. Sorry to disappoint. I’m going to try to be fairly straight on this one, no curve balls, no “howevers”.

Anyhow, there’s this thing called wait time. This is an idea that isn’t terribly new – it’s been knocking around in the literature for years, and I’ve been aware of it for ages. However, it’s only really in the last couple of years that I’ve started to make use of it, and only in the last six months that I’ve really put it into place properly.

The theory behind it, in essence, is as follows. In most cases, the teacher asks a group of learners a question, then either jumps on the first hand that shoots up, the first voice that shouts out, or, if they are trying to be a bit more sophisticated, nominates some unfortunate soul who was hoping like hell the teacher wouldn’t ask them, because they don’t know the answer or don’t want to stick out like a swotty show-off, or, maybe, are just a bit shy. This nomination could be randomised: break out the lolly sticks or the swirling PowerPoint name selector. But either way, someone gets put on the spot, with the teacher expecting an answer. You see it in ESOL classes on a bigger scale: the teacher asks the whole class a question like “what are some words for fruit and vegetables?” You get three or four students supplying predictable answers, and then the teacher tediously writes up a bunch of vocab that the class already know, before realising that it all needs to be rubbed off for the next part of the lesson, rendering the entire activity largely wasted.

Bleurgh. Boooooooring. Even if you jazz it up with an interactive whiteboard it’s still boooooooring, (although interactive whiteboards are pretty dull objects at the best of times).

The point, anyway, is this. In your class you dispense with the lollipops and the random name generator. You stop those coasting learners leaving the hard work to the big mouths (that was my education all the way through: I was Captain Coaster in pretty much every subject), and you shut up those same big mouths who dominate every single lesson. You do this by building in “wait time”. Wait time is, as the name suggests, a section of time between asking a question and getting an answer, where learners either individually or collaboratively think of an answer to the question, before feeding it back to the rest of the group.

This is how it could work. On an individual level, wait time at its most simple means that you refuse to accept immediate answers and tell the whole class to think of an answer in the space of time given. Personally I wouldn’t want this to be much more than ten seconds, but that’s very much me disliking extensive periods of silence, not to mention the distinct risk of everyone drifting off into their own private worlds. Then you either nominate a learner, or use some sort of technological interface to find out what everyone thinks: Socrative or PollEverywhere would be useful, or you could be wild and crazy and get everyone to write it down on a mini whiteboard, which they can then hold up. If you like a nice classroom gimmick, you could use coloured cards like green/red for yes/no or true/false questions (and perhaps also an “amber” card for not sure).

Like I said, however, I’m fond of a little noise in my classrooms. I get all itchy and restless if students are just sitting and working quietly. So I have been making this collaborative rather than individual. It’s worked well in grammar lessons, where learners have been exposed to examples of the language in a reading or listening task, and then I want to ask concept questions to get them to think about the meaning and form of the grammar. Rather than verbally ask the question, I have been putting them on a PowerPoint slide and then asking learners to work in small groups to discuss what they think is the answer. I’ve been varying the time, depending on the complexity and challenge of the question asked, and while learners have been discussing their answers I’ve been walking round, listening and talking to the students. At the end I’ve had the groups write their ideas on mini whiteboards or share them back to me at the front, depending on how much extra explanation and whole class discussion I think has been necessary.

And I like it. Lots. In my classes, this has meant that my explanations have been minimal, but personalised where I have had to explain in detail. You also feel more confident that everyone is engaging with the language and participating in the discussion, and it’s much less difficult for shy or reluctant contributors because they are only talking to two or three other students. The instructions are quite easy, and on one occasion I didn’t even need to tell the class what to do after the second question: they just saw the question on the board and started discussing. Students have been working it all out for themselves, sharing ideas and, crucially in an ESOL class, interacting in English, developing their general speaking skills on the side.

Like I said, at the beginning, this isn’t a new idea. I’m not being an amazing innovator here. But it’s sort of new to me, and I like it. It’s also a fairly unusual experience writing a non-snarky blog post, and I’ve quite enjoyed that too.

“What kind of world are we trying to represent?”

I was at the regional NATECLA YH day conference this week, and the final plenary was from Heather Buchanan of Leeds Beckett University talking about the uses and abuses of global coursebooks.

It was an interesting and indeed controversial topic, particularly to a group of people who probably rarely follow a single coursebook, preferring, out of necessity or expectation, to pick and choose published work, or develop our own materials. I’m not going to weigh in on the coursebook/no coursebook argument, although I do challenge those ESOL managers who think we should have a full year scheme of work at the start of the academic year to tell me why we shouldn’t just follow a fixed coursebook which we adapt to the class.

No, the thing which really resonated from Heather’s talk was the comment in the title of this post: “What kind of world are we trying to represent?”

I make a lot of my own materials, and devise my own activities, and I started to think: what kind of messages do I send to my learners based on my selection of texts to read and approaches to take? Do I, as an ESOL teacher, have an agenda?

Well, yes, I do. If pushed I would argue something along the lines of slightly left-wing woolly liberal, focussed on the needs and lives of the learners. Or something (woolly, remember), but I wonder how much of this comes out in the choices of themes, topics and texts which I bring into the classroom. For one, that sentence says a lot: “texts which I bring into the classroom”. They are often texts that appeal to me, as well as hopefully appealing to the learners. Certainly in theme these are often a lighter touch than perhaps my higher ideals would prefer: articles about lorries getting stuck under bridges, web quests about local events. But then I do sometimes select texts on more complex issues: for example I have a reading and speaking activity based on the Tony Martin case, where a man was sentenced to jail for shooting a burglar, or controversial variations on the balloon debate, like the Amnesty task from OneStopEnglish. Interestingly, however, on more controversial topics, I realise, on reflection, that I tend to situate these outside the learners’ own experiences. I have avoided too much explicit discussion of the activities of right wing activists in the UK, especially when this applies to local issues. I have helped learners engage with local issues, for example helping them to write letters to their local MP to protest against the proposed closure of A&E services at the local hospital, and supported learners on an individual basis with their personal plans and progression, jobs and so on.

Where I have felt less comfortable, perhaps, has been the direct nanny-state teaching of social and moral standpoints. The main problem I had with PSD last year was based on this. Who am I to comment on an individual’s approach to personal health without them initiating that discussion? Yet I would support a learner if they came to me with a personal problem. But not for me the eatwell plate. I approach many of the citizenship materials with a critical, cautious eye: could I define a good citizen? Probably not. Do I think that the NIACE materials helped to define this? Not really, but then they were never meant to. The most recent Life in the UK test guidance is excruciating in its literal whitewash of history, and in the way it raises the importance of this history.

I am not a sports fan, and tend to avoid sport-related texts and sport-based resources. This is silly, as many learners do like sport, and it would be a great source for some really interesting language. I am a music fan, but again, I tend not to use this as a source for my learners’ learning, although this time it is perhaps the nature of the music which makes me reluctant to share it with learners. I am also a book and film fan, and sometimes this does make its way into the classroom, perhaps because books and films are much more clearly and obviously usable.

In short, then, the world I tend to represent to learners is bound up with the identity of who I am and what I feel and believe about the world. Is this true of all teachers? I assume it probably is, and equally that it is hard to step outside of that world when preparing texts, no matter how much you would argue for learner-centredness. For myself, I know I could do more to get learners involved by, for example, having them bring in a text each to analyse, or a question to answer. I attempted to do this for a while in my low-level community-based class, by getting them to talk about the people they know in the form of people “maps”, but this has had to fall by the wayside as the class is slowly shedding learners.

This, I guess, is the challenge for all materials writers: the world we need to try to represent is the one which learners can engage with and/or relate to. It helps, of course, if we can engage with that world: easier for those of us who share cultural links and heritage with the learners, perhaps. Sometimes my own lifestyle feels like it exists in some sort of parallel universe to the lives of the learners I teach, and the challenge there is to bridge that gap, to link the lives and challenges of our learners to our own lives and challenges, and to share in the way that we deal with our different challenges.

I don’t mean to force my agenda, or my ideals, onto learners (although I would challenge learners on any issues of equality), but I think that this comes through nonetheless. Perhaps this is a bad thing, but then again, perhaps not. More important, perhaps, is the question of whether we can avoid letting our own personal, social, cultural and political agendas come through in our teaching. I rather doubt we can.