QTLS

I keep coming back to this QTLS thing. There’s an advert on the ETF site now about how to go about it, and it looks pretty much as if the whole process has been bought lock, stock and Reflect+ barrel over from IfL: more or less the same, with the same sort of process as when I did it.

So was it worth it, and more to the point, would it be worth forking out nearly £500 for now? There are two aspects here: the value of the process itself, in terms of the benefits of the reflections involved; and the value of the finished product, in terms of what it means and what you can do with it. 
Process is probably, for me, where it was weaker. For an ESOL teacher it was, and is, hard to identify with the whole dual professionalism concept. I’m not a professional user of English except in my setting as a teacher, so it was hard to extract the pedagogy from the subject knowledge. For me, the two are inextricably linked, so deciding which bit was “subject” and which was “teaching” was, if not impossible, then definitely fake. The unsatisfying compromise was to talk about knowledge of language as subject, and to talk about learning language in the teaching section: unsatisfying because you couldn’t help but repeat yourself, and neither felt complete. The repetitiveness kept coming up in the rest of the Professional Formation process as well. 
It was also not a difficult process. As you can tell by my 100+ blog posts on this site, I am prone to writing about my work at length. It’s not something I find terribly hard to do, either. I find writing far, far easier than, say, a face-to-face discussion. I am slow-witted and tongue-tied, and tend to wilt when challenged verbally: I’m the person who comes away from every meeting or professional discussion with a hundred brilliant things I should have said. So really, rattling out a few hundred words about professional practice is not even slightly difficult, and as a result, the net benefit from writing a few thousand words plus amassing all the certificates and so on was pretty small. 
I also hated the system for doing it: Reflect. God I hated that. I don’t want hyperlinks and multi-modal swishy buttons and stuff. I definitely don’t want unusable constantly overlaying menus and graphics that regularly crash my browser, thanks. I wrote the whole sodding lot on Word with capitals LIKE THIS (LINK) to tell me where I was going to put links. I remember seeing the final report in printable format and wondering why it couldn’t have been presented like that. 
So was the process worth the effort? Not really, and I probably only spent a few hours on the writing, plus another hour or two on the scanning and finding of certificates and documents. Would it be worth £485? No way. If I went through the process now and had to pay for it, the phrase “short-changed” would be at the back of my mind the whole time. 
What about the product, having the status of QTLS? There are two nominal benefits to this: the ability to step from FE to school teaching, and the mark of professional status itself. The first is quite easy, really: no specific benefits there for me – the call for state school ESOL teachers is small, perhaps non-existent, although teaching students with EAL needs has a certain appeal. If I did make a wholesale move, my inclination would be primary not secondary (eugh, teenagers) and I would simply not feel comfortable making that move without some training first. QTLS would be pretty useless, I suspect. However, I do recognise that for some people this is a valuable thing, and I like the notional parity it brings to FE. 
What about professionalism? Calling myself QTLS didn’t really make me feel anything much, I have to say. It didn’t create a sensation that I was now a proper professional, partly because I’m an arrogant bastard, but mostly because the process wasn’t challenging. I didn’t feel like I had had to work for it (I’d done that already with my qualifications) and so the final product didn’t feel like an achievement. I also don’t think that professionalism is about membership or letters after your name. Professionalism is a mindset: it’s how you think about yourself and whether you can hold your head up amongst school teachers, doctors, lawyers and the rest and say yes, I am a professional. I have trained, I have studied, I have been, and still am, crafting and developing my skills. I will challenge, preferably in writing, anyone who dares to suggest I’m not a professional. And it doesn’t take four letters to support that challenge. Well, not the four letters “QTLS” at any rate. 

Angles: Subjectivity and Challenge

I’d like to clarify something, just in case you hadn’t worked it out by now. I’m not opposed to observation. I think observation is a really good tool for teacher development, when there isn’t a grade getting in the way. It’s about angles and perception, you see. 

I had my formal observation last week, and I was lucky (?) enough to be observed not only by an internal observer, but also by an external consultant. I also noted my own reflections on the lesson immediately afterwards, and the three commentaries that came out of it are interesting to compare. 
The first interesting point is about the positives. I’m not naturally inclined to make a note of what went well. It’s like Murphy’s law: we think the toast always lands butter side down, or the other queue always moves faster in the supermarket, but in reality the probability is pretty balanced: we just don’t notice when things go our way. It’s the same for me with a lesson: I don’t really notice the things that go well, only those that don’t. So it’s useful to have someone say which bits you are doing ok at: it makes you want to ensure that you keep doing them and refining them. It was also nice to note that a particular strength this time round, targeted questioning, had been highlighted as a specific area for development in my last one. Mind you, I have a suspicion that this has more to do with the set-up of the lesson (workshop-individualised vs whole class teaching) than with any specific skill on my part. 
All three commentaries highlighted more or less the same things, however, in terms of areas for improvement. What was interesting was the prioritising of those things. Although I only got written feedback from the external observer, there appeared to be a very slightly different slant on which things were the priority: although there is every possibility it was the phrasing of the feedback, not the things being said. Even so, this does highlight what is both the fatal flaw and huge benefit of being observed: subjectivity. 
Every effort is made to reduce subjectivity in lesson observation. Lists of criteria are drawn up, standardisation takes place, all sorts of effort is made to do this. Criteria are the backbone to most teacher training courses, for example, although these are often highly detailed. CELTA, as a good example, has detailed and fairly prescriptive observation criteria, which sometimes can make it possible for you as a trainer to pass someone’s lesson when every bone in your body is saying it should fail, as well as make it much easier when someone has clearly not met those criteria. 
When those criteria are loosened, for example where they need to apply to a whole bunch of different teaching contexts in an FE college, the influence of subjectivity becomes much more pronounced. We are human, after all. In the case of my lesson, the commentaries reflected the same points, but my own perception, for example, centred on the structure and the planning of the lesson, and how this reduced my ability to monitor and support in-class activity, while some of the issues around tracking and recording feedback were further down the list. This was more or less inverted in the observer feedback, which is useful. Because you dwell on the bits that went less well, you end up in a kind of internal feedback loop which emphasises those points more and more and more and blows them up out of all proportion. The observers’ feedback brought those issues to light a lot more clearly and directly, and I think had I been left to my own devices I would probably never have got round to them. The background and subjectivity also came up in the ideas for tracking and recording feedback. The external observer, predictably for an OFSTED trained inspector, had a very explicit emphasis on using the VLE to do this, whereas the internal observer only suggested the VLE as a possible tool for this, with an openness to alternatives. (I say “predictably” because, based on the Common Inspection Framework, using a VLE is the be all and end all of e-learning, which pretty much says it all about how up to date OFSTED is.)

So yes, subjectivity. It’s both the potential major flaw in observation and the major benefit: it can go wrong, when the criteria are loose and an observer has preconceived ideas about good and bad. That was the problem with my own perception: I don’t like workshoppy lessons. I find them hard work to both plan and manage, while recognising that they have value. My own view was skewed by looking critically at the teaching and the learning, because I live in my head, and the world is filtered through that head. This is hard to step outside of. The observers, while looking at the learning, of course, were also looking at the learners more carefully, trying to find out in forty-five minutes things that I had, or indeed hadn’t, found out in nine months. I’m not sure about the value of the lesson observation as a proxy audit of a whole year’s teaching, but this was also present in aspects of the feedback, both positive and negative, and again, this is something which I wouldn’t always pick up. 
We can’t help but be subjective, but a lesson viewed from more than one angle can be useful. We get insights into things we might otherwise miss, details and ideas which might not have occurred to us. With a set of criteria as a touchstone, we can expose ourselves to the subjectivity of others, challenging our own subjective take on a lesson. And challenge is always a useful thing.  

#loveESOL #loveFE

I have a confession. Well, not much of a confession, but perhaps a reconfession, if there is such a thing. I’ll whisper it, however, because it’s really not the done thing to admit this. 

I didn’t get into teaching for the learners. 
There, I’ve said it. I didn’t get into teaching for the learners. Or for the good of society. Or because it’s a vocation, or a craft, or whatever it is. Nope. I got into it because it looked like fun, involved language but without the internal discipline and commitment needed to become a writer (which is still what I will do, one day, when I finally get round to it. You know. One day. Really soon. Just after I’ve nailed this teaching lark.) I got into teaching EFL, as it was then, because it looked like fun, language-based indoor work with no heavy lifting. 
Many of my colleagues are far more admirable. They got into teaching because they want to help. They want to help people who are disadvantaged, who are excluded, who are disenfranchised. I have colleagues who know what disenfranchised actually means, which is a darn sight better than me. They are good people. Great, noble people, even. Definitely more so than me. Some ESOL teachers may even have had to learn to love language in order to help those communities they want to help, but I kind of came at it the other way round. 
You see, in the last ten years or so I have learned to love ESOL, and indeed further education. It wasn’t always the case: I was a ghastly snob of an academic teenager for whom the phrase “BTEC” was essentially a synonym for “stupid”. I had prickings of a social conscience, but this was a long time in the growing, until, like a vaguely liberal butterfly emerging from a mildly conservative cocoon, I grew weary of yet another group of spoiled Chinese teenagers. My mind moved from being interested only in language and language learning to wondering whether I could do something useful with this. Happily, it turns out I can, although this has been something of a learning curve. Preconceptions were shed about the purpose and nature of FE, and my little inner woolly liberal who had been hiding for so long was finally able to make an appearance. 
Sadly, it turns out our government haven’t come to the same conclusion. I’m not sure why anyone is surprised that FE is having its funding cut by 24% when the people in charge of the cutting are privately educated scions of wealthy families, and who only ever go to FE colleges if they have run out of elderly care homes to visit. But cutting it they are, and with it go the opportunities for learning for huge numbers of people. Loans have been offered, of course, but a huge long term debt is hardly what some of the poorest members of society want to saddle themselves with. A loan is not some major generous gesture, and neither is funding personal projects for the sake of looking good. Yes, Vince Cable, I mean you. http://www.theguardian.com/society/2015/mar/10/vince-cable-adult-education-mental-illness-speech 
ESOL has become good at coping with funding cuts, with our funding falling every single year since 2006, through blatant cuts or through more sneaky measures, like restrictions on who can get fee remission, or, my personal favourite, inventing wild claims that apparently you can progress a whole language level in about 60 hours, although you’d think I’d be used to non-teachers inventing this sort of stuff by now. 
But this is pretty harsh even by our terms. Cuts to adult funding mean lost opportunities. Simple economic opportunities, like being able to understand that health and safety sign rather than becoming paralysed in an industrial accident, with the resulting reliance on disability living allowance and added pressure on the health services. More subtle economic returns, like being able to retrain and do a different job to meet whatever the local need is. More complex, long-term opportunities, like raising the educational levels of parents so that their children can benefit: thinking not about making plans for the next five years of parliament but about 15, 20 years in the future. What about the young woman who didn’t become the scientist who cured cancer because she didn’t do so well at school, mainly because her parents hadn’t been able to read with her properly, and lacked the education to help her as she progressed? 
Yes, young people are important. Yes, apprenticeships have a value and a role. But FE is bigger and wider than that. The challenge that FE teachers have had with professionalism is reflected here: we are not a relatively homogeneous group of teachers, because of the huge variety of learners and subjects that we all teach. 
I might have started out for the English, and I’m still in it for the English, but I’m also in it for the students. And always they are the ones who will suffer the most when privately educated, self-centred short-termists decide that the only model of education they want is the one that created them. 

“Just do what you normally do”

This post started in November, based on a conversation we were having in the staff room. We were talking about graded observation, and the idea that the lesson you are observed teaching should be a “normal” lesson, with, perhaps, a bit of extra paperwork to evidence your thinking (essentially, the same as showing your workings out in Maths at school), and not a special “observation” lesson. I’m being very specific, and perhaps I should clarify: we’re not talking purely developmental obs, but a graded observation for quality assurance purposes. 


So, given the context, is it good advice? Honestly? Yes and no. 

Yes? 

Yes, because you would be quite stupid to try anything brand new in a graded observation lesson. If you aren’t really using a particular strategy or technique but believe, for whatever reason, that the observer “expects” to see certain practices, then rolling it out for that lesson alone runs a very high chance of failing. I don’t think it necessarily WILL fail, of course: it depends on the nature of the strategy you are trying, whether you have faith in that strategy, faith in the learners to roll with it, and faith in your ability to pull it off. Sadly, a graded observation is high stakes, and getting less than a two for the lesson can have a profound impact on your career. This makes it dangerous to risk doing something which may not work during an observation, and so any kind of untried innovation in this context is a danger. This is a shame, of course, because an observation of you trying something new by an experienced colleague or manager has the potential to be incredibly helpful, developmental and all round beneficial, and would lead to an improvement in quality. Of course, decent colleges will support and promote peer observation for teacher learning, but this is most often as an aside to the main graded observation. 

Then again, no. 

No, because in fact you do do things differently in an observation. You might be a little brisker with timings, more explicitly careful with differentiation, particularly in planning. You might plan a whole lot more tightly and carefully. I don’t see any problem with this either: it’s good, from time to time, to be more disciplined and controlled. One of the benefits of this kind of observation is that it’s a good way of making you pull your metaphorical socks up, in terms of both doing a bit of proper focussed lesson planning, and also making sure you haven’t slacked off on your long term planning and paperwork. 

No, because if the observation window hadn’t been there, you might have chosen to do something a bit different. You might have had, in that particular observation window, an opportunity to try something completely different, to innovate and explore, and your “normal” practice would have been to experiment. However, it makes sense to plan activities and resources which you are confident will work with you, with your learners and within the theme of the lesson, and not those things which might work, but which you are still developing. 

You might have a behaviourally challenging class of learners and choose an individualised workshoppy method with lots of individual and small group work over whole class teaching, because you know they are easier to manage and respond better to that, and because although you have been developing ways of managing whole class teaching with them, you don’t really feel that confident doing it just yet. You cover the same thing as you planned to cover, but you play to your strengths and the strengths of your learners and you avoid risk. This is the major challenge with graded observations: the link to capability doesn’t necessarily promote innovation but runs the significant risk of stifling it. 

Grading a lesson is a summative process, after all, not a formative one, with clear consequences to that process. In many respects, gaming this system is inevitable, and thinking that teachers won’t and don’t game it suggests a naive faith in a flawed system. It’s like not expecting someone to prepare for an exam. 

Even the best (legitimate) exam prep depends on your ability on the day. Similarly, success or failure in any observation, graded or not, essentially boils down to what happens on the day. You can have the best lesson plan in the world, a marvellous set of trackers and schemes, ILPs that the students are waving around under your nose and asking questions about their targets. You can have checked the observer’s teaching timetable, found out about their personal foibles and fancies, spent time making sure that you do all the “right” things and making sure that students are all doing the “right” things. (Whatever you happen to think the “right” things are.)

And it could still bomb. A couple of misjudged outcomes, a smattering of mistimed activities, a stage you’ve forgotten, or a piece of differentiated activity that you forgot to put into play, a bus breakdown making half your students late, all those things could still happen. Or you realise as the lesson begins that you misjudged how the tasks were going to run, but for one reason or another it’s too late to rethink things. Sometimes it just doesn’t work out. 

But you still try and game it, and while there is a grade with consequences attached, who wouldn’t? There are rules to the game, and ways to play it. Graded observation becomes not about doing what you normally do, but doing what you need to do to play the game, if not to win, then at least to not lose. 


The Research That Never Was

I posted last time about an action research project I’d like to do. I’ve been lucky the last few years and engaged with a couple of research projects. The first looked at teachers and CPD, and most recently I’ve been thinking about the readiness of ESOL learners for blended learning and the impact that funding-enforced blended learning may have on them. The latter is probably the closest to controversial, questioning, as it does, the increasingly sacrosanct FELTAG recommendations and casting a critical eye on a direct college policy. But as much as I want to do that research, there’s a piece that I’ve been wanting to do for years. 


It’s quite simple really. I would ask for two groups of learners of a broadly similar level, and similar range of cultural and social backgrounds within the class. I’d devise a language test and give it to all the students. Then I would apply a particular intervention to one group, and not apply it to the other. At the end of the research period, I’d then test the students again to measure which group had improved the most based on their original test score. It’s not perfect, and ideally I would have at least one other teacher taking part at the same time.  Also, on that scale it wouldn’t be something you could necessarily extrapolate much by way of major statement. However, it would represent the first ever evidence one way or another for the intervention I have in mind. 
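The design above boils down to some very simple arithmetic: test both groups before and after, and compare each group’s average improvement. Here’s a minimal sketch of that comparison, in Python, with entirely invented scores and made-up group names – none of these numbers come from any real class.

```python
# Sketch of the proposed pre-test/post-test comparison. All scores
# are hypothetical marks out of 20; the helper name is mine.
def mean_gain(pre_scores, post_scores):
    """Average improvement per student between pre- and post-test."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Invented scores for illustration only:
intervention_pre, intervention_post = [8, 10, 7, 12], [12, 13, 10, 15]
control_pre, control_post = [9, 11, 8, 10], [11, 12, 10, 12]

print(mean_gain(intervention_pre, intervention_post))  # group with the intervention: 3.25
print(mean_gain(control_pre, control_post))            # group without: 1.75
```

With two groups this small, of course, the difference would tell you almost nothing statistically – which is exactly why the caveat about scale matters.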

The intervention, and those who know me will now groan, is the setting of SMART targets for learning. I know, I’m like a stubborn dog with a particularly juicy slipper on this, but do bear with me. You see, that research would never happen. SMART targets are an integral part of the individual learning plan, and as such, are pretty much unassailable. After all, they tick every ideological and performance management box: achievement of the target, runs the argument, supplies evidence of individual student learning. Criticism of the target is seen as criticism of the concept that the learning of the individual is crucial. 

I agree that we need to think about how we are meeting individual student needs. Students need to know what they need to improve. I fully endorse the concept of finding out what students need to learn (although I hate the therapeutic-deficit label of “diagnostic assessment”) and then basing a course plan on those ideas. I even think that writing those things down somewhere and then thinking about them later is dead handy for students, if not the be all and end all. This can all be achieved without the atomistic breaking down of a learning goal like “use past simple irregular verbs” into trite, meaningless targets based on the measurable occurrence of the target language, like “write five sentences using past simple irregular verbs”.  

It’s a small distinction, perhaps, but it creates an essentially false impression not only for teachers but most damagingly for students: “I have written my five sentences, now I know past simple irregular verbs.” Or worse “Now I know past simple in English”. Is it really that we are only expecting them to be able to write just five sentences, or is it implied that we are expecting them to know past simple irregular verbs? If we are expecting the former, you have to consider reliability of this as an assessment task: once the target has been achieved, could we say with any certainty that it could be achieved with the same result at a later date? If it’s the latter, and the target is simply an evidenceable proxy for a larger learning goal, then there are issues around validity: how does achieving the target genuinely provide evidence of this? Neither question can be answered cleanly, or in my view, convincingly in favour of the target. It’s problematic at best, deeply flawed at worst. 

My point is this. In my putative study, the only difference between group A and group B would be the targets. My non-targets group would still be given detailed feedback on their work, would still be told which areas were problematic and this would form part of the students’ reflections on learning. (In fact, I’d probably become a darn sight better at formally recording feedback and getting students to reflect on this as a result of doing this kind of research). The only thing that would not occur would be framing those development points as SMART targets. That is all. 

This research will never happen. No ESOL department in the country would currently countenance such a risk: “Ofsted could come.”  “It’s best practice.” Targets are too completely enshrined in the culture of ESOL, a fetish of the cult of the individual, for them to be questioned or challenged. ESOL is embattled, beleaguered by political unpopularity (‘cos everyone loves immigrants, right?) and funding cuts which perhaps creates a challenge in terms of questioning accepted practices. If we challenge SMART targets, we challenge OFSTED, because ILPs are writ large over the Common Inspection Framework. ESOL inspectors will most likely have been teachers during the high days of Skills for Life, when funding was high and the class sizes small, where the rhetoric of individualisation was a practical reality. Show me an inspector who questions target setting, go on. If an inspection goes badly, we suffer. If we suffer in terms of our performance, then questions are raised about our value and our worth. Once those questions start being asked, then money and support begins to dry up, diverted to other areas or other providers who don’t do things like ask awkward questions. 

What is harder is that I get this. I understand the sense, almost, of fear of wishing to challenge the status quo. I may be a bit of an idealist, but I’m also a lot of a realist, who recognises that sometimes there are hoops through which we must jump. There is a direct link to funding for non-accredited provision through the RARPA process, where evidence of learning is supplied in the form of targets achieved. This link means that the targets are now absolutely stuck (although the case could be made for an internal pre-test and a post-test of the same set of language points drawn from the curriculum, presenting any difference as evidence of improved performance. It would be about as meaningful as the target.) 

I do them, of course, don’t panic. My learners have ILPs for their ESOL classes, and there are targets on them. I don’t do it out of integrity or professionalism, however, but out of cynical pragmatism. I simply don’t believe that targets work (and this is where I talk about that), and the problem remains with SMART targets that “I believe…” is the only thing anyone can ever say, either for or against. I’m sorry, but this is not enough when you are saying that these things must be done. If it’s a requirement, then I don’t think it’s unreasonable for me to expect some references to back up the assertion. Seriously, there is nothing out there apart from a few good practice guidelines which simply assert that targets help learners. Nor is it enough to say “it works well for me”, because we are talking about something which is not optional. It’s like me saying that everyone absolutely MUST use jazz chants or suggestopedia because I use them and they work really well for me. 

I am an idealist, of course. I think that one day the conditions will be right that I will be able to do this kind of study and that it might even mean something. That day, of course, isn’t any time soon: ESOL has indeed got some bigger funding fish to fry than worrying about target setting. But I can dream, right?

Direct Instruction – A Little Action Research Proposal

So I went to a workshop yesterday on dyslexia and second language learning and it was, overall, really interesting. However, the bit that really stuck in my mind was the observation/recommendation that, for learners with Specific Learning Differences (SpLD), the kind of learning where learners work out the rules for themselves is potentially less effective, and that for clarity and accuracy one should be using direct explicit instruction. This was an interesting point, and certainly an idea that had never occurred to me, although it does make some sense. It did, however, stir the embers of a little “what if” internal dialogue I have had about the comparative values of discovery learning and of direct instruction.
First, to set out my stall slightly here, unless otherwise stated I am talking about post 16 ESOL learners. I’m making no claims to anyone else. This is important because this forms, for me, the root of the issue.
 My interpretation – and my brief Google researches would suggest that I’m about right – is that direct instruction means telling people stuff. Hattie ranks it as having an effect size of 0.59, which means it is a fairly good thing to do. 0.4 is the average, suggesting that anything below this is of negligible effect, and that we should be focussing on the higher end stuff. Hattie puts inductive teaching at .33 and problem based learning right down at the bottom – an effect size of .13. This is damning, really, suggesting that telling people stuff, asking them good questions about it, and giving them feedback on their performance is what works. Getting students to sit down and work it out for themselves isn’t.
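For what it’s worth, an effect size of the kind Hattie reports is usually a standardised mean difference: the gap between two group means measured in pooled standard deviations (Cohen’s d is the classic version). A rough sketch, with invented scores purely for illustration:

```python
# Cohen's d: the difference between two group means, expressed in
# units of pooled standard deviation. Scores here are made up.
import statistics

def cohens_d(group_a, group_b):
    """Standardised mean difference between two groups of scores."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Pooled standard deviation across both groups
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical post-test scores out of 20:
taught = [14, 16, 12, 15, 17, 13, 16, 14]
control = [12, 13, 11, 14, 12, 13, 11, 12]
print(round(cohens_d(taught, control), 2))  # prints 1.7
```

So “direct instruction: 0.59” is a claim that, averaged across studies, taught groups ended up a bit over half a standard deviation ahead of comparison groups – a decent but not spectacular gap.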
Ok, so I’m basing this on certain assumptions, but essentially as things stand, the evidence would appear to be stacked against what, for me, is one of the cornerstones of how I have always viewed English language teaching and learning. To my mind learners need to work things out for themselves because telling students stuff assumes a shared language. This, ultimately, is what sets ESOL/EFL teaching apart from pretty much everything else in education. Telling people stuff and checking it through the various types of questioning is not always even possible, and even where explanation is possible there are still questions to be asked about how to make the shift from explicit, conscious meta-knowledge (“the past simple regular verb structure is formed by adding -ed to the base form of the verb”) to unconscious use (“I walked to college today”). The obvious one is degrees of controlled and freer practice (said the good little CELTA trainer). So sometimes I do wonder whether, with the right level of students and effectively designed practice tasks, direct instruction could work for some ESOL students.
But, and of course there’s a but, there remains a strong case for inductive learning activities in an ESOL class. When talking of Hattie, for example, one has to remember that a lot of his work was based on school age children in native language environments. So it is perfectly possible that direct, explicit instruction will fail because the learners simply don’t have the language with which to deal with the explanation, and you don’t speak the language(s) of the learners. You can explain as much as you want but, to be honest, you might as well whistle. Then there’s the case made by Richard Schmidt for what he called “noticing”. Unconscious exposure to language is one thing, but decent learning only happens when you start to become aware of the systems and rules governing grammar, including those slightly epiphanic moments of “oh, that’s what the rule is!”
Inductive learning, or discovery learning, creates the conditions for this to happen. I don’t mean the slightly vague discovery learning espoused in task based learning, but careful, guided and structured discovery. So students are exposed to examples of the grammar, for example, then through carefully designed questions work out what the rule actually is. To take past simple regular verbs as an example, students read a text and underline all the verbs. Students then answer questions: are these things happening now? Did they happen before now? etc. The skill here is in the task design and in the management of that task: making sure that all students are answering the questions, making sure that they are clear questions, making sure that when the rule is finally elicited the students have good opportunities for effective practice of the language.
It’s quite a challenge too, because one can talk about form focussed instruction, which suggests that learners do need to be taught some stuff, whether that is integrated into contextualised, communicative lessons, or delivered as an explicit, decontextualised form focussed lesson: just teaching language for the sake of teaching language. Either way, to my mind the focus still needs to be on getting learners to produce the language. Certainly in my own practice I have become less of a communicative-context obsessive, liking grammar lessons where contexts and situations arise as the lesson develops.
So I’m going to do an experiment. I’m going to use my level 1 class as my guinea pigs for this, and I’m going to see which of these two things works better. First of all, I’m going to devise a simple test of two specific grammar points. The grammar points will be things they have covered before but at something of a remove: passive voice and reported speech. This is because I want them to come at the language with something resembling the same prior exposure, but also because these involve manipulating verbs outside of a time-tense context. I’ll administer the test – probably a simple transformation or gap fill – and record the results. Then in week 1 I’ll teach one structure using direct instruction followed by controlled and then freer practice, and in the following week I’ll use an inductive approach, followed by practice activities as close as possible in format to the first week. At the end of each lesson I’ll also get feedback from the students on how much they felt they had learned.
In the following lesson, I’ll give the students the same test as they did before the experiment.
I know it’s not conclusive in terms of making a grand statement in favour of either, and a little flawed (someone pointed out to me the possibility that one of the forms may be harder than the other to learn, which is a good point) but as a piece of action research I think it’ll be interesting to explore.
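The comparison the experiment sets up is essentially a gain-score analysis: same test before and after, one mean gain per condition. A minimal sketch of that sum, with invented scores (all the numbers and the function name are hypothetical, just to show the shape of the analysis):

```python
# Mean gain per condition: average of (post - pre) across students.
# All scores below are invented for illustration; they are not real results.
def mean_gain(pre, post):
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

direct_pre     = [3, 4, 2, 5]   # hypothetical pre-test scores (out of 10)
direct_post    = [6, 7, 5, 6]
inductive_pre  = [4, 3, 3, 5]
inductive_post = [8, 6, 7, 7]

print(mean_gain(direct_pre, direct_post))        # 2.5
print(mean_gain(inductive_pre, inductive_post))  # 3.25
```

With a class-sized sample this can only ever be suggestive rather than statistically conclusive, which is in keeping with the action research framing above.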
Watch this space.
***
(By the way I found a whole bunch of interesting links on these themes, some of which I read properly, some of which I just skimmed. Definitely I want to read a bit more on this, but I present the links here for your enjoyment!)

Gnarly: Adrenaline Teaching and Learning

It’s funny really. Give me an empty week and a bunch of things to do in that week and I’ve got to be honest my productivity won’t be brilliant. That’s not strictly true: the stuff will get done, of course it will, but I will inevitably be left with a slight sense of “surely I could have done more?” At the time of writing, however, I am definitely not looking at an empty week. Next week is an Internal Quality Review – essentially a mock inspection – for our department, during which time I am also looking down the barrel of my own graded observation. I’m also not the only one. There are several teachers in the department also under threat of graded obs, as well as a general “crumbs, evaluative observers!” feeling on every level in the team. 

As I’ve said before, I wear multiple hats at work, a little teacher training, a whole lot of teaching ESOL students and a significant chunk of mentoring/teacher development. This does still create tensions, but if I’m honest I generally find these quite creative, productive tensions. However, this places some responsibility on me to support fellow teachers at times like this, as well as to get my stuff together for my own observations. Add a slight backlog of general day to day planning and so on owing to a colleague being off sick, and things are feeling a bit, well, busy, shall we say.
Thing is, busy is good. Busy is good for me. Not mentally over-stretched like I felt last year (if that’s a sign of weakness in your book, you know where you can stick that book), but busy with things I largely understand. I have about two and a half working days til the first lesson of the observation window, with only one or two significant gaps in which to get my shit together. This is partly due to bits of time before now being taken up with some cover, partly because I reach whole new heights (depths?) of non-productivity when I try to do anything more complex than marking when I’m at home, and, as I said, partly because when I do have the time, I never feel like I quite use it properly. But this pressure really focuses the brain, and the sensation is really rather satisfying, almost pleasurable, in fact. 
Take my evening class today, for example. I’d planned a lesson on the scheme of work as I sometimes do, thinking “I know what I want to do that day, but I’m not sure what resources to use yet” (remember kids, don’t let the resources tail wag the lesson dog) and then reaching the hour before and being acutely aware that I still hadn’t found the resources. In that same hour, I had to invigilate an assessment, so could only really partly get my head round what I wanted to do. In fact, the detailed lesson and plan all fell into place in a very clear, focussed 25 minutes during which, as an added top bonus, a colleague asked if she could come peer observe me. Obviously I said yes to the colleague, but I did need to get myself out of the room and really really get my head on the case: at times like this I get a bit flouncy, and at the same time a bit terse; basically a bit of a diva. The lesson, you may be pleased to hear, was very satisfying all round. Many positive student comments and only a teensy little bit of winging it/padding at the end (mini whiteboards in pairs and 1 minute to find answers to questions like “who can find a synonym for…?” etc.) because the main task overran and the follow up task needed more than 15 minutes for some decent learning to happen. 
There is a big chunk of arrogance and self assuredness at play here. I know I can knock together all the necessary paperwork, for example, pull my socks up on my schemes, polish my learning outcomes and generally get on it. I know I can do all this and spend time supporting people. I know that when the supporting people bit is done, I will batten down the hatches, lock the doors and get the bulk of the prep done. I also know, however, that if I write a full lesson plan at this point for a lesson next Wednesday, the lesson I teach with it will go to pieces.

A more important factor than arrogance, however, is the adrenaline rush, insofar as teaching can be described as a high adrenaline activity. I think I get a little bit of a buzz out of the slight sense of danger that it could all go horribly wrong. Rather than inducing fear and panic, somehow it helps me stay centred and focussed. It’s not the wisest philosophy ever, but it works, and I rather like it.