So, we had our final class this week, and I would like to report an exciting and thrilling close to the lesson with lots of teaching and learning.
Unfortunately I can’t.
The learners have completed portfolios, signed off their final ILP targets, and have a clear idea of where they are going next year (most to entry 1 I’m pleased to report).
I’ve talked about the impact of final assessment activities on teaching and learning before, and I don’t want to rattle on. Read this post and you’ll see what I mean. But it meant two and a half hours of admin, essentially. One to one “tutorials” and student file raiding to satisfy the requirements of funding agencies. So learning score there = big fat zero.
There are other issues here, of course. I took the class over late in the year after they had had a number of different teachers. No-one is at fault for this: these things happen, and it can be hard for everyone involved. But it meant that I didn't have control over the evidence gathering, and therefore didn't have the opportunity to spread this kind of paper exercise across the year. I do this with teacher training courses, which are also portfolio assessed, and for which I consciously plan in time for portfolio review and tutorial activity. Teacher training portfolios, however, are much more focussed on clearly and explicitly meeting externally set assessment criteria.
If I am brutally honest, here, using RARPA to draw down funding, I suspect I would be highly tempted to be less principled in my target setting than I am with trainee teachers (see here for why) and to set targets I felt confident the learners could achieve. The other place where the comparison is less appropriate is the nature of the evidence required: for an ESOL group in this context I would be inclined towards setting targets which I could “evidence” easily.
This is my concern with the drive for evidence – it promotes and encourages the kind of practice where the focus is not on the learning but on the evidence of learning – two very different things. Hence “learning outcomes” over “learning objectives”. The explicit setting of learning outcomes, for example, makes it very easy for a non-specialist observer (i.e. one who doesn’t teach the same subject as you) to make judgements on the effectiveness of the teaching in developing learning. As a practice for learning, it may work for some; it may not for others.
That said, I’m not criticising the need for evidence. I understand the role of evidence in supporting the development and assessment of achievement, particularly when it comes to funding. The Skills Funding Agency in the UK is run mainly by accountants and business people, after all, not teachers.
My criticism is that this need can overwhelm the teaching and learning. Teaching and learning becomes directed towards the creation of evidence, rather than the evidence coming out of the teaching and learning. The evidence tail wagging the teaching and learning dog.
But, you know, if it means those individuals get to access language support they might otherwise miss out on, then who am I to comment on “best practice”?