Bedford ESOL Advice Service

Quality Assurance Assessments

In my previous post, I discussed how data from the Initial Assessment Form can be used. The data, if properly used, will have an effect on the individual learner, the local response to unmet need, and the wider ESOL landscape, so ensuring that it is reliable and valid is essential. The data needs to be quality assured.

There are three areas to consider with regard to QA: one, the quality assurance of the data collection and collation processes; two, the external quality assurance of Initial Advice and Guidance (IAG); and three, the quality assurance of initial assessment. In this post, I'll be covering the last, although, of course, all three overlap.

From the outset, I knew that quality assurance in the normal way would be difficult on account of 1) issues related to independence and 2) my running at least half of all advice sessions most years! Nonetheless, to ensure there was a basic level of standardisation, I did three things:

1. Ensured that all advisors were fully qualified in ESOL to at least Level 5 and currently teaching for an organisation

This was key to securing buy-in from ESOL providers, particularly larger ones. Ensuring tutors were currently teaching for an organisation was primarily done so that they would be receiving professional development training, and be engaged in quality assurance activities such as observations of teaching and learning and moderation activities, through their substantive teaching posts (this reduces costs and duplication). The disadvantage was that these tutors needed "detoxing" on grading (see below on discrepancies). It was also necessary because, where advice hours were "donated", only those hours (usually 2) were available, so training outside of that time was not possible. This is not a problem where the service has a dedicated advisor, or where training time is negotiated with the partner ESOL provider (the latter would have been too big an ask for us initially).

I should mention that on one occasion, a partner provider "snuck in" a new tutor with only a CELTA. Incidentally, that year they also "forgot" to sign the MOU, which clearly stipulates the minimum qualifications for tutors. Training the tutor took considerable time, and moderation/quality assurance activities had to be increased fourfold to mitigate any potential errors due to inexperience. Lesson: make sure you and your partners have a clear understanding; the service's reputation and partnerships depend upon it. A Memorandum of Understanding is a useful tool for this.

2. All advisors were given thorough inductions, including joint assessments

This involved joint assessments with, or mentoring from, a more experienced advisor for at least half a term, with proper and consistent reference to the National ESOL Core Curriculum Standards. It was achieved in more or less subtle ways, and to a greater or lesser degree, depending on whether the advisor was a curriculum manager or not (and my manager or not)!

3. Collected feedback from class tutors and/or providers to ascertain whether they agreed with the levels indicated.

This was really important. As all advisors were highly qualified, assessment levels rarely varied by more than half a level (see A and B below). This could certainly have been done in a more systematic way, and if you're considering it, I would advise evidencing feedback systematically.

On this point, I only know of two occasions when learners were transferred internally to another level (in both cases there was half a level's difference). Interestingly, when I met one of these learners (moved by the receiving teacher from E2B to E3A) a year later, she was in E2B again, because the E3A class tutor thought she needed further consolidation at E2 and disagreed that she was suitable for E3A! Overall, very bad for the learner.

This highlights the levelling discrepancies which often exist within a single provider's teaching team, despite regular internal standardisation activities, let alone across 8-13 different ESOL providers and their teams! In essence, for each new provider, it was necessary to produce a comparability table, which helped ensure that learners forwarded to providers were accessing provision suitable for their development needs. There will be bumps in the road if you assume everyone interprets the ESOL Core Curriculum level descriptors in exactly the same way (or uses them at all, sometimes!).
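To give a feel for what a comparability table can look like in practice, here is a minimal sketch in Python. The level codes and class names are purely illustrative, not any provider's actual offer; the point is simply that each assessed level maps explicitly to a named class at the receiving provider, so there is no room for differing interpretations of the level descriptors.

```python
# A minimal sketch of a provider comparability table.
# All mappings are illustrative examples, not actual Bedford figures.

COMPARABILITY = {
    "E1A": "Beginners",
    "E1B": "Entry 1 (exam group)",
    "E2A": "Entry 2",
    "E2B": "Entry 2 (exam group)",
    "E3A": "Entry 3",
    "E3B": "Entry 3 (exam group)",
}

def suggest_class(assessed_level: str) -> str:
    """Map the service's assessed level to the receiving provider's class."""
    return COMPARABILITY.get(assessed_level, "No direct match - refer back to the advisor")

print(suggest_class("E2B"))  # -> Entry 2 (exam group)
```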

For most years, there was a general absence of other QA activities, because these would not have been conducive to partnership building at the time, and, of course, there was no one to do them. Just before our funding came to an end, however, we were able to secure additional funding for a dedicated ESOL advisor. For the first time, I had time. In the same year, we increased ESOL provider advice contributions from one provider (the Council's ESOL Dept.) to a total of four. The need to standardise activities came to the fore, and I was able to make some significant headway in improving the QA process.

The new and improved system included the following:

1. E-Bulletin

A monthly email of bullet-point guidance on eligibility and on new issues arising from observations, relevant external events, joint assessments and 1:1 meetings.

2. ESOL Levels video

A video of learners speaking to an advisor at levels from E1A to L1+, plus reading and writing samples, for advisors to watch as part of their induction. This also proved to be quite useful to policymakers, who wanted an idea of the levels spectrum!

3. Joint Assessments

New advisors carried out joint assessments with our dedicated, well-trained EAS advisor or with me for at least half a term, to ensure consistency in grading. In some cases, this continued for the whole academic year.

4. Observations

These were carried out approximately once per term, and included moderation of Speaking, Listening, Reading and Writing grading.

5. Moderation

Moderation of Reading and Writing tests and of tracking also took place on a termly or half-termly basis. Between 5 and 15% of all assessments were moderated per tutor.
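For anyone wanting to make the sampling step transparent and repeatable, a rough sketch of how a per-tutor moderation sample could be drawn follows. The advisor names, record IDs and the 10% rate are hypothetical; only the 5-15% range and the per-tutor basis come from our practice.

```python
import random

# A rough sketch of drawing a moderation sample per tutor.
# Record IDs and advisor names are invented; the 10% rate sits within the
# 5-15% range described above, with a floor of one assessment per tutor.

def moderation_sample(assessments_by_tutor, rate=0.10, seed=None):
    """Return a random selection of assessment IDs to moderate for each tutor."""
    rng = random.Random(seed)
    sample = {}
    for tutor, assessment_ids in assessments_by_tutor.items():
        k = max(1, round(len(assessment_ids) * rate))  # at least one per tutor
        sample[tutor] = rng.sample(assessment_ids, k)
    return sample

records = {
    "Advisor A": [f"IA-{n:03d}" for n in range(1, 41)],   # 40 assessments
    "Advisor B": [f"IA-{n:03d}" for n in range(41, 56)],  # 15 assessments
}
print(moderation_sample(records, rate=0.10, seed=1))
```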

It wasn't perfect, but it certainly was a leap forward in a short time. I would have liked to refine and improve it by, for example, adding a peer observation scheme, a WhatsApp group and virtual meetings. However, the system had disappeared by the time I returned from secondment. Such is life!

I have added the following to the File Share (Quality Assurance) which you are welcome to download and use:

Initial Assessment Observation Template

Moderation Record Template

Best wishes,

Khadijah

P.S. On a related but separate note, feedback from providers was also incorporated into the way we collected data. For example, very early on, feedback from larger providers (usually AEB-funded) indicated they were in need of 'exam ready' learners. At the time, levels were recorded at E1, E2, E3, L1 and L2 as standard. However, as a result of the feedback, we introduced an 'A' and 'B' distinction, to indicate whether a learner was exam ready (B) or not (A) at the point of assessment. This allowed only suitable learners to be selected and offered the relevant options. From this point, partners were able to specify whether they could accept learners who were exam ready, not exam ready, or a mix of both. This was particularly useful when providers were recruiting in Term 3 for accredited courses.
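A minimal sketch of how the A/B coding and provider matching could be expressed follows. The learner records, the matches helper and the provider preference are invented for illustration; only the level codes and the exam-ready/not-exam-ready distinction reflect what we actually recorded.

```python
# A minimal sketch of the A/B 'exam ready' coding described above.
# Learner records and the provider's preference are invented for illustration.

learners = [
    {"name": "Learner 1", "level": "E2A"},  # Entry 2, not yet exam ready
    {"name": "Learner 2", "level": "E2B"},  # Entry 2, exam ready
    {"name": "Learner 3", "level": "L1B"},  # Level 1, exam ready
]

def matches(level_code: str, wants: str) -> bool:
    """wants is 'A' (not exam ready), 'B' (exam ready) or 'both'."""
    return wants == "both" or level_code.endswith(wants)

# A provider recruiting for an accredited Entry 2 course, exam-ready only:
shortlist = [l["name"] for l in learners
             if l["level"].startswith("E2") and matches(l["level"], "B")]
print(shortlist)  # -> ['Learner 2']
```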

P.P.S. On a related but very separate note, some providers used the term 'pre-entry' to mean emerging English speakers who were not literate in any language, while others used it to refer to learners who were literate but at Entry 1A (not exam ready).
