
    Level descriptions and assessment in geography: a GA discussion paper

The purpose of level descriptions

    Level descriptions were introduced in SCAA's (now QCA's) 1995 review of the curriculum and its assessment. They replaced the vast, unworkable structure of statements of attainment that had helped bring the original National Curriculum into disrepute. Level descriptions were intended to be used for long-term assessment, to help teachers reach a rounded judgement of pupils' attainments at the end of a Key Stage. They were designed to be used as 'best fit' descriptions to come to an overall judgement, drawing together evidence of what pupils know, understand and can do in relation to the taught curriculum, not to require the assemblage of detailed evidence proving that every aspect has been attained. Each level represents about two years' 'progress' - they are that rough-hewn. The level descriptions were not designed to be used as if they were assessment objectives, nor to be broken down into different elements; they were never intended to be used as instruments to assess individual exams, tests, homework or class-based exercises.

    Level descriptions were also designed to describe progression in the subject, by providing a structure to help teachers plan for progression and to take a view of where pupils are heading in their acquisition of knowledge, understanding and skills. They are also rough-hewn in the sense that they were QCA's best shot at describing attainment in the subject, rather than being based on any research; one result is that some distinctions between levels seem to depend largely on semantics, and in geography some aspects of attainment are absent from one or more levels. However, the level descriptions have been successful in providing a language for teachers to talk about progression and to plan for it. When used well, end-of-key stage assessment and moderation have provided a focus for teachers to review pupils' attainments, to improve pupils' skills at assessment for learning, and to provide all KS3 students with feedback on their strengths and areas for development, often supported by QCA's exemplification guidance on the NC Action site.

The position in 2006

    In the intervening decade the assessment situation as originally envisaged by QCA has changed significantly, with the result that level descriptions are now used in many schools for both medium- and short-term assessment (sometimes, paradoxically, weakening their value for assessment at the end of the Key Stage).


    One development is that schools have been under increasing and relentless pressure both to raise standards for pupils and to be more accountable for their progress. These are laudable goals. To support them, schools can now deploy a formidable array of data to identify their strengths and areas for improvement at school, subject, group and pupil level; they can interrogate the data to identify the progress of individuals and of groups such as boys, the more able or minority ethnic pupils, and to set targets for them. This data is founded significantly on test data from the core subjects and supported by sophisticated data handling and reporting systems, both internal and from the DfES, such as the old PANDA and the current RAISE online system. It is driven by school managers' desire to monitor and report progress closely, not least to OfSTED team inspectors, local authorities and parents. One effect is to put pressure on non-core subjects to produce data of similar detail and frequency to that expected from the core, even though there is no basis of test data for these subjects and they have less curriculum time. Another effect of the 'fetishisation' of data is to exalt the quantitative over qualitative professional judgements and, arguably, to stretch what can be measured beyond the limits of reliability and validity.

Level descriptions and everyday assessment

    As schools have become more systematic in using data and setting targets, they have also begun to extend the use of level descriptions well beyond their original purpose. One response to their broad-brush nature has been to look for more detail, by subdividing levels to make them more fine grained. In the process, some members of the profession have effectively re-invented the statements of attainment they saw abolished a decade ago. In geography at least, the result is akin to a massive professional confidence trick, lending a spurious exactitude to what can at best be an informed hunch about the complexities of progression.

    The similar practice of turning percentages from tests and exams into level judgements - where, for example, 58% is said by some mysterious process to be Level 5 or whatever - is of equally dubious validity: converting marks into grades is a process that Awarding Bodies devote huge amounts of time and resources to getting right. These practices are promoted by some data handling systems, by commercial assessment packages and indeed by some inspection teams. It is madness - a clear case of the tail wagging the dog. It puts intolerable pressure on subject practitioners and is almost certainly a total waste of time: a distraction from the real task, which is to get to know pupils, engage them in their work, and promote learning and improvement.

    A second response has been to increase the frequency of use of level descriptions to yearly, termly or half-termly judgements, or even to apply levels to individual pieces of work. The latter is also pointless, partly because of the breadth of the level descriptions and the impossibility of using them to signal, monitor or reward significant progress from week to week. These become serial summative assessments, in an education system where pupils' school lives are already dominated by this type of assessment. Some schools mistake this activity for formative assessment, and devote immense energy to manipulating level descriptions rather than to teaching and learning improvements, such as formative feedback, that will result in genuine learning.

    Together, these responses fly in the face of the research on assessment for learning, which shows that grading pupils' work is almost always ineffective in promoting progress. This is because pupils focus on their grades, which they tend to compare with their friends', rather than on the comment that accompanies them and the improvement they need to make. Moreover, frequent testing tends to motivate only those who anticipate success, and even then students are often motivated only to perform better, not to learn better. For less successful students, repeated tests lower self-esteem and reduce effort, so increasing the gap between high- and low-achieving students.[1] If the main purpose of assessment is to help all pupils reflect on their learning and improve on it, the conclusion is that neither frequent summative marking using level descriptions nor subdividing the levels will be effective in promoting learning.

Assessment for learning

    A second significant development in the last decade has been the growth in understanding among teachers of formative assessment (or assessment for learning) and of strategies to promote it in everyday lessons. In contrast to assessment of learning, assessment for learning is concerned with helping pupils see what progress in their work means, and especially with identifying the next small steps in their learning:

        Assessment for learning is any assessment for which the first priority in its design and practice is to serve the purpose of promoting students' learning. It thus differs from assessment designed primarily to serve the purposes of accountability, or of ranking, or of certifying competence (Weeden and Lambert, forthcoming).[2]

    Assessment for learning is thus intimately founded in the curriculum and in teaching and learning, rather than focused on collecting numbers to feed into schools' performance and target-setting systems: repeated measurement on its own will not bring about improvement. Two key principles are that assessment for learning practices give feedback to teachers and their pupils, so helping them modify their teaching and learning activities to promote improvement, and, importantly, that they let pupils in on their learning. For both these reasons they help to motivate pupils and give them the opportunity to take more responsibility for their learning. They are particularly useful for less successful learners, in contrast to repeated summative assessments. Whilst level descriptions are useful in providing guidance on progression, they do not readily provide the fine-grained information that answers the question, for teachers and pupils: what next?

    [1] Assessment Reform Group (2002b) Testing, Learning and Motivation. Cambridge: University of Cambridge School of Education.
    [2] Weeden, P. and Lambert, D. (forthcoming) Geography Inside the Black Box. NFER Nelson.

    Assessment for learning can be promoted through tried and tested practices in everyday geography lessons, such as:

    - clarity about learning intentions and what success looks like
    - opportunities for self- and peer-assessment, such as peer-reviewing work against the criteria for success, and identifying achievement
    - improved feedback, focused on achievement and improvement
    - modelling examples of work which exemplify achievement and success
    - opportunities for reflection, for example in lesson plenaries.[3]

    Perhaps unusually in education, these assessment for learning practices are founded on a very strong research base, especially Black and Wiliam's many contributions,[4] as well as on strongly developing classroom expertise.

Level descriptions and medium-term assessment

    Black and Wiliam consider that four assessment for learning strategies are particularly useful: improved questioning, peer and self-assessment, feedback focused on improvement, and the formative use of summative assessments. They argue that occasional monitoring of pupils' progress in relation to their summative goals is valuable in helping pupils and teachers gauge progress and identify next steps: this is a practice that is considered essential in promoting progress and achievement at GCSE and A level.

This formative use of summative assessments is part of the solution that many secondary geography departments have arrived at in Key Stage 3.[5] They commonly make periodic judgements about pupils' attainments at the end of different units of work, using the level descriptions to devise assessment criteria. Because they relate their judgements to the taught curriculum, they commonly identify attainment and progress only in relation to the relevant parts of the level descriptions, often using the strands, known as 'aspects of performance' (places, patterns and processes etc.), to help design the criteria.

    [3] See DfES (2004) Key Stage 3 Strategy: Assessment for Learning, whole-school training materials. London: HMSO; and Clarke, S. (2001) Unlocking Formative Assessment. London: Hodder and Stoughton.
    [4] See especially Black, P. and Wiliam, D. (1998b) Inside the Black Box: Raising standards through classroom assessment. London: School of Education, King's College; and Black, P., Harrison, C., Lee, C., Marshall, B. and Wiliam, D. (2002) Working Inside the Black Box: Assessment for learning in the classroom. London: School of Education, King's College.
    [5] See for example Howes, N. (2006) 'Teacher assessment in geography' in Secondary Geography Handbook. Sheffield: Geographical Association; and Weeden, P. and Hopkin, J. (2006) 'Assessment for learning in geography' in Secondary Geography Handbook. Sheffield: Geographical Association.


    They can then use these periodic assessments formatively, to identify broad progress, strengths and weaknesses and to identify curriculum targets. Some teachers use the level descriptions to track progress by recording them in their mark books, but set pupils curriculum targets rather than level targets, thus avoiding the problems of distraction and motivation discussed above.

More effective practice includes:

    - focusing assessments on enquiry
    - varying the range and focus of assessment over a key stage, e.g. group presentation, oral, poster, extended writing etc., but not attempting to devise level tests
    - using AfL strategies in these periodic assessments to promote achievement, e.g. by sharing success criteria in advance, using self- and peer-assessment, and modelling examples of quality work
    - using the results of assessment to agree a learning focus for improvements in pupils' geography
    - using the results of assessment to monitor and review curriculum and teaching, perhaps using the strands/aspects of performance to help identify strengths and areas for development
    - being very cautious about overusing these assessments - about once a term seems about right - and being aware of the impact on the motivation of less successful learners
    - ensuring these assessments supplement rather than supplant teachers' judgements gained from their everyday work with pupils
    - developing these assessments on a foundation of the AfL practices in everyday lessons outlined above, not as a replacement for them.

    This is not what level descriptions were originally designed for. But many departments report that this formative use of summative assessment has helped pupils to talk about their progress and improve their work over the longer term, as well as improving teachers' monitoring. By effectively redesigning the level descriptions to fit a rather different purpose, they are matching practice at Key Stage 3 with that at GCSE, in line with HMI's advice.[6]

    In conclusion, without attention to everyday teaching and learning practices that promote genuine progress, these summative-formative assessments can only have a limited impact. But carefully used and backed up by everyday teaching and learning strategies that promote achievement through formative assessment, occasional use of level descriptions to monitor pupils’ progress and identify improvement should be supported by the GA. In contrast, sub-dividing levels and using them over-frequently, especially to mark individual pieces of work, should not.

    [6] OfSTED (2003) Good assessment practice in geography, HMI 1474.


Primary schools

    The use of level descriptions is undeveloped in the majority of primary schools, and the practices (and malpractices) discussed above apply largely to Key Stage 3. This is partly because the pressure on core subjects is, if anything, even more relentless in primary schools, especially compared with non-core subjects. However, some schools use level descriptions very effectively, for example to review standards and provision in geography at the end of each Key Stage. The examples of standards on the NC Action website are useful for reviewing the pitch of pupils' work in this way, and are an effective means of helping to monitor and review progress.

The future

    QCA’s curriculum review is currently focused on Key Stage 3, where the way geography is expressed in the PoS will be significantly different. There will be some changes of emphasis in the level descriptions above Level 3; the principles of their effective deployment to promote learning, as discussed above, are likely to remain the same.

    The GA will continue to represent and support our members, for example by working with QCA on guidance for teachers, which is likely to reiterate the messages here; by dissemination (e.g. through Teaching Geography); and by engaging in development work: AEWG in particular will be taking this discussion paper forward.

John Hopkin, for the Education Committee, December 2006


Weblinks

    - Assessment Reform Group
    - DfES Key Stage 3 Strategy
    - General Teaching Council for England (for case studies)
    - National Curriculum in Action (QCA exemplification)
    - QCA Assessment for Learning
    - The Association for Achievement and Improvement through Assessment (AAIA)



Further reading

    Arber, N. (2003) 'Assessment for Learning', Teaching Geography, 28, 1, pp. 42-46.

    Hopkin, J., Telfer, S. and Butt, G. (eds) (2000) Assessment in Practice. Sheffield: Geographical Association.

    Hopkin, J. (2000) 'Assessment for learning in geography', Teaching Geography, 25, 1, pp.

    Lambert, D. (2002) 'Using assessment to support learning' in Smith, M. (ed) Teaching Geography in Secondary Schools. London: RoutledgeFalmer, pp. 123-133.

    Howes, N. and Hopkin, J. (2000) 'Improving formative assessment in geography', Teaching Geography, 25, 3, pp. 147-149.

    Leat, D. and McGrane, J. (2000) 'Diagnostic and formative assessment of students' thinking', Teaching Geography, 25, 1, pp. 4-7.

    Martin, F. (2004) 'It's a crime', Teaching Geography, 29, 1, pp. 43-46.

    Weeden, P. (2005) 'Feedback in the geography classroom: developing the use of assessment for learning', Teaching Geography, 30, 3, pp. 161-163.

    Wood, P. (2002) 'Closing the gender gap in geography: update 1', Teaching Geography, 27, 1, pp. 41-43.
