Progressive Use of Data Shown Through ORBIT Online Profiling Analysis

A paper to be presented at: Symposium on Assessment and Learner Outcomes 2011, 1-3 September 2011, Wellington, New Zealand.

Authors: Peter Tait, Michael Davison, John Hattie, and Wendy Kofoed

Progressive Use of School Assessment Data Shown Through ORBIT Online Analysis, Profiling and Reporting Tool

Abstract

This paper outlines some of the challenges schools face in dealing with assessment data in order to raise student achievement. In particular, the dual imperatives of Assessment for Learning and of Annual Reporting and School Accountability requirements often act in opposition in schools. Resolving this tension required an integration of multiple approaches which, as this paper explains, was facilitated by a comprehensive development partnership. The general aim was to explore how schools could make better use of assessment data to contribute to the learning process. More specifically, we aimed to combine a Levels approach with the need for transparency by focussing on the individual student’s progress. This has led to the development of the web-based interactive platform, ORBIT, which encourages progressive use of assessment data. Development involved academics, education practitioners, and business analysts and designers from outside the education sector. The authors held an extensive discussion on the qualities required of an analysis and reporting system, then reviewed ORBIT development to date. Results from pilot development schools are encouraging. Teachers found that the ability to display graphically a ‘fuzzy’ achievement range from multiple assessments, matched to a common achievement scale, supported moderation processes. The online tool facilitates a progressive (levels-based) rather than comparative (grades-based) reporting system. Reporting using levels was found to have benefits for students, parents, and teachers. The research potential, particularly regarding student goal setting, invites further exploration.

Page 1 of 8

Introduction

Speaking at the NZEI Annual Conference in 2011, Minister Anne Tolley recognised the need for more support for teachers in making assessment judgements and in reporting to parents and boards. In particular, a nationally consistent approach to teachers’ overall judgement of students’ progress and achievement would give teachers greater assurance that their “judgements are consistent and comparable with other teachers across the country.”1 Yet in making Overall Teacher Judgements, Te Kete Ipurangi asks that teachers consider evidence from multiple sources to determine which National Standard describes the ‘best fit’ for a student’s achievement. Clearly, the increasing focus on assessment, and the tensions within the assessment process, are placing increasing demands on teachers.

The paper is organised into three sections. The first section outlines the research literature relevant to the use of assessment data to support learning and to help raise student achievement. In addition, the paper draws on observations and discussions of the use of assessment data in schools arising from the writing of Data Use Audit reports for schools. It incorporates the professional discussion undertaken over a number of years (with Mary Chamberlain, Royce Sadler, James Irving, Heather McRae, Tim McMahon, and input from many others) that led to trialling a progressive approach to data use in several schools. The challenge was to find a viable approach that complied with the standards of professional best practice while creating a manageable workload for teachers.

The second section overviews the development undertaken to resolve the tension between Assessment for Learning goals and schools’ annual reporting requirements, as these often act in opposition. By focussing on individual-level progress, we were able to mesh the Assessment for Learning and ERO imperatives, both valued components in raising student achievement. Business modelling approaches proved fundamental to meeting the need for fast display of data and an interactive graphic, and to the conceptualisation and building of the DataMart necessary to support the analysis required to generate individual progress measures.

The final section discusses and integrates the results from the pilot schools. These include the positive impacts on teacher practice, how timely analysis can help support moderation processes, and how teachers are encouraged to have richer discussions about the movement of students between cohorts. The paper concludes with some thoughts on potential research and next steps in development.

1 Tolley, Anne, ‘Speech to the NZEI Annual Conference’, Press Release, New Zealand Government, 20 August, 2011.

Background Research – Identification of Problem

Enough is known from the assessment literature and the study of teachers’ professional practice to show how to use assessment to inform learning and to help raise student achievement. For example, the Assessment Reform Group produced guidelines for “assessment for learning in practice.”2 These explain how assessment can improve every pupil’s learning when both pupil and teacher review and reflect on the individual pupil’s assessment data. Further, it can be a means to share learning goals, and enables the pupil to receive constructive feedback on achievements to date.

Despite these guidelines, the use of assessment data often seems to be less about student achievement than about judging students in comparison with others. From experience in teaching, and as a consultant auditing how schools used data, one author noted the inherent tension in schools between management’s need for summative data and the pedagogical demand for formative data. There are clearly difficult problems for teachers in reconciling their formative with their summative roles.3 ORBIT resolves that conflict, and makes the distinction between such assessment types irrelevant.4

While schools use a wide range of assessment tools, parents, and at times teachers, often struggle to interpret the assessment results. As Wynne Harlen observes:

Many teachers have a narrow view of assessment and do not know how to respond to freedom to use evidence from students’ actions, projects and processes.5

One solution to this problem is outlined in this paper. We argue that schools can make much better use of their assessment data through regular low-stakes assessment. An assessment approach that focuses on the student’s individual progress can have a positive impact on student achievement. This in turn can build self-efficacy (a self-constructed belief in one’s own ability to perform a specific goal or task). When the student succeeds in the task, they become motivated to aim higher and attempt unfamiliar tasks.6

2 Assessment Reform Group, Assessment for Learning: Beyond the Black Box, University of Cambridge, 1999, p. 7.
3 Black, P., and D. Wiliam, ‘Inside the Black Box: Raising Standards Through Classroom Assessment’, Phi Delta Kappan, Vol. 80, No. 2, October 1998, p. 18.
4 Hattie, J., (2001) Who Says Formative Assessment Matters?
5 Harlen, Wynne, ‘Trusting Teachers’ Judgement: Research Evidence of the Reliability and Validity of Teachers’ Assessment used for Summative Purposes’, Research Papers on Education, Vol. 20, No. 3, September 2005, p. 249.
6 Pajares, F., and D. Schunk, ‘Self-beliefs in Psychology and Education: An Historical Perspective’, in J. Aronson (ed.), Improving Academic Achievement, New York: Academic Press, 2002, pp. 3-21.

Emphasising an assessment for learning approach also increases teacher effectiveness, and serves the national goal of raising standards of learning. As Crooks notes:

Even better than standards-based assessment is careful monitoring and reporting of the progress (change across time) of individual students. This is a great challenge to teachers, but offers the prospect of supporting the motivation of all students.7

By modifying the assessment strategy to decrease class-wide testing and marking, teachers can focus on individual student achievement and progress, which can promote an increase in teacher job satisfaction.8 Utilising assessment information to focus on progress encourages students to think for themselves, to build their own understanding, and to increase their motivation. The students’ major concern then is how to improve on their learning; to address this, the place to start is to look closely at their current level of learning and their journey to that point.

Providing benchmarks from which the student’s individual progress can be measured also makes it possible to report the achievement level attained by students while avoiding the negative impact of comparing students. As Hattie and Timperley found:

It is the feedback information and interpretations from assessments, not the numbers or grades, that matter.9

Reporting using levels means that a progressive scale is used, along which a student’s own learning can be tracked, rather than reporting achievement in relation to others. This is similar to the notion of ‘Personal Bests’, a concept well known to most students. It encourages constructive dialogue between students, parents, and teachers, aimed at developing realistic goals. It also provides a platform for positive feedback for the student; all of these are important in building self-efficacy and enhancing motivation.

7 Crooks, T. J., Guide to Good Assessment, Journal, 1993, p. 4.
8 Lonsdale, M., and L. Invarson, ‘Initiatives to Address Teacher Shortage’, Policy Briefs, ACER, Issue 5, November 2003, p. 6.
9 Hattie, J., and H. Timperley, ‘The Power of Feedback’, Review of Educational Research, Vol. 77, No. 1, March 2007, p. 104.

Development Partnership – Pedagogy and Technology

ORBIT takes a strategic approach, utilising assessment data to support change in the classroom to produce the greatest learning effect. At the outset, it was clear that any initiative to raise student achievement required multiple integrated approaches. Initially, we trialled data profiling from an Assessment for Learning perspective at several schools. It was found, though, that such a pure approach was untenable because it paid insufficient attention to the schools’ needs for accountability. The meshing of Assessment for Learning and Annual Reporting / School Accountability requirements was challenging, as in schools these imperatives often act in opposition. This conflict was resolved through a focus on progress.

The approach utilises all available data, allowing model-based analyses to generate an overall (smoothed) achievement level every month. Overcoming the variability inherent in single assessments leads to increased teacher confidence and accuracy in the ‘best fit’ evaluations that they make. Through the smoothing of results a reliable reference aggregation can be provided, while also allowing goals to be more clearly defined, and therefore more attainable.10

The online platform accommodates multiple assessments, matching them to a common achievement scale based on curriculum levels. This aids teacher, parent, and student understanding of disparate scales, allowing comparability of outcomes (similar to the curriculum levels as implemented in asTTle) across aspects of Reading, Writing, and Mathematics, with the intention to expand to all curriculum areas. It also introduced the concept of students setting a goal based on a level, which would be exemplified to clarify the standard defined, representing a new avenue of development for data use. Generally, goal setting has been narrative and hence not readily measurable. But if there is clear evidence of a student’s progress to their current status, as well as a sense of the direction their journey will take them, then using assessment information to monitor this progress would be powerful.
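The paper does not specify the statistical model behind the monthly smoothing, so the following is only an illustrative sketch of the idea: scores from different assessment tools are first mapped onto a common curriculum-level scale, and the chronological sequence is then smoothed with a simple exponentially weighted moving average. The tool names, the linear scale mappings, and the smoothing constant are all assumptions for illustration, not ORBIT's actual formulas.

```python
from typing import Callable

# Hypothetical scale mappings: each tool's raw score range is projected onto
# a common curriculum-level scale (here 1.0-6.0). Illustrative only.
SCALE_MAPS: dict[str, Callable[[float], float]] = {
    "toolA": lambda raw: 1.0 + 5.0 * raw / 100.0,  # raw score out of 100
    "toolB": lambda raw: 1.0 + 5.0 * raw / 60.0,   # raw score out of 60
}

def smooth_levels(observations: list[tuple[str, float]],
                  alpha: float = 0.3) -> list[float]:
    """Map each (tool, raw_score) pair onto the common scale, then smooth
    the chronological sequence with an exponentially weighted moving
    average, yielding one 'overall' level after each new assessment."""
    smoothed: list[float] = []
    level: float | None = None
    for tool, raw in observations:
        x = SCALE_MAPS[tool](raw)
        # First observation seeds the level; later ones shift it partially.
        level = x if level is None else alpha * x + (1 - alpha) * level
        smoothed.append(round(level, 2))
    return smoothed
```

Under this kind of scheme, a single noisy result moves the overall level only partially, which is the sense in which smoothing overcomes the variability of individual assessments.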

10 The ‘on balance – best fit’ judgements many teachers have made for years are formalised under National Standards as “Overall Teacher Judgements”, which demand robust processes.

The initial school trials revealed that the statistical model was accurate but that timeliness needed to be improved. Teachers required a solution that was robust but also fast. Hence, a partnership with business was sought. ORBIT worked with designers (gardyneHOLT) and analysts (Datamine), and received funding from the Ministry of Science and Innovation, enabling the accelerated and enhanced IT development necessary to meet those demands. Development schools provided ORBIT with historic student assessment data as well as valuable feedback on the platform, gathered through interviews with a cross-section of staff and via SurveyMonkey. In developing ORBIT, consultation has been key. The pilot development has now extended to over a dozen schools.

Pilot use demonstrates that ORBIT is now well advanced as a robust foundational data platform (DataMart) to manage the data flow required for the myLearning website. This data platform is operational now, with feeds currently being received from multiple schools sourced from two distinct School Management System (SMS) platforms (eTAP and MUSAC). The platform sources this disparate information and, through data import protocols, ‘standardises’ it into a common format, applies statistical logic where required, and prepares data through export protocols for display on the myLearning website. The online graphical display shows students’ progress and achievement over their school years; this includes the facility to add Overall Teacher Judgements and levelled future goals, and to build a learning journal.
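The import-protocol step described above can be pictured as a field-mapping pass over each SMS export. The column names below are hypothetical (the real eTAP and MUSAC export formats are not documented in the paper); the sketch only illustrates the idea of 'standardising' disparate feeds into one common record format.

```python
import csv
import io

# Hypothetical column layouts for the two SMS exports; assumptions for
# illustration, not the actual eTAP or MUSAC field names.
FIELD_MAPS = {
    "eTAP":  {"StudentID": "student_id", "Subj": "subject",
              "Score": "score", "TestDate": "date"},
    "MUSAC": {"NSN": "student_id", "LearningArea": "subject",
              "Result": "score", "Assessed": "date"},
}

def standardise(sms: str, csv_text: str) -> list[dict]:
    """Convert one SMS CSV export into the common record format used by
    the downstream steps (statistical logic, export to myLearning)."""
    mapping = FIELD_MAPS[sms]
    rows = csv.DictReader(io.StringIO(csv_text))
    # Rename each source column to its common-format name.
    return [{common: row[src] for src, common in mapping.items()}
            for row in rows]
```

Once both feeds arrive in the same shape, a single statistical and display pipeline can serve schools regardless of which SMS they use.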

Online Integration – Service for Schools

ORBIT facilitates a progressive levels-based, rather than comparative grades-based, approach to assessment. It gathers and integrates assessment data and makes it easy for teachers to review achievement evidence covering at least the past 18 months. Through the simple-to-interpret myLearning graphical display, ORBIT enables teachers to make Curriculum Level-based Overall Teacher Judgements (OTJs), which can readily be converted to National Standards grades if required.
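The conversion from a Curriculum Level-based OTJ to a National Standards grade could be sketched as a comparison against the level expected for the student's year group. The grade labels below are the standard National Standards categories, but the numeric thresholds are illustrative assumptions only, not the Ministry's definitions or ORBIT's actual rules.

```python
# Hypothetical OTJ-to-grade conversion; thresholds are illustrative.
def to_national_standard(otj_level: float, expected_level: float) -> str:
    """Compare a student's curriculum-level OTJ with the level expected
    for their year group and return a National Standards grade."""
    diff = otj_level - expected_level
    if diff >= 0.5:
        return "Above"
    if diff >= -0.25:
        return "At"
    if diff >= -1.0:
        return "Below"
    return "Well below"
```

A rule of this shape would let the levels-based display remain primary, with the comparative grade derived only when reporting requires it.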


Improvement targets generated from an analysis of comprehensive data can be beneficial in several ways. They can help review the clarity and suitability of tasks and learning programmes and, by using achievement evidence, help set realistic annual goals and targets. The review of assessment data has the potential to vastly reduce the workload required of teachers to investigate effective strategies for student improvement. The IT implementation of the data-use model already shows signs of generating positive change in teacher behaviour. Because ORBIT provides teachers with views of data that show tracking between years, greater depth has been observed in conversations among teachers about how they made past judgements and about the purpose and relevance of future judgements.

Conclusion

ORBIT fits the requirements of the National Administration Guidelines for Schools and takes into account the 2010 Ministry of Education position paper on assessment. It merges dependable data about student progress, allowing interpretations not only of progress against the National Standards; it has also proven worthwhile for teachers in daily decision making about their students’ next learning steps.

ORBIT has helped schools make better use of their assessment data, and has demonstrated its feasibility and ease of use. It encourages regular low-stakes assessment by treating assessment as a natural process in monitoring and supporting learning, generates a more reliable data trail of each student’s development, and allows the student to create a journal of their learning progress. The ability to measure progress from the modelling now in place is ready for implementation; once implemented, we believe this service will support schools to moderate more efficiently, especially for National Standards.

Finally, the trials undertaken to date demonstrate that incorporating principles for learning in assessment is possible. Reporting using levels means that feedback is based on achievement on an independent scale, which is also a transparent measure of achievement. Focussing on progress and showing the results mapped over several years also meets the accountability demands for improvement.


BIBLIOGRAPHY

Absolum, M., Geary, M., McMahon, T., and Tait, P., (2003) ‘Planning for Better Student Outcomes’, Sept. Quarterly: Analysing Student Achievement Data, Ministry of Education, Wellington.
Alton-Lee, A., (2003) Quality Teaching for Diverse Students in Schooling: Best Evidence Synthesis, Ministry of Education, Wellington.
Assessment Reform Group, (1999) Assessment for Learning: Beyond the Black Box, School of Education, University of Cambridge, Cambridge.
Black, P., and Wiliam, D., (1998) ‘Inside the Black Box: Raising Standards through Classroom Assessment’, Phi Delta Kappan, Vol. 80, No. 2, October, pp. 139-148.
Crooks, T., (1993) Guide to Good Assessment Practice. Journal.
Crooks, T., and Kane, M., (1996) ‘Threats to the Valid Use of Assessments’, Assessment in Education: Principles, Policy & Practice, Vol. 3, No. 3, November, pp. 265-286.
Harlen, W., and Deakin Crick, R., (2002) ‘Testing, Motivation and Learning’, seminar led by Wynne Harlen, 28 November, Nuffield Foundation Education Seminars, London.
Harlen, W., (2005) ‘Trusting Teachers’ Judgement: Research Evidence of the Reliability and Validity of Teachers’ Assessment used for Summative Purposes’, Research Papers on Education, Vol. 20, No. 3, September, pp. 245-270.
Hattie, J., (2001) Who Says Formative Assessment Matters? Photocopied extract from MoE reading.
Hattie, J., and Timperley, H., (2007) ‘The Power of Feedback’, Review of Educational Research, Vol. 77, No. 1, March, pp. 81-112.
Irving, J., (1997) School-wide Assessment: The Big Picture, New Zealand Council of Educational Research, Wellington.
Montgomery, D.C., (1985) Introduction to Statistical Quality Control (2nd ed.), Wiley, New York.
Sadler, D., (1989) ‘Formative Assessment and the Design of Instructional Systems’, Instructional Science, Vol. 18, pp. 119-144.
Tait, P., (1999) Assessment Discussion Paper. Handout. Unpublished.
Timperley, H., and Robinson, V., (2002) Partnership: Focusing the Relationship on the Task of School Improvement, New Zealand Council for Educational Research, Wellington.
