By Scott Marion
Student-led assessment has become the umbrella term for the range of approaches by which students are involved in collecting and evaluating evidence of their own learning. This contrasts with more traditional approaches in which the teacher or an entity outside the classroom (e.g., district, state) dictates the assessment process. Student-led versus teacher-led assessment is not a simple dichotomy. Rather, the degree of student leadership falls along multiple continua that range from complete adult control to complete student control. In this blog post, I briefly contextualize student-led assessment and then elaborate on these continua to offer an initial conceptualization for educators and others beginning to build student-led assessment into their practice. I conclude with a discussion of the utility of student-led assessments for fulfilling the intended aims of such initiatives.
Student-led assessment is not new. John Dewey (e.g., 1938) wrote about meaningfully engaging students in authentic activities more than 80 years ago. More recently, many high schools, especially those involved in the Coalition of Essential Schools, employed student exhibitions and related demonstrations of learning as capstone projects to document that students possessed the critical knowledge, skills, and dispositions necessary to graduate from high school and move on to the next phase of their lives. The Coalition officially closed its doors last year, after a significant decline in membership from its high of more than 1,000 high schools and the passing of Ted Sizer, the Coalition's founder. While official membership may have declined, the underlying "principles" are embodied in many other reforms.

The increased attention on college and career readiness has sparked renewed interest in student exhibitions and related projects as a way to ensure that students possess the wide range of knowledge, skills, and dispositions, including but extending beyond academic coursework, thought to be necessary for success in careers and college. Many districts and charter organizations quickly recognized that the end of high school is too late for students to take ownership of their learning for the first time; students need multiple experiences directing their own learning and assessment well before then. However, engaging students in their own learning in earlier grades simply so that they will be ready for their senior projects is a missed opportunity. We should actively engage students in their own learning and assessment as early as possible because of the considerable benefits for learning academic content and skills, and especially for building the kinds of dispositions (e.g., 21st century skills) we hope to see in all students.
Senior exhibitions and other presentations OF learning appear to be the focus of many current discussions of student-led assessment. Just as we can learn by drawing on lessons from the past (e.g., Dewey), we can benefit from the conceptual leadership provided by key formative assessment researchers (e.g., Bennett, 2011; Heritage, 2018; Sadler, 1989; Shepard, 2000; Shepard, Penuel, & Pellegrino, 2018; Wiliam, 2011). The commonly cited definition of formative assessment as adopted by the formative assessment working group of the Council of Chief State School Officers follows:
Formative assessment is a process used by teachers and students during instruction that provides feedback to adjust ongoing teaching and learning to improve students’ achievements of intended instructional outcomes (Wiley, 2017, p. 3).
However, all of the formative assessment researchers noted above argue that this definition really captures only the first, albeit critical, part of formative assessment. A key aspect of formative assessment is for students to develop the self-regulatory and metacognitive skills they need to monitor their own learning, and we can learn a lot about student-led assessment from the high-quality research and conceptualization related to formative assessment. For example, Heritage points out that when teachers create the right formative opportunities, they help students become better able to regulate their own learning:
To engage in this kind of assessment requires skills in creating formative assessment opportunities while students are involved in learning, and also orchestrating an extraordinary number of complex judgments in the course of a lesson to implement appropriate and immediate actions in response to evidence…. Additionally, involving students in assessment practices has the payoff of supporting students to develop self-regulatory learning processes (Bailey & Heritage, 2018), and is a hallmark of higher levels of skill and expertise in teaching (Heritage, 2018, p. 39).
My colleague Brian Gong and I addressed the issue of flexibility and standardization about a dozen years ago for alternate assessments administered to students with the most significant cognitive disabilities (Gong & Marion, 2006). In that paper, we considered the degree to which the various dimensions of an assessment system, such as learning targets, assessment methods, scoring, interpretive frameworks, performance standards, and several other aspects, ranged from highly flexible to highly standardized. A similar conceptualization can be applied to student-led assessments. In other words, because assessments comprise multiple dimensions, they are not simply student-led or not. Rather, each of several components can range from completely student-led to fully teacher-dictated.
Five dimensions appear to capture the range of flexibility associated with student-led assessment.
For example, students might experience highly prescribed learning goals and targets, yet have some degree of agency over how they learn the material and even more agency over how they produce evidence of that learning. Such a combination of student agency and teacher direction might serve as an entry point for those early in the student-led assessment journey, and it can serve as a check on student learning of common learning goals and targets. In many cases, especially for older students, there could be considerable benefit in allowing students to establish their own learning targets within broad competencies or related big-picture learning goals. Colleagues at the Student-Led Assessment Networked Improvement Community, a partnership of several Virginia school divisions (districts) and EdLeader 21, have been translating these continua into design principles for student-led assessment as they support schools in implementing and evaluating high-quality student-led assessment practices and policies.
Striving for Utility
Simply being on the student-led end of the continuum for each dimension does not automatically make an assessment high quality. This is analogous to earlier discussions about performance-based assessments. Samuel Messick (1994), a legend in validity theory, argued that calling something a performance or authentic assessment is a “promissory note” for validity, but the name alone does not constitute validity evidence. Similarly, calling something “student-led” does not necessarily make it useful for the intended purposes and uses. Many people, for example, likely assume that student-led assessments should promote deep engagement and thinking by students, but if poorly designed, student-led assessments could focus on low-level outcomes. This could be a wasted opportunity; therefore, we need to attend to many of the same design considerations for student-led assessment as we would for any other high-quality assessment.
A key goal of employing student-led assessments is to maximize the utility of the assessment process and products. Utility is the degree to which the system provides the information necessary to support the intended aims of an assessment program. Therefore, it is critical to specify these intended aims. It is axiomatic to say that assessments are only validated for specific purposes and uses, but in considering utility, the aims often reach beyond the score inferences that are the subject of validity evaluations. In the case of student-led assessment, aims for students often include such things as fostering engagement, enhancing self-regulation, encouraging independence and persistence, and promoting deeper learning. Thus, utility must be evaluated by examining the extent to which the assessment experience(s) support these or other identified aims.
So how does utility intersect with the continua of student-led assessment? Utility requires a thoughtful articulation of the intended aims and purposes. Once these are clear, assessment designers (perhaps with students) should consider how placement along the various continua will best support the aims of the assessment program. For example, educators may conclude that certain program aims call for more teacher direction in setting learning goals and targets, while still encouraging considerable student independence in how students provide evidence of their learning.
In subsequent posts, staff from the Center for Assessment will continue to explore how these design continua interact with purposes and aims of student-led assessment.
Scott Marion is the Executive Director of the National Center for the Improvement of Educational Assessment. He is a national leader in designing innovative and comprehensive assessment systems to support both instructional and accountability uses, including helping states and districts design systems of assessments for evaluating student learning of identified competencies.
References
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5–25.
Dewey, J. (1938). Experience and education. New York, NY: Simon and Schuster.
Gong, B., & Marion, S. F. (2006). Dealing with flexibility in assessments for students with significant cognitive disabilities (Synthesis Report No. 60). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. http://education.umn.edu/nceo/OnlinePubs/Synthesis60.html
Heritage, M. (2018). Making assessment work for teachers. Educational Measurement: Issues and Practice, 37(1), 39–41.
Messick, S. (1994). Alternative modes of assessment, uniform standards of validity. ETS Research Report Series, 1994, i–22. doi:10.1002/j.2333-8504.1994.tb01634.x
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144.
Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14.
Shepard, L. A., Penuel, W. R., & Pellegrino, J. (2018). Using learning and motivation theories to coherently link formative assessment, grading practices, and large-scale assessment. Educational Measurement: Issues and Practice, 37(1), 21–34.
Wiley, E. C. (2017). Formative assessment: Examples of practice. Retrieved January 11, 2018, from http://ccsso.org/resource-library/formative-assessment-examples-practice
Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 2–14.