The CenterPoint interim assessments were designed to provide information that helps teachers understand the breadth of students’ skills and understandings in the mathematics content and mathematical practices typically measured on state summative assessments. Students answer a variety of question types, including selected-response, multiple-select, fill-in-the-blank, technology-enhanced, and constructed-response items, and engage in both scaffolded and unscaffolded tasks. These assessments provide educators with meaningful data that can inform curriculum and instructional decisions.
CenterPoint’s interim assessments give educators the information needed to monitor student performance in mathematics so that teachers can identify students who need additional intervention or enrichment opportunities. Evidence-centered design helps ensure the interims yield quality data that can be used to make informed decisions. The design of the interim assessments begins with inferences, or claims we want to make, about student proficiency; to support those claims, we gather evidence from tasks designed specifically to elicit it.
The CenterPoint interim assessments were designed to provide information about a master claim and four subclaims, as shown in the diagram and defined below.
Master Claim
On track or ready for college and careers.
Major Content
Students solve problems involving the major content for the grade/course with connections to the Standards for Mathematical Practice.
Additional and Supporting Content
Students solve problems involving the additional and supporting content for the grade/course with connections to the Standards for Mathematical Practice.
Mathematical Reasoning
Students express grade-/course-level-appropriate mathematical reasoning by constructing viable arguments, critiquing the reasoning of others, and/or attending to precision when making mathematical statements.
Mathematical Modeling
Students solve real-world problems with a degree of difficulty appropriate to the grade/course by applying knowledge and skills articulated for the current grade/course, engaging particularly in the Modeling practice, and, where helpful, making sense of problems and persevering to solve them, reasoning abstractly and quantitatively, using appropriate tools strategically, looking for and making use of structure, and/or looking for and expressing regularity in repeated reasoning.
Each question or task on the interims was designed so that students can demonstrate evidence of learning that supports the claims. Additionally, CenterPoint uses the item types best suited to eliciting that evidence. Some of the item types used on the interims are listed below.
Machine-Scorable Item Types
Human-Scored Item Type
In addition to designing assessments within the framework of evidence-centered design, CenterPoint applies principles of universal design to increase the accessibility, and therefore fairness, of each assessment for all students. Universal design is essential to valid measurement practices. If assessment questions are not accessible or fair for every student, then the evidence collected will not provide meaningful information about students’ knowledge and/or abilities.
College- and career-ready standards in Mathematics are designed to describe the knowledge, skills, and understandings essential to postsecondary success. This includes an emphasis on major content, supporting content, and additional content in each grade as well as the mathematical practices.
The interim assessments are designed to measure the conceptual understandings and skills defined in the standards. The mathematical practices also come into play in the constructed-response items, which focus on students’ abilities to model and reason with mathematics.
In Grades 3-8 and in Algebra 1, Geometry, and Algebra 2, each interim contains 16 questions that vary in complexity. Interims A, B, and C each assess unique content.
There are two versions of each interim. Version 1 contains machine-scored items assessing conceptual understanding and skills, plus two constructed-response items: one focused on mathematical modeling and one on mathematical reasoning. Version 2 is identical to Version 1 except that it omits the constructed-response items; it was designed to give educators the flexibility to assess the standards without including human-scored questions. Please see the Blueprints and Test Maps to view the standards assessed within each interim and grade.
The questions on the interim assessments are similar to those on summative assessments, giving students an indication of their progress throughout the year and a better sense of what to expect on end-of-year assessments.
Educators can use raw score data and questions showing actual student responses to identify patterns of student performance and to diagnose students’ strengths and areas of need. The data may also reveal areas within the curriculum and instruction that need adjustment.
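As an illustrative sketch only (not part of the CenterPoint platform), the kind of pattern-finding described above might look like the following in Python. The standard codes, scores, and record format are invented examples for demonstration:

```python
# Hypothetical sketch: aggregating raw item-level scores by content standard
# to surface patterns of strength and need. All data below are invented.
from collections import defaultdict

# Each record: (student id, standard code, points earned, points possible)
responses = [
    ("s1", "3.NF.A.1", 1, 1),
    ("s1", "3.NF.A.1", 0, 1),
    ("s2", "3.NF.A.1", 1, 1),
    ("s1", "3.OA.A.3", 2, 4),
    ("s2", "3.OA.A.3", 3, 4),
]

def percent_correct_by_standard(records):
    """Return the class-wide percent of points earned for each standard."""
    earned = defaultdict(int)
    possible = defaultdict(int)
    for _student, standard, pts, max_pts in records:
        earned[standard] += pts
        possible[standard] += max_pts
    return {std: round(100 * earned[std] / possible[std], 1) for std in possible}

print(percent_correct_by_standard(responses))
# → {'3.NF.A.1': 66.7, '3.OA.A.3': 62.5}
```

Because most standards are assessed with only a few questions, summaries like this are best treated as a starting point for closer review of the actual student responses rather than as firm conclusions.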
Of note: CenterPoint’s interim assessments are designed to show students’ progress toward meeting end-of-year expectations. Because most standards are assessed with only a few questions, be cautious when reviewing student data at the standards level and avoid jumping to immediate conclusions. Similarly, subclaim data provide a better picture of student proficiency when all the data points from the three assessments completed within the year are considered together. This design was intentional: it keeps testing time to a minimum, ensures assessment coverage of the standards, and provides a high-level view of progress toward meeting end-of-year expectations.
Each interim assessment includes two constructed-response questions, each aligned to a mathematical practice standard and a content standard. These questions are designed to be human scored using the rubrics that accompany each question, which should be accessed within the online platform. Teachers within schools and districts should work together to ensure that student responses are scored consistently. A sample rubric is shown.