Flypaper

Busting the belief gap via regular student assessment

Jeff Murray
8.5.2021

At its simplest, the belief gap is the gulf between what students can accomplish and what others—particularly teachers—believe they can achieve. It is especially pernicious when beliefs about academic competency are fueled by extraneous information such as socioeconomic status, race, or gender. All too often, adults' assumption of low academic ability becomes actual underachievement in young people. A new study looks at one simple way to mitigate that extraneous information and remove the assumptions: relying on demonstrated academic ability instead.

The data on which the belief gap analysis is based were collected during a separate study of the efficacy of an online student evaluation platform called Assessment-to-Instruction (A2i) in several elementary grades. A2i uses regular, ongoing student assessment not only to track a student's progress through a literacy curriculum, but also to guide teachers on what additional work, and how much of it, students need to reach competency. The belief gap study, conducted by researchers from the University of California, Irvine and Texas A&M University, looked at the effects of both the assessment data and the professional development (PD) around it on teachers' perceptions of student ability.

The A2i study took place in an unnamed district in northern Florida during the 2008–09 school year, in five elementary schools ranging from urban to rural in setting. The belief gap researchers focused on a subset of the participants—twenty-eight teachers and 446 of their first-grade students. Students were representative of the district community: 84 percent were White, 6 percent were multiracial, 5 percent were Black, 3 percent were Hispanic, 2 percent were Asian, and 0.7 percent were Native American. Approximately 46 percent of the students were boys, and 27 percent of the students qualified for the National School Lunch Program (NSLP). All teachers were female, with an average of seventeen years of teaching experience. One teacher identified as Black; the rest identified as White. Fifteen teachers and their 255 students were randomly assigned to the A2i treatment group, while thirteen teachers and their 214 students were assigned to the “control” group.

It’s important to note that, due to the design of the main A2i study, a pure control group was not possible for the belief gap analysis. Both groups of teachers received the same amount of PD on research-based teaching, but the focus varied between groups. The treatment group’s PD centered on why and how to use A2i assessment data to tailor instruction; the control group received more generalized PD on the potential value of any assessment-guided instruction. Teachers in the control group delivered business-as-usual instruction during their literacy block and implemented a research-based intervention called Math PALS during their mathematics class periods. They received infrequent assessment data for their students but were not asked to tailor their instruction based on it. The treatment group teachers used Math PALS too, but drew on the frequent, dynamic assessment feedback from A2i to guide and shape their literacy instruction.

At the midpoint of the school year, teachers completed the Social Skills Rating System (SSRS) for all of the first graders in the study. SSRS is a norm-referenced, multirater assessment tool comprising fifty-seven items across three measurement areas: academic competence, problem behaviors, and social skills. The researchers hypothesized that teachers using the frequent assessment feedback from A2i for the first half of the year (and exposed to the A2i-specific PD) would produce more accurate predictions of student competence than their control group peers, and that potential biases in predictions based on student characteristics would be minimized.

Generally, this hypothesis proved correct. Teachers in the treatment group rated their students’ academic competence more accurately than their control group peers, choosing ratings that agreed with student test scores. Control group teachers—those without access to the A2i assessment data—generally rated the overall academic competence of their students lower, and rated students who qualified for the NSLP as less academically competent than their more affluent peers. The strength of this effect varied with the percentage of NSLP students attending a given school: the fewer NSLP students in a school, the lower the control group teachers’ ratings of those students. Interestingly, teachers’ perceptions of students’ social skills and behavior problems appeared impervious to the treatment. Teachers in both groups who rated students’ behavior or social skills as poor also predicted lower academic competence for those students.

Students in the A2i classrooms achieved greater gains in test scores between fall and spring than students in the control classrooms, a result that likely speaks more to the primary study’s question of A2i’s effectiveness. More relevant here, teacher ratings of academic competence were positively and significantly correlated with higher test scores in both literacy and math. For example, for every one-point increase in a teacher’s rating of academic competence, a student’s score on reading comprehension increased by 0.24 points. Thus, while it would be something of a leap to assert that a high competency rating directly results in higher test scores, there is clearly an interaction.
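As a rough sketch, that reported coefficient reads as a simple linear relationship (assuming a basic model for illustration; the study's actual specification, which presumably includes other controls, isn't detailed in this summary):

$$\text{reading comprehension score} \approx \beta_0 + 0.24 \times \text{competence rating}$$

where $\beta_0$ stands in for the baseline score and any other model terms. In other words, a one-point bump in a teacher's rating corresponds to roughly a quarter-point gain in reading comprehension, all else equal.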

To the extent that teacher ratings are influenced by student and classroom characteristics unrelated to students’ actual performance—often negatively—any successful effort to mitigate that influence should yield positive outcomes for students. Teachers who participated in PD on data-driven personalized instruction were significantly more accurate in their competency judgments regardless of students’ socioeconomic status and other non-academic characteristics. Filtering out the noise is a great first step toward eliminating the belief gap.

SOURCE: Brandy Gatlin-Nash et al., “Using Assessment to Improve the Accuracy of Teachers’ Perceptions of Students’ Academic Competence,” The Elementary School Journal (June 2021).

Policy Priority: High Expectations
Topics: Accountability & Testing, Evidence-Based Learning, Curriculum & Instruction, Teachers & School Leaders

Jeff Murray is a lifelong resident of central Ohio. He previously worked at School Choice Ohio and the Greater Columbus Arts Council. He has two degrees from the Ohio State University. He lives in the Clintonville neighborhood with his wife and twin daughters. He is proud every day to support the Fordham mission to help make excellent education options more numerous and more readily available for families and…
