Test Scores vs. Growth Scores: Which Tells You More?
Test scores show what students know today. Growth scores reveal how much a school actually helped them learn—often a more meaningful signal.
When you pull up a school's profile, the first number that catches your eye is usually the proficiency rate: 62 percent of students meet state standards in math, say, or 48 percent in reading. For decades, that single snapshot—what percentage of kids score "proficient" on a standardized test—has shaped how parents, policymakers, and the public judge school performance.
But a quiet revolution has been underway in education accountability. More states and districts now publish a second metric alongside test scores: academic growth. Instead of asking where students are, growth measures ask how far they've traveled. And mounting evidence suggests that growth scores often reveal more about a school's actual impact than proficiency numbers ever could.
What Test Scores Actually Measure
Proficiency rates tell you what students know at a single moment in time. California's Department of Education puts it plainly: achievement shows "how much students know at the time of the assessment," while growth captures progress over time.
These snapshots aren't useless. High proficiency rates do signal that a school has students performing at grade level, and research by economists links national test performance to long-term economic outcomes. Parents understandably want to know whether a school meets basic benchmarks.
The trouble is that proficiency rates are heavily influenced by factors schools don't control. A Brookings Institution analysis found that "average test scores do a poor job of identifying the schools that contribute the least to students' learning." Students at affluent schools often arrive already scoring near proficiency, while students in high-poverty neighborhoods typically start far below grade level. A school serving mostly disadvantaged students can do excellent instructional work and still post a low proficiency rate—because it inherited a deficit it couldn't close in a single year.
Research by MIT economist Josh Angrist and colleagues concluded that proficiency-based ratings are "strongly skewed in ways that hurt schools with more students of color." Schools in whiter, more affluent communities consistently earn higher ratings under proficiency systems, regardless of how much students actually learn.
How Growth Scores Work
Growth models flip the question. Instead of comparing this year's fifth graders to an arbitrary proficiency bar, they compare each student to their own past performance. Did a student who scored at the 30th percentile last year move up to the 50th percentile this year? If so, that's strong growth—even if the student still isn't proficient.
Student growth percentiles, or SGPs, are the most common approach. A student's SGP ranks their progress against "academic peers"—students statewide who started the year with similar test scores. An SGP of 70 means a student learned more than 70 percent of peers who began at the same level.
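The peer-ranking idea can be sketched in a few lines of code. This is a toy illustration with invented scores and simplified peer matching; real SGP systems use quantile regression across statewide data rather than a direct rank.

```python
# Toy sketch of a student growth percentile (SGP).
# Real SGP models use quantile regression; here we simply rank a
# student's current score against "academic peers" who started the
# year with a similar prior score. All data below is hypothetical.

def growth_percentile(student, peers):
    """Percent of academic peers whose current score the student exceeded."""
    beaten = sum(1 for p in peers if p["current"] < student["current"])
    return round(100 * beaten / len(peers))

# Peers: students statewide who scored about 430 last year, like our student.
peers = [
    {"prior": 430, "current": 455},
    {"prior": 431, "current": 440},
    {"prior": 429, "current": 470},
    {"prior": 430, "current": 448},
    {"prior": 432, "current": 460},
]
student = {"prior": 430, "current": 462}

print(growth_percentile(student, peers))  # outgrew 4 of 5 peers -> 80
```

Note that the student's absolute score never enters the calculation directly; only progress relative to similar starters matters, which is why a still-below-proficient student can post a high SGP.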
North Carolina's accountability system illustrates how this works in practice. Growth is "measured by a statistical model that compares each student's predicted test score, based on past performance, against his or her actual result." During the 2024-25 school year, 71 percent of North Carolina schools met or exceeded growth expectations, even though only 29 percent earned an A or B grade overall—a composite that's 80 percent proficiency and just 20 percent growth.
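The 80/20 weighting explains how those two numbers can coexist. The sketch below uses two hypothetical schools (all figures invented) to show how the split lets proficiency dominate the composite even when growth is strong.

```python
# Sketch of an NC-style composite: 80% proficiency, 20% growth.
# Both schools below are hypothetical; the point is only the arithmetic
# of the weighting, not any real school's data.

def composite(proficiency, growth):
    """Combine the two indicators with the 80/20 weights."""
    return 0.8 * proficiency + 0.2 * growth

high_growth_school = composite(35, 90)   # strong growth, low proficiency
low_growth_school  = composite(70, 40)   # weak growth, high proficiency
print(high_growth_school, low_growth_school)  # 46.0 64.0
```

Under these weights, the stagnating-but-affluent school outscores the rapidly improving one by a wide margin, which is the pattern the North Carolina figures reflect.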
California recently rolled out its own growth model. The state's methodology uses two to three years of prior assessment data to generate expected scores, then compares actual performance. Unlike individual report cards, these scores are aggregated at the school and district level to paint a picture of institutional impact.
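The predicted-versus-actual logic, aggregated to the school level, can be sketched as follows. All names and numbers are invented, and the expectation here is a naive average; California's actual model is regression-based.

```python
# Sketch of school-level growth aggregation, loosely in the spirit of
# a predicted-vs-actual model: estimate each student's expected score
# from prior years, then average the actual-minus-expected gaps
# across the school. All data is hypothetical.

def expected(prior_scores):
    """Naive expectation: mean of two to three prior-year scores."""
    return sum(prior_scores) / len(prior_scores)

def school_growth(students):
    """Average gap between actual and expected scores, school-wide."""
    gaps = [s["actual"] - expected(s["prior"]) for s in students]
    return sum(gaps) / len(gaps)

students = [
    {"prior": [400, 410], "actual": 425},       # expected 405.0, gap +20.0
    {"prior": [430, 435, 440], "actual": 430},  # expected 435.0, gap  -5.0
    {"prior": [455, 460], "actual": 470},       # expected 457.5, gap +12.5
]
print(school_growth(students))  # positive: school beating expectations
```

Averaging at the school level is what turns an individual prediction error into a measure of institutional impact: one student beating expectations could be noise, but a whole school doing so is a signal.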
Why Growth Is Often the Stronger Signal
Growth measures level the playing field. A Brookings analysis of North Carolina and Florida schools found that students in the bottom 15 percent of schools ranked by proficiency learned about a third of a year less than average, but students in the bottom 15 percent of schools ranked by growth learned more than half a year less. In other words, growth metrics more accurately identified where learning wasn't happening.
The same study found that "the red dots"—schools serving the highest concentrations of low-income students—were clustered at the bottom when ranked by proficiency, but scattered across the distribution when ranked by growth. Some high-poverty schools were adding tremendous value; proficiency-only rankings buried that fact.
Value-added models, a more sophisticated cousin of growth percentiles, try to isolate a school's contribution by controlling for student demographics and prior achievement. A Harvard study describes value-added as measuring "how much of a student's academic progress from one year to the next is attributable to his or her teacher, as opposed to factors outside of the teacher's control."
Consider two schools in the same district. School A has 40 percent proficiency and serves mostly students far below grade level. School B also has 40 percent proficiency but serves students closer to grade level. Under a proficiency system, as USC professor Morgan Polikoff noted, these schools "look exactly the same." But dig into growth, and you might find that School A's students are making 1.5 years of progress per year—rocketing toward proficiency—while School B's students are stagnating.
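The School A / School B contrast can be made concrete with a small sketch. All scores and the proficiency cutoff are invented; the point is that identical proficiency rates can hide opposite trajectories.

```python
# Two hypothetical schools with identical proficiency but very
# different growth (all scores invented; the 450 cutoff is arbitrary).

def proficiency_rate(current, cutoff=450):
    """Percent of students at or above the proficiency cutoff."""
    return round(100 * sum(s >= cutoff for s in current) / len(current))

def avg_gain(prior, current):
    """Average score change from last year to this year."""
    return sum(c - p for p, c in zip(prior, current)) / len(prior)

# School A: students arrive far behind but gain rapidly.
a_prior   = [380, 390, 400, 455, 460]
a_current = [425, 435, 445, 460, 465]

# School B: students arrive near the bar and stagnate.
b_prior   = [440, 445, 448, 452, 455]
b_current = [441, 444, 447, 452, 456]

print(proficiency_rate(a_current), proficiency_rate(b_current))  # 40 40
print(avg_gain(a_prior, a_current), avg_gain(b_prior, b_current))  # 29.0 0.0
```

A proficiency-only report shows both schools at 40 percent; only the gain column distinguishes the school that is moving students from the one that is coasting.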
Illinois recently overhauled its accountability system to reflect this insight. Under the new model, a school's designation is determined by its strongest "core indicator," including both proficiency and growth, with growth carrying substantial weight. The message: excellence can look like strong progress even when proficiency hasn't caught up yet.
The Limits and Criticisms of Growth Models
Growth measures aren't perfect. Some civil rights organizations worry that an exclusive focus on growth lets schools off the hook for persistent achievement gaps. The Education Trust has argued that growth models "risk setting lower expectations for students of color and low-income students" because comparisons to peers "won't reveal whether that student will one day meet grade-level standards."
That's a fair critique. A school can show strong growth while still leaving most students below proficiency. If a student starts at the 10th percentile and grows to the 30th percentile, that's solid progress—but the student is still struggling.
Andrew Ho of Harvard, who has studied accountability design, supports using both metrics. Systems that rely solely on proficiency "are a disaster both for measurement and for usefulness," he said, because they're "extremely coarse and dangerously misleading." But growth-only systems have drawbacks too. The solution most states have landed on is a hybrid: weight both proficiency and growth, ensuring schools are judged on results and trajectory.
Another concern is technical. Growth percentiles typically don't adjust for factors like special education status or English learner classification, which some researchers note can make SGPs "perform more poorly than value-added models when students are not randomly assigned to classrooms." Still, SGPs are easier to explain to parents and educators, which has driven their widespread adoption. Nearly half of U.S. states now use SGPs or similar frameworks.
Finally, growth data require at least two consecutive years of test results, which means they're not available for highly mobile students, kindergartners, or schools that serve only a single tested grade. California excludes K-3 schools and high schools serving only grades 9-12 from growth calculations for this reason.
What This Means for Parents
When you're evaluating a school, here's what to look for:
Check both numbers. A school with 90 percent proficiency but below-average growth may be coasting on an affluent student body. Conversely, a school with 45 percent proficiency but strong growth may be doing excellent work with students who arrive far behind.
Look for growth among subgroups. States and districts often break down growth by race, income, and language status. California, for instance, publishes growth results for student groups alongside school-wide averages. A school that shows strong growth for English learners or students with disabilities is demonstrating it can meet diverse needs.
Beware of high proficiency alone. Research highlighted by Chalkbeat found that proficiency-heavy ratings "effectively steer families towards schools serving more affluent, white, and Asian students," not necessarily the schools where kids learn the most. GreatSchools, a widely used rating site, now weights growth more heavily than it used to—precisely because proficiency alone proved misleading.
Context matters. A principal at Laura H. Carnell School in Philadelphia described the paradox her school faced: ranking 18th citywide in growth but still posting low proficiency. "Achievement takes longer than growth," she wrote. If your child is entering a turnaround school with momentum, growth data can give you confidence that progress is real, even if test scores haven't caught up.
Ask about goals. Some schools set growth targets transparently. New Hampshire publishes median student growth percentiles by grade and subject. A median growth percentile above 50 means the typical student is progressing faster than academic peers who started at the same level; below 50 means slower. These benchmarks help parents understand whether a school is accelerating or stalling.
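Reading a median growth percentile is straightforward, as this sketch shows. The SGPs are invented for one hypothetical grade and subject; the reported statistic is just the median of the students' individual percentiles.

```python
# Interpreting a median student growth percentile.
# The SGPs below are hypothetical, one per student in a grade.
from statistics import median

sgps = [34, 45, 48, 52, 55, 61, 70]
print(median(sgps))  # 52: the typical student slightly outpaces peers
```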
The Bottom Line
Test scores and growth scores measure different things, and you need both to understand school performance. Proficiency tells you whether students have reached a standard; growth tells you whether the school is actually moving them forward.
For families choosing schools, growth is often the more revealing metric—especially if your child is entering below grade level or you're comparing schools across very different demographics. A Brookings study summed it up: using proficiency alone "is unacceptable from a fairness and equity perspective," and growth measures "more accurately identify those schools in which learning is not taking place."
None of this means test scores are irrelevant. Parents should care whether a school eventually gets students to proficiency. But if you had to pick one number that captures what a school does with the students it serves—its actual impact—growth is the stronger signal.
