Want Better Teaching? Get Better Curricula.

One Catch with Basing Reform on Curriculum: Defining Quality

There’s a potential problem, though, with making everything rest on curriculum. If the curriculum isn’t actually a high-quality, knowledge-building curriculum—even if it’s labeled as one—the whole structure is unlikely to work. This problem first became apparent after the Common Core State Standards were promulgated in 2010. The theory, as with the standards movement in general, was that publishers would create a curriculum based on the standards, and then the rest of the dominoes would fall into place and student achievement would rise.

What happened, though, to the dismay of the architects of the Common Core literacy standards, was that publishers made only cosmetic changes to their existing programs and slapped a sticker on them saying they were “Common Core aligned.” Perhaps that should have been foreseeable. The authors of the literacy standards believed that the only way for schools to enable students to meet the standards was by adopting and implementing a content-rich, knowledge-building curriculum, and there was language to that effect in the supplemental materials. Few people read those materials, though, and the standards themselves made no mention of building knowledge. They appeared to be just a somewhat different list of skills. As a result, most curriculum publishers—and most educators—didn’t recognize the need for a fundamental shift.

Officers at some philanthropic foundations, spearheaded by the Charles and Lynn Schusterman Family Philanthropies, realized the significance of the problem. They knew there were nontraditional curriculum developers that had gotten the message about the need for fundamental shifts and were creating products that incorporated them. But how were states, districts, and schools supposed to recognize the difference between a curriculum that was truly aligned to the new standards and one that merely had a sticker saying it was?

The solution the foundations came up with was an organization called EdReports, which was launched in 2015. The idea was to recruit classroom teachers, train them to recognize what made a curriculum a good one—one that was truly aligned to the Common Core standards or something like them—and issue reviews and ratings based on a detailed rubric. Since EdReports was to be funded by philanthropy rather than publishing companies, it would be objective in its reviews.

EdReports rates literacy curricula on three “gateways”: text complexity and quality, building knowledge, and usability. The top rating for each one is green, curricula that partially meet expectations get yellow, and the lowest rating is red. A curriculum needs to get green on each gateway in order to proceed to the next one.

In some ways, EdReports has been a resounding success. By 2022, according to the organization’s annual report, EdReports had been used by over 1,400 districts, representing nearly 16 million students. Although there are undoubtedly many places where curriculum isn’t yet part of the conversation—as one literacy consultant told me—where it is part of the conversation, EdReports is likely to pop up.

“The first line of screening for school systems,” said Kareem Weaver, the cofounder and executive director of a literacy-focused nonprofit called Fulcrum, “is EdReports 95 percent of the time.”

A number of states now use “all green on EdReports” as a proxy for high quality, and some, like Rhode Island, develop lists that include only curricula that have gotten all greens. EdReports has also trained curriculum reviewers for states and districts. In January 2024, the then-interim state superintendent of Maryland, Carey Wright, assured the state board of education that if a curriculum got all green on EdReports, “you can take that to the bank, that that is a high-quality piece of instructional material.”

All of this would be news to cheer if EdReports’s ratings were reliable. Unfortunately, many literacy experts and advocates I’ve spoken with say the organization’s yellow and green ratings have become increasingly mystifying. In its early years, EdReports was doing what it was intended to do—giving all greens to truly high-quality knowledge-building curricula that were developed by nontraditional publishers, many of them nonprofits. Around 2017, though, things began to change.

A curriculum called Bookworms, highly regarded by literacy experts who helped develop the Common Core, was given some yellows rather than all greens. After that, major publishers of the reading textbooks called basal readers began submitting their curricula to EdReports—in some cases, the same publishers that had affixed “Common Core aligned” stickers on their products without making fundamental changes—and EdReports gave some all greens. They include Wonders (McGraw-Hill), myView Literacy (Savvas, formerly Pearson), and Into Reading (Houghton Mifflin Harcourt). These programs now appear on many state adoption lists in part thanks to EdReports’s high ratings.

From what I have seen myself and heard from many educators and curriculum experts, these basal programs bear little resemblance to the knowledge-building curricula that got all greens in EdReports’s early days—curricula like Core Knowledge Language Arts and Wit & Wisdom. For one thing, they’re stuffed with more activities and features than any teacher could possibly cover in one school year. There may now be some high-quality texts in the mix, but there’s also a lot of time-wasting fluff.

At least some of these publishers are aware that their curricula are bloated. One reason, they say, is that they need to satisfy a plethora of state standards.

It’s not just standards that are causing bloat. If a basal program gets anything less than green from EdReports, the publisher may simply add whatever has been identified as missing and resubmit it for review. EdReports’s chief external affairs officer, Janna Chan, told me the organization revised its criteria in 2020 partly to guard against bloat, but the revision doesn’t seem to have had the desired effect.

Even though they’re overstuffed, the “all-green” basals are also too thin on content to do a good job of building knowledge. Instead of the meaty topics covered in effective knowledge-building curricula, such as “the American West” or “early American civilizations,” the basals focus on broad themes such as “Many Cultures, One World” and “How do people from different cultures contribute to a community?” These themes don’t provide children with the repeated exposure to the same vocabulary and concepts that enable them to retain information in long-term memory.

The EdReports criteria also don’t include evidence of a curriculum’s effectiveness. Bookworms, the curriculum that experts say has the best evidence for raising reading scores, has now failed to get all greens on three rounds of reviews over five years, with points taken off for different issues on each round. As a result, few if any states have put Bookworms on their approved lists.

Literacy consultant Kate Crist told me that EdReports has “such a thumb on the scale that it has sort of wreaked havoc.” It’s not just that states and districts are being misled into spending massive amounts of money on curricula that don’t work—which is bad enough. It’s also that a complex and interconnected superstructure has been built on top of a foundation that is flawed. Researchers at institutions like the RAND Corporation and journalists who write about education routinely use all green on EdReports as a proxy for “high-quality,” making it impossible to determine how many schools or districts are using truly effective curricula and how many are not. It’s also difficult to figure out which curricula are leading to improved student outcomes.

One reason EdReports has been unreliable for so long is that, despite a general consensus among experts that its ratings are flawed (“We all talk about it,” one literacy consultant told me), few have been willing to speak up publicly. The reason, I’ve been told repeatedly, is that the powerful funders behind EdReports also fund a lot of other education organizations—and those they don’t fund often hope they’ll get funding in the future. The result is that no one wants to criticize the funders.

Ironically, unless things change, EdReports could be a perpetuator of the same problem it was designed to solve: ineffective curricula that are adorned with labels saying they’re effective. The theory was that philanthropic funding would prevent that from happening by ensuring objective ratings, but if people are reluctant to tell philanthropists they’ve made a mistake—or if philanthropists are reluctant to admit they’ve made a mistake—the end result is pretty much the same. The ratings may be objective, but they’re still misleading.

There’s probably no perfect way to rate curricula, just as there is no perfect curriculum. However, given the crucial role that curriculum plays in education—and the difficulty of judging quality—officials and educators need as much reliable guidance as possible. There are rubrics that states and districts can use for evaluating literacy curricula instead of relying on EdReports. Some literacy experts recommend an evaluation tool produced by The Reading League and another developed by the Knowledge Matters Campaign, which is specifically focused on knowledge building. (Disclosure: I serve on the board of the parent organization of the Knowledge Matters Campaign.)

In May 2024, The Reading League, through a project called Compass, began releasing its own reports on specific curricula, based on its curriculum evaluation guidelines. Those guidelines are grounded in a definition of the science of reading that includes both knowledge building and writing as well as foundational reading skills, although—like EdReports’s criteria—they don’t extend to the principles of cognitive science more generally. With only eight evaluated programs as of this writing, it remains to be seen whether Compass can dislodge EdReports from its deeply entrenched position of primacy.

Even if it can, The Reading League’s reports and guidelines have their own troubling aspects. While EdReports’s usability ratings are unreliable, The Reading League doesn’t even try to apply that criterion. It’s true that usability can be hard to evaluate, but it’s crucial for districts to have at least some information on that score.

More fundamentally, The Reading League, like EdReports, has given high marks for knowledge building to some curricula that don’t appear to deserve them. In addition, its guidelines place more emphasis on explicit comprehension strategy instruction and practice than is warranted by the evidence.

It would be helpful to have more reliable curriculum rubrics and ratings, but ultimately we need to go beyond those tools. We need rigorous, objective research that evaluates one specific curriculum against another, in different contexts. Typically, when researchers undertake experimental studies of curriculum or other interventions, they identify the intervention they’re testing but describe the control group as getting “business as usual.” Educators who are deciding between two or more curricula need to know how they stack up against each other, not how they do as compared to some unknown quantity.

In addition, these studies should last at least three years, because the evidence suggests that’s about how long it takes for the benefits of a knowledge-building curriculum to become apparent on the standardized reading comprehension tests that are considered the gold standard for evaluating effectiveness. Such studies are expensive, which is why so few of them get done. Given the urgency of the situation, though, the federal government should fund them in the same way it funds clinical trials of new drugs. Surely the education of the nation’s children is as important as its citizens’ health.

We also desperately need examples that policymakers and educators can look to—schools and, perhaps, entire districts that are doing it right. For that to happen, we need better data. We need to know what curricula are being used where—and we can’t just rely on EdReports’s ratings to define “high quality.” We need educators and leaders to step up and say, publicly, “This is what we’re doing. It’s working. Come see it for yourselves.” It’s no exaggeration to say that the futures of our children, and perhaps our democracy, largely depend on shining examples of what education can be, for all students.
