The Evolving Science of How We Read
Survey has lots about eye-movement measurement, less about comprehension
July 25, 2023

The Science of Reading: Information, Media, and Mind in Modern America
by Adrian Johns
The University of Chicago Press, 2023, $32.50; 504 pages.

As reviewed by Natalie Wexler

If you’ve been following the debates on the “science of reading” over the past several years, prepare to be surprised when you delve into Adrian Johns’s recent book on the subject.

In its current incarnation, the term “science of reading” is primarily used to refer to a substantial body of research showing that many children—perhaps most—are likely to experience reading difficulties unless they receive systematic instruction in phonics and other foundational reading skills in the early years of schooling. Those who advocate that approach are on one side of the debate.

On the other side are the proponents of “balanced literacy,” the currently dominant approach to reading instruction in the United States. The educators and literacy gurus who lead that movement acknowledge that phonics is important, but they maintain that it’s generally sufficient to teach bits of phonics as the need arises—perhaps when a child is stuck on a particular word—while also encouraging children to use pictures and context clues to guess at words.

That stance is a modification of the one taken by the philosophical predecessor of the balanced literacy movement, known as “whole language,” which swept the country in the latter part of the 20th century. Whole language maintained that children learn to read by grasping whole words rather than sounding them out using individual letters. Science-of-reading proponents say that the balanced-literacy school’s approach to phonics doesn’t align with science any more than whole language did.

The revelation in Johns’s book is that throughout most of the 20th century the contemporaneous science of reading was firmly on the side of whole language. Johns, a professor of intellectual history at the University of Chicago, spends almost the entirety of his 500-page book on that era. For a reader whose understanding of the subject has been formed in the recent past, the result is a topsy-turvy, Alice-in-Wonderland experience.

Johns begins his account with the 19th-century American psychologist James McKeen Cattell. Like many of his peers, Cattell engaged in precise measurements of physical reactions and often used himself as an experimental subject. Initially, that led him to attempt to read and write under the influence of various substances—hashish, alcohol, cannabis, morphine—and assess, as best he could, the results.

[Photo: Adrian Johns]

But it was another aspect of his research that had a lasting influence: he invented a device that limited a reader to viewing just one character at a time to ascertain the shortest time in which people could identify characters correctly. His experiments led him to conclude that readers perceived whole words—or even complete sentences—more quickly than individual characters. Later researchers repeatedly confirmed that finding.

Cattell’s device was the granddaddy of a slew of similar contraptions—the kinetoscope, the ophthalmograph, and, most notably, the eye-movement recorder and the tachistoscope—that, judging from the illustrations in the book, resembled medieval torture instruments. The objective, through about the 1960s, was the precise measurement of eye movements with the goal of increasing reading speed.

Johns does his best to make the minutiae of these painstaking experiments engaging, but it’s an uphill battle. He quotes William James as remarking of these studies—many of which were conducted in Germany—that they could only have arisen in “a land where they did not know what it means to be bored.”

And the question, as Johns eventually acknowledges, is whether this research made much difference. To the extent that scientists focused on improving the reading ability of the populace—which then, as now, was a cause for great concern—the assumption seems to have been that a faster reader was necessarily a better one. The focus was on training readers to move their eyes more quickly, leading to the “speed reading” boom of the mid-20th century. While some researchers still measure eye movements, merely increasing reading speed is no longer the goal.

On the other hand, the scientific consensus that readers grasped whole words rather than individual characters made a huge difference to reading instruction—and not a positive one. By the 1930s, Johns writes, “it was simply impossible to buy elementary books that were not written on the whole-word principle.” One prominent reading scientist, William S. Gray, was the moving force behind the Dick and Jane readers, the best-known embodiment of the “look-say” method, which predated whole language. Children who could memorize sentences like “Run, Spot, run” were thought to be learning to read.

Johns takes us on journeys down many and various byways. We learn, for example, that researchers applied what they knew about pattern recognition to help World War II pilots identify distant aircraft and avoid crash landings. We get a tale about how in the late 1930s, fading movie diva Gloria Swanson hatched a plan to develop a “luminous paint” by recruiting European inventors who were being persecuted by the Nazis. But readers may wonder what this information is doing in a book about the science related to reading.

Meanwhile, there’s a lot about the science of reading that Johns leaves out of his account—including applied science having to do with reading instruction. He mentions that Jeanne Chall’s famous survey of reading pedagogy research, published in 1967 as Learning to Read: The Great Debate, found that the consensus of some 30 experimental studies “was overwhelmingly in favor of including at least some phonics instruction.” But Johns doesn’t describe any of those studies or the researchers who conducted them. Similarly, when discussing Rudolf Flesch’s 1955 bombshell Why Johnny Can’t Read, Johns ignores the experimental studies cited there that—according to Flesch—demonstrate the superiority of phonics instruction.

This is a significant omission. The studies done by Cattell and his successors were, according to reading researcher Timothy Shanahan, accurate and reliable basic research: adult readers do recognize words more quickly than letters. The mistake was to conclude that children should therefore be taught to read by memorizing whole words. “Studies quite consistently have found decoding instruction to be advantageous,” Shanahan notes in his paper “What Constitutes a Science of Reading Instruction?”

Johns acknowledges that point only obliquely, remarking toward the end of the book that he is not questioning “the current consensus that a ‘decoding’ model is the preferred basis for teaching early readers.” To the extent that he discusses recent science-of-reading research—much of it focused on brain imaging—he seems skeptical. Neuroscience, he observes, “rarely has much to suggest about how to teach.” True, but Johns could have said the same about the basic research of the past that he spent the previous 400 pages detailing.

Johns’s skepticism about current reading research stems from his intuition that reading is about much more than decoding. Reading, he observes, “is a variegated and dynamic practice, not reducible to one basic and unchanging perceptual skill.” Indeed it is, but Johns has omitted from his account another hugely significant yet far more complex aspect of reading: comprehension.

In a way, that omission isn’t surprising, given that in current usage the “science of reading” often denotes only studies of decoding. But, as with his omission of experimental studies of phonics instruction, Johns’s failure to include any of the extensive research on reading comprehension renders his history seriously incomplete. That research, which includes studies on the roles of knowledge and metacognitive strategies in the reading process, began as far back as the 1970s.

Still, The Science of Reading is a thorough summary of at least part of the science of reading, if not all of it. It’s also a useful reminder that science can change radically over time.

Natalie Wexler is an education writer and author of The Knowledge Gap: The Hidden Cause of America’s Broken Education System—And How to Fix It.

A Sharp Critique of Standards-Based Reform
Polikoff pins his hopes on high-quality curricula selected by the states
August 5, 2021

Beyond Standards: The Fragmentation of Education Governance and the Promise of Curriculum Reform
by Morgan Polikoff
Harvard Education Press, 2021, $60; 192 pages.

As reviewed by Natalie Wexler

The many education reformers who have relied on academic standards to boost student achievement might outline their theory as follows: States broadly define what students should know and be able to do at specific grade levels. Publishers use these standards to create detailed curricula, which districts adopt. Teachers receive training in the standards’ requirements. Students’ progress is tracked by standards-based assessments. And educators are held accountable for the results. The expected outcome: markedly higher student achievement and a narrowing of racial and income-based gaps.

In Beyond Standards, Morgan Polikoff demonstrates that this theory hasn’t matched reality. He argues that standards are inherently too vague to enable teachers to arrive at a common or accurate understanding of what they need to teach or to identify the right materials for teaching it. Polikoff also points to decentralized governance as a problem. With over 13,000 school districts in the United States, he argues, it’s impossible to provide all students with a standardized educational experience.

Polikoff pins his hopes for improvement in K–12 education partly on high-quality curriculum rather than standards. As he notes, studies have shown that the effect of a high-quality curriculum on student outcomes can be as strong as the effect of having a veteran rather than a novice teacher. He also supports a more active role for states. Citing Louisiana as an example, he argues that states should identify or create high-quality curricula and exercise greater control over decisions typically left to districts, schools, or individual teachers (see “Louisiana Threads the Needle on Ed Reform,” features, Fall 2017). Polikoff even urges states to mandate that districts adopt curricula from among a limited set of state-approved options.

That aspect of his reform prescription is more problematic. As Polikoff is aware, the idea of state-mandated curriculum flies in the face of a strong American tradition of local control. Some states—20 or 25 by his estimate—issue lists of approved curricula or textbooks, but none require districts to use those on the list. In the other states, curriculum decisions are left entirely to districts. When Polikoff advised one state’s education leaders simply to collect data on which curricula districts were using, they protested that “district folks would freak out and assume the state was trying to usurp their authority over teaching and learning.”

[Photo: Morgan Polikoff]

Beyond the logistical or political obstacles, though, it’s not clear states can be relied on to make good curriculum choices—especially in the area of literacy. Polikoff argues that decentralization has led to a plethora of curricular approaches, but in literacy, at least, there is already a standard approach, referred to as “balanced literacy.” While curricula vary in some respects, most of the commonly used ones fail to guide teachers effectively in teaching phonics, an area where their training is often deficient. And almost all emphasize reading comprehension skills and strategies, such as “finding the main idea” or “making inferences.” Students practice the skills on “leveled texts”—books on various topics that they can read easily and that may be well below their grade level. Polikoff doesn’t specifically address either component of this widespread approach.

When he refers to high-quality literacy curriculum, he seems to have in mind a handful of newer curricula grounded in evidence that many children need systematic instruction in phonics to read fluently and that comprehension depends far more on academic knowledge than “skill.” These curricula put content in the foreground and go deeply into topics in social studies and science. Although they haven’t been studied as much as math curricula, the evidence on their effectiveness that does exist is promising.

But will state decisionmakers gravitate to these newer curricula or stick with what’s familiar? Polikoff seems to assume they’ll opt for the good stuff if they involve teachers in the adoption process and rely on guidance from organizations like EdReports, which rates curricula according to their alignment to Common Core standards. Teachers, though, may prefer to work with what they’re used to. And although EdReports ostensibly includes knowledge building as one of its criteria—and although the organization has rated several knowledge-building curricula highly—it has made some puzzling decisions of late.

For example, EdReports gave its highest rating to McGraw-Hill’s Wonders, one of the 10 most popular reading programs. But a recent critique by Student Achievement Partners, which evaluates curricula for how well they align with research evidence, found that Wonders is overstuffed, fails to spend enough time on some foundational skills, lacks coherence, and doesn’t build content knowledge systematically.

It’s not hard to imagine a state being misled by guidance from EdReports. In fact, that may have already happened in Florida, where the state’s new standards profess to value knowledge over reading-comprehension skills. Strangely, the recently announced state adoption list failed to include any of the curricula that focus on building knowledge but did include Wonders, along with some other lower-quality options. At the same time, several Florida districts had already conducted their own reviews and chosen actual high-quality curricula, only to find that the state later failed to recommend them.

Fortunately, those districts can still adopt high-quality curricula, although it might be more difficult or cost them more money. Polikoff calls such state incentives to use approved curricula “modest,” but they may be more powerful than he thinks. It’s essentially the method Louisiana used to induce districts to adopt high-quality curricula, and, as he reports, over 80 percent of schools in the state are now using such materials. But if Polikoff had his way, Florida would be able to prevent the districts that want high-quality curriculum from purchasing it and require them to use inferior curriculum instead.

In the abstract, Polikoff’s prescription makes sense: why not have 51 decisionmakers rather than 13,000? We might even wish we could have just one, like the many developed countries that have national curricula. But centralizing the decisionmaking process only makes sense if the decisionmakers understand what they’re doing. In Louisiana, curriculum adoption at the state level worked because of a visionary state superintendent of education, John White. There may be others like him, but at this point we can’t count on one being at the helm of every state department of education—or even most of them.

I agree with Polikoff that standards-based reform hasn’t worked—and in the case of literacy standards, which reinforce the mistaken notion that reading comprehension is primarily a set of skills, I think we would be better off without them. But weaning schools away from what’s familiar and toward what’s aligned with science will unfortunately take a lot more than jettisoning standards and giving authority over curriculum to the states.

Natalie Wexler is an education writer and author of The Knowledge Gap: The Hidden Cause of America’s Broken Education System—And How to Fix It.

This article appeared in the Fall 2021 issue of Education Next. Suggested citation format:

Wexler, N. (2021). A Sharp Critique of Standards-Based Reform: Polikoff pins his hopes on high-quality curricula selected by the states. Education Next, 21(4), 76-77.
