The Bigger Picture of Charter School Results
A National Analysis of System-Level Effects on Test Scores and Graduation Rates

Parents and schoolchildren demonstrate their support for charter schools and protest the racial achievement gap in New York City. An estimated 25,000 people attended the rally.

Charter schools now represent 7 percent of national school enrollment. In a growing number of cities, this number is well above 40 percent. This represents one of the most dramatic shifts in the structure of U.S. schooling in the past half century. An entire sector of publicly funded, privately run schools has emerged from scratch that now rivals private schools in its size and scope.

We have learned a great deal from the charter-school experience. Most prior research has focused on how well charter schools serve the students who attend them. These “participant effects” are, on average, small and positive for test scores—more positive in urban areas and in schools using a “No Excuses” approach to instruction and discipline. The results have also generally improved over time, perhaps because charter schools and their partners have had more time to learn from experience.

But charter schools could have broader effects on schooling systems as a whole. Other studies have examined the effects of charter schools on nearby traditional public schools. Sometimes called “competitive effects,” these influences actually reflect a range of ways in which nearby traditional public schools might respond to charter schools. The competitive effects documented in past research, too, are typically small and positive.

Another potential effect of competition is that traditional public schools might be forced to close. Charter schools draw enrollment from traditional public schools. The loss of students can make the traditional public schools less viable, financially and academically. Closures are painful, to be sure. However, a growing body of research suggests that if the schools that close are among the lowest performing, then students benefit academically because they end up in better schools. We know little, however, about the effect of charter schools on the closure of other schools.

More generally, we are not aware of any studies that capture the net or systemwide effects of charter schools including all of these mechanisms. Prior research therefore gives us only a partial picture. We decided to address this issue. Instead of focusing on one particular mechanism—participant or competitive effects—we try to estimate the net effect of almost all the potential mechanisms. Instead of focusing on particular cities or states, we take a national look. And, instead of focusing on test scores alone, we consider both scores and high-school graduation rates. In short, we aim to provide a bigger picture of charter school effects.

National Data and Analysis

We included essentially all school districts in the United States during the years 1995–2016. During this period, 608 of the nation’s approximately 12,000 districts had at least one charter school. Sixty-one percent of these districts have 10 percent or more charter enrollment, and 39 percent of these districts have 20 percent or more charter enrollment. (The number of districts in each group is smaller for the sample we use to study graduation rates.) The remaining, no-charter districts serve as a potential comparison group.

These data come from the National Longitudinal School Database, or NLSD, which we created at REACH, the National Center for Research on Education Access and Choice. The NLSD combines a wide variety of school and district data sources, including test-score data from the Stanford Education Data Archive, high-school graduation data from the federal Common Core of Data, and demographic data from the Common Core of Data and the U.S. Census.

While these data are not unusual, our approach to the analysis is unusual in one key respect: We focus on system-level outcomes, which are an average of the outcomes of traditional public schools and charter schools located within districts’ geographic boundaries, weighted by school enrollment. This approach has two key advantages. First, it allows us to capture system-level results, which reflect the outcomes of all students (excluding private schools and home education). Second, one of the main concerns in studies of charter schools is that they might select or “cream-skim” the best students and inflate their outcomes. However, this type of selection is largely irrelevant in a district-level analysis of the total effects of charter schools. All students are counted in the analysis regardless of which type of school they attended. This is really an analysis of “systems” instead of “districts.”
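
Roughly speaking (the notation here is illustrative), the system-level outcome for a district in a given year is the enrollment-weighted average across all charter and traditional public schools within the district’s boundaries:

\[
\bar{Y}_{dt} \;=\; \frac{\sum_{s \in d} n_{st}\, Y_{st}}{\sum_{s \in d} n_{st}},
\]

where \(Y_{st}\) is the average outcome (test score or graduation rate) of school \(s\) in year \(t\) and \(n_{st}\) is that school’s enrollment.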

We analyze these data using a method called difference-in-differences, which compares a control group of districts with a treatment group. In this case, the control group includes only districts that have no charter schools. The treatment group includes only “charter-heavy” districts, which we initially define as those that eventually reach at least a 10 percent charter enrollment share. We match each charter-heavy district to otherwise similar no-charter districts and then compare the trends over time in each group to see whether they diverge after charter schools open.
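
In stylized form, the estimate is a double difference: the change in outcomes in charter-heavy districts, before versus after charter entry, minus the corresponding change in the matched no-charter districts over the same period:

\[
\widehat{\text{DiD}} \;=\; \left(\bar{Y}^{\,\text{charter}}_{\text{after}} - \bar{Y}^{\,\text{charter}}_{\text{before}}\right) \;-\; \left(\bar{Y}^{\,\text{no charter}}_{\text{after}} - \bar{Y}^{\,\text{no charter}}_{\text{before}}\right).
\]

If the two groups would have followed parallel trends in the absence of charter entry, this double difference nets out shocks common to both groups and attributes any remaining divergence to charter expansion.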

A key challenge in understanding any effect of charter schools is separating their impact on student outcomes from the impact of other policies aimed at improving schools that were adopted at roughly the same time. For example, states might adopt charter schools as part of a larger education agenda—which might include changes in school funding, investments in school facilities, or school accountability—that also affects student outcomes. Our matching method helps address this by focusing the comparison on districts that are otherwise similar and therefore are similarly likely to experience additional policies. If a state institutes new policies for low-performing schools, for example, the analysis will account for this by comparing districts that initially had similar performance levels.

It is also possible that non-policy factors could change at the same time that charter schools open. For example, demographics of a district might change, and, since outcomes are correlated with demographics, the results might change for reasons that have nothing to do with charter schools. To account for this, we sometimes control for demographics. We also test directly for demographic shifts that coincide with charter entry.

Yet another problem is that charter schools might intentionally seek to open in locations where the performance of traditional public schools is expected to decline. In that case, it might appear that charter schools are having a more negative impact than they actually are. The matching partially addresses this as well. In addition, we carry out “placebo” analyses in which we look for “effects” of opening high-school charter schools on elementary outcomes, which should not exist.

Figure 1: Districts with Greater Shares of Charter Enrollment Improve Test Scores and Graduation Rates

Average Effects on Test Scores and High-School Graduation

Though we examine a number of factors, we focus here on comparing districts with charter enrollment of 10 percent or more to no-charter districts, while controlling for other district characteristics including race/ethnicity, free-lunch eligibility, and urbanicity.

Figure 1 shows the effects on elementary- and middle-school test scores in math and reading up to six years after charter schools open. The first bar indicates that, when enough charters open to reach at least a 10 percent enrollment share, math test scores increase by 0.15 standard deviations, or approximately 6 percentile points. For reading scores, the increase is 0.08 standard deviations (the equivalent of 3 percentile points).
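
These standard-deviation-to-percentile conversions are approximations that assume roughly normally distributed scores; for a student starting at the median, the arithmetic is:

\[
\Phi(0.15) - \Phi(0) \approx 0.56 - 0.50 = 0.06, \qquad \Phi(0.08) - \Phi(0) \approx 0.53 - 0.50 = 0.03,
\]

that is, gains of about 6 and 3 percentile points, where \(\Phi\) is the standard normal cumulative distribution function.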

The right side of Figure 1 also shows a 2.8 percentage point increase in high-school graduation rates over an eight-year period when comparing districts without charter schools to districts with at least 10 percent charter enrollment.

Additional analysis reinforces our conclusion that these effects are the result of charter schools. To test the robustness of our estimates to different analytic choices, we alter the matching method, vary the control variables, fix the number of years after charters enter at five years, and address the staggered nature of charter-school openings. The results vary somewhat across our methods, but the general picture is the same. In fact, with graduation, the effects often appear considerably larger when we estimate them in other ways. The estimates in Figure 1 might therefore be conservative.

The analyses also generally pass the usual tests that give us confidence that estimates reflect causal effects. The comparison and treatment groups were on the same trajectories before charter schools opened. The placebo estimates reinforce our findings by confirming that the expansion of charter high schools is unrelated to outcomes of elementary-school students.

We also used an entirely different method. Rather than compare charter-heavy districts to no-charter districts, we compare each charter-heavy district to itself as charter enrollment changes. This “fixed effects” approach makes somewhat different assumptions than our main analysis, but this, too, yields very similar results.
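
A minimal sketch of such a specification (the notation is illustrative, not drawn from the underlying paper) is:

\[
Y_{dt} \;=\; \beta\,\text{CharterShare}_{dt} \;+\; X_{dt}'\theta \;+\; \gamma_d \;+\; \delta_t \;+\; \varepsilon_{dt},
\]

where \(Y_{dt}\) is the enrollment-weighted outcome for district \(d\) in year \(t\), \(X_{dt}\) is a vector of district characteristics, and \(\gamma_d\) and \(\delta_t\) are district and year fixed effects. Here, identification comes from within-district changes in charter enrollment share over time rather than from comparisons to no-charter districts.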

Figure 2: Diminishing Returns to Charter Enrollment

Diminishing Returns

The 10 percent charter enrollment share threshold is arbitrary, and there are reasons to expect that the effects would be different if we picked other thresholds. For example, some have argued that having too many charter schools may reduce the performance of traditional public schools.

We find that increased charter enrollment share is generally associated with larger effects in the lower ranges of charter enrollment. Figure 2 shows that the improvement is especially pronounced once the threshold reaches 10 percent. When we raise the threshold above 15 percent, the effects continue to be positive, but they do not get larger.

New Orleans is an extreme case with the highest charter enrollment of any district. It has also been one of the more successful and well-documented examples of improved student outcomes. To test whether New Orleans might be driving the results, we dropped it from the analysis. The results are essentially unchanged when we do this. As in the prior analyses, this pattern holds when we use other comparison groups and other methods.

Do Charter Effects Vary by Student and District Characteristics?

The 10 percent charter enrollment threshold yields a positive effect on math scores for almost all of the subgroups we examine. In particular, our results show that the increase in math scores for districts with charter schools is larger in metropolitan areas. This is consistent with prior research, though, again, that research had focused on particular mechanisms, such as participant effects, not the broader systemwide effects.

More novel is our analysis by grade level and initial achievement level. Here, we define high initial achievement as the top 50 percent of math scores nationwide and low initial achievement as the bottom 50 percent. We find some evidence of larger effects in middle schools and where initial (pre-charter) achievement was low. This is consistent with the theory that it is easier to improve when outcomes are low to start.

Our analysis includes not only average test scores, but also scores by student race/ethnicity and family income. We find evidence of improvements for every group as well. We see positive and statistically significant effects on math scores for low-income, higher-income, white, Black, and Hispanic students.

Students at New York City’s Bushwick Ascend Charter School, which recently scrapped its strict code of discipline and conduct.

What Mechanisms Explain the Total Effects?

What exactly about charter schools leads to these effects? Prior studies have focused on whether charter schools are more effective than nearby traditional public schools or whether charter schools induce traditional public schools to improve through competition.

One key contribution of the present study is focusing attention on the net effects of all of these mechanisms, including a third one: how charter schools might replace low-performing traditional public schools. To analyze this, we use the same methods described above, but here we are interested in whether the opening of charter schools led any traditional public schools to close or be taken over. We find that higher charter enrollment share does increase the likelihood of closure or takeover of traditional public schools.

To further understand this, we used school-level measures of achievement growth from the Stanford Education Data Archive. These measures are created by calculating the change in achievement across grades and years (for example, the change in scores between 3rd graders in 2010 and 4th graders in the same school in 2011). Prior research suggests that these growth measures are similar to “value-added” measures that more accurately capture what schools contribute to student learning.
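
In rough terms (again, illustrative notation), the growth measure for school \(s\), following the cohort that was in grade \(g\) in year \(t\), is:

\[
\text{Growth}_{s,g,t} \;=\; \bar{A}_{s,\,g+1,\,t+1} \;-\; \bar{A}_{s,\,g,\,t},
\]

where \(\bar{A}_{s,g,t}\) is the average achievement of grade-\(g\) students in school \(s\) in year \(t\), so the measure tracks the same cohort of students as it moves up one grade.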

We find that traditional public schools that close as charter schools open have lower-than-average achievement growth. We also find that charter schools tend to locate near relatively low-performing traditional public schools. This may partly explain why charter schools tend to be slightly higher performing than the schools their students would otherwise attend.

We also examined the effects of charter schools on private-school closures, but we find no evidence of such effects. This is important, too, given the possibility that students might switch from private to charter schools. We might also expect competition between schools when there are more charter schools; more schools mean more competition for students and funding. Indeed, we find that traditional public school performance rises with the charter enrollment share, though only slightly. This evidence may reflect correlation more than causation, but it is consistent with prior research that has examined charter entry more rigorously in specific locations.

Putting this research together with prior research, it does seem clear that multiple mechanisms play a role in explaining how charter schools improve student outcomes.

Implications

This study continues a general trend. Charter results continue to improve in studies using rigorous designs to assess charter effectiveness—including one recent study of voting—as well as in more descriptive studies. The fact that we find systemwide gains in high-school graduation rates on a national scale is significant, given how important graduation is for long-term life outcomes.

There is still much we do not know. While our work advances understanding of the system-level effects, we still know little about some indirect effects of charter schools. Some recent research finds that charter schools attract more high-performing teachers to the profession, some of whom end up in traditional public schools.

On the other hand, critics also point out that charter entry might be accompanied by increases in average per-student funding. This happened in New Orleans and may also have occurred in other locations where traditional public schools are funded mainly by local property-tax revenue and charter schools are funded separately by state funds. Relatively little research has examined this topic.

Another legitimate concern is about how charter schools operate and how they might affect other outcomes. In New Orleans, we found, for example, that the intense charter-school focus on test scores took schools’ attention away from the city’s centuries-long traditions in the arts. Whether this has happened on a national scale is less clear.

Charter schools may also have contributed to weakened ties between parents and schools, and among families within neighborhoods. School choice generally means that students have longer commutes to school, which can make it more difficult for parents to make it to parent-teacher conferences, attend sporting and other afterschool events, or pick up their children when they are sick. Choice may also weaken neighborhood ties as students living across the street walk to different bus stops and attend schools that are not in their neighborhoods and often on opposite sides of the city.

The bigger picture, as it turns out, is even bigger than it might appear. Still, this study is an important step forward.

Douglas N. Harris is director of REACH, the National Center for Research on Education Access and Choice. He is chair of the department of economics at Tulane University, where he also holds the Schlieder Foundation Chair in Public Education. Feng Chen is a PhD student in economics at Tulane University. A more technical version of this paper is available at reachcentered.org.

What Charter School City Actually Says
Why Jay Greene’s critique of my book about education reform in New Orleans doesn’t hold up

A parent learns about KIPP New Orleans Schools during the annual New Orleans Schools Expo.

The purpose of my new book, Charter School City, is to describe the unprecedented New Orleans school reforms, piece together more than three dozen studies about them from the center I direct, and explain what all of this might tell us about the future of schooling nationally. As a first-of-its-kind reform, New Orleans is a fascinating case study, which I describe by intermingling evidence with stories and interviews from those most intimately involved in the reforms.

Jay Greene, who penned a review of the book for Education Next, and I both seem to agree on many of the book’s main conclusions. We agree that the reforms yielded large and positive effects on a wide variety of metrics: test scores, high school graduation rates, college-going, achievement/opportunity gaps, and parent satisfaction. We agree that these effects were at least partially due to the unusual conditions of New Orleans and therefore may not generalize to other cities and states. We agree that it is important to understand why those effects emerged in New Orleans.

That might be where the agreement ends, but it’s difficult to tell. Greene focuses his unsuccessful critique on one specific conclusion that “closing and taking over low-performing schools was the factor above all others, that explains the improved student outcomes,” and that choice and competition were not key factors. The first problem with his critique is its focus on such a narrow part of the book.

The second problem is that even this narrow line of attack, as he has written it, doesn’t make much sense. Greene would like you to believe that my conclusion on school takeover is based on a single flimsy analysis. In reality, it is based on five different and compelling types of evidence:

  1. We used a difference-in-differences strategy, comparing New Orleans students who experienced takeover, before and after the takeovers occurred, to a comparison group. This is a widely accepted method for identifying causal effects. Our version goes a step further and combines the difference-in-differences strategy with another that selects the comparison groups via matching methods. Greene only argues that the difference-in-differences approach “sometimes produces spurious findings.” That’s true. In fact, all research methods can yield spurious findings if they’re carried out poorly or with limited data. However, he offers no specific critique at all of our analysis. He also doesn’t mention that we carried out many different versions of the analysis, all of which yielded the same conclusion.
  2. The same study he mentions also provides evidence about why the takeover process was so effective in this case. In particular, my co-authors and I find that it was effective for a very simple reason: New Orleans students ended up in better schools (measured by school value-added) when their schools were taken over, which further reinforces the conclusion that takeover was a key contributor to improved student outcomes. It’s almost common sense that students do better in better schools, but this simple point is something that has been missed in much of the national discussion of school takeovers. The book also shows that the same pattern holds when we look at studies of takeover in other states. He doesn’t mention or question this part of the analysis either.
  3. We also compared the experiences of New Orleans with our neighbor, Baton Rouge. In contrast to New Orleans, the Baton Rouge takeovers did not yield positive benefits for students. This turns out to be easy to explain using point 2 above. Baton Rouge students ended up in lower-performing schools after takeovers, while New Orleans students ended up in higher-performing ones. Greene labels this an “ad hoc explanation.” Does he believe that students don’t do better when they move to better schools? I don’t know, because he, again, provides no specific critique.
  4. Rather than rely only on rigorous analysis of student outcomes, I also talked to New Orleans education leaders. This is a helpful general strategy for understanding why effects emerge. If the quantitative analysis discussed above had conflicted with what educators told me, then I would have been circumspect about the quantitative analysis. But, without exception, New Orleans education leaders said the takeover process was key. Though Greene recommends that “education reform might benefit from…greater acceptance of qualitative analysis,” he makes no mention of the qualitative evidence in the book.
  5. The performance-based takeover process seemed so important in that first study that we developed a novel way to decompose school improvement into multiple parts. That study, “From Evolution to Revolution: Market Dynamics in School Value-Added and Marketed Program Offerings Under the Post-Katrina School Reforms in New Orleans,” suggests that we probably under-estimated the role of the takeover process in the study he cites. To be fair to Greene, that additional study was not published when I was writing the book. Also, I didn’t see a need to add it in the final book edits, given how compelling the rest of the evidence was (see points 1-4).

Put another way, to argue that school takeovers were not the driving force behind improved student outcomes, as Greene seems to suggest, you have to believe most of the following: (a) we made some kind of unspecified error in the New Orleans analysis—an error that neither he nor any other peer reviewer has identified; (b) that those errors had no effect, or the opposite effect, in Baton Rouge; (c) that students actually do better in low-performing schools or that our measures of school performance (value-added) are wrong, despite considerable evidence to the contrary; (d) that all the education leaders in New Orleans I spoke with are wrong about what drove the measurable improvement; and (e) that we made another large, unspecified error in the follow-up study decomposing the effects into different parts. I will let readers judge for themselves whether this scenario is likely. But the key is that Greene does not attempt to make a case that any of this is true.

Another way Greene might have approached his critique would have been to argue that factors other than school takeover mattered even more. He mentions choice and competition as an alternative possibility, but he does not attempt to rebut the evidence I present that these were not key factors. I present evidence that families, in choosing schools, focus on practical factors like distance and after-school care, as well as extracurricular activities, not the outcome measures whose improvement I was trying to explain. I present evidence that essentially all schools were guaranteed sufficient enrollment, which greatly reduced competitive pressures. I summarize the large national research base showing that the effects of competition, while positive, are widely considered to be small. He does not question any of this evidence or even propose a theory about why these findings might be misleading.

Greene does point out, correctly, that I argue competition had some unintended consequences, but, again, he offers no rebuttal. Instead, he suggests that I add in discussion about Arizona, which cannot possibly inform a debate about whether takeover worked in New Orleans. He also gets sidetracked on the question of whether the National Association of Charter School Authorizers’ process for reviewing applications to operate charter schools was successful in identifying the best applications. This point is relevant to why school takeovers mattered, but not to the larger point about whether they mattered. If this were a movie review, his approach here would be equivalent to trashing the movie because he doesn’t like one of the extra actors hanging out in the background of the scenes.

His review gets more puzzling as it moves along. Greene implies that school takeover was the explanation I “prefer” for the New Orleans experience. In coming to your own judgment, you have to ask, why would I prefer this? School takeover is one of the least popular ideas in the history of education and not one I had ever written positively about in the past. The reality is that I prefer this explanation because this is what the data tell me.

Another problem with the review comes when he writes that the importance of school takeovers in New Orleans is the “heart of [my] argument.” But he never actually states what my argument is. This makes his review hard to follow, especially as the main focus of the book isn’t on making arguments at all.

My best guess is that he is referring to the conclusion that I draw late in the book that the government has some important roles to play in schooling vis-a-vis markets. (This is one of the most interesting and surprising things we learned, especially given the measurable success of the New Orleans reforms and their market orientation.) Specifically, I propose a framework that I call Democratic Choice, which suggests five general roles for government: providing accountability for schools, ensuring access to quality schools for all students, providing information and acting with transparency, ensuring engagement of families and the community in system-level decisions, and providing (some forms of) choice. Notably, this framework doesn’t call for active school takeovers, which belies his assertion that school closure is the “heart of [my] argument.” It also explicitly includes an idea—choice—that he suggests I am biased against. Puzzling indeed.

In any event, you don’t need to support any element of the Democratic Choice framework to get something useful from the book. In fact, my greatest concern with Greene’s review is that it gets the essence of the book wrong. The value of Charter School City, in my view, is that it presents a comprehensive picture of the New Orleans school reforms, not a simplistic story, as Greene would have you believe. If you are interested in learning about the complexity of the school reforms, if you want to hear from supporters and opponents of the reforms in their own words, if you want to hear the vast trove of evidence (in New Orleans and beyond) woven together, if you want to hear what went well and what did not, if you want to be surprised and engaged, and if you want to use all of this to think deeply about the roles of government and markets, then I think you will find the book worthwhile.

I can understand why Greene would oppose a book that affirms some roles for government. He generally opposes roles for government (beyond funding), and sometimes he makes interesting and important points to support that view. Especially in this time of political polarization, we need to listen to people who think differently than ourselves. But having a different perspective is not enough. We have to really listen to one another’s arguments, think about the elements of those arguments and supporting evidence, and analyze each part. This is where Greene’s review falls short. At the end of his critique, he writes, “well-done case studies … make their accounts compelling … [using] standards of evidence and logical argumentation.” I agree, and I wish he had applied that same standard to his review.

The New Orleans school reforms certainly do draw strong reactions. This is because the stakes are high. After a quarter-century of mostly small experiments with school choice, charter schools, and reduced job protections for teachers, New Orleans was the first to try them all at once, at scale. For that reason, we’ll be talking about the city’s experiences for years to come. I believe Charter School City is a valuable resource in that discussion.

Douglas N. Harris is professor and chair of the economics department at Tulane University and is author of Charter School City. He is also Tulane’s Schlieder Foundation Chair in Public Education and director of the Education Research Alliance for New Orleans and the National Center for Research on Education Access and Choice (REACH).

Still Waiting for Convincing Evidence

Do public-school students who move to a private school with a government-funded voucher benefit from making this switch? A growing body of research is shedding light on this question. Of particular interest are findings coming out of three states and the District of Columbia, all of which have implemented ambitious voucher programs over the past dozen or so years. That evidence does not seem to justify the fervor for vouchers displayed by many education reformers and now by U.S. Secretary of Education Betsy DeVos.

First, what do we know about the students who choose to participate in voucher programs? It almost goes without saying that families who choose to use tuition vouchers are less satisfied with their traditional public schools than those who stay. And since the vast majority of private schools are religiously affiliated, it comes as little surprise that voucher users tend to be more religious.

Participation in voucher programs is also driven by eligibility requirements. For example, programs that target low-income families directly or indirectly (by virtue of being based in urban areas) will enroll low-income students. Even when limited to low-income populations, though, vouchers tend to serve a socioeconomically advantaged portion of that group, those who are best positioned to leverage choice. Why? Mostly because this is how markets work. Economic research shows that more-educated adults are more likely to get what they want in the marketplace writ large.

This dynamic intensifies in the schooling market. In a practical sense, families who lack personal transportation or live far away from private schools do not have access to alternatives. Also, most private schools, in the absence of vouchers, are designed for the wealthy and middle class. It is wealthier families who can afford to pay tuition, and school eligibility requirements often exclude students who have academic and disciplinary challenges. That is largely because parents know what researchers have confirmed: students benefit from attending school with more-advantaged classmates. School reputations therefore depend to a substantial degree on exclusivity. This is also why we must view parent satisfaction cautiously. Research shows that white or affluent parents often avoid schools that have high concentrations of minority and low-income students. This might make them more “satisfied,” but it is hardly a reason to celebrate.

Some evidence about exclusivity comes from my fellow participants in this forum, who have shown that private schools considering whether or not to accept voucher students often worry about having to lower their academic standards. Given the correlation between family socioeconomic advantage and the student characteristics that schools look for, this concern on the part of private schools will restrict access for voucher-bearing students. Religious schools, a partial exception, are more willing to participate in means-tested voucher programs, but they, too, often have academic and behavioral admissions requirements. In some cases, schools institute these policies in a well-intentioned effort to build strong scholastic communities, but their criteria effectively serve to exclude many students. Even though voucher programs often officially prohibit selection practices, these rules are rarely enforced, and research shows that schools have many ways to shape enrollment that fall outside the rules.

Vouchers that are targeted to disadvantaged students could theoretically help address the affordability barrier, but, in general, when governments target attractive financial benefits to one part of the population, politically powerful groups will exert pressure to loosen eligibility requirements to gain their own access. We have seen this in Indiana, Louisiana, and other places where small-scale, means-tested voucher programs gradually expanded to include families closer to middle class. The trend is toward voucher programs becoming less well targeted, with funding shifting to socioeconomically advantaged students who already have some degree of choice.

Effects of Vouchers

And what do we know about the academic outcomes of students participating in voucher programs? Most of the rigorous research, now dating back more than a decade, focuses on programs in large urban areas, such as Milwaukee and New York City. Averaging across 16 U.S.-based programs, Patrick Wolf and colleagues find that these small-scale voucher programs have statistically insignificant effects on standardized test scores across academic subjects.

This interpretation is different from what Wolf concludes from the same results in his contribution to this forum. The reason for the difference is noteworthy. In his Figure 1, Wolf shows that studies of voucher programs that examine students’ outcomes longer after they switch to a private school produce more positive results. However, this conclusion assumes that the programs had no effect on the achievement of students who switched to a private school only temporarily. Even if valid, it could be generalized only to the small subset of students who used a voucher consistently when given the opportunity to do so, ignoring the smaller and perhaps negative effects on the majority of voucher users. The analysis also includes a now-dated study of the Washington, D.C., program, the effects of which have since turned negative (more on this below). Moreover, the other handful of studies that lasted for four years might only have done so because they were relatively successful, meaning the less successful ones are omitted. In contrast, my interpretation—that there is no statistically significant effect—places equal weight on all students observed using a voucher to enroll in a private school, including all studies and students regardless of how long they were followed, and therefore captures all the positive and negative effects at work. Note, too, that even Wolf’s more optimistic interpretation asserts a positive achievement effect only for reading, not for math.

These early programs’ effects on non-achievement outcomes, such as graduating from high school, tend to be somewhat more positive. Even if we place more weight on these outcomes, however, we need to keep in mind that effects found in small-scale programs often do not generalize to larger scales. School-choice initiatives, including charter schools, seem to work better in cities than statewide because it is easier to exercise choice where there is better mass transit and higher population density, and the performance of traditional public schools is generally worse in urban areas, making it less challenging for choice programs to improve on baseline student outcomes.

The limited scale of the programs examined in most prior studies is important, because the United States is now in the midst of a full-scale nationwide expansion of these policies. Twenty-five states and the District of Columbia have some type of voucher program. Just four statewide voucher programs have been formally evaluated, and only one has shown any signs of success. Ohio’s statewide program has shown clear negative effects on test scores. Two others, in Indiana and Louisiana, started off with some of the worst test-score results any education program has ever demonstrated, though these subsequently improved somewhat so that the net effects are now essentially zero. In the fourth program (Florida), the authors conclude that test-score effects cannot be determined with any confidence because of the program design.

Why are the results for test scores not more positive? One possible reason is that the state-mandated tests were not well aligned to the curriculum taught in private schools. However, the most recent experimental evaluation of the D.C. voucher program showed negative test-score effects after one year, even though the study did not rely on a state-mandated test—and despite the fact that an earlier study of the program showed no effects. The more likely explanation is that private schools in the city are competing with a public- and charter-school system that has demonstrated substantial academic improvements in recent years. More generally, where vouchers are competing with charter schools—which have produced increasingly positive results over time—the voucher results are likely to continue to be less positive. It is harder to look good against stronger competition.

Another possible reason for the uninspiring results is that the private schools that participate in statewide voucher programs are simply not very effective. This could be interpreted as a failure of the voucher concept or, as voucher supporters have asserted, it could be that the regulatory burden of the programs, while very small compared with those of public or charter schools, kept the best private schools from joining. Research by Wolf and colleagues does not seem to support the latter interpretation, however. Based on ratings from the organization GreatSchools, the schools participating in the Louisiana voucher program were not of lesser quality than those that did not participate, though the voucher-accepting schools did charge lower tuition.

The effects of the Florida Tax Credit (FTC) scholarship program on college outcomes have been widely cited as a success story, but several caveats apply here. First, unlike the other studies mentioned above, the design of the FTC program precludes a rigorous research design. In their Florida study, Matthew Chingos and Daniel Kuehn do their best by matching students on observable characteristics that are somewhat removed from the outcome of interest. But this strategy is unlikely to yield an apples-to-apples comparison. In studies using test scores as the outcome, matching is much more effective, because the treatment and comparison groups of students can be matched on their scores—the variable of interest—before students receive vouchers. This is not possible when studying college-going. Also, because the assignment to the FTC is not random, the more positive effects they see for students participating for more years may, as they acknowledge, reflect selection bias; that is, any student who stays in the same school for more years is likely to have better outcomes.

Earlier research in D.C. provides evidence of positive effects on another important long-term outcome, high school graduation, but these findings are now difficult to interpret for another reason. The downward trend in test-score results in D.C. calls into question whether prior outcomes still reflect the current reality, given increased competition from charter schools.

Still Waiting

What has all this taught us about how states ought to design and oversee voucher programs—and, indeed, whether they should do so at all? How about slow down? The latest results should give everyone pause.

Try this exercise: Let’s drop the word “voucher” and simply say, “Statewide _________ programs show a mix of tenuously positive and negative results.” Now, fill in the blank with your favorite non-performing program. It is hard to imagine that any objective observer would respond by saying, “Great, let’s expand this to states across the country.” Yet this seems to be what DeVos and half the states in the country have concluded about vouchers.

Robert Pondiscio, senior fellow at the Thomas B. Fordham Institute, has argued that empirical evidence is largely irrelevant to determining whether vouchers and other forms of choice programs should be adopted, because choice aligns schooling with a core American value: freedom. Certainly, it is desirable that education policy support our most fundamental principles, and freedom is at the top of the list. However, this argument assumes that choice policies, by definition, increase freedom. Whether that assumption holds true depends on what form of freedom we mean and how policies are designed. A voucher program that allows schools to set their own rules and does not provide transportation will increase freedom only for those students who meet the schools’ standards and who can find their own way to and from school.

Even that interpretation assumes we mean freedom in the libertarian sense—that is, freedom from restrictions on individual choice. School attendance zones, which assign children to a particular school by their neighborhood of residence, do curtail this kind of freedom. But the freedom that comes from promoting educational opportunity is also important. Unlike the libertarian form of freedom, opportunity is mostly an empirical issue, as it requires not only that families be unfettered by government policies in selecting schools for their children, but also that they are able to choose from among accessible, high-quality options. We can debate what criteria define quality—strong test scores versus parent satisfaction, for example—but the assertion that opportunity is an empirical issue is hard to dispute.

It is also important to consider how voucher programs contribute to or detract from other salient cultural values such as equity, community, and democracy. The fact that vouchers are likely to open access only for some creates an immediate concern for equity. Apprehension that vouchers will undermine neighborhood schools—and the neighborhoods themselves—is also well founded at a time when geographically based communities are already under great stress.

The voucher debate, therefore, is a question not just of values but also of effectiveness, and research should play a significant role. So how should we interpret the available evidence? At most, one of the more than two dozen states that have tried statewide vouchers and tuition tax credits has demonstrated convincing, measurable success with them. Given this reality, it is hard to make a case for substantially replacing our system of public schooling on a national scale. The American workforce continues to be the most productive and creative in the world. This does not mean we cannot do better, but it does indicate that we should proceed with caution and care.

Finally, we cannot interpret the voucher evidence without thinking about the alternative policy options. Vouchers represent just one form of choice. Given the multitude of ways in which we would expect a free market in schooling to fail, perhaps other forms of choice that strike a different balance between government and market forces would be more effective. The evidence on charter schools, for example, is increasingly positive—even at scale. Perhaps what some call the “portfolio model,” and what I have called “managed competition,” will do more to increase freedom, equity, efficiency, and community. A system of managed competition gives families genuine choice in schooling, but it also ensures 1) true accessibility to these options; 2) transparency, including data reporting and open board meetings; 3) coordination of school operations with a government body that has some degree of authority; and 4) government enforcement of the rules and protection of students’ civil rights. It also seems likely that different localities need different systems, and many might be best served by maintaining traditional public schools.

We have been debating vouchers for decades, even centuries, without much evidence to inform those debates. Today, policy advocates are way out in front of the evidence, especially with the current proliferation of statewide voucher programs. The new federal expansion of tax-favored 529 savings plans to include tuition for private schools, a move that constitutes reverse targeting to the affluent, has even less justification. It would be wise to put a hold on further broad-based experiments until we see whether the dozens of relatively new programs yield more positive results than the older ones. When it comes to convincing evidence, we are still waiting.

This is part of a forum on private school choice. For alternate takes, see “Programs Benefit Disadvantaged Students,” by Patrick J. Wolf, or “Lessons Learned from Indiana,” by Mark Berends, R. Joseph Waddington, and Megan Austin.

This article appeared in the Spring 2018 issue of Education Next. Suggested citation format:

Wolf, P.J., Harris, D.N., Berends, M., Waddington, R.J., and Austin, M. (2018). Taking Stock of Private-School Choice. Education Next, 18(2), 46-59.

Taking Stock of Private-School Choice
Scholars review the research on statewide programs

In the past few years, new statewide voucher programs in Indiana, Louisiana, and Ohio and the steady growth of a tax-credit funded scholarship program in Florida have offered a glimpse of what expansive private-school choice might look like. What have we learned about the students and schools who choose to participate in statewide private-school choice programs and the academic results for participants? How do these programs work in practice? And what does research tell us about how states should design and oversee voucher programs—if indeed they should do so at all?

In this forum, we hear from Patrick J. Wolf, education policy professor at the University of Arkansas, Douglas N. Harris, professor of economics at Tulane, and the trio of Mark Berends, professor of sociology at the University of Notre Dame, R. Joseph Waddington, assistant professor at the College of Education, University of Kentucky, and Megan Austin, researcher at the American Institutes for Research, Chicago.

 

Programs Benefit Disadvantaged Students
by Patrick J. Wolf

Still Waiting for Convincing Evidence
by Douglas N. Harris

Lessons Learned from Indiana
by Mark Berends, R. Joseph Waddington, and Megan Austin

This article appeared in the Spring 2018 issue of Education Next. Suggested citation format:

Wolf, P.J., Harris, D.N., Berends, M., Waddington, R.J., and Austin, M. (2018). Taking Stock of Private-School Choice. Education Next, 18(2), 46-59.

Make It Local with In-House Researchers
We need to build a new cadre of researchers employed by school districts, state agencies, and local nonprofits.

I agree wholeheartedly with Tom Kane’s argument that we need to invest more in localized research and that we can do so with integrated data systems, relationships between districts and analytic partners, and “venues for synthesizing results.” I made very similar arguments in an article released by the Brookings Institution in December.

If there is one difference between my argument and Kane’s, it is that he argues for relying on outside analytic partners, whereas I argue that we need to build a new cadre of researchers employed by school districts, state agencies, and local nonprofits. This new army of researchers, with master’s degrees and certificates, would have many of the skills of Ph.D. researchers, especially the ability to design and carry out randomized trials and rigorous quasi-experiments.

But these in-house researchers would have important advantages over the Ph.D.s.

First, there would be far more of them. There will never be enough Ph.D.s to carry out or even advise on this type of work, especially not at a price education agencies can afford.

Second, the in-house researchers would be more connected to practice. Many would have been teachers or administrators themselves, and they would be working in school and education agencies on a daily basis. Ph.D. researchers think in the abstract, which is good for basic science but not always the best for applied work intended to aid in the improvement of specific schools.

Third, in-house researchers will be able to help education leaders interpret outside research and understand whether the results apply in their contexts. Context matters, and those working in schools know their own contexts better than anyone.

Fourth, in-house researchers would generally be trained by Ph.D. researchers at universities, creating a direct relationship between education agencies and the analytic partners that I think Kane has in mind.

Fifth, by working within the education agencies that actually make decisions, the in-house researchers would also have the relationships to ensure that research is at the table—at strategy meetings with superintendents and school board meetings. Relationships are key, and these new researchers could act as a bridge between the world of university research and school practice.

There is an important role for the federal government here. The Institute of Education Sciences (IES) already funds a very successful Ph.D. training program, which has helped immensely in generating more rigorous research. We now need those same Ph.D.s to train in-house researchers who can fulfill the principles of the Every Student Succeeds Act (ESSA) and take the research enterprise to scale in a way that, gradually, over time, can produce meaningful improvement.

—Douglas N. Harris

Douglas N. Harris is Professor of Economics and the Schlieder Foundation Chair in Public Education at Tulane University.

DeVos and the Evidence from Michigan
What does the evidence say about a free market approach to school reform that relies on school vouchers and unregulated forms of charter schooling?

In a recent New York Times op-ed, I argued that the case for Betsy DeVos’s appointment as Secretary of Education rests on a very weak track record—in particular, the evidence does not support her free market approach to school reform, which relies, first and foremost, on school vouchers for private schools, as well as unregulated forms of charter schooling.

To support my case, I presented three categories of evidence: (1) the fact that national reform groups seem deeply concerned about Detroit; (2) the similarity in performance between the city’s charter and traditional public schools; and (3) the large negative effects of two statewide voucher programs on student outcomes. Given that DeVos was a key architect behind Michigan’s policies and a devotee of the free market ideology behind them, this collection of facts is highly problematic. Each deserves more attention than I had room to provide in the original piece. Here, I will take a closer look at the evidence and what it says about DeVos’s approach:

(1) Concerns from within the charter movement.

The fact that a broad swath of the national charter reform community is quite critical of Detroit charters does not seem to be in dispute. In addition to the article I initially cited, the Center for Reinventing Public Education (CRPE) and the National Association of Charter School Authorizers (NACSA), groups that support charter-based reforms, have also expressed concern about the city’s efforts. This is worrisome because it tells us that key leaders who share DeVos’s concerns about public education and even support some of her preferred solutions—people who know a lot about what’s happening in Detroit and nationally—believe she is off track.

The reasons for concern are clear. One of the main arguments for charter schools is that the government can write performance-based contracts, then close or take over schools that fall short on performance. In Detroit, it doesn’t work that way. A NACSA report indicates that Michigan is well below average in the number of charter schools that close, and there is no indication that these decisions are based on performance. One local organization, sympathetic to charter schools, recently lamented that when four low-performing schools came up for renewal, they dodged closure by shopping around for a charter authorizer who would take them. The authorizers themselves have financial incentives to keep schools open and may know little about what is happening in the schools themselves if their offices are located hundreds of miles away.

In addition to running against the basic logic behind charter school reform, allowing low-performers to keep operating is bad for students. In our research on New Orleans, we found very large improvements in student outcomes when charter schools were taken over based on low performance and replaced with others. While it can be hard for authorizers to predict which charter schools will be effective, this only reinforces the importance of following up with accountability after approved schools open, to ensure that students are being well served. Authorizers also need to ensure transparency in the use of funds and ensure that charter managers and board members do not have conflicts of interest, which has been a real problem in Detroit.

A second basic principle of charters is that they allow families to choose schools. However, one of the main concerns with charters is that some act like private schools, picking their own students and pushing out those they don’t want. In other cities, charters have been found to have secret lists of students they are trying to get rid of. Unfortunately, the lack of oversight in Detroit makes the extent of cherry-picking hard to measure. We only learn about it when there are inside whistleblowers. The key point is that the design of the city’s charter system creates incentives for cherry-picking and does nothing to prevent it.

A third issue is that there are certain aspects of school systems that have to be coordinated. For example, if families are going to choose schools, they need good information from third parties that are working on a citywide basis. This lack of coordination is partly why Detroit’s charter system received a D on the Brookings Choice and Competition Index.

(2) The failure of Detroit charter schools to improve student outcomes.

At first glance, it might seem that, despite all this criticism, Detroit charter schools are producing decent results. If you look at Figures 1 and 2 in the report on Detroit that I cited from Stanford’s CREDO research center, you will see that the city’s charter schools do look somewhat better than the comparison traditional public schools. But there are four problems with taking these results literally.

First, given the lack of oversight in Detroit and evidence from other cities that some charter schools cherry-pick their preferred students, these results may make Detroit’s charter schools look better than they are. There is no way the CREDO analysis, or really almost any analysis, could account for this. If it’s happening, then the charter effects on achievement are inflated.

Second, given the potential concerns about schools cherry-picking students and other concerns with high-stakes testing, it’s worth looking at other evidence on academic achievement. Among the 21 mostly low-performing urban districts participating in the urban NAEP test in recent years, Detroit’s growth was below the group average, even though many of these districts were not undergoing any major governance reforms. This reinforces concerns that the CREDO results may reflect cherry-picking.

The third issue really gets us into the research weeds. Estimates of policy effects are based on comparisons between control and treatment groups. Normally, we researchers look only at the difference—the effect estimate—but we are also coming to realize that we have to pay separate, careful attention to the control group itself. In this case, Detroit is the lowest-performing school district in the country. Incredibly, based on the federal urban NAEP test, it is 0.3 standard deviations (approximately 12 percentile points) below the next-lowest-performing district, Cleveland. For reformers, the first reaction to this might be, “Yes, and this is why we need to take radical action in Detroit!” I agree, but the point here is quite different: the extraordinarily low standing of the city as a whole, to the degree it is caused by low performance of traditional public schools, should make it easier to improve student outcomes when trying something new. The worse your comparison group, the better you look, and it probably doesn’t get worse than Detroit. Put differently, if we could put Detroit charters on a national scale, they would likely be well below average.
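For readers who want to see where that percentile figure comes from, here is a rough back-of-the-envelope check in Python. It assumes normally distributed scores and a gap measured near the middle of the distribution; that is an illustrative approximation, not NAEP’s own scale conversion.

```python
# Rough check: a 0.3 standard-deviation gap near the middle of a normal
# distribution corresponds to roughly 12 percentile points.
# (Illustrative approximation only, not NAEP's actual scaling.)
from scipy.stats import norm

gap_sd = 0.3
percentile_points = (norm.cdf(0.0) - norm.cdf(-gap_sd)) * 100
print(round(percentile_points, 1))  # about 11.8
```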

For the same reason, comparing Detroit and New Orleans in the CREDO framework is questionable. In New Orleans, where essentially all schools are charters, the comparison schools have to come either from a handful of district schools (which aren’t really traditional public schools) or from the suburbs—whereas, in Detroit, the comparison schools are apparently within the city. The broader point is that it’s not just that the charter schools in Detroit and New Orleans differ; the comparison groups are also very different in ways that make it very hard to compare these two cities in the CREDO analysis. (To be clear, none of this is a critique of CREDO. They have taken on a herculean task, and their work is extremely important, even if sometimes hard to interpret by itself.) If you want to understand the much more positive effects of the New Orleans reforms, take a look at a summary of evidence on a wide range of metrics.

Fourth, and perhaps most important of all, whatever effect we think occurred for a given city, we can only interpret it in comparison with the other options. To take a generic example, if all programs have positive effects, but Program A’s effects are twice as large as Program B’s, then why choose B? There does not seem to be much dispute that we can do much better than DeVos’s free market approach.

(3) Recent convincing evidence of the negative effects of school vouchers.

There is no better place to look for evidence about DeVos’s free market approach to education than the most unregulated strategy available: private school vouchers. This is also an important topic because DeVos founded and now directs a national organization advocating for vouchers and other private school choice programs. In the group’s own words: “The American Federation for Children is the leading national advocacy organization promoting school choice, with a specific focus on advocating for school vouchers, scholarship tax credit programs and Education Savings Accounts.”

I have written about voucher results from Louisiana extensively before, especially the large negative effects that the state’s program, and a similar program in Ohio, have had on the achievement of students using them to move to private schools. We don’t know for sure why these programs had negative effects. Some argue (not persuasively in my view) that the negative effects suggest that the programs are actually over-regulated. Also, while some might point to the fact that both programs show signs of helping lift achievement in traditional public schools a bit by increasing competition between schools, I don’t think anyone would argue that we should sacrifice the achievement of students using vouchers in order to help others. “Do no harm” are words to live by.

One likely reason the Louisiana and Ohio voucher results were disappointing is that the market-based approach to school reform seems to work better in urban areas—and the voucher programs were statewide. This reflects the fact that the right approach to school improvement varies by community. Some areas need a new form of governance, while, for others, the traditional public school district no doubt makes the most sense. Given the advocacy of her organization, the American Federation for Children, for statewide vouchers and tax credit programs, it seems unlikely that DeVos would attempt a more focused approach to reform.

Negative effects are also common for virtual (online) schools, which DeVos has also supported. It’s important to emphasize that it’s rare to find negative effects of any educational program, especially ones this large. The fact that we are finding them in DeVos’s signature programs is worrisome.

A Triumph of Ideology

I am an economist and certainly support free markets in general, but it is widely recognized in my field that markets sometimes fail and that external oversight and accountability can go a long way toward making them work better. This is especially true of education, where there are all sorts of reasons to expect market failures and where equal educational opportunity is a key objective.

DeVos clearly rejects this view. After years of wrangling in Michigan, the Republican-led state was finally able to pass a law that added some of the needed accountability and oversight provisions for the state’s charter schools. Though these provisions fall far short of real accountability, they are a step in the right direction. But contrary to what some have said in defense of DeVos, the law’s passage was not a sign of her lack of influence or of her support for charter accountability. Her level of influence is well established, and after the Michigan law passed the state House of Representatives, a press release from DeVos’s group, the Great Lakes Education Project, noted that lawmakers “were successful tonight in preserving choices and options for [students] and their families […] Despite efforts by some of the elites and big city bosses who believe they know what is best for children.” In other words, in passing this bill, DeVos’s group had prevented stronger accountability measures, thus “preserving” the system essentially intact. By all accounts, including their own press release, they fought against an accountability plan developed by a broad-based, bipartisan group of educators and policymakers, and on top of it, wrote the group off as “big city bosses.”

Perhaps the strongest indication of DeVos’s beliefs on accountability comes from the “model legislation” on her group’s web site. I read through two of these in full, one for a tax credit and another for a universal voucher program. In both cases, the only provision for academic accountability would require states to publicly report students’ academic results and satisfaction surveys. That is a start, though all of these data would be self-reported by the schools, apparently without verification. Moreover, there are no provisions in the model legislation for the states to actually do anything if schools are failing. This hands-off approach is consistent with the group’s failed efforts in Detroit.

The evidence simply doesn’t support the free market approach DeVos has championed in Michigan and in her national advocacy efforts. While each piece of evidence I’ve discussed has some limitations, collectively, they present a strong case against DeVos’s ideas. To argue that she has been even moderately successful with her approach, we would have to ignore the legitimate concerns of local and national charter reformers who know the city well, and ignore the possibility that Detroit charters are taking advantage of loose oversight by cherry-picking students, and ignore the very low test score growth in Detroit compared with other cities on the urban NAEP, and ignore the policy alternatives that seem to work better (for example, closing low-performing charter schools), and ignore the very low scores to which Detroit charters are being compared, and ignore the negative effects of virtual schools, and ignore the negative effects of the only statewide voucher programs that provide the best comparisons with DeVos’s national agenda.

That’s a lot to stomach. I don’t usually step out so strongly for or against a given idea—or in this case, a nominee. I only do so when I believe the evidence is overwhelming. This is one of those times.

In addition to evidence, there is an important place for ideology. I agree, for example, with many charter and voucher advocates that freedom of choice is an important value. But that is not a justification to confirm DeVos. We can have choice and government oversight, too—in fact, as we have found in New Orleans, oversight can lead to more and better choices. The best evidence suggests that, at least in urban areas, a regulated charter sector can substantially improve results, much more than we have seen in Detroit. Moreover, maintaining a significant role for government will help ensure that we meet crucial public ends that markets alone are unlikely to support—especially equal opportunity for all students. At the very least, the government has a fiduciary responsibility to oversee the use of public revenues and ensure that schools are using them well.

This brings us full circle back to my New York Times piece. I hope it’s clear now why I concluded that the DeVos nomination was a “triumph of ideology over evidence.” Where school reform is needed, choice with accountability works better to achieve the wide range of goals we have for education than a free market ideology that relies on choice alone.

There are several criteria we might use to judge DeVos’s candidacy, but if the goal is to improve measurable student results, the evidence votes no.

— Doug Harris

Douglas N. Harris, a professor of economics at Tulane University, is the founding director of the Education Research Alliance for New Orleans and a nonresident senior fellow at the Brookings Institution.

Should Louisiana Eliminate Its Voucher Program? https://www.educationnext.org/should-louisiana-eliminate-its-voucher-program/ Mon, 07 Mar 2016 00:00:00 +0000 http://www.educationnext.org/should-louisiana-eliminate-its-voucher-program/ How long should we wait to see whether the program is working? That is a question that only lawmakers can answer.

Of all education policy debates, few draw stronger opinions than school vouchers. The issue has almost become a litmus test for Democrats and Republicans. Given the strong passions and interests, I have long argued that this was a topic where research was unlikely to really influence the debate.

Maybe I was wrong. Last week, we released four Technical Reports and a Policy Brief on the Louisiana voucher program, led by the prolific University of Arkansas voucher researcher Patrick Wolf. Days later, there were open calls to eliminate the program. The main driving force behind this pressure is the special session of the Louisiana Legislature to address the state’s major fiscal crisis, but the evidence is being cited. Is that justified? Does the evidence mean the program should be eliminated?

Below, I summarize the evidence and try to help put this in perspective for policymakers as they weigh their options:

Effects on Achievement Among Voucher Users. Students who use the voucher to enroll in private schools end up with much lower math achievement than they would have otherwise, losing as much as 13 percentile points on the state standardized test after two years. Reading outcomes are also lower for voucher users. While the size of the reading effect remains essentially constant across the two years, it is statistically significant only in the first year. In short, it looks like these students had a bad first year, then regained some, but certainly not all, of the losses in the second year.

Competitive Effects on Achievement Among Public School Students. The program may have modestly increased academic performance in public schools, consistent with the theory behind school vouchers that they create competition between public and private schools. That said, most of the estimates are statistically insignificant, except for public schools located quite close to private schools participating in the program.

Effects on Non-Achievement Measures. There is no evidence that the voucher program has positive or negative effects on students’ non-cognitive skills, such as “grit” and political tolerance. (On this point, the authors conclude that the data are not up to the task of identifying effects. So, I am going to follow suit and not say anything more on this.)

Effects on Racial Integration. The program reduced the level of racial segregation in the state. The vast majority of the recipients are black students who left schools with student populations that were disproportionately black relative to the broader community and moved to private schools that had more white students.

The way we interpret these multiple findings depends a bit on which outcomes we care about most, but it’s hard to see this as anything but bad news for Louisiana vouchers. While the positive competitive effects partially offset the negative effects for users, I think most would agree that we shouldn’t sacrifice the achievement of one group of students by sending them to low-performing private schools merely to give public schools an incentive to improve. Also, while I personally think segregation is an important issue, it is not one of the objectives of the program.

So, how should we look at this evidence? Interestingly, even those skeptical of vouchers did not call for their repeal in the panel discussion we held when we released the reports. But here are the key points that I think policymakers should understand as they consider the future of the program:

First, whether we look at the first or the second year, these are the worst effects I have ever seen, for vouchers or anything else. Remember, the first-year effect was -24 percentile points, and this “improves” to only -13 percentile points in the second year.

Second, the results do show improvement in the second year, but what this means for the future is unclear. If the improvement continues, then perhaps, eventually, the results will turn neutral or positive. This is what happened in New Orleans with the charter-focused post-Katrina reforms as the RSD began closing low-performing schools. The same could happen here since similar accountability is built into the voucher program. That said, the initial results of the charter reforms were not nearly as negative as these voucher results, so it’s hard to imagine the voucher story ending as well.

But this is still speculation. Wolf and his colleagues plan to study the program going forward, including adding other important outcomes such as high school graduation and college-going. Getting these longer-term results is especially important given what I wrote earlier about how the negative effects are probably driven by the misalignment between the state test and the private school curriculum. Test scores alone may not be the best way to measure the effectiveness of any education program, and that problem may be worse with vouchers.

The researcher in me would like to see the program continue at least a bit longer so we can see whether things improve. Public schools are often criticized, unfairly, when they try something and it doesn’t generate immediate benefits. As Rick Hess has written, they end up spinning their wheels in an endless cycle of new programs. Policymakers also need to consider the possible effects on students currently in the program. Completely cancelling the program would force students to bounce back and forth between schools, which could be bad for everyone. On the other hand, we want policymakers to use evidence to inform their decisions, so we can’t tell them to wait forever, especially with the initial poor results.

How long should we wait to see whether the program is working? That is a question that only lawmakers can answer.

– Doug Harris

This first appeared in Education Week’s Urban Education: Lessons from New Orleans on March 2, 2016. Reprinted with permission from the author.

The First Negative Effects of Vouchers and the Predictable Misinterpretation https://www.educationnext.org/the-first-negative-effects-of-vouchers-and-the-predictable-misinterpretation/ Mon, 11 Jan 2016 00:00:00 +0000 http://www.educationnext.org/the-first-negative-effects-of-vouchers-and-the-predictable-misinterpretation/ Why are the effects so negative when prior studies have found either no effect or positive effects? Good question. Unfortunately, we know much less about reasons than some have suggested.

An NBER Working Paper made waves last week: it is the first study to find large negative effects of school vouchers on student achievement. It’s well done. I believe it.

The question is, why? Why are the effects so negative when prior studies have found either no effect or positive effects? Good question. Unfortunately, we know much less about reasons than some have suggested.

The libertarian Cato Institute chimed in quickly: “Although not conclusive, there is considerable evidence that the problem stemmed from poor program design. Regulations intended to guarantee quality might well have had the opposite effect.” Others on the pro-voucher side had similar responses–not surprising for those who want the government as uninvolved as possible.

These interpretations are quite premature, however. The only real “evidence” consists of this single case of Louisiana with heavier regulation and negative effects. I would make the usual warning that correlation isn’t causation, except calling this a “correlation” is a bit generous when the pattern is identified from a single example. Not only do we need more evidence on regulation effects, but we also have to consider evidence on alternative explanations.

There are two other obvious explanations for why the results are worse here in Louisiana. First, Louisiana has especially aggressive accountability for the public schools. In New Orleans, regulation and accountability seem to have contributed to large positive effects on achievement. Why does this matter? Because the voucher studies compare students who won a voucher to those who did not, and those not receiving a voucher very likely ended up in the new and improved public/charter system. Families have to be low-income to receive a voucher in Louisiana, so public schools represent the main plausible alternative.

Put differently, even if the quality of voucher-participating private schools was identical in every city, they would all show different effects depending on the academic effectiveness of the publicly funded alternatives. This interpretation creates a tension within the reform community because it potentially pits public school reforms like charters and school closure against reforms like vouchers. The better one looks, the worse the other looks.

A second possibility is that the curriculum in the private schools in Louisiana simply doesn’t align with the state standards captured on the tests. Even with the “heavy regulation” on private schools that voucher supporters decry, the public schools have a much larger incentive to align their instruction than the private schools accepting vouchers. What makes Louisiana different from the programs studied elsewhere is not just that the state regulates more heavily, but that voucher recipients take the state test, not some off-the-shelf norm-referenced test of the kind private schools normally use. Weaker alignment between the curriculum and the test in Louisiana would also tend to make the results look worse.

The regulation explanation is a third possibility. As I argued previously, regulation probably does reduce the number of private schools willing to participate, especially higher-performing private schools. On the other hand, part of the point of those regulations is to keep out low-performing private schools. In fact, these are the same kinds of regulations that have apparently generated such positive effects in the New Orleans public school reforms.

The fact that the NBER study only covers the first year is another factor. The state of Louisiana recently banned four schools from receiving new voucher students because the scores of prior voucher recipients had been so low. Eventually, this seems likely to make the results look less negative. Again, this is also what happened with the public school reforms. There were no effects of the reforms at first, but they improved quickly as low-performing schools were closed. (We have a study coming out on school closure effects later in the spring.) The influence of the regulations on public/charter schools may be different than on private/voucher schools, but the pattern here is noteworthy.

The reform community sure has itself tied up in knots on this one. Even among those who support choice and non-governmental providers, there is disagreement over the rules of the road. Unfortunately, at this point, the Louisiana case tells us little about regulation effects. We can only speculate on the reasons why the Louisiana voucher effects have been so negative, but the fact that they are negative is important by itself.

Next month, the Education Research Alliance for New Orleans will release a series of studies that will provide a richer picture of the situation. We also just released a study on how families that are eligible for vouchers choose between public and private schools. The authors will provide a guest blog on that in the near future.

—Douglas N. Harris

This first appeared in Education Week’s Urban Education: Lessons from New Orleans on January 6, 2016. Reprinted with permission from the author.

Many Options in New Orleans Choice System https://www.educationnext.org/many-options-new-orleans-choice-system/ Tue, 04 Aug 2015 00:00:00 +0000 http://www.educationnext.org/many-options-new-orleans-choice-system/ School characteristics vary widely

As the school-choice movement accelerates across the country, several major cities—including Cincinnati, Detroit, Memphis, Milwaukee, and Washington, D.C.—are expanding their charter-school portfolios. Historically, communities have used charter schools not only in hopes of spurring traditional schools to improve but also to increase the variety of options available to families. If family preferences vary, and schools are given the autonomy to innovate and respond to market pressures, the theory holds, then we should expect a rich variety of schools to emerge.

A parent learns about KIPP New Orleans Schools during the annual New Orleans Schools Expo

But does this theory hold up in practice? First, it is not clear that parents do have distinct preferences when shopping for schools. If parents are uncertain about their child’s skills, they may play it safe and seek out a generic “basket” of school services. Second, charter schools always face the possibility of closure for low performance, and this threat may pressure the schools to avoid risk by imitating successful charter models. Government regulations might also inhibit a school’s capacity to offer a unique program. And finally, large charter management organizations (CMOs) may attempt to leverage economies of scale by replicating a single model at multiple schools. Conceivably, the market strategies of charter schools and large CMOs, rather than the needs of families and students, could drive the market, leading to more imitation and less diversity.

The city of New Orleans offers an ideal laboratory for examining how much true “choice” resides in a public school market. With 93 percent of its public-school students attending charter schools, New Orleans has the largest share of students enrolled in charters of any U.S. city. In some ways, the New Orleans system is unique, having been launched in the wake of a terrible disaster. However, the city’s student population—majority minority and mostly eligible for lunch subsidies—is typical of other urban centers where school reform is growing. Furthermore, the CMOs in New Orleans are supported by many of the same national foundations that support charter schools across the U.S., suggesting that similar patterns might emerge in other expanding charter markets. This study examines public schools in the Big Easy, investigating how—and how much—schools have differentiated themselves in a citywide school-choice system.

A New Approach

Previous studies have focused on the differences between charter schools and district schools, treating all charters within a community as essentially alike. In effect, these studies take a “top-down” approach, assuming that the governance of the school (charter versus district) determines the nature of the school. This approach may be appropriate where charter schools are few and their role is to fill service gaps. By contrast, our study adds a “bottom-up” approach, focusing not on governance but on salient school characteristics such as instructional hours, academic orientation, grade span, and extracurricular activities—factors that determine what students and families actually experience.

More than 30 different organizations operate charter schools in New Orleans.

We ask, are New Orleans schools homogeneous or varied? Is this answer different when we use the bottom-up approach based on school characteristics rather than the top-down analysis based on school governance? And finally, to what degree is the New Orleans school market composed of unique schools, multiple small segments of similar schools, and larger segments of similar schools?

Grouping schools by key characteristics, we find considerable differentiation among schools in New Orleans. Furthermore, schools operated by the same CMO or governed by the same agency are not necessarily similar to one another. In fact, the differences and similarities among schools appear to be somewhat independent of what organizations and agencies are in charge. Overall, we find that the market comprises a combination of large segments of similar schools and smaller segments of like institutions, but also some schools that are truly unique.

A Charter School “Laboratory”

In 2003, the Louisiana Department of Education (DOE) created the state-run Recovery School District (RSD) and empowered it to take over failing schools. At the time, only a handful of charter schools were operating in New Orleans. In the aftermath of Hurricane Katrina in 2005, city and state leaders used the RSD to take over all underperforming schools in the city. The local school board continued to manage a small number of high-performing schools, some of which have selective admissions.

Over the next several years, the RSD contracted out the schools under its control to CMOs, including both single-school operators and larger CMO networks. Policymakers also expanded school choice by eliminating geographic attendance zones for students: students were henceforth free to enter lotteries for any open-enrollment school in the city. Open-enrollment schools in New Orleans, as well as some selective-admissions schools, provide free transportation to students across the city.

The city’s charter schools are governed by three different agencies: the state’s Board of Elementary and Secondary Education (BESE), the RSD, and the Orleans Parish School Board (OPSB). The schools are managed by more than 30 school operators. This milieu creates the potential for a wide variety of schools to emerge in New Orleans. But as noted, regulations and accountability demands could stifle diversity. For example, the RSD and DOE have strict test-based requirements for charter contract renewal; 45 schools have been closed, merged, or turned over to other operators since 2007. Also, state regulations set restrictions in some areas but provide autonomy in others. For instance, DOE establishes standards for teacher preparation and certification, but charter schools are allowed to hire uncertified teachers. All schools, including charters, are required to participate in the statewide teacher-evaluation system. The net effects of these policies on school autonomy and differentiation are unclear.

Data

Our study focuses on the 2014–15 school year; by that time, 100 percent of the RSD schools were operated by CMOs (including the schools formerly run directly by the RSD). The OPSB continued to operate a small number of district schools and was expanding its own charter-school portfolio. A final small group of charter schools continued under direct supervision of the BESE.

Our data come from the spring 2014 edition of the New Orleans Parents’ Guide to Public Schools, published annually by a local nonprofit organization and distributed free of charge. This publication is the primary formal source of information for parents choosing schools in New Orleans.

From the guide, we selected eight characteristics that reflect decisions schools make when designing their programs:

· whether the school has selective admissions or open enrollment

· whether the school mission is “college prep”

· whether the school has a specific curricular theme (e.g., math, technology, or arts)

· number of school hours (annual total)

· number of grades served

· number of sports

· number of other extracurricular activities (“extras”)

· number of student support staff (nurses, therapists, social workers, etc.).

We also considered measures that are not in the Parents’ Guide. For example, we ran some analyses with the number of suspensions and expulsions, as an indicator of discipline policies. This did not change the clustering. For other categories, such as instructional approach, we did not have good measures.

Note that not all New Orleans schools have autonomy over their admissions policies. Selective admission is permitted at OPSB district and charter schools and at BESE charter schools, but not at RSD schools. Any school can attract or repel certain student populations through the menu of enhanced student-support services that it offers, however. For example, schools with an on-site speech therapist might be more attractive to parents of children with individualized education plans (IEPs) requiring these services. Our measure of the intensity of student support services may therefore help to identify open enrollment schools that target a distinct student population.

Methods

A potential student speaks with a Sylvanie Williams College Prep representative at the Schools Expo hosted by the Urban League of Greater New Orleans

The simplest version of the top-down theory predicts that the number of differentiated “clusters” in a public school market will correspond to the number of governing agencies. New Orleans has two authorizers: the OPSB and BESE. Both authorizers are also the governing agency for some of their schools. BESE also authorizes the schools governed by the RSD, which are low-performing schools taken over by the state. If these three governing agencies have singular “tastes” for certain kinds of schools, we should observe high similarity among schools that fall under the same agency, and differences across governing agencies. To see the extent to which schools differ across the three governing agencies, we first group the schools by governing agency and check for differences along the characteristics listed earlier.

Next, we expand the top-down groupings to allow for additional differences between district-run schools, independent charter schools, and charter network schools (run by a CMO that operates multiple schools). This creates five groups of schools in New Orleans:

· OPSB district schools

· OPSB charter schools

· RSD charter network schools

· RSD independent charter schools

· BESE charter schools.

We then compare the results of this exercise to those obtained when we ignore governance arrangements and instead group schools from the bottom up, based on their characteristics alone. To do this we use cluster analysis, a statistical method designed to group objects of study (in this case, schools) based on similar qualities.

With cluster analysis, we can specify the number of groups that will be formed. To start, we first allow schools to form either three or five clusters to test whether similar governance predicts membership in the same cluster.

We then allow schools to form more clusters (up to 10) and select the best grouping based on meaningful within-group similarities and across-group differences. This strategy tests for the possibility of market segments that are not described in the top-down theory and allows us to identify schools with unique combinations of the measured characteristics (niche schools).
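To make the bottom-up grouping concrete, here is a minimal sketch of a cluster analysis on a toy table of school characteristics, written in Python with pandas and scikit-learn. The data values, the 0/1 coding of the yes/no measures, and the choice of k-means are illustrative assumptions; the study specifies only that it used cluster analysis on the eight characteristics listed above.

```python
# A minimal sketch of the bottom-up clustering approach, using made-up data.
# The school characteristics, 0/1 coding, and choice of k-means are illustrative
# assumptions; the study specifies cluster analysis on eight characteristics but
# not a particular algorithm.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

schools = pd.DataFrame({
    "selective":     [0, 0, 1, 0, 1, 0, 0, 1],   # yes/no measures coded 0/1
    "college_prep":  [1, 1, 0, 1, 0, 0, 1, 0],
    "theme":         [0, 1, 1, 0, 1, 0, 0, 1],
    "school_hours":  [1510, 1490, 1300, 1550, 1320, 1400, 1530, 1290],
    "grades_served": [9, 9, 13, 9, 7, 9, 9, 13],
    "sports":        [2, 3, 6, 2, 5, 3, 2, 7],
    "extras":        [4, 5, 9, 3, 8, 5, 4, 10],
    "support_staff": [3, 3, 2, 4, 2, 3, 3, 2],
})

# Put hours, counts, and 0/1 indicators on a comparable scale before clustering.
X = StandardScaler().fit_transform(schools)

# Form three- and five-cluster groupings, mirroring the top-down comparison,
# then inspect which schools land together.
for k in (3, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"{k} clusters:", labels.tolist())
```

In practice, groupings of different sizes would then be compared on within-cluster similarity and across-cluster differences, which is the logic behind testing three, five, and up to ten clusters.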

Results

We focus separately on 56 elementary schools and 22 high schools included in the Parents’ Guide. New Orleans school operators can select each school’s grade span, and it is quite common for elementary schools to serve grades K–8 and uncommon to have schools with just middle school grades (5–8). Therefore, we define elementary schools as those with any grade K–4, and high schools as those with any grade 9–12. A small number of schools that serve only middle-school grades were not included in this study.

On average, elementary schools enroll 540 students; 86 percent of students are eligible for free or reduced-price lunch, and 87 percent are black. Ninety-five percent of elementary schools are charter schools, 50 percent have a college-prep mission, 43 percent have a specific curricular theme, and 9 percent use selective admissions. For high schools, average enrollment is 550 students, with 78 percent of students eligible for free or reduced-price lunch; 84 percent of students are black. Ninety-six percent of high schools are charter schools, 52 percent have a college-prep mission, 57 percent have a specific curricular theme, and 22 percent use selective admissions.

Grouping from the Top Down: Elementary Schools. When we examine school characteristics by governing agency alone (three groups), we find that the groups differ by a statistically significant amount on only one of the five continuous variables we examine. Specifically, schools governed by the RSD have more school hours. When we analyze school characteristics by governing agency and school type (five groups), we find no statistically significant differences on these same variables. We do observe some modest differences across groups in the average values of the three yes/no variables (open enrollment or selective admissions, curricular theme or not, and college prep or not). Overall, however, results from the top-down approach suggest that governance arrangements do not correlate with notable differences in school characteristics.

High Schools. When we group the 22 high schools by governing agency (three groups) and by governing agency and school type (five groups), we find in both cases that the clusters differ on the number of sports offered. In the three-group structure, clusters also differ on the number of student support staff, while in the five-group structure, they differ on grade span.

Grouping from the Bottom Up: Elementary Schools. Despite modest differences between elementary schools grouped by governing agency and school type, we find that no two individual schools are identical on all eight variables. The two schools that are most similar to one another overall are a pair that includes an OPSB district school and an RSD charter network school. The second most similar pair includes an RSD charter network school and an OPSB charter school. These groupings provide initial evidence that the most similar schools across all characteristics do not share the same governing agency and type.

We first cluster the schools into three groups to further test the top-down assumption that school characteristics will be roughly aligned with the governing agency—the OPSB, the RSD, or BESE. In other words, we let the data determine the school groupings that produce the highest degree of similarity within groups and see whether the schools within those groups tend to have the same governance arrangement.

[Figure 1]

Figure 1 shows that schools can exhibit similar characteristics but not share a governing agency. For example, cluster 1 is composed of schools that share a college-preparatory mission but represent two of three governing agencies, although most (28 of 38) are RSD charter network schools. Thirteen schools that share enough similarities to form a second cluster also include RSD and OPSB schools, but most are RSD independent charter schools. The third cluster of five schools includes three OPSB and two BESE charters that have selective admissions and a specific curricular theme.

Statistical analysis suggests there are no meaningful differences described by this grouping other than the differences in admissions, theme, and mission mentioned above. Overall, these results suggest that the RSD governs schools that are more similar to one another than those governed by the OPSB. But we are able to reject the hypothesis of the top-down theory that the governing agency predicts either similarities within school groups or differences across school groups; we also find evidence of differentiation within school operators.

We next test five groupings and again find that schools do not cluster by the combination of governing agency and school type (results not shown). The first cluster includes 19 schools, all but two of which are RSD charter network schools. However, RSD charter network schools are also found in three of the other four clusters. The other governing agency-school type combinations also appear in multiple clusters, except for the two selective-admissions BESE charter schools that form a cluster with three selective-admissions OPSB charters. Six of nine RSD independent charter schools are grouped in one cluster, but that cluster also contains RSD charter network schools and OPSB charter schools. OPSB charter schools appear in four of the five clusters.

Interestingly, even the CMO does not frequently predict cluster membership. While some larger CMOs have all their schools in a single cluster, KIPP, ReNEW, Algiers Charter School Association, and other charter networks have schools in multiple clusters.

Finally, the three OPSB district elementary schools, which might be expected to be the most similar because they are the only New Orleans schools operated by a government bureaucracy, also appear in multiple clusters. OPSB district schools are clustered with schools with several other governance arrangements, including RSD charter network schools and RSD independent charter schools.

Next we examine how the five clusters differ on the five continuous variables. The groups are not statistically different in extracurricular activities, sports, student support staff, or grade span. The groups do vary across school hours, with the two clusters composed of college-prep elementary schools reporting more hours than other clusters. Clustering to five groups explains only 38 percent of total variance in continuous clustering variables.

Overall, these results suggest that grouping schools by governing agency and type does not capture the market structure in New Orleans. Although we observe that RSD-governed schools tend to cluster together, there are multiple governance-type combinations represented in each cluster, and a CMO can have schools in up to three different clusters. Thus, the top-down theory of three or five groups appears to be inadequate to identify meaningful differences across schools.

We next characterize the market by allowing the clusters to emerge from the data. When we allow 10 groups to form, we find that the groups are statistically different along all continuous clustering variables, except the number of student support staff (see Figure 2).

[Figure 2]

Cluster 1 contains 19 RSD charter schools with more-than-average school hours and a college-prep mission. Cluster 2 contains two OPSB charter schools and 10 RSD charter schools. The schools in this second cluster have near-average values for all continuous variables, and they do not have a curricular theme or college-prep mission.

Six other clusters capture additional nuance in the supply of schools. All the schools in these six clusters have at least one school with a curricular theme, and three of the six clusters contain only schools that also have a college-prep mission. However, the clusters vary across all the continuous variables except student support staff.

Finally, two elementary schools appear as outliers in the analysis, suggesting they occupy niches in the market. The first is a selective-admissions OPSB charter school with a curricular theme, fewer school hours, more extracurricular activities and sports, and a large grade span. The second is an RSD charter school with no curricular theme or college-prep mission, but higher than average numbers of extracurriculars, sports, and student support staff.

Overall, this more-flexible clustering strategy creates groupings that are more similar within group and more different across groups than clustering based on governing agency and school type. We observe that a single CMO can manage schools that differ from each other, and that similar schools can be governed by different agencies and managed by different organizations. We find that RSD schools are more likely to cluster together than are OPSB schools, which often occupy smaller market segments. Finally, we see that elementary schools cluster in groups with varied levels of school characteristics—except for student support staff. We find little evidence that any New Orleans elementary schools differentiate themselves by offering more in-house support staff than other schools.

[Figure 3]

High Schools. The 22 New Orleans high schools include 12 RSD, 7 OPSB, and 3 BESE schools. There are 10 RSD charter network high schools, 2 RSD independent charter high schools, 5 OPSB charter high schools, 2 OPSB district high schools, and 3 BESE charter high schools. These groups vary statistically in the number of sports offered and student support staff, and also grade span.

[Figure 4]

First, we use the school characteristics data to form three clusters. Compared to elementary schools, high schools appear to have more differences among them, and the maximum degree of difference is also greater. Overall, these findings do not support the assertion that schools vary by governing agency (see Figure 3). Re-clustering into five groups also did not create groupings that reflect the combination of governing agency and school type (results not shown).

Using the flexible clustering strategy, we see a mixture of governing agency and school type across four clusters, with six outliers (see Figure 4). The largest cluster includes six high schools—one OPSB school and five RSD charter network schools run by four different CMOs. The second cluster is also diverse, with five total schools from three governing agency and type combinations. We also observe charter network schools and the OPSB schools in different clusters. The six outlier high schools include one OPSB district school, two OPSB charters, two BESE charters, and one RSD independent charter school.

Five of the six niche schools are selective-admissions schools. Five of them have a curricular theme (such as science and math, intercultural studies, or performing arts), and two have a college-prep mission. Outliers tend to have more extracurricular activities, shorter school hours, and larger grade spans. Overall, outliers are much more common among high schools, and every selective-admissions high school has its own niche.

Conclusion

New Orleans presents the opportunity to study an urban school system where charter schools comprise more than 90 percent of school campuses and total student enrollment. We find that school characteristics vary within both governance arrangements and individual CMOs, and that the most similar schools are often governed by different agencies and have different managing organizations. We also found a greater degree of market differentiation than would be expected from a top-down approach. Our methods reveal 10 distinct types of elementary schools comprising large segments of similar schools, small segments of two to three schools, and niche schools. Among high schools, we found four segments (both large and small) and a larger number of niche schools. This may reflect more specialized interests among older students.

Charter schools governed by the RSD are often, but not always, similar to each other, with emphasis on college-prep missions and more school hours. It is unclear if this reflects governing agency preferences or the fact that RSD schools are, by definition, previously low performing and therefore may be more constrained by test-based accountability. Moreover, schools within the same CMO network are often, but not always, similar to each other. Amid this similarity, we also find that within the RSD, CMOs can and do create diversified portfolios of schools.

Schools outside of the RSD are more likely to be diverse. For example, OPSB charter schools differ considerably from each other and often serve a market niche. Particularly at the high-school level, charter schools governed by the OPSB or BESE create niche markets with a curricular theme, while different CMOs come together to form a segment of similar schools, often sharing a college-prep mission. In the New Orleans context, this suggests that governing agencies may be more willing to provide unique offerings when they manage higher-performing schools with little risk of sanctions related to standardized testing. Furthermore, uniqueness often comes with selective admissions, which suggests that access to diverse school choices is greater for students who, through ability or parent involvement, can navigate a complex system of admissions rules and testing.

The small number of schools that remain under the bureaucratic control of the OPSB play a notable role in the school market. These schools appear in smaller clusters or stand alone as different from most charter schools. They also do not typically cluster with one another, suggesting that even a bureaucratic system can offer diverse options in a school-choice system.

Our study indicates that New Orleans parents can choose from among schools that vary on several key dimensions, and that these differences are not necessarily driven by the decisions of charter governing agencies or large CMOs. Even within large CMOs, we found significant variation among schools; for example, the expansion of KIPP in New Orleans to manage five elementary campuses did not result in five schools with identical characteristics.

Finally, we note that much of the market differentiation in New Orleans comes from schools authorized or run by either the Orleans Parish School Board or the Board of Elementary and Secondary Education. Having multiple governing agencies may be important for market differentiation.

As more cities expand school choice, we will have the opportunity to compare New Orleans to other markets to see how factors such as economies of scale, regulations, and demand influence the amount and quality of differentiation. We will also be able to observe the evolution of public school markets over time, to see if competitive pressures result in more differentiation or a drift toward imitation—and how such trends affect student outcomes.

Paula Arce-Trigatti is a postdoctoral fellow in economics at Tulane University and the Education Research Alliance for New Orleans. Douglas N. Harris is professor of economics at Tulane University and founder and director of ERA-New Orleans. Huriya Jabbar is assistant professor of education policy at the University of Texas at Austin and research associate at ERA-New Orleans. Jane Arnold Lincove is assistant research professor of economics at Tulane University and associate director of ERA-New Orleans.

For more information on New Orleans, read “Good News for New Orleans: Early evidence shows reforms lifting student achievement,” by Douglas N. Harris, and “The New Orleans OneApp: Centralized enrollment matches students and schools of choice,” by Douglas N. Harris, Jon Valant, and Betheny Gross.

This article appeared in the Fall 2015 issue of Education Next. Suggested citation format:

Arce-Trigatti, P., Harris, D.N., Jabbar, H., and Lincove, J.A. (2015). Many Options in New Orleans Choice System: School characteristics vary widely. Education Next, 15(4), 25-33.

The New Orleans OneApp https://www.educationnext.org/new-orleans-oneapp/ Tue, 04 Aug 2015 00:00:00 +0000 http://www.educationnext.org/new-orleans-oneapp/ Centralized enrollment matches students and schools of choice

In most of the U.S., the process for assigning children to public schools is straightforward: take a student’s home address, determine which school serves that address, and assign the student accordingly. However, states and cities are increasingly providing families with school choices. A key question facing policymakers is exactly how to place students in schools in the absence of residential school assignment.

In the immediate aftermath of Hurricane Katrina, New Orleans families could choose from an assortment of charter, magnet, and traditional public schools. The city initially took a decentralized approach to choice, letting families submit an application to each school individually and allowing schools to manage their own enrollment processes. This approach proved burdensome for parents, who had to navigate multiple application deadlines, forms, and requirements. Moreover, the system lacked a mechanism for efficiently matching students to schools and ensuring fair and transparent enrollment practices. The city has since upped the ante with an unprecedented degree of school choice and a highly sophisticated, centralized approach to school assignment.

Today, New Orleans families can apply to 89 percent of the city’s public schools by ranking their preferred schools on a single application known as the OneApp (see Figure 1). The city no longer assigns a default school based on students’ home addresses. Instead, a computer algorithm matches students to schools based on families’ ranked requests, schools’ admission priorities, and seat availability. Experience with the OneApp in New Orleans reveals both the significant promise of centralized enrollment and the complications in designing a system that is technically sound but clear to the public, and fair to families but acceptable to schools. The OneApp continues to evolve as its administrators learn more about school-choosing families and school-choosing families learn more about the OneApp. The approach remains novel, and some New Orleanians have misunderstood or distrusted the choice process. The system’s long-term success will require both continued learning and growth in the number of schools families perceive to be high-quality options.

The OneApp’s Design

Early centralized enrollment systems, and the matching algorithms at their core, suffered from a key flaw: the lotteries were designed so that if a family ranked its most-preferred school first and that school was in high demand, then the family could lose its second-ranked option. In this situation, it could be rational for families to rank less-preferred options first. This is precisely what families did in cities like Boston that used this approach to match students to district schools, and it likely produced inefficient outcomes.

[Figure 1]

The challenge that faced the state entity that oversees most of the New Orleans schools, the Louisiana Recovery School District (RSD), was how to build a centralized, market-like enrollment system without inducing inefficient strategic behaviors. The solution was found in the Nobel Prize-winning research of Stanford economist Al Roth. He, along with fellow Nobel Prize winner Lloyd Shapley, showed that a system could be designed to elicit true preferences just as prices would in a normal market. New Orleans and Denver became the first cities to use this Roth/Shapley-inspired centralized enrollment system across charter and district sectors. In New Orleans, this enrollment system is called the OneApp. To develop and run the OneApp, the RSD contracted with the Institute for Innovation in Public School Choice (IIPSC), an organization for which Roth has served as an adviser and board member.

For families, the OneApp process begins by acquiring an application packet with details about the application process, profiles of participating schools, and the application itself. Parents can request up to eight schools by submitting a ranked list to the RSD, in paper or online. The RSD then assigns students to schools based on families’ preferences, schools’ enrollment criteria, and seat availability. Families that do not submit a “Main Round” application, are not assigned to a school, or would like to try for a better placement may apply in a subsequent round. Families still lacking a satisfactory placement after the second round can go through a late enrollment process managed by the RSD to select from schools with available seats.

The machinery driving these placements is the RSD’s “deferred acceptance” computer algorithm. The first step of the process is to assign every student a lottery number for use when seats in oversubscribed schools must be allocated at random. The algorithm then tentatively assigns students to their first-choice schools, provided that students satisfy the entry criteria. If the school cannot accommodate all families applying for that grade, then the algorithm makes tentative assignments based on the school’s priority groupings (e.g., whether the student lives within the school’s broad catchment area) and students’ lottery numbers. At this point, students who were not assigned to their first-choice school are rejected from that school. Importantly, however, the algorithm leaves all assignments tentative until the final step. This means that students tentatively assigned to their first-choice school might later lose their seats to students who ranked that school lower than first but were rejected from all higher-ranked schools. This is key to the algorithm’s strategy-proof design.

In the next step of the process, all students who were rejected from their first-choice school are considered for their second-choice school. The algorithm considers them along with other second-choice applicants and those who were tentatively assigned to their first-choice schools. These steps are repeated for third choices and so on until no available seats remain. The algorithm’s final step is to actually assign all students to the schools to which they are tentatively assigned. Only then are families notified of the results.
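For readers who want to see these steps in code, the sketch below implements a student-proposing deferred acceptance match with lottery tie-breaking and simple priority groups. It is a minimal illustration under stated assumptions rather than the RSD’s actual system: the function and data-structure names are ours, entry criteria and waitlists are omitted, and each school’s priorities are collapsed into a single numeric grouping.

```python
import random

def deferred_acceptance(student_prefs, school_capacity, school_priority, seed=0):
    """Match students to schools with student-proposing deferred acceptance.

    student_prefs: {student: [school, ...]} in the family's true preference order.
    school_capacity: {school: number of seats}.
    school_priority: {(school, student): priority group; lower numbers outrank higher}.
    Returns {student: assigned school or None}.
    """
    rng = random.Random(seed)
    lottery = {s: rng.random() for s in student_prefs}       # one lottery number per student
    next_choice = {s: 0 for s in student_prefs}              # index of the next school to propose to
    tentative = {school: [] for school in school_capacity}   # tentative admits at each school
    unassigned = set(student_prefs)

    while True:
        # Every unassigned student with schools left on their list "proposes" to the next one.
        proposers = [s for s in unassigned if next_choice[s] < len(student_prefs[s])]
        if not proposers:
            break
        for s in proposers:
            school = student_prefs[s][next_choice[s]]
            next_choice[s] += 1
            tentative[school].append(s)
            unassigned.discard(s)
        # Each school keeps its highest-priority applicants, breaking ties by lottery number,
        # and rejects the overflow; rejected students propose again in the next pass.
        for school, applicants in tentative.items():
            applicants.sort(key=lambda s: (school_priority.get((school, s), 99), lottery[s]))
            kept = applicants[: school_capacity[school]]
            rejected = applicants[school_capacity[school]:]
            tentative[school] = kept
            unassigned.update(rejected)

    # Only now do tentative placements become final assignments.
    assignment = {s: None for s in student_prefs}
    for school, kept in tentative.items():
        for s in kept:
            assignment[s] = school
    return assignment
```

Because every placement stays tentative until the loop ends, a student can never lose a seat merely by listing more schools; this is the design choice that makes truthful, full-length rankings safe.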

The OneApp has many useful properties as a system for assigning students to schools of choice, including its strategy-proof design. To maximize the probability of receiving a desired placement, applicants have an incentive to rank as many schools as possible (eight) in their true order of preference. In fact, deviating from that strategy only makes it less likely that applicants will be assigned to their most-preferred schools. Yet even a technically elegant system—and especially one this difficult to explain—faces challenges when it confronts families making decisions for their children in actual choice settings.

The OneApp in the School Choice Context

The RSD set three goals for the OneApp: efficiency, fairness, and transparency. Here, we consider the OneApp and centralized enrollment in the context of these goals, at times defining them differently from how the RSD does. We examine not just the technical process of assigning students to schools, but also its relationship to the city’s broader school-choice setting, since the OneApp is so intertwined with New Orleans’s overall education policy. To incorporate empirical evidence when possible, we draw on data from interviews with 21 parents and surveys of 504 parents about the OneApp and school choice, conducted in the spring of 2014 by the Center on Reinventing Public Education (CRPE). We also utilize de-identified OneApp data containing families’ school requests and assignments for the 2013‒14 school year.

Efficiency. A centralized enrollment system like the OneApp may improve efficiency both in how families choose schools and how the broader market for schools operates. The RSD’s stated definition of efficiency is reasonable, if incomplete. It states that the OneApp can improve efficiency by making the enrollment process easier for parents to navigate, reducing the costs associated with choosing and enrolling in a school. We favor a definition that also considers how successfully the system matches families to the schools they want. Economists emphasize the importance of matching preferences with products—in this case, matching what families want with the available schools. Given the available schooling options, the OneApp algorithm is designed to do that.

How well the OneApp stacks up on this two-pronged definition of efficiency depends on the alternative to which it is compared. Relative to traditional zone-based assignment, the OneApp requires somewhat more effort from families. Families are asked to gather information and think about the many options in front of them before actively selecting and ranking their preferred schools. Families could incorporate school considerations into decisions about where to live, but once a residential decision is made, the school-housing linkage sharply limits a family’s options. Traditional zone-based assignment may also be less able to match family preferences than the OneApp, especially for families who lack the means to purchase or rent a home in a neighborhood with desirable public schools.

Compared with decentralized choice, where families apply to every school separately, centralized enrollment should be easier on families by reducing the number of applications and deadlines they must navigate. It should also match families to schools more efficiently via the centralized matching algorithm. Perhaps surprisingly, then, CRPE’s surveys of New Orleans parents in spring 2014 found that families that chose schools after the OneApp was instituted in 2012 reported greater difficulty with the number of applications and deadlines involved than families that chose schools before the OneApp. This may reflect families adjusting to an unfamiliar process early in the OneApp’s tenure. It will be worth tracking future surveys to see whether parents grow more comfortable as the procedures become more familiar.

In general, most families that enter the OneApp are getting the schools they request. The RSD reports that 54 percent of Main Round applicants received their first-choice school and 75 percent got one of their top three choices for the 2015‒16 school year (see Figure 2). While these results are encouraging, no comparable metric exists for zone-based assignment or decentralized choice, and these metrics can be misleading. They indicate how well participating families are being matched to participating schools, but they cannot gauge families’ true satisfaction with their school options and their matches. For example, if an extremely popular school joins the OneApp and many families rank that school first, the percentage of families receiving their first choice might fall even as the system’s ability to match families to desirable schools improves. For this reason, the OneApp data provide limited, though useful, information about family satisfaction. Continued surveys and discussions with school-choosing New Orleans families can complement the information from these publicized metrics.
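To make the last point concrete, consider a back-of-the-envelope illustration with invented numbers (they are not drawn from OneApp data): when a sought-after school enters the match and most families chase its limited seats, the first-choice rate can fall even though everyone’s options have improved.

```python
# Invented numbers for illustration only; not OneApp data.
families = 100

# Before: every family is matched to the school it ranked first among the existing options.
first_choice_rate_before = families / families                   # 1.00

# After: a popular new school with 30 seats joins, and 80 families now rank it first.
popular_seats, popular_demand = 30, 80
win_popular_seat = min(popular_seats, popular_demand)             # 30 families get the new school
keep_old_first_choice = families - popular_demand                 # 20 families still get their old pick
first_choice_rate_after = (win_popular_seat + keep_old_first_choice) / families

print(first_choice_rate_before, first_choice_rate_after)          # 1.0 versus 0.5
```

The 50 families who lose the lottery for the new school are counted as missing their first choice, even though the menu of schools available to every family is better than before.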

Fairness. Defining fairness requires normative judgment. A high standard might hold that access to high-quality schools does not vary by students’ socioeconomic status. Every modern enrollment system would fall far short of this standard. Traditional zone-based systems generally leave low-income and minority students heavily concentrated in low-performing schools. Decentralized systems typically favor parents who have strong social networks and resources to understand, navigate, and even manipulate the many different enrollment processes in a city. The centralized OneApp system is not devoid of problems either. Students receive preference within their geographic catchment areas, and students from affluent families are more likely to have the preparation needed for admissions to selective schools. Moreover, the early deadline for schools with special entrance requirements—in December of the year before enrollment, two months before other Main Round applications are due—requires early awareness that may disadvantage all but the most well-informed or socially connected parents. On the other hand, families of all backgrounds at least have a chance to enter lotteries for the vast majority of schools, and even though some of the most desirable schools have early deadlines and additional requirements, simply including these schools in the OneApp likely makes them more visible and accessible than they would have been otherwise.

A more attainable definition of fairness, and the one adopted by the RSD, is that a system is fair if it sets rules governing enrollment and assignment in advance and then applies those rules consistently to all students. Residence-based school-assignment systems generally treat students within their zones equally for purposes of admission, though there have been cases of families skirting the rules with incorrect addresses or of schools granting special treatment. More significant problems arise in schools of choice when, for example, school leaders hide open seats from certain types of students or manipulate their lotteries or waitlists—problems that are especially likely when schools manage their own enrollment processes amid significant accountability pressure. Prior to the implementation of the OneApp, a study by Huriya Jabbar found that roughly one-third of New Orleans principals admitted to practices that kept certain students out. The OneApp has reduced opportunities for schools to engage in these behaviors by transferring decisionmaking authority in admissions from schools to the centralized process. While system leaders report that these behaviors became less common after the OneApp’s introduction, the system has not completely eliminated opportunities for unfair enrollment practices, as schools still might dissuade certain families from applying or enrolling. Such behaviors cannot be remedied with an application system alone.

Transparency…and Clarity. The RSD also includes transparency among its primary goals, and for good reason. Being open and honest about the rules governing enrollment and the strategies for effective participation is an essential element of the responsible administration of a centralized enrollment system. We submit, however, that simply being transparent is not enough with a program as unfamiliar and potentially confusing as a centralized enrollment system. A transparent system can still be unclear, and a lack of clarity can produce misunderstandings and distrust that undermine even the most transparent system.

To assess transparency, we again compare a centralized enrollment system with the alternatives. Attendance zones are extremely transparent, despite obvious questions about equity. At the other extreme, decentralized choice systems can have severe transparency concerns, with schools individually managing their lotteries and waitlists outside the view of the public or an oversight agency. State or local rules requiring public lotteries and equal treatment may be helpful but difficult to enforce, as Jabbar’s evidence on pre-OneApp principal behavior attests.

The OneApp, in contrast, requires that all rules and criteria determining admission are set in advance and, in fact, coded into a computer algorithm. The criteria are also included in the OneApp enrollment packet for the public to see. Some schools still give priority for criteria such as being the child of a school staff member, but these criteria at least are made known to the public. Putting this information in the OneApp booklet helps families understand the enrollment processes, and may discourage schools from adopting enrollment criteria or processes to strategically manipulate their pools of incoming students.

Being clear about certain elements of the OneApp has proven more difficult than being transparent. In some ways this is understandable, since at the core of the OneApp lies an algorithm that is difficult to explain to even the most interested audience. Yet clearly communicating to families information about the matching process and instructions for correctly filling out an application is essential, since misunderstandings or mistrust may lead parents to approach the OneApp in ways that undermine its goals. To examine the possibility of misunderstandings or mistrust, we analyzed patterns in OneApp rankings and interviews and surveys with parents. Useful, if limited, evidence of the OneApp’s clarity can be found by identifying application behaviors that reduce applicants’ probability of getting their desired placements.

We find evidence that many families do not approach the OneApp as its designers likely expected. The OneApp allows families to rank up to eight schools, and given the algorithm’s strategy-proof design, families cannot gain by ranking fewer than the allowed number. Yet most families rank far fewer than eight. Applicants seeking nonguaranteed kindergarten or 9th-grade Main Round placements for the 2013–14 school year submitted forms with only 3.1 schools ranked, on average. (Students are guaranteed slots in the schools they currently attend.) Perhaps these families were considering only a few OneApp schools before seeking out private schools or non-OneApp public schools. For many applicants, this did not seem to be the case. In the Main Round, 315 families that requested nonguaranteed kindergarten or 9th-grade placements with applications listing fewer than eight schools did not get placed at all. Of these families, about half (164) applied to at least one additional school in a subsequent round of the OneApp, which indicates a willingness to enroll in a school not originally ranked. Many of these families likely would have been better off listing additional schools in their Main Round application, when more schools were available to them. While this amounts to a small proportion of total OneApp applicants, others who ranked fewer than eight schools and yet received a Main Round placement might have simply been fortunate.
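As a rough sketch of how such a tabulation might be run on de-identified application records, the code below counts short-list applicants who went unplaced in the Main Round and then reappeared in a later round. The file name and column names are hypothetical, and this is not our actual analysis code; the real OneApp extract may be structured quite differently.

```python
# Hypothetical file and column names; illustrative only, not the actual analysis code.
import pandas as pd

apps = pd.read_csv("oneapp_requests.csv")   # assumed: one row per applicant per round

main = apps[apps["round"] == "main"]
short_list = main[main["num_schools_ranked"] < 8]              # ranked fewer than the allowed eight
unplaced = short_list[short_list["assigned_school"].isna()]    # received no Main Round placement

later = apps[apps["round"].isin(["round2", "round3"])]
reapplied = unplaced["applicant_id"].isin(later["applicant_id"])

print("Short-list applicants unplaced in the Main Round:", len(unplaced))
print("...of whom applied again in a later round:", int(reapplied.sum()))
```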

One possible explanation for this behavior is that many parents do not understand or believe the OneApp’s strategy-proof design. Parents interviewed by CRPE researchers described efforts to outwit the OneApp’s matching algorithm by ranking fewer than eight schools. For example, many interviewed parents reasoned that by ranking only their most-preferred schools, they gave the RSD little alternative but to assign them to one of their top choices. While such decisionmaking is hard to observe in the OneApp data, this kind of strategy puts parents at a greater risk of not matching to any school.

The number of families that do not submit an application at all suggests that many families, despite the RSD’s efforts to publicize the OneApp and provide information on procedures, may still be unclear about the OneApp process. For the 2013–14 school year, 2,881 applicants requested a nonguaranteed kindergarten or 9th-grade placement during the Main Round in February. However, another 774 applicants first requested a nonguaranteed kindergarten or 9th-grade placement in either Round 2 (in May) or Round 3 (in July), before the final administrative matching process. With some highly regarded schools filling up during the Main Round, these families’ access to desirable schools was limited. For many, missing the Main Round was likely the result of imperfect information about either the OneApp process or their own plans for the coming school year. And certain populations are especially vulnerable. Families just arriving in New Orleans, families with children just reaching school age, and families without access to informed social networks could struggle to learn about the OneApp process in time.

Centralized Enrollment and Education Policy

In many ways, the OneApp is more efficient, fair, and transparent than the decentralized choice system that preceded it. Despite this, some New Orleanians remain skeptical of the new system, often for reasons only tangentially related to the city’s enrollment process. For example, in one parent’s words, “This [common enrollment] would be great…if we had better choices.” We argue that these impressions tend to emerge not from the OneApp itself but from the larger choice system, especially the closely connected “supply side” of the market. Yet these impressions can have direct implications for the OneApp. How the public feels about the school choice setting in New Orleans can shape education policy, and education policy can shape the OneApp’s role, now and in the future.

Examples of supply-side issues that can affect public perception include transportation, selective admissions, and nonparticipation in the OneApp. If families cannot access the schools they want because commuting to those schools is too difficult, their children do not meet performance requirements, or those schools do not appear in the OneApp, then families are unlikely to believe that centralized enrollment gives them real choice.

These supply-side issues intersect in New Orleans, where it can feel as though a decentralized school-choice system operates alongside the centralized one. Most public schools in New Orleans are administered by the RSD, but the city’s other public schools include schools run directly by the traditional school district (the Orleans Parish School Board, or OPSB), OPSB-authorized charter schools, and charter schools authorized by the state’s Board of Elementary and Secondary Education (BESE). Whereas all RSD schools participate in the OneApp and do so without academic entrance requirements, the same is not true of OPSB and BESE schools. Several OPSB and BESE public schools have selective admissions based on entrance exams, language proficiency exams, prior grades, essays, and other criteria. Some of these selective-admissions schools do not currently participate in the OneApp, and they provide school bus service less consistently. This multipart system can give rise to confusion and frustration, particularly among families trying to reconcile claims that they have unprecedented choice with the reality that their children may not have access to some of the city’s most desired public schools.

Parents also reported perceiving only a slim chance of receiving a seat in a high-quality school. While New Orleans schools have improved considerably since before Hurricane Katrina (see “Good News for New Orleans,” features, Fall 2015) and families seem to have a variety of schooling options (see “Many Options in New Orleans Choice System,” research, Fall 2015), only 22 of the 90 schools in the 2015–16 OneApp received a letter grade of A or B under the state’s accountability system. Of the four schools that received an A, three are full-immersion Spanish or French language schools that required applications during the Main Round’s Early Window period because they mandated language proficiency tests.

Moreover, while 89 percent of New Orleans public schools appeared in the OneApp, the remaining 11 percent, which have chosen to handle enrollment on their own, include a few of the city’s highest-rated, most-desired schools. Some of these schools have complex application requirements and ambiguous selection procedures, heightening the sense that the best schools in New Orleans are not truly accessible to all families.

In the long run, parental perceptions will also depend on how the school system responds to market demand. The OneApp can help in this regard, since it collects information about family preferences. Ideally, system leaders use this information—along with other data on school quality—to increase the number of high-quality seats (e.g., by adding seats to desirable schools or opening more schools like them) and reduce the number of low-quality seats (e.g., by closing low-performing, undesirable schools). Indeed, the RSD has incorporated demand data in judgments about school sites, placing popular schools in buildings that can accommodate future growth. However, responses through the portfolio management process can be slow to develop, and some high-demand schools, believing they are effective at their current scale, have expressed reluctance to increase enrollment substantially. Individual school leaders may be able to respond to demand signals more quickly by better aligning their offerings with community needs, though research on schools’ responses to market pressures generally finds that schools make some programmatic improvements but focus more intently on superficial changes like improved marketing.

The OneApp will likely enjoy long-term public support only if it is woven into a larger fabric of school options and choice. These examples show that some important threads in this fabric are still missing. No matter how well thought out and carefully constructed the OneApp itself might be, families that find their preferred schools inaccessible or their options undesirable are likely to experience frustration and confusion. Some may judge the enrollment system using metrics of efficiency, fairness, and transparency, but parents will judge it based on their own experiences and interests.

The OneApp represents an ambitious policy shift, requiring families and educators to think in an entirely new way about how students are assigned to schools. Given this, and the fact that the OneApp is still in its early years, misunderstandings are not surprising. With most families getting one of their top-ranked schools, the number of satisfied parents could give system and school leaders time to improve both the application process and the quality of schools offered. There are signs in New Orleans that such learning and improvement are underway. RSD administrators routinely consider the system’s successes and failures and modify it accordingly for the next iteration, all while the public continues to acclimate and learn how to better leverage the choice system. Continued learning and adaptation will be essential to the OneApp’s sustained success and to New Orleans’s ability to provide the country with a model of student enrollment worthy of replication elsewhere.

Douglas N. Harris is professor of economics and founder and director of the Education Research Alliance for New Orleans at Tulane University. Jon Valant is postdoctoral fellow in the department of economics at Tulane University and at ERA-New Orleans. Betheny Gross is senior analyst and research director at the Center on Reinventing Public Education at the University of Washington Bothell.

For more information on New Orleans, read “Good News for New Orleans: Early evidence shows reforms lifting student achievement,” by Douglas N. Harris, and “Many Options in New Orleans Choice System: School characteristics vary widely,” by Paula Arce-Trigatti, Douglas N. Harris, Huriya Jabbar, and Jane Arnold Lincove.

This article appeared in the Fall 2015 issue of Education Next. Suggested citation format:

Harris, D.N., Valant, J., and Gross, B. (2015). The New Orleans OneApp: Centralized enrollment matches students and schools of choice. Education Next, 15(4), 17-22.
