More tuition inflation
This just in: colleges are unable to rein in their costs and keep hiking their tuition bills. For in-state students at public 4-year universities, tuition and fees increased 7 percent, after adjusting for inflation, between the 2010-11 and 2012-13 academic years. During the same period, inflation-adjusted tuition and fees at 4-year nonprofit institutions increased 3 percent, to about $24,300. The data come from Postsecondary Institutions and Cost of Attendance in 2012-13; Degrees and Other Awards Conferred, 2011-12; and 12-Month Enrollment, 2011-12, a report released today, May 21, 2013, by the National Center for Education Statistics (NCES).
However, for-profit institutions reported a 2 percent decrease in tuition and fees, to $15,400. That’s interesting, since for-profits have been under pressure to be pickier about which students they admit since new regulations went into effect in 2011. Under these regs, for-profits that fail to raise graduation rates and train students for careers with high enough salaries risk losing access to federal student aid. Federal student loans are the main source of funding for these institutions; very few of their students can afford to pay out of pocket.
Data on teacher absences, sick days and substitutes
On May 16, 2013, Choice Media, an online education news service that is critical of teachers unions, posted a provocative story, What’s Making Asbury Park Teachers Sick?. Through a Freedom of Information Act request, it collected data from a few New Jersey towns and found that Asbury Park’s teachers averaged more than 18 absences a year over the past two years. That’s more than three solid weeks of school per teacher or, put another way, about 10 percent of the school year. Conventional wisdom would say that chronically absent teachers can’t be good for these students — largely low-income minorities — who need as much instruction as possible. And it’s certainly not good for taxpayers, who have to shell out more money for substitute teachers.
But what’s interesting is that Choice Media found that even a high-performing, high-income school district can have a high rate of teacher absences. In Montclair, for example, where many New York City professionals flock for the good schools, the average teacher is absent more than 12 days a school year.
We all know that schools are germ factories and one could expect the sick rate for teachers to be higher than for, say, desk-bound insurance actuaries. But 18 or 12 days just seems like too much, no?
Not if you live in Rhode Island, it turns out. Teachers in that state took off an average of 21 days per school year. (A full list of state rankings can be found in Table 1, page 8 of Teacher Absence as a Leading Indicator of Student Achievement: New National Data Offer Opportunity to Examine Cost of Teacher Absence Relative to Learning Loss by Raegen Miller, Center for American Progress, November 2012.)
In a Spring 2013 Education Next magazine story, No substitute for a teacher, former Wall Street Journal reporter June Kronholz dug into the teacher absenteeism numbers. She reported that, nationally, the average teacher takes off 9.4 days per school year. But this number may be misleadingly low because districts count absences in different ways. “Some would count the tennis coach absent if he left his gym classes in the hands of a sub to attend an out-of-town tournament with his team; others wouldn’t. Some count professional development days when subs are hired to take the class; others don’t,” wrote Kronholz.
Kronholz points out that unionized teachers tend to have generous sick-and-personal day policies. Citing the National Council on Teacher Quality (NCTQ) as a source, Kronholz says that union contracts in 113 large school districts give their teachers, on average, 13.5 days of sick and personal leave per school year.
So how much does teacher absenteeism affect student performance? That’s unclear. The Miller study cited above says it’s challenging to measure because teacher absences occur day to day, while student performance is measured far less frequently. In an earlier paper, “Do Teacher Absences Impact Student Achievement?,” Miller, with co-authors Murnane and Willett, calculated that every 10 days of teacher absence was associated with a moderately lower math score in one urban school district.
What is clearer in the data is that low-income schools tend to have greater rates of teacher absenteeism than wealthier schools within the same geographic region. According to a study of North Carolina schools by Clotfelter, Ladd, and Vigdor, “Are Teacher Absences Worth Worrying About in the U.S.?”, schools in the poorest quartile averaged almost one extra sick day per teacher compared with schools in the highest income quartile.
Has teacher absenteeism been increasing or decreasing in recent years? That’s also unclear. The U.S. Department of Education’s Civil Rights Data Collection only began tracking teacher absences in 2009. It will be several years before the 2011-12 data report is out.
Data on resilience
Can resilience be taught? On May 3, 2013 Bruce Rogers of Forbes posted The Power of Resilience: Study Shows How Horatio Alger Association Scholarships Make A Difference about a 2012 study by NORC’s Gregory C. Wolniak and Zachary Gebhardt at the University of Chicago. The authors found that low-income students, many of whose parents were drug addicts or imprisoned, were more successful at overcoming adversity, completing college and finding full-time employment when they received both large scholarships and mentoring. Money alone is not enough.
Data on the children of Tiger Mothers
On May 8 in Poor Little Tiger Cub, Slate wrote about a March 2013 study of the children of Tiger Mothers by Su Yeong Kim at the University of Texas. Kim studied 444 Chinese American families (what an unlucky number!) and concluded that the children of Amy Chua-like tiger parents had lower GPAs and educational attainment. These children also had more symptoms of depression and a greater sense of alienation.
This is a subject near and dear to my heart. For my final project at a Columbia Teachers College statistics course in May 2011, classmate Ajay Srikanth and I crunched federal data on kindergarteners (ECLS-K). We found that, among kindergarteners in 1998-1999, the children of tiger mothers scored, on average, just a little higher on reading and math tests than other kids did. But we used a small subsample of roughly 3,000 students from the original 20,000-student data set. And we always wanted to run the regressions again on the full data and track the kids to see how they did later in life.
The main problem in these data studies is how you define “tiger parenting” and how you decide which kids get classified as the children of tiger moms. We employed a “factor analysis technique with varimax rotation”. That’s just a fancy way of saying that we put a bunch of parent attributes in a big salad spinner and noted how they clumped together. We then defined tiger parents as ones who had high expectations, high parental involvement and were strict. But you could arbitrarily decide that tiger moms don’t really get down on the ground and build things together with their kids, for example, and take that out of the equation. And so, as your group of tiger cubs changes, so do your conclusions.
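For readers who want to see the “salad spinner” in action, here’s a toy sketch in Python. All of the item names and numbers below are invented for illustration (they are not from the ECLS-K), and a plain correlation matrix stands in for the full factor-analysis-with-varimax machinery: items driven by the same latent parenting dimension clump together.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500  # hypothetical sample of families

# Two invented latent parenting dimensions drive four invented survey items.
strictness = rng.normal(size=n)
involvement = rng.normal(size=n)

items = np.column_stack([
    strictness + 0.3 * rng.normal(size=n),   # "enforces strict rules"
    strictness + 0.3 * rng.normal(size=n),   # "has high expectations"
    involvement + 0.3 * rng.normal(size=n),  # "helps with homework"
    involvement + 0.3 * rng.normal(size=n),  # "attends school events"
])

# The correlation matrix reveals the clumps: the two strictness-type
# items correlate strongly with each other, weakly with the rest.
corr = np.corrcoef(items, rowvar=False)
print(np.round(corr, 2))

# One arbitrary classification rule: a "tiger parent" is anyone in the
# top quartile on the strictness-type items. Change the rule, and the
# group of tiger cubs (and your conclusions) changes with it.
strict_score = items[:, :2].mean(axis=1)
tiger = strict_score > np.quantile(strict_score, 0.75)
print(tiger.mean())
```

Notice how the classification threshold at the end is a free choice; that arbitrariness is exactly the point of the paragraph above.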
An explanation of when $20,000 is not enough to teach a student
New York City may spend more per student than most districts in the United States ($19,597 during the 2009-2010 school year according to the U.S. Census), but one education scholar’s number crunching shows that the city’s schools are underfunded.
Bruce D. Baker, a Rutgers education professor, posted Class Size & Funding Inequity in NY State & NY City, on his personal blog, School Finance 101, on May 9, 2013. (Thank you to Ajay Srikanth, my former classmate, for pointing me to it). Baker argues that school districts with high poverty need more dollars per student because poor students tend to have greater needs. Instead, school districts with high poverty have lower funding per student. Check out this graph.
The result of this underfunding, according to Baker, is that class size is ballooning in poor districts such as New York City. Baker acknowledges the controversy over whether reducing class sizes actually improves student outcomes, but asserts that classes in high-poverty districts should not be allowed to grow beyond 30 students.
“From a simple fairness standpoint, it makes little sense that children in the top 20% districts by wealth and income should have access to such smaller classes than children in New York City,” Baker writes.
I have a friend who teaches high school English in the New York City public schools. Each of her classes is well north of 30. “How can I be a good teacher to 37 kids?” she asked me a few months ago with tears welling up in her eyes.
Do U.S. students lag behind the rest of the world?
In a May 3, 2013 HuffPo story, ‘We’re Number Umpteenth!’: Debunking the Persistent Myth of Lagging U.S. Schools, Alfie Kohn takes issue with the conventional wisdom that American students are slipping behind their peers abroad. Kohn is partly right. The international ranking tables are largely a reflection of how much poverty you have in your nation. Countries with the lowest poverty levels rise to the top. Countries with the highest poverty levels sink to the bottom. And that’s a big reason the U.S., with something like a 25 percent poverty rate in our schools, has slid.
But it’s not true that our top students are doing just fine. After digging through international testing data, Martin Carnoy of Stanford University and Richard Rothstein of EPI, in a January 2013 paper, What do international tests really show about U.S. student performance?, found that the biggest, most alarming gaps are between America’s top students and the top students of other countries. Our best students are the problem.
The accuracy of federal education data
Correcting mistakes may be an essential part of a good education, but that doesn’t apply inside the branch of the U.S. government that compiles and keeps education statistics. Indeed, the National Center for Education Statistics (NCES) knowingly leaves errors uncorrected when they are discovered two to three years after the fact. This error-ridden data is then used by education policy makers to make decisions.
I recently learned about these revision deadlines from the person in charge of the education data, NCES Commissioner Jack Buckley. Buckley explained that the Integrated Postsecondary Education Data System data set, a.k.a. IPEDS, allows for one year of revisions (after the initial collection year) and then “locks” that year’s data forever. That’s been a frustration for for-profit universities, which have been clamoring to retroactively revise their graduation rates upwards so that their students can remain eligible for federal student loans.
Another major data collection, the Common Core of Data, is kept active for only three years, effectively cutting off revisions afterwards.
The IPEDS data is self-reported, and it may be wise to limit the ability of schools to game the data to their benefit. But the “locking down” of the data also means that innocent mistakes can never be fixed — and the flawed numbers are then used for analysis by the public. At a May 2, 2013 session of an Education Writers Association conference, California Watch investigative reporter Erica Perez recounted how she repeatedly found mistakes in the federal databases when she phoned schools to verify the numbers.
Contrast the Department of Education’s data practices with those of the Bureau of Labor Statistics inside the Department of Labor, which never locks its data and allows for infinite revisions of its jobs and other economic data. (I verified this with the BLS.) Why shouldn’t the education folks, like the economists, always correct an error when one is found?
Commissioner Buckley, in an email, said that he sees a “tradeoff between getting it right the first time (since these data are used by policy makers and the public for annual decision making)” and “allowing some flexibility for inevitable errors. Allowing schools to submit revisions for any year, any time is confusing for users and difficult for data collectors trying to maintain a complex process and keeping it timely and accurate.”
It’s hard to make a blanket assessment of the quality of the federal education data. The BLS rushes its data out to the public as quickly as possible, making it more error prone. The Department of Education takes a much longer time to verify and clean up the data before its first release. “We spend a lot of time and money on data quality. Beware of the fastest person to put data out,” said Buckley at an April 30, 2013 session of the American Educational Research Association’s annual meeting.
Data on teacher evaluations
Here’s a really wonky description of data studies on teacher evaluations, What Do We Know About the Tradeoffs Associated with Teacher Misclassification in High Stakes Personnel Decisions?, published by the Carnegie Foundation for the Advancement of Teaching on April 15, 2013. In a nutshell, we make too many mistakes. Current evaluation systems tend to overlook many bad teachers and yet classify many good teachers as bad ones. But the more measures you use beyond just student test scores, such as student evaluations, and the more years of data you use for rating each teacher, the more accurate the evaluations become.
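The intuition that more years of data mean fewer misclassified teachers is easy to see in a back-of-the-envelope simulation. The sketch below is my own toy model, not Goldhaber and Loeb’s: it assumes each year’s rating equals true effectiveness plus equal-variance noise, and calls the bottom 20 percent “bad.”

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 10_000

# True (unobservable) teacher effectiveness, standardized.
true_quality = rng.normal(size=n_teachers)

def misclassification_rate(n_years, noise_sd=1.0):
    """Flag the bottom 20% of teachers by their average rating over
    n_years of noisy annual ratings; return the share of teachers
    whose flag disagrees with their true bottom-quintile status."""
    yearly = true_quality[:, None] + rng.normal(
        scale=noise_sd, size=(n_teachers, n_years))
    estimate = yearly.mean(axis=1)  # averaging shrinks the noise
    truly_bottom = true_quality < np.quantile(true_quality, 0.2)
    flagged = estimate < np.quantile(estimate, 0.2)
    return float(np.mean(truly_bottom != flagged))

for years in (1, 3, 5):
    print(years, misclassification_rate(years))
# The error rate falls as more years of ratings are averaged:
# both kinds of mistakes (good teachers flagged, bad teachers
# overlooked) shrink together.
```

The same logic explains why adding independent measures beyond test scores helps: each one is another noisy reading of the same underlying quality.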
The authors, education scholars Dan Goldhaber and Susanna Loeb, are holding a webinar on the topic on Wednesday, May 8, 2013, at 1 pm PST. (That’s 4 pm in New York.)
Data on bullying
It’s a myth that “bullying” at schools is a worse problem today than in the past, according to a task force report commissioned by the American Educational Research Association (AERA) and released on April 30, 2013. Indeed, major categories of bullying, such as being threatened with a weapon on school grounds, have remained stable — between 7 and 9 percent — between 1993 and 2009. The percentage of high school students who say they’ve been in a physical fight declined from 16 percent to 11 percent during the same time period. This data comes from a 2012 National Center for Education Statistics paper, Indicators of School Crime and Safety, written by Robers, Zhang, Truman and Snyder. That report claims that, overall, all forms of bullying decreased by 50 percent from 1995 to 2009.
The task force’s co-chairs also said it’s wrong to assume that bullying is primarily happening over social networks today. Co-chair Dorothy Espelage of the University of Illinois at Urbana-Champaign cited a figure that 39 percent of bullying occurs face to face.
There’s terribly little data on bullying, and little solid data-driven research on what schools can do to curb it. The big problem is the word “bullying” itself, because it means different things to different people. Some researchers cling to a narrow definition in which there must be repeated incidents between two people of unequal power. But lay people, when they fill out surveys, might consider a single hazing incident to qualify as bullying. The AERA task force suggested that we should instead break the term “bullying” down into sub-categories of “victimization” to track it more properly.
The report emphasized that many anti-bullying programs being marketed to and adopted by schools had no evidence to support their effectiveness.
Other press coverage and resources:
USA Today: Researchers: Stop using the word ‘bullying’ in school
Education Writers Association (EWA) 2010 backgrounder on bullying
Data on taking algebra in eighth grade, and the watering down of U.S. math instruction
Here are some of the findings presented at a session on U.S. math instruction at the AERA annual meeting on Tuesday, April 30, 2013.
Another data-driven study shows that the judgment of teachers can often be wrong.
In a study of middle-school math education in a California school district, standardized test scores and grades were much better predictors of whether a student would pass eighth-grade algebra than whether his seventh-grade math teacher thought he was ready for it. That’s according to an unpublished paper, The Missing Link in Algebra Policy Analysis: A Case Study of Placement in Eighth-Grade Algebra by Andrew Thomas (Walden University), Michael H. Butler (Public Works, Inc.) and Robert Kaplinsky (Downey Unified School District).
The issue of when a student takes his first algebra course is of great interest to academic scholars and policy makers. Some theorize that taking algebra early, in eighth grade, will lead to more students taking advanced math courses in high school and ultimately going to college. Kids who don’t study algebra in eighth grade are tracked into curricula that effectively shut them off from many educational opportunities. Many districts and states around the nation have been pushing schools to teach algebra earlier. Nationwide, about 6 percent of districts — many of them low-income — now require algebra in eighth grade. But there’s also concern that pushing unprepared kids into algebra too soon sets them up for failure. California recently reversed its decision to require algebra in eighth grade.
This particular study found that middle-school math teachers recommended that a little more than half of their students (53 percent) take the more advanced algebra class in eighth grade. (The district was not identified in the study.) But half of these purportedly “stronger” students ultimately failed to get at least a C in the class and to score “proficient” on the California state exam. These same teachers recommended that 38 percent of their students take a more basic arithmetic class, often called “pre-algebra,” postponing algebra until ninth grade. But a big chunk of these supposedly “weaker” students — 209 of them — were nonetheless put into algebra classes, and many of them succeeded in passing the course.
This study shows that teachers make two types of judgment errors. They overestimate the abilities of half the students they think are strong. And they underestimate the abilities of a big chunk of students they perceive as weak. In both cases, the mistakes can be heartbreaking for the student.
What’s fascinating is that, if the math teachers just looked at the grades they themselves gave their students in seventh grade, these errors would have mostly vanished. Most students who got at least a B in seventh grade math succeeded in eighth grade algebra. Can you believe that there were teachers who gave a student a B in seventh grade math, but didn’t think the kid was ready for algebra? And they thought many of their C students were ready? Did the teacher think his own grading system was bogus?
(These results make me question the whole validity of teacher recommendations. Why are they so important to college admissions departments?)
An even better predictor than grades was the student’s score on the annual California State assessment test. But the results of that test come out too late for the school to use it for placements.
The school district also developed its own home-made diagnostic test. But it was not a good predictor of whether a student was ready for eighth grade algebra.
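Why would grades beat a teacher’s gut-level recommendation? One simple explanation is signal versus noise: both are proxies for the same underlying ability, but a year of graded work aggregates many observations while a recommendation is closer to a one-shot judgment. Here’s a toy simulation (all numbers are invented, not from the Thomas, Butler and Kaplinsky paper) showing that the less noisy proxy predicts passing better:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000  # hypothetical students; every parameter below is invented

ability = rng.normal(size=n)

# Seventh-grade math grade: tracks ability closely (low noise).
grade_signal = ability + 0.3 * rng.normal(size=n)
predict_pass_by_grade = grade_signal > 0   # "got a B or better"

# Teacher recommendation: proxy for the same ability, but noisier.
rec_signal = ability + 1.0 * rng.normal(size=n)
recommended = rec_signal > 0

# Actual outcome: passing eighth-grade algebra depends on ability too.
passed = (ability + 0.5 * rng.normal(size=n)) > 0

grade_acc = float(np.mean(predict_pass_by_grade == passed))
rec_acc = float(np.mean(recommended == passed))
print(grade_acc, rec_acc)  # the low-noise grade signal predicts better
```

Under these made-up noise levels, the grade-based rule agrees with the actual outcome noticeably more often than the recommendation does, which mirrors the pattern the study found.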
Not all algebra classes are the same. Another paper, Breaking Down the Achievement Gaps Among High School Graduates: Contributions of Geometry Content Rigor by Kathryn S. Schiller (University at Albany – SUNY), Janis D. Brown (U.S. Department of Education), Robert Colby Perkins (Westat) and Stephen E. Roey (Westat), found wide variation in the content of high school math courses with the very same title. Some are rigorous. Some are lame. Even an honors geometry course could be quite watered down, focusing on two-dimensional objects and nearly ignoring the important topic of three-dimensional objects that move in space. The weird thing is that there was no correlation between the rigor of the math class a student took and the score he got on the NAEP. Whites and Asians tend to outscore blacks and Hispanics regardless of the rigor of the math class they took.
Is it better to go for the easy A or to fail a hard class? That’s the question asked in a working paper entitled Success and Failure in Eighth-Grade Mathematics: Examining Outcomes Among Middle Schoolers in the HSLS:09 by Keith E. Howard (Chapman University), Marty Romero (University of California – Los Angeles), Derrick Saddler (University of South Florida) and Allison Scott (University of California – Berkeley). The researchers found that California standardized-test scores were about the same for similar students whose only difference was that one failed algebra in eighth grade while the other passed an easier math class. But there were significant psychological wounds. The ones who failed felt that they weren’t good at math, didn’t like the subject, and didn’t pursue math later in high school. The equally weak students who lived in ignorant bliss liked math more and continued with the subject.