District administrators balk at calculating how much each school spends per student
Since President Johnson’s War on Poverty gave rise to Title I in 1965, policy makers have been trying to equalize education spending across the United States. The lofty goal is for schools with lots of poor students to have access to the same resources that schools with rich kids have. But researchers and advocates for the poor have pointed to loopholes in Title I funding that effectively allow affluent schools to operate at higher levels of funding than low-income schools. For example, Marguerite Roza at the Center on Reinventing Public Education found that less money is spent on salaries in high-poverty schools than in low-poverty schools within the same district.
Because there can be so much variation in poverty within a school district (just think about the socio-economic differences between Tribeca and the Bronx), the Department of Education is making a big push to calculate exactly how much each school spends on a student. That might sound simple enough. But like any data project, the devil is in the details.
The issue is, how do you allocate administrative and other centralized expenses among schools? For example, say you have an itinerant teacher who spends a few hours at one school, then moves to another, and then another — each day of the week. To properly figure out how much of that teacher’s salary to attribute to each school, districts would need to create some sort of time-and-attendance punchcard system. But who wants to create such an expensive system or put teachers on punch cards?
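The arithmetic itself is the easy part once the hours exist. Here is a minimal sketch in Python of a time-weighted split; the school names, weekly hours and salary are all invented for illustration, not real district data:

```python
# Hypothetical time-weighted allocation of an itinerant teacher's salary.
# The schools, weekly hours and salary figure are invented, not real data.
hours_per_school = {"School A": 12.0, "School B": 6.0, "School C": 2.0}
annual_salary = 60_000.00

total_hours = sum(hours_per_school.values())
allocation = {school: annual_salary * hours / total_hours
              for school, hours in hours_per_school.items()}

for school, dollars in allocation.items():
    print(f"{school}: ${dollars:,.2f}")
# School A: $36,000.00, School B: $18,000.00, School C: $6,000.00
```

The hard part, as the administrators see it, is collecting those hours in the first place.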
I attended a boisterous and sometimes acrimonious session on this topic between district bean counters and the U.S. Department of Education at the NCES STATS-DC 2013 Data Conference on July 18, 2013. Many administrators protested the whole idea of counting pennies per school, saying it was too burdensome and impossible. They worried they would have to waste hours figuring out how to allocate all kinds of centralized activities, from computer servers to buses.
Building maintenance and repair is a particularly thorny issue. Say one school is an old building where the boiler bursts and the outside bricks need repair. The repair costs could make it look like an absurd amount is being spent per student in that old schoolhouse. But those dollars aren’t going to instruction that directly benefits students. One administrator in Massachusetts, which already calculates school-level expenditures, argued that only instructional costs should be included. “There’s no point in trying to allocate a superintendent’s salary” among schools, he said.
Another worry is that accountants could come up with all kinds of rules for how to allocate centralized expenses, but it wouldn’t reflect reality. That is, you could say “divide a speech therapist’s salary by the number of schools in the district,” but perhaps that speech therapist spends more of her time in some schools than others. And then policy makers, politicians and advocacy groups would be using bad data to make decisions.
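A toy comparison makes that worry concrete. Below, with entirely invented figures, is the “divide by the number of schools” rule set against a split based on where the therapist actually spends her week:

```python
# Invented example: a speech therapist who spends four days a week at one
# school and one day at another. Salary and schedule are made up.
salary = 70_000.00
days_per_school = {"School A": 4, "School B": 1}

n_schools = len(days_per_school)
total_days = sum(days_per_school.values())
for school, days in days_per_school.items():
    by_rule = salary / n_schools          # the tidy allocation rule
    by_time = salary * days / total_days  # where she actually works
    print(f"{school}: rule ${by_rule:,.0f} vs. reality ${by_time:,.0f}")
# School A: rule $35,000 vs. reality $56,000
# School B: rule $35,000 vs. reality $14,000
```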
Stephen Cornman of the U.S. Department of Education urged local education officials to submit comments by August 20, 2013, specifying which district-wide expenses are feasible or infeasible to allocate among schools.
Cornman’s PowerPoint presentation included this chart:
Examples of Expenditures That May Be Relatively Feasible or Infeasible to Track at School Level

| Relatively Feasible | Could Be Attributed With Effort | Infeasible? |
| --- | --- | --- |
| Salaries for most teachers | Itinerant teachers and staff | Transportation |
| Salaries for school-based instructional staff | Technology hardware and software | District administration |
| Employee benefits for instructional staff | Telecommunications | Staff who provide district-wide services |
| Equipment, furniture, fixed assets | Food services | Maintenance and operations |
| Textbooks | | |
| Instructional materials | | |
| Office supplies | | |
| Professional development | | |
I remember being frustrated by an early attempt to calculate student spending by district. I’m now starting to understand how meaningless these numbers are.
Big data systems still not answering which education programs work
Information Week’s Michael Fitzgerald wrote about Colorado public schools’ use of big data on July 29, 2013. The state’s educational data program is now four years old and stores all kinds of facts and figures about 860,000 students in 2,000 schools, but it’s still unable to answer the questions that education policy makers want to know, such as, “if specific programs, say, for reading intervention, have an impact on student performance.”
“We probably have a ways to go to where we can definitively say here are things that are proven to work and here are things the data is not supporting,” Daniel Domagala, the Colorado Department of Education’s CIO, told Information Week.
Misuse of NAEP scores
EdWeek’s Stephen Sawchuk wrote a piece on the misuse of NAEP score data by politicians and advocacy groups. The parsing claims sidebar has a few examples of prominent people and organizations who’ve made some elementary mistakes.
Use of Data:
“Public education is supposed to be the great equalizer in America. Yet today the average 12th grade black or Hispanic student has the reading, writing, and math skills of an 8th grade white student.”
—From a 2009 Wall Street Journal op-ed written by Joel I. Klein, then the chancellor of the New York City school system, and the Rev. Al Sharpton
Problem:
NAEP scales differ by subject and grade.
In other words, you can’t compare a 17-year-old’s score of 250 with a 13-year-old’s score of 250.
College towns are smarter
Venture Beat reports that the towns with the smartest people are small college towns, based on how more than 3 million people around the U.S. performed in brain training games created by Lumosity. VB explains, “These games measured performance across five cognitive areas: memory, processing speed, flexibility, attention, and problem solving. Then the scores were ranked by location.” Here’s a direct link to the Lumosity paper.
These are the top 10 smartest cities, ranked by median scores:
1. Ithaca, NY
2. State College, PA
3. Lafayette-West Lafayette, IN
4. Iowa City, IA
5. Ames, IA
6. Ann Arbor, MI
7. Bloomington, IN
8. Madison, WI
9. Lawrence, KS
10. Pullman, WA
First, I might quibble with the methodology of ranking cities by median scores: a median rewards homogeneity, and in sheer numbers there are probably more smart people right here in NYC than in, say, Ames, IA. (Though I know at least three extremely smart people from Ames who might dispute me on this.)
Interesting implications for education and for where we should choose to raise our children. If your only goal is educational excellence, is it better to be in a homogeneous, highly educated community than in a diverse community of high and low achievers?
Also makes me wonder if the public schools in these towns are any different than the public schools elsewhere around the country. Are the teachers smarter too and using more creative teaching practices? Or are the schools simply blessed with students who are the offspring of PhD parents?
More college-educated parents, but their kids are not getting smarter
Here’s another data puzzle I’ve been thinking about. Why is it that more and more kids have college-educated parents, but high school test scores are not improving? In 1978, only 32 percent of the parents of 17-year-old students had obtained a college degree. In 2012, 51 percent of the parents of 17-year-olds had a college education. That’s a gigantic 59 percent jump in parental education. Why isn’t it making a difference? It’s conventional wisdom that parental education, especially a mother’s educational attainment, determines how well a child will do at school. I’ve even heard theories that a grandmother’s highest level of educational attainment is an excellent indicator of how well a child will perform in school. Why isn’t college education making a bigger difference in how we raise our children? Is a college degree no longer a sign of how much a family values education?
Source: See Appendix Table A-2 (p. 56) of NAEP 2012 Trends in Academic Progress
Can an algorithm ID high-school dropouts in first grade?
Early warning systems to detect high-school dropouts are all the rage in education data circles. See this post on a new early warning system in Wisconsin. Like the Wisconsin example, most data systems focus on identifying middle-school students. But what if researchers could use grades, attendance and behavior data to identify at-risk students as soon as possible — as early as first grade? That would really give counselors more time to try to motivate these kids and keep them in school!
Thomas C. “Chris” West at the Montgomery County Public Schools district is probably the first person in the country to build a first-grade early warning system. He presented it on Friday, July 19, 2013, at the STATS-DC 2013 Data Conference in Washington, D.C., sponsored by the National Center for Education Statistics. Montgomery County is a great place for data geeks. It’s a wealthy suburb of Washington, D.C. that’s been keeping excellent data records for more than a decade. And that’s what lured West, who worked with Johns Hopkins’s Robert Balfanz and other trailblazers in the field of detecting dropouts, to mine the data there.
West studied the county’s senior class of 2011, in which 833 (or 7.4 percent) of the 11,241 students dropped out of high school. When he traced these students back to their first-grade report cards and attendance records, he found that 75 percent of the dropouts could be identified at the tender age of six or so. In other words, three-quarters of the students who would eventually drop out showed warning signs, such as missing school for more than three days each quarter or performing below grade level in math or reading. “It’s depressing to hear, but it’s also an opportunity to work with these students,” said West.
(Interesting aside: A little less than one fourth of Montgomery County students qualify for free or reduced-price lunch. But less than 40 percent of the county’s dropouts come from this bottom quartile. Over 60 percent of the dropouts aren’t poor).
West found that the most important marker was academic performance. Behavior issues and attendance were less important, partly because Montgomery County rarely uses recorded punishments, such as suspensions, and partly because first-graders don’t play hooky. “The message in Montgomery County is that the kids are there in school, but they’re not doing well,” said West.
The big problem with West’s model is that it doesn’t just identify eventual dropouts; it flags almost half of all first-graders as being at risk of dropping out. It identifies 48.6 percent of the student body to find the 7.4 percent who will drop out. But teachers and counselors have no idea which of the 48.6 percent to focus on. Some of them might have had medical issues that kept them out of school. Others might have been slower to learn to read. They don’t all need the same kind of interventions.
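A quick back-of-the-envelope calculation, taking the published percentages at face value, shows what that overidentification means for the model’s precision:

```python
# Rough precision estimate using only the figures reported above.
students = 11_241
dropouts = 833            # 7.4 percent of the class of 2011
sensitivity = 0.75        # share of eventual dropouts flagged in first grade
flag_rate = 0.486         # share of ALL first-graders flagged

flagged = flag_rate * students            # roughly 5,463 flagged students
true_positives = sensitivity * dropouts   # roughly 625 of them drop out
print(f"flagged students: {flagged:,.0f}")
print(f"precision: {true_positives / flagged:.1%}")  # about 11 percent
```

In other words, even though the flags catch three-quarters of eventual dropouts, nearly nine out of ten flagged six-year-olds will go on to graduate anyway.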
West also found that as these first-graders progressed through their education, they would go in and out of the warning zone. For example, 20 percent of the first-graders who had a dropout indicator no longer did in sixth grade. And 14 percent of first-graders who didn’t have an indicator later developed an indicator by sixth grade. Only a quarter of the first-grade class had a warning indicator in both grades.
This first-grade dropout model is still a data-crunching experiment. Montgomery County has not implemented this model for identifying which current first-graders are at risk of dropping out.
Principals likely to overlook girls who are at risk for dropping out of school
In the spring of 2013, Wisconsin tested a data-driven early warning system that can identify which middle-school students are at risk of dropping out of high school. After 5,800 students were identified for teachers and counselors to work with, the principals of these schools were surveyed on whether they were already aware that these students were having trouble. For most of these students, the answer was yes: the principals knew about them before the data told them.
But principals admitted that some of the students were not on their radar screen.
“All the missed students were females,” said Jared Knowles of the Wisconsin Department of Public Instruction, who presented the results of his model at the annual National Center for Education Statistics conference in Washington DC, STATS-DC 2013, on July 17.
Knowles had guessed that his data model might indeed find girls that high school administrators and counselors were overlooking. He suspects that boys who will eventually drop out of high school tend to have more overt behavioral issues. With girls, “it might be more subtle,” he said.
Knowles developed the early-warning-system model using regression. After trying out many variables, he found that he could determine with 60 percent accuracy who would eventually drop out of high school by looking at a sixth grader’s attendance record, disciplinary record, state assessment scores and whether the student switched schools. Knowles said that the assessment test scores were particularly powerful in combination with the attendance record in predicting dropouts. And the more assessments he had, the more accurate the model became. Knowles plans to release his source code for any other state or school district to use and customize. “You can add GPA or whatever other data you have,” he said.
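Until Knowles releases his code, here is a minimal sketch of what such a model might look like, assuming a logistic regression (one common reading of “regression” for a yes/no outcome); the feature names and training data below are synthetic stand-ins, not Wisconsin’s:

```python
# A toy early-warning model in the spirit of the one described above:
# regression on sixth-grade attendance, discipline, test scores and
# school mobility. All features and data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.uniform(0.70, 1.00, n),  # attendance rate
    rng.poisson(1.0, n),         # disciplinary incidents
    rng.normal(500.0, 50.0, n),  # state assessment score
    rng.integers(0, 2, n),       # switched schools (0/1)
])
# Synthetic labels loosely tied to the features so the model has signal.
logit = 7.5 - 8.0 * X[:, 0] + 0.4 * X[:, 1] - 0.003 * X[:, 2] + 0.8 * X[:, 3]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.0%}")
```

A real system would also need a probability threshold tuned to how many students counselors can actually serve, given the overidentification problem seen in Montgomery County.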
Many other U.S. school districts and states have developed, or are currently developing, data-driven models to predict dropouts. Administrators from Yonkers, N.Y. also presented their early warning system at the STATS-DC 2013 conference, but I didn’t get a chance to hear their presentation.
What to do about at-risk students once you identify them is still a mystery. The data doesn’t have answers for that…yet.
Education not as “pink” in the media as it is in the classroom
I was surprised to read on Jessica Bennett’s Tumblr blog that male sources outnumber female sources on the front page of the New York Times, even on the subject of education. Technology, politics, sure. But it’s shocking that there are 8 male sources for every 3 female ones when 76 percent of teachers are female. As I think back on many of the stories I’ve written, I am probably guilty of interviewing and quoting more men than women, too.
(The NYT front page count was conducted by students at the University of Nevada and reported on the Poynter website.)
Rich kid, poor kid, fewer middle class
David Johnson, chief of the Social, Economic, and Housing Statistics Division at the U.S. Census Bureau, points out that the latest data on U.S. children, America’s Children: Key National Indicators of Well-Being, released on July 8, 2013, shows growing concentrations of rich and poor.
“We see an increase in the children living at the high end and an increase of children living at the low end, with a shrinking of children living in the middle of the income distribution,” said Dr. Johnson on a July 8, 2013 podcast.
The chart below shows that about 10 percent of kids live in extreme poverty, below 50 percent of the poverty threshold. But 12 percent of children live above 600 percent of the poverty threshold; that is almost double what it was in 1991. To make this a bit more concrete, we’re talking about a family of four, two parents and two kids, that makes more than $136,866 a year ($22,811 × 600/100. Here’s a link to the poverty thresholds).
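Spelled out, the cutoffs are just multiples of that $22,811 threshold:

```python
# Income cutoffs implied by the 2011 Census poverty threshold for a
# family of four with two children, per the note on the chart below.
threshold = 22_811

print(f"extreme poverty (below 50%): under ${0.50 * threshold:,.0f}")  # $11,406
print(f"high income (above 600%): over ${6.00 * threshold:,.0f}")      # $136,866
```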
It is interesting to consider what the implications for education are. More demand for fancy private schools? More public schools packed with very poor kids? A growing achievement gap?
Indicator Econ1.B: Percentage of children ages 0–17 by family income relative to the poverty line, 1980–2011
NOTE: The graph shows income categories derived from the ratio of a family’s income to the family’s poverty threshold. In 2011, the poverty threshold for a family of four with two children was $22,811. For example, a family of four with two children would be living below 50 percent of the poverty threshold if their income was less than $11,406 (50 percent of $22,811). If the same family’s income was at least $22,811 but less than $45,622 the family would be living at 100–199 percent of the poverty threshold.
SOURCE: U.S. Census Bureau, Current Population Survey, Annual Social and Economic Supplements.
Colleges struggle to release data on post-graduation employment and other metrics
Inside Higher Ed reports that a pilot group of 18 colleges is struggling to release data on its education outcomes and post-graduation employment. “(T)he holes in the data were too large,” writes Inside Higher Ed’s Paul Fain, in explaining delays to the Gates Foundation-funded Voluntary Institutional Metrics Project.