I had long been under the impression that the United States had a particular problem in providing technical and specialized professional training for students who may not be academically inclined. But it turns out the United States isn’t alone: even nations with once-vaunted apprenticeship programs are no longer properly training students to enter the workforce.
A new report finds that the vocational training programs in 20 economically developed countries aren’t producing enough students with the skills to be junior managers, health care technicians and other workers that the labor market needs. It says that two-thirds of all job growth in the European Union is forecast to be in the “technicians and associate professionals” category. In the United States, nearly one-third of job openings in 2018 will require some sort of professional training after high school, but not a four-year degree, according to the Nov. 13, 2014 report, “Skills Beyond School Synthesis Report,” by the Organization for Economic Cooperation and Development (OECD).
Yet, in most countries, less than 20 percent of the labor force (aged 20 to 45) has a vocational certification. In the United States, only 12 percent of the labor force has a vocational certification. Another 10 percent has an associate degree; some of these degrees require applied career training, but many do not.
“Every country needs to upgrade,” said Stanley Litow, an IBM vice president who helped launch a new “P-Tech” model for vocational training in the United States. “Even the good apprenticeship programs are training for a narrow set of skills. Change is happening at a much faster pace. What we need are strong academics and workplace preparation that doesn’t prepare people for one job that doesn’t exist in the future.”
It’s interesting to consider this report in light of bleak employment prospects for some college graduates with four-year bachelor’s degrees. Many end up in low-paying jobs that don’t require expensive degrees. Perhaps some undergraduate students should consider obtaining a high quality technical certification instead. To be sure, many of the underemployed college students studied humanities and social sciences, such as sociology and psychology. Who knows if these students would enjoy a professional training program in, say, management.
The report documents how programs designed for the factory floor, such as manufacturing, engineering and construction, are falling out of favor across Europe and other developed nations. In Germany, enrollments in some specialties have fallen 50 percent. But a specialty known as “Fachwirt” — roughly akin to business administration, and commonly hired in health care and other service fields — has risen in popularity by 45 percent, and is now the most common advanced vocational exam taken. Still, vocational training is falling out of favor altogether among young Germans. The number of advanced vocational certification exams taken each year fell 24 percent from 1996 to 2010.
In the United States, health care is the most popular vocational field, accounting for 25 percent of the vocational degrees and certifications in the labor force, followed by engineering, manufacturing and construction. Teacher training is one of the least popular vocational subjects in North America and most of Europe, but it is very popular in Japan, Korea and Denmark. (Table 1.1 on page 31 of the report shows the breakdown for each country by specialty.)
Data collection and analysis for vocational training is particularly difficult. Each country uses different labels and terminology. It’s a little-understood world of colleges, trade associations, diplomas, certificates and professional examinations without consistent standards. Vocational training is often lumped together with college or university education. Even to count the number of vocational degrees, researchers must tease out the numbers through statistical techniques and judgment calls. For example, some two-year degrees are more vocational in nature than others. U.S. students who complete vocational associate degrees make much more money than those who end their education with an academic associate degree, the report also notes.
The OECD authors argue that in order to enhance the image and status of vocational education, it should be uniformly renamed “professional education and training,” a term that is used in Switzerland.
But IBM’s Litow says that policy makers should focus more on revamping the content of vocational programs than on how they are labeled. “The problem is that vocational programs aren’t delivering,” he said. “We need to make large scale changes. I wouldn’t worry about what you call it.”
Litow hopes to expand his P-Tech model, which combines high school with two years of post-high school career training, including mentoring and internships, into nearly 100 schools by 2016. It began in 2011 as a pilot school in Brooklyn, N.Y., and is now in 27 schools across the country.
How do you identify a bad elementary school?
A new report out of New York City suggests that policy makers should identify troubled schools by their absenteeism rates — a relatively easy data point to obtain — and then work to fix the schools by addressing each one’s unique problems, from homelessness and child abuse to teacher turnover and safety.
The report, “A Better Picture of Poverty: What Chronic Absenteeism and Risk Load Reveal About NYC’s Lowest-Income Elementary Schools,” released Nov. 6, 2014 by the Center for New York City Affairs at The New School, is part of a growing body of literature that argues that the poverty level of schools’ populations isn’t necessarily a good way to identify schools that need extra resources.
The study points out that 87,000 elementary school children, from kindergarteners to fifth-graders, missed more than 10 percent of the 2012-13 school year. Some schools were affected much more than others by this chronic absenteeism. The researchers found that, at 130 elementary schools, at least one-third of the students were chronically absent for five consecutive school years. It was even worse at 33 of these schools, where more than 40 percent of the student body missed more than 10 percent of the school year for five years straight. (In New York City, 10 percent of the 180-day school year is about 18 missed school days, almost a month of school).
That affects not only the children who are missing school, but also the kids who are showing up. Teachers can’t move forward with new material when such a high percentage of children have missed earlier lessons and can’t keep up. The evidence is in the test scores: Only 11 percent of the students at schools with chronic absenteeism passed the city’s math and reading tests in 2012-13. Other schools with similar poverty levels but better attendance rates posted much higher test scores.
New York City Mayor Bill de Blasio took a different approach when he announced a new initiative on Nov. 3 to help a group of failing schools. The schools on de Blasio’s list were selected because their test scores had been stubbornly low for several years straight. His $150 million program plans to bring community services, such as social workers, psychiatric help and health care, into schools in poor neighborhoods. The program also involves lengthening the hours of the school day and expanding the school year to weekends and summertime.
The New School’s report’s list of the 35 elementary schools with the worst attendance problems includes nine from the de Blasio list, but the New School’s list of needy elementary schools is longer. “Maybe not all of these are the bottom of the barrel,” said Kim Nauer, the lead author of the New School report. “But I would argue that they deserve community school funds.”
Nauer found that schools with chronic absenteeism were likely to be beset by other poverty-related problems, such as male unemployment in the neighborhood and high rates of homelessness. But there was no common set of risk factors that applied to all the schools with chronic absenteeism. Her team created a “risk load” assessment, scoring every public elementary school in the city on 18 different risk factors. She hopes that this will help school leaders and community groups identify the support that each school needs. You can search her data by school here.
The New School Center has been documenting chronic absenteeism in New York City’s schools since 2008, when almost 29 percent of the city’s students were chronically absent. That declined to under 25 percent in 2012-13.
One school that has succeeded in combating chronic absenteeism is P.S. 48, an elementary school in South Jamaica, a poor neighborhood in Queens. Back in 2012, 160 of the school’s 550 students had been chronically absent. Principal Patricia Mitchell slashed that to about 20 students in 2013. Her school’s passing rate on the city’s standardized test more than doubled. Previously a social worker, Mitchell said she had paid home visits to understand why students weren’t coming to school. She learned, for example, that some parents weren’t getting to the laundromat often enough and didn’t want to send their kids to school in dirty uniforms. So she bought a washer and a dryer for the school and has school aides doing the laundry. She also holds regular parties to celebrate improvements in students’ attendance, replete with pizza, music, Macy’s gift cards and outings to Dave & Buster’s video arcade. “It’s like they won the lotto,” Mitchell said.
But not all schools that have reduced absenteeism have seen academic improvements. According to Nauer, a group of schools in the South Bronx hasn’t seen an improvement in test scores despite achieving better attendance figures. In the case of the Queens elementary school, it might be other things that Mitchell has done to improve the school that are raising student performance. That same year that attendance soared, Mitchell also won a $1.2 million grant from an outside foundation to bring in 15 community groups to help students and provide after-school enrichment programs. Not many high-needs schools have savvy administrators with the time and energy to write a successful grant proposal like that.
“Chronic absenteeism — it’s the signal,” said Nauer. “Then you need to fix the schools. You need to do detective work to figure out what’s going on. Each school is different.”
The number of charter schools surpassed 6,000 at the start of the 2012-13 school year, as the ranks of these schools — publicly financed, but privately run — grew 7 percent across the United States that year. This annual growth contributed to a 47 percent increase in the number of charter schools over the seven years since 2006-07.
The charter school data came as part of a “first look” report of annual data collected by states and school districts for the federal government, and released by the National Center for Education Statistics on Thursday, October 30, 2014. The full 2012-13 Common Core of Data report, as it is called, is expected to be released later this year.
Still, at 6,079 schools in total, charters represented only 6 percent of the U.S. public school system of 98,454 elementary, middle and high schools.
Number of charter schools in each state during the 2012-13 school year
(Interactive map created by Jill Barshay of The Hechinger Report.)
Charters are unevenly spread throughout the country. The first interactive map above shows that California leads the country with more than a thousand charter schools. Texas is number two, with more than 600 charters, followed by Florida with more than 500.
It’s interesting that charters have often become big political issues in states where relatively few charter schools operate. For example, in Connecticut — where controversies over charters have become part of this fall’s governor’s race — there are only 17 charter schools. Eight states — Alabama, Kentucky, Montana, Nebraska, North Dakota, South Dakota, Vermont and West Virginia — don’t allow charter schools to operate. Washington State didn’t either back in 2012-13, but its laws have since changed and its first charter school opened this past fall in 2014.
The federal government defines a charter school loosely as any school that provides free public elementary or secondary education under a charter. Typically, charter schools receive a per-pupil allotment of funds from the state or a local school district. Many charters supplement that with private fundraising. Most charters hire non-union teachers, but some have unionized faculty. Most charters operate independently of their local school districts and aren’t required to follow many of the district rules and regulations.
Charter growth rate in each state. Annual change between 2011-12 and 2012-13
(Interactive map created by Jill Barshay of The Hechinger Report.)
Growth rates also have varied widely. In some states with an established charter movement, such as California, Florida and New York, there has been double-digit annual growth. In Texas and Louisiana, by contrast, growth is slowing. The biggest one-year increase in the number of charter schools was in New Hampshire, jumping from 15 to 22 schools, a 47 percent increase.
Charter school advocates (naturally) expect to continue seeing steady growth. The National Alliance for Public Charter Schools says that, when the data is released for the 2013-2014 year, it will show 600 new charter schools added that year and more than a 10 percent jump in student enrollment with 288,000 additional students attending charters. “It’s the largest increase we’ve seen in 14 years,” said Katherine Bathgate, director of communications and marketing at the advocacy group.
Bathgate says the waiting lists for seats in charter schools, which her group tracks, are growing longer even as the number of charter schools increases. If such demand continues, she predicts charter school growth will continue as well.
Growth in some states is tamped down by laws that cap the number of charter schools. At the same time, other states are relaxing laws to permit more charters. For example, Mississippi recently changed laws that had made it very difficult for a charter to open. In Washington State, where charters are newly allowed, eight new charters are expected to open in fall 2015.
Additional charter school data, including the number of students enrolled, is expected to be released in November or December.
Correction: An earlier version of this column incorrectly stated that state lawmakers changed the law in Washington State, permitting charter schools to exist. The new law was approved directly by voters through a ballot initiative in November 2012. The text has been corrected.
What’s the best way to teach writing? The experts have many answers — and they often contradict each other.
In contrast to the thousands of studies on effective methods for teaching reading and mathematics, there are relatively few rigorous studies on writing instruction. That’s partly because it’s time-consuming and expensive to assess writing quality in a way that can be quantitatively measured. Commonly, researchers come up with an eight-point scale. They write descriptions and sample essays to show what each score involves. Then they train teams of graders to score properly and consistently. But writing quality is ultimately a subjective judgment. What you consider to be well-written, I might not.
Steve Graham, a professor of education at Arizona State University, has made a career out of monitoring research studies on teaching writing, to figure out which methods actually work. For a forthcoming article*, Graham and two colleagues, Karen Harris of ASU and Tanya Santangelo of Arcadia University, looked at approximately 250 of the most prominent studies on how to teach writing to students from kindergarten through 12th grade.
Graham’s review of the research doesn’t resolve the age-old debate of whether students learn writing best naturally — just by doing it — or through explicit writing instruction.
But there are effective practices where the research is unequivocal. Distressingly, many teachers aren’t using them. “We have confirmation of things we know that work, but are not applied in the classroom,” said Graham.
Here are three:
1. Spend more time writing
To teach kids to write well, you need to ask them to write a lot. You’re not going to become a great basketball player unless you play a lot of basketball. The evidence is strong that this is true for writing too. Five studies of exceptional literacy teachers found that great teachers ask their students to write frequently. In nine separate experiments with students, 15 additional minutes of writing time a day in grades two through eight produced better writing. Seventy-eight percent of studies testing the impact of extra writing found that students’ writing quality improved.
Several studies found unexpected bonuses from extra writing time. Not only did writing quality improve, so did reading comprehension. Another cluster of studies found that writing improves a student’s mastery of the subject; the act of writing helps you learn. (Another reason for teachers to refrain from spoon-feeding printed notes to students.)
However, surveys of U.S. teachers reveal that after third grade, very little time is spent writing in classrooms. In fourth through sixth grade, on average, 20-25 minutes a day is spent on writing, according to Graham. Writing assignments rarely extend beyond a page; sometimes they’re not more than a paragraph. This is what teachers self-report, and if anything they’re probably overstating how much writing they’re asking of students.
In a 2011 survey of classroom writing instruction, “A Snapshot of Writing Instruction in Middle Schools and High Schools,” published in English Journal, Arthur Applebee and Judith Langer at SUNY Albany found that U.S. students were expected to write only a total of 1.6 pages of extended prose for English a week, and another 2.1 pages for all their other subjects combined. Applebee and Langer also observed classrooms across the four core subjects (English, science, math and social science/history) and found that, on average, only 7.7 percent of classroom time was devoted to writing a paragraph or more. Applebee and Langer called that “distressingly low.”
Why so little writing? Graham hypothesizes that many English language arts teachers are more passionate about literature than teaching writing. But in surveys teachers often say they don’t assign more writing because they don’t have the time to read and provide feedback on frequent long assignments. I can sympathize with a high school English teacher who has 37 kids in her class.
One could argue that fewer high quality writing assignments might be better than a bunch of low quality ones. But again, the teacher surveys and classroom observations reveal that students are more commonly asked to write summaries. “We don’t see a high level of writing activities that involve analysis and interpretation,” said Graham. “We’re not seeing development of skills you need for college and the workplace.”
Common Core may change things, as the standards ask for more writing and analysis, not just in English class but also in the social sciences, hard sciences and math.
It’s unclear what the ideal amount of time for writing is. Graham, who wrote a teachers’ guide of evidence-based techniques for teaching writing for the What Works Clearinghouse unit of the Department of Education, recommends one hour a day. He admits he doesn’t have research to substantiate that number. But he may be onto something: When Poland increased its language arts classes to more than four hours a week for each student, its scores on international tests began to soar.
2. Write on a computer
In 83 percent of 30 studies on the use of word processing software, students’ writing quality improved when they wrote their papers on a computer instead of writing by hand. The impact was largest for middle school students, but younger students benefited, too. The theory is that students feel more free to edit their sentences because it’s so easy to delete, add and move text on a computer. The more editing, the better the final essay.
I was concerned about how these experiments were constructed. Could graders have been more biased toward these word-processor essays because typed fonts are more legible than hand-written ones? In most cases, the hand-written essays were retyped first before the graders scored them. So graders had no idea which essays had been drafted by computer and which by hand, and still the word-processor essays were rated higher.
It’s also possible that the spell checkers and grammar checkers that are sometimes bundled with word processing software enable students to submit cleaner drafts, which are perceived to be of higher quality.
Some educators feel passionately about the importance of writing by hand, convinced that the act of writing neurologically imprints stronger memories. And there’s some early evidence that note taking might be more effective by hand. But if your goal is writing quality and not memorization, it seems the evidence points to word processing, especially beginning in middle school.
Another benefit for educators who believe that students should write not just for teachers: computerized text files are easier to share with classmates, providing more opportunity for a real audience and feedback.
Despite this evidence, teacher observations and surveys reveal that teachers have been slow to adopt this basic technology. In Arthur Applebee and Judith Langer’s observations, students used word processing software in only 5.1 percent of the classes. Separate 2008 and 2010 surveys by Graham show that “too many schools still use pencil and paper as the primary or only writing medium,” he wrote.
3. Grammar instruction doesn’t work
Traditional grammar instruction isn’t effective. Period. Six studies with children in grades three to seven showed that writing quality actually deteriorated when kids were taught grammar. That is, graders scored the essays of students who’d been taught traditional grammar lower than those of students who had not received the lessons.
Three studies did show that teaching kids how to combine two simple sentences into a single complex sentence was beneficial. (As a writer, I find that baffling as I am always trying to shorten my sentences! That makes me question the judgment of the essay graders.)
But traditional grammar — diagramming sentences or teaching grammar rules — didn’t help. Graham suspects that’s because grammar lessons often feel disconnected from actual writing. Graham found one study that showed great improvement in student writing quality when teachers modeled correct usage, showing how to use grammar rules in sentences that students were drafting. But not many experimental studies are looking at effective procedures for teaching grammar.
In this case, classroom practice isn’t totally at odds with the research. Grammar instruction has declined in U.S. classrooms over the last 40 years. But that might be because there isn’t much writing instruction going on at all.
* “Research-Based Practices and the Common Core: Meta-Analysis and Meta Synthesis,” (in press for The Elementary School Journal)
On average, low-income urban high schools with high concentrations of minority students sent about half, or 51 percent, of their 2013 graduates to college in the fall immediately following graduation. That could be either a two-year or a four-year college or university. By contrast, 70 percent of the students from high-income urban high schools with few minority students were enrolled in college in the fall. (Only high-income, mostly white suburban high schools did better, with 73 percent of their graduates enrolled in college in the fall.)
But these averages mask big differences among public city high schools. The same data show that the best 25 percent of the low-income, minority schools — about 130 high schools in the data sample — sent at least 60 percent of their 2013 graduates to college in the fall of 2013. The number was much higher at some. By contrast, the worst 25 percent of the high-income schools — about 60 of them — sent fewer than 60 percent of their graduates to college in the fall. (Low-income means that more than 50 percent of the students qualify for free and reduced-price lunch. High minority means that more than 40 percent of the students are black and/or Latino.)
“In every category of high school, there are clearly schools that are beating the odds,” said Doug Shapiro, director of the National Student Clearinghouse Research Center, which published the report, “High School Benchmarks 2014.”
This chart shows that 38 percent or fewer of the 2013 high school graduates from the bottom quarter of low-income high schools with high concentrations of minorities went to college in the fall of 2013. But among the top quarter of these low-income high schools, 60 percent or more of the students went to college in the fall.
It’s no surprise that some successful low-income schools would be doing much better than the average low-income school. And I wouldn’t be surprised to hear about a handful of low-income schools, perhaps small magnet schools that cream off the top students, that are doing as well as higher income schools. But what is surprising is that so many low-income schools — 25 percent of them — are doing better than so many of the higher-income schools. You wouldn’t expect such a big overlap, especially when the means for each group are 20 percentage points apart.
This data speaks to both sides of the debate on education reform. Those who say that income determines educational outcomes argue that you can’t reasonably ask schools to overcome a student’s family background. And they can point to the data here, which show that students who attend higher-income city high schools, on average, are 37 percent more likely to go straight to college than students from low-income high schools. Indeed, the fact that 75 percent of the high-income schools have more students going to college than 75 percent of the low-income schools is strong evidence for those who say that income matters.
But the beating the odds data is music to the ears of the so-called school reformers who argue that better schools and teachers can get better student results. The fact that a quarter of the low-income schools are outperforming the high-income schools is exactly the kind of data that supports their cause. Interestingly, none of these are charter high schools. The National Student Clearinghouse excluded charters from the analysis because it was concerned that its charter school participants were too few to be nationally representative.
The National Student Clearinghouse isn’t revealing which high schools are outperforming or underperforming. But it hopes to publish the names of the high performing high schools in the future so that their practices can be studied and replicated. “Possibly next year,” said Afet Dundar, associate director of the National Student Clearinghouse Research Center.
While the National Student Clearinghouse is now tracking a giant data set of 3.5 million high school graduates from 2010 to 2013, a big shortcoming is that the data isn’t a nationally representative sample. It includes only student data from high schools that voluntarily participate in StudentTracker, a service that the National Student Clearinghouse markets to high schools so that they can see where their graduates end up. More than 3,000 high schools in all 50 states participate, covering 25 percent of all high school graduates in the country. The participation rate is even higher in urban districts, where 65 percent of the largest 100 districts participate, covering 40 percent of all urban high school graduates. The National Student Clearinghouse didn’t reveal which districts are or are not participating, so it is unclear how the missing high schools might be skewing the data.
Another shortcoming is that the data don’t give you a sense of how the students are faring in college. It does not reveal if the students are taking remedial classes, essentially repeating what they should have learned in high school. And we don’t have data yet on whether these students are eventually graduating from college.
This is the second year of the National Student Clearinghouse’s high school report and there aren’t enough years yet to show trends as to whether more kids are going to college than in the past. But it’s a welcome additional data point, beyond standardized test scores, to see which high schools are doing a good job.
The already muddy research on whether it’s better to hold back struggling students or promote them to the next grade just got muddier. A new study, “The Scarring Effects of Primary-Grade Retention? A Study of Cumulative Advantage in the Educational Career,” by Notre Dame sociologist Megan Andrew, published Sept. 26, 2014, in the journal Social Forces, is an empirically solid analysis that adds more weight to the argument that retention — what education wonks call repeating a grade — is ultimately harmful.
Andrew mined two large data sets in a way no researcher has done before and concludes that kids who repeat a year between kindergarten and fifth grade are 60 percent less likely to graduate high school than kids with similar backgrounds, and even 60 percent less likely to graduate high school than siblings in the same family.
Before I discuss Andrew’s paper in more detail, it’s helpful to understand some history. Most early research overstated how harmful it is to be held back a grade. It tended to point out that the struggling kids who repeat a grade don’t fare as well as kids who stay with their class, most of whom are not struggling. But that’s shoddy research. These studies didn’t compare the held-back kids with the kids who were also failing, but were promoted nonetheless.
In data analysis terms, this early research conflated the bad effects of being held back with the bad effects of the underlying issue that led a school (or a parent) to hold the child back in the first place. Consider a child who has trouble paying attention, can’t read by the end of fourth grade and is held back. Say this child continues to get bad grades, tests poorly and eventually drops out of high school. Did the stigma of repeating fourth grade cause the child to become demoralized and to perform worse at school? Or was it his ongoing struggle with attention deficit disorder? If he had been promoted, would his academic career have turned out differently? These early studies don’t say.
Even as the low-quality research kept showing that holding kids back was bad, a growing chorus of critics urged schools to end “social promotion,” the practice of passing failing students on to the next grade. As my Hechinger Report colleague Molly Callister wrote here, 15 states and the District of Columbia have adopted policies requiring third-grade reading proficiency before a student can move to fourth grade. Two big cities, Chicago and New York City, undertook ambitious experiments in ending social promotion.
Those urban experiments attracted sophisticated researchers. Brian Jacob and Lars Lefgren studied students in Chicago, where the decision to hold a student back was based on a test score. The researchers were able to compare the experience of students who scored just below the threshold for passing with the experience of students who scored just above the threshold. Because of test measurement errors, these students were effectively testing at the same level — academically identical. But half were held back and half were promoted. In a 2009 paper, Jacob and Lefgren found that the harmful effects of retention largely melted away when comparing these two groups of students. Students held back in older grades still suffered a bit, but there was no decrease in high school graduation for students who’d been held back young. (Jacob, Brian A., and Lars Lefgren. 2009. “The Effect of Grade Retention on High School Completion.” American Economic Journal: Applied Economics, 1(3): 33-58.)
Four years later in 2013, a RAND study looking at New York City’s experiment with ending social promotion came to a similar conclusion — retention isn’t harmful. It also found that the kids who repeated fifth grade were better off than kids who just squeaked by and passed the test and moved on to sixth grade. (Study: “The Academic Effects of Summer Instruction and Retention in New York City.” Educational Evaluation and Policy Analysis, v. 35, no. 1, Mar. 2013, p. 96-117)
So a growing consensus was emerging in the research community that holding a kid back in younger grades isn’t harmful and sometimes helpful if accompanied by support services, such as summer school, tutoring and advising.
And now Andrew’s paper — contradicting the new consensus — lands. It’s a quantitatively rigorous study finding harmful effects for younger children. She looked at more than 37,000 children across the United States from two older multi-year surveys (NLSY 1979 and NELS 1988) and found that about 10 percent had been held back at school, most of them during the 1980s. The surveys included details of the family characteristics of the children. That allowed Andrew to create 6,500 matched pairs of students, where the retained and non-retained students had similar backgrounds. Their mothers had attained the same level of education and their families had the same household income. The students had scored the same on a pre-school cognitive test. (In layman’s terms, they started school with similar IQs). The matched students also had similar behavioral problems, as reported on the surveys. Home environment, gender and race were factored in, too. In other words, Andrew matched the held-back students with students who were equally “at risk” for being held back, but weren’t.
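Andrew’s matching strategy can also be illustrated with a small sketch. The covariate names and values below are hypothetical stand-ins; her actual procedure matched on survey variables from the NLSY and NELS, such as mother’s education, household income and pre-school test scores:

```python
# Toy sketch of covariate matching: pair each retained student with a
# promoted student who has the same observable background. Fields and
# values are invented; the real study used detailed survey variables.

from collections import namedtuple

Student = namedtuple("Student", "id retained mom_edu income_band pretest_band")

students = [
    Student(1, True,  "HS",      "low",  "mid"),
    Student(2, False, "HS",      "low",  "mid"),   # matches student 1
    Student(3, True,  "college", "mid",  "low"),
    Student(4, False, "college", "mid",  "low"),   # matches student 3
    Student(5, False, "college", "high", "high"),  # no retained counterpart
]

def background(s):
    # The matching key: everything observable EXCEPT the treatment itself.
    return (s.mom_edu, s.income_band, s.pretest_band)

retained = [s for s in students if s.retained]
promoted = [s for s in students if not s.retained]

pairs = []
for r in retained:
    for p in promoted:
        if background(r) == background(p) and p not in [q for _, q in pairs]:
            pairs.append((r, p))   # each promoted student is used only once
            break

print(f"{len(pairs)} matched pairs")
```

The limitation, which matters later in this story, is baked into `background()`: the pairs are only as good as the variables the survey happened to record.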
Then Andrew looked at whether these matched students eventually graduated from high school. And that’s where she found that the held-back children were 60 percent less likely to have graduated from high school than their matched “partners” who stayed on grade level. Andrew went one step further to see if she could reproduce the results in a different way. Using the 1979 survey, which included sibling information, she compared children who were held back with their siblings who weren’t. Again, she found the same result. Even in the same family, held-back kids were 60 percent less likely to graduate from high school than their brothers and sisters. Astonishing!
Andrew acknowledges that held-back students often show a short-term boost in their grades and test scores, but she believes this boost “disappears” after just a few years. A sociologist by training, Andrew hypothesizes that being held back is so psychologically scarring that many students fail to regain their confidence in the long term. In her paper Andrew argues that being held back is one of the biggest negative events of a child’s life. “In surveys, students rank being retained in grade second only to a parent’s death in seriousness in some cases,” Andrew wrote.
At first blush, the data seem to defy common sense. (Data have a way of doing that!) Kids, especially boys with fall birthdays, are commonly held back in kindergarten as they get another year to mature. I have a hard time believing that they’re 60 percent less likely to graduate from high school than the kid who stayed with his class and moved on to first grade.
Unfortunately, Andrew wasn’t able to test whether kindergarten retention was less scarring than, say, fourth-grade retention. But by email she explained that the majority of the students in her sample were held back in the earliest grades, which means that even held-back kindergartners were less likely to graduate from high school.
How much you buy Andrew’s conclusions depends on how similar you think her paired children are. If there were a characteristic that prompted a parent to hold back one child that his statistical “partner” doesn’t have, then the analysis isn’t clean. Her control group (the promoted partner) isn’t otherwise identical to the treatment group (the retained child). Andrew’s data sets didn’t list every behavioral problem and learning disability, so she couldn’t control for Attention Deficit Hyperactivity Disorder (ADHD) and many other conditions. It’s quite possible that some of the held-back children had behavior issues or a mild learning disability and the promoted partner child didn’t. Years later, when Andrew found that the held-back child didn’t graduate from high school, it’s possible that factors related to the student’s behavior or learning issues — being placed in an alternative academic track, for example — impeded his academic career and not the psychological scarring of being held back in first grade.
I don’t want to suggest that ADHD makes it hard to graduate from high school, but I am trying to explain how Andrew’s research can fall into the same trap that the early research on retention fell into. It can accidentally conflate the bad effects connected to a behavioral or learning problem with the bad effects of the retention.
I asked Andrew how a parent should factor in her research when deciding whether to hold a student back. “My study is not a parent’s how-to guide on retention,” she replied by email, explaining that holding a child back is a very personal decision. The most important thing is to address your child’s underlying academic problems, whether you’re holding him back or passing him on to the next grade.
She explained her study is aimed at education policy officials who are deciding whether to have high-stakes tests that determine who moves on and who is held back. “My study is an argument about how a very expensive policy, grade retention, may actually undermine our shared goals of ensuring every child gets a quality education,” she replied. “I would argue that my study is evidence that we might take funds used for an expensive and likely deleterious policy and use them for earlier, pre-school interventions and …supplemental services… to help get a student up to speed.”
Even education data geeks agree that education data is completely inscrutable and inaccessible to parents
One of the many provisions of the 2001 federal education act, known as No Child Left Behind, was a requirement that states had to issue a “report card” for every public school. The report cards include things you might expect like student test scores and test score changes, but also a laundry list of data from graduation rates to school demographics.
Part of the purpose of making this data available was to help parents see how the students in their children’s school were faring and make more informed choices, whether it’s pressuring the school and district to do better, or taking their children elsewhere.
More than a decade later, much of this data remains inaccessible and inscrutable to parents — even to education experts.
To see the report cards in Florida, for example, you’d have to download Excel spreadsheets, or you can try clicking through a series of user-unfriendly screens. In Hawaii, each school’s report includes a baffling distribution chart (see graphic on the right). There is a separate document, twice as long as the report card, that explains how to interpret all the figures and acronyms. Minnesota’s report cards disclose the number of students who are eligible for “celebration.” Of what, one might wonder, birthdays? Another state awarded an elementary school 17.29 points for an “average growth z score” without further explanation. The Education Commission of the States (ECS) issued an August 2014 report, “Rating States, Grading Schools,” and concluded that too many report cards are hard to find and hard to understand.
It’s a sad outcome because the parents who could benefit most from this data tend to live in the most disadvantaged and disenfranchised communities, where school improvement is most needed. These report cards need to be easily digestible.
“There are smart statistical people good at cranking out data, but they are not known for design. What is good design for a policy wonk is not good design for parents and policy officials,” said John Bailey, a former Bush White House official, who now wears several hats in the education policy world, including vice president at the Foundation for Excellence in Education. (The conservative organization, headed by former Florida Gov. Jeb Bush, supports the greater use of education technology, promotion of charter schools and measuring schools by student test performance.)
By coincidence, in June 2014, Bailey had just collaborated on a paper about the role of prizes and competitions in generating good ideas. And he was inspired by a health design challenge where visual graphic designers reimagined patient information records. “There’s all this very wonky complicated data that’s given to patients that doesn’t make a lot of sense. It’s the same in education,” he said.
So Bailey came up with the idea to use $35,000 of his organization’s money to launch an education data design competition. Ed Excellence has reached out to the design community both for contestants and for judges. The winner will be announced in December.
It sounds like a fun fall project for a Rhode Island School of Design student. But it’ll be a tough challenge for even a seasoned graphics expert to put a school’s test scores in the context of its student demographics in an understandable way. If you just report straightforward test scores, schools with rich students will likely have higher scores than those with poor students. Showing how much students learn each year is better. But measuring academic growth is a complicated task and it’s hard to explain simply. Using regression analysis to adjust for poverty — even more complicated!
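To see why that last step is hard to explain simply, here is a minimal sketch of what “adjusting for poverty” means in practice: regress school scores on the share of low-income students, then treat each school’s residual as its poverty-adjusted performance. The schools and numbers are invented for illustration:

```python
# Toy sketch of "adjusting for poverty" with ordinary least squares.
# Regress average test scores on the share of low-income students, then
# read each school's residual (actual minus predicted score) as its
# poverty-adjusted performance. All schools and numbers are invented.

schools = {             # school: (share_low_income, avg_test_score)
    "Oak":    (0.10, 88),
    "Maple":  (0.30, 80),
    "Pine":   (0.60, 70),
    "Birch":  (0.80, 68),
}

xs = [v[0] for v in schools.values()]
ys = [v[1] for v in schools.values()]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n

# OLS slope and intercept from the usual closed-form formulas.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# A positive residual means the school outperforms what its poverty
# level alone would predict.
residuals = {name: score - (intercept + slope * pct)
             for name, (pct, score) in schools.items()}
for name, r in sorted(residuals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {r:+.1f}")
```

In this made-up example the high-poverty school, Birch, comes out with the largest positive residual even though its raw score is lowest — exactly the kind of result that is meaningful to a statistician and baffling on a one-page report card.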
Hallmarks of good design are simplicity and minimalism. But state report cards are required to include lots of data. One of the problems mentioned in the ECS report is that parents are already overwhelmed by data. It cited one parent who complained that one report card was “[l]ike reading a corporate financial report of 20 pages.” Bailey hopes that designers will come up with a simple starting point, but give parents an intuitive way to dig deeper if they’re interested. “You don’t have to show all the proficiencies by subgroup on the same page,” Bailey suggested.
Some states and urban school districts have tried to improve their report cards. Washington, D.C. recently revamped its school report cards on its LearnDC website and they’re an inspiring model of simple, clean design. Here’s an example. (Still, there are numbers that lack context. What’s that school classification score based on?) And New York City announced on Oct. 1, 2014 that it was revamping its school report cards. In addition to jettisoning simple letter grades for each school, the city is trying to make them parent friendly. Here’s a model example with fictitious numbers. It’s written in plain English, but way too wordy. And for a city that’s filled with brilliant designers, this document lacks inspiration, colors and graphics. Probably not a design competition winner.
Bailey says his ultimate goal is to raise the quality of discourse on education. “The more information that parents and policy makers have, the more informed the debate is going to be,” he said.
Useful report cards also won’t hurt the cause of data proponents, who’ve recently been burnt by the demise of the national student data warehouse inBloom. If ordinary parents start to see data as something useful and not just a threat to their children’s privacy, perhaps the data geeks will have enough public support to be able to resurrect their dream of mining vast amounts of student data to improve education.
Related story: Big data and schools: Education nirvana or privacy nightmare?
Despite signs of a national economic recovery, homelessness in U.S. public schools increased 8 percent from the previous year, to 1.26 million students, in the 2012-13 school year. That may not sound terrible, but consider that it is part of a 58 percent jump in the number of homeless students in the six years since the start of the economic recession of 2007-08.
Percent change in the number of homeless students in U.S. public schools over six years (2007/08 to 2012/13)
(Zoom in and click on any state to see actual numbers of homeless students and annual percentage changes for each state. Interactive map created by Jill Barshay and Sarah Butrymowicz of The Hechinger Report.)
“It’s safe to say there’s been a significant increase in homelessness in schools,” said Diana Bowman, director of the National Center for Homeless Education. Her organization, funded by the U.S. Department of Education, provides technical assistance for the federal Education for Homeless Children and Youth Program.
The U.S. Department of Education quietly released this data on homeless students, in grades pre-K through 12, without issuing a press release or detailed report. The new data were added to a publicly accessible database on September 22, 2014 as part of its annual Consolidated State Performance Report Data.
Some states saw much larger than average one-year increases in homelessness. Student homelessness in New Jersey grew by 77 percent and in Alabama by 68 percent over the most recent one-year period. Washington, D.C., Maine, Montana and New York also experienced sharp increases in the number of homeless students.
But Bowman cautioned against putting too much stock in sharp one-year fluctuations. States sometimes change counting methodologies; longer multi-year trends are more reliable.
More important, and distressing, is the data for the six-year period. Some less populous states saw some of the largest percentage increases in student homelessness. The number of homeless students grew by more than 140 percent in Oklahoma, Hawaii, Alabama, West Virginia, Montana, Idaho, North Dakota and Washington D.C. The color-coded map above highlights which states have suffered the greatest increases in student homelessness since 2007.
The majority of homeless students are not sleeping outside on park benches. According to the Department of Education’s data, three-quarters of homeless children are temporarily living “doubled up” with extended family members or neighbors. (Table 3 on page 2 of this report, “Education for Homeless Children and Youth, Consolidated State Performance Report Data, School Years 2010-11, 2011-12, and 2012-13” shows where homeless school children spend the night.)
“A lot of people think of families living in shelters,” said Bowman. “But it’s really a lot of other situations where a lot of homeless children live.”
The Department of Housing and Urban Development defines homelessness more narrowly, often not including people who are living with others. But Bowman explained that it makes sense for the Department of Education to have a more expansive definition, because the children of these families have still lost their primary residence and are often switching homes and changing schools every few months. “The education disruption makes it hard for them to perform academically. They’re losing friends and teacher connections. They also have greater health problems and emotional stresses,” said Bowman.
Data source: U.S. Department of Education Consolidated State Performance Report Data, 2007-13. Google chart created by Jill Barshay, The Hechinger Report
Many researchers have documented how devastating episodes of homelessness are for a student’s academic performance, both in the short term and over the long term. McKinney-Vento funds were established by Congress in 1987 to support homeless programs. A portion of these funds goes to school districts based on the poverty rate in each district. But, as Table 2 here shows, more than a third of the nation’s 1.3 million homeless children are enrolled in school districts that haven’t received any of these McKinney-Vento funds.
A 2014 University of Pennsylvania study found that homelessness was the third most important risk factor to consider when thinking about support programs for disadvantaged children, and that poverty alone wasn’t necessarily harmful to a child’s academic career.
Perhaps with this well-documented rise of student homelessness, lawmakers will start to think about better ways to strategically allocate Title I education dollars — not just to low-income children, but to the low-income children who need them the most.
Data analysis methodology and explanation: Original source data is from the U.S. Department of Education’s Consolidated State Performance Report Data, in which states are required to report on a variety of figures, including homeless school children. To locate this data, go to eddataexpress.ed.gov, then click on “Build a State Table,” then “Build Table Now.” That will take you to a “State Tables” page. I clicked all states and then selected data under the “Homeless Program (McKinney-Vento)”. I selected “Total Number of Homeless Students Enrolled in LEAs with or without McKinney-Vento Subgrants – Total” for all six years available, 2007-08, 2008-09, 2009-10, 2010-11, 2011-12, 2012-13. “LEA” is a local educational agency, commonly known as a school district.
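Once the counts are downloaded, the percentage changes behind the map are simple arithmetic. A sketch, using invented state counts in place of the real spreadsheet:

```python
# Sketch of the percent-change calculation behind the map, using
# invented homeless-student counts in place of the downloaded data.

counts = {  # state: {school_year: homeless students enrolled}
    "StateA": {"2007-08": 10_000, "2012-13": 24_000},
    "StateB": {"2007-08": 50_000, "2012-13": 54_000},
}

def pct_change(state, start="2007-08", end="2012-13"):
    old, new = counts[state][start], counts[state][end]
    return 100 * (new - old) / old

for state in counts:
    print(f"{state}: {pct_change(state):+.0f}%")
```

With these invented numbers, StateA works out to a 140 percent jump over six years and StateB to 8 percent, which is the kind of spread the color-coded map is built to show.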
Education theories come and go. Experts seem to advocate for polar opposites, from student discovery to direct teacher instruction, from typing to cursive hand-writing, and from memorizing times tables to using calculators. Who can blame a school system for not knowing what works?
One big problem is that education scholars don’t bother to replicate each other’s studies. And you can’t figure out which teaching methods are most effective unless the method can be reproduced in more than one setting and produce the same results. A new study, Facts Are More Important Than Novelty: Replication in the Education Sciences, published August 14, 2014 in Educational Researcher, found that education researchers have attempted to replicate other researchers’ results a scant 0.13 percent of the time. Compare that with the field of psychology, where there’s a 1.07 percent replication rate, eight times the rate in education. By contrast, replications within the field of medicine are commonplace and expected.
“When we teach science, we teach students that it’s important for other people to get the same findings as you,” said Matthew C. Makel of Duke University, one of the study’s co-authors. “Replication is a key part of the error-finding process. In education, if our findings cannot be replicated, we lose a lot of credibility with the scientific world and the greater public.”
“Error — or limited generalizability — won’t be found if no one looks,” he added. “And our findings show that, for the most part, in education research, we aren’t looking.”
Makel and his University of Connecticut colleague Jonathan Plucker conducted a text search through the entire publication history of the top 100 education journals and found that only 221 out of more than 165,000 scholarly articles were replication studies, in which researchers tried to reproduce the results of earlier studies. (The 221 includes both exact replications and approximate ones where the experiments were tweaked a bit, say, to see if the intervention would work with a different type of student).
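The headline rates are straightforward to check from the paper’s counts (the article total is reported as “more than 165,000,” so the figures are approximate):

```python
# The replication rate reported in the study is just a ratio of counts.
replications = 221
total_articles = 165_000   # "more than 165,000"; the rate is approximate

rate = 100 * replications / total_articles
print(f"{rate:.2f}% of articles were replications")

# Psychology's reported rate, for comparison:
psych_rate = 1.07
print(f"psychology replicates roughly {psych_rate / rate:.0f}x as often")
```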
You’d think in education, where best practices could actually help millions of children, there would be a priority on reproducing results. So why so little replication?
Part of it is unique to education. In psychology, for example, you can reproduce results fairly easily using another group of 25 undergraduates in a laboratory or clinic setting. In education, it’s far more complicated to find similar groups of students in similar school settings. Often poverty levels and racial makeups vary. And no two teachers are the same. Each will invariably put his own spin on the teaching method being tested. Many parents and school leaders are understandably reluctant to experiment on children at all.
The culture of the Ivory Tower is also to blame. Professors live by a “publish or perish” mentality. Their tenure, prestige and research funding are often based on how many articles they can get published in leading journals. And the editors and reviewers of these journals (staffed by fellow university professors) have a bias toward the new and the novel.
I talked with Steve Graham, an Arizona State University professor of education, who has edited five education journals. He says he gets 600 submissions a year for Educational Psychology alone, and thus has “the luxury to be very choosy.” He says he doesn’t publish replication studies “unless they cover new ground” (a sort of contradiction in terms). “We want studies that have significant new impact,” he said. “There’s a bias built in. I’m not saying it’s a good thing. It’s a problem. I recognize it.”
The problem affects Graham directly because his own research work involves meta-studies of how to teach writing — that is, he synthesizes other researchers’ papers on effective writing instruction to figure out what works. The lack of replications makes his work difficult. “I got a lot of noise in my meta-analyses,” he explains.
Graham suspects that if there were more research funds earmarked for replications, more academics would apply for them and conduct replication studies. (Foundations out there: hark, there’s a new way to fix education!) The American Psychological Association, also worried about the dearth of replication in its field, is looking to launch a new journal exclusively devoted to publishing replication studies. Perhaps education can create one too.
To be sure, Makel and Plucker’s word-search methodology — where they looked only for variants of the word “replicate” — may be exaggerating the lack of scientific process in education research. Neil Seftor, an economist at Mathematica, runs the What Works Clearinghouse for the Department of Education. He specifically examines what the majority of scientific studies say about the best way to teach, or about a particular curriculum or textbook. He admits that exact replications are rare, but says he wouldn’t be interested in exact replications such as those in a laboratory setting. “What you want in education is evidence over a variety of settings in the real world — urban areas, special ed,” Seftor said. When he searches for studies on new interventions, he said he often finds dozens of papers on each one, but they might not have the word “replication” anywhere in their text.
“I don’t think it would be fair to say that there are all these educational approaches out there and they’ve only been studied one time,” Seftor said. (Admittedly, many of the studies Seftor looks at are unpublished and financed by the developer of the curriculum.)
Seftor, of course, would welcome more scientific studies on education theories. Often, when he is developing practice guides for teachers, the teaching methods recommended by experts don’t have much scientific evidence to support them.
In the meantime, the American Educational Research Association (AERA), which publishes a number of top education journals including Educational Researcher, decided last year (2013) to publish AERA Open as a new open-access research journal, and is specifically encouraging the publication of peer-reviewed replication studies in it. It just began accepting submissions on Sept. 15.
Maybe, once we see more replication studies in print, we’ll be able to judge whether they can filter down to the classroom and improve instruction.
By any measure, Poland has made remarkable education progress since the fall of the Berlin Wall. On the most recent 2012 international tests of 15-year-olds, known as PISA tests, Poland ranked 9th in reading and 14th in math among all 65 countries and sub-regions that took the test. It used to be on par with the United States, a mediocre performer. In math, for example, Poland gained 2.6 points a year between 2003 and 2012 while the rest of the world, on average, remained unchanged.
And on Sept. 9, 2014, when the Organization for Economic Co-operation and Development (OECD) released its annual indicators, “Education at a Glance 2014,” another important indicator appeared: Poland’s college graduation rate is soaring. In 2012, 25 percent of Poland’s adults held a college degree, up from only 11 percent in 2000. At that rate, it could soon eclipse the United States, where more than 40 percent of adults have a college degree (this includes two-year degrees).
“Poland is an interesting case study,” said Andreas Schleicher, director of education at the OECD. “It used to be modest. It is now at the frontier, in little more than a decade.”
How did Poland do it? Its political leaders scrapped their Communist-era education system back in 1998. Instead of the state sorting students into vocational tracks, it opened the system up and allowed students to make their own choices. Dismantling a central command system is not required in most countries, but other countries can learn from Poland, Schleicher says, in two other ways: educational improvements can happen relatively quickly and relatively cheaply.
“For other countries, Poland highlights that what’s really possible in a relatively short period,” Schleicher said. That upends the conventional wisdom in education that real progress is slow and incremental. Also, “None of this has been achieved by putting more money into the system,” he said.
Although spending per student has in fact gone up in Poland (largely because of declining birth rates and a declining student population), the growth in per-student spending remains well below the growth in education spending in other countries. (Click on the chart below, also from the 2014 Education at a Glance publication, to see a larger version).
Fast education results on a modest budget are alluring. And it will take more scrutiny of the Polish education system to understand what teachers are doing there. Classroom culture and student behavior may play a role. Polish teachers spend less time keeping order in their classrooms than teachers in any other nation, according to the OECD’s teacher survey (TALIS 2013).
But instructional time seems to be a key factor. A 2006 World Bank analysis credited the initial rise in Poland’s PISA reading test scores, in part, to increased hours of classroom instruction. It noted that back in 2000, Polish students spent fewer than four hours a week reading and writing, but that by 2006, more than three-quarters of Polish students were spending more than four hours a week reading and writing.
Schools in the United States experimented with increased instruction time in the basics during the Bush administration, but have since retreated. Back in the 2000’s, No Child Left Behind policies that mandated student testing prompted a majority of school districts to increase instruction time devoted to the two tested subjects: reading and math. (Source: Center on Education Policy, NCLB Year 5: Choices, Changes, and Challenges: Curriculum and Instruction in the NCLB Era). But parents protested that other subjects, especially science, social studies and the arts, suffered. A majority of states have since received waivers from the testing requirements.
Apparently, parents in Poland did not feel the same way.