US educators lead the world in overestimating student poverty, which may affect educational mobility

Source: Andreas Schleicher OECD

Do educators’ perceptions of how disadvantaged their students are matter? Put another way, when teachers think their students are underprivileged, do they have lower expectations for them, and do their students achieve less at school?

In a July 22, 2014, article, “Poverty and the perception of poverty – how both matter for schooling outcomes,” Andreas Schleicher, director of education and skills at the Organization for Economic Co-operation and Development (OECD), argues that perceptions often matter more than reality, with distressing consequences. He found that principals in some countries vastly overestimate the poverty level of their students, and their perception of disadvantage negatively correlates with student math achievement. That is, the greater the misperception of poverty, the more likely it is for 15-year-old students’ math scores to be predicted by their actual socio-economic status, and the harder it is for disadvantaged students at the bottom of the socio-economic ladder to score among the top students.

 “In countries like France and the United States, perceived disadvantage is far greater than real disadvantage, and it makes a significant difference for student performance,” Schleicher wrote in the article.

Conversely, he found that educators in many top-performing nations greatly underestimate how disadvantaged their students are. Yet the truly disadvantaged students in these nations are more likely to score in the top tier on the PISA math test.

Schleicher’s data analysis mashed together three different data sets: international math test results, teacher surveys and socio-economic indicators. (Footnote: The math test was the OECD’s Programme for International Student Assessment (PISA) given to 15-year-olds around the world. The survey data was from the OECD’s Teaching and Learning International Survey (TALIS) question 15c. Socio-economic indicators came from the OECD’s own index of economic, social and cultural status (ESCS) that is generated with the PISA test results.)

In the United States, for example, 65 percent of teachers work in middle schools where the principals surveyed said that more than 30 percent of their students come from socioeconomically disadvantaged homes, the highest perception-of-poverty rate among the 30 countries analyzed by Schleicher. In reality, only 13 percent of American 15-year-olds come from disadvantaged homes, by OECD calculations. (Footnote: The OECD socio-economic index factors in not only income, but also parental education, educational resources at home and other family possessions. Because the United States is a relatively rich country, many among the 21 percent of school-age children living below the national poverty line are not counted in the low-income bracket by OECD standards, hence the OECD’s seemingly low figure of 13 percent.)

At the same time, Schleicher calculates that only 20 percent of disadvantaged students in the United States are able to score in the top quartile on the PISA math test. In France, it’s about the same. In Israel, another country for which there is a large gulf between perception and reality, only 10 percent of disadvantaged students score among the top in math.

By contrast, the percentage of actually disadvantaged children in Japan and Korea (about 10 percent) is similar to the percentage in the United States — but only 6 percent of Japanese principals and 9 percent of Korean principals report that more than 30 percent of their students are disadvantaged. Six times as many U.S. principals believe the poverty rate is that high. In Croatia, Serbia and Singapore, more than 20 percent of students are actually disadvantaged — much higher than in the United States — yet not more than 7 percent of principals say they have significant populations of disadvantaged students.

(Andreas Schleicher’s bubble chart, reproduced at the top of this story, depicts perception on the horizontal axis and actual disadvantage on the vertical axis. The larger the circle, the more educational inequality there is in that country, i.e., the more a student’s socio-economic status determines his math achievement. Click on the chart to see a larger version).

In Singapore more than half the students from the bottom quarter of the socio-economic spectrum score in the top quarter of the world’s students on PISA. In Japan, 45 percent of disadvantaged students perform better on the PISA test than their backgrounds would predict. That’s remarkable educational mobility: roughly half of the most disadvantaged students in the bottom 25 percent in these countries score in the top 25 percent.
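The mobility statistic Schleicher relies on can be sketched in a few lines. This is a toy illustration with made-up data, not OECD code: the function name `resilience_rate` and the sample percentiles are my own invention, assuming each student is represented by a (socio-economic percentile, score percentile) pair.

```python
def resilience_rate(students):
    """Share of bottom-quartile-SES students who score in the top quartile.

    students: list of (ses_percentile, score_percentile) pairs, each 0-100.
    """
    bottom_ses = [s for s in students if s[0] < 25]
    if not bottom_ses:
        return 0.0
    resilient = [s for s in bottom_ses if s[1] >= 75]
    return len(resilient) / len(bottom_ses)

# Toy data: (SES percentile, score percentile) for six students
students = [(10, 80), (20, 90), (15, 40), (22, 76), (60, 50), (90, 95)]
print(resilience_rate(students))  # 0.75: 3 of the 4 low-SES students score in the top quartile
```

On this measure, a country like Singapore would post a rate above 0.5; the United States, about 0.2.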

This is fascinating. I asked Schleicher how much this analysis hinges upon where you set the poverty level. If the OECD were to set the bar higher, closer to where the U.S. sets its own poverty line, there would not be such a giant gulf between perception and reality. And we could not blame U.S. educators for overestimating poverty so enormously.

Schleicher admits that poverty is a relative measure, which each country defines differently. If the OECD used a higher bar, every nation’s poverty rate would simply be much higher. But American educators would still have the highest perceptions of student poverty. His conclusions about educational opportunity — or lack thereof — for the bottom quartile would still be true.

“Obviously, a child considered poor in the United States may be regarded as relatively wealthy in another country,” he wrote, “but the fact that the perceived problem of socio-economic disadvantage among students is so much greater in the United States – and in France too – than the actual backgrounds of students also suggests that what school principals in some countries consider to be social disadvantage would not be considered such in others.”

The main concern I have about the correlation between perceived poverty and educational opportunity is that U.S. students post mediocre performances on the PISA math test in general. Yes, low-income students don’t do well on the PISA test, but most wealthy students don’t, either. And I suspect that U.S. teachers don’t harbor lower expectations for rich Americans!

Related stories:

What makes for happier teachers, according to international survey

PISA math score debate among education experts centers on poverty and teaching

Top US students lag far behind top students around the world in 2012 PISA test results

The number of high-poverty schools increases by about 60 percent

Right and wrong methods for teaching first graders who struggle with math

To help young kids who struggle with math, well-intentioned teachers often turn to non-traditional teaching methods. They use music and movement to involve the whole body. They use hands-on materials such as popsicle sticks to help the students understand tens and hundreds. Or they encourage students to come up with different strategies for solving 7 + 8. One complicated way could be starting with 10 + 10 and then taking 3 away (because 7 is 3 less than 10) and then taking 2 away (because 8 is 2 less than the other 10). After many steps, the right answer emerges. And the students come up with it themselves. Good teaching, right?
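For concreteness, the multi-step strategy described above can be written out as code. This is just an illustrative sketch; the function name and structure are mine, not anything from the study.

```python
def make_ten_strategy(a, b):
    """Add two single-digit numbers by anchoring on 10 + 10 and
    compensating, mirroring the student strategy described above."""
    total = 10 + 10   # start with 10 + 10
    total -= 10 - a   # 7 is 3 less than 10, so take 3 away
    total -= 10 - b   # 8 is 2 less than 10, so take 2 away
    return total

print(make_ten_strategy(7, 8))  # 15, same as adding directly
```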


A new study concludes that those first-graders who are behind their peers would have learned more if their teachers had just taught them to add and subtract the old-fashioned way. And then practiced it a lot.

The study, Which Instructional Practices Most Help First-Grade Students With and Without Mathematics Difficulties?, was published June 26, 2014 in Educational Evaluation and Policy Analysis, a peer-reviewed journal of the American Educational Research Association. 

Average and above-average students learn about as much with either the innovative or traditional approaches. It doesn’t much matter. But any random classroom is likely to have some strugglers in it; for them, the researchers conclude, traditional, teacher-directed instruction generally yields better results.

The researchers, led by Paul L. Morgan at Pennsylvania State University, analyzed U.S. Department of Education data from about 14,000 students across the United States who entered kindergarten in 1998. They first looked at how the students performed on math tests in kindergarten. The data included teacher surveys, allowing the researchers to track the methods that the kids’ subsequent first-grade teachers said they used. And finally, they had the students’ first-grade math scores.

The researchers found that the more struggling students (those who scored in the bottom 15 percent in kindergarten) there were in a first-grade teacher’s classroom, the more likely the teacher was to use manipulatives (hands-on materials), calculators, music and movement (see Table 3 on page 12 in the study). The fewer the struggling students, the more likely that teachers stuck with traditional methods, such as showing the whole class how to solve something one way from the chalkboard and then having students practice the method using worksheets.

Yet, at the end of first grade, the researchers found that struggling students who were given traditional instruction posted significantly higher math score gains than the struggling students who had been taught by the progressive methods. Gains are measured by how much students’ math scores rose between kindergarten and the end of first grade. (See Table 5 on page 15 in the study.)

“Routine practice is the strongest educational practice that teachers can use in their classroom to promote achievement gains,” Morgan said.

Although many educators dismiss rote learning as both boring and bad, Morgan believes it has its place. “Given my interest in children at risk, it’s a troubling observation that teachers are mismatching their instruction to what children with learning difficulties might benefit from,” Morgan said. “These kids with low math achievement in kindergarten are likely to struggle throughout elementary school and beyond. These kids are really at risk.”

Understanding why the more innovative methods didn’t work spectacularly is a matter of conjecture. Morgan theorizes that, just as children need to practice reading a lot and become fluent readers before they can analyze texts, math students need to become fluent with basic operations before they can talk about multiple methods for solving problems or arrive at deep conceptual understandings. “Maybe children with learning difficulties need more practice,” he said.

Innovative methods can also be more challenging to implement properly, and it could be that many teachers aren’t doing them right. It’s not easy to facilitate a math discussion. Six-year-olds are prone to goof around and stick popsicle sticks in their ears, taking away from precious teaching time. Instructional time can be lost while a teacher is setting up a musical lesson.

Does this mean we should all be drilling our first graders with Kumon worksheets? Morgan says not. “I don’t want kids going to school and doing worksheets all day. We all want kids to view mathematics as something that’s interesting and engaging and useful,” he said.  “At the same time, we don’t want to be providing instruction to kids that doesn’t have empirical evidence that it’s effective.”

Then what’s a teacher to do? I’ll leave it to others to figure out how to make routine methods engaging.




Federal education data show male-female wage gap among young college graduates remains high

Conventional wisdom has it that young men and women tend to earn similar wages as young adults, but that the male-female gap widens a lot with age, especially as women “lean out” during their child-bearing years. The Pew Research Center, for example, calculated that young adult women (ages 25-34) earned 93 cents for each dollar that their male counterparts earned in 2012. Near parity.

But the latest data from the U.S. Department of Education, which surveyed a nationally representative sample of 17,110 students who graduated in the 2007-2008 academic year, found that college-graduate women aren’t making anywhere near as much as their male counterparts are four years after college graduation.

The men who were in full-time jobs made $57,800 on average. The women in full-time jobs made $47,400 on average. In other words, these women, most of them unmarried and without children, were earning only 82 cents for each dollar that their male counterparts earned in 2012.

The data, from the Baccalaureate and Beyond surveys, comes from the third group of college graduates that the United States Department of Education is tracking to see what happens in labor markets after college. This third group was first surveyed a year after graduation, in 2009. A second survey followed up four years after graduation, in 2012, and some of the data, largely focusing on post-college employment and wages, were released on July 8, 2014.

Women’s lower earnings defy easy explanation.

Part of the answer is that women and men tend to major in different subjects and go into different fields. Women are far more likely to pursue degrees in education and nursing, two fields which tend to be dominated by unionized jobs with low starting salaries. Higher salaries that come with seniority generally kick in after four years. Meanwhile, men are more likely to major in engineering, which is the major that produced the highest paying post-college jobs. College graduates who majored in engineering and were working full time earned $73,700 a year on average. Healthcare majors made $58,900 and education majors made $40,500.

Majors of 2008 Graduates by Gender (percent of each gender majoring in that subject)

Rank  Women                                                       Men
1     Business (20%)                                              Business (28%)
2     Social sciences (17%)                                       Other applied (14%)
3     Other applied (16%)                                         Social sciences (13%)
4     Humanities (12%)                                            Engineering (12%)
5     Education (12%)                                             Humanities (11%)
6     Healthcare fields (11%)                                     Bio and phys science, sci tech, math and agriculture (9%)
7     Bio and phys science, sci tech, math and agriculture (7%)   Computer and IT (5%)
8     General studies and other (3%)                              Education (4%)
9     Engineering (2%)                                            General studies and other (3%)
10    Computer and IT (1%)                                        Healthcare fields (2%)

Source: Computed using National Center for Education Statistics QuickStats and data from 2008/12 Baccalaureate and Beyond Longitudinal Study (B&B:08/12)

But that is only part of the story. When you break the salary data down by major and gender (using the Department of Education’s QuickStats data analysis tool), you see giant wage discrepancies between men and women who majored in the same subjects. The chart I created on this page shows that young male engineers make more than young female engineers ($73,000 vs. $65,000)*. Young male computer programmers make more than young female computer programmers ($71,000 vs. $60,000). There’s even a big gender gap for social sciences majors ($51,000 vs. $40,000) and general studies majors ($64,000 vs. $44,000). However, the pay gap virtually disappears in education ($41,000 vs. $39,000) and health care ($59,000 vs. $56,000).

Using the PowerStats data analysis tool, I dug into the data more to look at the interplay of race and gender. And there are some startling results. Asian women were earning almost $53,000 a year — more than either black or Latino men and not far behind white men. Black men earned $52,000. Latino men earned $47,000. The top earners were Asian men at $63,000, followed by white men at $57,000. White, black and Latino women were closely clustered together, making $45,000, $43,000 and $44,000, respectively.

I was curious if child rearing was having an effect on this data. For an apples-to-apples comparison, I created a second PowerStats chart to isolate only childless men and women. Some were married. Some were not. The wage levels and differentials described above remained. Even childless women earn far less than childless men. Asians earn more than other races. Black men fare better than Latino men.

It is possible that some of these salary gaps are the quirks of a four-year checkup after college. Many of the graduates who will be the highest earners over the course of their lifetimes are in graduate school, earning their JDs, MDs and MBAs. As full-time students, they’re excluded from the full-time employment data. And that means the data is a bit skewed by the people who go into high-paying fields that don’t require graduate degrees, such as engineering and computer science. Perhaps, once all the female lawyers and doctors join the workforce a few years from now, the gender pay gap will narrow some.

Data notes: I filtered the data to capture only students who are working more than 34 hours a week in one primary job. It does not include students who were in graduate school in 2012. All the salary figures are rounded to the nearest thousand. The numbers generated through the statistical tools do not add up exactly to the numbers in the published 2008/12 Baccalaureate and Beyond report. For example, the breakdown of male salaries by major shows that men across all 10 majors had an average full-time salary of almost $56,000. The report says that male salaries were almost $58,000. That may be due to some differences in weighting.

* This and subsequent salary data rounded to the nearest thousand dollars

Determining “cut scores” as New York students take the first Common Core high school exams

Page 56 of Board of Regents "Setting Performance Standards" dated June 23, 2014

New York State education policy makers had a difficult task when they sat down to grade the first high school Regents exams linked to the Common Core standards, in algebra and English, last month. They needed to establish a high bar for meeting the new education standards, yet at the same time protect current students, who haven’t had much Common Core instruction, from punishment. And the Regents are high-stakes tests: New York students must pass five of them to graduate from high school.

New York risked a high school dropout crisis if they set the bar too high. But setting the bar too low invites criticism that the new Common Core standards are hollow. “That ends up signaling to teachers that you don’t actually need to know the content,” said Kathleen Porter Magee, a Common Core proponent at the conservative Thomas B. Fordham Institute.

Which is why policy makers fussed over the fine-tuning of cut scores — that is, the numerical point where the cut is made between passing and failing.

At the end of June, the Board of Regents released documents showing how they had calculated two different cut scores that effectively split the baby. They set one passing score for today — one that the majority of students can already attain — and a second passing mark for the future, showing how far students must progress to be ready for introductory college classes.

Only 22.1 percent of New York’s algebra students — mostly eighth and ninth graders — hit that aspirational mark in June. In English, it was much better, with 46.8 percent of the 11th graders who took the test hitting the college-ready mark. The class of 2022, currently about to enter fifth grade, will be required to hit these higher thresholds to pass and graduate. That’s a tall order in education, where progress is usually slow and incremental.

Originally, New York State had planned to require students to hit the college-ready mark on this first go-around. But after the uproar over low passing rates for New York’s first Common Core tests for elementary and middle school students, in 2013, Regents officials backed down in February of 2014 and created a safe harbor for current students. After students took the new tests on June 3, scorers were told to make sure that the passing rates didn’t differ from previous years.

In the June documents (on page 55) the Regents noted that the historical pass rates for algebra exams had ranged from 64.5 percent to 74.6 percent, but they chose a pass rate of 65.4 percent for June 2014. A peculiar and notably low number, it allowed the Regents Board to avoid lowering the floor too much for current students. When New York lined up all the student test results from lowest to highest, 65.4 percent of students had been able to get at least 30 out of 86 questions correct. If the Regents had lifted the required pass rate any higher than 65.4 percent, they’d have had to pass students who got fewer than 30 correct answers.
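The mechanics here amount to a percentile calculation: line up every raw score and find the cut that passes the target share of students. A minimal sketch, with a made-up score distribution (the function and toy data are illustrative, not the Regents’ actual procedure):

```python
def cut_score_for_pass_rate(scores, pass_rate):
    """Smallest raw score that at least `pass_rate` of students achieved."""
    ranked = sorted(scores, reverse=True)
    k = int(len(ranked) * pass_rate) - 1  # index of the last passing student
    return ranked[k]

# Toy distribution of raw scores out of 86 questions
scores = [12, 18, 25, 29, 30, 31, 35, 40, 54, 73]
print(cut_score_for_pass_rate(scores, 0.6))  # 30: six of the ten students pass
```

Raising the target pass rate moves the cut down the ranked list, which is exactly the bind the Regents faced: any rate above 65.4 percent would have pushed the cut below 30 correct answers.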

“They’ve never had to do that before; it would have been humiliating for them,” said Carol Burris, the principal of a high performing high school in Rockville Centre on Long Island, and an outspoken critic of the state’s approach to testing.

Meanwhile, the Regents used their traditional approach for setting two higher levels of passing scores. They assembled a panel of educators who took the math test and pooled their collective judgment. The educators decided that students ought to have gotten 54 of 86 questions correct to meet the new Common Core standards  (the college-ready threshold mentioned above) and 73 questions correct to prove “mastery” of those new standards. Only 3.8 percent of New York’s eighth and ninth graders hit the mastery level on the new algebra exam, compared with more than 15 percent on the easier pre-Common Core exam. The graphic on the upper right corner shows how much tougher it is to be excellent in the new Common Core universe. (Click on it to see a larger version).

To understand how unusual the new Common Core Regents grading curve is, imagine a 10-question test, where you need to get 3.5 questions right to pass, but 6 right to get a B, and 8.5 right to get an A. On the old test, by comparison, you would have needed a similar 3.4 to pass, but only 4.5 correct to get a B, and 7.5 to hit the A. (I calculated these thresholds from Regents conversion charts here and here).

On the English exam, the Regents did almost the opposite. They picked a notably high pass rate of 76.6 percent from their historical range of 69.9 percent to 78 percent. But then they used a complicated process of overweighting the writing questions so that students could still pass, even if they bombed the multiple-choice reading comprehension section. Burris, the Rockville Centre principal, pointed out that you could still pass if you got as few as 5 out of the 24 multiple-choice questions correct — that’s worse than random guessing.

Indeed, the passing bar was so low for the Common Core English exam that there were reports that some students who failed the old exam (also administered in June) were able to pass the Common Core version. By all accounts, the Common Core English exam, modeled after the English Advanced Placement exam, was much tougher, with three times as much reading as previous English Regents exams — but easier to pass, because it was graded more leniently.

Perhaps the reason for selecting an easier pass bar for English has to do with the types of students who took this exam. Unlike the algebra exam, which the state required for every first-time algebra student, the new Common Core English Regents test for 11th graders was voluntary. (Students could take one or the other, or both.) Burris, for example, opted to give only the old pre-Common Core exam to her students. She said her informal survey of other Long Island principals found many had done the same thing. Those who did give their students the new Common Core English exam often gave it only to their honors and advanced students, she said.

If Burris is right, that means scores could be even lower two years from now, when the full student population takes the new exam. And it will be just as steep a mountain to get New York’s high schoolers college ready in English as it will be in math.


What makes for happier teachers, according to international survey


OECD 2013 TALIS Survey

Teachers who say they get included in school decision-making and collaborate often with other teachers are more likely to say that teaching is a valued profession in their society. In turn, these same teachers report higher levels of job satisfaction and confidence in their ability to teach and to motivate students, according to a 2013 survey of middle-school teachers in 34 countries and regions around the world conducted by the Organization for Economic Co-operation and Development (OECD) and published on June 25, 2014.

“Those who get to participate and collaborate have a higher feeling of value,” said Julie Belanger, an education analyst at the OECD and an author of the 2013 Teaching and Learning International Survey (TALIS).

Whether this perception of value directly affects actual student performance is unclear, at least from the survey results. Analysts did not find a tight correlation between teachers feeling valued by society and students scoring high on the OECD international test known as PISA.* Teachers in some high-performing nations felt valued; teachers in other high-performing nations felt undervalued.

In Singapore, for example, where students score very high on the PISA test, nearly 70 percent of teachers felt that the teaching profession was valued. In Finland, another high-performing nation, it was nearly 60 percent. But in Japan, whose students score among the top 10 in the world, only 28 percent of teachers felt their profession was valued. In Poland, another high-performing nation, only 18 percent did.

This article also appeared here.

In the United States, which ranked 36th in math and 24th in reading in the most recent (2012) PISA test, only about a third of teachers said they felt part of a valued profession, a sliver above the international average of 31 percent.

But in every country, the teachers who said their profession is valued in society tended also to report that their schools include them in decisions and that there was a positive collaborative atmosphere with other teachers at their school. “They have the highest job satisfaction and confidence in their teaching abilities,” said Belanger. “It’s true across the board.” The graphic on the upper right corner, Figure 7.9 from the TALIS report, shows that the teachers who collaborate more feel more confident about their teaching abilities. Click on it to see a larger version.

The TALIS report highlighted a number of concrete ways that schools can foster collaborative workplaces. For example, veteran teachers could formally mentor new teachers. Instead of sending teachers away to one-off workshops for professional development, schools could form networks of teachers. Teachers could collaboratively research topics of interest to them to improve their skills.

The TALIS survey also asked about class sizes, but OECD analysts found no correlation between class size and job satisfaction. Indeed, some of the higher-performing nations with the highest teacher job satisfaction rates have some of the larger class sizes. The TALIS report found that a typical class in the United States had 27 students, compared with 36 in Singapore and 32 in Korea.

More important to teacher satisfaction than class size is the type of student in the classroom. The more behavioral problems and low-performing students in the class, the unhappier the teacher was. But the OECD also said these negative effects were mitigated in schools that had a supportive, collaborative atmosphere to help teachers handle behavioral disruptions. Interestingly, analysts also found no correlation between class size and behavioral disruptions. In other words, larger classes were not necessarily more difficult.

Whether teachers who feel valued by society actually teach better is unclear from the survey. But Belanger argues that it’s still important for teachers to feel that their profession is valued. “There’s a bigger picture,” she said. “With TALIS, what we’re trying to do is develop teaching as an attractive profession. If teaching is valued, it’s easier to recruit top candidates into the profession.”

This was the second TALIS survey, and the first in which the United States participated. For a more detailed report on American teachers, click here.

*The top 10 in math on the most recent PISA test were (1) Shanghai, (2) Singapore, (3) Hong Kong, (4) Taipei, (5) Korea, (6) Macao, (7) Japan, (8) Liechtenstein, (9) Switzerland and (10) the Netherlands. In reading, they were (1) Shanghai, (2) Hong Kong, (3) Singapore, (4) Japan, (5) Korea, (6) Finland, (7) Canada, Ireland, Taipei and (10) Poland. (Canada, Ireland and Taipei had identical scores.)

Related Stories:

Top US students fare poorly in international PISA test scores, Shanghai tops the world, Finland slips

PISA math score debate among education experts centers on poverty and teaching

Top US students lag far behind top students around the world in 2012 PISA test results

Top US students decline, bottom students improve on international PISA math test

Study finds taking intro statistics class online does no harm



Chart created by Jill Barshay using Google Docs. Source data: Babson Research Group, “Grade Change,” January 2014

Online education has grown so fast that more than a third of all college students — more than 7 million — took at least one course online in 2012. That’s according to the most recent 2014 annual survey by the Babson Research Group, which has been tracking online education growth since 2002. Yet nagging worries remain about whether an online education is a substandard education.

The Babson survey itself noted that even as more students are taking online courses, the growth rate is slowing and some university leaders are becoming more skeptical about how much students are really learning. For example, the percentage of academic leaders who say that online education is the same as or superior to face-to-face instruction dipped to 74 percent from 77 percent. That was the first reversal in a decade-long trend of rising optimism.

Opinions are one thing, but what do we actually know about online learning outcomes? Unfortunately, there have been thousands of studies, but precious few that meet scientific rigor. One big problem is that not many studies have randomly assigned some students to online courses and others to traditional classes. Without random assignment, it’s quite possible that the kinds of students who choose to take online courses are different from the kinds of students who enroll in traditional courses. Perhaps, for example, students who sign up for online classes are more likely to be motivated autodidacts who would learn more in any environment. Researchers can’t conclude whether online learning is better or worse if the student characteristics differ in online courses at the outset.

On June 3, 2014, an important study, “Interactive learning online at public universities: Evidence from a six-campus randomized trial,” published earlier in the Journal of Policy Analysis and Management, won the highest rating for scientific rigor from the What Works Clearinghouse, a division inside the Department of Education that seeks evidence for which teaching practices work. The study was written by a team of education researchers led by economist William G. Bowen, the former president of Princeton University and The Andrew W. Mellon Foundation, and Matthew M. Chingos of the Brookings Institution.

The online study looked at more than 600 students who were randomly assigned to introductory statistics classes at six different university campuses. Half took an online stats class developed by Carnegie Mellon University, supplemented by one hour a week of face-to-face time with an instructor. The other half took traditional stats classes that met for three hours a week with an instructor. The authors of the study found that learning outcomes were essentially the same. About 80 percent of the online students passed the course, compared with 76 percent of the face-to-face students, which, allowing for statistical measurement errors, is virtually identical.
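The “virtually identical” claim can be checked with a standard two-proportion z-test. Here is a hedged sketch, assuming roughly 300 students per arm (a hypothetical split of the 600-plus participants; the study’s exact arm sizes may differ):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference of two sample proportions,
    using the pooled estimate under the null of no difference."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(0.80, 300, 0.76, 300)
print(round(z, 2))  # 1.18, well below the ~1.96 needed for significance at p < .05
```

In other words, at these sample sizes a four-percentage-point gap in pass rates is indistinguishable from noise, consistent with the authors’ conclusion.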

The university campuses that participated in the fall 2011 trial were the University at Albany (SUNY), SUNY Institute of Technology, University of Maryland Baltimore County, Towson University (University System of Maryland), Baruch College (CUNY) and City College (CUNY).

It’s worth emphasizing that this was not a purely online class, but a hybrid one. One hour a week of face-to-face time with an instructor amounts to a third of the face-to-face time in the control group. Also, the Carnegie Mellon course not only required students to crunch data themselves using a statistical software package, it also embedded interactive assessments into each instructional activity. The software gave feedback to both the student and the instructor, letting them know how well the student understood each concept. However, the Carnegie Mellon software didn’t tailor instruction to each student. For example, if students answered questions incorrectly, the software didn’t direct them to extra practice problems or reading.

“We believe it is an early representative of what will likely be a wave of even more sophisticated systems in the not-distant future,” the authors wrote.

The Department of Education reviewed only the analysis of passing rates in the Carnegie Mellon stats class trial. But Bowen and the study’s authors went further, and found that final exam scores and performance on a standardized assessment of statistical literacy were also similar between the students who took the class in person and online. They also conducted several speculative cost simulations and found that hybrid models of instruction could save universities a lot of money in instructor costs in large introductory courses in the long run.

But in the end, the authors offer this sobering conclusion:

In the case of a topic as active as online learning, where millions of dollars are being invested by a wide variety of entities, we might expect inflated claims of spectacular successes. The findings in this study warn against too much hype. To the best of our knowledge, there is no compelling evidence that online learning systems available today—not even highly interactive systems, of which there are very few—can in fact deliver improved educational outcomes across the board, at scale, on campuses other than the one where the system was born, and on a sustainable basis. This is not to deny, however, that these systems have great potential. Vigorous efforts should be made to continue to explore and assess uses of both the relatively simple systems that are proliferating, often to good effect, and more sophisticated systems that are still in their infancy. There is every reason to expect these systems to improve over time, and thus it is not unreasonable to speculate that learning outcomes will also improve.

Measuring the cost of federal student loans to taxpayers

Source: CBO May 2014

Cost of Federal Student Loan Programs 2015-2024. Source: CBO May 2014

Soaring student loan debt seems to be the next crisis waiting to explode. Universities keep jacking up their tuition and the U.S. government keeps financing it through a seemingly unlimited supply of student loans. As I’ve written before, student loans exceed $1 trillion and more than 11 percent of student loan balances are 90+ days delinquent or in default. When will student loan defaults, already the highest of any consumer loan category, simply get so high that politicians put the brakes on this lending machine?

I suspect one of the reasons that there hasn’t been more pressure in Washington to address student debt is that the federal student loan program is a profit center on the U.S. government’s official ledger books. The Congressional Budget Office (CBO) says that the four main student loan programs are expected to generate $135 billion in profit over the ten years from 2015-2024 (see here). It does that by making a lot of assumptions. Basically, it is guessing that more than enough students will pay back their loans, plus interest, over the next 30 years to more than offset all the loans that aren’t repaid. And it books those future cash flows as profits today.

The problem is that this profit may be no more than an accounting sleight-of-hand. In the same document, dated May 2014, the CBO explains that if it discounted future cash flows at a higher rate, using the same accounting methods that private banks use on their loans, these profits would instantly vanish and turn into a taxpayer loss of $88 billion.

The non-partisan CBO has been arguing for years that it should use a higher discount rate because it would reflect market risks. Currently it discounts future cash flows by the yield on Treasuries, an investment so safe that the method effectively assumes the economy will never go into a recession. What if the economy tanks, fewer students can get jobs and they can’t repay their loans? Right now, the accounting assumptions don’t factor in that possibility.

In this case, the CBO’s hands are tied. Despite its desire to use more realistic accounting assumptions, the CBO is bound by a 1990 Congressional statute, the Federal Credit Reform Act (FCRA), to use the outmoded discount rate. It would take Congressional action to change that. (The chart on the right shows the difference in costs. The big dark bar shows the $135 billion negative cost, or profit, under the current FCRA methodology. The light bar shows a positive cost of $88 billion if a higher discount rate, known as fair-value accounting, were used.)
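To see how a change in discount rate alone can flip a profit into a loss, here is a toy present-value sketch. The loan amount, repayment stream and both rates are made up for illustration; the CBO’s real projections cover 30 years of cash flows and are far more detailed:

```python
def present_value(cash_flows, rate):
    """Discount a list of annual cash flows back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

outlay = 100.0            # hypothetical loan issued today
repayments = [11.0] * 12  # hypothetical repayments over 12 years

# Same cash flows, two discount rates:
profit_fcra = present_value(repayments, 0.025) - outlay  # low Treasury-like rate
profit_fair = present_value(repayments, 0.07) - outlay   # higher risk-adjusted rate

print(profit_fcra > 0, profit_fair < 0)  # profit under one method, loss under the other
```

Nothing about the loan changes between the two lines; only the assumed riskiness of the repayments does, which is exactly the dispute between FCRA and fair-value accounting.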

The CBO also points out that regardless of which accounting figure you believe in, neither the $135 billion profit figure nor the $88 billion loss factors in administrative costs. That’s the cost of issuing loans, tracking down students after graduation and collecting their monthly payments. Jason Delisle of the New America Foundation dug into federal budget documents and found that the administrative cost amounts to 1.7 percent of loan issuance. That’s a reasonable figure for overhead. But when you multiply it by the $650 billion in undergraduate loans that the government expects to issue, suddenly profits turn into losses. Delisle calculates that the undergraduate lending program will cost taxpayers $16.7 billion even using the government’s inflated accounting figures.

“Officially then, federal loans to undergraduates, even using official government cost estimates, are made at a cost to taxpayers, not a profit as some claim,” Delisle wrote.
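The overhead arithmetic can be checked back-of-the-envelope using only the two figures quoted above:

```python
issuance = 650e9       # expected undergraduate loan issuance, in dollars
overhead_rate = 0.017  # administrative cost as a share of issuance

admin_cost = issuance * overhead_rate
print(f"${admin_cost / 1e9:.1f} billion")  # roughly $11 billion in overhead
```

Roughly $11 billion in administrative costs is enough to wipe out the official profit on undergraduate loans, which is how Delisle arrives at a net taxpayer cost.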

Controversial data-driven research behind the California court’s decision to reject teacher tenure

Underlying the California court’s decision on June 10, 2014 to reject teacher tenure as unconstitutional is a controversial body of academic research on teacher effectiveness.  The argument that won out was that tenure rules often force school districts to retain their worst teachers. Those ineffective teachers tend to end up at the least desirable schools that are packed with low-income and minority students. As a result, teacher tenure ends up harming low-income students who don’t have the same access as rich students to high-quality teaching.

But for this argument to carry weight we have to be able to distinguish good teachers from bad. How can we prove that California’s low-income schools are filled with teachers who are inferior to the teachers at high-income schools?

The nine plaintiffs, including Beatriz Vergara, who brought suit against the state. This slide, without names, was shown in court.


Dan Goldhaber, a labor economist at the University of Washington, and Eric Hanushek, a senior fellow at the Hoover Institution at Stanford, were two of the expert witnesses who spoke against teacher tenure in Vergara v. California. Both employ quantitative economic analysis in the field of education, and both are big proponents of using value-added measures to determine who is an effective teacher.

In value-added analysis, you begin by creating a model that calculates how much kids’ test scores, on average, increase each year (test score in year two minus test score in year one). Then you give a high score to teachers whose students post test-score gains above the average, and a low score to teachers whose students show smaller gains. There are lots of mathematical tweaks, but the general idea is to build a model that answers this question: are the students of this particular teacher learning more or less than you would expect them to?
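The bare-bones version of that idea fits in a few lines. This is a toy illustration only; the teacher names and scores are made up, and real value-added models add many statistical controls for student background:

```python
from statistics import mean

# (teacher, score in year one, score in year two) for hypothetical students
records = [
    ("Ms. A", 200, 230), ("Ms. A", 210, 245),
    ("Mr. B", 205, 215), ("Mr. B", 195, 200),
]

# Each student's gain is simply year-two score minus year-one score
gains = [(teacher, y2 - y1) for teacher, y1, y2 in records]
overall = mean(g for _, g in gains)

# A teacher's "value added": average gain of their students vs. the overall average
value_added = {
    t: mean(g for tt, g in gains if tt == t) - overall
    for t in {t for t, _ in gains}
}
print(value_added)  # Ms. A's students gained more than average, Mr. B's less
```

Everything contentious about value-added modeling lives in what this sketch leaves out: adjusting for which students each teacher was assigned in the first place.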

Indeed, researchers using this value-added approach have sometimes found that low-income schools have a disproportionate number of teachers whose students post below-average test-score gains.

Many researchers question whether test-score gains are a good measure of teacher effectiveness. Part of the problem is the standardized tests themselves. In some cases there are ceiling effects: bright students are already scoring near the top and can’t show huge gains year after year. In other cases, struggling students may be learning two years of math in one year, say, catching up from a 2nd-grade to a 4th-grade math level. But the 5th-grade test questions can’t capture the gains of kids who are behind; the test instead concludes that they have learned nothing. In both cases, with top and bottom students alike, the teachers would be labeled ineffective.

Morgan Polikoff of the University of Southern California and Andrew Porter of the University of Pennsylvania looked at these value-added measures in six districts around the nation and found a weak to nonexistent relationship between these new numbers and the content or quality of the teachers’ instruction. Their research was published in May 2014, after the Vergara trial ended.



National student database controversy heats up again

This Inside Higher Ed piece by Libby Nelson explains the new push in Washington to create a national student database that would track students through college and into the work force. The idea, sometimes referred to as a “unit record” system, was originally proposed by the Bush Administration in 2005, but critics, citing student privacy concerns, were able to kill it.

“In the past seven years, the voices calling for a unit record system have only intensified; there is now a near-consensus that a unit record system would be a boon for higher education policy makers, by tracking the flow of individual students into and out of colleges.”

A similar student privacy debate is playing out again. See this opinion piece opposing a new database, arguing, in part, that prospective employers could request to see these new student records. This blog post argues that this kind of student privacy criticism is “moot” because employers can already request to see transcripts.

Poverty among school-age children up 40 percent since 2000


National Center for Education Statistics, The Condition of Education 2014


One in five school-age children lived in poverty in 2012, compared with about one in seven back in 2000. That’s a 40 percent jump in child poverty in a dozen years. A household of four people with less than $23,283 in income in 2012 was defined by the Census Bureau as poor.
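The 40 percent figure follows directly from the two rates quoted above:

```python
rate_2012 = 1 / 5  # one in five school-age children in poverty in 2012
rate_2000 = 1 / 7  # about one in seven in 2000

increase = (rate_2012 - rate_2000) / rate_2000
print(f"{increase:.0%}")  # the relative jump in the poverty rate
```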

This data comes from the latest annual report, “The Condition of Education 2014,” published by the National Center for Education Statistics on May 29, 2014. 


The child poverty rate had been declining in the 1990s, but has taken a turn for the worse since 2000.

There is a striking north-south divide when it comes to child poverty. Southern states tend to have more than 21 percent of their school-age children living in poverty; northern states tend to have fewer. Michigan is the only northern state with that level of severe child poverty. (Washington, D.C.’s child poverty rate exceeds 21 percent as well.)

When you break the data down by race and family structure, there are other striking patterns. Almost 40 percent of all black children under 18 are living in poverty, compared with 33 percent of Hispanic children. Whites and Asians have similar child poverty rates of 13 percent and 14 percent, respectively. But the high poverty rate is clearly intertwined with family structure. Only 15 percent of black children who live in a married-couple household live in poverty, compared with 53 percent of black children in a mother-only household. And poverty is stubbornly high even in married Hispanic households: 22 percent of Hispanic children who live in married-couple households are poor.

