Survey of U.S. school districts finds that more than half of elementary, middle and high schools have wifi in every classroom
The Consortium for School Networking (CoSN) released preliminary findings from a national survey of nearly 450 K-12 district technology leaders from 44 states on September 16, 2013. Of course, the survey missed about 12,000 school districts, and the ones that responded are likely more technologically advanced than many that didn’t. But I was struck that 57% of elementary schools and 64% of secondary schools report that 100% of their classrooms have wireless internet connectivity. That’s a very high number. I wonder how much it all costs. And is student achievement better for it?
The CoSN survey, which emphasizes that schools need more bandwidth, comes as the White House hopes to change the FCC subsidy for school and library internet access. See this Politico blurb.
New dataset says almost a quarter of African Americans are suspended in high school
The Center for Civil Rights Remedies, part of The Civil Rights Project at UCLA, aggregated publicly reported school disciplinary data into one spreadsheet and released it on September 12, 2013. The center also created a new handy-dandy web tool for looking up suspension rates by district. The problem is that fewer than half of the states in the U.S. publicly report how many students are suspended annually. (Many will provide this information on request, but this data project swept up only the data that was readily available on the Internet.) I wonder what the utility is of a national database when more than half the data is missing.
Using the web tool, I tried to look at the suspension rates in New York City. Not there. But I was able to compare the suspension rates in Dallas and Houston, Texas. I noticed that not only did both cities have much higher suspension rates than the national average, but also that Dallas suspends more than 42 percent of its Black high school students. Houston suspends about 30 percent of its Black high school students. Across the U.S., according to this admittedly incomplete data set, about 24 percent of all African-American students are suspended in high school.
When I clicked on the big spreadsheet, there were hyperlinks to state data, but I could not immediately see suspension rates by state or compare which states suspend students at higher rates than others.
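For readers who want to go beyond the web tool, the district-level spreadsheet could in principle be rolled up into the state comparisons I was looking for. Here is a minimal sketch of that idea in Python, using hypothetical file and column names (`state`, `black_hs_enrollment`, `black_hs_suspensions`) rather than the center’s actual field names:

```python
# A minimal sketch, not the center's tool: roll hypothetical district-level
# counts up to state-level suspension rates. File and column names are placeholders.
import pandas as pd

districts = pd.read_csv("civil_rights_suspensions.csv")  # hypothetical file

# Sum enrollment and suspension counts across districts within each state.
by_state = districts.groupby("state")[["black_hs_enrollment", "black_hs_suspensions"]].sum()
by_state["suspension_rate"] = (
    by_state["black_hs_suspensions"] / by_state["black_hs_enrollment"]
)

# Rank states from the highest to the lowest suspension rate.
print(by_state.sort_values("suspension_rate", ascending=False))
```

A roll-up like this only works, of course, for the states that report the underlying counts in the first place.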
Teach for America teachers found to be at least as effective as other math teachers
A new Institute of Education Sciences study conducted by Mathematica found that middle and high school math teachers from Teach For America and the TNTP Teaching Fellows programs were as effective as, and in some cases more effective than, other math teachers in the same schools. It’s a noteworthy finding because TFA teachers are often criticized for not having enough teaching experience. The study found that students of TFA teachers received the equivalent of an additional 2.6 months of learning.
Here’s the full report.
And here’s a detailed analysis by Dana Goldstein.
Report of trouble using education data in Idaho
Bill Roberts writes in The Idaho Statesman on September 13, 2013, that teachers throughout Idaho are unable to make good use of the state’s much-heralded Schoolnet data system because test score data arrive months too late and because some of the data is riddled with errors.
One teacher reported that she “never got test scores from April’s Idaho Standards Achievement Test last May as she expected. She didn’t see the scores on Schoolnet until fall – too late to examine them for lessons for that new school year.”
Another teacher said that the system erroneously showed that only 2 percent of the state’s juniors were ready for college when the actual percentage ranged from the high 30s to the mid-40s.
Clinical trials for textbooks and curriculum
When I first happened upon the Institute of Education Sciences’ “What Works Clearinghouse,” I wrote a little piece back in early June 2013 about the Saxon Math curriculum. But I didn’t realize how groundbreaking this research was. In fact, I worried that my post was a bit PR-ish for the Saxon Math program. But Gina Kolata put the Institute’s work in context in this interesting NYT piece published on Sept. 2, 2013, which I nearly missed over Labor Day.
“…a little-known office in the Education Department is starting to get some real data, using a method that has transformed medicine: the randomized clinical trial, in which groups of subjects are randomly assigned to get either an experimental therapy, the standard therapy, a placebo or nothing.”
This is exactly the kind of data crunching that can change classroom practice.
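The core of the method is simpler than it sounds: randomly assign comparable units, say schools, to a new curriculum or to business as usual before any outcomes are measured. A toy sketch, with made-up school names and group sizes invented purely for illustration:

```python
# Toy illustration of random assignment, the heart of a randomized trial.
# The school list and group sizes are invented for illustration only.
import random

schools = [f"School {i}" for i in range(1, 21)]
random.shuffle(schools)

half = len(schools) // 2
treatment = schools[:half]  # receives the new curriculum (e.g., a math program)
control = schools[half:]    # keeps its current curriculum

print("Treatment group:", treatment)
print("Control group:", control)
# Because assignment is random, later differences in test scores can be
# credited to the curriculum rather than to pre-existing differences
# between the schools.
```

That last point is the whole reason the method has “transformed medicine,” as Kolata puts it.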
(Dear Readers, I’m having trouble getting the IES’s What Works Clearinghouse links to work today. If it doesn’t resolve soon, I’ll try to learn what’s going on.)
Q & A with Laura Hamilton: States should wait to evaluate teachers under Common Core
New Common Core-aligned tests are sure to have an effect on how teachers are rated now that teacher evaluations in many states are tied to their students’ test scores. The Common Core standards in math and English emphasize greater critical thinking skills and non-fiction reading, and some districts in places like New York and Kentucky have already seen students’ test scores fall dramatically after students were tested against the tougher standards.
The Hechinger Report spoke with Laura S. Hamilton, senior behavioral scientist at the RAND Corporation, whose research is concentrated on assessment, accountability and evaluation of teachers and school leadership, about the repercussions of the Common Core State Standards on teacher evaluation systems.
Question: Do different tests give different value-added scores for teachers?
Answer: Yes, there’s been work that shows even different sets of items on the same test can give different value-added estimates for teachers. A lot of the differences have to do with the teacher’s own content coverage and how well the curriculum that the teacher is using matches the content of the test. If it’s testing something that isn’t included in that teacher’s curriculum, it’s likely to be less sensitive to the teacher’s effects. So it can make a really big difference.
Q: Why is there such a difference?
A: I think part of it is because, you know, the assumption behind these kinds of teacher evaluation systems is that you have a test that’s measuring what teachers taught and measuring whether students learned what teachers taught. That’s why we’ve seen, with all the research on high-stakes testing, that over time teachers adjust their instruction to try to make it match the content and the format of the test, because that increases the likelihood that their students will be exposed to the tested content.
Q: How many years of data should be used to get a reliable rating for a teacher?
A: It’s a really hard question. It’s particularly hard now, given that so many states are changing their tests, so the value-added estimates based on the existing state tests aren’t necessarily going to be comparable to the ones based on new tests, say, for the states that are adopting the Common Core-aligned assessments.
And the other problem is that if you assume that teachers’ effectiveness stays the same over time, then aggregating across multiple years will give you a more stable value-added estimate. But we know that particularly in teachers’ first few years they tend to improve. So by averaging you won’t see that improvement.
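A quick illustration of that averaging point, with invented numbers: suppose a teacher’s yearly value-added estimates climb over her first three years. The average is steadier than any single year, but it hides the trend.

```python
# Invented value-added estimates (in student test-score standard deviations)
# for a hypothetical teacher who improves over her first three years.
yearly_estimates = {"Year 1": -0.10, "Year 2": 0.05, "Year 3": 0.20}

average = sum(yearly_estimates.values()) / len(yearly_estimates)
print(f"Three-year average: {average:.2f}")  # 0.05

# The averaged figure is more stable than any one noisy yearly estimate,
# but it masks the fact that the most recent year (0.20) looks far
# stronger than the first (-0.10).
```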
Q: What effect do you see the Common Core standards having on teacher evaluations?
A: My sense from talking with states is that they plan to continue using the same general type of evaluation system they already have in place. But when these new tests are adopted, one of the problems we’re going to see, and have already seen in many cases, is that students’ scores decline because of a lack of familiarity with the content as well as stricter cut scores for determining proficiency.
One effect will be that we won’t have value-added estimates that are comparable across the years when states make those transitions. And I think it will be a challenge for states to figure out how to deal with that in their evaluation systems. In addition, it will take [teachers] a while to really learn what is on these tests – the material they need to emphasize to make sure their kids are prepared for it.
The other thing is that a lot of the classroom observation systems currently in use are fairly generic, so they can be used in any grade or subject. They’re not focused on the teaching of specific content, so I think states and districts will want to take a close look at what they’re measuring with their observation protocols and make sure it’s consistent with the goals they have for promoting teaching that’s aligned with the Common Core standards.
Q: If you need multiple years to get a reliable rating on a teacher, and the tests will be different under Common Core, should there be a waiting period to get enough data under the new tests?
A: I would advise states and districts to institute a waiting period if at all possible, because I think it’s very difficult to combine information across these very different kinds of tests and make any sense of it. Ideally, schools and teachers should have a couple of years to get up to speed on the standards and get familiar with the testing program. That’s not often feasible given some of the policies states have enacted. But that would be ideal.
This interview has been edited for length and clarity.
Education industry ranks #1 in customer satisfaction
Sam Boonin is the vice president of products at Zendesk, a software company that collects online inquiries from customers and turns them into support tickets. Zendesk’s software is used by more than 30,000 companies and institutions, from Sony and Adobe to Twitter and Groupon. And so Boonin decided to sift through the customer satisfaction surveys to see which industries are doing the best job in solving customer problems.
Educational institutions topped the list for the past two quarters and the sector has been in the top three since Zendesk started the survey. “We were surprised that education did come out so high,” Boonin told me by phone.
Universities dominate the 1,000 educational institutions in Zendesk’s customer base, but tutoring services and entire school districts are also among them. A typical online inquiry might be a University of Michigan student asking, “Where’s my transcript?” About a fifth of end-users agree to fill out a customer satisfaction survey afterward, answering whether they were happy with the online service they received. Education gets the most yeses.
Boonin suspects that people have such low expectations for customer service in education that they’re pleasantly surprised when they can accomplish the smallest things online. “Wow, you actually answered my facilities request. I can find out where my son’s transcript is,” explained Boonin.
By contrast, in the telecom industry, which ranks poorly in customer satisfaction, people are primarily lodging complaints online. In education, the inquiries are more like favors than complaints. And the big complaints people have about education, e.g. “Why doesn’t my university degree help me get a job?”, are unlikely to be lodged online.
“I’m noticing a desire to treat students as customers. They (the universities) want to start to measure this,” said Boonin.
When I was a fellow at Columbia Business School a few years back, I noticed the same customer satisfaction mentality among the administration and faculty. When I originally went to grad school in the mid-nineties, no one cared about pleasing students. I wonder whether pandering to students, because they’re paying so much for their degrees, produces good educational outcomes.
The problem with judging colleges by graduation rates
Karen Gross, President of Southern Vermont College, has an interesting piece on vtdigger.org on why it’s not a good idea to judge a university or college by its graduation rate and the prospective earnings of its graduates.
“…elite institutions, absent some adjustment, would rank higher than non-elite institutions on graduation rates without any explanation as to why that is occurring. And the lower graduation rate of less-elite institutions may be at least partially explained by the lack of preparedness of their students. For some students and their colleges, a graduation rate of 40 percent is success, not failure.”
Survey: Student poverty is rising and so is teacher pay
Last week, the National Center for Education Statistics released the first results of its newest Schools and Staffing Survey, which is administered to teachers and administrators across the United States every four years. The survey, which is meant to examine the characteristics of public school districts, including average teacher salary, the sizes and types of districts, and incentives given to teachers, found that poverty among students is rising, as is teacher pay.
Here are the most interesting data points from the survey:
- The percentage of school districts in which 50 percent or more of students qualify for free or reduced-price lunch has grown. In the 2007-08 school year, about 34 percent of districts fell into that category. By the 2011-12 school year, that figure had increased by about 12 percentage points, to 46 percent of public school districts.
- While fewer districts now offer salary scales for teachers, the average yearly base salary for teachers has increased. New teachers now start off making, on average, about $35,500, an increase of about $2,000 from 2007-08.
- The average number of teachers in a public school district has declined from 211 in 2007-08 to 187 in 2011-12.
- Rural districts are still the most common, and account for about 48 percent of all public school districts.
- Teachers in suburban districts have higher average salaries than teachers in city, town, or rural districts. In the 2011-12 school year, the average for a suburban teacher with 10 years of experience was $53,500, an increase of $3,000 from 2007-08. In a rural district, a teacher with the same qualifications would receive, on average, $41,300.
- Teachers are now less likely to receive a signing bonus or relocation assistance. In the 2007-08 school year, nearly 7 percent of public school districts offered a signing bonus as a tactic to recruit new teachers, and 3.6 percent offered help relocating. Now, only 4 percent offer a bonus and 2.5 percent offer relocation assistance. Large districts with 20 or more schools or more than 10,000 students enrolled are more likely to offer these incentives.
- While only 11 percent of all public districts offer pay incentives for excellent teaching, city districts are overwhelmingly more likely to do so. About 35 percent of city districts reward good teaching with money, compared to 6.7 percent of rural districts.
Only 6 percent of college students do work study
The National Center for Education Statistics reports that only 6 percent of undergraduates earn money through work-study programs. Yet 71 percent receive some sort of financial aid, such as grants or loans.
http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2013165