Some examples, large and small:
- Our state assessment task force recommended adopting the Smarter Balanced standardized tests without even trying to assess how much the necessary technology would cost—even though that cost is almost certainly enormous. The report simply dwelled (unconvincingly) on all the supposed benefits of the new tests. So no need to assess costs!
- Last year, the state enacted a teacher leadership program that takes experienced teachers out of the classroom to teach other teachers. Don’t worry, they reassured us, the state will pay for the costs. Then this year, it turns out that there’s not enough money for school aid because we spent so much on the teacher leadership program. Couldn’t have seen that coming.
- At a recent school board meeting here, there was a lengthy discussion about setting academic goals for the district. The board decided to focus on raising reading and math scores. Don’t worry, the superintendent assured the board, focusing more on reading and math doesn’t mean we’ll focus less on other subjects. But how is that possible?
- Two years ago, our district adopted specific numerical diversity goals for school attendance areas without any consideration of what it would take to meet those goals. After months of effort to produce new attendance area maps, once it became clear what meeting the goals would actually require, the board backed off from them.
- Six years ago, to get a grant, the district instituted PBIS, a behavior-modification program that emphasizes reflexive compliance with school rules. There was no consideration whatsoever of possible downsides—for example, of whether the program encouraged acquisitiveness, taught mindless obedience, or would have other unintended consequences. We can get grant money = let’s do it!
- The biggie: Everywhere standardized test scores are held up as the measure of educational success. But even if higher test scores are a benefit, the scores tell us nothing about the associated costs. What was dropped from the curriculum to make those scores go up? Did the teaching techniques have harmful effects in other ways? Were the kids deprived of free play time or a decent lunch period to achieve those scores? Did the teaching achieve short-term success at the cost of creating a long-term aversion to the subject matter? Did the school have to start using behavioral control systems that teach authoritarian values? Did the kids also learn that learning is a joyless drudgery to be avoided as soon as they’re free of compulsion? The scores tell us nothing about those things. What good is that kind of partial information?
So why is it the default mode of school policy-making?
5 comments:
Headline from the Gazette:
State Board of Education hopes for legislative action on new test: Districts have concerns but say alignment with Iowa Core is worth it
Worth what exactly, though? Worth higher class sizes? Loss of a discretionary bus route or two? Worth not having tech available for instructional purposes?
You wouldn't know it from the article, but there are in fact other tests being developed to align with the Common Core. The choice isn't Smarter Balanced or non-aligned tests; it's Smarter Balanced, other aligned tests (with different costs), or less well-aligned tests.
http://thegazette.com/subject/news/government/iowa/state-house/state-board-of-education-hopes-for-legislative-action-on-new-test-20150215
I was unaware that PBIS was tied to a grant. Can you say a bit more about this? Unless I'm forgetting (which is quite possible), it was presented as a research-based student acknowledgment plan that reduced office referrals. 80% teacher buy-in was necessary to implement PBIS.
Jane — Thanks for commenting! Here in Iowa City, no one has ever suggested to me that there was any kind of teacher buy-in requirement for PBIS. It was imposed district-wide as part of getting a five-year Safe Schools/Healthy Students grant. Every school in the district implemented it. I can’t imagine how they could have determined whether 80% of the teachers bought in, in any case—when your employer says they want your buy-in to get a million-dollar grant, I doubt most teachers would feel very free to hold out.
Yes, the justification was that it supposedly reduces office referrals. I’ve learned to be pretty skeptical of the way school officials throw the term “research-based” around; some possible concerns are discussed here. But even if it actually reduces office referrals, that’s just a variation on the theme in this post. That’s (very arguably) a benefit, but at what cost? What other consequences does the program have? Did anyone measure those? If it really is subject to the criticisms I make here—such as teaching reflexive obedience, encouraging acquisitiveness, modeling instrumental treatment of other people, and making good behavior seem like a chore that you should be paid for—shouldn’t someone be asking whether it’s too high a price to be paid for reducing office referrals?
If they don’t assess *all* of a program’s effects, calling it “research-based” is meaningless. They can validate anything with that kind of research.
Thanks for the information, Chris. I've tried to trace PBIS back to its origins to figure out if it had noble beginnings.
I'm a big Alfie Kohn fan, anyway, and love his book, Punished by Rewards.
Thanks, Jane -- I'm an Alfie Kohn fan, too, especially on the issue of rewards.