Individuals in a World of Science
We are all the same, we are all different—this is the great modern dilemma. At the same time that science and technology let us see our patterns (guessing what books we’ll like without ever meeting us, predicting the probability that a certain drug will cause a certain side effect), our sense of independence encourages us to believe we cannot be so easily controlled (thus millions of people watching the same TV ad insisting they “think different”).
The tension can be felt most acutely in medicine, where a long and storied tradition of individualism (each patient is unique, with their own symptoms and history and makeup) confronts the most expensive products of modern megascience (every pill of a drug is the same, its workings validated through a test on thousands of people). And then you have doctors, caught in the middle: what are they to be—brilliant individuals, cunningly solving problems on their own (or, more realistically, with a small team) or dutiful cogs, administering the treatments shown most effective by large experiments?
For every individual, you can come up with a story about why the larger results may not apply (most of the people in that study were young and healthy, but you are old and frail). But that just replaces hard science with educated suspicion. Surely what was proven to work on average must, on average, work better, right?
This is the position of evidence-based medicine (EBM), which says that doctors can’t be trusted to make these decisions by themselves. Doctors, EBM proponents argue, are unduly swayed by whim and bias, bribed in endless ways by the manufacturers of expensive drugs and tools, and incentivized to give themselves more business; so we must take these choices out of their hands and give them to a panel of experts, who can review with time and distance what solid scientific studies say actually works and what does not.
I’ve made it sound like I’m on the side of the scientific mass, but I’m really not. Is there any evidence that evidence-based medicine really works? Everything I’ve seen is shockingly inconclusive.
There have been big benefits from smaller interventions—giving doctors tools that encourage them to do the right things. Atul Gawande has been the greatest chronicler of such programs, from forceful reminders to wash your hands to careful checklists before surgery. But for the most part such programs aid doctors rather than overrule them. This is good politics, but it’s also good science: everyone rebels against direct instruction.
I think we have a choice to make. Doctors can simply be told what to do—in which case, why require all those years of med school? Why not write down all the rules and instructions and let any random nurse follow them?—or they can be taught the lessons of the science but allowed to practice it on their own. They can be shown their own human frailties and biases and the huge value that comes from following the proven rules, trained in the common fallacies of probability and statistics—but, in the end, allowed to make the final judgment for themselves. We can screen out those who fail to learn these lessons, but if we can’t, at the end of the process, trust them to make their own decisions, why even bother to have doctors at all?
Medicine is the field where this is clearest, but the same tension has come to teaching as well—every student is the same, every student is different. We once allowed each teacher to direct their classroom in their own way, but high-stakes tests and “value-added” measurements now force all of them into the same mold.
Isn’t this a good thing? Matt Yglesias demands. We have science that shows good teaching can make a huge difference in people’s lives—doesn’t everyone deserve the benefits that come from having a good teacher? He dismisses the stories of individual horrors that result from this process as mere anecdote—inevitably, in imposing a one-size-fits-all solution, there will be some negative side effects for a few, but the benefits for the many outweigh the costs. Again, I have tried to put this position in its most favorable light (I hope Matt will correct me if I’ve failed), but I’m flabbergasted by its callous naiveté. The problem with allowing hard incentive systems to squeeze out individual judgment is that, inevitably, people begin trying to game the system—they cheat on the tests, they coach students on the answers, they cut recess and art for more drill-and-skill. To dismiss the on-the-ground evidence of how badly these tests hurt kids, in favor of some Olympian view of the benefits of rising test scores, is ludicrous when the on-the-ground view is telling you the test scores are actually bogus.
Fine, Matt says, that just means we need to crack down on cheating. (This is always the first response of the incentive designer—we just need to improve the incentive system!) The fact that a couple of teachers cheat on their students’ tests is no reason to give up on all the benefits better teachers can bring. And that’s true, but blatant cheating is just the tip of the iceberg.
In medicine, we can at least measure whether people get healthy. A doctor with some radical new treatment can prove she’s right by testing it against the previous best answer and showing it works better. And we want brilliant teachers to do the same: to come up with innovative new ways of teaching students and prove they work better than the old stale system. But the ultimate goal of school is much less clear and more disputed—is it to create orderly little capitalist worker bees or curious independent thinkers?
Matt says please, we don’t need to enter into this debate. I’m only talking about the fundamentals—basic literacy and arithmetic. But I don’t think that really helps. What good is learning to read if, by the end, you hate doing it?
One solution is to measure students by real results, rather than artificial tests. Can a child read and understand? Ask them to tell you about the books they’ve read lately. (This was how my library’s summer reading program tested whether you actually read the books you claimed. Apparently I didn’t understand most of Snow Crash as a kid, but I loved reading it anyway.) Or, better yet, ask them a question that involves doing some research and see if they can look up and read the answer.
For math, ask them to build something that involves a little calculation, or make change, or any of the real-world activities these isolated skills are supposed to be actually useful for. What you learn from that will be much more revealing than which bubbles kids fill in on a sheet.
The other alternative is to put your trust in teachers, to assume they can tell the difference between a class that’s learning and a class that isn’t, and then give them a chance to do better. Take them to some of the best-run classes in the world and let them absorb the lessons for themselves. Have them meet regularly with their fellow teachers and discuss how they can make their teaching better. This is the humane response to those who want to reduce teaching to a rote question of merely reading off a script (no joke—this is literally what happens in the most test-driven schools…because, after all, science shows the script is best for test scores).
In both cases, I sympathize with the humane aims: I don’t want doctors to become shills for pharmaceutical companies, and I don’t want poor kids to grow up unable to read. But I blanch at the inhumane means proposed to carry them out. As Seeing Like a State describes, the history of high modernist utopian projects has not been a pretty one. The question for policy designers, then, is how to promote huge positive changes without crushing the individuals involved underfoot.
April 6, 2011