Wednesday, July 21, 2010

the angry post

So I've been spending some time at the Principal Center at the Harvard Grad School and I have sat through a broad range of classes. Some have been really useful and some have been totally useless. But today's class was particularly problematic, and I'd like to explain why. I saw problems on three levels, so I'll describe the class and then detail the problems.
The class was focused on the correlation between family involvement in school (both at home and in the school building) and student achievement (as measured on high stakes standardized tests). This correlation was developed in relation to different races and socio-economic groups through a series of direct studies and meta-analyses. OK, enough background. Here are the flaws.
1. The class -- it was simply a review of the presenter's study and not much else.
2. The relevance -- none. It related only to large, urban public middle schools.
3. The method -- ugh. See, here's the thing. Not only did it associate parental involvement and achievement with race (which treats both factors as functions of genetics rather than of socialization and culture, which cut across ethnicity), but it raised the larger question of what we can actually know.
I think I have decided that we can't actually "know" anything, ever. Everything we perceive is simply filtered through our senses and is our personal brain's attempt to synthesize experience. We don't have binocular vision -- we have two eyes' worth of monocular vision which our brains use to create the illusion of binocular vision. We don't know anything scientifically -- we simply draw conclusions from evidence and assume that those conclusions, when they are borne out often enough, pass for "fact" which we can then use to predict phenomena. Any study, then, tries to accumulate enough sample evidence from which we can draw conclusions and "know" things like causal links. Pure bunk, I say.
In any research-based system, the researcher has to be aware that there are variables which make any particular case unique. What he does is throw out variables that he thinks present variation below a threshold of significance, or which he thinks are accounted for by creating a large enough sample that they cannot skew results. But those variables are practically infinite, and each small statistical drag, when coupled with the infinite other ones, adds up to make any particular piece of empirical evidence meaningless. So any conclusion drawn from trends built on top of facts that are actually more subtle, complex, and ultimately disparate than they appear must be faulty. This happens in a situation like Nielsen ratings. A relatively small sample represents the whole of the US, and the assumption is that the behavior of the sample allows one to extrapolate the behavior of the larger group. But that's junk. Does the sample group act sincerely? Are they really representative of the group? Are there outside factors which influence or affect their decisions on any particular day but which wouldn't on another day? There is too much that isn't known or accounted for.
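To make the Nielsen worry concrete, here's a minimal sketch of my own (not anything from the presenter's study, and the numbers are invented): a made-up population of viewers whose behavior depends on one unmeasured variable, and a panel that accidentally over-recruits on that variable. The panel looks plenty big, and it still gets the answer wrong.

```python
import random

random.seed(1)

# Hypothetical population of 100,000 viewers. Whether someone watches the show
# depends on an unmeasured variable: people who are rarely home in the evening
# watch far less often.
population = []
for _ in range(100_000):
    rarely_home = random.random() < 0.30
    watches = random.random() < (0.05 if rarely_home else 0.40)
    population.append((rarely_home, watches))

true_rate = sum(w for _, w in population) / len(population)

# The "panel": 1,000 people, but recruitment accidentally favors those who are
# home in the evening, so the unmeasured variable is skewed in the sample.
weights = [1 if rarely_home else 5 for rarely_home, _ in population]
panel = random.choices(population, weights=weights, k=1_000)
panel_rate = sum(w for _, w in panel) / len(panel)

print(f"true viewing rate:  {true_rate:.3f}")   # about 0.30 in this toy setup
print(f"panel viewing rate: {panel_rate:.3f}")  # noticeably higher
```

One skewed, unmeasured variable and the panel overstates the true rate by several points. Multiply that by all the variables nobody measured and you have my complaint.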

So basically, research and data-driven decision making might be a necessary evil, but it is evil. And these studies, which were also racist and irrelevant, were even more evil. Thanks for listening and, um, yay Harvard?
