Friday, May 7, 2010

quantifying learning

I guess this is a coup: my college has just gotten a six-figure grant to focus on student assessment. For readers new to this particular conversation, the hand-wringing about measuring student achievement has slowly been drifting upwards from K-12 into post-secondary education:
  • What are our students learning?
  • What SHOULD they learn?
  • How do we measure that learning?

Reactions are varied. Old-school (and tenured): "That's bullshit! Just more eduspeak to clog up our inboxes and take time away from teaching!"

Administrative types: "We have to demonstrate our VALUE to society in the form of the skills and knowledge students gain in college in return for their investment (tuition, fees, and the like) and the opportunity cost of not working for four years." (The economic argument is a non-starter and also a slippery slope: once you go there you can't escape. But that is a subject for a longer, more philosophical post about what college is really ABOUT.)

I see the value of the approach, but I can also see the pitfalls. In Massachusetts, quantifying learning has been reduced to scores on standardized tests. Yeah, it's damn hard to do it through portfolios, or essays, or other broad measures that really demonstrate what a student has learned. So let's bubble in some Scantron (TM) sheets that a machine can score.

Full disclosure: I NEVER use multiple-choice questions. First, the clever students know exactly how to game them - they've been doing it since they were 5. Second, my students SUCK at multiple choice as I write it - they are not used to having to THINK while they take a test. The tests they've taken reward a particular set of skills, memorization/regurgitation, that are not helpful in the college setting, nor, dare I say it, in real life.

It's easier to quantify fact-based learning on the physical side of geography ("what is a rain shadow?") than in human geography ("what is globalization?"), but I was amused when this came up in a recent conversation and all my colleagues in human geography were adamant that we all teach differently and there is no commonality for a test. That too is bullshit: there are plenty of commonalities; we just don't want to do the work of sitting down and figuring out what we can "agree" on as far as testing goes. Yet with some effort, I bet we could agree on 5-7 essay questions and a rubric for evaluating them.
