Most Professors Assess Students, Not Themselves (Turn Your Teaching into Publications #5)
Don't be like most professors.
This is my number one reason for conducting classroom research: Doing so holds you, the professor, accountable.
When I first started teaching, I simply copied what my favorite professors had done in the classroom without ever wondering whether it was effective. This had me lecturing for most of the period and occasionally asking clever questions to reveal what students didn’t know, which I believed made them all the hungrier to listen. I wanted to undercut the knowledge and beliefs about psychology that students came into class with—to make a sort of clearing into which I would step with the new and improved psychology that I had to offer. (Incidentally, this was how French phenomenologist Maurice Merleau-Ponty organized his books.)
Then, four weeks or so into the semester, I would give students some sort of test. Crucially, I believed the tests were for students, not for my teaching. If students did poorly, I thought, then they needed to make changes. If everybody did poorly, I would just make the assessments easier. That is to say, if my teaching strategy wasn’t working, then I simply expected less from students.
To repeat: I never assessed what I was doing.
I think this is because I believed my methods were effective, so why bother assessing them? But belief and dogma are dangerous, because they lead to unexamined practices. Sincere self-examination through classroom research is a powerful antidote to these dangers. Classroom research makes sure that beliefs remain beliefs and are not misunderstood as truths or realities.
Professors Who Do Classroom Research Assess Themselves
Rather than using assessments to test what students are doing, classroom research turns the focus onto what you are doing as the professor. If your goal is to teach students critical thinking skills, then there has to be measurable proof that students’ critical thinking skills have actually improved. If students don’t show improvement, then something isn’t working. Correct what isn’t working and then reassess. In time, you will have evidence that you are improving as a teacher.
I won’t get into the details of study design—specifically how you can organize an assessment to identify which teaching decisions support or thwart student learning. Just know that all of that is still to come.
End-of-Term Student Evaluations Aren’t the Answer
For decades, research has repeatedly shown that the end-of-semester evaluations students give their instructors aren’t very helpful. I read a bunch of these studies when I was just starting out 12 years ago, and I was surprised to find a bunch more published over the last few months. For example, such evaluations are biased along lines of sex, gender, and race, among other factors. (I suspect there are also discipline-specific biases.)
In my own experience, I’ve had courses that have gone very well and in which students and I were highly engaged. I’ve also had courses in which I gave minimal effort and lectured from slides before retreating back to my office to eat lunch. My evaluations have been consistently good year over year—4.5-5.0 on a 5.0 scale. It’s almost as if what I do doesn’t matter, at least not insofar as the evaluations are concerned.
Consequently, I don’t think end-of-semester evaluations are helpful for assessing my teaching. If we want valid and reliable data, then we have to collect it on our own.
Grades Aren't the Answer
It can be tempting to use student grades as a stand-in for your classroom research data collection. If more students are getting As than in the past, you might reason, then they are learning more. But this only works if student grades are determined using the same sort of evidence you would use in a self-study.
For example, last spring in my behavioral research course I modified a rubric that we were using for our institution-wide gen ed assessment. Our assessment had shown that students were not learning scientific reasoning skills, and I wanted to see if one semester was enough time to produce a measurable improvement. When students entered the course, I gave them a baseline assessment of their scientific reasoning skills. On a 4.0 ratio scale, the class average was 0.7. Translated directly into a grade on the same ratio scale, that would make the class average approximately 18%. Can you imagine the shock and disappointment of showing a bunch of straight-A students that they got an 18% on a test designed as a basic gen ed assessment (i.e., covering courses they had finished two years earlier)? I decided that would be too strong a statement, so I adjusted the scale such that 0 = 60%, 1 = 70%, 2 = 80%, and so on. After all, it wasn’t their fault that they hadn’t learned how to apply scientific reasoning skills; they had never been challenged to do so. (At least that was my hypothesis.)
Twelve weeks into the term, the class’s average raw score was 2.3/4.0, more than a three-fold increase over baseline. But students’ actual grades didn’t go up three-fold; they rose only about 25%.
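To make the arithmetic concrete, here is a minimal Python sketch of the two scales (the function names are mine, purely for illustration): the raw conversion treats the rubric score as a fraction of 4.0, while the adjusted scale anchors 0 at 60% and adds 10 percentage points per rubric point.

```python
# Rubric-to-grade conversions: a minimal sketch of the rescaling described
# above. Function names are illustrative, not from any grading software.

def raw_percent(score, max_score=4.0):
    """Direct ratio conversion: rubric score as a percentage of the maximum."""
    return 100 * score / max_score

def adjusted_percent(score):
    """Adjusted scale: 0 -> 60%, 1 -> 70%, 2 -> 80%, and so on."""
    return 60 + 10 * score

baseline, week12 = 0.7, 2.3  # class averages on the 4.0 rubric

print(raw_percent(baseline))       # 17.5  -> the ~18% "shock" grade
print(adjusted_percent(baseline))  # 67.0  -> the grade students actually saw
print(week12 / baseline)           # ~3.3  -> the three-fold raw improvement
print(adjusted_percent(week12) / adjusted_percent(baseline))  # ~1.24 -> the ~25% grade increase
```

The gap between the last two numbers is the whole point: the adjusted scale compresses the improvement, so grades understate what the raw rubric scores actually show.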
As you can see, I had to adjust the assessment results in order to create fair classroom grades. We also decided to let attendance count for part of students’ grades (approximately 20%, I think it was). Now, showing up might be important for building responsibility, but it is irrelevant to scientific reasoning skills. That means a student could conceivably get an A in the class without mastering the scientific reasoning skills the assessment measured.
Of course, college classes often have all sorts of ways for students to earn points toward their grade, not all of which directly measure their achievement of the target objectives. Grades aren’t a reliable measure of student achievement.