Assessing Student Learning Gains Might Be Easier (And More Important) Than You Think

From PsychWiki - A Collaborative Psychology Wiki

Revision as of 02:40, 8 March 2009 by Prose

We are fortunate to live in an era when more institutions than ever value evidence-based decision-making. Institutions of higher education are no exception. Because the great majority of colleges and universities make teaching their first academic priority, it is in the best interest of professors to demonstrate that their teaching has substantial positive effects on student learning. Persuasive evidence of student learning compiled from a variety of assessment techniques can strengthen almost any teaching portfolio or tenure dossier.


Impeding Misconceptions

Nevertheless, there are several misconceptions that may stand in the way of faculty regularly assessing student learning outcomes (beyond those used for grading). Some of these impeding misconceptions include:

1. The belief that student evaluations of teaching should suffice. The problem with this belief is that student evaluations of teaching are not measures of student learning; they are (if constructed well) indices of how effective a teacher seems to students. The amount a student has learned (which is probably the most important outcome of teaching) and that student's opinion of his or her teacher's effectiveness are probably correlated, but they are not identical.

2. The belief that pretest/posttest studies, the most obvious approach to assessing student learning, have such poor internal validity that they are not worthwhile. The flaw in this belief is the inference that because a research design has serious shortcomings, it has no value and should never be used. The value of pretest/posttest studies should not be overstated, but it should not be understated either. Practice effects, maturation effects, and other threats to validity should be considered, but when a pretest/posttest study reveals an enormous increase in students’ knowledge or understanding over the course of a semester, the students’ learning almost certainly accounts for the majority of the increase.

3. Assessing student learning takes too much time. In fact, the time consumed by learning assessments is under the professor's control. One of the least time-intensive techniques is to create a control group of students who have not been (and never will be) enrolled in a professor's course and ask this control group to complete some of the same quizzes or tests that have already been administered to students enrolled in the course. (The enrolled students should score far higher on the learning assessments than the control group.) In addition, because most students are at least minimally aware of whether they are learning, very brief surveys assessing students' perceptions of their learning can provide evidence that is at least suggestive. Even pretest/posttest studies, which might consume more class time than other techniques because they require two measurements, can be made less disruptive by using shorter tests (which still capture a representative sample of the course's content), or by including in the pretest questions that appear on later (graded) quizzes and exams (such that the posttest is embedded in regularly graded work).


Simple Assessment Techniques

While discussing the misconceptions listed above, several simple techniques for assessing student learning were mentioned. More complex assessment techniques are certainly possible, but the following techniques are often adequate to suggest that substantial student learning is occurring:

1. Pretest/posttest studies. Though weak in internal validity, pretest/posttest studies (using a birthday or similar code to allow tracking while preserving students’ anonymity) are still useful. The tests themselves do not have to be long to provide meaningful evidence of learning, and by assessing knowledge of different portions of the course content in different sections of the same course, it is possible to assess knowledge gains across much of the course material. Moreover, as mentioned previously, as long as preserving students’ anonymity is not necessary, pretests containing questions that will be administered on later graded (non-anonymous) quizzes or tests can be used to reduce the amount of in-class time spent on the pretest-posttest study.
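The matching and gain-scoring step described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original article: the self-generated codes (birth month/day plus initials) and the scores are invented examples, and any real study would use the professor's actual test data.

```python
# Match anonymous pretest and posttest records by a self-generated code
# (e.g., birth month/day plus initials) and summarize the knowledge gain.
from statistics import mean, stdev

# Hypothetical scores keyed by each student's anonymous code.
pretest  = {"0412KM": 4, "0930RL": 6, "1105TS": 3, "0217AB": 5}
posttest = {"0412KM": 9, "0930RL": 8, "1105TS": 7, "0217AB": 9}

# Keep only students who completed both tests (codes present in both dicts).
matched = sorted(set(pretest) & set(posttest))
gains = [posttest[c] - pretest[c] for c in matched]

print(f"matched students: {len(matched)}")
print(f"mean gain: {mean(gains):.2f} points (SD = {stdev(gains):.2f})")
```

Because the codes are student-generated, some records will inevitably fail to match (mistyped codes, absences); the set intersection above simply drops them, which is usually acceptable for this kind of informal assessment.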

2. Quasi-experiments. Students enrolled in a course already have their learning assessed through graded quizzes, tests and other assignments. (And if it is not appropriate to use graded work as data in a learning assessment, similar ungraded assessments can be administered.) By creating a control group (of students who are similar to those in the course but who are not enrolled in it) that completes the same learning assessments, it becomes possible to demonstrate that enrolled students understand the course material much better than skilled test-takers who have not taken the course.
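The enrolled-versus-control comparison can likewise be summarized with a simple effect-size calculation. The sketch below uses invented scores and reports Cohen's d (a standardized mean difference), which is one common way to express how much better enrolled students performed; the article itself does not prescribe a particular statistic.

```python
# Compare enrolled students with a non-enrolled control group on the same
# quiz, using Cohen's d (standardized mean difference with a pooled SD).
from statistics import mean, stdev
from math import sqrt

# Hypothetical quiz scores (out of 20) for the two groups.
enrolled = [18, 16, 19, 17, 20, 15, 18]
control  = [9, 11, 8, 12, 10, 9, 11]

def cohens_d(a, b):
    """Standardized difference between two group means."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                     / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

print(f"enrolled mean = {mean(enrolled):.1f}, control mean = {mean(control):.1f}")
print(f"Cohen's d = {cohens_d(enrolled, control):.2f}")
```

If enrolled students really do understand the material far better than skilled test-takers who never took the course, the effect size from such a comparison should be very large, which is itself persuasive evidence of learning.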

3. Surveys assessing students’ perceptions of their own learning. When a pretest was forgotten earlier in the semester, or when there is no time to develop a quality measure of knowledge and understanding at the end of a course, a self-report questionnaire may be better than nothing. Students’ beliefs about how much they have learned are much less than perfectly accurate, but they are usually not entirely disconnected from reality either.



Because many administrators and policy makers are no longer willing to assume that students who are taking classes must be learning, professors have more responsibility than ever to document that their teaching is working. The assessment techniques mentioned above are simple, but if used often and with some variety, they can help build a compelling body of evidence concerning teaching effectiveness. The techniques are simple enough, in fact, that projects using them may be ideally suited for delegation to undergraduates who are eager to gain research and assessment experience.
