The Testing Pall

By Dr. Eugene Maier

It's that time of the year when the test scores from all the schools in the state—by grade level and subject matter—have been released to the press. As a consequence, the papers are filled with reportage, statistical analyses, editorial opinions, and readers' comments about what it all means.

One reads that schools “registered” a “surge” in math achievement while reading achievement showed “sluggish growth,” and that some schools showed “solid improvement almost across the board in meeting state benchmarks in core subjects” while others showed “strong progress” and still others were “tagged” as “educationally inadequate” for their failure to make the state's definition of “adequate yearly progress.”

The implication seems to be that tests provide an accurate measure of the educational development of our students and the quality of our schools. I find no compelling reason to make this assumption and wonder why it is so widely held. The only justification I see for making this assumption is to identify educational attainment with test performance, and this turns education into a travesty—a test-taking training ground.

My jaundiced view of the efficacy of testing comes from decades of interactions with students. I find no particular correlation between mathematical expertise and test performance, especially on tests which yield to the memorization of definitions and proofs and carrying out algorithmic procedures—including so-called tests of problem solving which consist of versions of previously encountered problems that can be solved by using some standard, elementary technique—or allow little time for thinking and mulling.

By and large, those who do well on tests are good rote learners, that is, they are good memorizers and are adept at carrying out prescribed procedures, regardless of their depth of understanding. On the other hand, there are those who are rattled or constrained by tests—those who despite their mathematical insight and knowledge have difficulty thinking when pressured by time or limited in mobility. I suspect many of the latter sour on school and, at best, endure it. As for the former, their ranks are swollen by those who have learned to do well on tests with very little understanding of the material they are being tested on.

Elsewhere, I've referred to these as “swindlers.” (See “Another Case of Swindling” in the Gene's Corner archives.) Mathematics classes, especially when the focus is on the memorization and mastery of facts and procedures, lend themselves to swindling. On many occasions, when addressing audiences at conferences, I've confessed to swindling and asked if there are any other swindlers in the room—others who have taken a mathematics class, gotten a satisfactory or better grade in the class, and have had no real understanding of what the class was about. Hands pop up all over the room.

There are situations, I suppose, where tests are useful. If one wants to find out, say, if someone knows the capital of New Jersey or the date Oregon was admitted to the Union or Ohm's Law or the definition of “quadrilateral”, a test might be appropriate, although one might wonder about the merits of encumbering one's memory with a bunch of information one can easily access elsewhere. On the other hand, using a written, or computerized, test to assess one's adeptness, say, as a tennis player or a tuba player or a potter or a public speaker would be clearly inappropriate.

What about mathematical adeptness? Are written timed tests appropriate for its assessment? In my experience, those who were adept—those who had developed understanding and insight along with skill—generally did well on such tests. But then, so did the accomplished swindlers—those who were good mimickers and memorizers. So, some 25 years ago, I quit giving such tests.

By conversing with students, observing their interactions with others, listening to them explain their thinking, asking them to report in writing on their progress on assignments, and reading and commenting on their written work—without assigning grades to it—I, and the students, came to a better understanding of their knowledge and insight than any number of quizzes and tests would have provided.

Also, the whole classroom atmosphere changed. Students were relieved of the stress of taking tests and I was relieved of the stress of preparing and grading them—something I never found enjoyable. Swindling all but disappeared—students couldn't succeed by spewing forth their rote learning on examination papers. Where test days used to demand the students' attention, now all class days were equally important. The focus shifted from doing well on tests to developing understanding and building insight. Once test scores were eliminated, competition in the classroom gave way to cooperation and camaraderie.

My role also changed. I began to feel more like a mentor and less like a taskmaster. Like the swindlers, who could no longer hide behind test scores, I could no longer hide behind them either. I could no longer pretend that the course grades I gave were determined by performance on supposedly objective tests, when in reality no test is truly objective—someone writes it and someone determines how it is to be graded. I had to make clear to students that whatever course grade they received was a subjective decision I made, based on my interaction with them and my observation of them and their work, tempered by whatever experience and expertise I possessed.

Without the ominous pall of testing hanging over the classroom, it became a much more open and invigorating place to be, a place where learning rather than passing tests held sway. So even if test scores are surging, I don't see the current testing craze heralding a brighter day for education; I only see gloomier classrooms.