Using confusion theory to determine student understanding


When you give a test in a typical classroom, have you ever wondered whether students were just guessing? It is difficult to determine, at the time, what they do and do not comprehend, and the mere stress of taking a test can produce confusion. Thus we developed the idea for the “confusion” test and decided to explore survey responses using the tools of confusion theory.

In this particular type of survey we asked students to select the best choice from a fixed set of possible choices; for each item, the choices remained the same. For example, we could ask students to classify a problem by syllabus objective, or to classify a teaching situation by pedagogical method. The goal was not to score individuals, as on a test, but to score the group collective on how accurately it selects among the choices presented. Each survey has a key, constructed by the test designer. In the sense of machine learning, the class collective is regarded as the “machine,” and the variation in its responses is studied.

We wanted to see if students understood the principal topic or objective in a set of precalculus problems. For instance, we could ask whether the topic of a given problem was percentages, fractions, or any of seven other topics. What is unique about this study is, first, the use of confusion theory, which to our knowledge has not been used in the assessment literature and research, and second, that we did not ask students to solve the problems, but rather to indicate what each problem was about. This was an enlightening experience for us and for the students: how often do they take a math test and not have to find the answer? We believed this could be an indicator of their knowledge about the problem. If a student does not know this basic information, any answer submitted to the problem itself may usually be regarded as a guess, and we do know students guess and have learned how to guess.
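To make the mechanics concrete, here is a minimal sketch in Python of how such a survey can be tallied; the topic names, key, and responses below are hypothetical stand-ins, not our actual data. Each student's selections are scored against the designer's key and pooled into a class-level confusion matrix.

    from collections import Counter

    # Hypothetical topic categories (the actual survey offered nine).
    TOPICS = ["percentages", "fractions", "linear equations"]

    # Designer's key: the intended topic of each problem (hypothetical).
    key = {"p1": "percentages", "p2": "fractions", "p3": "linear equations"}

    # Each student's classification of each problem (hypothetical data).
    responses = {
        "student1": {"p1": "percentages", "p2": "fractions", "p3": "fractions"},
        "student2": {"p1": "fractions", "p2": "fractions", "p3": "linear equations"},
    }

    # confusion[actual][selected]: rows are the keyed topics, columns the
    # topics the class selected, pooled over all students.
    confusion = {topic: Counter() for topic in TOPICS}
    for answers in responses.values():
        for problem, selected in answers.items():
            confusion[key[problem]][selected] += 1

    for actual in TOPICS:
        print(actual, [confusion[actual][sel] for sel in TOPICS])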

Here is an example of a simple confusion matrix (Table 1), in which the subjects are classified and the actual identifications are given. The rows are the actual images, and the columns are the counts of what the classifier selects; so 13 images of a Robin were administered. The classifier is shown an actual image of a Robin, Oriole, or Meadowlark, and then classifies it according to some internal knowledge.

              Robin   Oriole   Meadowlark
Robin            11        1            1
Oriole            3        9            8
Meadowlark        3        1            1

Table 1 – A simple confusion matrix (rows are the actual images; columns are the classifier's selections)

These are actual counts of responses. With respect to the Robin, the false negatives are the Oriole (1) and the Meadowlark (1), and the false positives are the Orioles (3) and the Meadowlarks (3). That is, two actual Robin images were misclassified, one as an Oriole and one as a Meadowlark. Totaling all the entries, the classifier was asked to identify 38 objects, of which 13 were Robins, 20 Orioles, and 5 Meadowlarks. The values on the diagonal, 11, 9, and 1, are the true positives. The most natural question is this: Is this classifier any good? We needed a measure to help determine this.
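Two standard measures are the overall accuracy (the fraction of true positives among all classifications) and Cohen's kappa, which discounts the agreement a classifier would achieve by chance. A minimal sketch in Python, applied to the Table 1 counts:

    # Table 1 counts: rows are the actual images (Robin, Oriole,
    # Meadowlark); columns are the classifier's selections.
    matrix = [
        [11, 1, 1],  # actual Robins
        [3, 9, 8],   # actual Orioles
        [3, 1, 1],   # actual Meadowlarks
    ]

    n = sum(sum(row) for row in matrix)            # 38 objects in all
    p_o = sum(matrix[i][i] for i in range(3)) / n  # observed accuracy
    row_totals = [sum(row) for row in matrix]
    col_totals = [sum(matrix[i][j] for i in range(3)) for j in range(3)]
    # Agreement expected by chance, from the row and column margins.
    p_e = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2

    kappa = (p_o - p_e) / (1 - p_e)  # Cohen's kappa: agreement beyond chance
    print(f"accuracy = {p_o:.4f}, kappa = {kappa:.4f}")  # 0.5526 and 0.3221

By these measures the bird classifier does better than chance, but not by much.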

In the first survey, we used a 25-question test on misconceptions in algebra and arithmetic and asked students to classify each question by the principal mathematics topic it fits best [1]. Misconceptions in mathematics have been studied for years, but they persist in all elementary courses [2], [3], [4]. All questions were fairly typical, but the distractors for the multiple choices were specifically chosen so that a student pursuing the solution incorrectly would likely find that incorrect answer in the list. In this survey, students read only the question, seeing none of the distractors. Thus we gathered, from many students with very similar training, all currently enrolled in the same course with the same math background, opinions on only the problem type. This required fewer problems and gave a collective opinion. Students were not asked to solve the problems, just to put them in the correct category. We examined:

  1. Student confusion about selecting the dominant problem objective.
  2. How the key would change if the student voting majority determined it. This recalibrates the confusion by a voter-preference method (a sketch follows this list).
  3. A range of measures of confusion.
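Point 2 amounts to rekeying each item to the plurality choice of the class. A minimal sketch in Python, again with hypothetical vote data:

    from collections import Counter

    # Hypothetical pooled selections: every student's choice, per problem.
    votes = {
        "p1": ["percentages", "percentages", "fractions"],
        "p2": ["fractions", "ratios", "ratios"],
    }

    # Recalibrated key: each problem is rekeyed to its plurality choice.
    majority_key = {prob: Counter(choices).most_common(1)[0][0]
                    for prob, choices in votes.items()}
    print(majority_key)  # {'p1': 'percentages', 'p2': 'ratios'}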

These results have been previously reported (Allen & Goldsby, 2014), showing that students were in general agreement about which techniques were needed to solve various math problems.

We then changed the goal, examining which pedagogy students might select as teaching methods. Eighty-three pre-service teachers took this pedagogy survey, identifying what they considered the “proper” pedagogy for a given topic. Typical questions included:

  1. A pound of coffee costs $2.50. How much does six ounces cost? How do I teach this?
  2. Every student understands fractions through informal knowledge. How do I reinforce this?
  3. What is the best way to teach the idea of a linear relationship?
  4. What is the best way to teach computations with decimals?

The possible responses were Exploration/Inquiry, Guided Invention, Mental Math, Examples, Models, Group Learning, Theory, and Direct Instruction. For each question we constructed a key; other instructors then took the survey, and from their responses additional keys were constructed. Note that this was not a mathematics test per se, and a variety of responses were possible; essentially, there were multiple correct, or at least acceptable, responses. This seemed at times to “confuse” the students, who were used to the focus on the correct answer in mathematics.
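With several defensible keys in play, one way to compare them is to score the same pooled responses against each key in turn. A minimal sketch of that comparison; the responses and keys below are hypothetical, not our survey data:

    # Hypothetical pooled responses: every student's choice, per question.
    responses = {
        "q1": ["Models", "Examples", "Models"],
        "q2": ["Direct Instruction", "Models", "Models"],
    }

    # Alternative keys: ours and one built from another instructor's answers.
    keys = {
        "designer": {"q1": "Examples", "q2": "Direct Instruction"},
        "instructor_b": {"q1": "Models", "q2": "Models"},
    }

    # Accuracy of the class collective under each key.
    total = sum(len(choices) for choices in responses.values())
    for name, key in keys.items():
        hits = sum(choice == key[q]
                   for q, choices in responses.items() for choice in choices)
        print(f"{name}: accuracy = {hits / total:.3f}")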

Most students selected models for almost every type of question, indicating that modeling was a significant factor in their thinking, or possibly in the instructor's. In addition, exploration, guided invention, and examples were strongly favored; rules, group learning, theory, direct instruction, and mental math were not in their thoughts. Perhaps the focus of the course in which they were all enrolled influenced the approaches selected. Consequently, the accuracy against our key was very low, 0.2091, and even the kappa measure was low, 0.0355.

In conclusion, we found the survey:

  • Can be given as a pretest to identify whether a student even knows how to proceed.
  • Can be used to identify vague questions and weak predicates (i.e., to identify actual confusion).
  • Can be used as a formative instrument to identify possible problems.

From our earlier study, pre-service teachers are fairly certain of what type of problem they are considering. This study, however, indicated they are not at all clear on how to teach the topic. Are we surprised?

 

About the Author
Dianne S. Goldsby, Ph.D.

Dianne Goldsby, Ph.D., has been a faculty member of the Teaching, Learning and Culture Department at Texas A&M University for 14 years. She is currently the coordinator for the department’s online M.Ed. programs. Her primary focus is teaching mathematics education courses, including mathematics methods, a problem-solving course, and integrated mathematics and science. She has authored peer-reviewed journal articles, book chapters, and research summaries, given professional development workshops for Texas and New York teachers, and presented over 60 times at international, national, regional, and local conferences. Goldsby is a former SERA division co-chair, reviewer for mathematics teaching journals, associate editor of the newsletter Focus on Mathematics Pedagogy and Content, and a former associate editor for School Science and Mathematics Journal. Her research interests center on pre-service teacher perceptions of mathematics and mathematics teaching.

Works Cited
  1. Allen, G. D., Scarborough, S., & Goldsby, D. Misconceptions exam for methodology. Internet: http://disted6.math.tamu.edu/confusion/confusion.html
  2. Resnick, L. Mathematics and science learning: A new conception. Science, 220, 477-478, 1983.
  3. Mestre, J. Why should mathematics and science teachers be interested in cognitive research findings? Academic Connections, pp. 3-5, 8-11. New York: The College Board, 1987.
  4. Pines, A. L. Towards a taxonomy of conceptual relations. In L. West and A. L. Pines (Eds.), Cognitive structure and conceptual change (pp. 101-116). New York: Academic Press, 1985.