Does Data Display Impact Instructional Decisions?


Over the course of a school day, teachers make countless instructional decisions. Data, ranging from anecdotal observations to assessment results, is regularly used to guide that decision making. In fact, we know teachers can use student achievement data to improve student performance. But how? The path from data to decisions is complex, and research tells us that data interpretation is an important step along the way. Achievement data can be displayed in different ways, including reports with visualizations (tables, charts, and graphs) and online gradebooks. This led Pearson researcher Kristen DiCerbo and me to ask, “Does data display impact instructional decisions?” We presented our research findings in full on Saturday, April 18th at the American Educational Research Association Annual Meeting in Chicago (session 46.030).

To address this research question, just under 200 teachers responded to an online survey. Each survey question presented teachers with a scenario, a data display showing class assessment results, and a question requiring an instructional decision. For example, one decision required teachers to recommend a group of students for either an accelerated enrichment program or an intervention support program based on their assessment performance on two skills. Teachers were asked to make this decision three times, and each time they were given the assessment results in a different data display. The displays used the same source data, but the data was visualized in different ways (as a scatter plot, a bar chart, and a table) and labeled with different student names.

Examples of Data Displays

What did we find? Data display does impact the type of instructional decisions teachers make. When we looked at selections across the visualizations, we found that about 98% of teachers changed the students they nominated for the accelerated group and 99% changed the students they nominated for the support group. Moreover, when using the scatter plot (shown below), teachers tended to overemphasize the y-axis. Students who performed well on the skill measured by the y-axis (even if they performed poorly on the skill measured by the x-axis) were more frequently selected for the accelerated program. Likewise, students who performed poorly on the skill measured by the y-axis (even if they performed well on the skill measured by the x-axis) were more frequently selected for the support program. This was not the case when teachers examined the same assessment data using a table or a bar chart. As you can imagine, these instructional decisions have real ramifications: students who would have reaped the most benefit from an accelerated program or an intervention support program are overlooked.

Scatter Plot Used for Accelerated Program and Support Program

Clearly, data display matters! Teachers regularly interact with data displays when examining gradebooks and reports. To better serve teachers, data displays and reports should be developed with user experience research and testing that involves end users. In this case, that means teachers.


Connect with her on Twitter: @tasmin_dhaliwal


About the Author

Tasmin Dhaliwal is a Research Associate in the Center for Digital Data, Analytics & Adaptive Learning. Her research interests focus on teacher interaction with data displays and assessment reports, and their impact on instructional decision making. She previously worked in teacher development for Teach For America and taught elementary school.


This blog post was originally published on the Research & Innovation Network blog and was re-posted with permission.