Serious games are not so serious


Kids do the funniest things when playing games.

I’ve spent a good portion of the last three years watching kids play learning games of various kinds and trying to make inferences about what they know from their actions in the game. I have tried to design games to capture evidence of how learners understand important concepts. Many would call the games I work on “serious games” because they are meant for a purpose in addition to (but not in place of!) entertainment.

I also use a lot of exploratory data analysis to find patterns in the data and to uncover types of play and players that don't fit our expected patterns. This is where we get to see how kids continue to find fun in ways we didn't expect. They don't take our serious games too seriously. Here are two examples:

First, in our Alice in Arealand game, players in the early levels focus on a skill called area unit iteration. This is the first step in learners coming to understand that the number you get from the area formula is the number of unit squares in a space. Area unit iteration is simply the filling of a space with unit squares laid end-to-end in a non-overlapping way. One of the game levels building this skill involves filling in a wall so that a yeti can't get to Alice. We carefully gather data about where students place all those squares. Most of them do as we intended and place the snow in the gaps. A few others? They put the snow on the yeti's face! (see the right-hand panel of the image below) Well, yes, I suppose that would also stop the yeti from seeing Alice. How did we find this? By looking at the frequencies of the coordinates at which players placed the snow.
Image: Alice in Arealand gameplay, with snow placed on the yeti's face in the right-hand panel
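
As a minimal sketch, that check can be as simple as counting placements per grid cell. This assumes a flat event-log format; the field names, coordinates, and player IDs below are illustrative, not the actual game telemetry:

```python
from collections import Counter

# Hypothetical event log: one record per snow-block placement,
# with the grid coordinates where the player dropped the block.
placements = [
    {"player_id": "p01", "x": 3, "y": 5},
    {"player_id": "p01", "x": 4, "y": 5},
    {"player_id": "p02", "x": 11, "y": 2},  # over the yeti sprite
    # ... thousands more events from the real log
]

# Count how often each grid cell receives a snow block across all players.
coordinate_counts = Counter((e["x"], e["y"]) for e in placements)

# High-frequency cells far from the wall gaps flag unexpected play,
# e.g. a cluster of placements on the yeti's face.
for (x, y), n in coordinate_counts.most_common(10):
    print(f"cell ({x}, {y}): {n} placements")
```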

Now that we know that some percentage of kids do this, does it tell us anything about what they understand about area? Or are they just having some fun? We can't really tell if we look at that action in isolation. However, if we combine it with other actions in the game, it begins to tell us something. Some students who solve the tasks quickly then go back to explore and experiment; they often have quite strong content knowledge. Other students cover the yeti's face early in their task sequence, and we see them taking a long time to solve puzzles on other levels. On our outside measures, they tend to have lower scores on understanding of geometric measurement.
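
As a rough illustration of how those actions might be combined, here is a toy heuristic. The field names and thresholds are invented for this sketch; in practice the interpretation would come from a statistical model fit to real player data:

```python
def interpret_face_covering(player):
    """Toy heuristic combining the face-covering action with other
    in-game behavior. Thresholds and fields are hypothetical."""
    if not player["covered_yeti_face"]:
        return "no unusual play observed"
    # Fast solvers who explore afterwards look like confident experimenters.
    if player["median_solve_time"] < 30 and player["covered_face_after_solving"]:
        return "likely exploration by a strong student"
    # Early face-covering plus slow solutions elsewhere suggests struggle.
    return "possible signal of weaker area understanding"

# Example: a fast solver who went back to play with the yeti.
print(interpret_face_covering({
    "covered_yeti_face": True,
    "median_solve_time": 22,           # seconds per puzzle
    "covered_face_after_solving": True,
}))
```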

As a second example, I was involved in data analysis for the SimCityEDU game. A key component of one of our scenarios was reducing pollution while maintaining sufficient power in the city, in part by bulldozing coal-burning power plants. When we graphed how frequently players engaged in bulldozing, most logged between 2 and 10 bulldozing events. However, a small group had more than 150! What were they doing? They bulldozed nearly the entire city!
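
Spotting that small group is a classic outlier problem. One way to flag them, sketched here with made-up per-player counts (the numbers and cutoff are illustrative, not SimCityEDU data), is to use a robust spread measure that the extreme players can't drag upward:

```python
import statistics

# Hypothetical per-player counts of bulldoze events in the scenario.
bulldoze_counts = [4, 7, 2, 9, 5, 3, 8, 6, 155, 4, 7, 171, 5]

median = statistics.median(bulldoze_counts)
# Median absolute deviation (MAD): unlike the standard deviation,
# it stays small even when a few players log hundreds of events.
mad = statistics.median(abs(c - median) for c in bulldoze_counts)

# Flag anyone far outside the typical range; the 10x multiplier
# is an arbitrary choice for this sketch.
outliers = [c for c in bulldoze_counts if abs(c - median) > 10 * mad]
print(f"median={median}, MAD={mad}, flagged outliers={outliers}")
```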

The availability of these kinds of alternative solutions is what many people view as part of the fun of games. From an assessment standpoint, how do we view that much bulldozing? Eliminating so many buildings does, in fact, drastically reduce the city's power needs, and one can successfully complete the scenario with that strategy. So, are players who do this really good at understanding the system of the city (one of the competencies we are assessing), or is bulldozing just a really fun thing to do? How do we interpret this behavior?

We have to view it in relation to other behaviors we observe in other situations. Has the player shown evidence of strong systems thinking elsewhere? Have they engaged in a lot of "transgressive" play? Combining evidence from many game scenarios and levels, and potentially from sources outside the game, helps us interpret what we see. We can build statistical models that assign the evidence to either systems thinking or just playing around, depending on the rest of the player's record.
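
One way to make that accumulation concrete is a simple Bayesian update, where each observation shifts our belief about the competency. This is a minimal sketch: the prior and the likelihoods are invented for illustration, standing in for a model (such as a Bayes net) actually estimated from player data:

```python
prior_strong = 0.5  # P(strong systems thinking) before any evidence

# For each observation: (description,
#   P(observation | strong), P(observation | not strong)).
# All values here are hypothetical.
evidence = [
    ("kept power stable while cutting pollution", 0.80, 0.30),
    ("mass bulldozing (>150 events)",             0.40, 0.35),
    ("frequent transgressive play elsewhere",     0.30, 0.50),
]

p = prior_strong
for name, p_given_strong, p_given_weak in evidence:
    # Bayes' rule: reweight belief by how diagnostic the observation is.
    numerator = p_given_strong * p
    p = numerator / (numerator + p_given_weak * (1 - p))
    print(f"after '{name}': P(strong systems thinking) = {p:.2f}")
```

Note that the mass-bulldozing observation barely moves the estimate in this sketch, because its two likelihoods are nearly equal: on its own, it is weak evidence either way, which is exactly the interpretive problem described above.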

We could design learning games to eliminate the possibility of these alternative solutions and get cleaner evidence, but designing them out completely runs the risk of alienating gamers. Instead, we can look for traces of unusual play in the data and use statistical modeling to help us interpret them.


About the Author
Kristen DiCerbo, PhD

Kristen DiCerbo, PhD, was a principal research scientist for the Center for Learning Science & Technology within Pearson’s Research & Innovation Network. Dr. DiCerbo’s research program centered on digital technologies in learning and assessment, particularly on the use of data generated from interactions to inform instructional decisions. She has conducted qualitative and quantitative investigations of games and simulations, particularly focusing on the identification and accumulation of evidence. She previously worked as an educational researcher at Cisco and as a school psychologist. She holds doctorate and master’s degrees in Educational Psychology from Arizona State University.