Taking Serious Games Seriously in Education
This article first appeared on the EDUCAUSE Review website and was re-posted with permission.
Games can serve as a means of not just developing domain-specific knowledge and skills but also identity and values key to professional functioning. The data from games enable understanding how students approach and solve problems, as well as estimating their progress on a learning trajectory.
Interest in games for learning has grown for many years. Attention initially centered on games’ potential to engage students by capturing and holding their attention in new ways. While the learning portion was important, much discussion explored reward systems and badges as ways to motivate student participation and practice. As people looked more closely at the potential of games for student engagement, however, they found that games align well with theories of learning in many other ways.
James Paul Gee pointed out some of these overlaps more than a decade ago in his now-classic book, What Video Games Have to Teach Us About Learning and Literacy.1 There, Gee drew most of his examples from entertainment games rather than education-specific games. At the time, many learning games consisted of drill-and-practice activities that promoted skill automaticity; far fewer sought to promote conceptual understanding. Since then, concentrated work has created serious games — games with more than just entertainment as a goal — that embody the principles of deeper learning.
At the same time, the fields of cognitive psychology, educational psychology, and computer science have come together to create the interdisciplinary field of the learning sciences. Many people in the field have turned their attention to games, along with other digital learning environments such as intelligent tutors and simulations. As learning scientists have engaged with them, games have become stronger learning tools because of:
- Tighter ties to research-based learning progressions,
- Better links to elements of professionalization, and
- Better design for assessment.
Serious Games in Action
Before exploring each of these advances in more detail, consider two examples of games that exemplify some of the points raised. The first is Mars Generation One: Argubot Academy, designed and built by a team from GlassLab, the Educational Testing Service, and my research team at the Center for Learning Science & Technology at Pearson. We designed the game to teach and assess skills of argumentation, including identifying evidence of different types, matching claims to evidence to form arguments, and evaluating claim and evidence links in others’ arguments. The game takes place in the first colony on Mars, where the inhabitants frequently debate how the colony should be run, including the primary source of protein (soy? fish? insects?) and whether they should allow Earth-like pets. Debates over these issues are settled via battles with special robots called argubots. Players must equip their argubots for battle with claims, match those claims to evidence that supports them, and then go into battle and identify weaknesses in their computer opponents’ claim-evidence matches.
Nephrotex: An epistemic game
A second game, Nephrotex, is a semester-long experience in which players assume roles as interns in a fictitious bioengineering firm (the eponymous Nephrotex). The interns are assigned to a project that requires them to collaborate on the development of a novel nanotechnology-based membrane for use in kidney dialysis systems. Students review technical documents, conduct background research, and examine research reports based on actual experimental data. They then develop hypotheses based on their research, test those hypotheses in the provided design space, and analyze the results, first individually and then in teams. Students communicate with their team members and design advisor, using built-in chat and e-mail, and they record their activities and reflect on them in an engineering notebook.
Both games collect extensive data about player actions, which are analyzed via sophisticated statistical models. Instructors and students can receive reports about players’ knowledge, skills, and other attributes based on their patterns of game play.
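At its simplest, this kind of telemetry is an event log plus a summary report. The sketch below is purely illustrative — the event schema, player names, and report fields are assumptions for the example, not the actual data structures used in either game:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class EventLog:
    """A minimal in-game telemetry log (hypothetical schema)."""
    events: list = field(default_factory=list)

    def record(self, player, action, success):
        # Each player action becomes one logged event.
        self.events.append({"player": player, "action": action, "success": success})

    def report(self, player):
        # Summarize one player's pattern of play for an instructor report.
        mine = [e for e in self.events if e["player"] == player]
        by_action = Counter(e["action"] for e in mine)
        success_rate = sum(e["success"] for e in mine) / len(mine)
        return {"attempts": dict(by_action), "success_rate": success_rate}

log = EventLog()
log.record("ada", "link_evidence", True)
log.record("ada", "link_evidence", False)
log.record("ada", "attack", True)
print(log.report("ada"))
```

Real game telemetry is far richer (timestamps, context, sequences), but reports like this are the raw material the statistical models work from.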
Games and Learning Progressions
Learning progressions, sometimes called learning trajectories, describe how learners move from less sophisticated to more sophisticated understandings of a given topic. If we think of standards or learning outcomes as the goal, learning progressions describe how students get there. Of course, most domains offer multiple paths to get to outcomes, and most learning progressions attempt to identify those and/or identify paths associated with a particular curriculum. Learning scientists in various domains have conducted extensive research to define and validate progressions, outlining both stages and characteristic understandings and misconceptions at those stages. The idea of progressions aligns tightly to game design, where increasing levels of complexity mark successive levels in the game.
The first step in designing Argubot Academy, for example, was to review research on learning progressions in argumentation. Consolidating this research, the development team constructed a learning progression made up of multiple components, including interpretation and expression. For example, stages of expression of an argument included:
- Preliminary: Generates at least one reason to support a specific point, in sentence form
- Foundational: Generates multiple reasons to support a point and uses these reasons to counter others’ arguments in an engaging, familiar context
- Basic: Builds logical, hierarchically structured arguments by selecting and arranging reasons and evidence to support main and subsidiary points
The team then aligned game levels to these stages. In early levels, players focus on linking single pieces of evidence to claims. They then move to identifying multiple pieces of evidence of different types (observation, expert testimony, etc.) for a claim in response to opposing argubots. Finally, players build multiple argubots to create structured arguments.
Two of the available types of argubots, the authoritron and consebot
The core of an authoritron with a claim inserted by the player
The technology allows for immediate feedback on student choices, which helps them improve their skills and move through game levels, and thus through stages of the progression. Rules programmed into the game govern the computer opponents’ behavior. If a player’s claim-evidence pairing is successfully attacked (for example, if the evidence is irrelevant or does not support the claim), the player’s robot immediately loses power. Players can see not only that their argument wasn’t strong but also what the weakness was, based on the type of attack the opponent launched. They likewise immediately see the success or failure of their own attacks in the power changes of the opponents’ argubots. Players who lose can immediately rebuild their argubots with new claim-evidence pairings, and they proceed to the next level after demonstrating mastery.
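The kind of rule described above — an attack succeeds only if the claim-evidence pairing actually has the targeted weakness — can be sketched in a few lines. Everything here (the attack names, the power penalty, the data layout) is a hypothetical illustration, not the game’s actual code:

```python
def resolve_attack(argubot, attack_type):
    """Apply an opponent's attack; a genuinely weak pairing loses power."""
    # Map each attack type to whether this argubot actually has that weakness.
    weaknesses = {
        "irrelevant": not argubot["evidence_relevant"],
        "unsupported": not argubot["evidence_supports_claim"],
    }
    if weaknesses.get(attack_type, False):
        argubot["power"] -= 20  # penalty size is an assumption
        return f"hit: evidence is {attack_type}"
    return "deflected"

bot = {
    "claim": "Soy should be the primary protein",
    "evidence_relevant": True,
    "evidence_supports_claim": False,
    "power": 100,
}
print(resolve_attack(bot, "unsupported"))  # the pairing is weak, so power drops
print(bot["power"])
```

The pedagogically useful part is the return value: because the attack names the specific flaw, the player learns not just that the argument failed but why.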
Games and Professionalization
Apart from learning skills and knowledge of a domain, becoming a professional in a given area involves developing an identity, for example as an engineer, a psychologist, or a biologist. Novices must come to understand the beliefs that people in a given profession hold and assimilate those into their own belief structures. Commercial games have long employed the concept of identity, allowing players to build avatars, join guilds, and form teams, each organized around specific combinations of knowledge and skill. Instead of building identities as wizards, can we use games to build identities more applicable to the real world?
David Williamson Shaffer, a University of Wisconsin professor, describes the concept of an epistemic frame as the interaction of knowledge, skills, identity, values, and epistemology (what counts as justification of action in a domain community) used in a particular profession. He and his colleagues hypothesized that if beginning engineering students could develop some elements of an epistemic frame, they would be more likely to persist in difficult introductory math and science courses. Many students begin to develop elements of professional identity and practice in practicum experiences in their junior or senior years. First-year students generally lack the engineering knowledge and skills to participate in practica. Shaffer’s group attempted to address this challenge by creating a game in which players could assume professional identities, but gaps in students’ technical knowledge and skill could be filled in through technology. For example, rather than asking students to complete technical design components beyond their skill level, the components can be generated by the simulation based on the constraints the students do have the skills to specify.2
One key to most practicum experiences, and to developing identity and values, is interaction with professionals in the field. These interactions serve both to scaffold knowledge and skills and to model the expert’s thinking. Nephrotex is designed so that students interact with live mentors in the program. The mentors, called design advisors, offer scaffolded problem-solving opportunities, model professional practice, allow students to explore the professional domain, and invite them to participate in conversations to reflect on their work. Also of note, Shaffer and colleagues, including Art Graesser at the University of Memphis, are investigating these interactions to determine which could be automated into an intelligent “AutoMentor” that could take some of the load off the human mentors.
In a study of the impact of playing Nephrotex on student opinions of engineering, researchers found that first-year women who played Nephrotex were more likely than women in a control group to show positive changes in their view of engineering over the course of a semester.3 These results point to the promise of games for encouraging student persistence in difficult STEM subjects.
Games and Assessment
Finally, games provide the opportunity for invisible assessment: gathering information without interrupting the flow of instruction (hence the term “invisible”). Instructors and students can thus receive feedback about progress immediately, in time to make adjustments to teaching and learning. Invisible assessment also eliminates the common complaint about the heavy time requirements of both giving and scoring traditional assessments.
In Argubot Academy, players never stop to answer a multiple choice question or take a quiz. Instead, the game gathers information about which evidence they choose, which claims they link it to (and how often they change their links), and how they choose to attack other robots. The records of player actions while in the game environment can then be used to build statistical models that let us estimate students’ placement on the argumentation learning progression. Are they at an early stage where they link one reason to one claim? Can they generate multiple reasons? Can they use multiple types of arguments?
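One simple way to turn such action records into a placement estimate is a Bayesian update over the stages of the progression. The sketch below is a deliberate simplification — the stage names follow the progression described earlier, but the success probabilities and the single-skill model are assumptions; the actual game uses more sophisticated statistical models:

```python
STAGES = ["preliminary", "foundational", "basic"]
# Assumed P(task success | player's stage) for a task requiring
# foundational-level argumentation skill.
P_SUCCESS = {"preliminary": 0.2, "foundational": 0.7, "basic": 0.9}

def update(prior, success):
    """One Bayesian update of the stage distribution after an observed task."""
    likelihood = {s: (P_SUCCESS[s] if success else 1 - P_SUCCESS[s]) for s in STAGES}
    unnorm = {s: prior[s] * likelihood[s] for s in STAGES}
    total = sum(unnorm.values())
    return {s: unnorm[s] / total for s in STAGES}

belief = {s: 1 / 3 for s in STAGES}        # start with a uniform prior
for outcome in [True, True, False, True]:  # observed in-game task outcomes
    belief = update(belief, outcome)
print(max(belief, key=belief.get))         # most probable stage
```

Each observed action nudges the probability distribution, so the estimate sharpens as play continues — without the player ever stopping for a quiz.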
In addition, technology lets us capture information about students’ process of solving problems, rather than just the final product of their work. Nephrotex provides a key example. The researchers analyze the chat logs between students and between students and their mentors, using natural language processing techniques to categorize interactions, for example as relating to values or identity. They then use a technique called Epistemic Network Analysis to analyze the relationships among types of interactions and map patterns of change over time. From these patterns they can identify student levels of identity and values — similarity to professionals and clients in the scenario — along with their knowledge and skills. Reports then provide feedback to students and instructors about these levels and patterns.
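The core step behind this kind of network analysis is counting how often coded interaction types co-occur. The sketch below shows only that counting step with made-up codes and turns; full Epistemic Network Analysis goes on to normalize and project these networks, which is omitted here:

```python
from itertools import combinations
from collections import Counter

# Each chat turn has been tagged with epistemic-frame codes
# (codes and data are illustrative, not real Nephrotex output).
coded_turns = [
    {"skill", "knowledge"},
    {"identity", "values"},
    {"skill", "identity"},
    {"knowledge", "skill"},
]

def cooccurrence(turns):
    """Count how often each pair of codes appears in the same turn."""
    counts = Counter()
    for turn in turns:
        for a, b in combinations(sorted(turn), 2):
            counts[(a, b)] += 1
    return counts

net = cooccurrence(coded_turns)
print(net[("knowledge", "skill")])  # strongest connection in this toy log
```

The resulting pair counts form the edges of a network, and tracking how those edges strengthen over a semester is what lets researchers map changes in students’ thinking.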
These invisible assessments in games assess things we might not otherwise be able to. Identity and values, if assessed at all, are generally measured by self-report. Gathering information about student interactions “invisibly” in a digital environment allows us to make observations in different, authentic contexts, providing a broader range of evidence on which to base our inferences.
Over the past 15 or so years, work by learning scientists has moved educational games beyond their early incarnations. We can now align game levels to research-based learning progressions (and also validate those progressions using game evidence). Games can serve as a means of not just developing domain-specific knowledge and skills but also identity and values key to professional functioning. Finally, we can use the data from games to gain an understanding of how students approach and solve problems, as well as estimating their progress on a learning trajectory.
For those thinking about using learning games in the classroom, these developments mean you can ask more questions about games, such as:
- What is the model of learning embodied in the game? What skills are needed for success in the game, and how are they sequenced in the game? Does that match known, research-based learning trajectories?
- Can you clearly identify cognitive and non-cognitive skills and attributes targeted in the game?
- Do reporting functions in the game link player actions to estimates of knowledge, skill, or ability?
Research on games and learning has moved from asking “Do games help students learn?” to “What features of games help students learn what kinds of knowledge, skills, and attributes?” Consumers of learning games can likewise demand more sophistication from the games available for educational purposes.
- James Paul Gee, What Video Games Have to Teach Us About Learning and Literacy (New York: Palgrave Macmillan, 2003).
- Golnaz Arastoopour, Naomi C. Chesler, Cynthia M. D’Angelo, David Williamson Shaffer, Jamon W. Opgenorth, Carrie Beth Reardan, Nathan Patrick Haggerty, and Clayton Guy Lepak, “Nephrotex: Measuring First-year Students’ Ways of Professional Thinking in a Virtual Internship,” paper presented at the 2012 meeting of the American Society for Engineering Education, San Antonio, TX (June 10–13, 2012).
About the Author
Kristen DiCerbo, Ph.D., is a principal research scientist for the Center for Learning Science & Technology within Pearson’s Research & Innovation Network. Dr. DiCerbo’s research program centers on digital technologies in learning and assessment, particularly on the use of data generated from interactions to inform instructional decisions. She has conducted qualitative and quantitative investigations of games and simulations, particularly focusing on the identification and accumulation of evidence. She previously worked as an educational researcher at Cisco and as a school psychologist. She holds doctorate and master’s degrees in Educational Psychology from Arizona State University.