This spring, I began work on Lawyer’s Quest, a web-based role-playing game that students can play to learn the materials in my law courses. Students take on quests within a course, each based on a particular course learning objective, and encounter various non-player characters (NPCs) and objects in the game. Encounters involve answering questions drawn from the course materials, and players receive rewards such as items that improve game play. As players progress in the game, they can earn the ability to effectively “skip” some question interactions with NPCs. A basic game economy is built around a currency of “bits,” which are earned by defeating enemies, solving riddles, or successfully opening treasure chests. Bits can be used to purchase food, which limits how far players can move on the map. In addition, players can spend bits in interactions with a wandering trader (with whom players may haggle over item prices) and with a roaming blacksmith (with whom players may haggle over the cost of repairing items in their inventory); both haggling interactions are driven by API calls to ChatGPT.
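For readers curious what a haggling interaction looks like under the hood, the sketch below is a minimal illustration of how such an exchange might be wired up with the OpenAI Python SDK. The model name, system prompt, and price floor are hypothetical choices for illustration, not the game’s actual implementation.

```python
# Hypothetical sketch of a trader-haggling exchange using the OpenAI
# Python SDK (pip install openai). Model choice, prompt, and price floor
# are illustrative assumptions, not Lawyer's Quest's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def haggle(item_name: str, list_price: int, player_offer: int) -> str:
    """Ask the model to role-play the wandering trader's counteroffer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a wandering trader in a fantasy RPG. "
                    f"You are selling '{item_name}' listed at {list_price} bits. "
                    "Haggle in character; never sell below 60% of list price."
                ),
            },
            {
                "role": "user",
                "content": f"I'll give you {player_offer} bits for it.",
            },
        ],
    )
    return response.choices[0].message.content

print(haggle("Iron Sword", list_price=100, player_offer=50))
```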
The entirety of Lawyer’s Quest (initial design, Python code, Flask implementation, content, graphics, and so on) was co-developed primarily with ChatGPT and Claude.ai, with some later work using OpenAI’s Codex and some early work with Cursor, and the bulk of the application development was completed in under two months. The website is hosted on Heroku and backed by an Azure MySQL database, along with other web components such as a hosted proxy server, making the game easily scalable.
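As a rough illustration of that stack, a Flask app backed by a hosted MySQL database can be wired up in a few lines. The environment variable name, Player model, and route below are illustrative assumptions rather than the game’s actual code.

```python
# Minimal sketch of a Flask app backed by a MySQL database, in the
# spirit of the Heroku + Azure MySQL setup described above. The
# DATABASE_URL variable, Player model, and route are assumptions.
import os

from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# e.g. mysql+pymysql://user:pass@your-server.mysql.database.azure.com/lq
app.config["SQLALCHEMY_DATABASE_URI"] = os.environ["DATABASE_URL"]
db = SQLAlchemy(app)

class Player(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)
    bits = db.Column(db.Integer, default=0)  # in-game currency balance

@app.route("/players/<int:player_id>")
def get_player(player_id):
    player = Player.query.get_or_404(player_id)
    return jsonify(name=player.name, bits=player.bits)

if __name__ == "__main__":
    app.run()
```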
As part of an experimental elective on Social Media and Intellectual Property law, I integrated six quests from Lawyer’s Quest into the assignments students completed for the course. The first five quests were based on learning objectives covering various aspects of intellectual property law (such as trademarks, copyright, fair use, the DMCA, the CCPA, and the GDPR), and the sixth quest was designed as a competitive review activity in which students competed to see who could answer the most consecutive questions correctly in a tournament-style competition. Students then completed a timed, two-hour, open-book exam on these materials that contained the same item types (true/false, multiple choice, and fill-in-the-blank) as Lawyer’s Quest.
I then collected data on each student’s exam performance, the number of quests they successfully completed, and the number of questions they ultimately answered correctly in the game. These data are presented in Table 1 below.
Table 1 – Student Outcomes

| Student | Quests Completed (LQ) | Exam Score (/48) | Questions Answered Correctly |
|---------|-----------------------|------------------|------------------------------|
| 1       | 6                     | 46               | 406                          |
| 2       | 0                     | 0                | 0                            |
| 3       | 0                     | 20               | 0                            |
| 4       | 5                     | 40               | 568                          |
| 5       | 2                     | 35               | 200                          |
| 6       | 0                     | 34               | 0                            |
| 7       | 0                     | 22               | 29                           |
| 8       | 3                     | 38               | 280                          |
| 9       | 0                     | 30               | 0                            |
| 10      | 1                     | 36               | 34                           |
| 11      | 6                     | 17               | 528                          |
| 12      | 0                     | 27               | 7                            |
I then investigated whether there was any correlation between exam scores (out of a maximum of 48 points) and either quest completion or the number of questions answered correctly in the game.
While the sample is small (and uncontrolled for potential confounding covariates such as students’ historical GPA and demographics), there is some indication of a relationship between participating in the game and exam performance. Pearson’s r is 0.40 for the number of quests completed versus exam score, and 0.36 for the number of questions answered correctly in the game versus exam score; both suggest a moderate positive correlation. In addition, there was a difference in average exam scores between students who completed at least one quest in the game (35.3/48) and those who completed none (22.2/48), though this result was not quite significant (p = 0.065).
Interestingly, if students 2 and 11 are removed (student 2 did not complete the exam or use the game, and student 11 appeared to have run out of time on the exam, attempting only around a third of the questions), a thirteen-point average difference in exam scores remains, but the difference is now significant (p = 0.0055), and the Pearson r values increase to 0.84 and 0.74, respectively, indicating a strong correlation between quest completion and exam scores and between questions answered in Lawyer’s Quest and exam scores.
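For reference, both analyses can be reproduced directly from Table 1 with scipy. The sketch below assumes Welch’s two-sample t-test for the group comparison, which closely matches the reported p-values; the original analysis may have used a different test.

```python
# Sketch reproducing the Table 1 analysis with scipy. Welch's t-test
# (equal_var=False) is an assumption; it closely matches the reported
# p-values, but the original analysis may have been computed differently.
from scipy.stats import pearsonr, ttest_ind

quests    = [6, 0, 0, 5, 2, 0, 0, 3, 0, 1, 6, 0]
exam      = [46, 0, 20, 40, 35, 34, 22, 38, 30, 36, 17, 27]
questions = [406, 0, 0, 568, 200, 0, 29, 280, 0, 34, 528, 7]

def analyze(q, e, a):
    """Correlations plus a played-vs-not comparison of exam scores."""
    r_quests, _ = pearsonr(q, e)
    r_answers, _ = pearsonr(a, e)
    played = [score for n, score in zip(q, e) if n > 0]
    skipped = [score for n, score in zip(q, e) if n == 0]
    _, p = ttest_ind(played, skipped, equal_var=False)
    return r_quests, r_answers, p

# Full sample: r ~ 0.40 and 0.36, p ~ 0.065
print(analyze(quests, exam, questions))

# Excluding students 2 and 11 (zero-based indices 1 and 10):
# r ~ 0.84 and 0.74, p ~ 0.0055
keep = [i for i in range(12) if i not in (1, 10)]
print(analyze([quests[i] for i in keep],
              [exam[i] for i in keep],
              [questions[i] for i in keep]))
```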
Obviously, such a small sample can only be suggestive of a relationship. My plan is to expand the use of Lawyer’s Quest to a larger sample and examine the outcomes in more detail later this year. That said, the results here are promising and suggest that this intervention can meaningfully affect student performance.
Moreover, the game led several students to answer hundreds of practice questions about the materials, far more exposure to the materials than my typical course design provides (practice exams are generally shorter). I find it interesting that some students were highly engaged with the course, though it is hard to say whether the game caused this engagement or whether it reflected pre-existing student motivation toward the course or the subject. Either way, this preliminary information supports continuing forward with the idea.