One activity alluded to earlier is the creation of items for use in teaching. Chat and other GenAI systems can generate these items for use in a variety of formative and summative assessments.

With Chat, one effective method for generating items is to provide the AI with the written materials for the area you wish to assess. For example, if you are using an open educational resource or other work licensed to permit derivatives, you could upload the relevant portion of the resource to Chat with a prompt asking it to “create ten multiple choice questions concerning the enclosed file.”

What kind of item to have Chat generate depends on the learning objectives of your particular unit or module and the content you are teaching. Giving Chat your learning objective(s) with your prompt can help it generate more fine-tuned items. You can also ask Chat to assess the Bloom’s Taxonomy level of your specific learning objectives and to generate items appropriate to that level (identify, apply, analyze, and so on; higher taxonomy levels will lead Chat to generate somewhat more sophisticated items).[i]
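If you prefer to script this step rather than paste materials into the chat window, the same idea can be expressed through an API. The sketch below is only a minimal example: it assumes the OpenAI Python SDK, a hypothetical reading file, and illustrative objectives, and the prompt wording is just one of many that would work.

from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

# Hypothetical learning objectives and target Bloom's Taxonomy level for this module
objectives = [
    "Identify the risks of using AI-generated content without review.",
    "Apply course-alignment checks to AI-generated assessment items.",
]
bloom_level = "apply"
source_text = open("module_reading.txt", encoding="utf-8").read()  # hypothetical file

prompt = (
    "Create ten multiple choice questions concerning the material below.\n"
    f"Write the questions at the '{bloom_level}' level of Bloom's Taxonomy, "
    "and align each question with one of these learning objectives:\n"
    + "\n".join(f"- {o}" for o in objectives)
    + "\n\nMaterial:\n"
    + source_text
)
response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)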

Multiple choice is only one of several item types Chat can generate. What you ask Chat to do should be aligned with your specific outcomes for the course and the materials you are teaching. However, generating the items is only part of the work. If you are using a learning management system, you will still have to load the items into that system to create an assessment for students to complete, which can be particularly time consuming if you need a large set of items.

D2L Brightspace supports the import of items into its question library and specifies file layouts for doing so. For example, multiple choice items can be uploaded in this format:

NewQuestion,MC,,,
QuestionText,Which aspect of AI-generated ideas should instructors be cautious about?,,,
Points,1,,,
Difficulty,1,,,
Option,100,Ensuring alignment with course objectives.,,
Option,0,Replacing all textbook content with AI outputs.,,
Option,0,Completely relying on AI without review.,,
Option,0,Sharing AI-generated content without any editing.,,
Hint,Check page 87 of the textbook,,,
Feedback,AI outputs should be carefully aligned and edited to match course goals.,,,

Each item should start with NewQuestion,MC,,, so that Brightspace can separate out each item in the file. The format relies on commas to separate fields, which can create import problems if the generated question text itself contains a comma. In addition, the file extension must be .csv for Brightspace to recognize the file on import. Other item types are also supported for bulk import, such as true/false, multi-select, short answer, and matching.

You can also ask Chat to create a text file containing the items generated in your chat session. Be sure not to ask for a .csv file directly, though, as Chat tends to wrap each line in quotation marks, which will prevent the import.
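If you would rather assemble the file yourself from items Chat has drafted, the layout above is simple enough to script. The following is a minimal Python sketch under the assumption that the items are already collected in a list and that no field text contains a comma (commas would need to be removed or replaced first, since the fields are written unquoted):

# Minimal sketch: write items into the Brightspace multiple choice layout shown above.
# Each item is assumed to be a dict with question, options (text and percent), hint,
# and feedback; no field may contain a comma because the fields are written unquoted.
items = [
    {
        "question": "Which aspect of AI-generated ideas should instructors be cautious about?",
        "options": [
            ("Ensuring alignment with course objectives.", 100),
            ("Replacing all textbook content with AI outputs.", 0),
            ("Completely relying on AI without review.", 0),
            ("Sharing AI-generated content without any editing.", 0),
        ],
        "hint": "Check page 87 of the textbook",
        "feedback": "AI outputs should be carefully aligned and edited to match course goals.",
    },
]
with open("questions.csv", "w", encoding="utf-8") as f:
    for item in items:
        f.write("NewQuestion,MC,,,\n")
        f.write(f"QuestionText,{item['question']},,,\n")
        f.write("Points,1,,,\n")
        f.write("Difficulty,1,,,\n")
        for text, percent in item["options"]:
            f.write(f"Option,{percent},{text},,\n")
        f.write(f"Hint,{item['hint']},,,\n")
        f.write(f"Feedback,{item['feedback']},,,\n")

The Points and Difficulty values are fixed here for simplicity; they could just as easily be pulled from each item, as discussed below for Bloom’s levels.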

You may use a different LMS with a different file layout requirement, but Chat is likely to be able to follow that layout if you provide an example as part of your prompt. There may also be a custom GPT in the marketplace that supports your particular LMS or other application for file transfer. For example, I found a GPT built specifically to generate Brightspace items (though in testing it stumbled at the file creation step).

As mentioned earlier, other documents you already have can be used for item generation. For example, lecture or meeting transcripts could be used to reinforce learning from the lecture itself. If you provide your students with academic journal articles or reported cases, those can also be used to generate items that test student understanding of the reading.

Another benefit for the instructor is that Chat can generate a large number of different questions. This is helpful if you want to build a question pool, from which the LMS randomly selects a subset of questions to present to each learner on each attempt of the quiz assignment. Such a configuration helps reduce quiz fatigue for students. Brightspace also provides a way to give the learner a hint for answering the question, which can be particularly helpful for practice activities. In my example, I asked Chat to base each hint on the page number of the text from which it derived the question, giving the student a reference point for studying the content if they miss the question initially.

I’ve also set the difficulty level based on the Bloom’s Taxonomy level, which permits further customization of question pools. For example, keeping level 1 “identify”-type questions in a separate pool from higher level 3 “apply”-type questions helps make the assessment fairer and ensures that all students receive a roughly equally difficult assessment drawn from balanced pools on the content. You could also customize your prompt further to assign different point values based on the item type.

Accomplishing this type of complex item development is entirely possible with existing GenAI tools and layered prompt engineering, where you break the problem down into chunks and feed them to the GenAI system in steps.
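As a rough illustration of what layering looks like when scripted, the sketch below again assumes the OpenAI Python SDK and a hypothetical transcript file. It sends the work to the model in stages and carries the conversation history forward, so each step builds on the one before it:

from openai import OpenAI

client = OpenAI()
transcript = open("lecture_transcript.txt", encoding="utf-8").read()  # hypothetical file

# Break the task into chunks and feed each step to the model in turn,
# keeping the conversation history so later steps build on earlier ones.
steps = [
    "List the ten most important concepts covered in this lecture transcript:\n" + transcript,
    "For each concept, write one multiple choice question at the 'apply' level of "
    "Bloom's Taxonomy, with four options and one correct answer.",
    "Format the questions in the Brightspace CSV layout (NewQuestion,MC,,, followed by "
    "QuestionText, Points, Difficulty, Option, Hint, and Feedback lines), "
    "without quotation marks around any field.",
]
messages = []
for step in steps:
    messages.append({"role": "user", "content": step})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
print(answer)  # the final step's output should be the CSV-formatted items, ready for review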

On the other hand, if you don't want to do all this prompt engineering with your lecture transcripts, I've created subscription services on this site to help you create these importable .csv files for Brightspace.

Once the file has been created, you can drag and drop it into the LMS, import the items, and then review them. This gives you a chance to correct items that are wrong or that imported incorrectly before deploying them in an assessment.

Plickers

There are a variety of other applications of this approach with systems that present true/false and multiple choice items to students. Plickers is one such system. Plickers is a variation on systems that have students participate using a smartphone and an app, or that require custom hardware. The advantage of Plickers is that students hold up a paper QR code, provided to them in advance, to register their answer to the item you display on screen in the classroom. Plickers has an instructor app that can be used on a smartphone to register each student’s vote and display the results of all responses.

Plickers now also offers a remote synchronous method for use during Zoom or Microsoft Teams class meetings.

Plickers also supports bulk import of questions, where the question appears on the first line and each option on a subsequent line. Chat can generate items in this format for rapid import when building out a formative assessment.
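As with the Brightspace file, this layout is easy to produce yourself if you already have the items in hand. The following is a minimal sketch that assumes a blank line separates items and uses an illustrative filename (check both against what Plickers expects before importing):

# Minimal sketch: question on the first line, each option on a subsequent line.
# The blank line between items and the filename are assumptions.
items = [
    ("Which aspect of AI-generated ideas should instructors be cautious about?",
     ["Ensuring alignment with course objectives.",
      "Replacing all textbook content with AI outputs.",
      "Completely relying on AI without review.",
      "Sharing AI-generated content without any editing."]),
]
with open("plickers_items.txt", "w", encoding="utf-8") as f:
    for question, options in items:
        f.write(question + "\n")
        for option in options:
            f.write(option + "\n")
        f.write("\n")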

Kahoot!

Kahoot! is a similar online system that presents multiple choice and true/false items to students. It plays more like a game and can be used in person as a real-time competitive activity, or asynchronously for students to complete within a set period.

Kahoot! also supports bulk import of items organized in a spreadsheet: the second column contains the question, the next four columns the possible answers, the following column the amount of time the student has to answer, and the last column the number corresponding to the correct choice.
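A minimal sketch of filling that layout with openpyxl, a common Python spreadsheet library, appears below. The first column holding the question number and the absence of a header row are assumptions, so in practice you would paste the rows into the spreadsheet template Kahoot! provides:

from openpyxl import Workbook

# Minimal sketch of the layout described above: question in the second column,
# four possible answers, a time limit in seconds, and the number of the correct choice.
items = [
    ("Which aspect of AI-generated ideas should instructors be cautious about?",
     "Ensuring alignment with course objectives.",
     "Replacing all textbook content with AI outputs.",
     "Completely relying on AI without review.",
     "Sharing AI-generated content without any editing.",
     30, 1),
]
wb = Workbook()
ws = wb.active
for number, row in enumerate(items, start=1):
    ws.append([number, *row])  # first column assumed to hold the question number
wb.save("kahoot_items.xlsx")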

Both Plickers and Kahoot! also permit you to include an image with each item. I use Chat (the subscription version includes access to DALL-E for image generation from a prompt) to create images for these items, but a number of other GenAI systems, such as Canva and Midjourney, will also create images from text prompts.

And, if there is another system you use for these kinds of items that supports file import, you can provide the file format to Chat and write a prompt to have it place the items it generates into that format. This type of automation opens up the possibility of generating substantial assessment content with far less effort than creating the items manually. Academic research has also found that students generally prefer AI-generated items, and that such items tend to perform, on average, as well as handwritten ones.



[i]  Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives.