During our pre-first-day meetings, the faculty were given a session on the use of AI. Some new(er) sites and technologies were discussed, and we were tasked with trying them out. So I did. There was one resource that could make quizzes. I live on quizzes -- they are more frequent than full-scale essays and easier to grade than single-page responses. I usually use 2 kinds of quizzes: vocabulary and reading. So I went to the website, put into the "resource" the entire (public domain) text of a particular book I enjoy reading, and asked the AI to make me a reading quiz (short answer). It came up with a bunch of very interesting questions. I asked it for an idea map, which it constructed using only the text of the book. Rather than seeing this as a boon, I am worried. As the shortcut apps improve, there will be less and less reason to read the text. I work hard at explaining to my students that the available summaries miss stuff, but this trainable AI will miss less and less. I looked at the quiz and was comforted to know 2 things:
1. The questions I ask appear to be spot on, as the AI came up with the same ideas.
2. The AI's quiz also included details more insignificant and less necessary than the ones I ask about.
I also found that the AI system could make very simple inferences but did not have the breadth of knowledge to explore deeper concepts. There was still room for me to introduce external concepts into the classroom, ones the AI could not import, but that advantage might be temporary: I could train the system on my notes, or on external (approved) resources, to bolster my specific point of view about the text. I also noted that the AI could not distinguish between "new vocabulary" in the sense of the reading level of the English words and the invented words that the text itself includes. But on the whole, the system could easily generate passable reading quizzes and book summaries.
Then I asked a different system to make a multiple-choice quiz for a specific unit of a specific vocabulary book that we use. I did not provide the text. The system found the text and the unit and crafted a multiple-choice quiz. It wasn't the style I use, and it didn't have the trick answers I like to lay down as mines for those who do not read or study thoroughly. Could I refine it to cover multiple units? Maybe, I guess. Could I tweak my prompt to tell it the style I want? Probably.
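(For anyone who would rather script a request like this than click around a website, here is a minimal sketch against a generic chat-completion API. The client library, model name, word list, and quiz instructions are all assumptions for illustration, not the actual tool or vocabulary book referred to above.)

```python
# Minimal sketch: asking a chat-completion API for a multiple-choice vocabulary quiz.
# The model name and the sample word list are placeholders, not the real unit.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

words = ["abate", "garrulous", "laconic", "obdurate", "truculent"]  # sample words

prompt = (
    "Write a 10-question multiple-choice quiz on these vocabulary words: "
    + ", ".join(words)
    + ". Give four choices (A-D) per question, exactly one correct, "
    "with near-miss distractors that punish students who only skimmed. "
    "Put the answer key at the end."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Tweaking the style or covering multiple units would just mean editing the prompt string, which is exactly the kind of refinement described above.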
All in all, this could be a real time saver for me. Except that I would then not be refreshing MYSELF by rereading the text or poring over the vocab book -- test-item creation is its own form of study, and I, as the teacher, miss out on that opportunity when I outsource the quiz making to the skynet.
Do I go to my classes and tell my students about these resources? Heck yeah. But wouldn't I then be providing them a way to get around having to study (in the case of the literature quiz, at least), because the internet can provide a more guided set of summaries than a static SparkNotes can? Maybe this should be the new mode of teaching a book: tell students to use a resource to generate a brain map, practice quizzes, or chapter summaries before reading a chapter, and then let them try again after reading it. I'm not sure yet, but we are driven by the results we want to see, and I have to come to terms with what skills and habits will be essential in the future and then shape my approach to address those needs.
AI can replace pre-teaching. AI can replace quiz making (and, in some multiple-choice cases, grading). AI can replace note taking and essay generating. What can't AI do that I can, and that I bring to the classroom? Maybe it's bringing humanity and the love of learning to the classroom. Or maybe it's that I can balance a hockey stick on my nose.
Well, AI? Can you balance a hockey stick on your nose? Yeah...you run away. Dan, FTW.