University professors are back in the classroom this week, and many of them—myself included—are struggling with whether to allow students to use AI-driven chatbot programs like ChatGPT in writing assignments, and, if so, how. Some professors want to embrace AI. Some have brought back handwritten essays to prevent students from using it. Some want to do precisely nothing differently.
As a political science professor, my solution this term has been to use the content of the course I’m teaching—Human Security and Global Politics—as an opportunity to talk to students about the use of AI in learning as a human security problem. While I do not dismiss out of hand AI’s potential to be, in some ways, a useful learning tool, I’ve decided not to experiment or get creative with ChatGPT this year. Instead, I’ll encourage my students not to use it at all at this stage of their careers. Here are six reasons why.
First, like most educators, I have not yet been trained to grade work that has been produced with the help of AI in a way that is fair to both cheaters and honest students, efficient for me, and likely to produce the kind of student learning I view as a public good in democracies. That’s because, like most universities, mine is still not providing guidance to professors on whether or how this should be done. At the University of Massachusetts Amherst, where I teach, the academic integrity policy does prohibit the use of ChatGPT or other large language models in the classroom without the professor’s permission. But it does not specify which uses of AI are appropriate for which purposes.