Teaching Critical AI Literacy Toolkit
This document is meant to help instructors prepare for the new semester by providing advice and guidance on incorporating generative artificial intelligence (AI) into teaching and on addressing student use of AI for coursework.
This page will cover:
- What is artificial intelligence and critical AI literacy?
- Implications for Academic Integrity
- Using AI with students – or avoiding student AI use
- Detecting and handling unauthorized student AI use
- Additional Resources
What is artificial intelligence and critical AI literacy?
Artificial intelligence was defined in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence as: “The field of study focused on creating machines that can perform tasks that typically require human intelligence.” Artificial intelligence has been with us for a while, powering facial recognition and the recommendation lists in our Netflix accounts, but what is newer, and more impactful for education, is generative AI, which OpenAI released to the general public in November 2022 with ChatGPT.
It is important to know that the Large Language Models (LLMs) behind tools like ChatGPT and Google’s Gemini generate content based on probabilities and predictions. The models are trained to determine the next most likely word in a sentence, not whether what is generated is correct, unbiased, or free of misleading claims. Many AI tools, such as research assistants (e.g., Perplexity, Elicit) and Google’s NotebookLM, ground their responses in curated content. This makes them more likely to generate accurate and relevant results, but they can still leave information out or misconstrue meaning. Thus, evaluating AI output is always necessary.
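To make the next-word idea concrete, here is a toy sketch in Python. The probability table and word pairs are invented purely for illustration; a real LLM learns billions of such patterns from its training data. The point is that the “model” only knows what is statistically likely to come next, not whether the resulting sentence is true:

```python
import random

# A made-up table of next-word probabilities. A real LLM learns
# billions of such patterns from training data; every number and
# word pair below is invented purely for illustration.
next_word_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.5, "Texas": 0.3, "Mars": 0.2},
}

def pick_next_word(prev_two):
    """Sample the next word in proportion to its probability."""
    probs = next_word_probs[prev_two]
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The "model" continues "... capital of" with whatever is statistically
# likely -- sometimes "France", sometimes "Mars". Fluent, not fact-checked.
for _ in range(5):
    print(pick_next_word(("capital", "of")))
```

This is why a model can produce fluent, confident prose that is nonetheless wrong: the fluency comes from the statistics, not from any check against reality.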
So what is critical AI literacy? Generative artificial intelligence comes with a number of limitations and problems that seem hard, if not impossible, to resolve. These include the inherent bias in AI-generated content, the lack of transparency about where information comes from or even how the tools work, built-in surveillance, disparities in access to the most robust versions of the tools, and the environmental costs of the computing power AI requires. Critical AI literacy means ensuring that we, and our students, learn about these concerns, know what AI tools can and can’t do, and understand the consequences of relying on them.
Implications for Academic Integrity
The code of conduct states that academic misconduct encompasses any of the following:
- Cheating
- Plagiarism
- Fabrication
- Multiple submissions of work
- Unauthorized recording and/or use
- Facilitation of any act of academic misconduct
- Unauthorized use of artificial intelligence
- Other acts of academic dishonesty
Because learning goals vary widely across schools, disciplines, majors, and lower- versus upper-division classes, a single campus-wide policy on student AI use is almost impossible. A writing class designed to teach students how to read, synthesize, and effectively communicate information may not want students using AI tools while they develop those skills. In an advanced computer science class, by contrast, an instructor may want students using AI tools to stay on top of current trends in the field.
This means each instructor must clearly state their policy on AI use in their course. Your AI policy should be spelled out in your syllabus, and it is important to also talk with students about it. If you are limiting the use of AI, explain why. The reason can be as simple as wanting students to learn foundational skills rather than offload them to AI; having that foundational knowledge will make them more effective users of AI down the road. It is also important to make clear that, no matter what, students are responsible for the work they submit.
Sample statements at three levels, ranging from absolutely no AI use to broad AI use, are available to help faculty communicate their AI policy. Those sample statements are available here (scroll to the bottom of the page for the AI statements).
Though students can use AI to help them complete work, take tests, and write papers, this is also an opportunity for educators to pivot how we teach, focus on what we actually want students to know coming out of our classes, and assess that knowledge in different ways than we have in the past.
On a related note, institutional policy prohibits sharing private or restricted data with any third party without permission, and this includes any AI tools that you upload data to or that integrate with programs on your computer. The goal is to ensure that AI tools are not scraping institutional information.
Using AI with students – or avoiding student AI use
Employers are increasingly seeking workers with AI skills, so it is important to teach students about the AI tools appropriate to your field and to have them use AI for certain tasks with an understanding of its limitations (again, focusing on critical AI literacy). Assignments can incorporate student use of AI by having students interact with tools through prompting and iterative use to get the desired results, evaluate the generated content, and reflect on their use of AI. For example, in a research project, students can use AI research assistants to help find and summarize sources and then compare how the AI tool summarized an article with how they would summarize its findings themselves.
When students use AI tools as part of any activity or assignment, always have them identify the tool they used and how they used it. Doing so gives students a chance to reflect on AI’s capabilities and lets you see their level of AI use. After all, faculty are still learning about the capabilities of AI too! You can even have students copy and paste the AI-generated content into the assignment and then reflect on it or compare it to content they generated themselves.
If you feel that your course content and assignments really do need to steer clear of AI, there are ways to deter students from using it to complete work. Essentially, you want to assess student knowledge in ways that make using AI less appealing:
- Scaffold assignments so that students are turning in smaller assignments at shorter intervals and can perhaps complete some of the work in class. This way, you will see the students’ progress along the way, and it will be noticeable if they turn in a final project completed entirely by AI.
- Students can demonstrate their knowledge in different ways, such as an in-person presentation or, for an online class, narrated slides recorded in their own voices.
- Students can complete short written responses during class instead of a multiple-choice test outside of class. For online classes, more frequent but shorter exams might help by lowering the stakes, and surprise pop quizzes with a short completion window might deter AI use.
- Students can do collaborative activities. Options include social annotation (all students read and annotate the same text online) or shared documents (common in many workplaces, where a group writes and tracks changes in the same online document).
Don’t forget that you can ask AI to help you modify your tests and assignments, which may lessen your workload as you adapt your teaching to this new reality.
Detecting and handling unauthorized student AI use
Since AI will likely become highly integrated into all that we do, it is important to move away from a suspicious or accusatory attitude toward students. Focus on the fact that you want students to meet the course learning objectives and that it is in their best interest to do so. We all strive to encourage intrinsic motivation in students, and that is even more important in this era of AI.
If you do get an assignment or paper from a student and you suspect that they used AI to complete the work against your course policy, here are some steps you can take:
- AI detectors have repeatedly been shown to be unreliable, but some faculty do use them as one more tool for monitoring student AI use. If you use an AI detection tool, be clear with students that you are doing so and that you understand its limitations. Treat it as just one tool in your toolbox when assessing student work, and do not rely solely on the AI detection report to determine whether AI was used.
- Go back and look at the student’s previous work and compare it to the suspicious assignment. This is particularly helpful with writing assignments, where you can compare writing styles.
- Try putting your assignment instructions into one or two AI tools and see what kind of results you get. AI generates different results each time, of course, but this gives you a sense of what AI can produce in response to your assignment instructions.
- Invite a conversation with the student and approach it with curiosity, not accusation. Ask the student questions about their paper topic, or about the assignment content. Ask them about sources used or any points they made. This can give you a sense of what the student actually knows about the content.
- There will be times when a student will not admit to using AI, even when you suspect they used it against your course policy. Since AI use often cannot be proven absolutely, try advising the student on using AI tools in a way that keeps them as the author, creator, and learner. Emphasize the importance of knowing the information and course content, since they will be expected to have these skills and capabilities when they enter the workforce as a college graduate in their major (an education they are paying for!). Again, refrain from accusation and use it as a teaching moment.
Because AI use cannot always be proven, there will be times when we have to let it go. We may think it is likely a student used AI when they were asked not to, but all we can do is go through some of the steps listed above and have a conversation with the student. Talking about critical AI literacy in your class and being clear on your expectations around AI use for each assignment can also help deter students from using AI as a shortcut.
Resources and Readings Used for this Guide:
Georgieva, M., Webb, J., Stuart, J., Bell, J., Crawford, S., & Ritter-Guth, B. (2025, June). AI ethical guidelines. EDUCAUSE.
Goodlad, L. M. E., & Stoerger, S. (2024). Teaching critical AI literacy: Advice for the new semester. Rutgers University Office of Teaching Evaluation and Assessment Research.
Hsu, H. (2025, June). The end of the essay, or what happens after AI destroys college writing. The New Yorker.
Kratz, S. A. (2025, February). Difficult conversations: Coming to terms with AI in the writing classroom. The Important Work.
Langlois, L., Duberry, J., Wahlisch, M., Perry, M., Toupin, S., Gaumond, E., & Ramos, M. (2025). Critical conversations about AI: A glossary for big questions and bold ideas.
Mintz, S. (2025, April 2). Writing in the age of AI suspicion. Inside Higher Ed.
Mollick, E. (2025, July). Against “brain damage.” One Useful Thing.
Oregon State University. (2024). Bloom’s taxonomy revisited.
Roberts, J. (2025, April 4). When students use AI in ways they shouldn’t. Edutopia.
Turnitin. (2023). Discussion starters for tough conversations about AI.
Ward, S. (2025). A quick guide on teaching critical AI literacy: Advice for the new semester, Fall 2025.
Watkins, M. (2025, May 5). Your students need an AI-aware professor. The Chronicle of Higher Education.*
*The Chronicle of Higher Education is available to faculty through the Databases A-Z list on the library website.
This guide was prepared by Stephanie Ward, in consultation with the CETL Teaching Coaches