Academic Integrity

Syllabus Statements for Artificial Intelligence

This resource highlights considerations and provides examples of what to include in your course syllabus regarding artificial intelligence tools and academic integrity.

Ethics and Education

Griffith, T. L. (2023, February 14). Why using AI tools like ChatGPT in my MBA innovation course is expected and not cheating. The Conversation. https://theconversation.com/why-using-ai-tools-like-chatgpt-in-my-mba-innovation-course-is-expected-and-not-cheating-198957   

Abstract: Hear from an educator in SFU's MBA program who is revising her 2023 course syllabi due to the new ease of access to AI tools. The article discusses how there is no "silver bullet" for addressing ChatGPT and the importance of looking at various social and technical aspects of work, including the target, time, talent, technology, and technique (the 5T framework). Also highlighted is the notion that we, both students and professionals, need to be careful consumers, as generative AI is often wrong.

Bowers-Abbott, M. (2023). What are we doing about AI essays? Faculty Focus. https://www.facultyfocus.com/articles/teaching-with-technology-articles/what-are-we-doing-about-ai-essays/

Abstract: With the developments in artificial intelligence (AI) software, AI essays have come to the forefront. The author explores the idea that AI authorship has existed for some time, such as generating content for online businesses, but is now a possible replacement for contract cheating services. That said, the author identifies some of the limitations of AI writers: they often generate similar and sometimes identical phrases, struggle to navigate course-specific contexts, and are mostly limited to text-based sources.

Foltynek, T., Bjelobaba, S., Glendinning, I., et al. (2023). ENAI recommendations on the ethical use of artificial intelligence in education. International Journal for Educational Integrity, 19, 12. https://doi.org/10.1007/s40979-023-00133-4

The European Network for Academic Integrity presents recommendations for addressing artificial intelligence in education (AIED) in an ethical manner, while recognizing that this is an ongoing process given how quickly artificial intelligence is evolving. The article highlights the importance of determining when AIED is unethical, recognizing that artificial intelligence can threaten academic integrity but also presents opportunities. The authors state that proper acknowledgement of AI use is required but can be given in different ways depending on the context, and they discuss the importance of educating both students and teachers.

Fricke, V. (2022). The End of Creativity?! – AI-Generated Content under the Canadian Copyright Act. McGill Business Law Platform. https://www.mcgill.ca/business-law/article/end-creativity-ai-generated-content-under-canadian-copyright-act

Much of the conversation around generative artificial intelligence (AI) has centred on written work. This article works through some considerations around AI-generated content and who owns it, with examples involving AI-generated art. Framed within the context of the Canadian Copyright Act, the article tackles the possibility that ownership might reside with the AI's developer, the user of the AI, the AI itself, or in the public domain. Because the Canadian Copyright Act protects works that are original, the discussion turns to how "original" is defined. Currently, AI-generated content does not fall under the Canadian Copyright Act, but this might evolve with future changes in the law.

Perrotta, C., & Selwyn, N. (2020). Deep Learning Goes to School: Toward a Relational Understanding of AI in Education. Learning, Media and Technology, 45(3), 251–269. https://doi.org/10.1080/17439884.2020.1686017 

Abstract: In Applied AI, or ‘machine learning’, methods such as neural networks are used to train computers to perform tasks without human intervention. In this article, we question the applicability of these methods to education. In particular, we consider a case of recent attempts from data scientists to add AI elements to a handful of online learning environments… we provide a detailed examination of the scholarly work carried out by several data scientists around the use of ‘deep learning’ to predict aspects of educational performance. This approach draws attention to relations between various (problematic) units of analysis: flawed data, partially incomprehensible computational methods, narrow forms of ‘educational’ knowledge baked into the online environments, and a reductionist discourse of data science with evident economic ramifications. These relations can be framed ethnographically as a ‘controversy’ that casts doubts on AI as an objective scientific endeavour, whilst illuminating the confusions, the disagreements and the economic interests that surround its implementations. 

Lesson Plans

Eaton, S. E. (2023, April 7). How to talk to your students about ChatGPT: A lesson plan for high school and college students. Learning, Teaching, and Leadership: A blog for educators, researchers, and other thinkers, by Sarah Elaine Eaton, Ph.D. https://drsaraheaton.wordpress.com. CC BY-NC 4.0.

This 45- to 60-minute lesson plan provides an opportunity for instructors and students to explore how technology like ChatGPT works, how to fact-check AI-generated content, and how this technology might impact academic integrity. Prior to the lesson, students read an online article; the class then comes together for a guided discussion. The lesson plan provides discussion questions and possible follow-up activities to extend the learning.

Eaton, S. E., & Kumar, R. (Eds.). (2023). Academic integrity lessons: Practical ideas for teaching, learning, and assessment. University of Calgary, Calgary, Canada. https://dx.doi.org/10.11575/PRISM/42226

Edited by Sarah Elaine Eaton and Rahul Kumar, this resource contains lesson plans designed for university-level students, with contributions from authors in Finland, the UK, Qatar, and Canada. Each lesson plan outlines the learning objectives, the lesson preparation needed, the learning activities, and possible follow-up activities. The lesson plans include suggestions for approaching academic integrity from a positive rather than punitive orientation, as well as ideas for encouraging critical thinking about ChatGPT and artificial intelligence tools.

Academic Integrity

Eaton, S. E. (2019). How to lead a discovery interview about contract cheating. Werklund School of Education & Taylor Institute for Teaching and Learning. CC BY-NC-SA 4.0.

Abstract: Contract cheating and generative artificial intelligence use in student assessments have many similarities when it comes to detection and investigation approaches. This guide from Dr. Sarah Elaine Eaton (2019) provides helpful suggestions for questions to ask students when you are meeting with them about academic integrity concerns.  

While organizations are still developing best practices and guidelines for citing and referencing the use of generative AI, the KPU Library has information and suggestions on best practices when using MLA or APA. Given that official style guidelines are still being established, we suggest focusing on whether it is clear where information comes from rather than on details (e.g., whether to use a comma or a period in a reference). We also strongly recommend asking or requiring students to include an appendix in assignment submissions that shows what content was taken from the tool and what prompts were used.

International Center for Academic Integrity (2023, June 14). ICAI-Canada Statement on Artificial Intelligence and Academic Integrity. https://academicintegrity.org/images/ICAI_Canada_Statement_on_Artificial_Intelligence_and_Academic_Integrity.pdf

Recognizing the rapid advances in the development of artificial intelligence (AI), Canadian educational institutions face challenges in using it responsibly. ICAI-Canada aspires to represent higher education institutions, guiding educators and institutional leaders on the ethical use of artificial intelligence in teaching, learning, and research. It is crucial to align AI use with learning goals, be transparent about its use, and educate users on ethical implications such as privacy and bias. ICAI-Canada remains up to date with AI advancements, providing educational resources for fostering integrity in an increasingly technology-driven education landscape. In this statement, ICAI-Canada offers its current recommendations for responsible AI integration.

Concerns and Controversies

Susnjak, T. (2022). ChatGPT: The end of online exam integrity? arXiv. https://arxiv.org/abs/2212.09292

Teo Susnjak (2022) discusses their research findings regarding ChatGPT's capabilities and explores the ways ChatGPT poses challenges for maintaining academic integrity in online exams. The author presents some possible partial solutions and prevention strategies, in addition to problematizing approaches centred on proctoring techniques.

Bertram Gallant, T. (2024, Winter). How do we maintain academic integrity in the ChatGPT era? Liberal Education, AAC&U. https://www.aacu.org/liberaleducation/articles/how-do-we-maintain-academic-integrity-in-the-chatgpt-era

Dr. Bertram Gallant addresses the challenges and opportunities of maintaining academic integrity in the era of ChatGPT, whose release in November 2022 sparked concerns about increased cheating in higher education. She highlights the importance of upholding the values of integrity in teaching, learning, and assessment despite the emergence of new technologies.

The article explores various perspectives on how to address academic integrity concerns in the GenAI era. It acknowledges the tension between ensuring integrity and promoting effective teaching methods, emphasizing the need for a balanced approach. Suggestions for educators to mitigate cheating include increasing intrinsic motivation, enhancing self-efficacy, making coursework meaningful, and reducing the temptations and opportunities to cheat. Additionally, the author encourages educators to offer learning experiences that GenAI cannot replicate, such as human-to-human interaction and skill development.

Dumin, L. (2023, June 4). AI detectors: Why I won’t use them. Medium. https://medium.com/@ldumin157/ai-detectors-why-i-wont-use-them-6d9bd7358d2b

A question that is top of mind for many instructors is how to address students' use of generative AI in their writing. Using AI detectors might seem like the solution at first, but these detectors can be unreliable and lack transparency. Laura Dumin, a professor of English and technical writing, shares some of the issues she sees with AI-detection programs and how she addresses cheating with her students instead.

Research on AI and Academic Integrity

An updated Wiley survey shows mixed feelings about AI in North American post-secondary institutions. Both students and instructors feel there has been a rise in cheating, with a significant number of instructors suspecting recent incidents. Many anticipate more cheating and worry about AI's impact on critical thinking and writing. However, they also acknowledge AI's potential to aid academic support, including the development of critical thinking and writing skills.


Have a question related to Academic Integrity?

For information about academic integrity as it relates to generative AI, please email academic.integrity@kpu.ca