The views expressed do not represent those of ATUS or its subsidiary departments; this post is intended as an op-ed by the author. -AJ
As I recently wrote in capping off two years of posting on the AI-nomics of EdTech, I have concluded my regular updates on the state and evolution of AI. At the beginning, I was writing an update to that post once if not twice a week to track the changes in AI along with my own experiments and research in the area, looking at the implications of how these platforms could be used in higher education and at WWU. That work led to presenting in several conference sessions, and even to taking the (digital) main stage at the 2023 Adobe MAX conference as an educational speaker presenting on Adobe’s Firefly AI.
However, with the rapid pace (or breakneck speed) of AI advancements, which has far outstripped the slow-moving state and federal legislation designed to regulate these changes, it is nearly impossible to write about everything I’m tracking. Since 2022, AI has undergone significant developments, with generative models like GPT-4o, Gemma, Ollama, Phi, Claude, and beyond becoming increasingly sophisticated and embedded in various educational contexts. Here in my home state of Washington, the WA State CIO has put out interim guidelines, but we in the state still await further guidance and updates to these guidelines out of subcommittee:
“[t]he AI Community of Practice will be discussing use cases for generative AI through the subcommittee process. Potential use cases of “safe AI” by the state may include cybersecurity scans, environmental assessments (e.g. sea grass videos by DNR), and chatbots to more effectively answer questions about state agency services.”
WA Office of the CIO – Aug 2022
In such a short timeframe, many of the developments in AI have transformed (and disrupted) how we approach learning, assessment, and content creation. While questions of academic integrity amid AI’s rise have taken the headlines, it can also be said that AI has the potential to innovate teaching and learning. In both cases, the debates and discussions are ongoing until we have firm guidance or legislation, and, arguably, until we as a society discover what place AI should have in our creative, academic, and work lives.
The easiest example to bring up, and one that brought a firestorm with it, is generative AI in writing: how much AI (if any at all) is acceptable in the creation of new writing. Much of this has dominated the headlines, so I will pivot away from that debate and instead shed some light on AI’s encroachment into photography and the arts. AI has raised new questions about what is deemed human creativity, and about the creative or inspiration process as a whole. The copyright and ethics of the works used to train these models leave patent and trademark offices further backlogged, as machines that create things (for the humans who are ‘designing’ with them) have led to questions in the USA about whether such creations can be granted protection at all, since human innovation can be patented, but a robot’s cannot. This raises further ethical debates, as well as the common defense from those rapidly creating such works that “AI is just a tool.” But is it an equitable tool? Is it fair and equitable that those with large sums of money can create and monetize works using models built on information taken without permission from, or compensation to, its creators? What are the legal and ethical terms of use of AI in copyright and patent filings?
Another area of concern is, again, the data these creative LLMs are built on. Artists and creative software makers have started consortiums and movements to help prevent unauthorized usage of their work in the training of AIs. National Geographic photographer and ocean conservationist Cristina Mittermeier most recently announced OverlayAI, which helps artists exclude their images from AI training models. It is a fork of the work of the Coalition for Content Provenance and Authenticity (C2PA), which includes Adobe, the BBC, and others, and which looks to provide an open technical standard for the public to trace the origin and authenticity of media.
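The core idea behind such provenance standards can be sketched as a signed manifest that travels with a media file: anyone with the key can later confirm the file is unaltered and attributed to its creator. This toy Python example is only an illustration, not the actual C2PA format (C2PA defines a richer structure of signed manifests and assertions); the key and creator name are made up.

```python
# Toy illustration of media provenance (NOT the actual C2PA format):
# a creator signs a manifest over the image bytes, and anyone holding
# the key can verify the file has not been altered since signing.
import hashlib
import hmac
import json

SECRET_KEY = b"creator-signing-key"  # stand-in for a real signing key


def make_manifest(image_bytes: bytes, creator: str) -> dict:
    """Hash the image, record the creator, and sign the record."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = json.dumps({"creator": creator, "sha256": digest}, sort_keys=True)
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}


def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Check both the signature and that the image still matches its hash."""
    expected = hmac.new(SECRET_KEY, manifest["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False
    return json.loads(manifest["payload"])["sha256"] == hashlib.sha256(image_bytes).hexdigest()


photo = b"...raw image bytes..."
m = make_manifest(photo, "Cristina Mittermeier")
assert verify(photo, m)              # untouched file verifies
assert not verify(photo + b"x", m)   # any edit breaks verification
```

Real provenance systems use public-key signatures rather than a shared secret, so that anyone can verify without being able to forge; the shape of the check, however, is the same.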
Higher Ed + Future Economy of Work
AI detection tools, while intended to promote equitable assessment, are not perfect and often fall short in accurately identifying AI-generated content. It is my belief, shared by several other Instructional Designers, that this has highlighted the urgent need for AI literacy among both educators and students. I believe that AI, when properly integrated and scaffolded within educational frameworks, can be a powerful tool for enhancing learning, fostering creativity, and developing critical thinking skills, as well as skills for the workforce. Employers today are looking for AI skills in graduates as business operations leverage AI for productivity and efficiency. However, as the technology evolves, so must our approaches to education, requiring a careful balance between embracing innovation and maintaining rigorous academic standards. As our graduates go into their fields, and as more businesses and entities look toward AI within the workplace, higher ed needs to strike a balance between AI as a literacy and AI as the bogeyman of assessment.
AI Detection Debate
Since 2022, Western Washington University’s Academic Technology and User Services (ATUS), in collaboration with the Center for Instructional Innovation and Assessment (CIIA) and the Learning Systems Team (LMS), has been actively engaged in evaluating the role of AI in academic settings. Initially, AI detection was available through Turnitin’s SimCheck product. However, this function was discontinued by the vendor on 12/31/23, leaving Western without an enterprise-level AI detection tool. This has led to evaluating other options, as well as reviewing what other institutions have done with regard to AI detection, both within the state of Washington and nationally. It is often cited that the University of Washington (UW) opted out and does not support or deploy an AI detection tool for its faculty.
Detectors’ false positive rates have the potential to inaccurately call into question the submissions of thousands of students. Additionally, a study from Stanford found that AI detection tools more often identify the writing of non-native English speakers as AI generated (Mathewson, 2023). Experiences at Boston University have shown that international students seem to be particularly at risk of being flagged by the SimCheck AI Detection tool (Rubenstein, 2023).
Due to these concerns, UW requested that SimCheck/Turnitin turn off the AI Writing Detection tool in UW’s Simcheck integration.
UW IT Connect- September 26, 2023
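To see why even a small false positive rate matters at institutional scale, here is a back-of-the-envelope sketch; the submission count and the 1% rate are hypothetical figures for illustration, not numbers published by any vendor or campus.

```python
# Back-of-the-envelope arithmetic with hypothetical numbers: even a
# small false positive rate scales to many wrongly flagged students.

def expected_false_flags(submissions: int, false_positive_rate: float) -> int:
    """Expected number of human-written papers incorrectly flagged as AI."""
    return round(submissions * false_positive_rate)


# Assume a campus grades 50,000 essays per year and a detector with a
# 1% false positive rate (both figures are assumptions for illustration).
print(expected_false_flags(50_000, 0.01))  # prints 500
```

Five hundred wrongly accused students per year, under those assumed numbers, is why a detector's score can only ever be a starting point for a conversation, never proof.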
AI Detection + WWU
ATUS, Learning Systems, and CIIA have been exploring new options, including advanced products from Turnitin and similar tools, to address the need expressed by some faculty. The LMS team has been working closely with faculty to assess the university’s needs for AI detection and the broader implications of AI in academic work. Instructors are encouraged to share their insights and experiences with these tools to help guide the university’s approach. Early in 2024, CHSS sent a memorandum strongly requesting that some sort of AI detection be acquired and put in place within Canvas. ATUS and CIIA staff attended several vendor demos, which were presented to ATC in early spring 2024.
However, between those demos and today, given the fast pace of AI, these products (and their prices) may have changed. For instance, Turnitin now includes a valuable tool within its product that provides a “Revision History,” giving faculty insight into the writing process of a document in addition to an indication of potential AI use. At this time, Learning Systems and CIIA have only free tools and tactics for faculty going into the Fall term that abide by the WA State CIO guidelines put in place in August 2023.
Additionally, ATUS and CIIA have developed resources and guidance to help instructors and students navigate the academic use of AI. These resources include clear guidelines on when and how AI can be used in coursework, pathways for academic use of AI, and strategies for evaluating student work when AI use is suspected.
- FAQs: Generative AI Guidelines
- Academic Uses of AI for Students
- AI and Chatbots Teaching Considerations (TLCo-op)
- Evaluating Student Work when AI is Suspected
- Pathways for Academic Use of AI at WWU: Flowchart
- See also: AI Detection at Western and Plagiarism Detection at Western
The ongoing focus and conversations seem to center on integrating AI responsibly while ensuring academic integrity and supporting teaching and learning objectives. For faculty whose courses rely on assessing a student’s holistic learning and writing abilities, some sort of AI detection tool, to help determine whether they are evaluating a student’s learning or a machine’s, and to what degree of each, would be valuable.
🔮The Crystal Ball of AI & Higher Education
It is widely researched and accepted that no AI detection tool can identify with 100% certainty the degree to which AI was used, but using such a tool does provide faculty with a mechanism to equitably evaluate all papers in courses that prohibit students from using AI. However, on the horizon, consumer technology will make a significant leap in Q4 of 2024, as AI becomes more integrated into operating systems. With the likelihood that most majorly-used operating systems for PCs and mobile devices will have some kind of natively-integrated AI, the lines will inevitably continue to blur for students as to which technologies are and are not appropriate to use within college courses. We are also beginning to see the first generation of on-device AI take the market across all the ecosystems: Android, iOS, macOS, and Windows. The latter two are promising hardware this fall that will run AI generation entirely on device, removing the need for AI cloud computing, although we don’t fully know what outputs this kind of hardware will produce. Regardless, as students come to campus, some will have these devices. Or, dare I say, faculty might too.
It is my speculation that, with this oncoming deep integration (unless government policies intercede based on litigation over the sourcing of the data in these models), higher education will have to better address the AI literacy divide and build a bridge that brings students across as skilled with AI and aware of its appropriate use cases. It can also be forecasted that, as AI becomes more ubiquitous, its natural-language conversational abilities can open up creative opportunities for faculty in their assignments. For instance, AI with ‘memory’ or retrieval augmented generation (RAG) abilities could provide ways for students to interact with course materials in real time after lectures are over. On the faculty side, AI can greatly assist with course development and with meeting and exceeding current accessibility standards in course materials.
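The RAG idea above can be sketched minimally: retrieve the course note most relevant to a student's question, then ground the model's prompt in that note so answers come from course material rather than the model's general training. This toy uses a bag-of-words retriever and made-up lecture snippets; a real system would use embedding models and a vector database.

```python
# Minimal sketch of retrieval augmented generation (RAG) over course
# notes. The notes below are hypothetical; a production system would
# replace the word-overlap retriever with learned embeddings.
from collections import Counter
import math

notes = [
    "Photosynthesis converts light energy into chemical energy.",
    "Mitosis is the process by which a cell divides into two.",
    "Cellular respiration releases energy stored in glucose.",
]


def vectorize(text: str) -> Counter:
    """Crude bag-of-words vector: lowercase word counts."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(question: str, docs: list[str]) -> str:
    """Return the course note most similar to the question."""
    return max(docs, key=lambda d: cosine(vectorize(question), vectorize(d)))


def build_prompt(question: str, docs: list[str]) -> str:
    """Ground the model's answer in the retrieved course material."""
    context = retrieve(question, docs)
    return f"Answer using only this course material:\n{context}\n\nQuestion: {question}"


prompt = build_prompt("How does a cell divide?", notes)
# The prompt now carries the mitosis note, so a model answering it is
# anchored to the instructor's material rather than the open internet.
```

Adding ‘memory’ in this picture just means appending prior exchanges to the retrieved context, so the student can ask follow-up questions about the same lecture.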
ATUS + Learning Systems | AI Resources
ATUS and Learning Systems continue exploring how generative artificial intelligence tools, such as ChatGPT, Bing Chat (Copilot), and Adobe Firefly, are being used in higher education, coordinating with relevant offices and entities as we continue to learn more about the implications for teaching, learning, technology, policies, and procedures.
- Academic Uses of AI for Students
- AI and Chatbots Teaching Considerations (TLCo-op)
- Evaluating Student Work when AI is Suspected
- Pathways for Academic Use of AI at WWU: Flowchart
- See also: AI Detection at Western and Plagiarism Detection at Western
- FAQs: Generative AI Guidelines
For more detailed guidance, instructors and students are encouraged to refer to specific resources provided by ATUS and the CIIA, such as the “Pathways for Academic Uses of AI at WWU” and related academic honesty procedures. The AI FAQ is frequently updated to meet the demand for answers from faculty, students, and (coming soon) staff at WWU, both in support of academic integrity and in compliance with Washington State CIO guidelines.