TurnItIn AI Checker (Do Robots Check for Electric Sheep?)

As of 3 April, many in the EdTech sphere read about the kerfuffle TurnItIn caused with the forced opt-in rollout of the AI-detection features in its plagiarism checker. In the 48 hours that followed, the EU and UK forced them to walk it back (of a sort) in those regions, leaving US higher ed institutions asking what options they had, as well as what the feature might cost in the future.

Meanwhile, I was off to NWMET in Helena, Montana, to present on that very topic: AI and its implications for teaching and learning. Sitting through a layover at SeaTac, I was trying to stay up to date on this big announcement, all while updating my presentation. Many at the conference were interested in what this meant for them when they returned to their offices on Monday, as some had not even heard the news yet. Unfortunately, I didn’t have a lot to go on since, like my peers, I was in various states of travel when the news broke.

Now that I’m back, I spent the morning running some tests through it to see what it would detect, to see if I could discern how it detects AI content and to what degree, and (as always) to try to discover whether I could break it or circumvent detection.

In short, in this very lab-setting test (by no means was this a test in the wild), doing a straight-up GPT dump into Canvas and running it through TurnItIn yielded the results I’ve seen with other checkers: it flagged the submission as 100% generative. Meaning out of the box, it gives faculty a good eyes-over-the-shoulder approach to submissions that they can tell their classes about. If students know they can’t just GPT dump a full response, then it may prevent that. However, using AI is more nuanced than just copying and pasting. I checked to see whether it would detect a short response that was co-authored between myself and Quillbot. Unfortunately, the response I submitted was either too short or the checker could not make an adequate determination, so no result was given.

So on the third pass, I took the Ada Lovelace generative output and did some human reworking of what was created. My strategy was to keep what was generated in the first two paragraphs but use my own human power to rework and reword parts of them, and to leave the third paragraph as straight generative output, using only Quillbot to rephrase it with no human intervention. The result was a clean submission that was not flagged. That response, with both the human and bot-generated edits, can be seen in the speed-through below.

As I stated at NWMET and now in the video above, in higher ed we really need to encourage faculty to have open conversations with their classes about the use of AI, where it is appropriate, and how much is appropriate. AI detection tools like this are a good jumping-off point for everyone, as they bring a level of transparency that learning rests on the effort of the learner, not a computer. But AI in education can be a useful tool to help foster learning, and I would even argue it can perhaps help writers produce better writing.

Below is the “guide” that TurnItIn is making available within the check of a submission.