
Thoughts on integrating Generative AI in university education (specifically at bachelor’s level): Good and bad use cases

Last updated on August 14, 2024

Sharing some thoughts, based on a discussion with colleagues, on how generative AI (GenAI) should and should not be integrated into teaching at bachelor’s degree level. (Also, since you’re here, you might be interested in this post about human educators’ benefits relative to AI.)

First, in my opinion, teachers should have the final authority to decide if and how AI will be used in their courses. The university can (and should) give guidelines, but it should not force teachers to integrate GenAI into their courses, and especially not in specific prescribed ways.

On the other hand, it’s part of teachers’ professional development to learn about these methods and to think about how they should (and should not) be used in education.

One issue here is that, if I’ve understood correctly, some teachers in Finland accept the use of DeepL. This means that they tell students it’s OK to write in Finnish and then use that tool to translate the text into AI-generated English. This applies to courses taught in English.

I don’t approve of that and do not allow AI-generated translations in my courses. Students need to learn to express themselves in English, and if they take an English-language course, they need to write their own English.

I also don’t buy the idea that “I did the thinking, AI just presents my ideas”. I don’t believe you can separate thinking from writing to an adequate extent. Writing is thinking.

Furthermore, I disagree with Ethan Mollick’s view that AI-generated text cannot be identified. I think he focuses on the wrong premise, namely that “accuracy won’t be 100%”. That premise is true (it’s true of all prediction), but I’ve seen in countless practical cases that students pretty much submit the standard copy-paste ChatGPT text, which is really easy for a trained eye to detect. (Paavo Ritala calls this “botshit”, which I think is a good description.)

When I detect such text, I approach the student privately and ask if they used AI. Many of them admit to it! I then give them a chance to redo the work. Some don’t admit it; if they give a suspicious explanation of how their text was created, or if I still think the text is far too close to typical AI-generated text, I propose a meeting where we discuss the subject matter in real time.

A couple of times, the story has changed after this and all of a sudden they admit to using AI, though they never claim they violated the course rules on purpose (even though I clearly state every time that AI-generated text is not allowed, at all…). At this point, I let them save face, give them a proverbial lecture on why they shouldn’t let AI do their thinking, and ask them to redo the work.

Overall, when it comes to writing, I’m actually very anti-AI at the moment. I’m very pro-AI when it comes to data analysis and learning. In AI-assisted data analysis, though, one needs to know the method first oneself, because AI makes mistakes, and if you don’t understand the method, you won’t spot them. I’ve run machine learning, statistical testing, and qualitative analysis using AI and have seen mistakes in each, yet have also seen very good results in these use cases. Perhaps the worst is qualitative analysis, which requires thinking: AI-generated themes from qualitative (text) data can be a starting point, but beyond very simplistic descriptive themes, a trained qualitative analyst with domain expertise on the topic at hand should fairly easily beat tools like ChatGPT.
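To make the “know the method first” point concrete, here is a minimal sketch (an illustration only; the data is simulated, not from any actual course or analysis) of the kind of check a trained analyst would run on an AI-suggested statistical test. An assistant might propose a plain Student’s t-test; someone who knows the method would first check the equal-variance assumption and switch to Welch’s t-test if it fails.

```python
# Hypothetical illustration: verifying an AI-suggested statistical test.
# Requires numpy and scipy; the data below is simulated, not real course data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)    # small, low-variance group
group_b = rng.normal(loc=5.8, scale=3.0, size=120)   # larger, high-variance group

# An assistant might propose a plain Student's t-test (assumes equal variances):
t_naive, p_naive = stats.ttest_ind(group_a, group_b)

# Knowing the method, you first check that assumption (Levene's test)...
_, p_levene = stats.levene(group_a, group_b)

# ...and switch to Welch's t-test when the variances clearly differ:
t_welch, p_welch = stats.ttest_ind(group_a, group_b, equal_var=False)

print(f"Levene p={p_levene:.3f}, Student p={p_naive:.3f}, Welch p={p_welch:.3f}")
```

The point is not this particular test but the habit: if you can’t name the assumption being checked here, you also can’t tell whether the AI’s suggested analysis is sound.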

AI is very dangerous for bachelor’s-level students because these students are still developing basic skills. So, unless students use AI in a way that supports their skill development, they are using it wrongly, in my opinion. Advanced users who already possess the basic skills can use it more extensively, because those skills help them identify and fix issues.

Currently, it seems to me that any use of GenAI should come only after we can be sure that students already possess the basic skills they want to use GenAI for. For example, if a student already knows how to write good English, the use of AI for writing assistance is more acceptable (though not by copying the output directly, only to get past the blank-page problem). If students already know the basics of statistical testing, they can use GenAI for statistical testing. And so on. Otherwise, students would just use AI to compensate for their lack of skills and would never learn the actual skills (the worst possible outcome).

Another positive use case is learning skills like writing or data analysis with the help of GenAI, so that GenAI acts as a tutor: explaining concepts to you, giving you tasks, checking them, giving feedback, and so on. But that requires a specific approach of questioning, reflection, and doing the work yourself (there MIGHT be good tutors implemented as custom tools, but I can’t name one). This process is more laborious than simply copying GenAI’s outputs, which is why most students wouldn’t do it. Students also don’t necessarily know how to probe or question GenAI in a way that supports their learning. Usually, students just want to get the job done (e.g., their assignment completed) and move on. Sadly, few students want to genuinely learn; they’d rather see courses as “necessary evils to complete” (at least that’s my impression; it surely doesn’t apply to all students, but it does apply to many).
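For illustration only (this is not Cipherbot or any tool I use in my courses; the model name and the prompt are my assumptions), a tutoring-style setup differs from “write my answer for me” mainly in the instructions given to the model: ask questions, give small tasks, and check the student’s own attempts rather than produce finished text. A minimal sketch with the OpenAI Python client could look like this:

```python
# Hypothetical sketch of a Socratic tutoring loop, not an actual course tool.
# Requires the openai package and an API key; the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_INSTRUCTIONS = (
    "You are a tutor for a bachelor's-level statistics course. "
    "Never write the student's answer for them. Explain concepts briefly, "
    "ask one probing question at a time, give small exercises, and check "
    "the student's own attempts, pointing out mistakes and why they matter."
)

messages = [{"role": "system", "content": TUTOR_INSTRUCTIONS}]

def tutor_turn(student_message: str) -> str:
    """Send the student's message and return the tutor's reply, keeping history."""
    messages.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(tutor_turn("When should I use Welch's t-test instead of Student's t-test?"))
```

Even with instructions like these, the laborious part (attempting the exercises, reflecting on the feedback) still falls on the student, which is exactly why most won’t do it unprompted.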

To this end, special tools or systems for pedagogical purposes can be developed. Cipherbot (https://cipherbot.qcri.org) is one example of such a system. It allows the teacher to create a course and upload specific learning materials. After this, students can ask Cipherbot questions about the learning material, and Cipherbot recommends additional questions that support the student’s learning. This way, learning is personalized and students can proceed at their own pace. Cipherbot also has a tutoring function.
