How to Handle AI in Education?

In this post, I discuss the use of AI in education, specifically LLMs, or large language models (also known as generative AI), of which OpenAI’s GPT models (including ChatGPT) are prominent examples. I will first present a case for why detecting and preventing the use of such models in education is not feasible, or even possible. Then, I’ll propose ways of instructing students to use AI properly in their learning. Finally, I’ll discuss the implications for different student types.

Why Detection and Prevention Fails

There might be some indicative cues as to whether a piece of text was written by a student or by GPT, such as the lack of grammar mistakes (GPT writes near-perfect language; students often don’t :), the lack of aphorisms and metaphors (which people use often but GPT rarely does), etc.

However, the detection-and-prevention approach seems fundamentally flawed for at least the following reasons:

  • there’ll be a non-trivial number of false positives and negatives, which means additional verification will be needed, and that verification is likely to involve guesswork (ultimately, we can’t know whether a given text was written by an AI, a human, or a combination of the two; see the sketch after this list),
  • as soon as such cues become public information, students will start circumventing them, resulting in a game of cat-and-mouse, and
  • the hybrid use of GPT (a student editing machine-generated text) is particularly difficult to detect, as relatively minor edits to a text can significantly change its appearance and thus fool both algorithmic and manual detection.
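To see why the first point matters, here is a minimal back-of-the-envelope sketch in Python. All the numbers are hypothetical assumptions for illustration, not measured detector statistics: the point is that even a seemingly accurate detector produces a large share of false accusations once base rates are taken into account.

```python
# Bayes' rule: P(text is AI-written | detector flags it).
# All inputs below are hypothetical assumptions for illustration.

def flagged_is_ai(prevalence: float, tpr: float, fpr: float) -> float:
    """prevalence: share of texts that are AI-written,
    tpr: true-positive rate (AI texts correctly flagged),
    fpr: false-positive rate (human texts wrongly flagged)."""
    flagged_ai = prevalence * tpr             # true positives
    flagged_human = (1 - prevalence) * fpr    # false positives
    return flagged_ai / (flagged_ai + flagged_human)

# Suppose 20% of submissions are GPT-written, the detector catches 90%
# of those, and wrongly flags 10% of honest submissions:
print(flagged_is_ai(prevalence=0.20, tpr=0.90, fpr=0.10))  # ~0.69
```

In this hypothetical scenario, roughly three out of ten flagged texts would be false accusations, which is exactly the kind of guesswork described above.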

How to Instruct Students to Use AI

Based on the above reasoning, my take is that students should be allowed to use GPT (we cannot prevent them from using it, and they are likely to use it in their jobs anyway), but we must teach its ethical use. That is, students should:

  • declare its use instead of pretending the text was 100% created by them,
  • explain in detail how they used it (the prompts, the editing process, etc.), and
  • verify all facts presented by GPT, as it has a tendency to hallucinate (verification done using credible sources such as academic research articles, government/institute/industry reports, and statistical authorities). Here, facts refer mainly to numerical information; in my experience, GPT’s definitions of concepts tend to be accurate.

Concerning GPT models, we must bear in mind the instrumentality maxim: technology is just a tool. A bad student will use it in a bad way; a good student will use it in a good way. While we cannot remove the badness from this system, we can take measures to tilt it in favor of the good, such as encouraging ethical behavior and penalizing unethical behavior. The bottom line is that “Zero GPT” just isn’t a realistic policy option, just as “Zero Wikipedia” never was.

Implications of AI for Different Student Types

Let us take this apart a bit more by considering four student types:

  • A bad student = one who doesn’t want to learn but just wants to pass a course/degree with minimum effort.
  • A good student = one who wants to learn and do a good job passing their courses/degree.
  • A poor student = one who has good intentions (is a good student) but struggles to learn for one reason or another.
  • A talented student = one who has good intentions and is good at learning.

The implications for the different types are the interesting part. I am not much interested in the bad students; they might be viewed as out of scope. In some cases, the attitude is what it is and cannot be changed: we cannot force learning. In other cases, it might be possible to convert a bad student into a good student, but I don’t see GPT as relevant for that (cf. the instrumentality maxim).

For the good students, we need to explain how to use GPT in a good way, so that they know how to do so and can use it to learn more efficiently. My hypothesis is that GPT supports the learning of talented students, as they can use it to amplify their already good learning strategies. I am less sure about the implications for poor students, but whatever those implications may be, poor students also need guidance.

In terms of skills that educators would need to pass on to their students, at least two readily come to mind:

  • The ability to ask questions: for GPT to be useful, one needs to ask it the right questions. A “right question” is one that supports learning. Learning a topic requires coming up with *many* questions that become progressively more advanced, so the student needs to be able to craft progressively more difficult questions in order to increase his or her knowledge (in between, the student obviously needs to read and reflect on the answers). This skill relies equally on formulating the substance of the question (what is actually being asked?) and on phrasing the question (how is it being asked?). Both factors affect the response quality. For example, a student who wants to know about the history of AI could learn that there is a concept called “AI winter” and then ask the AI to explain this concept. But there have in fact been two AI winters, so the sequence and formulation of the questions can leave gaps in the student’s learning. Thus, “prompting strategies” and “prompt engineering” are relevant skills here (see the sketch after this list).
  • The ability to evaluate the quality of answers: once the student receives an answer, they need to be able to assess its quality. What does quality mean? At least two criteria apply: veracity, meaning the answer is true or correct, and comprehensiveness, meaning the answer contains the information necessary to satisfy the request. A third criterion could be connection, i.e., the answer introduces related concepts that the student can use to increase their learning by formulating new questions to the AI.
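To make the progressive questioning idea concrete, here is a minimal sketch of such a session using OpenAI’s Python SDK. It is an illustration, not a prescribed method: the model name and the questions are my assumptions, and the print loop stands in for the reading and reflection the student should do between questions.

```python
# A sketch of a progressive questioning session (illustrative only).
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Questions become progressively more advanced; in real use, each new
# question would be shaped by reflecting on the previous answer.
questions = [
    "What is an AI winter?",
    "How many AI winters have there been, and when did they occur?",
    "Compare the causes of the two AI winters. What lessons do they hold?",
]

messages = []  # keep the whole conversation so answers build on each other
for question in questions:
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\n\nA: {answer}\n")
```

Note how the conversation history is carried along: this is what lets the later, more advanced questions build on the earlier answers, mirroring how the student’s knowledge should accumulate.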

In terms of overall learning (i.e., ensuring that the student masters what he or she is expected to master to earn a degree), the optimal mix might be a return to controlled exams for some courses (their use has been diminishing over time; this might reverse some of that change), while teaching the correct use of GPT in others.

Conclusion

AI is coming into education (or, rather, it’s already here). Educators cannot prevent the use of AI models in learning. Instead, they should make sure such models are used ethically and in a way that supports students’ learning.

Acknowledgments

Thanks to Mikko Piippo for the LinkedIn discussion that inspired this post 🙂