This post contains an example of how to declare the use of AI in academic work (see the ‘Methodology’ section below) and explains triangulation as a way of verifying AI-generated information. (Read also my previous post about how to handle AI in education.)
OpenAI’s ChatGPT (September 25 Version) was used to define triangulation and present examples of it. The prompt given to ChatGPT was as follows:
“define triangulation in one sentence. then, give three examples of triangulation in the context of verifying AI-generated information.”
The response given by ChatGPT was evaluated by using the author’s expert judgment to assess whether the information it gave was correct. The response was modified by removing redundant examples that did not fit with the context of this work and by adding contextually suitable examples of credible data sources for thesis work. The content generated by AI (and modified by the author of this document) can be found in section ‘Triangulation’. The original chat can be found online (https://chat.openai.com/share/2c36a655-6656-4ba6-aff4-7cfe4f521f02).
From this example, you can see some principles of declaration:
- The specific tool and its version are made public: “ChatGPT (September 25 version)”
- The prompt is made public: “The prompt given was…”
- The original chat is made available so that people can compare AI-generated text with the final text: “can be found online ([link])”
- The process of verifying the AI-generated content is explained: “was evaluated by…”
- The process of modifying the AI-generated content is explained: “was modified by…”
- The location of AI-generated content in the document is specified: “can be found in section…”
Triangulation refers to the use of multiple methods, data sources, or perspectives to validate or cross-check information.
In thesis work, students can apply source cross-referencing as a form of triangulation. This involves comparing AI-generated information with multiple trusted data sources or databases (e.g., government-issued statistics and, most importantly, peer-reviewed academic research articles) to ensure consistency and accuracy.
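As a loose analogy, the cross-referencing logic above can be sketched in code. The claim, source names, and verdicts below are hypothetical placeholders; in practice, each verdict would come from you manually reading a trusted source, not from a program:

```python
# A minimal sketch of source cross-referencing as triangulation.
# The claim and the per-source verdicts are hypothetical placeholders;
# in real thesis work, each verdict comes from manually checking a source.

def triangulate(claim, source_verdicts, threshold=1.0):
    """Accept a claim only if the share of agreeing sources meets
    the threshold (default: every consulted source must agree)."""
    agreeing = sum(1 for verdict in source_verdicts.values() if verdict)
    return agreeing / len(source_verdicts) >= threshold

claim = "Finland's population is about 5.6 million"
verdicts = {
    "government statistics": True,   # e.g., a national statistics office
    "peer-reviewed article": True,
    "encyclopedia entry": True,
}
print(triangulate(claim, verdicts))  # all sources agree -> True
```

The strict default threshold reflects the spirit of triangulation: a single disagreeing source is a signal to investigate further, not to average away.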
The scope of using AI
AI can be used in many ways to support the academic writing process, but it is more appropriate in some parts of the manuscript than in others:
- abstract — OK! There’s no harm in letting the AI help you summarize your work.
- definitions — OK! There’s no harm in letting the AI define concepts.
- literature review — LIMITED OK! AI can summarize previous research, BUT it can make mistakes and focus on irrelevant aspects. DO NOT use AI-generated literature reviews ‘as is’ — I’ve tried this multiple times, and even though adjusting the prompting helps, the outputs always require heavy manual editing.
- methodology — LIMITED OK! I’ve used AI to come up with research designs, research questions, and hypotheses. It gives good results, meaning the ideas tend to make sense. Of course, you as the subject matter expert need to manually assess the quality of these ideas. For a seasoned researcher this is easier, as they already know the field well; for a novice researcher, it is much harder. So, you need to establish your own baseline understanding of a field to be able to judge whether research questions, hypotheses, or methods make sense in your domain.
- results — LIMITED OK! The same logic applies here as above. You can use AI to help you with your statistical analysis, for example, but then you need to know the basics of statistical analysis to ensure that the AI did the job correctly. If you don’t know the basics, you should learn them before using AI, because if the AI makes a mistake, you wouldn’t be able to detect it. The risk of invalid results therefore increases if you don’t know the basics. Of course, AI can help you learn the basics! You can ask it questions about different methods in an iterative manner and then use Google to cross-check against reputable sources whether the answers are consistent. So, you can learn many things by doing things with the AI – this is the beauty of AI–human collaboration.
- discussion — LIMITED OK! The implications should be based on your own thinking. I’ve seen many discussion sections that list AI-generated outputs as “original” ideas — those are often neither based on the study’s findings nor really that original. The only way I’d use AI here is for coarse inspiration — absolutely NOT as a replacement for your own thinking: what do your results imply for theory and practice? It is impossible to do good research without actually taking a breath and thinking about this question.
- conclusions — LIMITED OK! As in the previous case, the conclusions of your work should be based on your own thinking. The AI can help you address the blank-page problem, but it should not replace your thinking.
NOTE: In all of the above cases, the principles of verification and declaration still apply. So, you need to (a) explain how you used the AI and (b) manually verify that what it wrote is correct. There’s no shortcut here.
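To make the verification step concrete, here is a minimal sketch of recomputing an AI-reported statistic yourself. The data set and the “AI-reported” value are hypothetical placeholders; only Python’s standard `statistics` module is used:

```python
# A minimal sketch of verifying an AI-produced statistic yourself.
# The data and the "AI-reported" value are hypothetical placeholders.
import statistics

data = [4.1, 5.3, 4.8, 5.0, 4.6, 5.2]

ai_reported_mean = 4.83                    # value the AI claimed (hypothetical)
recomputed_mean = statistics.mean(data)    # your own independent computation

# Cross-check: accept the AI's figure only if it matches your own
# computation within a small tolerance.
if abs(recomputed_mean - ai_reported_mean) < 0.01:
    print("AI result confirmed")
else:
    print(f"Mismatch: AI said {ai_reported_mean}, recomputed {recomputed_mean:.2f}")
```

The point is not the code itself but the habit: whenever an AI hands you a number, you should be able to reproduce it through an independent route — by hand, with a calculator, or with a few lines of code you understand.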