Is the use of generative AI permitted at RUB?
Yes: Generative AI tools may be used at RUB in research, teaching and studies. However, their use can be restricted in certain contexts (e.g. written examinations) or tied to labelling requirements. In this respect, generative AI tools are no different from other tools. This means that the rules and requirements for the use of AI tools must be clarified in the instructions for the preparation of a term paper (together with other permissible or non-permissible aids). In addition, students must be told how to declare the use in the declaration of independence, e.g. “I used ChatGPT exclusively for generating the outline of my term paper.” or “I used ChatGPT for revising my term paper.” In this respect, too, the use of AI tools is no different from that of other tools.
Who is the author of an AI-generated text?
Generally, neither the developers who trained the model nor the model itself holds authorship of AI-generated content. Authorship can, however, lie with the users if they continue to work with the generated content so that it becomes a product of their own intellectual effort. Whether authorship can be attributed in this way remains a case-by-case decision.
Are there labelling obligations for AI-generated content?
One of the basic values in academia is that processes of knowledge production should be as transparent as possible. However, conventions for labelling AI-generated content have not yet been established. From a legal point of view, two aspects in particular must be taken into account for examinations. Firstly, every examination regulation stipulates for written examinations that the permitted aids are announced at the beginning of the semester in which the module takes place or together with the assignment. If the use of an AI application is permitted in this context, students confirm with the submission of the written paper not only that the work was written independently, but also that no sources or aids (in this case, including AI tools) other than those specified were used and that citations were marked as such. Secondly, if the regulations on permitted and non-permitted aids, issued to students together with the assignment or at the beginning of the semester, include a labelling obligation, students must comply with it.
Do new regulations have to be made and existing ones changed?
Concerning RUB, we assume that examination regulations do not have to be adapted extensively with regard to generative AI. As a rule, declarations of independence already stipulate that all aids used must be stated; this also applies to generative AI. However, it would be advisable to specify this in the information that students receive about permitted and non-permitted aids in examinations. Furthermore, whether and in what way the use of generative AI is permissible must be decided for each course individually with regard to its intended learning outcomes. If this is relevant for the examinations, the regulation must be communicated to the students at the beginning of the course together with the other information on the examination. It is then also binding for the teachers.
If lecturers are planning oral examinations in addition to written assignments, the examination regulations must permit this, and it must be indicated accordingly in the module description.
Can people be obliged to use tools such as ChatGPT?
If the use of AI tools is to be mandatory, the terms of use of the respective software must be observed. In particular, it matters how users’ data is handled, which can vary greatly depending on the software or platform. If no data-protection-compliant solution can be provided, mandatory use must be viewed critically, and use may only be voluntary.
As a teacher, am I allowed to use ChatGPT to grade exams?
Examinations that go beyond highly pre-structured formats such as multiple-choice tests (e.g. theses) are protected by copyright. If the data entered as a prompt is stored and reused – as is the case with ChatGPT – entering the text of a written exam constitutes an impermissible reproduction and therefore a copyright infringement.
From the perspective of examination law, assessment may only be carried out by the examiners, not by software. AI applications may nevertheless be used as an aid in the assessment process. However, the result must be critically examined, and the grade can only be determined by the person responsible for the assessment.
What happens if there is a case where it is suspected that generative AI was used illegitimately?
The standard regulation for cheating applies here: If candidates attempt to influence the result of an examination or coursework by cheating or using unauthorised aids, the examination or coursework in question is assessed as “insufficient” (5.0). The respective examiner or invigilator records the incident, and the examination board reviews the case. In the event of repeated or otherwise serious attempts at cheating, candidates can, after a prior hearing, be excluded from taking further examinations and exmatriculated.
In concrete terms, this means that the examiner must decide whether the student has used AI tools as an inadmissible aid or has failed to make their use clear. This depends on which aids have been designated as permissible for the examination in question. The formal decision as to whether an attempt at cheating has occurred is made by the examination board.