Generative Artificial Intelligence guidance for students

Guidance and advice for students on the use of Generative Artificial Intelligence (such as ChatGPT) within the University.

The technology, ethics, and use of AI are evolving rapidly. This guidance is current as of March 2023 and will be updated as necessary.

University position

There is currently a lot of interest in generative AI systems. ChatGPT (by OpenAI) is just one example; others include DALL-E 2, GitHub Copilot, and Google Bard. It is an exciting area, and naturally we want to explore what these tools can do and learn how to make use of them.

The University position is not to impose a blanket restriction on the use of generative AI, but rather to: 

  • Emphasise the expectation that assignments should contain students’ own original work; 
  • Highlight the limitations of generative AI and the dangers of relying on it as a source of information; 
  • Emphasise the need to acknowledge the use of generative AI where it is (permitted to be) used. 

Some assignments may explicitly ask you to work with AI tools and to analyse and critique the content they generate; others may specify that AI tools should not be used, or only used in specific ways. This will depend on the learning objectives for your courses. Please refer to the specific criteria for your assignments and ask your lecturers if in doubt.

Expectation of own original work 

All work submitted for assessment should be your own original work. In some cases, you may be asked to sign a declaration of own work. It is not appropriate to misrepresent AI-generated content as your own work.

Important note

Be aware that if you use AI tools (such as ChatGPT) to generate all or part of an assignment and submit it as if it were your own work, this will be regarded as academic misconduct and treated as such.

“Academic misconduct is defined by the University as the use of unfair means in any University assessment. Examples of misconduct include (but are not limited to) plagiarism, self-plagiarism (that is, submitting the same work for credit twice at the same or different institutions), collusion, falsification, cheating (including contract cheating, where a student pays for work to be written or edited by somebody else), deceit, and personation (that is, impersonating another student or allowing another person to impersonate a student in an assessment).” (University of Edinburgh, Academic Misconduct Procedures) 

Current limitations of generative AI

Generative AI offers a number of benefits, but it also has limitations, which you need to be aware of.

It is important that you:

  • Understand the limitations of any AI system you are using;
  • Check the factual accuracy of the content it generates;
  • Do not rely on AI-generated content as a key source – use it in conjunction with other sources.

In particular, be aware of the following limitations:

  • Generative AI tools are language machines rather than databases of knowledge – they work by predicting the next plausible word or section of programming code from patterns that have been 'learnt' from large data sets (see the sketch after this list);
  • AI tools have no understanding of what they generate, so a knowledgeable human must check the output (often over several iterations);
  • The data sets such tools learn from are flawed and contain inaccuracies, biases and limitations;
  • The text they generate is not always factually correct;
  • They can create software/code that has security flaws or bugs, uses illegal libraries or calls, or infringes copyright;
  • Code or calculations produced by AI will often look plausible but, on closer inspection, contain errors in the detailed working (see the example after this list) – any such code or calculation should be fully checked by a human trained in that programming language;
  • The data their models are trained on is not up to date – they currently have limited data on the world and events after a certain cut-off point (2021 in the case of ChatGPT);
  • They can generate offensive content;
  • They can produce fake citations and references;
  • Such systems are amoral – they do not know that it is wrong to generate offensive, inaccurate or misleading content;
  • Their output can include hidden plagiarism – words and ideas taken from human authors without reference, which we would consider plagiarism;
  • There is a risk of copyright infringement with pictures and other copyrighted material.
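
To make the first point concrete, here is a deliberately tiny, purely illustrative Python sketch (written for this guidance – it bears no resemblance to the scale or sophistication of real systems such as ChatGPT). It 'learns' which word tends to follow which from a toy corpus and then predicts the most plausible next word. It has no understanding of meaning – it only reproduces patterns:

    # Toy next-word predictor: learns word-pair counts from a tiny
    # corpus, then always suggests the most frequent follower.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    followers = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        followers[current][nxt] += 1  # count which word follows which

    def predict_next(word):
        """Return the most plausible next word seen in training."""
        options = followers.get(word)
        return options.most_common(1)[0][0] if options else None

    print(predict_next("the"))  # -> 'cat' (plausible, not 'knowledge')

Real generative AI systems rest on the same underlying principle – predicting plausible continuations from patterns in training data – just at an enormously larger scale.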
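
To illustrate how AI-produced code can look plausible yet be subtly wrong, consider this short hypothetical Python snippet (written for this guidance as an example of the kind of output an AI tool might produce – it is not real AI output):

    def is_leap_year(year):
        """Looks correct at a glance, but the rule is incomplete."""
        return year % 4 == 0 and year % 100 != 0

    print(is_leap_year(2024))  # True  - correct
    print(is_leap_year(2000))  # False - wrong: 2000 was a leap year

The function passes casual testing, but it omits the 400-year rule (years divisible by 400, such as 2000, are leap years). Only someone who knows the full rule will catch the error – which is why any AI-produced code or calculation must be checked by a trained human.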

Important note

Over-reliance on AI tools simply to generate written content, software code or analysis reduces your opportunity to practise and develop key skills (e.g. writing, critical thinking, evaluation, analysis or coding). These skills are valued and required to succeed both during and beyond your time at University.

Citing and acknowledging the use of AI 

Where the use of AI is permitted in assessed work, it is important to be transparent about the use of such tools and content generated from them. 

Content generated by AI is non-recoverable – it cannot be retrieved or linked to in the same way that other digital sources can. For this reason, the current convention is to cite AI-generated content as "personal communication" (because it is based on asking a question or giving a prompt and receiving an answer). This is usually an in-text citation only.

Each reference style (e.g. Harvard, APA) will set out how to do this, so you should consult the guidance for the reference style you are using. 

Additionally, if you use any generative AI tool (such as ChatGPT) to help you (e.g. generate ideas or develop a plan), you should still acknowledge how you have used the tool, even if you do not include any AI generated content in your work. You should acknowledge the AI tool used, describe how you used it, and indicate the date you accessed it. 
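
For illustration only – the exact format depends on your reference style, so always check its guidance – an in-text citation in a personal-communication style might look like: (ChatGPT, OpenAI, personal communication, 14 March 2023). Similarly, an acknowledgement might read: "I used ChatGPT (OpenAI) on 14 March 2023 to generate ideas for the structure of this report; no AI-generated text appears in the submitted work." The tool, use and date shown are illustrative.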

Further guidance 

Further guidance on academic misconduct (including plagiarism) and how to avoid it

Further guidance on general referencing
