
Determining Levels of Acceptable GenAI Use

Many instructors are contemplating what level of GenAI (e.g., ChatGPT) use is acceptable for students completing their assessments. Instructors who have tried the tools have seen how well they can generate text in response to a prompt or request. They are concerned that they will not be able to tell whether a student has completed an assessment with or without this kind of assistance, and that the validity of the assessment will become uncertain.

If you’re new to GenAI, read these posts:

  • Need to learn more about ChatGPT
  • New! USask Enterprise GenAI
  • New module added to academic integrity tutorial


Students are gaining more experience with these tools and looking for explicit guidance from their instructors so that they can meet expectations and avoid academic misconduct.

Make your ChatGPT and other artificial intelligence expectations clear


The table below was shared by educational consultant Leon Furze (see his blog post on the “Artificial Intelligence Assessment Scale”) and is elaborated in Perkins et al., “Navigating the Generative AI Era” (see the pre-print paper here). Its purpose is to help instructors define the level of acceptable GenAI use.

Use this table to think about the permitted and restricted uses that would apply in your courses or to specific assessments, depending on what those assessments are designed to do. You could even use this table, along with the explanation provided by the authors, to show students what you mean by levels of assistance or support and what “too much” reliance on GenAI would look like.

After reviewing the table, note the important cautions below about banning GenAI entirely (Level 1) and permitting unrestricted use of GenAI (Level 5).


Source: The AI Assessment Scale: Version 2 – Leon Furze


Caution about Level 1

Consider that “Level 1 = No use of AI” means you have to set a clear restriction, and that this is likely to be the most difficult and administratively burdensome level to enforce because of the monitoring and control it will require.

  • Invigilated assessments are likely to require accommodations, in addition to the other costs of holding in-person assessments (e.g., invigilators, appropriate classroom space, time limits).
  • If the assessment is not invigilated, effort will be needed to clearly identify unpermitted use.
  • Detection tools are not reliable and are not approved for use.
  • There are many methods to obscure use of GenAI, and information about these is widely available (video example).
  • When unpermitted use is identified, the work associated with an academic misconduct response will be required.

For these reasons, related to instructor workload as well as the validity and authenticity of assessments, bans on use are not recommended in most cases. Instead, redesigning assessments to include and acknowledge appropriate use is recommended, particularly at Levels 2, 3, and 4 in the table above.


Caution about Level 5

Consider that “Level 5 = Full AI” means that, according to this table, students are not required to acknowledge their use of GenAI tools.

  • In the professional futures of some of our graduates, use of GenAI may be permitted and co-writing using GenAI may be expected in most, if not every, case.
  • Legal decisions about areas of copyright and notions of authorship are pending in North American contexts.

Until these responsible-use questions are settled in professional and legal contexts, it is advisable to require some type of acknowledgement for any use of GenAI at this time.