Approach to the use of AI

Download the Approach to the use of AI as a PDF (38.5 MB).
When you as a teacher wish to use AI tools to support your work and enhance your students' learning opportunities, there are some things you need to attend to in order to use AI responsibly.
The Approach to the use of AI presented by Educate will help you do just that. The approach is based on existing legislation and guidelines in Sweden and in the European Union.
As a teacher, you can approach AI legislation and guidelines with different focuses:
- What do you need to consider as a teacher?
- What do you need to consider as a supervisor at all levels of education from a research perspective?
- What can your students expect and what are they expected to do as regards the use of AI?
Accordingly, you will find these focus areas in the companion below.
There is a printable guide as well as an online companion that offers you examples of what each aspect entails in educational practice. The companion also helps you explore the background of each aspect in relevant legal documents and guidelines.
Please note that the citations in the companion may not be verbatim; some have been summarised or shortened.
Alignment with legislation and frameworks
The Approach to AI has been carefully aligned with key legislation and guidelines, including the GDPR, the EU AI Act, UNESCO and OECD principles, and national frameworks such as Sweden’s National AI Strategy. The statements are based on these key documents. The companion contains key terms that you may need to find definitions for, such as controllers and processors in the GDPR, and deployers in the AI Act. Whenever possible, we try to explain these terms. It is also important to clarify that in some cases we have chosen to reference recitals in the AI Act. Recitals aim to clarify the goals of the regulation and how its provisions should be understood, but they are not legally binding. The companion is not a complete overview of frameworks or legislation; it only helps you explore the background of each approach statement in relation to education.
Please note that under Swedish copyright law, it is possible for the copyright holder to clearly state a reservation against the use of their materials as training data for AI. This means that you may need to be aware of how the AI processes the data that you share or upload. This perspective is not included in the companion.
Selected supporting documents
Ethics Guidelines for Trustworthy AI (PDF, 1.7 MB)
European Union AI Act (PDF, 839.5 kB)
GDPR (PDF, 959.3 kB)
UNESCO Recommendation on the Ethics of Artificial Intelligence (PDF, 1 MB)
The European Code of Conduct for Research Integrity (PDF, 456.3 kB)
UNESCO AI Competency Framework for Students (PDF, 745.3 kB)
UNESCO AI Competency Framework for Teachers (PDF, 1.1 MB)
Keywords
In the supporting documents we have used, there are keywords and terms that can be important to understand. There are also key concepts that you might need to learn more about to understand how to approach the use of AI in education.
High-risk AI systems
Defined in the AI Act as systems that are intended as safety components of products, or are products themselves, covered under specific Union harmonisation legislation and requiring third-party conformity assessment. These systems are classified as high-risk when their use involves a significant risk to health, safety, or fundamental rights. Additionally, AI systems listed in Annex III of the EU AI Act are also considered high-risk unless their deployment does not pose substantial risks or materially influence decision-making outcomes. This means that, depending on your use of the system, it could be considered high-risk. Education and vocational training is one of the areas mentioned in Annex III:
- AI systems for determining access or admission to educational or vocational training.
- Systems for evaluating learning outcomes and steering educational processes.
- Tools assessing educational levels for determining access.
Personal Data
Any information relating to an identified or identifiable natural person (data subject), such as names, identification numbers, location data, or online identifiers.
Processing
Any operation performed on personal data, including collection, storage, use, disclosure, or deletion.
Data Controller
The entity determining the purposes and means of processing personal data.
Data Processor
The entity processing personal data on behalf of the controller.
Transparency Obligations
Requirements for AI providers to disclose the operation and limitations of AI systems, especially those interacting with humans.
Provider
Any entity developing or placing an AI system on the market.
Deployer/User
Any entity or individual using an AI system under their authority.
How did we use AI to create the Approach to the use of AI?
The Approach to the use of AI was created by Educate in an effort to inform about the rules, regulations and frameworks that can help navigate the relatively novel area of AI in education. We have focused on creating an approach, and also on helping you find explanations and support for the choices and actions you take.
Educate has developed the Approach to the use of AI by first collecting and reviewing available information in frameworks, legislation and other key documents, comparing with similar guidelines at other higher education institutions, and then summarising, contextualising and explaining the information in order to create "approach statements" that can provide the foundation for our work and development.
In our work, we have used ChatGPT and Elicit for finding and selecting key documents. We have used Chat4All, Perplexity AI, ChatPDF and Claude to support the analysis of the selected documents and cross-referencing. None of the actual text in the Approach to AI has been generated by AI. In the companion, AI has been used to ensure that keywords are used systematically. Quality assurance has been carried out through internal review at Educate, review by key stakeholders at JU, and external review at RISE.
Approach to AI Companion
Information security, cybersecurity, and data privacy
Handle personal and student data responsibly, ensuring security and protection.
Teachers must ensure that student data is anonymized or securely stored when using AI for academic purposes, such as tracking performance or providing tailored feedback.
“Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk” (GDPR, Article 32, Section 1).
“High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle” (Artificial Intelligence Act, European Parliament and Council, Article 15, Section 1).
“High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard” (Artificial Intelligence Act, European Parliament and Council, Article 15, Section 4).
“Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities” (Artificial Intelligence Act, European Parliament and Council, Recital 76).
“Closely linked to the principle of prevention of harm is privacy, a fundamental right particularly affected by AI systems. Prevention of harm to privacy also necessitates adequate data governance that covers the quality and integrity of the data used, its relevance in light of the domain in which the AI systems will be deployed, its access protocols and the capability to process data in a manner that protects privacy” (Ethics Guidelines for Trustworthy AI, 2019).
“Educators need to ensure that AI systems they are using are reliable, fair, safe, and trustworthy and that the management of educational data is secure, protects the privacy of individuals, and is used for the common good” (Ethical Guidelines on the Use of Artificial Intelligence and Data in Teaching and Learning for Educators, European Commission, 2022).
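As a purely illustrative sketch of the anonymisation mentioned above, the snippet below replaces known student names and IDs with stable pseudonyms before an excerpt is shared with an AI tool. The helper name, the roster format, and the token scheme are our own assumptions, not taken from any of the cited frameworks; a real deployment would need to follow your institution's data protection procedures.

```python
import hashlib

def pseudonymise(text: str, students: dict[str, str]) -> str:
    """Replace each known student name and ID with a stable pseudonym.

    `students` maps real names to student IDs. Both are replaced by a
    short hash of the ID, so results can be matched back to the student
    later using a key kept locally, never shared with the AI tool.
    """
    for name, student_id in students.items():
        token = "STUDENT-" + hashlib.sha256(student_id.encode()).hexdigest()[:8]
        text = text.replace(name, token)
        text = text.replace(student_id, token)
    return text

# Example: strip identities before pasting an excerpt into an AI tool.
roster = {"Anna Svensson": "19990101-1234"}
safe = pseudonymise("Anna Svensson (19990101-1234) scored 78/100.", roster)
print(safe)
```

Note that simple substitution like this only removes the identifiers you know about; free text can still contain indirect identifiers, which is why the GDPR's "appropriate technical and organisational measures" go well beyond such a script.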
Students can expect that their personal data is handled securely and used only to enhance their learning experiences. Data collected through AI should respect privacy and not be shared without consent. At the same time, students are expected to protect their own data and act responsibly in digital environments by following university guidelines and avoiding sharing sensitive or personal information with unauthorized parties or platforms.
“Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk” (GDPR, Article 32, Section 1).
“Access to data. In any given organisation that handles individuals’ data (whether someone is a user of the system or not), data protocols governing data access should be put in place. These protocols should outline who can access data and under which circumstances. Only duly qualified personnel with the competence and need to access individual’s data should be allowed to do so” (Ethics Guidelines for Trustworthy AI, 2019).
“AI systems must guarantee privacy and data protection throughout a system’s entire lifecycle…” (Ethics Guidelines for Trustworthy AI, 2019).
“Educators need to ensure that AI systems they are using are reliable, fair, safe, and trustworthy and that the management of educational data is secure, protects the privacy of individuals, and is used for the common good” (Ethical Guidelines on the Use of Artificial Intelligence and Data in Teaching and Learning for Educators, European Commission, 2022).
Managers and administrators are responsible for implementing and reviewing data protection policies. They must ensure compliance with institutional, EU, and national standards for data privacy in AI applications.
“Controllers and processors shall implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk” (GDPR, Article 32, Section 1).
“High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle” (Artificial Intelligence Act, European Parliament and Council, Article 15, Section 1).
“High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard” (Artificial Intelligence Act, European Parliament and Council, Article 15, Section 4).
Staff members at the university should understand the importance of data privacy and take active steps to ensure secure handling of personal data. Staff are encouraged to familiarize themselves with institutional data protection guidelines and implement best practices.
“Each controller and, where applicable, the controller's representative, shall maintain a record of processing activities under its responsibility” (GDPR, Article 30, Section 1).
“The controller shall implement appropriate technical and organisational measures to ensure and to be able to demonstrate that processing is performed in accordance with this Regulation” (GDPR, Article 24, Section 1).
“High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle” (Artificial Intelligence Act, European Parliament and Council, Article 15, Section 1).
“High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard” (Artificial Intelligence Act, European Parliament and Council, Article 15, Section 4).
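The record of processing activities cited above (GDPR, Article 30) can be kept in many forms. Purely as an illustration of what such an entry might capture for an AI-supported activity, here is a minimal sketch; the field names and example values are ours, not prescribed by the GDPR, and a real record would follow your institution's template.

```python
from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    # Minimal fields loosely inspired by GDPR Article 30(1); illustrative only.
    purpose: str                  # why the personal data is processed
    categories_of_data: list[str]
    categories_of_subjects: list[str]
    recipients: list[str]         # e.g. the AI service provider
    retention: str                # envisaged time limit for erasure
    security_measures: list[str]  # technical and organisational measures

record = ProcessingRecord(
    purpose="AI-assisted formative feedback on essays",
    categories_of_data=["pseudonymised essay texts"],
    categories_of_subjects=["students"],
    recipients=["procured AI service (data processing agreement in place)"],
    retention="end of course plus 6 months",
    security_measures=[
        "pseudonymisation before upload",
        "access restricted to course staff",
    ],
)
print(record.purpose)
```

The point of the sketch is simply that each AI-supported processing activity should be documented with its purpose, data categories, recipients, retention, and safeguards, so the controller can demonstrate compliance.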
Researchers must ensure data is handled in compliance with data protection laws and secured from unauthorized access throughout its lifecycle.
“The controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk…” (GDPR, Article 32, Section 1).
“Pay particular attention to issues related to privacy, confidentiality, and intellectual property rights when sharing sensitive or protected information with AI tools” (Living Guidelines on the Responsible Use of Generative AI, 2024).
AI transparency
Explain how and why you use AI tools.
Teachers should clearly communicate to students how AI tools are used in grading, providing feedback, or supporting learning materials. This transparency helps students understand the role AI plays in their assessments and builds trust in AI’s role in education.
“Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.” (AI Act, Article 50, Section 1)
”Traceability. The data sets and the processes that yield the AI system’s decision, including those of data gathering and data labelling as well as the algorithms used, should be documented to the best possible standard to allow for traceability and an increase in transparency.” (Ethics Guidelines for Trustworthy AI, 2019)
“Explainability. Explainability concerns the ability to explain both the technical processes of an AI system and the related human decisions (e.g. application areas of a system). Technical explainability requires that the decisions made by an AI system can be understood and traced by human beings.” (Ethics Guidelines for Trustworthy AI, 2019)
“Communication. AI systems should not represent themselves as humans to users; humans have the right to be informed that they are interacting with an AI system. This entails that AI systems must be identifiable as such. In addition, the option to decide against this interaction in favour of human interaction should be provided where needed to ensure compliance with fundamental rights.” (Ethics Guidelines for Trustworthy AI, 2019)
Students can expect transparency regarding how AI impacts their learning or assessments, including clear communication on how AI tools influence grading, feedback, or personalized learning experiences. Additionally, students are encouraged to engage actively by asking questions about the AI tools used and understanding their own data rights within these systems. Students are also responsible for being honest and transparent in their use of AI tools, ensuring that they do not use such tools in ways that violate academic integrity, such as plagiarism or unauthorized assistance. You may be required to explain how you have used an AI system.
“Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.” (Artificial Intelligence Act, European Parliament and Council, Article 50, Section 1)
”Traceability. The data sets and the processes that yield the AI system’s decision … should be documented to the best possible standard to allow for traceability and an increase in transparency.” (Ethics Guidelines for Trustworthy AI, 2019)
“Explainability. Explainability concerns the ability to explain both the technical processes of an AI system and the related human decisions. Technical explainability requires that the decisions made by an AI system can be understood and traced by human beings.” (Ethics Guidelines for Trustworthy AI, 2019)
“Communication. AI systems should not represent themselves as humans to users; humans have the right to be informed that they are interacting with an AI system. This entails that AI systems must be identifiable as such. In addition, the option to decide against this interaction in favour of human interaction should be provided where needed to ensure compliance with fundamental rights.” (Ethics Guidelines for Trustworthy AI, 2019)
Managers and administrators are responsible for implementing policies that ensure all AI use is transparent. Policies should include how and why AI is used in educational settings and ensure that this information is accessible to both students and staff.
“High-risk AI systems shall be accompanied by clear instructions for use and shall include information that explains the system's intended purpose, capabilities, and limitations.” (Artificial Intelligence Act, European Parliament and Council, Article 13, Section 2)
“The controller shall take appropriate measures to provide any information [...] in a concise, transparent, intelligible, and easily accessible form, using clear and plain language.” (GDPR, Article 12, Section 1)
”Traceability. The data sets and the processes that yield the AI system’s decision, including those of data gathering and data labelling as well as the algorithms used, should be documented to the best possible standard to allow for traceability and an increase in transparency.” (Ethics Guidelines for Trustworthy AI, 2019)
“Explainability. Explainability concerns the ability to explain both the technical processes of an AI system and the related human decisions (e.g. application areas of a system). Technical explainability requires that the decisions made by an AI system can be understood and traced by human beings.” (Ethics Guidelines for Trustworthy AI, 2019)
“Communication. AI systems should not represent themselves as humans to users; humans have the right to be informed that they are interacting with an AI system. This entails that AI systems must be identifiable as such. In addition, the option to decide against this interaction in favour of human interaction should be provided where needed to ensure compliance with fundamental rights.” (Ethics Guidelines for Trustworthy AI, 2019)
University staff should be informed and regularly updated on the role of AI in educational tools and processes. Clear communication about the implementation and impact of AI systems supports trust and shared responsibility within the institution.
“The data subject shall have the right to obtain from the controller confirmation as to whether or not personal data concerning him or her are being processed, where and for what purpose.” (GDPR, Article 15, Section 1)
Staff who use AI systems in their daily roles must be prepared to provide transparency about how these systems interact with personal data, including responding to questions from students or colleagues.
“High-risk AI systems shall be accompanied by instructions for use and other relevant documentation... This documentation shall include concise, complete, correct and clear information that is relevant, accessible and comprehensible.” (Artificial Intelligence Act, European Parliament and Council, Article 13, Section 2)
”Traceability. The data sets and the processes that yield the AI system’s decision, including those of data gathering and data labelling as well as the algorithms used, should be documented to the best possible standard to allow for traceability and an increase in transparency.” (Ethics Guidelines for Trustworthy AI, 2019)
“Explainability. Explainability concerns the ability to explain both the technical processes of an AI system and the related human decisions (e.g. application areas of a system). Technical explainability requires that the decisions made by an AI system can be understood and traced by human beings.” (Ethics Guidelines for Trustworthy AI, 2019)
“Communication. AI systems should not represent themselves as humans to users; humans have the right to be informed that they are interacting with an AI system. This entails that AI systems must be identifiable as such. In addition, the option to decide against this interaction in favour of human interaction should be provided where needed to ensure compliance with fundamental rights.” (Ethics Guidelines for Trustworthy AI, 2019)
Researchers must disclose their use of AI tools, detailing methodologies, datasets, and tools used to ensure transparency and reproducibility.
“Honesty in developing, carrying out, reviewing, reporting, and communicating on research transparently, fairly, thoroughly, and impartially. This principle includes disclosing that generative AI has been used” (Living Guidelines on the Responsible Use of Generative AI, 2024)
“Transparency obligations shall include the provision of clear and adequate information about the system’s capabilities and limitations…” (AI Act, Article 13)
Human oversight
Use AI to support, not replace, human decision-making.
Teachers are encouraged to use AI as a supportive tool that enhances their teaching but does not replace their judgment. AI may offer insights into student progress and areas for improvement, yet teachers retain the authority and responsibility to make final assessment decisions based on their professional judgment.
“It may be the case that sometimes humans would choose to rely on AI systems for reasons of efficacy, but the decision to cede control in limited contexts remains that of humans, as humans can resort to AI systems in decision-making and acting, but an AI system can never replace ultimate human responsibility and accountability.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
“High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use” (Artificial Intelligence Act, European Parliament and Council, Article 14, Section 1)
“AI systems should support human autonomy and decision-making, as prescribed by the principle of respect for human autonomy. This requires that AI systems should both act as enablers to a democratic, flourishing and equitable society by supporting the user’s agency and foster fundamental rights, and allow for human oversight.” (Ethics Guidelines for Trustworthy AI, 2019)
“Educators need to ensure that AI systems they are using are reliable, fair, safe, and trustworthy and that the management of educational data is secure, protects the privacy of individuals, and is used for the common good.” (Ethical Guidelines on the Use of Artificial Intelligence and Data in Teaching and Learning for Educators, European Commission, 2022)
Students benefit from AI tools that enhance learning by providing tailored feedback and resources. However, students can trust that critical educational decisions, such as grading and evaluation, remain under human control, ensuring fairness. Students are also encouraged to engage with AI tools responsibly, understanding their role as complementary rather than authoritative sources. Additionally, students are expected to uphold academic integrity by ensuring that the work they submit is their own and to avoid over-reliance on AI tools for academic tasks.
“It may be the case that sometimes humans would choose to rely on AI systems for reasons of efficacy, but the decision to cede control in limited contexts remains that of humans, as humans can resort to AI systems in decision-making and acting, but an AI system can never replace ultimate human responsibility and accountability.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
“High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.” (Artificial Intelligence Act, European Parliament and Council, Article 14, Section 1)
Managers and administrators must establish protocols ensuring that AI applications in education always incorporate human oversight. This involves enabling staff to review and validate AI-generated recommendations and make autonomous decisions based on professional standards and institutional policies.
“The notion of ‘deployer’ refers to any natural or legal person, including a public authority, agency, or other body, using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.” (Artificial Intelligence Act, European Parliament and Council, Recital 13)
“Deployers should in particular take appropriate technical and organisational measures to ensure they use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate. Furthermore, deployers should ensure that the persons assigned to implement the instructions for use and human oversight as set out in this Regulation have the necessary competence, in particular an adequate level of AI literacy, training and authority to properly fulfil those tasks. Those obligations should be without prejudice to other deployer obligations in relation to high-risk AI systems under Union or national law.” (Artificial Intelligence Act, European Parliament and Council, 2024, Recital 91)
“High-risk AI systems shall be designed and developed in such a way that they can be effectively overseen by natural persons during their period of use.” (Artificial Intelligence Act, European Parliament and Council, Article 14, Section 1)
“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” (GDPR, Article 22, Section 1)
University staff should be informed about the importance of human oversight in AI-assisted educational tools, ensuring that these tools are used to complement professional expertise rather than replace it. Staff members play a critical role in validating AI-generated insights, applying their knowledge and judgment to uphold educational standards and safeguard student welfare.
“High-risk AI systems shall be designed and developed in such a way... that they can be effectively overseen by natural persons during the period in which the AI system is in use.” (Artificial Intelligence Act, European Parliament and Council, Article 14, Section 1)
Researchers must maintain oversight of AI systems to ensure their use aligns with ethical research practices and human accountability.
“Researchers remain ultimately responsible for scientific output… Authorship implies agency and responsibility, so it lies with human researchers.” (Living Guidelines on the Responsible Use of Generative AI, 2024)
“The requirement of accountability... necessitates that mechanisms be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment, and use.” (Ethics Guidelines for Trustworthy AI, 2019)
AI ethics
Use AI ethically, following established guidelines.
Teachers are expected to apply AI in alignment with ethical standards, ensuring fairness and avoiding any form of bias in educational outcomes. When using AI for grading or feedback, teachers must verify that AI outputs are unbiased and transparent, maintaining equitable treatment of all students.
“Teachers are expected to be able to internalize essential ethical rules for the safe and responsible use of AI, including respecting data privacy, intellectual property rights and other legal frameworks; and habitually incorporate these ethics into evaluations and utilizations of AI tools, data and AI-generated content in education.” (UNESCO. 2024. AI competency framework for teachers)
“A human-centred approach to AI in education is critical – an approach that promotes key ethical and practical principles to help regulate and guide practices of all stakeholders throughout the entire life cycle of AI systems. The approach encompasses four core principles: the design and use of AI should be at the service of strengthening human capacities as well as sustainable development; access to, and deployment of AI, should be equitable and inclusive; AI models in use should be explainable, safe and do no harm; and finally, the selection, use and monitoring of the impact of AI should be human controlled and human accountable.” (UNESCO. 2024. AI competency framework for teachers)
“Trustworthy AI has three components, which should be met throughout the system’s entire life cycle: 1. it should be lawful, complying with all applicable laws and regulations; 2. it should be ethical, ensuring adherence to ethical principles and values; and 3. it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm. Each of these three components is necessary but not sufficient in itself to achieve Trustworthy AI.” (Ethics Guidelines for Trustworthy AI, 2019)
“Education and awareness to foster an ethical mind-set. Trustworthy AI encourages the informed participation of all stakeholders. Communication, education and training play an important role, both to ensure that knowledge of the potential impact of AI systems is widespread, and to make people aware that they can participate in shaping the societal development.” (Ethics Guidelines for Trustworthy AI, 2019)
Students can trust that AI used in their educational experience follows ethical guidelines, ensuring that assessments and feedback are fair and unbiased. Students are encouraged to engage with AI tools responsibly, aware that AI is a supportive tool, not a replacement for human judgment or personal effort.
The UNESCO AI competency framework (2024) describes and summarises AI ethics in the following way.
Students are expected to be able to develop a basic understanding of the ethical issues around AI, and the potential impact of AI on human rights, social justice, inclusion, equity and climate change within their local context and with regard to their personal lives. They will understand, and internalize the following key ethical principles, and will translate these in their reflective practices and uses of AI tools in their lives and learning:
Do no harm: Evaluating AI’s regulatory compliance and potential to infringe on human rights
Proportionality: Assessing AI’s benefits against risks and costs; evaluating context appropriateness
Non-discrimination: Detecting biases and promoting inclusivity and sustainability (understanding AI’s environmental and societal impacts)
Human determination: Emphasizing human agency and accountability in AI use
Transparency: Advocating for the rights of users to understand AI operations and decisions
Managers and administrators should implement policies that uphold ethical standards in AI, ensuring responsible usage and training staff in ethical AI practices. This includes monitoring AI systems for adherence to transparency, fairness, and accountability within the institution.
"Institutions must promote trust in AI by ensuring systems operate transparently and ethically." - OECD AI Principles (2019)
"Upholding ethical standards in AI usage is essential to maintaining trust and integrity in institutions." - OECD AI Principles (2019)
"Institutions must prioritize transparency and accountability in AI operations, ensuring compliance with ethical standards." - Digital Education Action Plan (2021-2027)
Staff at the university should be informed about ethical standards in AI and apply these principles in their work. Staff members play a role in monitoring AI applications to ensure they respect privacy, fairness, and transparency, promoting an ethical approach to technology that benefits the university community.
"AI systems should be designed to uphold ethical principles and align with societal values." - UNESCO AI Ethics Recommendations (2021)
"All AI applications in educational settings must reflect ethical values and principles that safeguard users' rights." - European Ethical Guidelines for Trustworthy AI (2019)
"The use of AI must be compatible with the institution’s ethical and democratic values." - BRJU, §1038
Legally, AI use in education must comply with ethical standards set by the EU and international bodies, including avoiding discrimination, ensuring transparency, and maintaining accountability. These regulations mandate that AI systems are fair and uphold the rights of individuals, protecting users from unethical practices.
"Ethical AI systems must be fair, transparent, and accountable." - EU AI Act, Chapter 4
"Ethics in AI is essential to legal compliance and societal acceptance." - GDPR, Recital 75
"AI systems in education must respect fundamental rights, democratic values, and human rights." - Swedish Higher Education Act, Chapter 1
Bias and fairness
Employ AI tools fairly and inclusively, avoiding bias.
Teachers should ensure that AI tools used in education treat all students equitably. For example, when utilizing AI for assessment, teachers must verify that the AI system operates without biases that could disadvantage any group based on demographics or background, actively working to detect and mitigate such biases.
The AI Act specifically mentions that the “Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system” and that this includes “examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations” (Artificial Intelligence Act, European Parliament and Council, Article 10, Section 2).
“In order to achieve Trustworthy AI, we must enable inclusion and diversity throughout the entire AI system’s life cycle. Besides the consideration and involvement of all affected stakeholders throughout the process, this also entails ensuring equal access through inclusive design processes as well as equal treatment. This requirement is closely linked with the principle of fairness.” (Ethics Guidelines for Trustworthy AI, 2019)
“Data sets used by AI systems (both for training and operation) may suffer from the inclusion of inadvertent historic bias, incompleteness and bad governance models. The continuation of such biases could lead to unintended (in)direct prejudice and discrimination against certain groups or people, potentially exacerbating prejudice and marginalisation.” (Ethics Guidelines for Trustworthy AI, 2019)
“Accessibility to this technology for persons with disabilities, which are present in all societal groups, is of particular importance. AI systems should not have a one-size-fits-all approach and should consider Universal Design principles addressing the widest possible range of users, following relevant accessibility standards. This will enable equitable access and active participation of all people in existing and emerging computer-mediated human activities and with regard to assistive technologies.” (Ethics Guidelines for Trustworthy AI, 2019)
“AI systems raise new types of ethical issues that include, but are not limited to, their impact on decision-making, employment and labour, social interaction, health care, education, media, access to information, digital divide, personal data and consumer protection, environment, democracy, rule of law, security and policing, dual use, and human rights and fundamental freedoms, including freedom of expression, privacy and nondiscrimination. Furthermore, new ethical challenges are created by the potential of AI algorithms to reproduce and reinforce existing biases, and thus to exacerbate already existing forms of discrimination, prejudice and stereotyping.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
Students are assured that AI tools in their educational journey will uphold fairness and not exhibit bias that could impact their evaluations. Students are encouraged to engage with AI tools in a way that respects diversity and supports inclusivity within their educational environment. They are also expected to avoid using AI in ways that perpetuate biases or discrimination and to act in accordance with the university’s commitment to fairness and inclusivity.
"High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system's output and use it appropriately." (Artificial Intelligence Act, European Parliament and Council, Article 13, Section 1).
“Data sets used by AI systems (both for training and operation) may suffer from the inclusion of inadvertent historic bias, incompleteness and bad governance models. The continuation of such biases could lead to unintended (in)direct prejudice and discrimination against certain groups or people, potentially exacerbating prejudice and marginalisation.” (Ethics Guidelines for Trustworthy AI, 2019)
“AI systems raise new types of ethical issues that include, but are not limited to, their impact on decision-making, employment and labour, social interaction, health care, education, media, access to information, digital divide, personal data and consumer protection, environment, democracy, rule of law, security and policing, dual use, and human rights and fundamental freedoms, including freedom of expression, privacy and nondiscrimination. Furthermore, new ethical challenges are created by the potential of AI algorithms to reproduce and reinforce existing biases, and thus to exacerbate already existing forms of discrimination, prejudice and stereotyping.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
Managers are responsible for ensuring that AI tools implemented in the institution adhere to fairness standards. This includes conducting regular reviews of AI algorithms to detect, document, and address any biases, ensuring the tools used reflect institutional values of inclusivity and equal treatment.
“Data sets used by AI systems (both for training and operation) may suffer from the inclusion of inadvertent historic bias, incompleteness and bad governance models. The continuation of such biases could lead to unintended (in)direct prejudice and discrimination against certain groups or people, potentially exacerbating prejudice and marginalisation.” (Ethics Guidelines for Trustworthy AI, 2019)
“AI systems raise new types of ethical issues that include, but are not limited to, their impact on decision-making, employment and labour, social interaction, health care, education, media, access to information, digital divide, personal data and consumer protection, environment, democracy, rule of law, security and policing, dual use, and human rights and fundamental freedoms, including freedom of expression, privacy and nondiscrimination. Furthermore, new ethical challenges are created by the potential of AI algorithms to reproduce and reinforce existing biases, and thus to exacerbate already existing forms of discrimination, prejudice and stereotyping.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
"Training, validation, and testing data sets shall be relevant, representative, free of errors, and complete, and shall take into account the characteristics of the specific geographical, behavioural, or functional setting for which the system is intended." (Artificial Intelligence Act, European Parliament and Council, Article 10, Section 3).
"Personal data shall be processed fairly and in a transparent manner in relation to the data subject." (GDPR, Article 5, Section 1a).
Staff at the university are expected to be aware of AI ethics and fairness guidelines, supporting a culture where AI tools are used inclusively and without discrimination. Staff can contribute by reporting any suspected biases in AI tools to ensure alignment with the institution's fairness standards.
“Data sets used by AI systems (both for training and operation) may suffer from the inclusion of inadvertent historic bias, incompleteness and bad governance models. The continuation of such biases could lead to unintended (in)direct prejudice and discrimination against certain groups or people, potentially exacerbating prejudice and marginalisation.” (Ethics Guidelines for Trustworthy AI, 2019)
“AI systems raise new types of ethical issues that include, but are not limited to, their impact on decision-making, employment and labour, social interaction, health care, education, media, access to information, digital divide, personal data and consumer protection, environment, democracy, rule of law, security and policing, dual use, and human rights and fundamental freedoms, including freedom of expression, privacy and nondiscrimination. Furthermore, new ethical challenges are created by the potential of AI algorithms to reproduce and reinforce existing biases, and thus to exacerbate already existing forms of discrimination, prejudice and stereotyping.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
"Personal data shall be processed fairly and in a transparent manner in relation to the data subject." (GDPR, Article 5, Section 1a).
Researchers must mitigate biases in AI methodologies, ensuring fair representation and non-discrimination in datasets and algorithms.
“Training, validation, and testing datasets should be relevant, representative, free of errors, and complete, ensuring fairness and avoiding discrimination” (AI Act, Article 10, Section 3).
“Reliability in ensuring the quality of research… involves being aware of possible equality and non-discrimination issues in relation to bias and inaccuracies” (Living Guidelines on the Responsible Use of Generative AI, 2024).
Risk management
Assess and manage AI risks.
Teachers are encouraged to regularly evaluate the potential risks associated with AI tools they use in teaching, such as unintended biases in assessments. By staying informed on risk management practices, teachers can take proactive steps to mitigate any adverse effects on students' learning experiences.
“Unwanted harms (safety risks), as well as vulnerabilities to attack (security risks) should be avoided and should be addressed, prevented and eliminated throughout the life cycle of AI systems to ensure human, environmental and ecosystem safety and security.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
"The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: (a) the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose." (Artificial Intelligence Act, European Parliament and Council, Article 9, Section 2)
"Educators need to ensure that AI systems they are using are reliable, fair, safe, and trustworthy and that the management of educational data is secure, protects the privacy of individuals, and is used for the common good." (Ethical Guidelines on the Use of Artificial Intelligence and Data in Teaching and Learning for Educators, European Commission, 2022)
Students benefit from risk management practices that ensure AI tools are safe, reliable, and ethically implemented. Students should also participate responsibly, reporting any concerns related to AI usage in their assessments or feedback. Furthermore, students are expected to recognize the risks of misuse or over-reliance on AI tools, ensuring that their use aligns with the university’s policies and standards for ethical academic conduct.
“Unwanted harms (safety risks), as well as vulnerabilities to attack (security risks) should be avoided and should be addressed, prevented and eliminated throughout the life cycle of AI systems to ensure human, environmental and ecosystem safety and security.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
"Providers of high-risk AI systems shall establish, implement, and document a risk management system as part of their quality management system." (Artificial Intelligence Act, European Parliament and Council, Article 9, Section 1)
"Educators need to ensure that AI systems they are using are reliable, fair, safe, and trustworthy and that the management of educational data is secure, protects the privacy of individuals, and is used for the common good." (Ethical Guidelines on the Use of Artificial Intelligence and Data in Teaching and Learning for Educators, European Commission, 2022)
Managers and administrators should implement comprehensive risk management frameworks to evaluate, monitor, and mitigate risks associated with AI in educational settings. This includes conducting regular audits, enforcing compliance with data protection laws, and providing training on risk management for staff.
“Proportionality and Do No Harm. It should be recognized that AI technologies do not necessarily, per se, ensure human and environmental and ecosystem flourishing. Furthermore, none of the processes related to the AI system life cycle shall exceed what is necessary to achieve legitimate aims or objectives and should be appropriate to the context. In the event of possible occurrence of any harm to human beings, human rights and fundamental freedoms, communities and society at large or the environment and ecosystems, the implementation of procedures for risk assessment and the adoption of measures in order to preclude the occurrence of such harm should be ensured.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
“Unwanted harms (safety risks), as well as vulnerabilities to attack (security risks) should be avoided and should be addressed, prevented and eliminated throughout the life cycle of AI systems to ensure human, environmental and ecosystem safety and security.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
"High-risk AI systems shall undergo a conformity assessment procedure prior to being placed on the market or put into service to identify, estimate, and evaluate risks." (Artificial Intelligence Act, European Parliament and Council, Article 19, Section 1)
"Where a type of processing [...] is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall [...] carry out an assessment of the impact of the envisaged processing operations on the protection of personal data." (GDPR, Article 35, Section 1)
University staff should be informed of the risk management protocols associated with AI tools used in their work. Staff members are encouraged to follow guidelines, report any identified risks, and help in maintaining a secure AI-driven environment in compliance with ethical and institutional standards.
“Resilience to attack and security. AI systems, like all software systems, should be protected against vulnerabilities that can allow them to be exploited by adversaries, e.g. hacking. Attacks may target the data (data poisoning), the model (model leakage) or the underlying infrastructure, both software and hardware. If an AI system is attacked, e.g. in adversarial attacks, the data as well as system behaviour can be changed, leading the system to make different decisions, or causing it to shut down altogether. Systems and data can also become corrupted by malicious intention or by exposure to unexpected situations. Insufficient security processes can also result in erroneous decisions or even physical harm. For AI systems to be considered secure, possible unintended applications of the AI system (e.g. dual-use applications) and potential abuse of the system by malicious actors should be taken into account, and steps should be taken to prevent and mitigate these.” (Ethics Guidelines for Trustworthy AI, 2019)
“Fallback plan and general safety. AI systems should have safeguards that enable a fallback plan in case of problems. This can mean that AI systems switch from a statistical to rule-based procedure, or that they ask for a human operator before continuing their action. It must be ensured that the system will do what it is supposed to do without harming living beings or the environment. This includes the minimisation of unintended consequences and errors.” (Ethics Guidelines for Trustworthy AI, 2019)
Researchers must identify and mitigate risks associated with AI systems, ensuring safety, security, and compliance with ethical standards.
“The risk management system shall comprise the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose…” (AI Act, Article 9, Section 2)
“Processes to clarify and assess potential risks associated with the use of AI systems... should be established” (Ethics Guidelines for Trustworthy AI, 2019)
Accountability
Take responsibility for aligning your use of AI with legal, ethical, and academic standards.
Teachers are responsible for ensuring that their use of AI aligns with academic standards and ethical guidelines. They must be prepared to address and be accountable for any issues that arise from the AI tools used in teaching, such as assessments or personalized feedback.
“It may be the case that sometimes humans would choose to rely on AI systems for reasons of efficacy, but the decision to cede control in limited contexts remains that of humans, as humans can resort to AI systems in decision-making and acting, but an AI system can never replace ultimate human responsibility and accountability.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
"Providers and deployers of high-risk AI systems shall ensure that their systems are used in accordance with the instructions for use and have the necessary competence, including an adequate level of AI literacy and training, to properly fulfill their tasks" (Artificial Intelligence Act, European Parliament and Council, Article 29, Section 2).
As a teacher, you are expected to know who is responsible and accountable for the deployment and use of an AI system. Often, you are responsible for the results and decisions produced by the system. “The requirement of accountability complements the above requirements, and is closely linked to the principle of fairness. It necessitates that mechanisms be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use.” (Ethics Guidelines for Trustworthy AI, 2019)
Students can expect transparency and accountability from their teachers and administrators regarding the use of AI in educational settings. They are encouraged to ask questions about AI processes that affect their education and to report any concerns. Students are also responsible for engaging ethically with AI tools made available to them and for maintaining accountability for their actions, ensuring that AI usage does not violate the university’s rules on academic integrity or lead to misconduct.
"Processing shall be lawful only if and to the extent that at least one of the following applies: the data subject has given consent to the processing of their personal data for one or more specific purposes." (GDPR, Article 6, Section 1). This quote highlights the principle of lawful and transparent handling of personal data, emphasizing accountability in AI use. For students, this means aligning their AI usage with legal and ethical standards, ensuring transparency and consent when handling or interacting with AI systems. The GDPR exempts purely private use, but when writing a thesis or using AI in the context of education, the use is not always purely private.
”As a student, you are responsible for reading and understanding the information provided by JU as regards what is and what is not permitted at examinations, for essays, etc. Please note that different rules may apply for different examinations and course components. It is your responsibility to read and understand the information provided.” (Student web, 2020. Rights and regulations: Cheating, disruptions and harassment)
Managers and administrators should establish clear accountability policies for AI usage across the institution, ensuring all staff understand their responsibilities. This includes maintaining transparency about AI applications, ensuring ethical standards are met, and providing resources and support to uphold these standards.
“It may be the case that sometimes humans would choose to rely on AI systems for reasons of efficacy, but the decision to cede control in limited contexts remains that of humans, as humans can resort to AI systems in decision-making and acting, but an AI system can never replace ultimate human responsibility and accountability.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
"Providers of high-risk AI systems shall establish a quality management system to ensure compliance with the requirements of this Regulation." (Artificial Intelligence Act, European Parliament and Council, Article 17, Section 1).
"The controller shall be responsible for, and be able to demonstrate compliance with, [data protection] principles." (GDPR, Article 5, Section 2).
Staff members across all levels are expected to uphold accountability standards for AI usage in their roles, ensuring transparency, fairness, and adherence to institutional guidelines. Staff should actively participate in training and follow established protocols to maintain responsible AI usage in the institution.
Researchers must ensure accountability for AI use and outcomes, fostering trust in AI-driven research.
“Accountability for the research from idea to publication, for its management and organisation, for training, supervision, and mentoring, and for its wider societal impacts” (European Code of Conduct for Research Integrity, 2023)
“Researchers remain ultimately responsible for scientific output… Researchers maintain a critical approach to using the output produced by generative AI and are aware of the tools’ limitations” (Living Guidelines on the Responsible Use of Generative AI, 2024)
AI empowerment and AI dependency
Use AI to enhance, not replace, your own competence or teaching capabilities.
Teachers can use AI to support their teaching efforts, streamlining tasks such as grading or providing feedback. However, they should maintain control over final decisions to ensure AI complements, rather than replaces, their instructional roles. AI should empower teachers to enhance learning experiences.
"Educators need to ensure that AI systems they are using are reliable, fair, safe, and trustworthy and that the management of educational data is secure, protects the privacy of individuals, and is used for the common good" (Ethical Guidelines on the Use of Artificial Intelligence and Data in Teaching and Learning for Educators, European Commission, 2022).
“Empowering teachers’ human-accountable use of AI: The ethical and legal responsibilities for designing and using AI should be attributed to individuals. In the specific context of AI competencies for teachers, this human-accountable principle implies that AI tools should not replace the legitimate accountability of teachers in education. Teachers should remain accountable for pedagogical decisions in the use of AI in teaching and in facilitating its uses by students. For teachers to be accountable at the practical level, a pre-condition is that policy-makers, teacher education institutions and schools assume responsibility for preparing and supporting teachers in the proper use of AI.” (UNESCO, AI Competency Framework for Teachers, 2024)
“Member States should encourage research initiatives on the responsible and ethical use of AI technologies in teaching, teacher training and e-learning, among other issues, to enhance opportunities and mitigate the challenges and risks involved in this area. The initiatives should be accompanied by an adequate assessment of the quality of education and impact on students and teachers of the use of AI technologies. Member States should also ensure that AI technologies empower students and teachers and enhance their experience, bearing in mind that relational and social aspects and the value of traditional forms of education are vital in teacher-student and student-student relationships and should be considered when discussing the adoption of AI technologies in education. AI systems used in learning should be subject to strict requirements when it comes to the monitoring, assessment of abilities, or prediction of the learners’ behaviours. AI should support the learning process without reducing cognitive abilities and without extracting sensitive information, in compliance with relevant personal data protection standards.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
Students should engage with AI as a tool to enrich their learning, without compromising their personal agency or responsibility. For example, while AI may suggest personalized learning paths, students are encouraged to remain active in their learning journey and make informed choices. Students are also expected to avoid dependency on AI tools, ensuring that they develop their own skills and competencies and contribute meaningfully to their education through independent effort and critical thinking. It is important to follow assignment and examination instructions, because the learning activities in your courses are designed in specific ways to support your development.
“AI-related teaching and learning should serve to build core AI competencies that allow students to accommodate new knowledge, as well as adapt to solving problems in new contexts with novel AI technologies. First and foremost, these core competencies must include values associated with an ethical and human-centred mindset. … The competencies also reflect the need to understand controversies surrounding AI and the key ethical principles that guide regulation, as well as foster practical skills to combat bias, protect privacy, promote transparency and accountability, and adopt an ethics-by-design approach to the co-creation of AI. … These core competencies constitute the foundation for further learning and more specialized use of AI in further education, work and life.” (UNESCO, AI Competency Framework for Students, 2024)
Managers and administrators should select and implement AI tools that empower both staff and students, enhancing rather than undermining their roles. Policies should prioritize AI systems that support academic and administrative tasks without eroding human decision-making.
“Empowering teachers’ human-accountable use of AI: The ethical and legal responsibilities for designing and using AI should be attributed to individuals. In the specific context of AI competencies for teachers, this human-accountable principle implies that AI tools should not replace the legitimate accountability of teachers in education. Teachers should remain accountable for pedagogical decisions in the use of AI in teaching and in facilitating its uses by students. For teachers to be accountable at the practical level, a pre-condition is that policy-makers, teacher education institutions and schools assume responsibility for preparing and supporting teachers in the proper use of AI.” (UNESCO, AI Competency Framework for Teachers, 2024)
“Member States should encourage research initiatives on the responsible and ethical use of AI technologies in teaching, teacher training and e-learning, among other issues, to enhance opportunities and mitigate the challenges and risks involved in this area. The initiatives should be accompanied by an adequate assessment of the quality of education and impact on students and teachers of the use of AI technologies. Member States should also ensure that AI technologies empower students and teachers and enhance their experience, bearing in mind that relational and social aspects and the value of traditional forms of education are vital in teacher-student and student-student relationships and should be considered when discussing the adoption of AI technologies in education. AI systems used in learning should be subject to strict requirements when it comes to the monitoring, assessment of abilities, or prediction of the learners’ behaviours. AI should support the learning process without reducing cognitive abilities and without extracting sensitive information, in compliance with relevant personal data protection standards.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
Staff members across all areas are encouraged to view AI as a supportive tool that complements their work. AI can assist in administrative and academic tasks, but staff should continue to exercise professional judgment and avoid over-reliance on AI.
Researchers should use AI tools to augment their research capabilities without becoming overly reliant on them.
“Generative AI may create possibilities and risks that can be hardly anticipated… Researchers maintain a critical approach to using the output produced by generative AI and are aware of the tools’ limitations” (Living Guidelines on the Responsible Use of Generative AI, 2024)
“AI systems should serve as tools for people, with the ultimate aim of increasing human well-being” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
Innovation and continuous improvement
Use AI to innovate and continuously improve your work in education.
Teachers can use AI tools to explore new teaching methods, analyze student performance data for insights, and refine instructional strategies. This continuous improvement helps adapt teaching to students' needs and enhances learning outcomes.
"Public sector innovation projects where AI tools are adapted and used, and where skills enhancement takes place, can have a significant impact on the effectiveness and quality of activities" (Swedish Ministry of Enterprise and Innovation, National Approach to Artificial Intelligence, 2018).
"Given the novel ethical issues triggered by AI and the potentially transformative opportunities AI may provide, it is crucial to equip teachers with the human-centred mindset, ethical behaviours, conceptual knowledge and application skills needed to make use of AI to enhance students’ learning and their own professional development" (UNESCO, AI Competency Framework for Teachers, 2024).
Students benefit from AI-driven innovations that introduce new learning tools, personalized resources, and adaptive technologies, enabling a customized and engaging educational experience. Students are encouraged to use these resources actively to improve their learning outcomes and to take a proactive part in their own learning, exploring innovative solutions while adhering to the university’s policies on responsible AI use. In short: try new tools, but always follow the rules.
"Public sector innovation projects where AI tools are adapted and used, and where skills enhancement takes place, can have a significant impact on the effectiveness and quality of activities" (Swedish Ministry of Enterprise and Innovation, National Approach to Artificial Intelligence, 2018).
Managers and administrators should create an environment that supports AI-driven innovation, ensuring access to resources and training that explore AI's potential to improve institutional operations, teaching quality, and research capabilities.
"Sweden needs pilot projects, testbeds and environments for development of AI applications in the public and private sectors, that can contribute to the use of AI evolving in a safe, secure and responsible manner. Sweden needs to continue to develop efforts to prevent and manage the risks associated with AI. Sweden needs to develop partnerships and collaborations on the use of AI applications with other countries, especially within the EU." (Swedish Ministry of Enterprise and Innovation, National Approach to Artificial Intelligence, 2018).
"The potential benefits of AI can be enormous in both the private and public sector, even if it is difficult to quantify them today. Public sector innovation projects where AI tools are adapted and used, and where skills enhancement takes place, can have a significant impact on the effectiveness and quality of activities" (Swedish Ministry of Enterprise and Innovation, National Approach to Artificial Intelligence, 2018).
Staff are encouraged to utilize AI for streamlining operations, enhancing administrative efficiency, and supporting academic staff. AI innovations allow staff to improve workflows and contribute to a forward-looking educational environment.
"The potential benefits of AI can be enormous in both the private and public sector, even if it is difficult to quantify them today. Public sector innovation projects where AI tools are adapted and used, and where skills enhancement takes place, can have a significant impact on the effectiveness and quality of activities" (Swedish Ministry of Enterprise and Innovation, National Approach to Artificial Intelligence, 2018).
Researchers can leverage AI to accelerate discovery, analyze complex datasets, and explore innovative methodologies. Continuous improvement in AI tools ensures that research stays at the forefront of academic excellence and societal impact.
"This Regulation should support innovation, respect freedom of science, and not undermine research and development activity" (AI Act, Recital 25).
"AI has great potential for accelerating scientific discovery and improving the effectiveness and pace of research" (Living Guidelines on the Responsible Use of Generative AI, 2024).
Sustainability
Use AI to support sustainability and minimize environmental impact.
Teachers can contribute to sustainability by using AI tools that are resource-efficient and by encouraging digital solutions that reduce paper use, thereby lowering the environmental footprint of educational practices.
“In line with the principles of fairness and prevention of harm, the broader society, other sentient beings and the environment should be also considered as stakeholders throughout the AI system’s life cycle. Sustainability and ecological responsibility of AI systems should be encouraged, and research should be fostered into AI solutions addressing areas of global concern, such as for instance the Sustainable Development Goals. Ideally, AI systems should be used to benefit all human beings, including future generations.” (Ethics Guidelines for Trustworthy AI, 2019)
“Sustainable and environmentally friendly AI. AI systems promise to help tackling some of the most pressing societal concerns, yet it must be ensured that this occurs in the most environmentally friendly way possible.” (Ethics Guidelines for Trustworthy AI, 2019)
“Social impact. Ubiquitous exposure to social AI systems in all areas of our lives (be it in education, work, care or entertainment) may alter our conception of social agency, or impact our social relationships and attachment. While AI systems can be used to enhance social skills, they can equally contribute to their deterioration. This could also affect people’s physical and mental wellbeing. The effects of these systems must therefore be carefully monitored and considered.” (Ethics Guidelines for Trustworthy AI, 2019)
Awareness of efficient prompting and AI literacy can lead to fewer resources being used. “All actors involved in the life cycle of AI systems must comply with applicable international law and domestic legislation, standards and practices, such as precaution, designed for environmental and ecosystem protection and restoration, and sustainable development. They should reduce the environmental impact of AI systems, including but not limited to its carbon footprint, to ensure the minimization of climate change and environmental risk factors, and prevent the unsustainable exploitation, use and transformation of natural resources contributing to the deterioration of the environment and the degradation of ecosystems.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
Students benefit from a focus on sustainability in AI by engaging in environmentally friendly practices and becoming more aware of the environmental implications of digital technologies, since using AI tools may have unintended effects on the environment. Students are encouraged to use digital resources responsibly, to prioritize eco-friendly practices in their studies, and to adopt sustainable habits, such as minimizing resource waste and supporting the university’s efforts to reduce its environmental impact.
Awareness of efficient prompting and AI literacy can lead to fewer resources being used. “All actors involved in the life cycle of AI systems must comply with applicable international law and domestic legislation, standards and practices, such as precaution, designed for environmental and ecosystem protection and restoration, and sustainable development. They should reduce the environmental impact of AI systems, including but not limited to its carbon footprint, to ensure the minimization of climate change and environmental risk factors, and prevent the unsustainable exploitation, use and transformation of natural resources contributing to the deterioration of the environment and the degradation of ecosystems.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
Managers and administrators should implement policies prioritizing sustainable AI usage, such as selecting energy-efficient systems and promoting eco-friendly practices across the institution to help reduce overall environmental impact.
Awareness of efficient prompting and AI literacy can lead to fewer resources being used. “All actors involved in the life cycle of AI systems must comply with applicable international law and domestic legislation, standards and practices, such as precaution, designed for environmental and ecosystem protection and restoration, and sustainable development. They should reduce the environmental impact of AI systems, including but not limited to its carbon footprint, to ensure the minimization of climate change and environmental risk factors, and prevent the unsustainable exploitation, use and transformation of natural resources contributing to the deterioration of the environment and the degradation of ecosystems.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
Staff are encouraged to use AI and other digital tools in ways that promote sustainability. By adopting energy-efficient technologies and minimizing unnecessary waste, staff contribute to the university's commitment to environmental stewardship.
Awareness of efficient prompting and AI literacy can lead to fewer resources being used. “All actors involved in the life cycle of AI systems must comply with applicable international law and domestic legislation, standards and practices, such as precaution, designed for environmental and ecosystem protection and restoration, and sustainable development. They should reduce the environmental impact of AI systems, including but not limited to its carbon footprint, to ensure the minimization of climate change and environmental risk factors, and prevent the unsustainable exploitation, use and transformation of natural resources contributing to the deterioration of the environment and the degradation of ecosystems.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
Researchers should ensure that AI methods and tools are optimized for energy efficiency and sustainability while balancing innovation with ecological responsibility. AI-driven research should aim to contribute positively to environmental challenges and sustainability goals.
Awareness of efficient prompting and AI literacy can lead to fewer resources being used. “All actors involved in the life cycle of AI systems must comply with applicable international law and domestic legislation, standards and practices, such as precaution, designed for environmental and ecosystem protection and restoration, and sustainable development. They should reduce the environmental impact of AI systems, including but not limited to its carbon footprint, to ensure the minimization of climate change and environmental risk factors, and prevent the unsustainable exploitation, use and transformation of natural resources contributing to the deterioration of the environment and the degradation of ecosystems.” (UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022)
“Sustainability and ecological responsibility of AI systems should be encouraged, and research should be fostered into AI solutions addressing areas of global concern, such as the Sustainable Development Goals” (Ethics Guidelines for Trustworthy AI, 2019)
“The system’s development, deployment and use process, as well as its entire supply chain, should be assessed… Measures securing the environmental friendliness of AI systems’ entire supply chain should be encouraged” (Ethics Guidelines for Trustworthy AI, 2019)