Using artificial intelligence in education, whether as a student, teacher or researcher, requires an understanding of the risks and consequences of AI-based tools. A number of ethical guidelines are available to help us approach AI in education and in everyday life, and they can provide a framework for teaching, assessment and examination.

Summary

The ethical guidelines described below can be summarised as the need to use AI tools responsibly, from a broader perspective and in line with existing policies, legislation and regulations. The following points are central in this context:

  • Transparency: Educators must ensure transparency in the use of AI, including the data sources used, algorithms applied and any decision-making processes.
  • Beneficence: The use of AI in education should aim to benefit students, teachers and society as a whole.
  • Accountability: Teachers must be responsible for the use of AI systems in education, and there should be clear lines of accountability for any outcomes resulting from their use.
  • Professional responsibility: Teachers and trainers must use AI in a responsible and professional manner, taking into account the potential impact on learners.
  • Security: AI systems used in education must be secure and protect against unauthorised access or misuse of data.
  • Respect for fundamental rights: AI systems must respect fundamental rights, including privacy, non-discrimination and freedom of expression.
  • Fairness: AI systems must be designed to avoid and mitigate bias and discrimination.
  • Inclusion: AI should be designed in a way that ensures that all learners can benefit equally, regardless of background or ability.
  • Accessibility: AI should be designed in a way that makes it accessible to all learners, including those with disabilities or other special needs.
  • Human oversight: AI should be used to support, not replace, human teachers, and there should always be human oversight of AI systems used in education.

Summary of the European Commission's approach

Ethical use of AI requires educational responsibility, accountability and transparency at all stages, accessibility and inclusiveness, and attention to developing the competences that prepare individuals for a changing world.

The European Commission

In October 2022, the European Commission published guidelines on the ethical use of artificial intelligence (AI) in education. You can read the full document at the Publications Office of the European Union: Ethical guidelines for teachers on the use of artificial intelligence (AI) and data in teaching and learning.

There are also guidelines on how to assess the trustworthiness of AI, which can likewise guide how we view the use of AI in education.

The European Commission's Ethics Guidelines for Trustworthy AI describe several points that clarify the conditions for using AI and for developing its use.

Taken together with similar policy documents and guidelines developed at European level, such as the New Skills Agenda for Europe and Artificial Intelligence for Europe, the ethical approach to AI in the EU can be summarised in the following points:

  • Human autonomy and agency: AI tools in education should support student autonomy and decision-making and empower them to participate in the learning process.
  • Inclusion and equity: AI systems should be accessible to all students, taking into account different needs and backgrounds, and ensuring that no one is left out.
  • Pedagogical responsibility: Teachers should maintain control of the educational process and use AI as a tool to improve teaching and learning, taking into account its limitations.
  • Ethical use of data: Data collected with AI tools should be used in an ethical way that protects learners' privacy and ensures proper consent and transparency.
  • Personalised learning and well-being: AI systems should be used to provide personalised learning experiences while taking into account learners' well-being and mental health.
  • Lawfulness, ethics and robustness: lawful, ethical and robust AI is the basis for trustworthy AI.
  • Respect for human dignity: AI systems should be designed to benefit individuals and society and ensure that human values and fundamental rights are upheld.
  • Openness and transparency: AI systems should be explainable and provide clear information about their capabilities and limitations to users and stakeholders.
  • Fairness and non-discrimination: AI systems should avoid bias and ensure equal treatment of individuals, regardless of their characteristics or background.
  • Accountability: Those involved in the development and deployment of AI systems should be responsible for their actions and decisions.
  • Robustness and safety: AI systems should be reliable, safe and resilient to minimise risks and potential harm to users and society.
  • Privacy and data governance: Personal data should be handled securely and in accordance with data protection regulations, and users should have control over their data.
  • Lifelong learning: Promote a culture of continuous learning and provide opportunities for individuals to acquire new skills throughout their lives.
  • Digital skills: Emphasise the development of digital competences, including AI skills, to ensure that individuals are prepared for the digital era.
  • Critical thinking and creativity: Promote skills such as critical thinking, problem solving and creativity to enable individuals to adapt to new challenges and find innovative solutions.
  • Social and emotional skills: Recognise the importance of social and emotional skills, such as empathy, teamwork and resilience, to thrive in a rapidly changing world.
  • Collaboration with stakeholders: Encourage collaboration between educational institutions, employers and other stakeholders to ensure that skills development initiatives are relevant and effective.
  • Ethical and legal framework: Develop and apply a robust regulatory framework to ensure the responsible and ethical use of AI technologies in Europe.
  • Innovation and competitiveness: Support research, innovation and entrepreneurship in AI to strengthen Europe's competitiveness and technological progress.
  • Social and economic impact: Address the potential impact of AI on employment, society and the economy, and promote inclusive growth and social cohesion.
  • Education and training: Integrate AI education and training programmes to equip individuals with the necessary skills to engage with AI technologies effectively.

UNESCO

In November 2021, UNESCO published the first global standard on AI ethics, the Recommendation on the Ethics of Artificial Intelligence. The framework was adopted by all 193 Member States.

The protection of human rights and human dignity is at the centre of the Recommendation, which is based on the promotion of fundamental principles such as transparency and fairness, as well as the importance of human oversight of AI systems.

What makes the Recommendation exceptionally applicable is its comprehensive policy action areas, which allow policy makers to put the core values and principles into action with respect to data governance, environment and ecosystems, gender equality, education and research, and health and social well-being, among many other areas.

The main objective of the UNESCO Recommendation on higher education and the ethical use of AI for students is to promote the integration of AI education into higher education curricula and learning outcomes, promote ethical awareness and critical thinking about AI, and ensure that AI systems used in educational contexts uphold human rights, fairness and transparency.

A Swedish perspective

The Swedish Government has developed a national approach to artificial intelligence, set out in Nationell inriktning för artificiell intelligens. It states that "Sweden should be the best in the world at using the opportunities of digitalisation" and outlines a number of points to enable this:

  • Sweden needs to develop regulations, standards, norms and ethical principles to guide the ethical and sustainable development and use of AI.
  • Sweden needs to work for Swedish and international standards and regulations that promote the use of AI and prevent risks.
  • Sweden needs to continuously review the need for digital infrastructure to utilise the opportunities that AI can provide.
  • Sweden needs to continue its efforts to make data available that can form a comprehensive infrastructure for using AI in areas where it adds value.
  • Sweden needs to continue to take an active role in the EU's work to promote digitalisation and to enable the benefits that the use of AI can bring.

Swedish strategy and policy

The overall goal is to create AI education, and uses of AI in education, that promote ethics, skills development and sustainability, so as to maximise the positive impact of AI on the education sector in Sweden.

The points are also applicable to higher education. Together with the Förordning om artificiell intelligens (in Swedish, Faktapromemoria 2020/21:FPM109), which proposes applying European legislation and takes into account, among other things, the development of the EU AI Act, they describe how the use of AI can be limited. For example, the proposal uses a risk-based approach to create a structured distinction between different types of AI systems and how they are used. Some types of AI systems are prohibited, while others may be used subject to restrictions and requirements, such as registration and monitoring by the responsible authority. Low-risk AI systems can be used without restrictions. It is proposed that public bodies be set up at national and EU level to monitor the market and ensure compliance. As students, teachers, trainers or researchers, we will need to navigate laws and regulations that describe how AI can and may be used.

By summarising the national approach to artificial intelligence, we can also compile the ethical guidelines that form the basis for legislators' work to regulate AI in Sweden. These come partly from the Government's assessment in the chapter on education:

  • Swedish universities and colleges need to educate enough people in AI, especially in terms of continuing and further education for professionals with a completed academic degree or equivalent.
  • Sweden needs a strong AI content in non-technical education programmes in order to create conditions for a broad and responsible application of the technology.
  • Sweden needs a strong link between research, higher education and innovation in AI.

They also come partly from a broader summary of the same chapter: to benefit from AI and ensure its ethical use, AI skills, including cybersecurity skills, are needed throughout society. Education, interdisciplinarity, lifelong learning and collaboration are emphasised as important:

  • Access to and dissemination of AI skills in society, in order to benefit from the technology.
  • Ethical, safe and sustainable use of AI.
  • The need for interdisciplinary expertise to ensure ethical use of AI.
  • The importance of lifelong learning and training in AI.
  • Collaboration between research, higher education and innovation in AI to meet the demands of technological and societal developments.