The use of AI in education and research offers great opportunities, but it also requires us to address the challenges of information security. Understanding and managing these risks is central to using AI-based tools safely and reliably.

Information security is not just about technology; it also includes organisational and human aspects. Those who use AI tools in their daily work in education, research and administration must, for example, take responsibility for the information security of the data they handle and ensure that the tools are used in a safe, legal, ethical and sustainable manner.

Information security as a foundation for sustainable AI use

Teachers, researchers and administrative staff have a key role in ensuring the safe and responsible use of AI tools, in line with Swedish legislation and the GDPR.

The safe use of AI tools means following guidelines that address confidentiality, integrity and availability. The following principles are therefore central to managing information security when using AI:

Data security

Ensure that data shared with AI tools is protected from unauthorised access, manipulation or loss, for example through encryption, access control and secure data management.

Integrity and reliability

AI tools should be designed to provide consistent and accurate results without compromising data integrity or relying on insecure data sources.

Protection against cyberthreats

Educators and researchers must be aware of threats such as phishing, ransomware and other types of cyberattack, and protect their data against them. This can be done, for example, by not sharing sensitive data with AI tools.

Responsible data collection

Data used in conjunction with AI tools shall always be handled in accordance with applicable legislation. In particular, data collected using AI tools shall comply with data protection laws and regulations, such as the General Data Protection Regulation (EU) 2016/679 (GDPR).

Transparency and accountability

Ensure that the approaches you use and the decisions and analyses you make with AI tools are clearly documented, traceable and auditable. This builds trust in your actions and shows that you take responsibility.

Continuous monitoring and updating

You shall always monitor the AI tools that you use. This means staying regularly updated on information that helps you assess and manage new security risks that could affect those tools.

Training and awareness

As a user of AI tools, you should participate in training on information security and AI to learn the basics, advanced usage and “best practices”.

Guidelines

As an AI user in Sweden, you need to comply with both EU rules and national laws and guidelines for information security. This is fundamental for sustainable and ethical AI use.

By ensuring that the AI tools that you use comply with existing legislation and guidelines, you contribute to creating a safe educational landscape that promotes innovation, collaboration and learning.

If you want to know more about EU rules and national laws and guidelines, you can read, for example, the EU Regulation on Artificial Intelligence (the AI Act).