Marcus’ research helps quality-assure future AI models

Doctoral student Marcus Gullstrand

From PlayStation to neural networks. Marcus Gullstrand is a doctoral student in computer science within the framework of AFAIR’s research project XAI-Pro: “It’s definitely possible to have a career in AI in Jönköping – as a researcher and in the professional world!”

Why have you chosen a career in AI?

“I've always been fascinated by computers. Desktops, laptops, phones, server computers... it's so much fun! It all started with games and my first PlayStation, and the interest has stayed with me from primary and secondary school to my university studies. For me, AI is a natural progression of that interest. AI gives us more opportunities the more we learn. Personally, I find AI interesting since it's a technology that can be used in all sorts of domains, for different purposes. Stock trading, car design, robotics or healthcare... It's not specific to any single subject, and the possibilities are endless.”

How did you become a part of the XAI-Pro project?

“I took a Basic Science Year at the School of Engineering here in Jönköping. After that, you are guaranteed a place on a program, and I chose the three-year program in Data and Mobile Platforms, DMP. I wrote a thesis based on AI technology and enjoyed it so much that I continued with the then-new master's program ‘AI Engineering’. In total, that's six years of studies in Jönköping. I then worked at Combitech here in Jönköping for a year, but I had only been working for a few months when I was asked about doctoral studies. During my master’s, I had both worked part-time and collaborated scientifically with Maria Riveiro, who is the project leader for XAI-Pro, so that's how I got involved in the project.”

You're researching neural networks. What are they?

“Neural networks are a subcategory of machine learning, and machine learning algorithms have the ability to learn from experience, or data. You could say that a neural network functions like the human brain. These kinds of self-learning algorithms are what drive the development of AI. I focus on deep networks and on how we can extract individual things the network has learned, such as recognizing a cat or a dog. As humans, we know exactly what a cat looks like and can distinguish a cat from a dog without problems. We know, from experience, what information corresponds to a cat. But what kind of data does a neural network need, and how do you train it to understand that this information says ‘cat’? My research is about isolating the parts of the network that make the cat decision, and about how the cat is described internally by the network. It's a bit like brain surgery – cutting out just the little piece of the AI model that deals with cats, learning how it works, and how it connects to the rest of the model. These are central questions in Explainable AI research. As AI technologies go, I find neural networks particularly interesting because it's both a bit of an art and a science to design their architecture and then train them to do all sorts of things.”
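
To give a flavour of what “cutting out” a concept can look like in practice, here is a minimal, hypothetical Python/PyTorch sketch. It does not reproduce the project's actual methods: the toy network and the random stand-in ‘cat’ and ‘dog’ batches are assumptions purely for illustration. The idea is to read out a network's hidden activations and rank units by how differently they respond to the two classes – the units with the largest gap are candidates for the ‘cat’ part of the model.

    # Hypothetical sketch: locating "concept units" inside a network.
    # Random tensors stand in for real cat/dog images, and the toy CNN
    # is untrained; only the mechanics of the analysis are shown.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy classifier: 2 classes ("cat" = 0, "dog" = 1).
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 2),
    )

    # Capture the 16-dimensional hidden representation with a forward hook.
    hidden = {}
    def save_hidden(module, inputs, output):
        hidden["z"] = output.detach()
    model[3].register_forward_hook(save_hidden)  # hook on the Flatten layer

    # Stand-ins for batches of cat and dog images, shape (N, C, H, W).
    cats = torch.randn(64, 3, 32, 32)
    dogs = torch.randn(64, 3, 32, 32)

    model(cats)
    z_cat = hidden["z"]  # (64, 16) hidden activations for cats
    model(dogs)
    z_dog = hidden["z"]  # (64, 16) hidden activations for dogs

    # Rank hidden units by how differently they respond to the two classes.
    # Units with the largest gap are candidate "cat units" to study further.
    gap = (z_cat.mean(0) - z_dog.mean(0)).abs()
    print("Most class-selective units:", gap.topk(3).indices.tolist())

In a real study the network would be trained and the images real, but the principle is the same: the concept lives in a measurable subset of the internal representation.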

Why is this knowledge important?

“If you know how an AI model makes its decisions, and also which parts of the model contribute to a decision, it becomes easier to find weaknesses in the model. When we are certain that the model has been trained to classify ‘cat’ data correctly, we can use that knowledge when we go on to train it to recognize other animals, for example a dog or a horse. We also know which parts of the model's internal structure contribute to the actual decision for a cat or a dog. This research is important for being able to quality-assure the models that will later be used out in society.”
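
Continuing the hypothetical sketch above, one simple quality-assurance check is an ablation test: zero out the units identified as ‘cat units’ and see whether the model's cat prediction actually degrades. A large drop supports the claim that those units carry the cat concept; no drop reveals a weakness in the explanation. Again, the network, data, and unit indices below are illustrative stand-ins, not the project's actual models.

    # Hypothetical ablation test: remove candidate "cat units" and
    # measure the effect on the model's cat prediction.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    encoder = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )
    head = nn.Linear(16, 2)  # logits for ("cat", "dog")

    images = torch.randn(64, 3, 32, 32)  # stand-in for cat images
    cat_units = [4, 7, 11]               # units identified earlier (hypothetical)

    z = encoder(images)
    baseline = head(z)[:, 0].mean().item()  # average "cat" logit, intact model

    z_ablated = z.clone()
    z_ablated[:, cat_units] = 0.0            # surgically remove the cat units
    ablated = head(z_ablated)[:, 0].mean().item()

    print(f"cat logit: {baseline:.3f} intact vs {ablated:.3f} ablated")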

XAI-Pro (Explainable AI for product and service development) is one of the research projects run within AFAIR.


Here you can read more about the XAI-Pro project.

2024-02-16