A game-based approach to learning about ethics in Machine Learning

Over the last decade, the development of artificial intelligence has accelerated exponentially. Nowadays, the scope of Machine Learning applications has expanded to include almost every sector and industry, taking on a crucial role in the fields of science and engineering.

While their added value is no longer in question, unforeseen pitfalls can hide behind this promising technology. Society, democracy, and the environment can all be compromised by the ethical drift that algorithms can cause, particularly when biases and their consequences are not taken into account.

At EPFL, no fewer than 223 courses address concepts related to Machine Learning, and more and more teachers are working to integrate ethical considerations into their lessons. However, such considerations are inherently difficult to teach and require the development of specific pedagogical approaches.

Cécile Hardebolle is a researcher who specializes in engineering learning and teaching. Together with Patrick Jermann and Maria Carla Di Vincenzo, she is leading the development of a new pedagogical tool which takes the form of a game based on an interactive story. Drawing from case studies, this tool allows students to virtually confront and explore these issues in a safe environment.

Learning by playing

The game was originally a prototype developed by two Master’s students, Alexandre Pinazza and Ester Simkova, as part of the course “How People Learn: Designing Learning Tools II” taught by Roland Tormey.

“The goal of the exercise was to design a tool to support and integrate the learning of concepts related to ethics and sustainability in the fields of science and engineering. We really liked the idea proposed by Alexandre and Ester, which is based on an interactive scenario, a bit like those choose-your-own-adventure books. So we refined their idea by reworking the narrative structure to include analysis, reflection, and self-evaluation of affective reactions,” explains Cécile Hardebolle.

After a laboratory evaluation phase (the results of which are currently being published) and a first pilot in Tanja Käser’s “Machine Learning for Behavioral Data” class last spring, the game was tested by teachers from engineering schools during the 50th annual conference of the European Society for Engineering Education (SEFI).

The goal this semester was to validate the integration of the game in a classroom. “The framework of Nicolas Flammarion and Martin Jaggi’s course was perfect for this because they are both interested in digital ethics issues and had already started introducing these concepts into their program last year,” says Hardebolle.

The tool thus made its debut in the course “CS-433: Machine Learning,” taught by the two professors to more than 500 students.

Collaborating to address a shared concern

Cécile Hardebolle and Nicolas Flammarion

“We might think that because an algorithm does not reason like a human it will be impartial and free of bias in its decision-making, but this is completely false. What is important to keep in mind is that the data we use contains a lot of bias. The entire process, from data collection to its use and to the decisions made by algorithms, often not only preserves these biases but also introduces new ones.

“Ultimately, we reach a decision that can have significant consequences, for example for an underrepresented group,” explains Nicolas Flammarion. “Our goal is above all to train engineers who will go on to take responsibility and make decisions, in academia as well as in companies in the private sector.”

This observation echoes Cécile Hardebolle’s viewpoint: “It is really important that we train our students to reflect on the negative impacts that the technology they are going to design can have. How can we ensure that the expected benefits are not completely ruined by the harm that this technology may cause?”

The objective is to develop our engineers’ ability to respond to the needs of society while minimizing the risks associated with technology development.

Cécile Hardebolle

A virtual experience

Students had one week before Nicolas Flammarion’s class to play the game online. It immerses them in the role of a data scientist tasked with developing models that raise important ethical considerations.


Before each design decision, the scenario pauses and asks the students to reflect on and justify their choices. The story then continues based on the consequences of these choices, confronting the future engineers with their own cognitive biases and with the dangers related to the nature of the data used. At the end, the game invites students to re-examine the choices that led to negative impacts and gives them the opportunity to reformulate them, with emphasis on the emotions they felt.

A debriefing session then links the game experience to the underlying ethical concepts, after which the course introduces mathematical approaches to identify, evaluate, and quantify biases, and reviews existing methods for reducing them.
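As an illustration of what quantifying bias can look like in practice, here is a minimal sketch in Python, not taken from the CS-433 course material (the data and function names are purely illustrative), of one widely used group-fairness measure, the demographic parity difference, i.e. the gap in positive prediction rates between two groups:

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive prediction rates between group 0 and group 1."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # share of positive predictions in group 0
    rate_1 = y_pred[group == 1].mean()  # share of positive predictions in group 1
    return abs(rate_0 - rate_1)

# Toy example: a model accepts 70% of group 0 but only 40% of group 1.
y_pred = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0,   # predictions for group 0
          1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # predictions for group 1
group = [0] * 10 + [1] * 10
print(demographic_parity_difference(y_pred, group))  # ~0.3, i.e. a 30-point gap

A value of zero would mean that both groups receive positive predictions at the same rate; bias-mitigation methods aim to shrink such gaps without sacrificing too much accuracy.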

However, fairness criteria have their limits and do not eliminate all issues. “It is important to keep in mind that Machine Learning has no guarantee of being aligned with our societal values if we do not take the necessary measures,” emphasizes Nicolas Flammarion.

“This is why it seemed necessary to me to devote a dedicated chapter to the topic. The work that Cécile Hardebolle and CEDE are undertaking reinforces these notions perfectly and gets the reflection started ahead of my lesson,” says Flammarion.

Embedded in a larger effort to integrate ethics education into engineering, this learning tool is part of swissuniversities’ “P-8 Digital Skills” program. It is available to all EPFL teachers, adding to the series of interventions developed by CEDE and the Teaching Support Center (CAPE), both part of the LEARN Center for Learning Sciences.

Author(s): Julie Clerget
Imported from EPFL News