Our Position

Resolution: Professionals & Managers’ Social Responsibility for an Ethical AI

Adopted by Eurocadres General Assembly 15 October 2020

AI systems are neither neutral nor autonomous. The choices that govern their deployment can transform the future of work and the conditions for exercising individual and civic freedoms, for better or for worse. This is why our responsibility cannot be conceived in purely individual terms, nor disconnected from social and societal issues. It is essential to re-embed these technologies in social and societal reality, in the service of the general interest.

The risks of AI systems are multiple:

  • The risk of total surveillance, with dangerous mass social control;

  • The risk of mass profiling;
  • Risks arising from "overconfidence" in AI;
  • Risk of direct damage caused by AI used in bad faith;
  • Weakening of human responsibility (paradox of choice and decision-making fatigue);
  • Discrimination and bias.

The ethical challenge

ERPs[1] and platforms conceive and organize work through algorithms that act as the foremen of the 21st century. Invisible and difficult to control, these algorithms determine the instructions to be carried out in the course of work.

The professions that design and configure these systems affect many professional fields and sit at the crossroads of social and environmental issues. They require an awareness of the consequences of technical choices and configurations.

Guaranteeing the ethics of the design and use of AI-based solutions requires that these professions be both professionally committed and socially responsible.

Ways to be professionally committed and socially responsible

Facilitating the work of users of AI-based solutions by designing the devices as aids to activity and decision-making implies that the devices are also designed according to the expectations and professional ethics of the profession(s) concerned.

The aim is to enable humans interacting with an AI system to maintain their decision-making capacity by exercising supervision and control. The opacity of algorithms makes it necessary to explain how they operate and to control their decision-making capacities. We must be able to address the issue of “data and algorithms” in our trade union action, making them part of workers’ rights to training and information. This requires obligations of certification, qualification and explanation, in the human sense of the term, overseen by a trusted algorithmic third-party authority in which trade unions are represented and that aims to:

  • ensure the ethics of AI systems in their design and their use;
  • deal with any request for intervention from a developer or data scientist confronted with an ethical problem.

[1] Enterprise Resource Planning