A Short Account of the Context of This Charter
Written 12 May 2025
Starting sometime in 2023, I found myself increasingly uneasy about disclosing that I use AI in my research, whether in casual conversations with friends, relatives and strangers, or within my professional environment, where I collaborate with fellow scientists and medical doctors. My research is far from being entirely based on AI: I primarily use other methods based on mathematical modelling; yet I do use AI in some forms, often combined with those other methods. My concerns arose because, as AI applications have become widely available to the general public, I have come to associate AI with many modern digital tools that I disagree with in many respects; indeed, I think most AI-based tools should be withdrawn. Yet, in my own work, Machine Learning, which is a part of AI, has allowed us to make tremendous progress in realising healthcare applications whose deep meaning I believe in. I thus found myself facing a contradiction: should I entirely reject AI on account of my belief that it does, and will continue to do, harm in excess of its true benefits?

The way out I found for myself has been to make my feelings and reflections more explicit and clear, and to share them with friends and colleagues. For now, my stance is to use AI, and to advise using it, in controlled ways, following clearly stated guidance, far more cautiously than is currently common, and certainly not to embrace it without restriction. The charter below is an attempt at formalising some simple ethics for using AI in my research field. It is meant to evolve as AI evolves in our society, and with the advice and suggestions I receive. All versions will be kept below in chronological order.
A Charter for the Computer Vision and Medical Image Processing Researcher in the AI Era
29 March 2025 version
The researcher commits to:
- Use all available means to monitor and ensure the confidentiality of personal data
- Systematically evaluate the environmental impact of the data storage, computing resources, and power consumption required to develop and experimentally evaluate, and then potentially deploy, the proposed methods, so as to choose the scientific and methodological approach with the least impact*
- Avoid, as far as can reasonably be foreseen, developing applications (or methods leading to applications) with an alienating potential for their users** or targeted at non-humanistic endeavours***
- Give oneself the right not to pursue research that would contradict one or more of the above principles
* problem-solving approaches other than AI exist, in particular mathematical modelling and explicit problem solutions
** in the sense of lowering the user's skill level or of preventing them from building up that skill; an example may be given by some applications based on generative AI
*** in the sense of not serving genuine and deep human interests; an example may be given by applications driven by commercial endeavours