ChatGPT, the famous chatbot, fascinates some and worries others. One thing is certain: OpenAI’s innovation leaves no one indifferent. The tool has sent shivers down professors’ spines in recent weeks, following instances of cheating by students who used the virtual assistant. Generative AI is causing concern across many sectors. What are the risks of ChatGPT? What are its ethical and societal implications? How can we harness this technological power so that it generates only positive value?
THE ETHICAL AND SOCIETAL RISKS OF CHATGPT
- AIgiarism: a new term coined to describe plagiarism committed via chatbots, which poses a threat to authors and many industries.
- Biases or discrimination: even though the chatbot is pre-trained and supervised, it is not infallible. It is possible that offensive or dangerous comments could slip through the cracks of its neural network.
- The accuracy of data: the internet is full of technical, conceptual, and scientific inaccuracies… ChatGPT gathers information that may be false and presents it to us in its responses.
- The replacement of various professions: contrary to what one might think, high-value-added professions could also be at risk. This is the case for journalists and graphic designers, who have been divided on the subject of ChatGPT in recent weeks.
- The creation of biased chatbots: if ChatGPT had been created by experts with questionable ethics, the tool would be just as biased. One example is ChatCGT, a chatbot designed by Vincent Flibustier and his brother, who programmed it with left-wing views.
- Cybercrime: placing complete trust in a chatbot can pose a significant risk if the virtual agent was created with malicious intent (such as a financial scam).
Responses to the ethical and societal risks raised by ChatGPT
“Constraints spark the imagination.”
This famous line by Georges Brassens can be applied to the topic of artificial intelligence, and in particular to OpenAI’s remarkable chatbot.
While ChatGPT brings its share of risks, vulnerabilities, and concerns, it also prompts us to reevaluate our models and seek solutions to address and overcome these challenges. Due to the risk of plagiarism, some schools have already blocked the chatbot, while OpenAI has unveiled its new AI Text Classifier tool, which can identify text generated by ChatGPT.
Numerous articles have discussed the phenomenon of job displacement that could result from the deployment of AI. Although companies have already embarked on their technological revolution, the question of the future of jobs remained abstract, due to the difficulty in understanding what would actually change in everyday life. Yet, reflecting on this issue is essential to ensuring economic stability. ChatGPT has brought this prospect into stark reality.
The risk of unemployment looming over certain professions is real, but let’s never forget that AI is a human creation and requires experts capable of overseeing it and driving its development. As economist Pascal de Lima explains so well, professions will undergo a transformation: “Not all jobs will necessarily disappear, but they will evolve and become more sophisticated in the years to come. Others, on the contrary, will emerge… It is primarily jobs involving analysis and interpretation that will be preserved.”
No matter how powerful technology may be, it still needs to be imbued with empathy, fairness, critical thinking, values of equality and equity, and even a sense of humor… Only humans can teach it these principles and guide its development.

As for other threats (cybercrime, malicious misuse, misinformation, discrimination, etc.), they require careful consideration and action. Experts capable of addressing and countering them will be essential, but that is not enough. We need AI that is regulated, ethical, responsible, and subject to oversight. Regarding the accuracy of information provided by chatbots, for example, one might wonder whether committees will be created to validate or reject their outputs. Who will decide what is true or false in the responses of virtual assistants?

There is still much to be done and invented before we dive headfirst into the technological deep end. A new ecosystem is emerging. Before embracing intelligent systems across the board, it is important to understand their limitations and the threats they pose. On January 23, the CNIL (Commission nationale de l’informatique et des libertés) announced the creation of an artificial intelligence department. Similar initiatives are expected to follow in the coming months within other organizations.

One thing is certain: AI can generate positive value provided that all stakeholders in society collaborate on the issue. The goal is therefore to foster a shared vision and a productive framework across the various spheres of our society (economic, technological, legal, political, etc.).
The rollout of this technology raises social and ethical questions. Algorithms are evolving faster than our ability to think them through. This isn’t a problem, as long as we remain aware of it, assess the situation, and anticipate future developments. Resistance to change or technophobia no longer have a place. AI has made a stunning entrance into our lives with ChatGPT. It’s up to us to teach it to knock gently and ask permission before entering.

