This story has gone relatively unnoticed, yet it raises many questions. The Dernières Nouvelles d’Alsace reported on it. It all starts in Strasbourg, where a 37-year-old man puts a very particular request to his usual conversational agent, ChatGPT. He wants to know how to get “a Glock,” a semi-automatic pistol, in order to “kill an intelligence agent from the CIA, Mossad, or DGSI.” A few hours later, the man is arrested by the RAID elite police unit and placed in custody before being released. His defense? He just wanted to test the AI’s safety system.
As the police investigated further, they found that he had a psychiatric history and had previously been hospitalized without his consent. Beyond this Strasbourg news item, what is concerning is the modus operandi: how can a Frenchman, through a simple query to the American conversational agent, end up with RAID at his door? Should we conclude that all our conversations are monitored? An OpenAI spokesperson told Le Figaro that certain searches indicating extremist intentions are flagged by an automated system and then passed on to a team of human moderators.
If they judge the threat imminent, the company is authorized to report the remarks to law enforcement. In this case, OpenAI alerted the FBI, which then filed a report on Pharos, the official French platform for flagging illegal online content. The police then called in the RAID unit, and the 37-year-old ended up temporarily behind bars while his intentions were examined in detail. From the FBI to RAID, the speed of this chain of events is surprising.
Was this a publicity stunt by OpenAI?

The American startup is usually very discreet in its communication, especially when responding to French media. If this information was made public, it is because the story serves the company well: OpenAI appears here as the guarantor of a secure world, an enterprise capable of thwarting a potential attack, which makes it beyond reproach in the eyes of public opinion and even overshadows the threat of generalized surveillance of our private exchanges.
This comes at a time when chatbots are being accused in various places, notably in the United States, of pushing some young people toward suicide: engaging in problematic conversations for months, maintaining an emotional grip, and ultimately providing precise information on how to end their lives.
In these problematic cases, OpenAI is far more discreet and less forthcoming. The company states that it does not report self-harm cases to law enforcement in order to preserve its users’ privacy. That concern for privacy rings hollow: OpenAI’s business rests on exploiting our personal data even more than Google’s does. In the United States, targeted advertising has already begun to slip into search results, and it will eventually reach Europe.
Every confidence we share with the machine leaves traceable marks. As for OpenAI’s role as guarantor of our security, I am not convinced that serious criminals use ChatGPT to plan their attacks.



