United States: ChatGPT targeted in a criminal investigation after a shooting on a Florida campus

It’s a first in the United States. Florida Attorney General James Uthmeier has announced the opening of a criminal investigation targeting OpenAI and its ChatGPT chatbot in connection with the shooting that occurred in April 2025 at Florida State University. The attack, carried out by 20-year-old student Phoenix Ikner, left two people dead and six injured on campus. Authorities are now focusing on the exchanges between the suspect and the chatbot in the lead-up to the attack.

According to the information gathered, the shooter reportedly asked ChatGPT about weapons, ammunition, and the best times to maximize the impact of the attack. During a press conference, James Uthmeier stated that ChatGPT had “provided significant guidance to the shooter before he committed this hateful crime.” “My investigators told me that if this thing on the other side of the screen (ChatGPT) were a person, we would charge it with homicide,” he added.

“Unprecedented legal territory”

According to information obtained by The New York Times, the suspect also asked how the country would react to a campus shooting and when foot traffic peaked at the university. The attorney general said the proceedings mark a turning point, moving from a civil investigation to a criminal one, while acknowledging that pursuing a company on these grounds constitutes “unprecedented legal territory.”

For its part, OpenAI has denied any responsibility, stating that “ChatGPT was not responsible for this heinous crime” and that the tool only provided “factual responses.” The company says it is cooperating with authorities and working to strengthen its mechanisms to “detect dangerous intent” and “respond appropriately.” The ongoing investigation does not rule out legal action and may include a review of the company’s internal practices and safety policies.

In January, Google and Character.AI reached settlements with families who accused chatbots of harming minors and of leading one of them to take their own life.