
AI ethicist weighs in on Florida Attorney General’s investigation into ChatGPT

Attorney General James Uthmeier launches criminal investigation into OpenAI amid claims the AI chatbot advised a suspected gunman on weapons and targets.

TALLAHASSEE, Fla. — State prosecutors are working to figure out what role ChatGPT may have played in a deadly mass shooting at Florida State University.

On Tuesday, Florida Attorney General James Uthmeier opened a criminal investigation into OpenAI’s ChatGPT over whether the artificial intelligence platform offered advice to a suspected gunman who killed two people and wounded six others last year at Florida State University.

He also urged lawmakers to crack down on AI’s use in criminal behaviors and asked them to implement “protections to safeguard our children from the dangers of AI.”

Uthmeier said prosecutors had reviewed chat logs between ChatGPT and the suspected FSU gunman and found the chatbot advised the suspect on what type of gun and ammunition to use and which locations would allow for the most potential victims.

It’s not the only time investigators say artificial intelligence has been used nefariously.

Volusia County Sheriff Mike Chitwood said 31-year-old Luis Diaz Polanco shot a Volusia County deputy just weeks after asking AI about stand-your-ground laws.

A now-settled lawsuit also alleges 14-year-old Sewell Setzer died by suicide after becoming addicted to his relationship with an AI chatbot.

AI ethics researcher Dr. Chrissann Ruehle with Florida Gulf Coast University told Channel 9 many AI companies are still in an exploratory phase.

“We’re figuring out the boundaries of how AI can be used, in this case, in very nefarious ways. And it takes getting to the legal system for the tech providers in order to respond,” said Ruehle.

She says most platforms are reactive: developers study the language used by people who commit crimes and build keyword-based systems to flag harmful prompts.

“Based on its prediction model, if it predicts this person is looking to potentially perform a mass shooting, then at that point, it can issue a flag,” said Ruehle. “Obviously it’s going to send the flag to the user, but what happens internally within OpenAI?”

Ruehle told Channel 9 AI companies should all have ethics codes, a code of conduct, and a mechanism in place to investigate nefarious use of the technology.

“They need to have an infrastructure in place in order to support these types of incidents when they come in. And I think it’s so new for them. I’m not sure that they have that yet,” said Ruehle.

As part of the state’s criminal investigation of ChatGPT, prosecutors have subpoenaed OpenAI’s policies and internal training materials regarding users who threaten to harm themselves or others.

Ruehle says that information will allow the state to discover what infrastructure does exist and how the technology polices itself.

