
Researchers gathered this past May for the 38th International FLAIRS Conference in Daytona Beach, Florida to discuss and share the latest ideas in artificial intelligence. University of South Florida assistant professor Dr. Ly Dinh, associate professor Dr. Loni Hagen, and USF undergraduates Alina Hagen, Daniel Tafmizi, and Christopher Reddish contributed their research in the artificial intelligence field.

Dr. Ly Dinh, an assistant professor at the School of Information, presented her research on “Evaluating open- and closed-source LLM-based chatbots to combat cybercrime.” Currently, there are no safe and trustworthy tools that help people report or protect themselves from cybercrime. To address this time-sensitive problem, Dr. Dinh and her team developed and evaluated “SeniorSafeAI,” an AI-driven tool designed to help seniors identify and respond to potential scams while protecting their personal information. Throughout the project, they used both quantitative and qualitative methods, testing well-known base and fine-tuned Large Language Models (LLMs) such as ChatGPT-4o, Mistral, and Llama for accuracy and human user preference. Overall, they found that while ChatGPT-4o prioritizes easy-to-understand language in a digestible format, its responses often diverged from domain-specific language and content compared to fine-tuned, open-source models. In the future, Dr. Dinh plans to work with Pasco County Libraries to continue her research, beginning with a user study of 10-15 senior citizens to evaluate the chatbot interface, the overall user experience, and the chatbot’s trustworthiness. The team will also test how likely senior citizens are to act on the chatbot’s advice.
Four School of Information undergraduates from Dr. Loni Hagen’s Big Data Analytics Lab (BDAL), including Alina Hagen, Daniel Tafmizi, and Christopher Reddish, presented “Human and AI Alignment on Stance Detection: A Case Study of the United Healthcare CEO Assassination.” They studied the discourse following the assassination of the United Healthcare CEO, focusing on the accuracy of mainstream media reporting and the key themes and conflicts that emerged on social media platforms. To do this, they used LLMs and human annotation to label users’ stances on Luigi Mangione in three categories: In Favor, Neutral, and Against. They then used these labels to report evaluation results based on human-human agreement and human-AI agreement. This research contributes to the larger discussion on developing better prompt design for fine-tuning and can help social scientists adopt LLMs for stance detection using social media data.
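Human-AI agreement of this kind is commonly quantified with a chance-corrected statistic such as Cohen’s kappa, which compares how often two annotators (here, a human coder and an LLM) assign the same stance label against the agreement expected by chance. The study does not specify its exact metric, so the sketch below, with made-up labels, is purely illustrative:

```python
from collections import Counter

# The three stance categories used in the study.
STANCES = ["In Favor", "Neutral", "Against"]

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label lists."""
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[s] * freq_b[s] for s in STANCES) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical annotations: one human coder vs. one LLM on six posts.
human = ["In Favor", "Neutral", "Against", "In Favor", "Against", "Neutral"]
llm   = ["In Favor", "Neutral", "Against", "Neutral",  "Against", "Neutral"]

print(round(cohen_kappa(human, llm), 2))  # 0.75
```

A kappa near 1.0 indicates the LLM's stance labels closely track the human annotator's; values near 0 mean agreement is no better than chance.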