Sword and Shield: Uses and Strategies of LLMs in Navigating Disinformation
Abstract
The emergence of Large Language Models (LLMs) presents a dual challenge in the fight against disinformation. These powerful tools, capable of generating human-like text at scale, can be weaponised to produce sophisticated disinformation, yet they also hold promise for enhancing mitigation strategies. This paper investigates the complex dynamics between LLMs and disinformation in small, localised settings through a communication game based on online forums, inspired by Werewolf, with 25 participants. We analyse how Disinformers, Moderators, and Users leverage LLMs to advance their goals, revealing both the potential for misuse and the potential for combating disinformation. Our findings highlight how participants' uses of LLMs vary with their roles and strategies, underscoring the importance of understanding LLM effectiveness in this context. We conclude by discussing implications for future LLM development and online platform design, advocating for a balanced approach that empowers users and fosters trust while mitigating the risks of LLM-assisted disinformation.
Keywords
- Large Language Model
- Communication Game
- Werewolf
- Chatbot Uses
- Manipulation Strategies
- Influence
- Disinformation
- Human-centered computing → Empirical studies