Exploring Concerns of Fake News on ChatGPT: A Network Analysis of Social Media Conversations

Pramukh N. Vasist, Satish Krishnan, Thompson Teo, Nasreen Azad
This study investigates public concerns regarding ChatGPT's potential to generate and spread fake news. Using social network analysis and text analysis, the authors examined social media conversations on Twitter over 22 weeks to identify key themes, influential users, and overall sentiment surrounding the issue.
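
The paper does not report its specific tooling, so the following minimal Python sketch is only illustrative: it assumes a directed interaction graph built with networkx (in-degree centrality as a rough proxy for influencers, betweenness centrality for gatekeepers) and VADER compound scores for sentiment labeling. The handles and tweets below are hypothetical.

import networkx as nx
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Hypothetical (source_user, target_user, tweet_text) interactions, e.g. retweets or mentions.
interactions = [
    ("alice", "bob", "ChatGPT could flood feeds with convincing fake news."),
    ("carol", "bob", "Impressed by ChatGPT, but worried about fabricated citations."),
    ("bob", "dave", "We need provenance standards for AI-generated text."),
]

# Build a directed interaction network (who engages with whom).
G = nx.DiGraph()
for src, dst, _ in interactions:
    G.add_edge(src, dst)

# Influencers: users who attract engagement (in-degree centrality).
# Gatekeepers: users who bridge otherwise separate groups (betweenness centrality).
influencers = nx.in_degree_centrality(G)
gatekeepers = nx.betweenness_centrality(G)

# Label each tweet's sentiment using VADER's compound score (common +/-0.05 thresholds).
analyzer = SentimentIntensityAnalyzer()
for _, _, text in interactions:
    score = analyzer.polarity_scores(text)["compound"]
    label = "negative" if score <= -0.05 else "positive" if score >= 0.05 else "neutral"
    print(f"{label:8s} {score:+.2f}  {text}")

print("Most engaged-with user:", max(influencers, key=influencers.get))
print("Strongest bridge (gatekeeper):", max(gatekeepers, key=gatekeepers.get))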

Problem
The rapid emergence and adoption of powerful generative AI tools like ChatGPT have raised significant concerns about their potential misuse for creating and disseminating large-scale misinformation. This study addresses the need to understand early user perceptions and the nature of online discourse about this threat, which can influence public opinion and the technology's development.

Outcome
- A social network analysis identified an engaged community of users, including AI experts, journalists, and business leaders, actively discussing the risks of ChatGPT generating fake news, particularly in politics, healthcare, and journalism.
- Sentiment analysis of the conversations revealed a predominantly negative outlook, with nearly 60% of expressed sentiment conveying apprehension about ChatGPT's potential to create false information.
- Key actors functioning as influencers and gatekeepers were identified, shaping the narrative around the tool's tendency to produce biased or fabricated content.
- A follow-up analysis nearly two years after ChatGPT's launch showed a slight decrease in negative sentiment, but user concerns remained persistent and comparable to those for other AI tools like Gemini and Copilot, highlighting the need for stricter regulation.
Keywords: ChatGPT, Disinformation, Fake News, Generative AI, Social Network Analysis, Misinformation