Wikipedia at the Crossroads: Harnessing AI Without Losing the Human Touch

A few months ago, I was invited to write a short response to a question submitted to the AI Helpdesk—a Dutch-language platform “where anyone can receive scientifically reliable and clear answers to questions about Artificial Intelligence (AI).” The question I was asked to address was: What are the prospects for AI and Wikipedia? Below is the original English version of my answer:

Platforms like Wikipedia stand at a pivotal intersection with AI, which offers vast potential for improving the online encyclopedia while posing significant risks. By leveraging AI tools effectively, Wikipedia can improve article quality, translation, and accessibility. However, challenges such as misinformation, bias, loss of human editorial input, the illusion of consensus, and community disengagement demand attention. A balanced approach that integrates human oversight with AI capabilities is essential for harnessing the benefits while minimizing harm.

Opportunities for Collaboration

1. Enhancing Content Quality:

AI tools can assist in generating, editing, and proofreading articles, improving the consistency and accessibility of information.
Translation AI enables multilingual access, broadening Wikipedia’s reach globally and ensuring inclusivity.

2. Supporting New Contributors:

By offering automated suggestions, AI can guide new contributors, lowering the barriers to entry and enhancing participation diversity.

Challenges to Address

1. Bias and Over-Standardization of Narratives:

LLMs, trained on dominant narratives, may perpetuate biases or marginalize minority viewpoints (Burton et al., 2024). The use of proprietary training data by LLMs risks reinforcing systemic biases and eroding cultural pluralism.

2. Misinformation and Quality Control:

AI tools lack inherent fact-checking ability, risking the propagation of errors (Yasseri, 2025).
Automated edits or content generation without rigorous oversight could compromise Wikipedia’s credibility.

3. Community Dynamics:

Over-reliance on AI might lead to reduced human editorial engagement, undermining the collaborative spirit of Wikipedia (Cui & Yasseri, 2024).
Community members may feel displaced if AI assumes tasks traditionally managed by volunteers.

Mitigating Harms: A Balanced Approach

1. Strengthening Human-AI Collaboration:

Employ AI as a tool to assist, not replace, human editors (Tsvetkova et al., 2024). AI can handle repetitive tasks while humans ensure nuance, context, and ethical judgment. AI can even be used to facilitate deliberation and conversation among human users (Traeger et al., 2020). To this end, transparent AI models, with clear documentation of data sources and decision processes, combined with careful design and task allocation, are essential.
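To make this division of labour concrete, here is a minimal sketch in Python of a human-in-the-loop workflow, in which an AI component proposes a routine fix and a volunteer makes the final call. The function names, the Suggestion record, and the console-based review step are all hypothetical illustrations, not an existing Wikipedia tool.

```python
# A minimal, hypothetical sketch of human-in-the-loop task allocation:
# the AI proposes routine fixes, but nothing is committed without a
# human decision. All names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Suggestion:
    article: str
    old_text: str
    new_text: str
    rationale: str  # documented so the decision process stays transparent

def ai_propose_copyedit(article: str, text: str) -> Suggestion:
    # Stand-in for a model call; a real system would query an LLM here.
    return Suggestion(article, text, text.replace("teh", "the"),
                      rationale="spelling correction")

def human_review(suggestion: Suggestion) -> bool:
    # Stand-in for a volunteer's judgment, gathered on the console.
    print(f"[{suggestion.article}] {suggestion.old_text!r} -> {suggestion.new_text!r}")
    print(f"Rationale: {suggestion.rationale}")
    return input("Apply this edit? [y/N] ").strip().lower() == "y"

suggestion = ai_propose_copyedit("Dublin", "Dublin is teh capital of Ireland.")
if human_review(suggestion):
    print("Edit applied by human decision.")
else:
    print("Edit discarded; the article is unchanged.")
```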

2. Developing AI-Resistant Safeguards:

Wikipedia can implement robust review mechanisms to vet AI contributions.
Establishing clear guidelines for integrating AI outputs into Wikipedia's content will preserve quality and neutrality. Many Wikipedia language editions already have local mechanisms for monitoring contributions by simple bots, new users, and so on; these could be generalized to cover contributions produced or assisted by AI, particularly large language models. Otherwise, we risk an illusion of consensus where none exists (Burton et al., 2024).
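As a thought experiment, the sketch below shows how an existing patrol rule for bots and newcomers might be generalized to AI-assisted contributions. The Edit record, the tag names, and the edit-count threshold are hypothetical assumptions for illustration; they are not part of MediaWiki or any Wikipedia policy.

```python
# A hypothetical sketch of generalizing patrol rules to AI-assisted
# edits. Tags, thresholds, and field names are illustrative assumptions.

from dataclasses import dataclass, field

# Assumed self-declared or detected tags marking AI involvement.
AI_TAGS = {"llm-assisted", "ai-generated", "machine-translation"}

@dataclass
class Edit:
    editor: str
    is_bot: bool
    editor_edit_count: int          # rough proxy for editor experience
    tags: set[str] = field(default_factory=set)

def needs_human_review(edit: Edit) -> bool:
    if edit.is_bot:
        return True                 # existing practice: patrol bot edits
    if edit.editor_edit_count < 50: # existing practice: patrol newcomers
        return True
    if edit.tags & AI_TAGS:         # proposed: patrol AI-assisted edits too
        return True
    return False

edits = [
    Edit("VeteranEditor", False, 12000),
    Edit("NewUser42", False, 3),
    Edit("VeteranEditor", False, 12000, tags={"llm-assisted"}),
]
review_queue = [e for e in edits if needs_human_review(e)]
print(f"{len(review_queue)} of {len(edits)} edits routed to human review.")
```

Such tagging would parallel the edit-tag and recent-changes patrol infrastructure that Wikipedia already operates; the new step is simply treating declared AI assistance as one more signal for review.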

3. Promoting Digital Literacy:

Empower users to critically evaluate AI-generated content. Public education on the workings and limitations of AI will reduce misinformation risks (Yasseri, 2025). To anyone familiar with generative AI, it may seem a trivial point that the content produced by large language models is "made up" and by no means guaranteed to be factually accurate (at least for now). A typical Wikipedia editor might not know this, and a typical Wikipedia reader most likely does not. Hence, promoting digital literacy could go a long way toward mitigating some of these risks.

The Path Forward

AI offers transformative potential for Wikipedia but requires thoughtful integration to align with its mission of providing free, neutral, and diverse knowledge. By embracing a human-centered approach and addressing the challenges of bias, misinformation, and community impact, Wikipedia can continue as a global beacon of collaborative knowledge in the AI era.

References:

Burton, J. W., Lopez-Lopez, E., Hechtlinger, S., Rahwan, Z., Aeschbach, S., Bakker, M. A., … & Hertwig, R. (2024). How large language models can reshape collective intelligence. Nature Human Behaviour, 1-13.

Cui, H., & Yasseri, T. (2024). AI-enhanced collective intelligence. Patterns, 5(11).

Traeger, M. L., Strohkorb Sebo, S., Jung, M., Scassellati, B., & Christakis, N. A. (2020). Vulnerable robots positively shape human conversational dynamics in a human–robot team. Proceedings of the National Academy of Sciences, 117(12), 6370-6375.

Tsvetkova, M., Yasseri, T., Pescetelli, N., & Werner, T. (2024). A new sociology of humans and machines. Nature Human Behaviour, 8(10), 1864-1876.

Yasseri, T. (2025). The Memory Machine: How Large Language Models Shape Our Collective Past. VerfBlog.

Published by Taha Yasseri

Workday Full Professor and Chair of Technology and Society at Trinity College Dublin and Technological University Dublin. Director of the TCD-TU Dublin Joint Centre for Sociology of Humans and Machines (SOHAM).