Global Voices has released a new policy on the use of artificial intelligence (AI) tools in creating content for the platform. The policy addresses the growing use of tools based on large language models (LLMs), such as ChatGPT and DeepSeek, as well as image generators like Midjourney. It centers on maintaining trust with the audience and upholding the organization’s mission of amplifying unheard voices.
The Importance of Trust
At its heart, the new policy is about maintaining the trust that Global Voices has cultivated with its readers. The organization stresses that LLMs are not designed to be factually accurate. While these tools might sometimes generate correct information, this is due to the probabilistic nature of their models, not any inherent commitment to truth. As a media organization dedicated to journalistic principles, Global Voices emphasizes the importance of factual reporting, arguing that relying on LLMs for writing, translation, or image generation could undermine this commitment and erode reader trust.
Amplifying Unheard Voices
The second key reason for the policy stems from Global Voices’ commitment to amplifying voices often marginalized in mainstream media. The policy states that because LLMs rely on existing data for training, the outputs they provide are often biased toward popular and prevalent information available online. This can push the internet toward homogeneity, suppressing diversity, minority opinions, and the voices Global Voices aims to highlight.
In the organization’s own words: “We strive to publish ideas and knowledge even — indeed, especially — if they don’t quite fit with the standard language or the standard ways of thinking.”
Beyond the Core Mission
The organization also cites other considerations, underscoring its awareness of the broader implications of LLMs, including:
- Environmental Impact: The energy consumption of LLMs.
- Copyright Issues: Potential copyright infringement related to both the training data and the outputs of LLMs.
- Labor and Ethics: Concerns about the human workers who help train and refine LLMs, many of whom face poor working conditions and low pay.
Finding a Balance
The policy acknowledges that AI tools make writing, translation, and illustration easier, and that they are especially helpful for people working in a non-native language. It distinguishes between basic tools such as spell checkers, grammar correction, and synonym finders and the AI-based tools that are not appropriate for Global Voices’ needs. The new policy mirrors the organization’s long-standing practice of not accepting articles copied from encyclopedias or translations that are merely word-for-word dictionary look-ups. Global Voices aims to ensure that the writing, translation, and illustration it publishes are not the product of automated generation.
Enforcement and Future Updates
As a decentralized organization, Global Voices depends on trusting its contributors. Because no current technology can reliably detect AI-generated content, the organization reserves the right to review any writing, translation, or illustration that raises concerns about AI use. Contributors may be asked to rewrite or resubmit material, or the content may be rejected or removed entirely. Finally, because the technology in this field is evolving rapidly, the policy will be updated as necessary.