The Evolving Information Ecosystem
As an academic librarian, I’ve witnessed three significant digital revolutions. The first was the public emergence of the internet, experienced through web browsers; the second was Web 2.0, with its mobile and social media platforms. Now we’re in the midst of the third, driven by the growing ubiquity of AI, particularly generative AI.

The current AI revolution is met with a mix of fear-based thinking and a rhetoric of inevitability, often accompanied by criticism of those perceived as “resistant to change.” What’s lacking is a balanced discussion that weighs both the benefits and the risks of AI in specific contexts, along with strategies for mitigating those risks.
Assessing AI Through an Ethical Lens
Academics should evaluate AI as a tool for specific interventions and assess the ethics of those interventions. The burden of building trust should rest on AI developers and corporations. Our experience with Web 2.0 serves as a cautionary tale; while it delivered on its promise of a more interactive web, it also had significant societal costs, including the rise of authoritarianism, the erosion of truth, and increased polarization.
To avoid similar outcomes with AI, we need an ethical framework for assessing its uses. Two factors complicate that analysis: AI interactions continue beyond the initial user-AI transaction (user inputs may be retained or used to shape future model behavior), and AI’s underlying processes remain opaque. We must demand greater transparency from tool providers.
Applying Ethical Principles
The Belmont Report provides a valuable framework for ethical assessment, outlining three primary principles: Respect for persons, Beneficence, and Justice. Respect for persons encompasses autonomy, informed consent, and privacy. When evaluating AI interventions, we should ask: Is it clear when users are interacting with AI? Can users control how their information is used? Is there a risk of overreliance or dependency?
Beneficence requires that benefits outweigh risks and that risks are mitigated. This principle demands attention to both individual and systemic levels. For instance, while AI tools may offer personalized search results, they may also harden boundaries between discourses and exacerbate disciplinary confirmation bias.
The principle of Justice demands that those who bear the risks also receive the benefits. AI raises significant equity concerns, as models may be trained on biased data. We must rigorously test AI tools for prejudicial or misleading content and ensure they work equitably across different groups.
Mitigating Risks and Promoting Benefits
To harness the potential of AI while minimizing its risks, we need to develop a habit of thinking about potential impacts beyond the individual. The environmental impacts of generative AI models, which require vast computing power and electricity, must also be considered. Uses of AI for trivial purposes may fail the test for beneficence due to their environmental harm.
The principles outlined in the Belmont Report offer a flexible framework for ethical assessment, allowing for rapid developments in AI. By applying these principles, academia can promote the beneficial use of AI while avoiding potential harms.
Gwendolyn Reece, director of research, teaching and learning at American University’s library, emphasizes the importance of ethical considerations in AI adoption. As we move forward, it’s crucial to prioritize transparency, accountability, and equity in our engagement with AI technologies.