The AI Future We’re Creating
This spring, I visited a high school in central Illinois where the cornfields begin at the edge of the parking lot. I asked a classroom full of students to imagine their AI-powered future. There was a long silence before anyone spoke. “Robots will do everything better than us,” one student said with resignation. Another asked anxiously, “Will there be jobs for people like me?” Then a quiet voice from the back row said, “It depends on us.”
Those four words have stuck with me through policy discussions and tech summits. These teenagers understood something many experts miss: AI isn’t an autonomous force with predetermined outcomes. It’s a human creation whose impact depends on human choices – choices being made in rooms where most of us aren’t present.
From rural communities to corporate boardrooms, AI is reshaping how we live, work, and learn. It already influences decisions about healthcare, education, credit, and justice. Yet most of the people affected by these systems can neither see how they work nor influence how they are built. Some AI systems replicate hiring biases, automate insurance claim denials, and produce flawed risk assessments in criminal justice. These aren’t glitches; they’re symptoms of a deeper misalignment between technology and public accountability.
Learning from History
We’ve seen this pattern before. The Industrial Revolution promised abundance but delivered dangerous work conditions until labor movements fought for change. The internet democratized information access but created a surveillance economy that commodified personal data. Social media gave voice to millions but eroded public trust and accelerated polarization. Each time, technology outpaced regulation, and the gap between them determined who benefited and who suffered.
Creating a Better Future
AI raises the stakes: it is entangled more deeply in daily life, makes decisions faster, and is harder to inspect, often in critical domains. What’s at issue isn’t just convenience or productivity, but how our institutions are structured, how opportunity is distributed, and whether the systems that govern us deserve our trust. To close the dangerous gap between AI’s advancement and societal readiness, we must prioritize:
- AI Literacy: Teaching people to understand how algorithms shape their lives and how to interrogate these systems. Programs like Finland’s “Elements of AI” and the AI Education Project in the U.S. are models we’re supporting.
- Transparency: Requiring high-impact AI systems to carry public documentation describing their training data, how they work, and how they are monitored. A public registry of such documentation would give researchers and journalists the tools to hold these systems accountable (a sketch of what one registry entry might contain follows this list).
- Inclusion: Putting power in the hands of those most affected by AI systems. Organizations like the Algorithmic Justice League show what community-driven innovation looks like.
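To make the transparency proposal concrete, here is a minimal sketch of what a single registry entry might record. The field names and structure below are illustrative assumptions on my part, loosely inspired by published “model card” documentation practices rather than any existing registry’s schema.

```python
from dataclasses import dataclass, field

# A hypothetical record for a public AI-system registry.
# Field names are illustrative assumptions, not an existing standard.
@dataclass
class RegistryEntry:
    system_name: str                # e.g. a city's benefits-eligibility screener
    operator: str                   # the agency or company running the system
    purpose: str                    # plain-language description of what it decides
    training_data: str              # summary of data sources and collection dates
    known_limitations: list[str] = field(default_factory=list)
    monitoring: str = ""            # how outcomes are audited, and how often
    appeal_process: str = ""        # how affected people can contest a decision

# Example entry a hypothetical operator might file:
entry = RegistryEntry(
    system_name="Resume screening model v2",
    operator="Example Corp HR",
    purpose="Ranks job applications before human review",
    training_data="Internal hiring records, 2015-2022",
    known_limitations=["Underrepresents applicants with career gaps"],
    monitoring="Quarterly demographic-parity audit",
    appeal_process="Applicants may request human re-review",
)
```

Even a simple, machine-readable record like this would let outside reviewers compare systems on consistent terms and notice when required disclosures are missing.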
Democratizing AI Governance
Counterintuitively, democratizing AI governance doesn’t mean slowing innovation – it prevents technological dead ends. Technologies that distribute decision-making tend to be more adaptive, resilient, and valuable. We’re seeing early examples of inclusive AI governance in practice, from the Global Digital Compact’s call for participatory multilateral structures to community workshops at Harvard’s Berkman Klein Center.
Taking Action
If you’re concerned about these issues, here are immediate steps you can take:
- Join local oversight efforts by contacting your city council about AI use in municipal services.
- Ask your employer about their AI evaluation practices.
- Connect with local organizations that help residents participate in tech governance.
The students I met understood that creators embed their values into technological systems. As AI reshapes our institutions, the question isn’t whether it will advance quickly, but whether it will advance justly. The future of AI is ours to shape.