Sandeep Nailwal, co-founder of Polygon and the open-source AI company Sentient, believes that artificial intelligence will never achieve consciousness because it lacks intention, a quality inherent in humans and other biological life forms.
“I don’t see that AI will have any significant level of conscience,” Nailwal stated in an interview with Cointelegraph, also dismissing the idea of a doomsday scenario where AI becomes self-aware and turns against humanity.
Nailwal expressed skepticism about the idea that consciousness arises randomly from complex chemical processes. While such processes enable complexity at the cellular level, he argued, they do not lead to consciousness.
Instead, Nailwal is more concerned about the potential for centralized entities to misuse artificial intelligence for surveillance and to limit individual freedoms, emphasizing the importance of transparency and the democratization of AI. He stated, “That is my core idea for how I came up with the idea of Sentient, that eventually the global AI, which can actually create a borderless world, should be an AI that is controlled by every human being.”
The executive further advocated for individuals to have their own custom AI models to safeguard their interests against AIs deployed by powerful institutions.
In October 2024, the AI company Anthropic published a paper examining ways AI could potentially harm humanity and possible solutions. The paper concluded that AI did not pose an immediate threat but could become dangerous as models grow more sophisticated.
David Holtzman, a former military intelligence professional and chief strategy officer of the decentralized security protocol Naoris, told Cointelegraph that AI poses a significant risk to privacy in the short term. Like Nailwal, Holtzman argued that centralized institutions such as governments could deploy AI for surveillance, making decentralization a crucial defense against AI threats.