Balancing the Scales: Navigating the Regulatory Needs of Crypto and AI
Technology continues to evolve at a rapid pace, often outpacing the regulations designed to govern it. This is particularly evident in the parallel development and rise of both cryptocurrency and artificial intelligence (AI). Having spent years reporting on the intersection of innovation, finance, and policy, I’ve witnessed firsthand the transformative power of these technologies.
In the past, the decentralized nature, volatile markets, and potential for misuse of cryptocurrency led me to believe that it was the most pressing issue for regulators. The need for immediate action seemed paramount.
However, with the emergence of sophisticated AI — especially AI agents with the capacity for independent decision-making — my perspective has shifted. Both have the potential for major societal impact, and both require urgent attention. If forced to prioritize, I now see AI as the greater challenge. Its reach, its potential to blur ethical lines, and even its capacity to disrupt the crypto sector itself make it a more complex and immediate concern. We are in a regulatory race, and the consequences of falling behind could be profound.
The Ongoing Need for Crypto Regulation
Let’s consider cryptocurrency. When Bitcoin initially gained traction, it was hailed as a revolutionary alternative to traditional finance. However, it also raised red flags for regulators. The pseudonymity of blockchain transactions, dramatic price swings, and the potential for use in illicit activities such as money laundering created regulatory challenges.
I remember the frenzy of 2017, when Initial Coin Offerings (ICOs) were popping up everywhere, raising billions of dollars without proper oversight. This was a wake-up call for governments and financial watchdogs. The Financial Action Task Force (FATF) stepped in with guidelines to curb illicit uses of crypto, and countries began to develop laws to regulate exchanges and wallet providers. However, the global regulatory landscape remains uneven, with only about half of surveyed jurisdictions having robust crypto regulations in place. This leaves room for risks to persist.
The need for crypto regulation hasn’t diminished. With the total market value of cryptocurrencies reaching $3.1 trillion in early February 2025, digital assets are no longer a niche interest—they’re a significant part of the financial ecosystem. The rise of decentralized finance (DeFi), where users can lend, borrow, and trade without traditional intermediaries, has only added to the complexity. These platforms are innovative but often operate in a murky legal space, with little protection for users if things go wrong.
The collapse of FTX in 2022, which wiped out roughly $8 billion in customer funds, was a stark reminder. While regulators like the U.S. Securities and Exchange Commission (SEC) and the European Securities and Markets Authority (ESMA) have stepped up their efforts, fragmented global rules still leave gaps. Cross-border crypto transactions flow through regions with weak or no regulation, raising the stakes for both financial stability and crime prevention.
The Rapid Ascent of AI Agents
As significant as these issues are, they have been overshadowed by the rapid rise of AI. Initially, AI was viewed as a tool for improving efficiency, powering applications such as predictive analytics and targeted advertising. That has changed dramatically. Today’s AI systems, especially generative models like GPT-4 and autonomous AI agents, are not just tools; they are decision-makers. In finance, AI now manages portfolios, executes trades, and approves loans, tasks that previously relied on human expertise. At the current rate of adoption, AI could be handling 30% to 40% of all financial transactions by 2030.
This raises serious questions about accountability and risk: Who is liable when an AI agent makes a poor decision? How do we ensure these systems are transparent and fair? What happens when they make complex decisions at a scale and speed that humans cannot easily oversee?
AI agents are now deeply embedded in trading, using vast amounts of data to spot market trends and make split-second transactions. The European Central Bank (ECB) cautioned in 2024 that AI-driven trading could increase market volatility or lead to sudden market crashes if algorithms converge on the same strategies. Integrating AI into the crypto world could multiply these risks: AI is already being used to optimize trading strategies, detect fraud, and govern decentralized organizations, and the use of AI in crypto smart contracts could open the door to exploitation if these systems aren’t carefully designed.
Regulatory Systems and Intellectual Property
Another area where AI poses unique challenges is intellectual property. Generative AI can produce content, including text, images, and music, in seconds, but who owns the resulting output? AI-generated reports and analyses are becoming standard in finance, yet the legal status of this content is unclear, and legal disputes over AI developers training models on copyrighted financial data remain unresolved. In a survey conducted in my private groups, over 70% of business owners using AI for content creation were unsure about the legal implications.
This uncertainty is even more pronounced in crypto. AI-generated content is often used to promote new tokens or sway market sentiment, sometimes without disclosing AI involvement. These gray areas can be breeding grounds for abuse.
Our regulatory systems are struggling to keep up. Crypto regulation has made some progress, but it remains a fragmented effort. AI regulation lags even further behind. The EU’s AI Act, passed in 2024, is a step in the right direction: it categorizes AI systems by risk level and sets stricter rules for high-risk applications. However, the law has been criticized for not fully addressing the global nature of AI development or the specific challenges posed by AI agents.
Prioritizing AI Regulation
In my view, AI needs to take precedence. The risks of crypto, including volatility, fraud, and regulatory gaps, are serious, but they are largely contained within finance. AI, by contrast, has the potential to reshape nearly every element of society. Its ability to amplify risks within crypto, whether through AI-driven trading bots or flawed smart contracts, only underscores the need for a comprehensive approach.
The lessons from regulating digital assets, such as the need for consumer protections and international cooperation, can inform AI regulation. However, AI’s unique challenges, from ethical concerns to systemic risks, require urgency and innovation. Regulators need to act quickly, establishing clear rules for AI-driven decision-making and ensuring these systems are transparent and accountable. This will require technical expertise and collaboration across borders and sectors. Initiatives like the UK Financial Conduct Authority’s Digital Sandbox are a positive step, but they need to scale up and be adopted globally.
Ultimately, the regulatory race between crypto and AI isn’t about choosing one over the other; it’s about recognizing the risks each poses and responding accordingly. Both are transformative technologies. However, AI’s potential to disrupt decision-making, challenge ethical norms, and destabilize systems makes it the more immediate priority. We can’t afford to wait. The future of finance, technology, and society depends on getting this right. The clock is ticking.