The emergence of agentic AI has taken the technology world by storm, but its rapid ascent has been accompanied by a significant challenge: the absence of a universally accepted definition. This lack of standardization is causing confusion among Chief Information Officers (CIOs) and other IT leaders as they navigate the complex landscape of AI products and services.
The Problem of Definition
Agentic AI, which has supplanted generative AI at the pinnacle of the technology hype cycle, is touted by numerous vendors as a revolutionary solution. However, the term ‘agentic AI’ means different things to different people. Some experts define it as a tool capable of making autonomous decisions within an organization, learning from past experiences, and adapting its responses accordingly. Others suggest that any AI with some degree of decision-making functionality qualifies as agentic.
The Reality Behind ‘Agent-Washing’
Critics argue that many vendors are misrepresenting their products, pitching simpler AI chatbots, assistants, or add-ons to large language models (LLMs) as agentic AI. According to Zach Bartholomew, VP of product at Perigon, many so-called agents are merely ‘LLM wrappers’ or ‘glorified LLM workflows.’ Chris Shayan, head of AI at Backbase, echoes this sentiment, stating that there is a great deal of ‘agent-washing’ in the IT industry: basic automation rebranded as autonomous agents, leaving CIOs and CTOs struggling to distinguish genuine agentic AI from traditional algorithms dressed up with improved interfaces.
True Autonomy in AI
Shayan emphasizes that true agents can reason through multiple steps and possess some independent decision-making authority. For instance, the banking industry has started implementing AI agents that can detect unusual transaction patterns and take appropriate action without constant human supervision. True autonomy in software, according to Shayan, means the ability to handle end-to-end processes independently, from gathering information and analyzing options to executing actions and learning from outcomes.
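The distinction is easier to see in code. The following is a minimal sketch, not any vendor's implementation: it contrasts a one-shot ‘LLM wrapper’ with an agent loop that gathers context, weighs options, acts, and records outcomes. The llm stub, the transaction fields, and the hold/approve logic are all hypothetical stand-ins.

```python
# Hypothetical stand-ins throughout: llm(), the transaction fields, and the
# hold/approve rule are illustrative, not any vendor's API.

def llm(prompt: str) -> str:
    """Stub standing in for a real language-model call."""
    return "hold" if "unusual" in prompt else "approve"

def wrapper(transaction: dict) -> str:
    # A 'glorified LLM workflow': one call, no follow-up steps, no memory.
    return llm(f"Classify this transaction: {transaction}")

def agent(transactions: list[dict], memory: list[dict]) -> None:
    # An agent loop: gather information, analyze options, act, learn from outcomes.
    for tx in transactions:
        history = [m for m in memory if m["account"] == tx["account"]]          # gather information
        decision = llm(f"History: {history}. Transaction: {tx}. approve or hold?")  # analyze options
        if decision == "hold":
            print(f"Holding transaction {tx['id']} for review")                 # execute an action
        memory.append({**tx, "decision": decision})                             # learn from the outcome

if __name__ == "__main__":
    txs = [{"id": 1, "account": "A", "note": "unusual wire transfer"},
           {"id": 2, "account": "B", "note": "coffee"}]
    agent(txs, memory=[])
```

In the wrapper, the model is called once and nothing is retained; in the loop, each outcome is appended to memory so later decisions can draw on it, which is the learning-from-outcomes step Shayan describes.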
The Autonomy Continuum
Not all AI experts agree on the definition, or on the timeline for true agentic AI. While Bartholomew believes that true agents are about a year away from deployment, David Lloyd, chief AI officer (CAIO) at Dayforce, views agentic AI as a spectrum of capabilities rather than a binary category. Lloyd suggests that the focus should be on whether a particular AI tool delivers quantifiable business value, rather than on pinning down definitions.
Navigating the Complexity
Ilia Badeev, head of data science at TrEvolution, contends that the term ‘AI agent’ is currently more of a marketing label than a well-defined term. He advises CIOs and IT procurement leaders to look beyond the labeling and focus on the capabilities they need. Badeev recommends evaluating the functionality, accuracy, and price of AI tools, rather than getting caught up in the ‘agent’ nomenclature.
Best Practices for CIOs
To navigate this complex landscape, CIOs and IT procurement leaders are advised to ask critical questions before investing in AI agents. These include: Can the AI plan and execute multi-step processes autonomously? Does it learn or improve over time? What kinds of decisions can it handle independently? Can it take meaningful actions without requiring human approval? How well does it integrate with the existing IT stack? Bartholomew also stresses the importance of retaining the option to audit the agent’s actions, suggesting that human oversight will remain crucial for the foreseeable future.
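Bartholomew’s audit point can be sketched in a few lines. The example below is illustrative only, assuming a simple JSON-lines log and an arbitrary risk threshold; the field names and the execute_with_oversight helper are hypothetical, not drawn from any product’s API. Every proposed action is logged, and anything above the threshold waits for explicit human approval before it runs.

```python
# Illustrative sketch of an auditable, human-in-the-loop action gate.
# The log format, field names, and risk threshold are assumptions.
import json
import time

AUDIT_LOG = "agent_audit.jsonl"

def record(entry: dict) -> None:
    """Append one auditable event to a JSON-lines log."""
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({"ts": time.time(), **entry}) + "\n")

def execute_with_oversight(action: dict, risk_threshold: float = 0.7) -> None:
    record({"event": "proposed", **action})
    if action["risk"] >= risk_threshold:
        # High-impact actions wait for a human decision before running.
        approved = input(f"Approve '{action['name']}'? [y/N] ").lower() == "y"
        record({"event": "human_review", "approved": approved, **action})
        if not approved:
            return
    # ... perform the action here ...
    record({"event": "executed", **action})

# A low-risk action runs immediately; a high-risk one waits for sign-off.
execute_with_oversight({"name": "send_reminder_email", "risk": 0.1})
execute_with_oversight({"name": "issue_refund", "risk": 0.9})
```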
Conclusion
While the lack of standardization in agentic AI presents challenges, it also offers an opportunity for CIOs and IT leaders to carefully evaluate their needs and make informed decisions. By focusing on the capabilities and value that AI tools can bring to their organizations, rather than getting caught up in marketing terminology, they can harness the true potential of this emerging technology.