For years, I’ve been reading science fiction and books about artificial intelligence, and these works have raised worrisome questions about the future of work. Self-driving trucks, autonomous cabs, and increasingly capable AI have all stoked fears that human workers will be displaced. Chess grandmasters felt this first with Deep Blue, and many computer scientists have long assumed that computers would eventually take over one task after another. If OpenAI is to be believed, that point may be much closer than we think. The company keeps rolling out new, more advanced features and models that it claims will match or even surpass human labor within months.
The first major sign of this shift came in February, when OpenAI released its Deep Research feature, powered by a version of its o3 model and available through the $200-a-month ChatGPT Pro subscription. The tool lets users request a paper or research memo on a specific topic, which ChatGPT generates in about 5 to 30 minutes. Though derivative, these documents explain their topics to a high standard and draw on a wide range of sources.
According to The Information, OpenAI plans to roll out AI “agents”: virtual employees that can use applications, search the web, and complete multi-step tasks. Much as companies already use AI for online help chats, these agents are meant to replace humans in more complex work.
A “high-income knowledge worker” agent would cost $2,000 a month, a software developer agent $10,000 a month, and a “PhD-level research” tier $20,000 a month. There is no set timeline for these products, but they make the goal of AI firms plain: selling high-priced professional tools that let companies replace their own labor.
If these agents perform as advertised, their high price could seem worthwhile. A coding agent would work around the clock, without benefits, and could be “fired” without notice or severance. At $10,000 a month, or $120,000 a year, an agent genuinely competitive with human programmers might still be cheaper than a comparable employee. Everything hinges on the agents’ competence, though, and there are reasons to be skeptical of OpenAI. The company has a history of management turmoil, including Sam Altman’s brief firing in November 2023. It has also rushed out new features while burning through investor money, losing roughly $5 billion last year, and its newest models produce more false information than their predecessors.
Deep Research is compelling, but it also hallucinates frequently, and an unreliable researcher who fabricates information is useless. If the company hasn’t solved hallucination and safety problems in its simpler products, there is no reason to believe the same issues won’t plague its $20,000-a-month AI agents.
