Bold statement first: AI isn't just hype or tomorrow's miracle; it's a real technology with real, varied effects, and understanding its different uses is the key to seeing the full picture. And this is where many people get lost: they're seeing only one side of the coin and assuming that's the whole story.
The debate over whether AI is the next big thing or just overblown noise has never been more heated. For some, AI is the most helpful coworker you'll ever have; for others, it's simply a powerful search tool; and a third group believes it's vastly overrated. No consensus has emerged on which view is right.
Tech leaders have long promoted AI as a force that will reshape jobs and spark a new industrial era. Critics, meanwhile, say much of the hype is marketing, and some researchers and executives are sounding safety alarms as they exit their roles. The tension became especially visible after a viral essay by an AI CEO and investor suggested that AI could threaten almost any job that involves working at a computer.
A big reason for the mixed opinions might be simpler than it seems: people are using different kinds of AI in different ways, yet all of it gets bundled under the same umbrella.
“People have exposure to the technology at very different levels,” says Matt Murphy, a partner at Menlo Ventures who has invested in AI companies like Anthropic. “That gap is widening as the technology evolves quickly.”
If you’re using free, consumer-focused AI applications for simple tasks like making a shopping list or planning a trip, you’re likely only seeing one facet of what AI can do. A Menlo Ventures report from mid-2025 estimated that only about 3% of AI users were paid subscribers, though Murphy predicts that number will rise rapidly as needs grow.
Premium AI users gain access to more capable tools, such as agents that autonomously handle tasks instead of just providing chat-based responses, and they encounter fewer usage limits. For example, Anthropic’s Claude CoWork agent is available only with the $20-per-month Pro plan or higher. A similar pattern exists with OpenAI’s Codex for coding.
This is the kind of AI that fuels job-impact concerns, and it's at the heart of the provocative claims by Shumer, the AI CEO and investor behind the viral essay, and others. In that piece, Shumer described telling an AI to build an app, outline its features, and fine-tune everything from user flow to design, with the AI then writing tens of thousands of lines of code and even making usability judgments. He suggested the AI could eventually improve itself.
These assertions aren’t without controversy. Some AI researchers challenged the extent of the performance described, and Shumer later apologized, saying it was a major misstep and that he learned from the experience.
Even setting bold anecdotes aside, many experts question whether the scenarios Shumer described are feasible even with paid plans, especially since the specific model and project details were never independently verified. Shumer said he relied mainly on OpenAI's GPT-5.3 Codex and was testing a medium-to-high-complexity app.
Carnegie Mellon professor Emily DeJeu cautions against judging AI’s capabilities solely from free tools. She says it would be misguided to draw broad conclusions about what AI can do from those pared-down versions.
Oren Etzioni, former CEO of the Allen Institute for AI, likens the gap between free and paid AI to the difference between an eager intern and a seasoned, reliable professional. Free tiers can generate summaries and basic content, but for rigorous research or drafting sophisticated documents, paid capabilities are usually necessary.
Still, many AI firms are steadily bringing more advanced features into free tiers. Stanford AI expert James Landay notes that this blurs the line between free and paid, suggesting there may be less practical difference than some expect. For instance, Anthropic recently released Sonnet 4.6, which they say narrows the performance gap with their premium Opus models.
Tensions around AI and work have spilled over into markets. In early February, software stocks fell after Anthropic unveiled an industry-tailored AI assistant for fields like law and finance, fueling fears that AI could automate knowledge work beyond software development.
Despite the excitement, skepticism remains about how quickly and how deeply AI will transform jobs. Some studies have cast doubt on AI's capabilities and the pace of its adoption. Researchers from the Center for AI Safety and Scale AI found that leading models produced flawed results on data-visualization and coding tasks. A separate industry study found that developers took longer to complete coding work when using AI, though those findings were based on early-2025 tools.
Experts like Landay argue that AI’s role in software development is overstated. AI is a helpful speed-up tool, but it’s not a magic, self-writing assistant. Coding is a structured, testable domain where AI excels at automation, but many other professions involve complexity and judgment that aren’t as neatly codified.
In short, AI will reshape many industries, but its impact will vary by task, domain, and the sophistication of the tools you’re using. It’s essential to recognize the differences between free, consumer-grade AI and paid, enterprise-grade AI if you want a realistic sense of what’s coming—and what still requires human insight and oversight.
Discussion prompt: Do you think AI will primarily augment professional work, replace certain roles, or create entirely new kinds of jobs? Share your view and any experiences you’ve had with AI in your own work.