AI stock prices could collapse and AI intelligence is overhyped
Nikos Katsikanis - 13 October 2025
I’ve spent enough long nights pairing with AI agents to know they amplify developers’ skills, not replace them. The market keeps insisting AI will replace whole teams any day now, but up close, the cracks are obvious.
Agents amplify what I already know
If I’ve previously hand-coded something, or at least know how I would, AI agents help me reach the goal faster, provided I move in small, testable steps. It’s best to let the AI handle minor tasks, then validate the output immediately. The agent happily generates the tedious tests that I, as a human, tend to deprioritise.
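To make that concrete, here is a minimal sketch of the kind of tedious test I let an agent write for me. The `validateEmail` helper and the choice of vitest are illustrative assumptions, not code from any project mentioned here; the point is that the output is small, boring, and easy to verify at a glance.

```ts
// validateEmail.test.ts — the sort of low-stakes test an agent will happily churn out.
import { describe, it, expect } from "vitest";

// Minimal stand-in implementation so the file runs on its own (hypothetical helper).
function validateEmail(input: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input.trim());
}

describe("validateEmail", () => {
  it("accepts a plain, well-formed address", () => {
    expect(validateEmail("ana@example.com")).toBe(true);
  });

  it("tolerates surrounding whitespace", () => {
    expect(validateEmail("  ana@example.com  ")).toBe(true);
  });

  it("rejects a missing domain", () => {
    expect(validateEmail("ana@")).toBe(false);
  });

  it("rejects an empty string", () => {
    expect(validateEmail("")).toBe(false);
  });
});
```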
I treat code as the single source of truth. It’s a denser, more precise language than any human language (German, English, or anything else). AI can’t invent a product vision for me, and it can’t read my mind or the minds of my customers.
Teams are worth more than layoff savings
I’ve yet to meet an AI that can deliver business value faster than the best developers I knew before this new wave of agents. We keep hearing from big-tech founders who believe companies can replace seasoned developers with AI tools. That idea ignores the deep context required to work effectively in a large, established codebase.
When a teammate leaves, I don’t just lose a pair of hands. I lose institutional memory, unspoken conventions, and the reflexes built from late-night incidents. Those things don’t show up in documentation or tickets.
The smarter play is to keep the core team and teach them how to use AI to enhance their output. Let AI handle the dull, low-risk work such as form validation, boilerplate CRUD, and basic tests. That balance is healthier than firing staff and hoping an AI model can learn the system solo. For critical infrastructure work such as infrastructure-as-code (IaC), human-written code or a slow, incremental AI-assisted approach is usually the better option.
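As a sketch of what that dull, low-risk work looks like, here is a hypothetical sign-up form schema using zod. The library choice and every field name are my own assumptions for illustration; code like this is cheap to regenerate and quick for a human to review, so delegating it to an agent carries little risk.

```ts
// membershipForm.ts — hypothetical form-validation boilerplate, the kind of code
// that is safe to hand to an agent and easy to check afterwards.
import { z } from "zod";

export const membershipFormSchema = z.object({
  fullName: z.string().min(1, "Name is required"),
  email: z.string().email("Enter a valid email address"),
  chapter: z.string().min(1, "Pick a chapter"),
  agreeToTerms: z.literal(true), // must be exactly `true` to pass
});

export type MembershipForm = z.infer<typeof membershipFormSchema>;

// safeParse never throws: it returns either typed data or field-level errors,
// which keeps the calling route handler simple.
const result = membershipFormSchema.safeParse({
  fullName: "Ana Example",
  email: "ana@example.com",
  chapter: "Example Chapter",
  agreeToTerms: true,
});

if (result.success) {
  console.log("valid submission:", result.data);
} else {
  console.log(result.error.flatten().fieldErrors);
}
```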
Why human context still matters
I’m currently the architect of a national U.S. non-profit’s web platform (AGCRA) that includes nearly 100,000 lines of business logic, not counting test suites and ETL code. I carry two years of context in my head, built over thousands of hours of debugging, refactoring, and manual testing. No AI is going to replace that anytime soon. You can’t load years of context into a model every time you want to change something.
AI companies are still not profitable and could disappear
If major AI providers go bust, model quality could slide back to whatever we can run on our own hardware. Right now that’s roughly GPT-3-level performance, and only if you’re willing to wire up a chunky laptop or an ultra-spec GPU rig in your home office.
Cloud models won’t vanish, but their prices can rise quickly. When budgets tighten, companies will make fewer API calls and shift workloads to self-hosted checkpoints. If that happens, there could be a rush to rehire all the developers who were laid off in the name of “AI efficiency.”
Local models will always handle the boring stuff
Even if the AI market cools, GPT-3-class models will still be useful for automating the tedious parts of onboarding and scaffolding. They write boilerplate, generate CRUD screens, and sanity-check docstrings. That frees me to focus on the harder, higher-value parts of a system that demand real engineering judgment.
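As a closing sketch, this is roughly how I would wire a local model into that kind of chore. It assumes an Ollama-style HTTP server on localhost and a placeholder model tag; none of those details come from the argument above, and any GPT-3-class checkpoint behind a similar endpoint would do.

```ts
// localBoilerplate.ts — a sketch of delegating dull scaffolding to a locally hosted model.
// Assumes an Ollama-style server on localhost:11434; "llama3.1:8b" is a placeholder tag.
const LOCAL_MODEL_URL = "http://localhost:11434/api/generate";

async function generateBoilerplate(prompt: string): Promise<string> {
  const res = await fetch(LOCAL_MODEL_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1:8b", // placeholder: whatever your local rig actually runs
      prompt,
      stream: false, // return one JSON object instead of a token stream
    }),
  });

  if (!res.ok) {
    throw new Error(`local model returned ${res.status}`);
  }

  // Ollama-style responses put the generated text in a `response` field.
  const data = (await res.json()) as { response: string };
  return data.response;
}

// Usage: the kind of request I am happy to hand to a GPT-3-class model,
// because the output is easy to read and cheap to throw away if it is wrong.
generateBoilerplate(
  "Write a TypeScript Express router with GET, POST, PUT and DELETE handlers " +
    "for a `members` resource. Return only the code.",
).then((code) => console.log(code));
```

The output still gets reviewed like any other pull request; the model just saves the typing.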