Many web framework debates have been arguments about abstraction. Which layer hides the underlying platform best? Which one lets you type the least? Which one feels nicest to write? I have been doing some version of this for years now: pick a stack, learn it, ship with it, and retrain when the next layer arrives.
Agentic coding is shifting that dynamic: agents now mainly write the code, and humans mainly read it. This changes the criteria. The next phase of framework choice will not be about abstraction; it will be about functionality. Does this layer add capability the platform doesn't already have, or is it sugar over something the platform already provides?
Agents write almost all the code on my current Next.js stack, leaving me mostly to read. That's where the mental tax of hyper-abstraction becomes obvious. To review a Tailwind string like flex items-center gap-4, I have to mentally reverse-engineer it back into standard CSS. The keystroke compression is real, but the decompression is on me—I am paying the metabolic cost of a sugar rush I didn't get to enjoy.
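To make the decompression concrete, here is roughly what that class string expands to under Tailwind's documented defaults (the `.row` selector is my own placeholder; the `gap` value assumes the default 0.25rem spacing scale):

```css
/* The mental expansion a reviewer performs for "flex items-center gap-4" */
.row {
  display: flex;        /* flex */
  align-items: center;  /* items-center */
  gap: 1rem;            /* gap-4 = 4 x 0.25rem on the default scale */
}
```

Three utility tokens, three declarations. The reader still has to hold that mapping table in their head, or look it up.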
The standard argument for Tailwind is that its more concise syntax lets you develop faster. That argument was made for the writer. When the writer was a human, it mattered. Now that the writer is no longer a human, the argument has lost its load-bearing customer.
Tailwind is just one observation among many, but I think it points at something bigger: modern web development was designed to be ergonomic for humans who type code. Gradually, then suddenly, we stopped being the ones who type. The 'less typing' productivity argument that built the modern web stack is becoming irrelevant.
Now let's run the same audit on the rest of my stack. While a backend framework has its sugar, the modern frontend is a towering wedding cake. React's component model exists because composing UIs in raw DOM is tedious for a human to maintain by hand. Next.js's Server Actions exist because manually writing API endpoints and wiring up fetch requests is tedious for a human to type out. State managers like Zustand exist because manually passing data up and down a component tree is tedious for a human to track. There is a pattern. They're all different flavors of ergonomic sweetener.
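To make one of those flavors concrete, here is roughly the capability a state manager wraps in ergonomics, written against the platform alone. This is a minimal sketch; `createStore` and its shape are my own invention, not any library's API:

```typescript
// A tiny observable store in plain TypeScript: shared state, plus
// notifications when it changes. This is the functionality; everything
// a state-manager library adds on top is ergonomic sweetener.
type Listener<T> = (state: T) => void;

function createStore<T>(initial: T) {
  let state = initial;
  const listeners = new Set<Listener<T>>();
  return {
    get: () => state,
    set(next: T) {
      state = next;
      listeners.forEach((listen) => listen(state)); // notify subscribers
    },
    subscribe(listen: Listener<T>) {
      listeners.add(listen);
      return () => listeners.delete(listen); // returns an unsubscribe
    },
  };
}

// Usage: any part of the UI can read, write, and react to shared state.
const store = createStore({ count: 0 });
const seen: number[] = [];
store.subscribe((s) => seen.push(s.count));
store.set({ count: 1 });
store.set({ count: 2 });
// seen is now [1, 2]; store.get().count is 2
```

Twenty-odd lines, no dependency, and nothing here that an agent cannot write and a human cannot read.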
None of these tools are bad. They solve real problems. They will not die overnight, and existing codebases don't rewrite themselves. But the gravitational pull of these abstractions on greenfield projects gets weaker as agents write more of the code.
Cutting Sugar
The abstractions that survive must pass two tests: does this layer provide actual functionality, or is it only an abstraction over something the platform already gives me? And why would I spend time learning this abstraction rather than the underlying platform API?
In practice, I will aim to:
- Choose readability over writability;
- Abstract for reasoning and correctness;
- Lean into standards and platform APIs.
Removing sugar leaves you with closer-to-the-platform code. That isn't a cost; it's a feature: the code may be more verbose, but it is much easier to maintain and debug. Dropping sugar-filled libraries also reduces dependency hell and gives the code longevity beyond hype cycles.
Not every abstraction layer is doomed. TypeScript survives because it optimizes for reasoning and correctness, not ergonomics. TypeScript isn't syntactic sugar; it's syntactic broccoli. It adds verbosity but in return makes the code easier to understand.
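A small illustration of the broccoli trade (the `Invoice` shape is an invented example, not from any real codebase):

```typescript
// The annotation costs keystrokes, but it lets a reader -- and the
// compiler -- reason about the code without running it.
interface Invoice {
  id: string;
  amountCents: number; // the unit is part of the contract, not a guess
}

function total(invoices: Invoice[]): number {
  return invoices.reduce((sum, inv) => sum + inv.amountCents, 0);
}

// total([{ id: 'a', amountCents: '100' }]);  // rejected at compile time
const cents = total([
  { id: 'a', amountCents: 100 },
  { id: 'b', amountCents: 250 },
]); // 350
```

The extra typing is exactly the kind of cost that stops mattering when an agent pays it, while the extra legibility accrues to the human reviewer.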
Counterarguments
We require libraries in userland to move the web platform forward.
I recall jQuery paving the way for document.querySelector. Yes, keep doing that. Such libraries act as necessary polyfills for the future, but their ultimate destiny is deprecation. Once the platform natively adopts the feature, drop the library. Do not marry a long-term architecture to ephemeral syntax.
Teams benefit from the common coding standards these abstractions provide.
True. Without guardrails, agent-generated vanilla JS quickly becomes unpredictable spaghetti. But we confuse runtime frameworks with development guardrails. You don't need to ship a heavy-duty runtime to users just to enforce file structures. Enforce your architecture through strict linting, build-time tools, and explicit agent instructions. Keep the consistency; drop the production bundle.
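As a sketch of what "guardrails without a runtime" can look like, here is a hypothetical ESLint flat config; the file globs and banned packages are assumptions for illustration, but the rules themselves are standard ESLint:

```javascript
// eslint.config.js -- build-time guardrails that never ship to users.
export default [
  {
    files: ['src/**/*.js'],
    rules: {
      // Steer agents away from sugar libraries and onto platform APIs.
      'no-restricted-imports': ['error', {
        paths: [
          { name: 'lodash', message: 'Use native Array/Object methods.' },
          { name: 'jquery', message: 'Use document.querySelector and fetch.' },
        ],
      }],
      // Keep agent output small enough for a human to review.
      'max-lines-per-function': ['warn', { max: 80 }],
      'no-undef': 'error',
    },
  },
];
```

Combined with explicit agent instructions, this enforces the architecture at review time instead of at runtime.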
The AI is trained on the abstractions.
True, but models have seen vastly more standard HTML, CSS, and vanilla JavaScript. Furthermore, frameworks are moving targets. Because LLM training spans years of shifting paradigms, agents suffer from "temporal hallucinations," like confidently mixing Next.js 12 and 14 APIs. Web standards don't have a "v2 vs. v3" problem. For the most reliable code, use the APIs that haven't changed in a decade.
Experiment
Now let's experiment with this notion. What happens when we put our codebase on a low-sugar diet? If we have a well-defined agentic layer (instructions, skills, etc.) and tooling (type-checking, linting, etc.) in place, could we get a long-term productivity gain by building a web application as close to the bare metal as possible, where every library needs to earn its keep?
Under this strict audit, few libraries would survive. Shadcn and all its baggage would preferably be replaced by native components with modern styling. A fast bundler (like Vite) survives because it optimizes network delivery. A testing framework (like Jest) survives because it surfaces breaking changes.
Conclusions
The decades-long cycle of "pick a stack, learn it, ship with it, and retrain when the next layer arrives" can be broken. In the long run, instead of spending time learning a framework's abstractions, I am better off perfecting my knowledge of the underlying standards and platform APIs.
Let the agents do the typing. Use the rigor of the development tooling to catch them when they slip. The era of the hyper-palatable, sugar-coated codebase is ending.