But the model doesn't need to read node_modules to write a React app; it just needs to write the React code (which it is heavily post-trained to produce). So the fair counter-example is more like:
function Hello() {
  return <button>Hello</button>
}
Fair challenge to the idea. But what I am saying is that every line of boilerplate, every import statement, and every configuration file consumes precious tokens.
The more code there is, the more surface area the LLM has to cover before it can understand or implement anything correctly.
Right now the only answer to costly, finite tokens is to pick the most token-efficient technology. Let's reframe the question: was React made to help humans organize code better, or machines?
Is that high code-to-functionality ratio, 3 lines that do real work buried in 50 lines of setup, really necessary?
At current prices you can pretty much get away with murder, even for the most expensive models out there. Say $14 per million output tokens: 10k output tokens costs 14 cents, which is roughly 40k characters (about 7,500 words) of output.
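A quick back-of-the-envelope sketch of that arithmetic (the $14/M rate and the ~4 characters per token heuristic are assumptions, not any particular provider's published pricing):

```python
# Back-of-the-envelope output-token cost, assuming $14 per million output tokens
# and the common ~4 characters per token rule of thumb.
PRICE_PER_MILLION = 14.00   # USD, assumed rate for an expensive model
CHARS_PER_TOKEN = 4         # rough heuristic, varies by tokenizer and language

output_tokens = 10_000
cost = output_tokens / 1_000_000 * PRICE_PER_MILLION
approx_chars = output_tokens * CHARS_PER_TOKEN

print(f"{output_tokens} output tokens ~= ${cost:.2f} (~{approx_chars:,} characters)")
# -> 10000 output tokens ~= $0.14 (~40,000 characters)
```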
The way to use LLMs for development is to use the API.
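For example, a minimal sketch of driving a model over the raw API with the Anthropic Python SDK; the model name is a placeholder, and any provider's SDK works along the same lines:

```python
# Minimal sketch of calling an LLM through the API instead of an agent tool.
# Assumes the `anthropic` package and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-latest",  # placeholder model id, swap in whatever you use
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a minimal React Hello button component."}],
)
print(response.content[0].text)
```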
I'm not so worried about the money, but more about context rot. I used spec-driven development for a week and hit constant compacting with Claude Code. I burned €200 in one week, and now I'm trying something different: have it only show diffs and always talk to me in terms of interfaces.
I do think that at some point there will be frameworks or languages optimised for LLMs.
AI coding tools are burning massive token budgets on boilerplate: thousands of tokens just to render simple interfaces.
Consider the token cost of "Hello World":
- Tkinter: `import tkinter as tk; tk.Button(text="Hello").pack()`
- React: 500MB of node_modules and dependencies
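One rough way to make "token cost" concrete, sketched here with the tiktoken library and its cl100k_base encoding (other models' tokenizers count somewhat differently, and this only measures the component text itself, not the package.json, index.html, and build config a real React project drags in):

```python
# Rough token-count comparison of the two "Hello World" snippets.
# Assumes the `tiktoken` package; cl100k_base is one specific tokenizer,
# so other models will give somewhat different counts.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

snippets = {
    "tkinter": 'import tkinter as tk; tk.Button(text="Hello").pack()',
    "react":   'function Hello() {\n  return <button>Hello</button>\n}',
}

for name, code in snippets.items():
    print(f"{name}: {len(enc.encode(code))} tokens")
# The gap only becomes dramatic once the React scaffolding
# (package.json, index.html, build config, imports) is added to the count.
```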
Right now, context windows are finite and tokens are costly. What do you think?
My prediction is that tooling that manages token and context efficiency will become essential.