> If I need to spool up a jet turbine feeding H100's just to decipher my error messages, the solution is a better compiler, not a larger jet engine.
You don't need a jet turbine and H100s for that. The world needs them once, for training; after that, exercising the ability costs comparatively little GPU time. I can't say how much GPT-4o takes in inference, but Llama-3 8B runs perfectly fine and very fast on my RTX 4070 Ti, and it has a significant enough fraction of the same capabilities.
Speaking of:
> A better compiler helps you reason about errors.
There's only so much it can do. And yes, I've actually set up an "agent" (a predefined system prompt) so I can paste the output of build tooling verbatim and have it explain the error messages, which GPT-4 does with 90%+ accuracy. Yes, I can read and understand them on my own. But also no, at this point, parsing multiple screens of C++ template errors or GCC linker failures is not a good use of my life.
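For concreteness, here's a minimal sketch of what such an "agent" amounts to, assuming an OpenAI-compatible local endpoint (e.g. llama.cpp's server or Ollama); the URL, model name, and prompt wording are placeholders, not my actual setup:

```python
# "Explain my build errors" agent: a fixed system prompt plus verbatim
# tool output, sent to an OpenAI-compatible chat endpoint.
import json
import urllib.request

SYSTEM_PROMPT = (
    "You are a build-tooling assistant. The user pastes raw compiler, "
    "linker, or build-system output. Explain each error in plain "
    "language and suggest the most likely fix. Be concise."
)

def build_messages(build_output: str) -> list[dict]:
    """Wrap verbatim build output in the predefined system prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": build_output},
    ]

def explain_errors(
    build_output: str,
    url: str = "http://localhost:8080/v1/chat/completions",  # placeholder
) -> str:
    """Send the wrapped output to a local model and return its explanation."""
    payload = json.dumps({
        "model": "llama-3-8b-instruct",  # placeholder model name
        "messages": build_messages(build_output),
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The point is that the "agent" is nothing exotic: the system prompt is written once, and every invocation is just a cheap paste-and-send.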
(Environment-wise, I'm still net ahead of a typical dev anyway, by staying away from Electron-powered tooling and ridiculously wasteful modern webdev stacks.)
> A better language reduces boilerplate.
Yes, that's why everyone is writing Lisp, and not C++ or Java or Rust or JS.
Oh wait, wrong reality.
> Better language features help you be more expressive.
That's another can of worms. I'm not holding out much hope here, because as long as we insist on working directly on a plaintext codebase treated as the single source of truth, we're already at the Pareto frontier of language expressiveness. Cross-cutting concerns are actually cross-cutting; you can't express them all simultaneously in a readable way, so all the modern language design advances are doing is shifting focus and complexity around.
LLMs don't really help or hurt this either, though they could paper over some of the problem by raising the abstraction level at which programmers edit their code, in lieu of the tooling actually being designed to support such operations. I don't think this would be good - I'd rather we stopped with the plaintext single-source-of-truth addiction in the first place.
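To make "cross-cutting" concrete, here's a toy Python sketch (all names invented for illustration): a timing/logging concern touches every function, and the best a plaintext language offers is a mechanism like decorators, which relocates the concern rather than letting you read both views at once.

```python
# The timing concern cross-cuts parse() and render(); a decorator
# factors it into one place, but at the cost of hiding it from the
# functions' own source text -- complexity shifted, not removed.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)

def timed(fn):
    """Attach the cross-cutting timing/logging concern to a function."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        result = fn(*args, **kwargs)
        logging.info("%s took %.6fs", fn.__name__, time.perf_counter() - t0)
        return result
    return wrapper

@timed
def parse(text: str) -> list[str]:
    # The function body says nothing about timing; that view lives elsewhere.
    return text.split()

@timed
def render(tokens: list[str]) -> str:
    return " ".join(tokens)
```

Reading `parse` alone, you can't see the timing behavior; reading `timed` alone, you can't see which functions it governs. That's the expressiveness ceiling I mean.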
> Its appearance as a silver bullet withers the closer you get to essential complexity.
100% agreed on that. My point is, dealing with essential complexity is usually a small fraction of our work. LLMs are helpful in dealing with incidental complexity, which leaves us more time to focus on the essential parts.
> Yes, that's why everyone is writing Lisp, and not C++ or Java or Rust or JS.
> Oh wait, wrong reality.
You are a bit too cynical. The tools (compilers, interpreters, linters, etc.) that people actually use have gotten a lot better. Partly by moving to better languages: more Rust and less C, TypeScript instead of JavaScript. But also because compilers for existing languages keep improving; see especially the arms race between C compilers kicked off by Clang throwing down the gauntlet in front of GCC. Both got a lot better in the process.
(Common) Lisp was a good language for its time. But I wouldn't hold it up as a pinnacle of language evolution. (I like Lisps, and especially Racket. And I've programmed about half of my career in Haskell and OCaml. So you can rest assured about my obscure and elitist language cred. I even did a year of Erlang professionally.)
---
Btw, just to be clear: I actually agree with most of what you are writing! LLMs are already great for some tasks, and are still rapidly getting better.
You are also right that despite better languages being available, there are many reasons why people still have to use e.g. C++ here or there, and some are even stuck on ancient versions of C++, or on ancient compilers for it, with even worse error messages. LLMs can help.