The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have. I have a handful of open source contributions. All of them are for small-ish projects, and the complexity of my contributions is in the same ballpark as what I work on day-to-day. And even though I am relatively confident in my competence as a developer, these contributions are probably the most thoroughly tested and reviewed pieces of code I have ever written. I just really, really don't want to bother someone who graciously offers their time to work on open source stuff with low-quality "help".
Other people apparently don't have this feeling at all. Maybe I shouldn't have been surprised by this, but I've definitely been caught off guard by it.
> Any power users who prefer their own key management should follow the steps to enable Bitlocker without uploading keys to a connected Microsoft account.
Except the steps to do that are: disable BitLocker, create a local user account (assuming you initially signed in with a Microsoft account, because Microsoft now forces it on you for Home editions of Windows), delete your existing keys from OneDrive, then re-encrypt using your local account and make sure not to sign into your Microsoft account or link it to Windows again.
A much more sensible default would be to give the user a choice right from the beginning, much like how Apple does it. When you go through Setup Assistant on a Mac, it doesn't assume you are an idiot; it literally asks you up front: "Do you want to store your recovery key in iCloud or not?"
FYI BitLocker is on by default in Windows 11. The defaults will also upload the BitLocker key to a Microsoft Account if available.
This is why the FBI can compel Microsoft to provide the keys. It's possible, perhaps even likely, that the suspect didn't even know they had an encrypted laptop. Journalists love the "Microsoft gave" framing because it makes Microsoft sound like they're handing these out because they like the cops, but that's not how it works. If your company has data that the police want and they can get a warrant, you have no choice but to give it to them.
This makes the privacy purists angry, but in my opinion it's the reasonable default for the average computer user. It protects their data in the event that someone steals the laptop, but still allows them to recover their own data later from the hard drive.
Any power users who prefer their own key management should follow the steps to enable Bitlocker without uploading keys to a connected Microsoft account.
The point they seem to be making is that AI can "orchestrate" the real world even if it can't interact physically. I can definitely believe that in 2026 someone at their computer with access to money can send the right emails and make the right bank transfers to get real people to grow corn for you.
However, even by that metric I don't see how Claude is doing that. Seth is the one researching the suppliers "with the help of" Claude. Seth is presumably the one deciding when to prompt Claude to make decisions about whether they should plant in Iowa and in how many days. I think I could also grow corn if someone came and asked me well-defined questions and then acted on what I said. I might even be better at it, because unlike a Claude output I will still be conscious in 30 seconds.
That is a far cry from sitting down at a command line and saying "Do everything necessary to grow 500 bushels of corn by October".
I don't get the widespread hatred of Gas Town. If you read Steve's writeup, it's clear that this is a big fun experiment.
It pushes and crosses boundaries, it is a mixture of technology and art, it is provocative. It takes stochastic neural nets and mashes them together in bizarre ways to see if anything coherent comes out the other end.
And the reaction is a bunch of Very Serious Engineers who cross their arms and harumph at it for being Unprofessional and Not Serious and Not Ready For Production.
I often feel like our industry has lost its sense of whimsy and experimentation from the early days, when people tried weird things to see what would work and what wouldn't.
Maybe it's because we also have suits telling us we have to use neural nets everywhere for everything Or Else, and there's no sense of fun in that.
Maybe it's the natural consequence of large-scale professionalization, of stock option plans and RSUs and levels and sprints and PMs: today's gray hoodie is just the gray suit of the past, updated but no less dry of imagination.
I think the author is correct to a point but I don't believe the examples they've chosen provide the best support for their case. Gen Z buying iPods and people buying N64 games again is not evidence of the monoculture breaking apart - it's a retreat into the past for the enlightened few because their needs are not being met by modern goods and services. You cannot buy a dedicated MP3 player today with the software polish and quality of life that an iPod had in the early 2000s (or even a Zune).
Instead, I see the growth and momentum behind Linux and self-hosting as better evidence that change is afoot.
> If you read Steve's writeup, it's clear that this is a big fun experiment:
So, Steve has the big scary "YOU WILL DIE" statements in there, but he also has this:
> I went ahead and built what’s next. First I predicted it, back in March, in Revenge of the Junior Developer. I predicted someone would lash the Claude Code camels together into chariots, and that is exactly what I’ve done with Gas Town. I’ve tamed them to where you can use 20–30 at once, productively, on a sustained basis.
"What's next"? Not an experiment. A prediction about how we'll work. The word "productively"? "Productively" is not just "a big fun experiment." "Productively" is what you say when you've got something people should use.
Even when he's giving the warnings, he says things like "If you have any doubt whatsoever, then you can’t use it" implying that it's ready for the right sort of person to use, or "Working effectively in Gas Town involves committing to vibe coding.", implying that working effectively with it is possible.
Every day, I go on Hacker News, and see the responses to a post where someone has an inconsistent message in their blog post like this.
If you say two different and contradictory things, and do not very explicitly resolve them, and say which one is the final answer, you will get blamed for both things you said, and you will not be entitled to complain about it, because you did it to yourself.
I find the title a bit misleading. I think it should be titled "It's Faster to Copy Memory Directly than to Send a Protobuf", which then makes it rather obvious that removing a serialization and deserialization step reduces runtime.
A new Tesla, without subscription, now has worse Steering Assist than a $22K Toyota Corolla.
Back when Autopilot launched in consumer cars, it was pretty unique. But the market has moved on significantly, and basic Steering Assist/Full-Speed-Range Automatic Cruise Control are pretty universal features today.
Google quietly announced that Programmable Search (ex-Custom Search) won’t allow new engines to “search the entire web” anymore. New engines are capped at searching up to 50 domains, and existing full-web engines have until Jan 1, 2027 to transition.
If you actually need whole-web search, Google now points you to an “interest form” for enterprise solutions (Vertex AI Search etc.), with no public pricing and no guarantee they’ll even reply.
This seems like it effectively ends the era of indie / niche search engines being able to build on Google’s index. Anything that looks like general web search is getting pushed behind enterprise gates.
I haven’t seen much discussion about this yet, but for anyone who built a small search product on Programmable Search, this feels like a pretty big shift.
Curious if others here are affected or already planning alternatives.
UPDATE: I logged into Programmable Search and the message is even more explicit: Full web search via the "Search the entire web" feature will be discontinued within the next year. Please update your search engine to specify specific sites to search. With this link: https://support.google.com/programmable-search/answer/123971...
I feel like Claude Code is starting to fall over from being entirely written by LLMs. How do you even begin to fix precise bugs in a 1M+ LOC codebase all written by AI? It seems like LLMs are great for quickly adding large new features but not great for finding and fixing edge-cases.
What I find particularly ironic is that the title makes it feel like Rust gives a 5x performance improvement when it actually slows things down.
The problem: they have software written in Rust, and they need to use the libpg_query library, which is written in C. Because they can't use the C library directly, they had to use a Rust-to-C binding library, which uses Protobuf for portability reasons. The problem is that it is slow.
So what they did is write their own non-portable but much more optimized Rust-to-C bindings, with the help of an LLM.
But had they written their software in C, they wouldn't have needed to do any conversion at all. It means they could have titled the article "How we lowered the performance penalty of using Rust".
I don't know much about Rust or libpg_query, but they probably could have gone even faster by getting rid of the conversion entirely. It would most likely have involved major adaptations and some unsafe Rust though. Writing a converter has many advantages: portability, convenience, security, etc... but it has a cost, and ultimately, I think it is a big reason why computers are so fast and apps are so slow. Our machines keep copying, converting, serializing and deserializing things.
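To make the idea concrete, here is a minimal sketch of what a hand-rolled, non-portable binding can look like: call libpg_query directly through extern "C" and copy the result out of the C buffer once, with no Protobuf encode/decode round trip in between. This is not the crate's actual code; the declarations below follow libpg_query's published pg_query.h as I remember it (the crate they replaced actually uses a Protobuf-returning variant plus generated decoders), so treat the names and struct layouts as assumptions.

```rust
// Sketch only: assumes libpg_query is installed and linkable as "pg_query",
// and that pg_query_parse returns the parse tree as a JSON C string.
use std::ffi::{CStr, CString};
use std::os::raw::{c_char, c_int};

#[repr(C)]
struct PgQueryError {
    message: *mut c_char,
    funcname: *mut c_char,
    filename: *mut c_char,
    lineno: c_int,
    cursorpos: c_int,
    context: *mut c_char,
}

#[repr(C)]
struct PgQueryParseResult {
    parse_tree: *mut c_char,   // JSON string allocated by the C library
    stderr_buffer: *mut c_char,
    error: *mut PgQueryError,  // null on success
}

#[link(name = "pg_query")]
extern "C" {
    fn pg_query_parse(input: *const c_char) -> PgQueryParseResult;
    fn pg_query_free_parse_result(result: PgQueryParseResult);
}

/// Parse SQL and copy the resulting tree out of the C buffer.
/// One allocation and one copy; no serialize-to-Protobuf /
/// deserialize-from-Protobuf step in between.
fn parse(sql: &str) -> Result<String, String> {
    let c_sql = CString::new(sql).map_err(|e| e.to_string())?;
    unsafe {
        let result = pg_query_parse(c_sql.as_ptr());
        let out = if result.error.is_null() {
            Ok(CStr::from_ptr(result.parse_tree).to_string_lossy().into_owned())
        } else {
            Err(CStr::from_ptr((*result.error).message).to_string_lossy().into_owned())
        };
        pg_query_free_parse_result(result); // the C side owns and frees its buffers
        out
    }
}

fn main() {
    println!("{:?}", parse("SELECT 1"));
}
```

The portability the Protobuf layer buys is exactly what you pay for: every node of the parse tree gets encoded on the C side and decoded again on the Rust side before you can even look at it, whereas the direct path reads what C already produced.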
Note: I have nothing against what they did, quite the opposite, I always appreciate those who care about performance, and what they did is reasonable and effective, good job!
> make sure not to sign into your Microsoft account or link it to Windows again
That's not so easy. Microsoft tries really hard to get you to use a Microsoft account. For example, logging into MS Teams will automatically link your local account with the Microsoft account, thus starting the automatic upload of all kinds of stuff unrelated to MS Teams.
In the past I also had Edge importing Firefox data (including stored passwords) without me agreeing to do so, and then uploading those into the Cloud.
Nowadays you just need to assume that all data on Windows computers is available to Microsoft; even if you temporarily find a way to keep your data out of their hands, an update will certainly change that.
I agree. I'm one of the Very Serious Engineers, and I liked Steve's post when I thought it was sort of tongue in cheek, but I was horrified to come to the HN and LinkedIn comments proclaiming Gas Town as the future of engineering. There absolutely is a large contingent of engineers who believe this, and it has a real-world impact on my job if my bosses think you can just throw a dozen AI agents at our product roadmap and get better productivity than an engineer. This is not whimsical to me; I'm getting burnt out trying to navigate the absurd expectations of investors and executives alongside the real-world engineering concerns of my day-to-day job.
Steve Jobs famously said that the (personal) computer is "like a bicycle for the mind". It's a great metaphor because, besides the idea of lightness and freedom it communicates, it also describes the computer as a multiplier of human strength: the bicycle allows one to travel faster and with much less effort, it's true, but ultimately the source of its power is still entirely in the muscles of the cyclist. You don't get out of it anything that you didn't put in yourself.
But the feeling I'm having with LLMs is that we've entered the age of fossil-fuel engines: something that moves under its own power and produces somewhat more than the user needs to put into it. OK, in the current version it might not go very far and needs to be pushed now and then, but the total energy output is greater than what users need to put in. We could call it a horse, except that this one is artificial: it's a tractor. And in the last few months I've been feeling like someone who spent years pushing a plough in the fields and has suddenly received a tractor. A primitive model, still imperfect, but already working.
It's because a lot of people who weren't skillful weren't on your path before.
Now that Pandora's box has been re-opened, those people feel they "get a second chance at life". It's not that they have no shame; they have no perspective to put that shame in.
You, on the other hand, have honed your craft for many years. The more you learn, the more you discover there is to learn; you realize how little you know.
They don't have this. _At all_.
They see this as a "free ticket to the front row", and when we politely push back (we should be way harsher about this, it's the only language they understand), all they hear is "he doesn't like _me_", which is an escape.
You know how much work you ask of me when you open a PR on my project; they don't. They will just see it as "why don't you let me join, since I have AI I should have the same skill as you"... unironically.
In other words, these "other people" that we talk about haven't worked a day in the field in their life, so they simply don't understand much of it, yet they feel they understand everything about it.
You can always count on someone coming along and defending the multi-trillion dollar corporation that just so happens to take a screenshot of your screen every few seconds (among many, many - too many other things)
Nothing particularly notable here. A lot of it seems to be 'We have something in-house designed for our use cases, use that instead of the standard lib equivalent'.
The rest looks very reasonable, like avoiding locale-hell.
Some of it is likely options that sand rough edges off of the standard lib, which is reasonable.
Absolutely none of this is true.
Docker had support contracts (Docker EE... and trying to remember, docker-cs before that naming pivot?).
Corporate customers do not care about any of the things you mentioned. I mean, maybe some, but in general no. That's not what corps think about.
There was never "no interest" at Docker in cgv2 or rootless.
Never.
cgv2 early on was not useable. It lacked so much functionality that v1 had.
It also didn't buy much, particularly because most Docker users aren't manually managing cgroups themselves.
Docker literally sold a private registry product. It was the first thing Docker built and sold (and no, it was not late, it was very early on).
The best part about this blog post is that none of it is a surprise – Codex CLI is open source. It's nice to be able to go through the internals without having to reverse engineer it.
Their communication is exceptional, too. Eric Traut (of Pyright fame) is all over the issues and PRs.
My guess is Tesla is desperately in need of Q1 revenue and they want people to scarcity buy FSD lifetime @ $8K. Otherwise the strategy doesn’t make any sense. They’re saying FSD and basic Autopilot will go behind subscription and that subscription prices are expected to go up. Basically laying out that they’re planning to lock you into a subscription and then price gouge. It’s so transparent that I think the point isn’t actually the gouge but to make that threat move lifetime FSD sales.
"The problem is that Docker the technology became so successful that Docker the company struggled to monetize it. When your core product becomes commoditized and open source, you need to find new ways to add value."
No, everything was already open source, and others had done it before too; they just packaged it in a way that a lot of "normal" users could start with. Then they waited too long and others created better products, or their own.
"Docker Swarm was Docker’s attempt to compete with Kubernetes in the orchestration space."
No, it was never intended like that. That some people built infra/businesses around it is something completely different, but Swarm was never intended to be a Kubernetes contender.
"If you’re giving away your security features for free, what are you selling?"
This is what is actually going to cost them their business. I'm extremely grateful for what they have done for us, but they didn't give themselves a chance.
Their behaviour has been more akin to a non-profit.
Great for us, not so great for them in the long run.
> The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have.
Ever had a client second-guess you by replying to you with a screenshot from GPT?
Ever asked anything in a public group, only to have a complete moron reply with a screenshot from GPT or, at least a bit of effort there, a copy/paste of the wall of text?
No, people have no shame. They have a need for a little bit of (borrowed) self-importance and validation.
Which is why I applaud every code of conduct that has public ridicule as punishment for wasting everybody's time.
I'm not a huge fan of these experiments that subject the public to your random AI spam. So far it has bothered 10 companies directly, with no legal authority to actually follow through on what it requests?
It was only turned off by default after those "daily rage sessions" created sufficient public pressure.
Microsoft also happens to own LinkedIn which conveniently "forgets" all of my privacy settings every time I decide to review them (about once a year) and discover that they had been toggled back to the privacy-invasive value without my knowledge. This has happened several times over the years.