Indian here (~15+ years in tech). I've seen this behavior a lot, and unfortunately, I did some of this myself earlier in my career.
Based on my own experience, here are a few reasons (could be a lot more):
1. Unlike most developed countries, in India (and many other developing countries), people in authority are expected to be respected almost unconditionally. Questioning a manager, teacher, or senior is often seen as disrespect or incompetence. So, instead of asking for clarification, many people just "do something" and hope it is acceptable. You can think of this as a lighter version of Japanese office culture, but not limited to the office... it's kind of everywhere in society.
2. Our education system mainly rewards results, not how good or well-thought-out the results are. Sure, better answers get more marks, but the gap between "okay" and "excellent" is usually not emphasized much. This comes from scale problems (huge number of students), very low median income (~$2400/year), and poorly trained teachers, especially outside big cities. Many teachers themselves memorize answers and expect matching output from students. This is slowly improving, but the damage is already there.
3. Pay in India is still seriously low for most people (often with 12-14+ hour work days, even more than China's 996 culture), and the job market is extremely competitive. For many students and juniors, a long list of "projects", PRs, or known names on their resume is most often the only way to stand out. Quantity often wins over quality. With LLMs, this problem just got amplified.
Advice: If you want better results from Indian engineers (or designers, or anyone else really), especially juniors (speaking as of now, things might change in the near future), try to reduce the "authority" gap early on. Make it clear you are approachable and that asking questions is expected. For the first few weeks, work closely with them in the style you want them to follow... they usually adapt very fast once they feel safe to do so.
I've been helping a bit with OWASP documentation lately and there's been a surge of Indian students eagerly opening nonsensical issues and PRs and all of the communication and code is clearly 100% LLMs. They'll even talk back and forth with each other. It's a huge headache for the maintainers.
I suggested following what Ghostty does where everything starts as discussions - only maintainers create issues, and PRs can only come from issues. It seems like this would deter these sorts of lazy efforts.
The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have. I have a handful of open source contributions. All of them are for small-ish projects and the complexity of my contributions are in the same ball-park as what I work on day-to-day. And even though I am relatively confident in my competency as a developer, these contributions are probably the most thoroughly tested and reviewed pieces of code I have ever written. I just really, really don't want to bother someone with low quality "help" who graciously offers their time to work on open source stuff.
Other people apparently don't have this feeling at all. Maybe I shouldn't have been surprised by this, but I've definitely been caught off guard by it.
Is this cultural? I ran a small business some years ago (it later failed) and was paying various people for contract work. At the time I perceived the pattern that Indian contractors would never ever ask for clarifications, would never say they didn't know something, would never say they didn't understand something, etc. Instead they just ran with whatever they happened to have in their mind, until I called them out. And if they did something poorly and I didn't call them out, they'd never, as far as I can tell, go back and wonder "did I get it right? Could I have done better?". I don't get this attitude - at my day job I sometimes "run with it", but I periodically check with my manager to make sure "hey, this is what you wanted, right?". There's little downside to this.
Your comment reminded me of my experience, in the sense that they're both a sort of "fake it till you make it".
I spot-checked one of the flagged papers (from Google, co-authored by a colleague of mine)
The paper was https://openreview.net/forum?id=0ZnXGzLcOg and the problem flagged was "Two authors are omitted and one (Kyle Richardson) is added. This paper was published at ICLR 2024." I.e., for one cited paper, the author list was off and the venue was wrong. And this citation was mentioned in the background section of the paper, and not fundamental to the validity of the paper. So the citation was not fabricated, but it was incorrectly attributed (perhaps via use of an AI autocomplete).
I think there are some egregious papers in their dataset, and this error does make me pause to wonder how much of the rest of the paper used AI assistance. That said, the "single error" papers in the dataset seem similar to the one I checked: relatively harmless and minor errors (which would be immediately caught by a DOI checker), and so I have to assume some of these were included in the dataset mainly to amplify the author's product pitch. It succeeded.
> Any power users who prefer their own key management should follow the steps to enable Bitlocker without uploading keys to a connected Microsoft account.
Except the steps to do that are: disable BitLocker, create a local user account (assuming you initially signed in with a Microsoft account, because Microsoft now forces it on you for Home editions of Windows), delete your existing keys from OneDrive, then re-encrypt using your local account and make sure not to sign into your Microsoft account or link it to Windows again.
A much more sensible default would be to give the user a choice right from the beginning, much like how Apple does it. When you go through Setup Assistant on a Mac, it doesn't assume you are an idiot and literally asks you up front: "Do you want to store your recovery key in iCloud or not?"
FYI BitLocker is on by default in Windows 11. The defaults will also upload the BitLocker key to a Microsoft Account if available.
This is why the FBI can compel Microsoft to provide the keys. It's possible, perhaps even likely, that the suspect didn't even know they had an encrypted laptop. Journalists love the "Microsoft gave" framing because it makes Microsoft sound like they're handing these out because they like the cops, but that's not how it works. If your company has data that the police want and they can get a warrant, you have no choice but to give it to them.
This makes the privacy purists angry, but in my opinion it's the reasonable default for the average computer user. It protects their data in the event that someone steals the laptop, but still allows them to recover their own data later from the hard drive.
Any power users who prefer their own key management should follow the steps to enable Bitlocker without uploading keys to a connected Microsoft account.
This seems to confirm my feeling when using AI too much. It's easy to get started, but I can feel my brain engaging less with the problem than I'm used to. It can form a barrier to real understanding, and keeps me out of my flow.
I recently worked on something very complex I don't think I would have been able to tackle as quickly without AI; a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning. I had no prior experience with it (and I went in clearly underestimating how complex it was), and AI was a tremendous help in getting a basic understanding of the algorithm, its many steps and sub-algorithms, the subtle interactions and unspoken assumptions in it. But letting it write the actual code was a mistake. That's what kept me from understanding the intricacies, from truly engaging with the problem, which led me to keep relying on the AI to fix issues, but at that point the AI clearly also had no real idea what it was doing, and just made things worse.
So instead of letting the AI see the real code, I switched from the Copilot IDE plugin to the standalone Copilot 365 app, where it could explain the principles behind every step, and I would debug and fix the code and develop actual understanding of what was going on. And I finally got back into that coding flow again.
So don't let the AI take over your actual job, but use it as an interactive encyclopedia. That works much better for this kind of complex problem.
The point they seem to be making is that AI can "orchestrate" the real world even if it can't interact physically. I can definitely believe that in 2026 someone at their computer with access to money can send the right emails and make the right bank transfers to get real people to grow corn for you.
However even by that metric I don't see how Claude is doing that. Seth is the one researching the suppliers "with the help of" Claude. Seth is presumably the one deciding when to prompt Claude to make decisions about if they should plant in Iowa in how many days. I think I could also grow corn if someone came and asked me well defined questions and then acted on what I said. I might even be better at it because unlike a Claude output I will still be conscious in 30 seconds.
That is a far cry from sitting down at a command line and saying "Do everything necessary to grow 500 bushels of corn by October".
I think we must make it clear that this is not related to AI at all, even if the product in question is AI-related.
It is a very common problem with modern marketing teams, which have zero empathy for customers (even if they do, they will never push back on whatever insane demands come from senior management). This is why every email subscription management interface is now as bloated as a dead whale. If too many users unsubscribe, they just add one more category and “accidentally” opt everyone in.
It’s a shame that Proton marketing team is just like every other one. Maybe it’s a curse of growing organization and middle management creep. The least we can do is push back as customers.
Once men turned their thinking over to machines in the hope that this would set them free.
But that only permitted other men with machines to enslave them.
Frank Herbert, Dune, 1965
Every time, over the years, that there has been some kind of headline saying renewables have overtaken fossil fuels, when you look at it a bit more closely there is always a big 'but'. For example, it was compared to coal (not taking into account electricity from gas), or it was for one day, or it was a percentage of new installations, or it excludes winter, includes nuclear etc.
This time, however, it looks like it's actually true and that's just for wind and solar. This is incredible, and done through slowly compounding gains that didn't cause massive economic hardships along the way.
Yuck, this is going to really harm scientific research.
There is already a problem with papers falsifying data/samples/etc, LLMs being able to put out plausible papers is just going to make it worse.
On the bright side, maybe this will get the scientific community and science journalists to finally take reproducibility more seriously. I'd love to see future reporting that instead of saying "Research finds amazing chemical x which does y" you see "Researcher reproduces amazing results for chemical x which does y. First discovered by z".
>this error does make me pause to wonder how much of the rest of the paper used AI assistance
And this is what's operative here. The error spotted - the entire class of error spotted - is easily checked and verified by a non-domain expert. These are the errors we can confirm readily, the ones with an obvious and unmistakable signature of hallucination.
If these are the only errors, we are not troubled. However: we do not know if these are the only errors, they are merely a signature that the paper was submitted without being thoroughly checked for hallucinations. They are a signature that some LLM was used to generate parts of the paper and the responsible authors used this LLM without care.
Checking the rest of the paper requires domain expertise, perhaps requires an attempt at reproducing the authors' results. That the rest of the paper is now in doubt, and that this problem is so widespread, threatens the validity of the fundamental activity these papers represent: research.
I don't get the widespread hatred of Gas Town. If you read Steve's writeup, it's clear that this is a big fun experiment.
It pushes and crosses boundaries, it is a mixture of technology and art, it is provocative. It takes stochastic neural nets and mashes them together in bizarre ways to see if anything coherent comes out the other end.
And the reaction is a bunch of Very Serious Engineers who cross their arms and harumph at it for being Unprofessional and Not Serious and Not Ready For Production.
I often feel like our industry has lost its sense of whimsy and experimentation from the early days, when people tried weird things to see what would work and what wouldn't.
Maybe it's because we also have suits telling us we have to use neural nets everywhere for everything Or Else, and there's no sense of fun in that.
Maybe it's the natural consequence of large-scale professionalization, and stock option plans and RSUs and levels and sprints and PMs, that today's gray hoodie is just the updated gray suit of the past but with no less dryness of imagination.
First of all, I’m skeptical about these being free. Time isn’t free, and the tokens to make these projects certainly weren’t free.
Second of all, all of these SaaS apps that don't actually need a recurring charge should probably be paid for one time. I don't use Loom - I use CleanShot X, which was a one-time $30 payment and has a lot of great features I benefit from. I can't reimplement it in $30 of tokens or $30 of my time.
But for an app whose use case doesn’t change and is recurring for no reason? Yeah there’s probably not much value in recurring payments outside of wanting to support the developer. I pay a lot of indie devs out of the goodness of my heart, and I’ll continue to do that.
But the value for “SaaS apps” without clear monthly costs should have always been under scrutiny.
I think the author is correct to a point but I don't believe the examples they've chosen provide the best support for their case. Gen Z buying iPods and people buying N64 games again is not evidence of the monoculture breaking apart - it's a retreat into the past for the enlightened few because their needs are not being met by modern goods and services. You cannot buy a dedicated MP3 player today with the software polish and quality of life that an iPod had in the early 2000s (or even a Zune).
Instead, I see the growth and momentum behind Linux and self-hosting as better evidence that change is afoot.
I find the title a bit misleading. I think it should be titled "It's Faster to Copy Memory Directly than to Send a Protobuf", which then seems rather obvious: removing a serialization and deserialization step reduces runtime.
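The obvious part is easy to demonstrate with a toy sketch (this is not the article's benchmark: Python's `struct` stands in for a real serializer, and the three-int64 "message" layout is made up):

```python
import struct
import timeit

# Toy "message": three int64 fields (a made-up stand-in for a protobuf).
record = (1, 2, 3)
wire_fmt = "<3q"
raw = struct.pack(wire_fmt, *record)

def roundtrip_serialized():
    # Encode to a wire format, then decode (serialize + deserialize).
    return struct.unpack(wire_fmt, struct.pack(wire_fmt, *record))

def roundtrip_copy():
    # "Copy memory directly": just duplicate the already-laid-out bytes.
    return bytes(raw)

# Both preserve the data; only one pays the encode/decode cost.
assert roundtrip_serialized() == record
assert struct.unpack(wire_fmt, roundtrip_copy()) == record

print("serialize:", timeit.timeit(roundtrip_serialized, number=100_000))
print("copy:     ", timeit.timeit(roundtrip_copy, number=100_000))
```

The copy path wins simply because it skips the per-field encode/decode work; the trade-off, of course, is that raw memory layouts aren't portable across architectures or schema versions the way a real wire format is.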
> If you read Steve's writeup, it's clear that this is a big fun experiment:
So, Steve has the big scary "YOU WILL DIE" statements in there, but he also has this:
> I went ahead and built what’s next. First I predicted it, back in March, in Revenge of the Junior Developer. I predicted someone would lash the Claude Code camels together into chariots, and that is exactly what I’ve done with Gas Town. I’ve tamed them to where you can use 20–30 at once, productively, on a sustained basis.
"What's next"? Not an experiment. A prediction about how we'll work. The word "productively"? "Productively" is not just "a big fun experiment." "Productively" is what you say when you've got something people should use.
Even when he's giving the warnings, he says things like "If you have any doubt whatsoever, then you can’t use it" implying that it's ready for the right sort of person to use, or "Working effectively in Gas Town involves committing to vibe coding.", implying that working effectively with it is possible.
Every day, I go on Hacker News, and see the responses to a post where someone has an inconsistent message in their blog post like this.
If you say two different and contradictory things, and do not very explicitly resolve them, and say which one is the final answer, you will get blamed for both things you said, and you will not be entitled to complain about it, because you did it to yourself.
I just want to say this isn't just amazing -- it's my new favorite map of NYC.
It's genuinely astonishing how much clearer this is than a traditional satellite map -- how it has just the right amount of complexity. I'm looking at areas I've spent a lot of time in, and getting an even better conceptual understanding of the physical layout than I've ever been able to get from satellite (technically airplane) images. This hits the perfect "sweet spot" of detail with clear "cartoon" coloring.
I see a lot of criticism here that this isn't "pixel art", so maybe there's some better term to use. I don't know what to call this precise style -- it's almost pixel art without the pixels? -- but I love it. Serious congratulations.
Solar prices in the US are criminal, protecting oil and gas who bought all the politicians.
Canada here. 7.6 kW on our roof for $0 out of pocket thanks to a $5k grant and an $8k interest-free loan.
It makes 7.72 MWh per year, worth $1000. Tight valley, tons of snow.
We put that on the loan for 8 years, then get $1000 per year free money for 20 years or so. Biggest no brainer of all time.
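The arithmetic checks out (a quick sanity check; the per-kWh rate and total install cost are derived from the figures above, not stated):

```python
annual_kwh = 7720        # 7.72 MWh per year, per the comment
annual_value = 1000      # dollars of electricity per year
loan = 8000              # interest-free loan
grant = 5000             # grant covering the rest

implied_cost = loan + grant                # ~$13k installed system
implied_rate = annual_value / annual_kwh   # ~$0.13 per kWh
payback_years = loan / annual_value        # production clears the loan

print(implied_cost)             # 13000
print(round(implied_rate, 2))   # 0.13
print(payback_years)            # 8.0
```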
Dad in Victoria, Australia just got 10.6 kW fully installed and operational for $4000 AUD ($2,700 USD).
Australia has so much electricity during the day that they're talking about making it free for everyone in the middle of the day.
Of all the guns that rural Americans love, the humble foot-gun is the most beloved.
---
Someone else can argue the morality, ethics, economics, and politics of it all, but VERY simply, US Federal Government Agencies are machines for redistributing wealth from cities to rural areas.
Rural America voted quite heavily to stop those subsidies. That's what efficiency means.
---
Maturity means suspending judgement and listening to people you disagree with, but I feel that's very out of style these days.
Google quietly announced that Programmable Search (ex-Custom Search) won’t allow new engines to “search the entire web” anymore. New engines are capped at searching up to 50 domains, and existing full-web engines have until Jan 1, 2027 to transition.
If you actually need whole-web search, Google now points you to an “interest form” for enterprise solutions (Vertex AI Search etc.), with no public pricing and no guarantee they’ll even reply.
This seems like it effectively ends the era of indie / niche search engines being able to build on Google’s index. Anything that looks like general web search is getting pushed behind enterprise gates.
I haven’t seen much discussion about this yet, but for anyone who built a small search product on Programmable Search, this feels like a pretty big shift.
Curious if others here are affected or already planning alternatives.
UPDATE: I logged into Programmable Search and the message is even more explicit: Full web search via the "Search the entire web" feature will be discontinued within the next year. Please update your search engine to specify specific sites to search. With this link: https://support.google.com/programmable-search/answer/123971...
I've seen an interesting behavior in India. If I ask someone on the street for directions, they will always give me an answer, even if they don't know. If they don't know, they'll make something up.
This was strange. I asked a lot of Indian people about it and they said that it has to do with "saving face". Saying "I don't know" is a disgraceful thing. So if someone does not know the answer, they make something up instead.
Have you seen this?
This behavior appears in software projects as well. It's difficult to work like this.
A long time ago, SourceForge and then GitHub promoted into the current default a model of open source distribution which is not sustainable, and I doubt it is something the founding fathers of Free Software/Open Source had in mind. Open source licenses are about the freedom of using and modifying software. The movement grew out of frustration that commercial software cannot be freely improved and fixed by the user to better fit the user's needs. To create Free software, you ship sources together with your binaries under one of the OSI-approved licenses; that is all.

The currently default model of having an open issue tracker, accepting third-party pull requests, doing code reviews, providing support by email or chat, shipping timely security patches, etc., has nothing to do with open source and is not sustainable. This is OK for a hobby project as long as the author is having fun doing this work, but as soon as the software is used for commercial, production-critical systems, the default expectation that authors will promptly respond to new GitHub issues and bug reports and provide patches for free is insane. This is software support; it is a job; it should be paid.
Smells like an article from someone that didn’t really USE the XML ecosystem.
First, there is modeling ambiguity, too many ways to represent the same data structure. Which means you can’t parse into native structs but instead into a heavy DOM object and it sucks to interact with it.
Then, schemas sound great, until you run into DTD, XSD, and RelaxNG. RelaxNG only exists because XSD is pretty much incomprehensible.
Then let’s talk about entity escaping and CDATA. And how you break entire parsers because CDATA is a separate incantation on the DOM.
And in practice, XML is always over engineered. It’s the AbstractFactoryProxyBuilder of data formats. SOAP and WSDL are great examples of this, vs looking at a JSON response and simply understanding what it is.
I worked with XML and all the tooling around it for a long time. Zero interest in going back. It’s not the angle brackets or the serialization efficiency. It’s all of the above brain damage.
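The modeling-ambiguity point is easy to show with a toy example (the `user` record here is made up): the same data has at least two equally idiomatic XML shapes, so a consumer has to know which convention the producer chose, whereas JSON has essentially one obvious shape that maps straight onto a native structure.

```python
import json
import xml.etree.ElementTree as ET

# The same record, modeled two equally valid ways in XML.
xml_attr = '<user id="42"><name>Ada</name></user>'
xml_elem = '<user><id>42</id><name>Ada</name></user>'

# Reading the id requires knowing which convention was used.
id_from_attr = ET.fromstring(xml_attr).get("id")        # attribute
id_from_elem = ET.fromstring(xml_elem).find("id").text  # child element

# In JSON there is basically one way to write it, and it parses
# straight into a dict with typed values.
user = json.loads('{"id": 42, "name": "Ada"}')

print(id_from_attr, id_from_elem, user["id"])  # 42 42 42
```

Note also that both XML variants hand you back strings you still have to coerce to integers yourself, while the JSON path gives you an `int` for free.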