
I think the main problem in estimating projects is unknown unknowns.

I find that the best approach to solving that is taking a “tracer-bullet” approach. You make an initial end-to-end PoC that explores all the tricky bits of your project.

Making estimates then becomes quite a bit more tractable (though still has its limits and uncertainty, of course). Conversations about where to cut scope will also be easier.


But how long will it take you to make that PoC? Any idea? :P

Yeah, I have written multiple almost entirely vibe-coded linters since Claude Code came out, and they provide very high value.

It’s kind of a best-case use case - linters are generally small and easy to test.

It’s also worth noting that linters now effectively have automagical autofix - just run an agent with “fix the lints”. Again, one of the best case scenarios, with a very tight feedback loop for the agent, sparing you a large amount of boring work.
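As a rough sketch of what that looks like, assuming the Claude Code CLI's non-interactive mode (the linter invocation and file names here are made up for illustration, and you may need to configure tool permissions):

    # Dump the lint findings, then let the agent fix them all.
    ./run-linter ./... > lints.txt || true
    claude -p "Fix every lint finding listed in lints.txt"

The linter itself then serves as the tight feedback loop - the agent can keep re-running it until everything passes.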


I bought a W-OLED monitor for office work and gaming, being very happy with my OLED TV. I returned it after a couple of days.

I got unbearable eye strain from it, even though I use rather large fonts and the PPD (pixels per degree) was the same as with my previous IPS. And yes, the “more fuzzy” text was very much noticeable too.

Maybe it varies by person, maybe it’s influenced by things like astigmatism, but I totally see where the author is coming from, and I too am waiting for the new OLED panels to see if there’s an improvement.


(Author here.)

I do have astigmatism. You do make me wonder if this plays a part as well...


In my experience, it seems to. My astigmatism (or other eye stuff) seems to shift different colours by different amounts, leading to wider RGB pixels and making things like ClearType so much worse. So people were enjoying ClearType while I was hating the obvious colour changes and fringes that somehow they weren't seeing. I assume some people are lucky enough to have aberrations that actually make ClearType more pleasant.


I do too. Combined with progressive lenses and I have significant chromatic aberration issues. Blue and red pixels require different focus, which is sometimes an issue when solid blues and reds are on screen in close proximity. I turn off pure blue colors in my terminal emulator, for example.


That sounds familiar. I also have ever so slight green-brown color blindness. It's only really noticeable in low light (like in the woods in evenings), but that could well all stack up to be a problem.

I also have significant problems with blue LEDs around the house, to the point where I've removed, replaced, or covered almost all of them. They really, really bother me because it feels like my eyes never focus on them and they leave me feeling slightly disoriented.


I’ve gone through this series of videos earlier this year.

In the past I’ve gone through many “educational resources” about deep neural networks - books, Coursera courses (yeah, that one), a university class, the fast.ai course - but I don’t work with them at all in my day-to-day.

This series of videos was by far the best, most “intuition building”, highest signal-to-noise ratio, and least “annoying” content to get through. Could of course be that his way of teaching just clicks with me, but in general - very strong recommend. It’s the primary resource I now recommend when someone wants to get into lower level details of DNNs.


Karpathy has a great intuitive style, but sometimes it's too dumbed down. If you come from adjacent fields it might drag a bit, but it's always entertaining.


>Karpathy has a great intuitive style, but sometimes it's too dumbed down

As someone who has tried some teaching in the past, it's basically impossible to teach to an audience with a wide array of experience and knowledge. I think you need to define your intended audience as narrowly as possible, teach them, and just accept that more knowledgeable folk may be bored and less knowledgeable folk may be lost.


When I was an instructor for courses like "Intro to Programming", this was definitely the case. The students ranged from "have never programmed before" to "I've been writing games in my spare time", but because it was a prerequisite for other courses, they all had to do it.

Teaching the class was a pain in the ass! What seemed to work was to do the intro stuff, and periodically throw a bone to the smartasses. Once I had them on my side, it became smooth sailing.


I think this is where LLM-assisted education is going to shine.

An LLM is the perfect tool to fill the little gaps that you need to fill to understand that one explanation that's almost at your level, but not quite.


Spacelift | Remote (Europe) | Full-time | Senior Software Engineer | $80k-$110k+ (can go higher)

We're a VC-funded startup (recently raised $51M Series C) building an infrastructure orchestrator and collaborative management platform for Infrastructure-as-Code – from OpenTofu, Terraform, Terragrunt, CloudFormation, Pulumi, Kubernetes, to Ansible.

On the backend we're using 100% Go with AWS primitives. We're looking for backend developers who like doing DevOps'y stuff sometimes (because in a way it's the spirit of our company), or who have experience with the cloud-native ecosystem. Ideally you'd have experience working with an IaC tool, e.g. Terraform, Pulumi, Ansible, CloudFormation, Kubernetes, or SaltStack.

Overall, we have a deeply technical product, we're trying to build something customers love to use, and we have a lot of happy, satisfied customers. We promise interesting work, the ability to open-source parts of the project which don't give us a business advantage, as well as healthy working hours.

If that sounds like fun to you, please apply at https://careers.spacelift.io/jobs/3006934-software-engineer-...

You can find out more about the product we're building at https://spacelift.io and also see our engineering blog for a few technical blog posts of ours: https://spacelift.io/blog/engineering

Additionally, we're hiring for a new product we're building, Flows. Mostly the same requirements and tech stack, without the devops bits. You can see a demo of Flows and apply for it here: https://careers.spacelift.io/jobs/6438380-product-software-e...


> If AI coding is so great and is going to take us to 10x or 100x productivity

That seems to be a strawman here, no? Sure, there exist people/companies claiming 10x-100x productivity improvements. I agree it's bullshit.

But the article doesn't seem to be claiming anything like this - it's showing the use of vibe-coding for a small personalized side-project, something that's completely valid, sensible, and a perfect use-case for vibe-coding.


That’s really cool, and a great use-case for vibe coding!

I’ve been vibe-coding a personalized outliner app in Rust based on gpui and CRDTs (loro.dev) over the last couple days - something just for me, and in a big part just to explore the problem space - and so far it’s been very nice and fun.

Especially for exploring multiple approaches, because exploring an approach just means leaving the laptop working unattended for an hour and then seeing the result.

Often I would have it write up a design doc with todos for a feature I wanted based on its exploration, and then just launch a bash for loop that launches Claude with “work on phase $i” (with some extra boilerplate instructions), which would have it occupied for a while.
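Roughly like this, as a sketch (the doc name, phase count, and prompt are illustrative rather than my exact setup, and unattended runs assume you're comfortable auto-approving tool use):

    # One fresh, non-interactive Claude Code session per phase of the design doc.
    for i in 1 2 3 4; do
      claude -p "Read DESIGN.md and complete phase $i. Stick to the existing code style and run the tests before finishing." \
        --dangerously-skip-permissions
    done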


I agree with you as far as project size for vibe-coding goes - as in, often not even looking at the generated code.

But I have no issues with using Claude Code to write code in larger projects, including adapting to existing patterns, it’s just not vibe coding - I architect the modules, and I know more or less exactly what I want the end result to be. I review all code in detail to make sure it’s precisely what I want. You just have to write good instructions and manage the context well (give it sample code to reference, have agent.md files for guidance, etc.)


> I know more or less exactly what I want the end result to be

This is key.

And this is also why AI doesn't work that well for me. I don't know yet how I want it to work. Part of the work I do is discovering this, so it can be defined.


I've found this to be the case as well. My typical workflow (sketched below) is:

1. Have the ai come up with an implementation plan based on my requirements

2. Iterate on the implementation plan / tweak as needed, and write it to a markdown file

3. Have it implement the above plan based on the markdown file.

On projects where we split up the task into well defined, smaller tickets, this works pretty well. For larger stuff that is less well defined, I do feel like it's less efficient, but to be fair, I am also less efficient when building this stuff myself. For both humans and robots, smaller, well defined tickets are better for both development and code review.
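As a minimal sketch of that loop with the Claude Code CLI (file names and prompts are illustrative):

    # 1. Draft the plan into a markdown file.
    claude -p "Draft an implementation plan for the feature in TICKET.md and write it to PLAN.md"

    # 2. Review and tweak PLAN.md by hand (or iterate on it with the agent).

    # 3. Implement against the reviewed plan.
    claude -p "Implement PLAN.md step by step, checking off items as you go"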


Yeah, this exactly. And if the AI wanders in confusion during #3, it means the plan isn’t well-defined enough.


There actually is a term for this: LLM-assisted coding/engineering. Unfortunately it has been pushed aside by the fake influencer & PR term "vibe coding", which conflates coding with unknowledgeable people just jerking the slot machine.


Sounds like so much work just not to write it yourself.


Getting it right definitely takes some time and finesse, but when it works, you spend 30 minutes to get 4-24+ hours' worth of code.

And usually that code contains at least one or two insights you would not normally have considered, but that makes perfect sense, given the situation.


I checked out Codex after the glowing reviews here around September/October, and it was, all in all, a letdown (this was for writing greenfield modules in a larger existing codebase).

Codex was very context-efficient, but also slow (though I used the highest thinking effort), and barely adapted to the wider codebase at all (even when I pointed it at the files to reference / get inspired by). Lots of defensive programming, hacky implementations, and no adapting to the codebase’s style and patterns.

With Claude Code and starting each conversation by referencing a couple existing files, I am able to get it to write code mostly like I would’ve written it. It adapts to existing patterns, adjusts to the code style, etc. I can steer it very well.

And now with the new, cheaper, faster Opus, it’s also quite an improvement. If you kicked off Sonnet with a long list of constraints (e.g. 20), it would often ignore many of them. Opus is much better at “keeping more in mind” while writing the code.

Note: yes, I do also have an agent.md / claude.md. But I also heavily rely on warming the context up with some context dumping at conversation starts.
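For illustration, a context-warming opener might look something like this (the paths are made up):

    # Point the agent at reference code before asking for anything.
    claude "Read internal/billing/invoices.go and internal/billing/plans.go first. Match their structure, naming, and error handling. Then wait for my instructions."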


All Codex conversations need to be caveated with the model used, because it varies significantly. Codex requires very little tweaking, but you do need to select the highest-thinking model if you’re writing code, and I recommend the highest-thinking NON-code model for planning. That’s really it - it takes task time up to 5-20 minutes, but it’s usually great.

Then I ask Opus to take a pass and clean things up to match the codebase’s conventions, and that’s usually sufficient. Most of what I do now is write detailed briefs for Codex, which is…fine.


I'll jump between a ChatGPT window and a VS Code window with the Codex plugin. I'll create an initial prompt in ChatGPT, which asks the coding agent to audit the current implementation, then draft an implementation plan. The plan bounces between Chat and Codex about 5 times, with Chat telling Codex how to improve. Then Codex implements and creates an implementation summary, which I give to Chat. Chat then asks for a couple of fixes, and then it's done.


Why the non-code model? Also, 5-20 minutes?! I guess I don’t know what kind of code you’re writing, but for my web app backends/frontends, planning takes 2-5 minutes tops with Sonnet, and I have yet to feel the need to even try Opus.


I probably write overly detailed starting prompts but it means I get pretty aligned results. It does take longer but I try to think through the implementation first before the planning starts.


In my experience Sonnet > Opus, so it’s no surprise you don’t “need” Opus. They charge a premium on Sonnet now instead.


Yeah, I uninstalled and reinstalled with Homebrew, and it’s working well now.

