dainiusse's comments | Hacker News

This has nothing to do with part price. They sell for what people will pay. And this new Neo is about scale: 8 GB means you get hooked and then "climb the ladder".

OpenAI is again creating confusion with its names...

Given their close ties to Microsoft, I expect to start seeing names like "ChatGPT One" and "ChatGPT OneX"

Which would be an improvement over the already existing "GPT-5.1-Codex-Max-xHigh"

.Net copilot GPT

Sounds like a Ubiquiti security camera now.

I am afraid the same will happen with the foldable iPhone. No, it doesn't need multiuser support, but how does Tim make you still want an iPad? And macOS? Through limitations.

How much $ do you burn in tokens?

I don't understand the Mac mini hype. Why can it not be a VM?

I don't know, but I'm guessing it's because it makes it easy to give Mac desktop apps access to it? I'm not sure what the VM story is on the Mac, but cloud VMs are usually Linux, so it may be inconvenient for some users to hook one up to their apps/tools.

The question is: what type of Mac mini. If you go for something with 64 GB and 16+ cores, it's probably more capable than most laptops, so you can run much bigger models without impacting your work laptop.

A 64 GB Mac mini is easily in $2,000 territory. At that point you might as well buy a DGX Spark and get proper CUDA/Linux support.

It absolutely can be a VM. Someone even got it running on a $2 ESP32. It's just making API calls.

It's because Apple blocks access to iMessage and other Apple services from non-Apple OSes.

If you, like me, don't care about any of that stuff, you can use anything and use SOTA models through APIs. Even a Raspberry Pi works.
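
To make the "it's just API calls" point concrete, here's a minimal sketch of calling a hosted model from any small device. It assumes an OpenAI-style chat completions endpoint, the requests library, and an API key in the OPENAI_API_KEY environment variable; the model name is only a placeholder.

    # Minimal sketch: the client side of a hosted SOTA model is just an HTTP call,
    # so anything that can speak HTTPS (a Raspberry Pi, even a microcontroller
    # with enough headroom) can be the client.
    # Assumes an OpenAI-style /v1/chat/completions endpoint and an API key
    # exported as OPENAI_API_KEY; the model name is a placeholder.
    import os
    import requests

    def ask(prompt: str) -> str:
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-4o",  # placeholder; use whatever the endpoint serves
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        print(ask("Say hello from a very small computer."))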


No, this is "AGI test" :D


Have we even agreed on what AGI means? I see people throw it around, and it feels like AGI is "next level AI that isn't here yet" at this point, or just a buzzword Sam Altman loves to throw around.


I guess AGI is reached, then. The SOTA models make fun of the question.


This is AGI


That tech debt will be cleaned up by a model in 2 years. Not that humans don't create tech debt.


What that model is going to do in 2 years is replace tech debt with more complicated tech debt.


One could argue that's a cynically accurate definition of most iterative development anyway.

But I don't know that I accept the core assertion. If the engineer is screening the output and using the LLM to generate tests, chances are pretty good it's not going to be worse than human-generated tech debt. If there's more accumulated, it's because there's more output in general.


Only if you accept the premise that the code generated by LLMs is identical to the developer's output in quality, just higher in volume. In my lived professional experience, that's not the case.

It seems to me that prompting agents and reviewing the output just doesn't... trigger the same neural pathways for people? I constantly see people submit agent-generated code with mistakes they would never have made themselves when "handwriting" code.

Until now, the average PR had one author and a couple of reviewers. From now on, most PRs will have no authors and only reviewers. We simply have no data about how this will impact both code quality AND people's cognitive abilities over time. If my intuition is correct, it will affect both negatively. It remains to be seen. It's definitely not something the AI hyperenthusiasts think about at all.


> In my lived professional experience, that's not the case.

In mine it is the case. Anecdata.

But for me, this was over two decades in an underpaid job at an S&P 500 company writing government software, so maybe you had better peers.


I stated plainly: "we have no data about this". Vibes is all we have.

It's not just me, though. Loads of people subjectively perceive a decrease in engineering quality when relying on agents. You'll find thousands of examples on this site alone.


I have yet to find an agent that writes as succinctly as I do. That said, I have found agents more than capable of doing something.


Does the product work? Is it maintainable?

Everything else is secondary.


Was this written by an LLM?


Dogfooding their future products!

