Right, but are those going to run on Apple-owned hardware at all? It seems like Apple will first prioritize their models running on-device, then their models running on Apple Silicon servers, and only then fall back to ChatGPT API calls for the specific Siri requests they think ChatGPT can answer better.
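(Purely illustrative: a minimal Python sketch of that tiered fallback. The function names, stub results, and the confidence threshold are hypothetical stand-ins, not Apple's actual APIs.)

    from dataclasses import dataclass

    @dataclass
    class ModelResult:
        text: str
        confidence: float  # hypothetical quality score in [0, 1]

    # Stub tiers; real implementations would call actual models/services.
    def run_on_device_model(request: str) -> ModelResult:
        return ModelResult(text="local answer", confidence=0.4)

    def run_on_apple_silicon_server(request: str) -> ModelResult:
        return ModelResult(text="server answer", confidence=0.6)

    def call_chatgpt_api(request: str) -> str:
        return "ChatGPT answer"

    def answer(request: str, threshold: float = 0.8) -> str:
        # Tier 1: small model running on-device.
        result = run_on_device_model(request)
        if result.confidence >= threshold:
            return result.text
        # Tier 2: larger model on Apple Silicon servers.
        result = run_on_apple_silicon_server(request)
        if result.confidence >= threshold:
            return result.text
        # Tier 3: hand the request off to the ChatGPT API.
        return call_chatgpt_api(request)

    print(answer("What's playing at the cinema tonight?"))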

I'm sure OpenAI will need to beef up their hardware to handle these requests, filtered down as they are, coming from all of the Apple users who will now be prompting ChatGPT.


They're going to be using Nvidia (or maybe AMD, if they ever catch up) to train these models anyway.


Not necessarily. In terms of TFLOPS per dollar (at Apple's internal cost for GPUs, not consumer pricing) and TFLOPS per watt, their Apple Silicon is comparable if not better.
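To make those two metrics concrete, here's a trivial sketch; every number in it is an illustrative placeholder, not a real spec, price, or benchmark:

    # Illustrative only: placeholder numbers, not real specs or prices.
    accelerators = {
        # name: (sustained TFLOPS, watts, unit cost in $)
        "datacenter_gpu": (1000.0, 700.0, 30000.0),  # hypothetical consumer-priced GPU
        "apple_silicon":  (300.0, 150.0, 2000.0),    # hypothetical chip at internal cost
    }

    for name, (tflops, watts, dollars) in accelerators.items():
        print(f"{name:>14}: {tflops / dollars:.4f} TFLOPS/$, "
              f"{tflops / watts:.2f} TFLOPS/W")

The comment's point is in the denominators: Apple pays internal cost, not retail, which shifts the TFLOPS/$ comparison in their favor.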


> and TFLOPS per watt, their Apple Silicon is comparable if not better

If Apple currently ships a single product with better AI performance-per-watt than Blackwell, I will eat my hat.


FLOPS/$ is simply not all (or even most of) what matters when it comes to training LLMs. Apple publishes LLM research, and all of their models are trained on Nvidia hardware.

