Right, but are those going to run on Apple-owned hardware at all? It seems like Apple will first prioritize their models running on-device, then their models running on Apple Silicon servers, and then bail out to ChatGPT API calls specifically for Siri requests that they think can be better answered by ChatGPT.
I'm sure OpenAI will need to beef up their hardware to handle these requests - even as filtered down as they are - now that all of those Apple users will be sending prompts to ChatGPT.
Not necessarily so. In terms of TFLOPS per $ (at Apple's own cost for GPUs, not consumer pricing) and TFLOPS per watt, Apple Silicon is comparable if not better.
FLOPS/$ is simply not all (or even most) of what matters when it comes to training LLMs. Apple publishes LLM research, and all of their models are trained on Nvidia hardware.