
They did talk about Siri being better at voice recognition using Apple's own on-device models, so I imagine that will eventually apply more broadly.


On-device models will not be big enough in the near future. What makes ChatGPT so awesome at recognition is that its model is huge, so no matter how obscure the topic of the dictation, it knows what you're talking about.


Apple also talked about Private Cloud Compute, which lets larger server-side models integrate with the local AI models and workflows. It sounds like they will figure out which features require bigger models and which don't, so I think there is a lot of room for what you're describing as this AI platform evolves.

Plus, they talked about live phone call transcription, voice transcription in Notes, the ability to correct words as you speak, contextual conversations with Siri, etc. It 100% sounds like better voice recognition is coming.
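
For what it's worth, iOS already exposes on-device recognition through the Speech framework, so the plumbing for this exists today. A minimal sketch, assuming speech-recognition permission has already been granted and audioFileURL is a placeholder for a recording you want transcribed:

    import Speech

    // Assumes SFSpeechRecognizer.requestAuthorization has already returned .authorized.
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en_US")) else {
        fatalError("Speech recognition is unavailable for this locale")
    }

    let request = SFSpeechURLRecognitionRequest(url: audioFileURL) // placeholder URL

    // Keep the audio on the phone when the device and locale support it;
    // otherwise recognition falls back to Apple's servers.
    if recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true
    }

    recognizer.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }

The quality gap people complain about is really about the size of the model behind an API like this versus a huge server-side one, which is exactly where Private Cloud Compute could help.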


Pretty sure transcription is done locally on Pixel phones, and it's pretty good. Not as good as ChatGPT, but most of the way there. If current iOS is like a 50, Pixel is like a 90 and OpenAI is like a 98.



