
My advice would be "don't expect them to do anything with LLMs or similar, so that when they don't, you won't be disappointed."

Expecting Apple (or any company) to chase the current hype is a recipe for disappointment (see Google and Bard, or Bing and its mistakes). Apple, with its very cautious curation of its brand image, will likely be some time out.

I would also point out that Apple's prominence in regulators' eyes would make it more hesitant to ship things that it may later have to open up.

Wait until after the regulatory dust has settled... and after the various lawsuits about copyright infringement, Section 230, and GPT have been resolved (https://www.marketplace.org/shows/marketplace-tech/chatgpt-i...).

I don't believe that Apple has any appetite for becoming more of a target for government regulators or for wading into untested legal waters. But that's my crystal ball; yours apparently sees different things.



I agree with the parent that the end user is who matters: Siri is just not very good at answering what seem like basic questions.

What made Google amazing was that it settled conversational disputes or provided instant (if limited) familiarity with a subject. Siri fails to provide verbal answers to relatively simple questions, instead referring people to their iPhones for "web results."

As an end user, the product's failure to understand or make sense of a user's intent is even harder to deal with in Home / HomeKit. I often find myself pulling up the Home app to hunt down and manually operate some accessory because voice requests keep failing.

Common patterns recur throughout a home full of HomeKit accessories and HomePods, and yet this AI is unable to offer reasonable suggestions for automation modifications, scene tweaks, or additional accessories.

Siri-based requests for songs or albums from Apple Music on HomePod are abysmal, returning covers, the wrong genre, or the wrong era, results my listening habits should weight heavily against.

It is just bad: architectural design be damned, the product fails under "normal" use. Outwardly, it looks like a MobileMe-level failure, where SJ asked at a town hall, "Can anyone tell me what MobileMe is supposed to do?"

All that said, I agree with this comment that it is a mistake to expect Apple to integrate an LLM built on any known model into its products.

Even if Apple wanted to, I don't know where the company could source data manicured enough to be "safe" to serve as a basis for responses by Siri.

It doesn't really matter to end users how they fix it.

The company's job is to drop the product or iterate until it figures out how to better satisfy the end user.



