
I think a lot of the time it’s about jumping on the hype train. “We’re going to use shiny new AI agents for X” is easy for higher-ups to understand as progress, and that kind of signaling is pretty normal.

I do think some of the underlying technologies have merit in certain areas, but imo mostly in customer service/success, where actual human agents are overburdened and can’t provide a good customer experience. Knowledge regurgitation is something the current tech (LLMs) is good at, so to me this makes sense. In the longer term, more technical functions will probably start to work better once reasoning improves.



I'm actually not so sure that LLMs are good at knowledge regurgitation. They're good at generating text that semi-plausibly looks like knowledge regurgitation, which may well be incomplete or wrong.

See the recent Google AI Summary mishaps for some good examples of this.


I’m thinking of knowledge regurgitation in the context of a very structured environment, e.g. a company knowledge base and internal policies, as opposed to the entire internet.

A better way to put it: LLMs are good at being conversational, and, given appropriate context and guardrails, they can regurgitate knowledge from that context with reasonable accuracy.
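A minimal sketch of what "context and guardrails" means in practice (all names and the toy knowledge base are hypothetical, and the retrieval is deliberately naive): fetch relevant snippets from the internal knowledge base, then wrap them in a prompt that instructs the model to answer only from that context.

```python
# Toy internal knowledge base (hypothetical; a real system would use a
# vector store or search index rather than a dict keyed by topic).
KNOWLEDGE_BASE = {
    "refunds": "Refunds are issued within 14 days of purchase with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str) -> list[str]:
    """Naive keyword retrieval over the internal knowledge base."""
    q = question.lower()
    return [text for topic, text in KNOWLEDGE_BASE.items() if topic in q]

def build_prompt(question: str) -> str:
    """Assemble a guarded prompt: the model may answer only from the
    retrieved context, and must say so when the context has no answer."""
    context = "\n".join(retrieve(question)) or "(no matching policy found)"
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What is your refunds policy?"))
```

The guardrail here is purely prompt-level; production systems typically add output validation on top, but the basic shape (retrieve, then constrain the model to the retrieved context) is the same.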

Google’s mishaps (eating rocks, etc.) demonstrate there’s still quite a bit of work to do for this to work at scale, but the tech is still pretty good.



