
Interesting article, because it highlights what happens when large language models are used on their own. LLMs are amazing at mimicry and composition, and they're a key part of getting to great Q&A.

But on their own they have no notion of factual correctness. That's what excites me about what we're doing with Andi. The answers are not only well generated, but also do well on factual questions, especially given this is the first day live. Some non-GPT models we're using do well at this too.
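To make the point concrete, here's a toy sketch of the general idea of grounding a model's answers in retrieved sources rather than trusting generation alone. This is a generic illustration, not Andi's actual pipeline; the snippet store, `retrieve`, and `answer` are all hypothetical stand-ins:

```python
# Toy illustration of retrieval-grounded Q&A (not any product's real pipeline):
# only answer when a source snippet backs the claim; otherwise abstain.

SNIPPETS = {
    "capital of france": "Paris is the capital of France.",
    "boiling point of water": "Water boils at 100 degrees Celsius at sea level.",
}

def retrieve(query):
    """Naive keyword lookup standing in for a real search index."""
    key = query.lower().rstrip("?")
    return SNIPPETS.get(key)

def answer(query, generate):
    """Ground the answer in retrieved evidence; abstain when there is none."""
    snippet = retrieve(query)
    if snippet is None:
        return "I don't know."  # abstain instead of letting the model guess
    return generate(snippet)    # the model only rephrases grounded text

# A stand-in "language model" that simply echoes the evidence.
print(answer("Capital of France?", lambda s: s))  # -> Paris is the capital of France.
print(answer("Population of Mars?", lambda s: s))  # -> I don't know.
```

The key design choice is that the generator never answers from its own parameters alone; an unanswerable query produces an abstention rather than a confident hallucination.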

Are you doing much with language models at Kagi yet? It's a fun area to work on.
