
Did ChatLLaMA just echo back what ChatGPT spat out? That's not particularly promising.


and then ChatGPT showed ChatLLaMA how it should have responded by roasting itself from LLaMA's perspective?

Now that's an alpha LLM flex.


I did exactly that. It makes sense given that it's a small model that hasn't been fine-tuned with RLHF.



