
I can read my own writing without overfitting the neurons in my brain. The key, I think, is contextualization, something LLMs are already great at. The open question is how to use that contextualization ability during training.

The argument that LLMs can't possibly scale because of data contamination falls apart the moment we discover a method for incorporating in-context learning into the training loop.
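
To make the idea concrete, here is a minimal sketch (not from the comment, all names hypothetical) of one way "in-context learning inside the training loop" could look: each training document is paired with retrieved context, and the loss is computed only over the document tokens conditioned on that context, so the model is rewarded for predicting given context rather than for memorizing verbatim. It assumes a HuggingFace-style causal LM and tokenizer.

  import torch

  def context_conditioned_step(model, optimizer, tokenizer, context, document):
      """One hypothetical training step where the document is learned *in context*.

      Context tokens are masked out of the loss (label -100 is ignored by HF
      causal LMs), so the gradient only rewards predicting the document
      conditioned on the context.
      """
      ctx_ids = tokenizer(context, return_tensors="pt").input_ids
      doc_ids = tokenizer(document, return_tensors="pt").input_ids

      input_ids = torch.cat([ctx_ids, doc_ids], dim=1)
      labels = input_ids.clone()
      labels[:, : ctx_ids.shape[1]] = -100  # ignore context positions in the loss

      outputs = model(input_ids=input_ids, labels=labels)
      outputs.loss.backward()
      optimizer.step()
      optimizer.zero_grad()
      return outputs.loss.item()

Whether something like this actually prevents the contamination/overfitting problem at scale is exactly the open question.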


