
Great. Going meta with an introspective feedback loop.

Let's see if that's the last requisite for exponential AGI growth...

Singoolaretee here we go..............



There is no introspection here.


    ...our approach to alignment research: we want to automate the alignment research work itself. A promising aspect of this approach is that it scales with the pace of AI development. As future models become increasingly intelligent and helpful as assistants, we will find better explanations.
The distance between "better explanations" and using them as input to prompts that automate self-improvement is very small, yes?
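The loop that comment gestures at can be sketched as a toy: model-generated explanations are fed back in as context for the next prompt. This is a hypothetical illustration, not anything from the linked article; `query_model` is a stand-in for a real LLM API call, and the prompt format is invented.

```python
def query_model(prompt: str) -> str:
    # Placeholder for a real LLM call; here it just echoes the prompt.
    return f"explanation of ({prompt})"

def self_improvement_loop(task: str, rounds: int = 3) -> list[str]:
    """Feed each round's explanation back into the next prompt."""
    explanations = []
    prompt = task
    for _ in range(rounds):
        explanation = query_model(prompt)
        explanations.append(explanation)
        # The "introspective" step: prior output becomes new input.
        prompt = f"{task}\nPrior explanation: {explanation}"
    return explanations

history = self_improvement_loop("interpret neuron 42")
```

Whether such a loop actually improves anything depends entirely on the quality signal inside `query_model`, which is the part the article is about.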



