...our approach to alignment research: we want to automate the alignment research work itself. A promising aspect of this approach is that it scales with the pace of AI development. As future models become increasingly intelligent and helpful as assistants, we will find better explanations.
The distance between "better explanations" and using them as input to prompts that automate self-improvement is very small, yes?
Let's see if that's the last prerequisite for exponential AGI growth...
Singularity, here we go...