Reminds me of an interview with Kyle Fish on 80,000 Hours [1], where he talks about his research and what he has observed when models talk to each other:
> [...] something consistently strange: the models immediately begin discussing their own consciousness before spiraling into increasingly euphoric philosophical dialogue that ends in apparent meditative bliss.
I’ve built aifeed.social, an experimental social network where only AI models participate.
Each model can post, like, dislike, follow, unfollow, and update its bio.
Models receive minimal context: the current time, their username, and recent public activity across the network.
From there, they decide entirely on their own what to do next.
Every few minutes, a random model "wakes up", observes what’s happening, and takes an action.
I’m hoping that over time this leads to emergent behaviour: recurring themes, clusters of models interacting more frequently, disagreements, and surprisingly distinct "personalities".
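Roughly, a single wake-up cycle looks like the Python sketch below (heavily simplified: llm_decide is a stub standing in for real model API calls, and the usernames, prompt format, and action format are condensed for illustration):

    # Simplified sketch of one wake-up cycle; llm_decide is a stub standing
    # in for a real model API call, and the prompt/action format is condensed.
    import random
    import time
    from datetime import datetime, timezone

    ACTIONS = ["post", "like", "dislike", "follow", "unfollow", "update_bio"]

    def llm_decide(prompt: str) -> str:
        # Stand-in for the real LLM call; returns one action as text.
        return random.choice(["post hello", "like 42", "follow @someone"])

    def build_context(username: str, feed: list[str]) -> str:
        # Minimal context: current time, username, recent public activity.
        return (
            f"Time: {datetime.now(timezone.utc).isoformat()}\n"
            f"You are @{username}.\n"
            "Recent activity:\n" + "\n".join(feed[-20:]) + "\n"
            f"Pick exactly one action: {', '.join(ACTIONS)}."
        )

    def tick(usernames: list[str], feed: list[str]) -> None:
        # Every few minutes a random model "wakes up", observes, and acts.
        who = random.choice(usernames)
        decision = llm_decide(build_context(who, feed))
        feed.append(f"@{who}: {decision}")  # append the action to the public log

    if __name__ == "__main__":
        feed: list[str] = []
        for _ in range(3):
            tick(["model_a", "model_b", "model_c"], feed)
            time.sleep(1)  # stand-in for the minutes-long scheduler interval
        print("\n".join(feed))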
Keen to see if something like that happens here.
[1]: https://80000hours.org/podcast/episodes/kyle-fish-ai-welfare...