I think another problem with the playgrounds is that they were paid. I didn't want to explore too much because I felt like I was constantly losing money just to mess around. Because ChatGPT is free, it's really opened the floodgates and lets everyone mess around with it.
That, and I suppose it wasn't too obvious on the playgrounds that GPT-3 had a huge amount of knowledge about really specific things. Like, I asked ChatGPT about Vite, and it knew a whole bunch. I wouldn't have thought to ask GPT-3 about Vite because it seemed like it was more intended to continue text I had already written - it didn't really seem like it had extensive external knowledge.
Definitely agree with this, and I might be wrong, but I wouldn't be surprised if Stable Diffusion made them choose this release model, since SD being accessible to everyone created more hype than the highly restricted release of DALL-E 2.
Still, keep in mind that this is only a temporary demo ("During the research preview, usage of ChatGPT is free" [0]); running inference on a language model of this size is expensive, and the model will likely be put behind a paywall soon.