What I struggle with, in deciding how impressive to find something like this, is that there are an awful lot of "here's the command" and "here's the output" examples and explanations for all this stuff out there, in man pages, tutorials, bug reports, and Stack Overflow questions and answers, that presumably went into the training data.

Obviously what's happening is much more complex, and more impressive, than just spitting back the exact things it has seen, since it can incorporate the specific context of previous prompts into its responses, among other things. But I don't know that it's necessarily different in kind from the stuff people ask it to do in terms of "write X in the style of Y."

None of this is to say it's not impressive. I've particularly been struck by the amount of "instruction following" the model does, something exercised heavily by the prompts people are using in this thread and in the article. I know OpenAI published an article earlier this year about their efforts and results at that time, specifically around training the models to follow instructions.
