Hacker News | new | past | comments | ask | show | jobs | submit | login

Has everyone else been using a different tool than me? ChatGPT is interesting, but is laughably bad at any non-trivial instruction.

It's fantastic at generating content that at first glance looks remarkably right, but it consistently falls apart under close inspection.

All I've seen from LLMs is much better demos of the kinds of funny things Markov chains were generating two decades ago (anyone remember the various academic paper generators?). However, I have yet to see anything that stands out as really remarkable.
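For anyone who missed that era: the word-level Markov-chain generators being alluded to worked roughly like this. A minimal sketch (the function names and the two-word context window are my own illustrative choices, not any particular tool's): build a table mapping each n-gram to the words that followed it in a corpus, then walk that table at random.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` words to the words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Walk the chain, appending a random successor at each step."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:
            break  # dead end: this context never continued in the corpus
        out.append(rng.choice(successors))
    return " ".join(out)
```

The output is locally plausible (every two-word window really occurred in the corpus) but globally incoherent, which is the comparison being drawn.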

My read is that people want to see incredible progress towards strong AI, and LLMs do a great job of letting people feel like they're seeing that if they want to.

I suspect in 5 years we'll largely have forgotten about LLMs and in 10 they'll come back into popularity because techniques will become more efficient and computing power will increase enough that people can train them on their home PCs... but they'll still just be a fun novelty.



Really? I've found it to be extremely capable at very difficult tasks. For example, I had it guide me through using the ODBC SQL drivers (C++). It's also extremely good at generating fictional stories. Unlike other AI solutions, it retains a lot of context. For example, it generated one story that mostly used generic names like "the king" or "evil wizard", but I was able to get it to fill in those names conversationally, not by modifying the original prompt like you'd need to do with plain GPT-3.


> ChatGPT is interesting, but is laughably bad at any non-trivial instruction.

Have you actually looked at the linked Cheat Sheet?




