
Another example of how ChatGPT is general intelligence:

There is a ChatGPT detector here: https://detectchatgpt.com/

I asked ChatGPT to write a story (my prompt was just "Write a story about a pumpkin"). ChatGPT wrote a nice story, well, slightly weird, and the ChatGPT detector caught it with 99.96% confidence ("We estimate a 99.96% probability that this text was generated by ChatGPT or another GPT variant.") while a visual bar showing its confidence filled up all the way.

I next gave ChatGPT the instruction "Rewrite it so it is not detected as GPT output." (Thanks to a tip I saw on Reddit.)

Bear in mind that ChatGPT is a GPT variant. I am asking it to fool a test that, by definition, it cannot fool. It would be like asking you to figure out how to walk through a human detector without being detected as human.

I fully expected the site to identify it despite ChatGPT's best attempt at self-obfuscation, since by definition it is still outputting GPT output.

This time it passed the test. The bar dropped from a full 99.96% to just 15%.

It was the same story. ChatGPT just successfully passed the generic request to fool some detection algorithm it knows nothing about.
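For what it's worth, detectors in this class typically score statistical regularities in the text, such as low "burstiness" (unusually uniform sentence lengths), which a rewrite can disrupt without changing the story. Here is a toy sketch of that idea; it is an assumption about how such detectors tend to work in general, not detectchatgpt.com's actual algorithm:

```python
# Toy "GPT-likeness" heuristic based on burstiness: machine-generated
# text often has more uniform sentence lengths than human prose.
# This is an illustrative assumption, not any real detector's method.
import statistics

def burstiness_score(text: str) -> float:
    """Coefficient of variation of sentence length (in words).
    Lower score = more uniform = more 'GPT-like' under this toy model."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths) / statistics.mean(lengths)

# Same content, two phrasings: uniform sentences vs. varied ones.
uniform = ("The pumpkin sat in the field. The farmer walked over slowly. "
           "The sun was setting in the west. The pumpkin began to glow.")
varied = ("A pumpkin. The farmer, weary after a long day of hauling hay, "
          "trudged across the muddy field toward it. It glowed.")

print(burstiness_score(uniform) < burstiness_score(varied))  # True
```

Under this toy model, simply varying sentence lengths in a rewrite would lower the "GPT-like" score, which may be part of why the rewritten story dropped from 99.96% to 15%.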

Do you have any idea how much intelligence that takes? To successfully fool a test when you don't know how the test works, you don't even know which test I'm talking about, and you're just trying not to pass as what you really are, which in ChatGPT's case is a GPT variant?

That is the most extraordinary thing I've ever seen any computer do. It is by definition an impossible task - since in reality it is still ChatGPT. How can it fulfill the generic request to no longer be detectable?

I am blown away by the capabilities of this AGI.



