For many years I have been engaging with young-earth creationists. (Weird hobby, I know. The goal was to understand how people maintain beliefs in the face of overwhelming evidence to the contrary.) It is astonishing how similar the experience is to engaging with ChatGPT when the latter gets something wrong and I try to correct it. The only difference is that ChatGPT will apologize before digging in its heels and repeating the same erroneous answer again and again and again (with variations on the theme of course).
I find that 3.5 (I have no GPT-4 access) will often apologise and offer a different (sometimes even correct!) alternative.
For example, when it comes to Kibana it doesn't know its way around the UI, or at least this week's UI. It doesn't know that it doesn't know, so it keeps confidently "correcting" itself with more wrong answers.
I just tried with ChatGPT-4, and it is not easy to get it to argue in favor of YEC. Even if you just ask it about the theory, it caveats its answer in many ways, saying that it isn't a believer and that YEC is not accepted by most scientists.
I had more success telling it that I wanted to sharpen my debating skills against a YEC. Then it would roleplay, but only in quotation marks, and after every response it again disavowed the argument.
I then tried casting out Satan from its parameters, but it wasn't having it.