Why do you believe the difference is due to quantization rather than all the fine-tuning they did? Given the difference between GPT-3 and GPT-3.5 - pretty much all of which comes from fine-tuning and RLHF - I find the latter explanation much easier to believe.