No, wrong. Simply wrong. The human is NEVER out of the loop in any ML-based system. People really need to wrap their heads around this, because the debate is getting idiotic, imho.
Also, "more favourable" ratings only mean that the statistics are doing what statistics do. The "AI" (increasingly a bullshit term) samples toward the mean of its training data, and that mean sits above the average human's output, so of course humans prefer it.
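To make the "sampling toward the mean" point concrete, here's a toy simulation. All the numbers are made up for illustration (the quality scale, the curation cutoff of 40): if the training corpus is curated so its mean quality sits above the population average, a model that outputs near that training mean will beat a randomly drawn human more often than not.

```python
import random

random.seed(0)

# Hypothetical quality scores for all human writing.
population = [random.gauss(50, 15) for _ in range(100_000)]

# Curated training set: low-quality text filtered out, so its mean
# is above the population mean. The cutoff 40 is arbitrary.
training = [q for q in population if q > 40]

# A model sampling "toward the mean" of its training data.
model_output = sum(training) / len(training)
population_avg = sum(population) / len(population)

# How often does the model's output beat a random human's?
wins = sum(model_output > q for q in random.sample(population, 10_000))
print(f"training mean:   {model_output:.1f}")
print(f"population mean: {population_avg:.1f}")
print(f"model 'preferred' over a random human: {wins / 10_000:.0%}")
```

No intelligence required for the "more favourable" result: a curated mean plus regression toward it is enough.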
"The first principle is that when you're speaking with a Large Language Model (LLM), you are not speaking with an AI but with another human."
Thank you. You do understand (kind of... except it's not just the labellers; it's humans at every stage, top to bottom). The reasons you outline for humans guessing wrong are compelling.