The Turing Test is a well-known concept: an algorithm is supposed to be a hallmark of “good enough” AI if it can converse with a human such that the human cannot tell it apart from another human any better than by calling a coin flip.
But do you know humans who could pass a reverse Turing test? Unintentionally? People whose responses remind you of a computer’s: present them with the necessary inputs and conditions and you get a predictable, useful response. But remove or fuzz out some of the starting information and all you get is a blank stare, if not a meltdown. It can be maddening in real-world situations, where all the necessary information is never available up front.
By the way, the greatest chess players have no leg up on even mediocre poker players in this respect: chess is a game of complete information, while poker demands decisions made with cards you cannot see.