You say you can deliver the Straight Dope on any topic. Try this one. Who am I? or Who is the knower? or What is consciousness? (All the same question, roughly.) If consciousness (which we all experience intimately) is merely an epiphenomenon of the mind, which is an epiphenomenon of the brain, then there must be a physical mechanism in the brain that accounts for it. But then the same question can be (and must be) asked again: What submechanism within the broader mechanism is responsible for consciousness? –Jeremy Fields, Evanston


Though small minds might consider it a thumb sucker, inquiry into consciousness has been one of the central debates in the field of artificial intelligence. In 1950, when “thinking machines” first seemed a real possibility, computer pioneer Alan Turing reasoned that since consciousness is subjective and thus inscrutable, the only way we can know if a computer is intelligent is to ask it questions. If its answers can’t be distinguished from those of a human, the computer has to be considered intelligent.

No way, said the skeptics. The best-known argument, formulated in 1980 by the philosopher John Searle, went like this: Suppose I’m locked in a black box with two slots in it marked “Input” and “Output.” Pieces of paper with black squiggles on them are periodically shoved through the Input slot. My job is to look up the squiggles in a rule book I’ve been given and shove pieces of paper marked with other black squiggles through the Output slot as the rule book directs. Unbeknownst to me, the input squiggles are questions written in Chinese, and the rule book has me producing sensible Chinese answers. To an observer outside, the box understands Chinese; I, meanwhile, don’t understand a word of it. A computer that passes Turing’s test, Searle argued, is in the same boat: it shuffles symbols without understanding anything.
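Searle’s rule book, stripped of the philosophy, is just a lookup table: match the incoming squiggles, emit the squiggles the book dictates, understand nothing. A minimal sketch (the squiggle strings here are invented placeholders, not Searle’s actual examples):

```python
# The "rule book": a pure shape-matching table. The operator never
# interprets the symbols; they only match patterns and copy out answers.
RULE_BOOK = {
    "squiggle-A": "squoggle-X",
    "squiggle-B": "squoggle-Y",
}

def process_slip(squiggles: str) -> str:
    """Return whatever output the rule book directs for this input.

    No meaning is involved anywhere in this function -- which is
    exactly Searle's point about symbol manipulation.
    """
    return RULE_BOOK.get(squiggles, "squoggle-unknown")
```

The table could be arbitrarily large and the answers arbitrarily convincing without changing the essential point: nothing in the lookup ever touches what the symbols mean.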

Of course, Searle allowed, artificial intelligence may be possible. It’s just not likely to arise from computers as currently understood.