In Steven Pinker's "How the Mind Works" (Norton, 1997) he gives many examples of what the human mind does easily and what it does poorly. A theme running through the book is that since Homo sapiens evolved from tree-living primates that moved onto the savannahs, our minds are highly adapted to that environment but do less well at tasks arising from situations very foreign to such a life. In chapter 5 he discusses the mind's perception of probability, or more accurately likelihood, since early man did not think of it quantitatively. He writes: "The founders of probability theory, like the founders of logic, assumed they were just formulating common sense... But then why do people often seem to be 'probability blind', in the words of Massimo Piattelli-Palmarini? Many mathematicians and scientists have bemoaned the innumeracy of ordinary people when they reason about risk. The psychologists Amos Tversky and Daniel Kahneman have amassed ingenious demonstrations of how people's intuitive grasp of chance appears to flout the elementary canons of probability theory. Here are some famous examples:" [Part of the explanation for this is that humans evolved in an environment containing few examples of *independent random events* that were of concern to them. Thus, if the weather had been bad for a series of days, they learned to *expect* that the likelihood of good weather increased during that period. This is totally unlike the case of flipping a coin, in which the events are statistically independent, yet people *expect* that after a long run of heads the likelihood of a tail is increased.] The example that particularly caught my attention is this: "This problem was given to sixty students and staff members at Harvard Medical School: 'If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person's symptoms or signs?' The most popular answer was 0.95. The average answer was 0.56. The correct answer is 0.02, and only eighteen percent of the experts guessed it." Can you confirm that alleged "correct answer"?
Ok, I'm not a math whiz, but out of an average group of 1000 people, 1 person will have the disease. But 50 people in the group will test positive for it and not really have it. One divided by 50 is 0.02, but that one person with the disease can't be both infected by the disease and have been falsely tested positive. The two conditions are mutually exclusive. So it should be 1/51, which comes to 0.0196.
Thanks for joining in... your answer is correct, but only by making an additional assumption. The problem is unsolvable as stated: one also has to know the probability that a person who has the disease will test positive. If we assume that is 1.0, then the exact probability is 1/50.95, very close to your answer. But that assumption need not hold. If the probability of a positive result when the person does have the disease is also 5% - that is, if the positive rate is 5% whether a person has the disease or not - then the probability that a person testing positive has the disease is 0.001, the same as the prevalence. (The problem is one involving what is known as Bayesian estimation.)
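For anyone who wants to check the arithmetic, here is a small Python sketch of the Bayesian calculation (the names are mine, and the sensitivity parameter is exactly the assumption the problem statement leaves out):

    # P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
    prevalence = 0.001          # 1 in 1000 people have the disease
    false_positive_rate = 0.05  # P(positive | no disease)

    def p_disease_given_positive(sensitivity):
        # sensitivity = P(positive | disease), the missing assumption
        p_positive = (sensitivity * prevalence
                      + false_positive_rate * (1 - prevalence))
        return sensitivity * prevalence / p_positive

    print(p_disease_given_positive(1.0))   # ~0.0196, i.e. 1/50.95
    print(p_disease_given_positive(0.05))  # 0.001, the same as the prevalence

With a sensitivity of 1.0 this also reproduces the earlier one-in-roughly-51 frequency argument.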
This one comes from _A Mathematician Reads the Newspaper_ by John Allen Paulos, and concerns how, in real situations, random choices can make more sense than deterministic choices. Here's the situation: "A pitcher and a batter are facing each other. The pitcher can throw either a fast ball or a curve ball. If the batter is prepared for a fast ball, he averages 0.500 against such pitches but, thus prepared, he only averages 0.100 against a curve ball. If he's prepared for a curve ball, however, the batter averages 0.400 against them, but he only averages 0.200 against fast balls in this case. Based on these probabilities, the pitcher must decide which pitch to throw and the batter must anticipate this decision and prepare accordingly." Since any deterministic strategy could be analyzed and countered, the best strategy for both is to choose completely randomly (and independently) whether to throw a fast or curve ball, and whether to prepare for one or the other. The question is: with what probability should the pitcher choose to throw a fast ball to minimize the probability of the batter getting a hit, and with what probability should the batter choose to prepare for a fast ball to maximize his probability of getting a hit? The author gives the answer but completely omits the *mathematics*, which I thought would be the main content of the book. His point, though, is that there are many situations where the best strategy is to flip a coin - or choose a random number - rather than create some deterministic strategy: in labor conflicts, market battles, sports (as here), in war, etc. Since he didn't give the math for calculating the answer, I spent a good part of the evening doing it. It again demonstrated to me that good perceptions of probability are not inherent in our psyches.
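Since the thread is about the math, here is the calculation sketched as a short Python script (the notation is mine, not the author's). The key fact about the optimal mixtures is that each side makes the other indifferent:

    # Batting averages from the problem: rows = what the batter prepares for
    # (fast, curve); columns = what the pitcher throws (fast, curve).
    H = [[0.500, 0.100],
         [0.200, 0.400]]

    denom = H[0][0] - H[0][1] - H[1][0] + H[1][1]   # 0.6

    # Pitcher throws a fast ball with probability p, chosen so the batter's
    # expected average is the same whichever pitch he prepares for:
    #   0.5*p + 0.1*(1-p) = 0.2*p + 0.4*(1-p)  ->  p = 0.5
    p = (H[1][1] - H[0][1]) / denom

    # Batter prepares for a fast ball with probability q, chosen so the
    # pitcher gains nothing by favoring either pitch:
    #   0.5*q + 0.2*(1-q) = 0.1*q + 0.4*(1-q)  ->  q = 1/3
    q = (H[1][1] - H[1][0]) / denom

    value = q * (p*H[0][0] + (1-p)*H[0][1]) + (1-q) * (p*H[1][0] + (1-p)*H[1][1])
    print(p, q, value)   # 0.5, 0.333..., 0.300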
I was hoping that some baseball aficionados, who like to think about sports probabilities, would join in. This problem has turned out to be much more interesting than the author made it in the cited book. The answer given by that author is that the pitcher should throw pitches that are randomly fast balls and curve balls with a probability of 0.5, and the batter should prepare for fast balls 1/3 of the time and for curve balls 2/3 of the time, also chosen randomly. With the hit probabilities given in the question, the probability of a hit is then 0.300. However, I have looked at the probability of a hit over the full sample space of throwing and hitting strategies for the pitcher and batter, and it turns out that:

1. At the pitcher's optimum point given above, 0.5, the probability of a hit does not depend upon the batter's choice of his probability of preparing for a fast ball - it remains 0.300 whether the batter prepares for a fast ball none of the time or all of the time.

2. Likewise, at the batter's optimum point, 1/3, the probability of a hit does not depend upon the pitcher's choice of his probability of throwing a fast ball - it remains 0.300 whether the pitcher throws a fast ball none of the time or all of the time.

3. HOWEVER - if the pitcher increases his probability of throwing a fast ball from 0.5 to, say, 0.6, and the batter detects this, then the optimum strategy for the batter is to prepare for ALL fast balls, in which case his probability of getting a hit increases to 0.340.

4. LIKEWISE - if the batter increases his probability of preparing for a fast ball from 1/3 to, say, 0.400, and the pitcher detects this, then the optimum strategy for the pitcher is to throw ALL curves, in which case the probability of the batter getting a hit decreases to 0.280.

Are any of these results obvious to baseball fans, given the probabilities originally stated? (I realize that the conditions of the problem statement are unrealistic, as it assumes the same batter and pitcher are continually "up", that those probabilities remain the same, and that it is even possible for a batter to "prepare" for a fast vs. curve ball -- but then, this was a contrived example for the author's thesis, that a random strategy in some cases may be better than a deterministic strategy.)
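All four claims are easy to check numerically. Continuing the earlier sketch (again, my own notation), the expected batting average as a function of both mixing probabilities is:

    def expected_avg(p, q):
        # p = P(pitcher throws a fast ball), q = P(batter prepares for a fast ball)
        return (q * (p*0.500 + (1-p)*0.100)
                + (1-q) * (p*0.200 + (1-p)*0.400))

    # 1. With p = 0.5 the batter's choice of q does not matter:
    print(expected_avg(0.5, 0.0), expected_avg(0.5, 1.0))   # 0.300, 0.300
    # 2. With q = 1/3 the pitcher's choice of p does not matter:
    print(expected_avg(0.0, 1/3), expected_avg(1.0, 1/3))   # 0.300, 0.300
    # 3. Pitcher drifts to p = 0.6 and the batter reacts with q = 1:
    print(expected_avg(0.6, 1.0))                           # 0.340
    # 4. Batter drifts to q = 0.4 and the pitcher reacts with p = 0 (all curves):
    print(expected_avg(0.0, 0.4))                           # 0.280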
There is another consequence of the pitcher, say, increasing the fraction of fast balls a little, causing the batter to prepare only for fast balls. The pitcher will notice that and immediately switch to all curve balls. But then the batter will notice that and immediately switch to preparing for all curve balls, and this cycling will continue. The "optimum" situation appears to be unstable, which makes this a lousy example of what Paulos was trying to illustrate - that there is an "optimum" probabilistic strategy. Yet he gives no more realistic examples from the other fields he cites: "business (labor conflicts and market battles), sports (virtually all competitive contests), and the military (war games) that can be modeled this way".
Maybe he meant the randomized decision process was best over a large number of games? If the players have equal intelligence, they should be able to counter any decision strategy either could come up with, so over a large number of games their wins would seem even. Then again I wonder if the pitcher has an advantage anyway because the batter couldn't immediately react to the pitcher's strategies as he came up with them; it seems like the batter would always be a few pitches behind as he tried to figure out the pitcher's strategy.
That would be true only if the pitcher could not tell whether the batter is preparing for a fast ball or a curve ball. The batter, of course, can more easily distinguish a fast ball from a curve once it is thrown. These choices were, by the terms of the problem, all randomized, so it is true that it would take a fairly large number of throws, if not games, for the probabilities to become evident.