Thursday, September 14, 2006

What Is Evidence?

Over at Beep! Beep! It's Me!, BeepBeep asks a good question -- what is evidence? It gives me an excuse to dust off the old philosopher of science hat...

Evidence-for vs. Evidence-that

Rudolf Carnap (the patron saint of rigorous philosophy of science) argued that the word "evidence" has several meanings; consider two of them. One is evidence in the sense in which a lawyer building a complex case would submit some object as one piece of supporting evidence, or "evidence-for" the hypothesis the lawyer is trying to convince the jury of. By itself the lone fact doesn't make or break the case, but it is one important fact among many that needs to be kept in mind. The other important sense is conclusive evidence, or "evidence-that" the hypothesis is true. This is the bit of information that nails the case. When you have conclusive evidence, you have good reason to believe the hypothesis.

Probability Definitions of Evidence

Carnap works out definitions for both senses in terms of probabilities -- that is, in terms of how likely the hypothesis is to be true before and after you know the fact. In the case of evidence-for, Carnap argues that what matters is an increase in probability. Evidence-for a hypothesis is any sentence whose truth makes the hypothesis more likely to be true than it was before we knew the evidence. If some fact increases the probability that the hypothesis is true of the world, then that fact is "evidence-for" the hypothesis.

Carnap's definition of evidence-that is that the fact makes the hypothesis likely to be true. If, given the fact, the probability of the hypothesis being the case in the world is better than 50/50, then that fact is conclusive evidence, or evidence-that the hypothesis is the case.
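To make the two definitions concrete, here's a toy sketch in Python. The diagnostic-test numbers are invented for illustration -- nothing here is Carnap's own formalism, just the two probability conditions stated as predicates:

```python
# Carnap's two senses of evidence, stated as probability conditions.
# (Illustrative sketch; the diagnostic-test numbers below are invented.)

def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H | E)."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

def is_evidence_for(p_h, p_h_given_e):
    """Evidence-for: the fact raises the hypothesis's probability."""
    return p_h_given_e > p_h

def is_evidence_that(p_h_given_e):
    """Evidence-that: given the fact, the hypothesis is better than 50/50."""
    return p_h_given_e > 0.5

# A made-up diagnostic test: a 10% prior, a test that fires 90% of the time
# when the hypothesis is true and 5% of the time when it is false.
prior = 0.10
post = posterior(prior, p_e_given_h=0.90, p_e_given_not_h=0.05)
print(round(post, 3))                # 0.667
print(is_evidence_for(prior, post))  # True -- the probability went up
print(is_evidence_that(post))        # True -- it ended up better than 50/50
```

On these numbers the positive test counts as evidence in both of Carnap's senses; the trouble starts, as we'll see, when only one condition holds, or when a condition holds trivially.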

Problems with Probability

Philosopher of science Peter Achinstein argues that there are problems with both of these because in neither case is the probability relation sufficient -- we can have an increase in probability, or a high probability, and still not have evidence. First, we can have an increase in probability where we don't want to call something evidence. If I am sitting inside a well-built house with a lightning rod and then walk outside on a beautiful sunny day, walking out does infinitesimally increase the likelihood that I will be hit by lightning, but surely you won't say that you have evidence that I will get hit by lightning just because you see me walk out into the sunshine. Likewise, if I buy a lottery ticket, that is too weak to be evidence that I am going to win the lottery, even though my odds went from zero to something slightly above zero.

Similarly, high probability is not enough. In Achinstein's example, the probability that Michael Jordan will not become pregnant given that you just saw him eat a bowl of Wheaties is very, very high, but the fact that the hypothesis has high probability given the observed fact is not enough to make the fact evidence. Something else, guaranteeing relevance, is needed.
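Both counterexamples can be run through the probability conditions directly -- a toy sketch with invented numbers, just to show the structure of the objection:

```python
# Achinstein's two counterexamples to the probability definitions of
# evidence. The specific numbers are invented for illustration.

def is_evidence_for(p_h, p_h_given_e):
    return p_h_given_e > p_h   # Carnap: any increase counts

def is_evidence_that(p_h_given_e):
    return p_h_given_e > 0.5   # Carnap: better than 50/50 counts

# Lottery: buying a ticket moves my probability of winning from zero to,
# say, one in ten million. That is an increase, so the definition is
# forced to call the purchase evidence-for my winning.
print(is_evidence_for(0.0, 1e-7))  # True, counterintuitively

# Wheaties: the probability that Michael Jordan will not become pregnant
# is astronomically high given the observation -- and without it -- so the
# definition is forced to call the cereal-eating evidence-that.
print(is_evidence_that(0.999999))           # True, despite the irrelevance
print(is_evidence_for(0.999999, 0.999999))  # False: the fact changed nothing
```

The last line makes the irrelevance visible: the Wheaties observation doesn't move the probability at all, yet the evidence-that condition fires anyway.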

Explanation and Evidence

The other tradition in views of scientific evidence is that a fact is evidence for a hypothesis if there is an explanatory connection between them -- if the fact explains why the hypothesis is true, the hypothesis explains why the fact is true, or there is a common cause that explains both. The job of science is to explain the workings of the natural world, and when a hypothesis succeeds in doing that, this success is what we call evidence. Einstein's and Newton's theories were both attempts to explain how gravitation works; Einstein's, but not Newton's, could explain the significant bending of starlight near the sun, so the observed bending of light is both evidence-for and evidence-that Einstein is right.

Problem with Explanation

Achinstein points out the same sort of problem with explanation as with probability -- you could have a possible explanation and not have evidence. In an example I referred to a couple of days back, if my car fails to start one morning, that would be explained by the hypothesis that a monkey escaped from the zoo the previous night, siphoned the gas out of my tank, and substituted crushed bananas. It would explain it, but surely my car not starting is not evidence for the bizarre idea. And so it is with creationism, intelligent design, and political conspiracy theories of all types -- simply being able to work facts into a story that, if true, would explain them is not enough to have evidence.

The Reese's Peanut Butter Cup Approach

What Achinstein does is argue that only evidence-that is real evidence. He contends that, given the truth of the fact, it is evidence-that the hypothesis is the case if and only if it both gives the hypothesis high probability and makes it better than 50/50 that there is an explanatory connection between them. This complementary approach fixes the weakness in both views.

(Here's the part where you pretend to look impressed) In an article I published a few years back in the British Journal for the Philosophy of Science, I take issue with Achinstein's limiting of evidence to evidence-that and contend that we can expand his synthetic approach to also include evidence-for. A fact is evidence-for a hypothesis if and only if it either increases the likelihood of the hypothesis's truth or increases the likelihood of an explanation including the hypothesis. With this view, I am stuck with the result that evidence-for can be an incredibly weak notion at the margins. I have to be willing to swallow that walking out of the house is some evidence that I will die from a lightning strike, and also willing to grant that there is some evidence in favor of intelligent design -- just nowhere near enough to make it anywhere near as likely as the alternative hypothesis (this is not necessarily true for versions of creationism, however).
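Stated as bare predicates, the two synthesized definitions look like this. This is a sketch with my own variable names, not the formalism of either Achinstein's book or my article:

```python
# Two synthesized definitions of evidence, as sketched predicates.
# (Variable names are mine; the numbers below are invented.)

def evidence_that(p_h_given_e, p_connection_given_h_and_e):
    """Achinstein: high probability AND a probable explanatory connection."""
    return p_h_given_e > 0.5 and p_connection_given_h_and_e > 0.5

def evidence_for(p_h, p_h_given_e, p_connection, p_connection_given_e):
    """Expanded sense: an increase in the hypothesis's probability OR in
    the probability of an explanatory connection including it."""
    return p_h_given_e > p_h or p_connection_given_e > p_connection

# Wheaties now fails: the probability is high, but nothing makes an
# explanatory connection between the cereal and the non-pregnancy likely.
print(evidence_that(0.999999, 0.0))        # False, as it should be

# The lottery ticket still counts as (incredibly weak) evidence-for winning,
# which is the bullet the expanded definition bites at the margins.
print(evidence_for(0.0, 1e-7, 0.0, 1e-7))  # True
```

The disjunction in evidence_for is exactly what makes the notion so weak at the margins: any nonzero bump, probabilistic or explanatory, counts.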

The article also includes a case where you have explanatory relevance but statistical irrelevance. Suppose you are an FBI agent working to bust a crooked casino. You have reliable information that of the six craps tables, three are fair, two use dice loaded to make throwing double sixes fifteen times more likely, and one table uses dice like the ones I saw in an old Abbott and Costello movie, with sixes on all faces. You walk up to a table at random. Due to homeland security cutbacks relating to non-terrorism cases, you have to use your own money, so you are concerned that you will throw double sixes and lose it. When you do the calculation to determine the odds of picking a random table and throwing double sixes in light of the distribution of types of dice, you realize that the odds of two sixes coming up are well above normal but still less than 50/50, so you believe it most likely won't happen.

Walking up to the table, your training allows you to immediately spot the loaded dice. Knowing that loaded dice make double sixes fifteen times more likely, your degree of belief just happens to be the same as before. The new information does not increase the probability of your belief, but if you are asked for evidence in support of your belief that it is less than likely that you will throw double sixes, an answer that leaves out the fact that you know you have loaded dice seems to leave out the operative fact. Hence, you can have evidence that is explanatorily relevant but not statistically relevant.
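For the curious, the dice arithmetic can be checked with the table distribution exactly as quoted above. One caveat: the round numbers quoted here put the two probabilities in the same qualitative position without making them coincide exactly (the precise figures that do the latter are in the published version of the example). What the check confirms is the operative claim: double sixes stay less likely than not, both before and after spotting the loaded dice.

```python
from fractions import Fraction

# Chance of throwing double sixes with each kind of dice, as quoted above.
fair    = Fraction(1, 36)
loaded  = 15 * Fraction(1, 36)  # "fifteen times more likely"
all_six = Fraction(1, 1)        # sixes on every face, so a certainty

# Walking up to one of the six tables at random: 3 fair, 2 loaded, 1 all-sixes.
p_random_table = (3 * fair + 2 * loaded + 1 * all_six) / 6

# Both before and after spotting the loaded dice, double sixes are less
# likely than not, so the belief "it most likely won't happen" stands.
print(p_random_table, p_random_table < Fraction(1, 2))  # 23/72 True
print(loaded, loaded < Fraction(1, 2))                  # 5/12 True
```

Using exact fractions rather than floats keeps the check honest: 23/72 and 5/12 are the exact mixture and loaded-dice probabilities for the quoted distribution, and both sit below one half.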