Tuesday, December 02, 2025

Brian Blais's attempt to model dependence

Brian Blais has attempted to critique Tim's and my modeling of the argument for the resurrection of Jesus via a criticism of our independence assumption for the alleged witnesses. This post will focus solely on Blais's technical critique, not on broader issues concerning the dependence or independence of the evidence for the resurrection. In a post here, published at about the same time as this post, I go into much greater depth on independence issues and the concrete evidence for the resurrection.

Let me stress: This post is a very technical post for a very narrow purpose. It is just a critique of Blais's modeling. The only relationship it has to my much longer post on dependence is that Blais's argument is an attempt to model the kind of diminishing returns, undermining dependence that I discuss (and respond to) there. The super-short version of that response is...it's all about the varied evidence, man! See the other post for more.

As a modeling attempt, Blais's is technically poor, due to the irrelevance of his all-or-nothing model and the errors in his partial dependence model.

Blais provides two different technical models of multiple witness testimonies to an event (in this case, a miracle, which is why M keeps being used in the formalism) and argues that the slightest deviation from absolute certainty of complete independence, or the slightest degree of dependence, among the items of evidence causes the cumulative Bayes factor to drop precipitously.

Simplification is almost always necessary in modeling a complex empirical case, but simplifications are often worth noting so that one can make mental allowances for their effects. Blais's models share several simplifications that are problematic, and his second attempted modeling contains actual mistakes.

Indeed, it's ironic that he has a rather snide paragraph near the end of the post about the simplistic minds of "apologists" in modeling evidence while apparently unaware that I've published extensively (since the time when we wrote our paper on the resurrection) on the very topic of dependence and independence and have already developed a model that allows for a more nuanced treatment of the relevant issues than his does. A quick look at my curriculum vitae and a search for the word "dependence" on the page would turn up one of these articles immediately, and back-references from the published version of that article would turn up two more on the same topic. Other blind reviewed professional articles on contradictions and undesigned coincidences are also relevant. Lying behind these is this article by Tim on a measure of the extent to which a hypothesis unifies items of evidence. 

Blais assumes (in both of his suggested models) that dependence or independence is identical for both an hypothesis H and its negation, ¬H. In point of fact, taking account of dependence requires a sensitivity to the ways in which conditional dependence varies between an hypothesis and its negation. Positive dependence of various items of evidence given H is, all else being equal, a good thing for H--a point that is relevant to the evidence that we have for the existence and properties of the external world. (See here and here on that topic.) Given that a real apple is in front of me, my expected sensations of "apple-like" touch, sight, and even taste will be more (positively) dependent than given that I am experiencing a hallucination of an apple. This is why, both in fiction (e.g., Macbeth and the dagger) and in the real world, when someone is wondering if he is dreaming or hallucinating, he naturally reaches out to see if he can touch what he seems to see. There are even cases where individual items of evidence have no tendency to confirm or disconfirm some H when considered alone but where their conjunction does confirm it--an effect of independence given ¬H coupled with positive dependence given H. The case of gravitational lensing, discussed by Tim in the article linked above, is one such. So what is important to consider is the relative conditional dependence of evidence given H and its negation. 
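
To make that last effect concrete, here is a minimal numeric sketch (the numbers are invented purely for illustration and are not drawn from the gravitational lensing case): each item is evidentially neutral on its own, but because the items are positively dependent given H and independent given ¬H, their conjunction confirms H.

    # Hypothetical numbers: each item alone gives a Bayes factor of 1 (no confirmation),
    # but the conjunction gives a Bayes factor of 2 in favor of H.
    p_e1_h, p_e1_not_h = 0.5, 0.5      # E1 alone is neutral
    p_e2_h, p_e2_not_h = 0.5, 0.5      # E2 alone is neutral

    p_both_h = 0.5                     # E1 and E2 perfectly correlated given H (assumed)
    p_both_not_h = 0.5 * 0.5           # independent given ~H

    print(p_e1_h / p_e1_not_h)         # 1.0 -- no confirmation from either item alone
    print(p_both_h / p_both_not_h)     # 2.0 -- the conjunction favors H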

Dependence is not a single thing that is per se "bad" for the strength of a positive case. As I discuss in the article on dependence and varied evidence, the "bad" kind of dependence (from the perspective of confirming H) is dependence given the negation of H, and this is why varied evidence that all points to H is helpful--because it tends to be disunified given ¬H.  

Blais also never considers the possibility of negative dependence, yet negative dependence is also important in modeling various types of evidential cases. A particular disease might lead us to expect that the patient would (let's say) probably not have both a fever and a rash at the same time--the fever and the rash would be negatively dependent on each other, given that the patient has that disease. If, then, the patient does have both at once, we can (to some degree) rule out that potential diagnosis. Negative dependence comes in more subtle forms as well. 
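
A minimal sketch of the fever-and-rash illustration, with made-up numbers (the rival diagnosis and all of the values are hypothetical):

    # Given disease X, fever and rash rarely occur together (negative dependence under X);
    # a rival diagnosis Y makes the combination much more likely.
    p_both_given_x = 0.02
    p_both_given_y = 0.30

    print(p_both_given_x / p_both_given_y)   # ~0.07: observing both together tells against X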

These two simplifications are particularly problematic for modeling complex empirical cases, like the case for the resurrection. In fact, in our original article on Jesus' resurrection we explicitly mention the possibility of negative dependence among items of evidence given that Jesus did not rise again. It's at least a little strange that Blais in his critical post doesn't even talk about our discussion of the dependence concern, even if he thinks it was woefully inadequate. One might gather from his critique that we said nothing whatsoever on the topic!

I won't say much more about the actual dependence and independence issues in the resurrection here, as the present post is focused solely on the technical issues in Blais's critique, but I'll just say this: In the course of the following years (more than a decade) as I've done more work on independence and given more thought to its relevance for the resurrection, I have had more to say about this. A few years ago I made a start here and here. As my article on varied evidence indicates, I would particularly relate negative dependence to the (non-deductive) elimination of subhypotheses that could otherwise unify the data under ¬R. In the case of the resurrection this relates especially to any theory in which the disciples were lying. And independence vs. helpful positive dependence relates to the polymodal nature of the disciples' alleged experiences and the fact that groups of them claimed to have participated in those experiences.

In Blais's first model of dependence, he explicitly assumes that dependence is an all-or-nothing matter. Either one testimony leads us to expect another with probability 1, given M or given ¬M, or else the first testimony has absolutely no relevance to the expectation of the other, given M or given ¬M. Blais himself suggests that this assumption might be criticized, as indeed it might, since we are almost never in such a situation where dependence between two items of evidence must be either total or non-existent. I won't have much more to say about his all-or-nothing model, since it is pretty clearly irrelevant to any real case, including the resurrection.

But at least when Blais gives his all-or-nothing model, his use of probability theory is accurate as far as it goes. He appears to be using the Theorem on Total Probability (hereafter the TTP) to model the effect of dependence, given all of his (problematic) simplifying assumptions (dependence is all-or-nothing, dependence or independence is identical given H and given ¬H, and there is no provision for modeling negative dependence). There are no outright mistakes in his use of the TTP there.

When he turns to modeling partial independence, he carries over several of the problematic simplifications, except for the all-or-nothing assumption. But in his attempted partial dependence model something else happens--he makes serious, vitiating mistakes in using the TTP.

A little background on the TTP. Suppose that I want to spell out mathematically the probability of some proposition which I'll call E2. I can spell out that probability in terms of something else, which I'll call H, like this:

P(E2) = P(E2|H) P(H) + P(E2|¬H) P(¬H)

This says that the probability of E2 is equal to the probability of E2 on the assumption that H is true, times the probability that H is true, plus the probability of E2 on the assumption that H is not true, times the probability that H is not true. The probabilities of H and of ¬H are weighting factors, and they have to sum to one, because either H is true or it is false, and we need to weight the probability of E2 given H and the probability of E2 given ¬H in order to represent the total probability of E2 correctly. The probabilities of E2 given H and given ¬H are merely conditional probabilities, and they don't have to sum to one. E2 might be very improbable (or very probable) whether H is true or false. This is a very important point.
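
A quick numeric check of the point, with made-up values: the weights P(H) and P(¬H) sum to 1, while the two conditional probabilities need not.

    p_h = 0.3                 # P(H); so P(~H) = 0.7
    p_e2_given_h = 0.8        # P(E2|H)
    p_e2_given_not_h = 0.1    # P(E2|~H); note 0.8 + 0.1 != 1, and that's fine

    p_e2 = p_e2_given_h * p_h + p_e2_given_not_h * (1 - p_h)
    print(p_e2)               # 0.31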

As long as you do it carefully, you can use the TTP to represent many different relationships between propositions, even irrelevance. You can express the probability of one proposition in terms of another proposition that has nothing to do with it. You can also put one or more items to the right of the solidus (this sign |) consistently to produce a further conditionalized version of the TTP, as long as you do it right. For example, suppose that I wanted to break down the probability of E2 given H in some more fine-grained way in terms of some subhypothesis of H, H1. A subhypothesis of H, as I'm using the term, might more rigorously be called a proper subhypothesis: It is a further proposition that entails H but is not entailed by H. For example, if H is, "John is a good person," H1 could be "John is a good person, and he wants to surprise his wife for her birthday." The second, by design, entails the first, but not the other way around. (John could be a good person even if he doesn't want to surprise his wife for her birthday.) A conditionalized version of the TTP for E2 given H, in terms of a further subhypothesis of H, would look like this:

P(E2|H) = P(E2|H & H1) P(H1|H) + P(E2|H & ¬H1) P(¬H1|H)

Here H is found in every term of the equation to the right of the solidus, because I'm trying to represent the probability of E2 given H, but in a more fine-grained way. Here what has to sum to 1 (the weighting factors) are the probabilities of ¬H1 given H and of H1 given H. On the assumption of H, H1 is either true or false, and I'm breaking down the probability of E2 given H in relation to H1. But as before, the conditional probabilities of E2 given either of the weighting factors could be low or high, and those conditional probabilities don't have to sum to one.
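
Here is the same kind of check for the conditionalized version, with invented numbers for the birthday example:

    p_h1_given_h = 0.3              # P(H1|H): John wants to surprise his wife, given that he's good
    p_e2_given_h_h1 = 0.9           # P(E2|H & H1), assumed
    p_e2_given_h_not_h1 = 0.2       # P(E2|H & ~H1), assumed

    p_e2_given_h = p_e2_given_h_h1 * p_h1_given_h + p_e2_given_h_not_h1 * (1 - p_h1_given_h)
    print(p_e2_given_h)             # 0.41; the weights P(H1|H) and P(~H1|H) sum to 1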

When Blais goes to represent partial dependence, he gives us the following equation:

P(D2|M, D1) = d + α(1 - d)

For purposes of this purely technical discussion, it doesn't matter what hypothesis M is. It's just some hypothesis, like H in the above equations. And since Blais does all the same things for dependence and ¬M, the same problems arise there.

When I looked at this equation, something struck me as odd right away--namely, the two addends have factors in them that sum to 1, and those are d and (1 - d). Why should that be strange, you might wonder. Didn't I just say that there are weighting factors that do have to sum to 1 in the TTP? Well, yes, and the presence of two things that sum to 1 in Blais's equation makes it clear that he's trying to use the TTP (as he did when he modeled all-or-nothing dependence). Indeed, that's the only excuse there could possibly be for his using the equation he gives to try to represent partial dependence.

But when he was representing all-or-nothing dependence, and indeed throughout the article, d represents the individual conditional probabilities of the individual items of evidence given M. So d represents P(D1|M) (and also P(D2|M)). Now if d is P(D1|M) or P(D2|M), then 1 - d is P(¬D1|M) or P(¬D2|M).

So what I immediately wondered was this: Why is Blais treating P(D1|M) and P(¬D1|M) (or for that matter P(D2|M) and P(¬D2|M)) as the weighting factors? He's clearly trying to give us a conditionalized version of the TTP for P(D2|M, D1) (the comma there stands for "and"), breaking that down in terms of whether or not D1 and D2 are independent under the assumption of M. So far, so understandable. But in that context, why are we even talking about (1 - d)? How did that even get in there as a weighting factor? Why are we talking about the probability of the negation of one of the items of evidence, much less using it as a weighting factor? That probability has no reasonable place in the equation for representing these concepts.

(Digression: I did try to think of other possible uses of the TTP that Blais could be making here that would be correct and that would have some claim to represent what he says he's representing, but I couldn't come up with any. I considered the possibility that, contrary to what he says he's showing, P(D2|M, D1), he's actually trying to represent P(D2|M) and using P(D1|M) and P(¬D1|M) as the weighting factors. But it wouldn't be the correct equation for that either. In that case there should still be two factors multiplied together in the left addend, and it still would make no sense to multiply α, which he is using to represent degree of dependence, by P(¬D1|M) in the right addend.)

Given what Blais is trying to do with the TTP here, the two addends are supposed to represent the independent case (the left addend) and the dependent case (the right addend). Blais says that α represents the degree of dependence. We'll get to a couple of problems with that in a minute, but for now: α spelled out in words has to mean "the probability of D2 given M and D1 if D2 and D1 are dependent when M is present." That being the case, the weighting factor should be "the probability that D2 and D1 are dependent when M is present."

Now there is no reason in the world why that should be the same as P(¬D2|M). Why would it be?

Something has clearly gone wrong.

Let's start over again and do the TTP for partial dependence in a way that is as close as we can get to what Blais apparently wants to do, but without this problem (and several other problems that flow from it).

Let's envisage a subhypothesis of M which we'll call Hi, for "M and independence." Its negation then stands for "M and dependence." (Later we'll use Hi and its negation under ¬M as well, and for ease of typing, I'll just continue to use that name there, with the understanding that when Hi and its negation are used in relation to ¬M they refer to independence and dependence under ¬M.) Now let's break down P(D2|M, D1) using the TTP and Hi:

P(D2|M, D1) = P(D2|M, D1, Hi) P(Hi|M, D1) + P(D2|M, D1, ¬Hi) P(¬Hi|M, D1)

Notice that, because we're trying to represent P(D2|M, D1), per Blais's stated intention, this conditionalized version has both D1 and M to the right of the solidus in every term. In his post, Blais has consistently used d to represent the individual probabilities of D1 and D2 given M alone, which is of course the same as P(D2|M, D1, Hi). (In formal epistemology we say that in this case M "screens off" D2 from D1.) So that's no problem; we can restate the TTP above as:

P(D2|M, D1) = (d) P(Hi|M, D1) + P(D2|M, D1, ¬Hi) P(¬Hi|M, D1)

And as Blais has used it in his equation, α should represent P(D2|M, D1, ¬Hi)--i.e., the probability of D2 given D1 and M if M does not screen off D2 from D1. It's the conditional probability in the right addend. We're allowing here one simplification, which I'm not quibbling about, namely, that there is just one probability of D2 given D1 and M in the non-screening case. One could always take the weighted average of that probability if there were various non-screening scenarios under M and call that α. So,

P(D2|M, D1) = (d) P(Hi|M, D1) + (α) P(¬Hi|M, D1)

Here M and D1 are simply being put to the right of the solidus in every term, as we did above for H. This is a conditionalized version of the TTP in which every term is conditionalized upon both M and D1, and in which we are representing the probability that the two evidence items are or are not independent given M by a subhypothesis of M. P(Hi|M, D1) and P(¬Hi|M, D1) must sum to one. They are the weighting factors. These are the probabilities that M screens off D1 from D2 and that M does not screen off D1 from D2 (the probabilities of independence and dependence under M), which we have modeled correctly by the probabilities of a subhypothesis and its negation. That is really the only good way to use the TTP to model what we might call meta-probabilities--the probability that some other probabilistic relationship holds. 

But notice here a very important difference between this use of the TTP to represent partial dependence and Blais's attempted model: Here d is not a weighting factor. Nor should it be. Nor is 1 - d a weighting factor. Nor should it be. Blais has apparently confused the fact that d represents the probability of D2 given M and D1 in the independence case with the idea that d represents the probability of the independence case. He's conflated two completely different things and is treating them in his model as if d can stand for both at the same time:

P(D2|M, D1, Hi)

and

P(Hi|M, D1)

By mashing these together, he's forgotten that he needs a separate weighting factor in the left addend to represent the probability of independence/screening, and this confusion leads further to the irrelevant and incorrect use of (1 - d) as a weighting factor in the right addend.
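
A small numeric sketch of the difference this makes (all of these numbers are made up purely for illustration): Blais's formula carries no weight at all on d and uses (1 - d) as the weight on α, whereas the corrected decomposition uses a genuine probability of screening as the weight. The two come apart.

    d = 0.1          # P(D2|M) = P(D2|M, D1, Hi): the screened ("independent") likelihood
    alpha = 0.4      # P(D2|M, D1, ~Hi): probability of D2 if screening fails
    p_screen = 0.8   # P(Hi|M, D1): probability that M screens D2 off from D1 (assumed)

    blais = d + alpha * (1 - d)                        # treats (1 - d) as the weight: 0.46
    corrected = d * p_screen + alpha * (1 - p_screen)  # weights sum to 1 and track screening: 0.16

    print(blais, corrected)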

This problem feeds into a further mistake in Blais's model. As mentioned above, Blais doesn't seem to have considered the possibility of negative dependence between evidence items, given a particular hypothesis and/or subhypothesis. Since dependence is all-or-nothing in his first, oversimplified model, and since in that model he does have a factor that represents the probability of independence, the issue of negative dependence doesn't arise as a separate problem, because the model is so oversimplified anyway. In that model, if there is any dependence, it's automatically maximal positive dependence.

But in the attempt to model partial dependence, the fact that negative dependence isn't recognized comes up as a separate issue, and it's exacerbated by the mistaken use of 1 - d as a weighting factor. As already noted, α in Blais's model must be an attempt to represent the degree of dependence, and in the TTP it has to stand for P(D2|M, D1, ¬Hi)--that is, the probability of the second item of evidence, given M and the first item, and given that screening fails; the two evidence items are probabilistically dependent under M. In his exposition, Blais states that if α = 0, this represents complete independence and that if it equals 1, this represents complete dependence. The latter is correct. In this model, if α equals 1, we have complete positive dependence. But in this model, if α equals 0, that can't represent complete independence, and the only reason that one might think that it does is because of the mistake already noted--namely, that Blais doesn't have a separate weighting factor in the left addend of the TTP (and has the wrong weighting factor in the right addend). Blais's statement about α = 0 arises from the fact that, if either factor in the right addend is 0, that addend disappears from consideration. Recall that Blais models the probability of D2 given M and D1 like this:

P(D2|M, D1) = d + α(1 - d)

In this equation, if either factor in the right addend is 0, this throws us back upon the left addend, so that the probability of D2 given M and D1 is just the screened probability, namely, d. This is why Blais thinks that α = 0 means complete independence of the evidence items under M. But when we look at the formally correct equation using the TTP, as I have corrected it, we see that that isn't what α = 0 means. Here again is the formally correct equation using the TTP, explicitly breaking out the meta-probability as a subhypothesis:

P(D2|M, D1) = (d) P(Hi|M, D1) + (α) P(¬Hi|M, D1)

If α = 0, that would mean that, if the two items are dependent at all, conditional on M, then they are maximally negatively dependent, not independent. For concreteness, suppose that P(D2|M, D1) in the screening case is .1. So if D1 and D2 are irrelevant to each other given M, that is the probability of D2 given M. It's the individual likelihood under independence, represented by d. But suppose that α = .05 (again, this is just to make the example concrete). Then the probability of D2 given M and D1 in the dependence case, where screening fails (¬Hi), is lower than the probability of D2 given M and D1 in the screening/independence case. That means that a failure of independence renders D1 and D2 negatively relevant to one another under M. So for any value of α < d, D1 and D2 are negatively relevant to each other, given M and screening failure, and α = 0 is just an extreme case of this.

The only way that this point would become irrelevant would be if P(Hi|M, D1) = 1--that is, if screening is guaranteed. Otherwise, P(D2|M, D1) < d. For example, in the above illustration, suppose that d = .1 and α = 0. And suppose, again, to make this concrete, that P(Hi|M, D1) = .8. Then,

P(D2|M, D1) = (d) P(Hi|M, D1) + (α) P(¬Hi|M, D1)

= (.1) (.8) + (0) (.2)

= .08 < d

So D1 and D2 are negatively relevant to each other given M, if 𝛼 < d. And of course the same is true for ¬M. So, contra his exposition, Blais has accidentally given himself a way of modeling negative dependence, but he appears to be assuming that all dependence is positive dependence. 
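
A tiny sketch generalizing the arithmetic just done (d, α, and the screening probability are the same hypothetical values as in the illustration): whenever α < d and screening is not guaranteed, the corrected equation makes D1 negatively relevant to D2 under M.

    d = 0.1
    alpha = 0.0
    p_screen = 0.8   # P(Hi|M, D1), assumed

    p_d2_given_m_d1 = d * p_screen + alpha * (1 - p_screen)
    print(p_d2_given_m_d1, p_d2_given_m_d1 < d)   # 0.08 True: learning D1 lowers the probability of D2 under M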

In the corrected model, if α = 0 or indeed anything very low, this causes P(D2|M, D1) to be dominated by the left addend, which should be (d) P(Hi|M, D1)--namely, the individual likelihood times the probability of independence.

But in Blais's model, 

P(D2|M, D1) = d + α(1 - d)

the left addend is just the individual likelihood, with no weighting factor. These problems, together with the other problems already noted, make a hash out of Blais's claims about when and how dependence creates problems for Tim's and my estimated cumulative Bayes factor. We've departed pretty far from anything that could be enlightening on that topic.

The assumption that dependence is equal (and in the same direction) for both M and its negation rears its head here, too, as does Blais's use of the constant 𝛼 to represent dependence. Not only is dependence often, even usually, not equal given an hypothesis and given its negation, but even if you do want to represent equal dependence given M and its negation, the use of a constant like 𝛼 is a terrible way to do it. 

Recall that in Blais's attempt to use the TTP to represent P(D2|M, D1), α has to be fulfilling the role of P(D2|M, D1, ¬Hi). That is, the constant represents the probability of the second item of evidence, given M, given the first item of evidence, and given that M does not screen off the two items from each other. This is the conditional probability of D2 given M and D1 if screening fails.

Blais does the exact same thing for ¬M. He uses 𝛼 to represent the probability of D2 given  ¬M and D1 and given that ¬M does not screen these items of evidence from one another. But even if you wanted to model in some sense equal dependence under M and ¬M, simply setting as equal the probability of the second item, given the first item, given the hypothesis, and given that the hypothesis fails to screen is a highly dubious way of representing equal dependence under the two hypotheses. Why should that be considered equal dependence? It doesn't represent an equal relationship between the two items of evidence if the hypothesis fails to screen. 

One complex way to conceive of equal dependence would be to use the ratio developed by Tim in the article linked above and the ratio of ratios that I developed in subsequent papers, especially this one. This modeling of dependence, using the notation I've been using in this article and assuming (as we have been assuming here) that screening fails, would look like this:

P(D1, D2|M, ¬Hi) / [P(D1|M, ¬Hi) × P(D2|M, ¬Hi)] =
P(D1, D2|¬M, ¬Hi) / [P(D1|¬M, ¬Hi) × P(D2|¬M, ¬Hi)]

This says that, even if screening fails, both the hypothesis and its negation have equally unifying or disunifying effects on the evidence. The probability of the conjunction of the items of evidence, given the hypothesis (and screening failure), over the product of the probabilities of the individual items, given the hypothesis (and screening failure) equals that very same ratio conditional on the negation of the hypothesis.

It's understandable, though, that Blais wouldn't want to follow this concept of equal dependence, because if this is the case, no correction for dependence is necessary. This ratio of ratios is equal to 1 under these conditions, and so dependence can't harm or help the hypothesis. No correction factor for dependence is needed (see here). If we tried to model equal dependence in this sense, it would be impossible for it to "harm" the case for M.
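
Here is a quick sketch of why no correction is needed in that case, writing k for the common value of the ratio on each side and suppressing the ¬Hi conditioning for readability:

P(D1, D2|M) / P(D1, D2|¬M) = [k × P(D1|M) P(D2|M)] / [k × P(D1|¬M) P(D2|¬M)]
= [P(D1|M) / P(D1|¬M)] × [P(D2|M) / P(D2|¬M)]

The k's cancel, and the cumulative Bayes factor is just the product of the individual Bayes factors, exactly as if the items had been treated as independent.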

But there is another way that we can think of equal dependence where the correction factor would not necessarily be equal, despite "equal dependence" in this second sense. We could think of equal dependence as equal confirmation (or disconfirmation, but let's just say "confirmation") of one evidence item by another evidence item, given the hypothesis in question.

Simply setting the probability of D2 given D1 and M (and screening failure) equal to the probability of D2 given D1 and ¬M (and screening failure) has no claim at all to represent equal confirmation of D2 by D1 under each of the two hypotheses. That modeling doesn't represent equal confirmation by any of the three measures of confirmation that have the best claim to be standard. The reason that it doesn't represent equal confirmation, and presumably wasn't intended to (by Blais), is pretty self-evident: In the set-up we're envisaging here, each individual item of evidence has a higher probability given M than given ¬M. In Blais's set-up α represents the probability each item has given the hypothesis (whether M or ¬M) and given the other item of evidence--e.g., P(D2|M & D1) or P(D2|¬M & D1). Suppose that α is higher than both d and b. Then, by the set-up, P(D2|M) < P(D2|M & D1), and the same for ¬M. But the gap, as one might call it, between P(D2|M) and P(D2|M & D1) is smaller than the gap between P(D2|¬M) and P(D2|¬M & D1), since P(D2|M) by itself is greater than P(D2|¬M). So in this illustration the confirmation of D2 by D1, given ¬M, is greater than the confirmation of D2 by D1, given M.
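
A made-up numeric illustration of the gap point (none of these values come from Blais's post; they just satisfy d > b with a constant α):

    d = 0.5        # P(D2|M), assumed
    b = 0.05       # P(D2|~M), assumed
    alpha = 0.9    # P(D2|M & D1) = P(D2|~M & D1): the same post-D1 probability under both

    print(alpha - d, alpha - b)   # roughly 0.4 vs 0.85: D1 boosts D2 far more under ~M than under M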

Tim's and my work makes it clear that we prefer the L measure of confirmation--that is, the Bayes factor, which in this case would be a conditionalized Bayes factor representing the extent to which D1 confirms D2 under the assumption of M. Obviously 𝛼 doesn't represent a situation where D1 confirms D2 equally given M and ¬M by that measure, which in this case would be

P(D2|M, D1)/ P(D2|M, ¬D1)

And the same ratio for ¬M.

But if the L measure doesn't happen to be your favored measure of confirmation, and if you want to represent equal confirmation under M and under its negation, you should know that Blais's use of 𝛼 also doesn't represent equal confirmation if we use the difference measure, which in this case would be 

P(D2|M, D1) - P(D2|M) or, using as much of Blais's notation as possible,

P(D2|M, D1) - d

And the same difference for ¬M and b, where b is the individual likelihood P(D2|¬M).

Nor does Blais's use of 𝛼 represent equal confirmation under M and under its negation by the popular r measure, which in this case would be 

P(D2|M, D1)/ d

And the same ratio for ¬M and b. 

Nor would Blais's use of 𝛼 in relation to M and ¬M represent equal confirmation (from D1 to D2) under each of these if we placed ¬Hi to the right of the solidus everywhere, thus calculating this confirmation only under the assumption of ¬Hi (dependence). 
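
The same point can be checked directly from Blais's own formula, with invented values for d and b: holding α constant does not equalize confirmation on either the difference measure or the r measure.

    d, b, alpha = 0.5, 0.05, 0.9     # made-up values with d > b

    p_m = d + alpha * (1 - d)        # Blais's P(D2|M, D1): 0.95
    p_not_m = b + alpha * (1 - b)    # Blais's P(D2|~M, D1): 0.905

    print(p_m - d, p_not_m - b)      # difference measure: ~0.45 vs ~0.855 -- not equal
    print(p_m / d, p_not_m / b)      # r measure: ~1.9 vs ~18.1 -- not equal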

So the use of 𝛼 is doubly dubious--both because Blais is not even recognizing the possibility of cases where dependence is not equal given an hypothesis and its negation and because it is ill-suited even to represent equal dependence. 

But there is one more technical problem, one which incorrectly favors ¬M in Blais's model of partial dependence, and which flows from the original technical error of not realizing that the probabilities of dependence and of independence need to be modeled explicitly in the partial dependence case. 

Recall that Blais is wrongly using (1 - d) as his weighting factor in the right addend. He does the same for ¬M, and there the individual likelihood is represented by b. b is P(D2|¬M, D1, Hi), which in turn equals P(D2|¬M), because in the screening case D1 is probabilistically irrelevant to D2. So his weighting factor in the right addend for his ¬M calculation is (1 - b).

When modeling the case for M, Blais is granting that d > b, even a lot greater. In the model he makes it 1000 times bigger, as we do in most of the individual factors in our paper. This means that 1 - d < 1 - b.

Now consider what happens to the cumulative Bayes factor when Blais tries to model partial dependence. For each item of evidence other than the first, the Bayes factor he suggests is 

[d + α(1 - d)] / [b + α(1 - b)]

You can see how this formula follows from the formula he has suggested to model partial dependence given M and given ¬M, which I've been critiquing. The present point concerns the right addends on the top and bottom of this ratio. I've already argued that the use of α in both of these is a dubious way to model equal dependence for M and its negation, even if we have reason to believe that dependence is equal. And in general, if you add the same constant to the top and the bottom of a top-heavy ratio, you will lessen the top-heaviness of the ratio. So even if the weighting factors of the right addend in the top and the bottom here were identical, multiplying α by those weighting factors and then adding that to d and to b respectively would have a strong tendency to reduce the Bayes factor, without any good justification.

But it's even worse than that. Since, as we've discussed, Blais is wrongly using 1 - d and 1 - b as the weighting factors here, it's guaranteed that he's multiplying 𝛼 by something that is larger in the denominator (the ¬M part) than in the numerator (the M part), due to the fact that d is stipulated to be greater than b. (So 1 - d is less than 1 - b.) If there were a good reason to use these as the weighting factors, that would of course just be too bad. But since the use of these weighting factors is just due to a mistake, the fact that this increases the equalizing of the Bayes factor (thus reducing still more the cumulative case for M) is all the more worth pointing out.
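
A short sketch of the combined effect, using the post's stipulation that d is 1000 times b (the particular values of d and b are otherwise my own, chosen only for illustration): as α grows, the suggested per-item factor collapses toward 1, and it collapses all the faster because α is being multiplied by the larger (1 - b) in the denominator.

    b = 0.0001
    d = 1000 * b                     # d = 0.1, per the stipulation that d is 1000 times b

    for alpha in (0.0, 0.01, 0.1, 0.5):
        bf = (d + alpha * (1 - d)) / (b + alpha * (1 - b))
        print(alpha, round(bf, 1))   # 1000.0, then ~10.8, ~1.9, ~1.1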

There are a lot of things that can be said about independence in the evidence for the resurrection. There is indeed a kind of dependence that can significantly weaken a case for some hypothesis H. There are situations in which modeling lines of evidence as independent evidence for H results in significant overestimation of the strength of the case. It is therefore worth saying more about why modeling the resurrection evidence as independent lines is legitimate. Here, I have merely shown that, bells and whistles notwithstanding, Brian Blais has not given a good model of dependence. This is due in part to oversimplification and in part to errors.
