For the past few weeks there has been a lot of discussion over the claim that the vaccines caused 17,000,000 deaths worldwide. It’s certainly a large number…
It comes from Denis Rancourt, who published an extensive 183-page report on the matter, which you can find here. Bret Weinstein breathed fresh life into this claim whilst talking with Tucker Carlson, describing the 17,000,000 figure as credible. You can imagine what followed: a very lively discussion about this very large number.

Rancourt’s claim rests on a correlation between a rise in all-cause mortality and the rollout of the Covid-19 vaccines across the 17 countries included in his study. But many have been quick to critique this reasoning, like Dr Clare Craig, who pointed out that Rancourt’s thesis rests on all of the observed excess deaths being attributed to the vaccine. No Covid deaths at all?
As Bret himself says, zero is a special number…
Tracy Beth Høeg put together a very readable critique of this 17,000,000 figure on the Illusion of Consensus. Denis Rancourt responded to the criticism on his own substack on January 24th. The arguments for (Rancourt) and against (Høeg) are both linked below.

In an age where information can no longer be safely swallowed wholesale, we are left in a conundrum. How can we determine how plausible this 17,000,000 claim really is? There are reasoned positions on both sides of statements like these, and so it becomes a research task just to work through all the material supporting the assertion. In an age of (warranted) institutional distrust, is it really tenable for our society to plod along with this problem weighing heavy at the feet of our ‘informed’ citizens?
Perhaps you can see where I’m going with this… So let’s continue!
Reading Høeg’s critique, one thing did stand out to me; the relevant quote is below. Rancourt chose 17 countries for his study, but why were those 17 countries specifically chosen? Was there a selection bias, known or unknown, hidden in those choices?
The first issue is the authors analyze only 17 countries. But why only 17 and why these 17? They don’t explain but should have. For example, why did they look at the Southern Hemisphere and not the Northern? They also did not only include countries with high vaccination rates; in fact, some have relatively very low rates of around 40%. What I am getting at is- where these countries chosen because they had excess all-cause mortality peaks that corresponded with the vaccine rollout and/or were countries that didn’t excluded? In other words, was there selection bias?
Rancourt responded directly to this in his own defence, arguing that his paper made it clear “how and why” those 17 countries in particular were chosen. The full relevant quote is posted below.
#4 - Tracy states that the choice of the 17 countries is not explained in the paper and suggests the choice is biased
It is false that the choice is not explained in the paper. The last three paragraphs of the Introduction are clear as to how and why these 17 countries were chosen.
First, it was decided to study the Southern Hemisphere because booster rollouts occurred in the Southern Hemisphere summers (seasons are inverted in the two hemispheres), thereby avoiding the confounding effects of seasonality of mortality. Second, all countries in the Southern Hemisphere and many in the Equatorial Region, which had all-cause mortality data, were included.
I have to say, I’m not entirely convinced this defence stands up. Go to the original study for yourself and read the last three paragraphs of the introduction - do you think they make it clear “how and why” these 17 countries in particular were chosen?
What if I wanted an objective third perspective on this?
Well… want no more.
I created a case on case.science, of course! The first thing I added to the case was the original study by Rancourt, which I found here. I then added Tracy Beth Høeg’s critique from the Illusion of Consensus, and finally I added Denis Rancourt’s defence of his paper. The final case is now viewable here.
I started querying the case in natural language. As I do, case.science finds the relevant sections of each source inside the case and builds its arguments from them, guided by my questions. The AI is instructed to treat the case title and description as the basis of its argument, which makes it quite a flexible tool for reading through a lot of material.
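To make the mechanics a little more concrete, here is a minimal sketch of the general retrieve-then-answer pattern a tool like this can follow. To be clear, the names and the crude word-overlap scoring below are my own illustration of the idea, not case.science’s actual code.

```python
# Illustrative sketch only: a generic retrieve-then-answer flow, not case.science's code.
from dataclasses import dataclass

@dataclass
class Chunk:
    source: str  # e.g. "rancourt_paper", "hoeg_critique", "rancourt_response"
    text: str

def score(question: str, chunk: Chunk) -> int:
    """Crude relevance score: how many of the question's words appear in the chunk."""
    q_words = set(question.lower().split())
    return sum(1 for word in chunk.text.lower().split() if word in q_words)

def retrieve(question: str, chunks: list[Chunk], top_k: int = 3) -> list[Chunk]:
    """Pick the top_k most relevant chunks from every source in the case."""
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:top_k]

def build_prompt(case_description: str, question: str, context: list[Chunk]) -> str:
    """Assemble what the language model answers from, guided by the case description."""
    excerpts = "\n\n".join(f"[{c.source}] {c.text}" for c in context)
    return (
        f"Case description: {case_description}\n\n"
        f"Excerpts:\n{excerpts}\n\n"
        f"Question: {question}\n"
        "Answer using only the excerpts above."
    )
```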
For new readers, I should point out that case.science is a project developed by myself and was born out of the findings of this substack, The Digger. Likes, shares, comments and subscribes are most welcome.
So far, so useful.
However, look at what happens when I ask this question, “Høeg's critique of Rancourt says there's no clear rationale given as to why those 17 countries were chosen for the study - is that true?”
This is how the case AI responded:
According to Denis' response to Tracy's critique, the choice of the 17 countries for the study is explained in the paper. The decision to study the Southern Hemisphere was made because booster rollouts occurred in the Southern Hemisphere summers, avoiding the confounding effects of seasonality of mortality. Additionally, all countries in the Southern Hemisphere and many in the Equatorial Region with all-cause mortality data were included. Namibia and some small Pacific-island countries were added after the paper was written. The selection of these countries was based on specific criteria outlined in the paper, contrary to the claim that there was no clear rationale given for their inclusion.
See how case.science used all the sources available to it to form its answer? That meant Rancourt’s very clear explanation of why those 17 countries were chosen - detailed in his response to the critique - was used to suggest there was a clear rationale for those nations being picked. Right now the AI doesn’t have fine-grained control over which sources it should use for a given answer. For this particular question it should only look through the original paper: it doesn’t matter what was said in the response to the critique, what matters is what’s in the original paper. Building the toolset to let the AI make that judgement automatically is quite an involved piece of work (as you can probably imagine), but I came up with a relatively quick fix which proves pretty useful.
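As a rough illustration of what that quick fix amounts to, here is one way to restrict which sources the retrieval step may draw from for a given question. It reuses the hypothetical helpers from the earlier sketch and is an assumption about the approach, not the actual case.science implementation.

```python
# Sketch of a per-question source filter, reusing the hypothetical Chunk/retrieve
# helpers from the earlier snippet. Not the actual case.science implementation.

def retrieve_from(question: str, chunks: list[Chunk], allowed_sources: set[str],
                  top_k: int = 3) -> list[Chunk]:
    """Only consider chunks from whitelisted sources, so a question about what the
    original paper says cannot be answered out of the later rebuttal."""
    permitted = [c for c in chunks if c.source in allowed_sources]
    return retrieve(question, permitted, top_k)

# For the 17-countries question, only the paper itself should be in scope:
# context = retrieve_from(question, all_chunks, allowed_sources={"rancourt_paper"})
```

The point of filtering before retrieval is that the model never sees the off-limits material at all, rather than being asked to ignore it after the fact.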
Post continues beyond the fold. With these three sources available to you inside the case, can you find any insights from the discussion? Can you make your own case and help us push our understanding further along? As ever, all development of case and the writing on this substack (which is coming back!) is made possible by my paid subscribers. If you like what I’m doing, please do subscribe to The Digger on substack.