## The Doomsday Argument

I don’t mind admitting that as I get older I get more and more pessimistic about the prospects for humankind’s survival into the distant future.

Unless there are major changes in the way this planet is governed, our planet may become barren and uninhabitable through war or environmental catastrophe. But I do think the future is in our hands, and disaster is, at least in principle, avoidable. In this respect I have to distance myself from a very strange argument that has been circulating among philosophers and physicists for a number of years. It is called the *Doomsday argument*, and it even has a sizeable Wikipedia entry, to which I refer you for more details and variations on the basic theme. As far as I am aware, it was first introduced by the mathematical physicist Brandon Carter and subsequently developed and expanded by the philosopher John Leslie (not to be confused with the TV presenter of the same name). It also re-appeared in slightly different guise in a paper in the serious scientific journal *Nature* by the eminent physicist Richard Gott. Evidently some serious people take it very seriously indeed.

The Doomsday argument uses the language of probability theory, but it is such a strange argument that I think the best way to explain it is to begin with a more straightforward problem of the same type.

Imagine you are a visitor in an unfamiliar, but very populous, city. For the sake of argument let’s assume that it is in China. You know that this city is patrolled by traffic wardens, each of whom carries a number on their uniform. These numbers run consecutively from 1 (smallest) to T (largest) but you don’t know what T is, i.e. how many wardens there are in total. You step out of your hotel and discover traffic warden number 347 sticking a ticket on your car. What is your best estimate of T, the total number of wardens in the city?

I gave a short lunchtime talk about this when I was working at Queen Mary College, in the University of London. Every Friday, over beer and sandwiches, a member of staff or research student would give an informal presentation about their research, or something related to it. I decided to give a talk about bizarre applications of probability in cosmology, and this problem was intended to be my warm-up. I was amazed at the answers I got to this simple question. The majority of the audience denied that one could make any inference at all about T based on a single observation like this, other than that it must be at least 347.

Actually, a single observation like this *can* lead to a useful inference about T, using Bayes’ theorem. Suppose we have really no idea at all about T before making our observation; we can then adopt a uniform prior probability. Of course there must be an upper limit on T. There can’t be more traffic wardens than there are people, for example. Although China has a large population, the prior probability of there being, say, a billion traffic wardens in a single city must surely be zero. But let us take the prior to be effectively constant. Suppose the actual number of the warden we observe is t. Now we have to assume that we have an equal chance of coming across any one of the T traffic wardens outside our hotel. Each value of t (from 1 to T) is therefore equally likely. I think this is the reason that my astronomers’ lunch audience thought there was no information to be gleaned from an observation of any particular value, i.e. t=347.

Let us simplify this argument further by allowing two alternative “models” for the frequency of Chinese traffic wardens. One has T=1000, and the other (just to be silly) has T=1,000,000. If I find number 347, which of these two alternatives do you think is more likely? Think about the kind of numbers that occupy the range from 1 to T. In the first case, most of the numbers have 3 digits. In the second, most of them have 6. If there were a million traffic wardens in the city, it is quite unlikely you would find a random individual with a number as small as 347. If there were only 1000, then 347 is just a typical number. There are strong grounds for favouring the first model over the second, simply based on the number actually observed. To put it another way, we would be surprised to encounter number 347 if T were actually a million. We would not be surprised if T were 1000.

One can extend this argument to the entire range of possible values of T, and ask a more general question: if I observe traffic warden number t, what is the probability I assign to each value of T? The answer is found using Bayes’ theorem. The prior, as I assumed above, is uniform. The *likelihood* is the probability of the observation given the model. If I assume a value of T, the probability P(t|T) of each value of t (up to and including T) is just 1/T (since each of the wardens is equally likely to be encountered). Bayes’ theorem can then be used to construct the posterior probability P(T|t). Without going through all the nuts and bolts, I hope you can see that this probability will tail off for large T. Our observation of a (relatively) small value for t should lead us to suspect that T is itself (relatively) small. Indeed it’s a reasonable “best guess” that T=2t. This makes intuitive sense because the observed value of t then lies right in the middle of its range of possibilities.
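To make this concrete, here is a quick numerical sketch of the calculation (the cutoff T_max is an assumption of mine, standing in for the vague upper limit discussed earlier):

```python
import numpy as np

t = 347             # the warden number we actually observed
T_max = 10_000_000  # assumed cutoff for the (roughly) flat prior

# Likelihood ratio for the two toy models: T=1000 versus T=1,000,000
ratio = (1 / 1_000) / (1 / 1_000_000)
print(ratio)  # the smaller model is favoured by about 1000:1

# Posterior over all T >= t: flat prior times likelihood 1/T, normalised
T = np.arange(t, T_max + 1)
posterior = 1.0 / T
posterior /= posterior.sum()
# The posterior peaks at T = t and tails off towards larger T
```
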

Before going on, it is worth mentioning one other point about this kind of inference: it is not at all powerful. Note that the likelihood varies only as 1/T. That of course means that small values are favoured over large ones. But note that this probability is uniform in logarithmic terms. So although T=1000 is more probable than T=1,000,000, the range between 1000 and 10,000 is roughly as likely as the range between 1,000,000 and 10,000,000, assuming there is no prior information. So although it tells us something, it doesn’t actually tell us very *much*. Just like any probabilistic inference, there’s a chance that it is wrong, perhaps very wrong.
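The log-uniform behaviour is easy to verify numerically. A sketch using the posterior proportional to 1/T described in the text, with an assumed prior cutoff T_max:

```python
import numpy as np

t, T_max = 347, 10_000_000
T = np.arange(t, T_max + 1)
posterior = 1.0 / T           # flat prior times likelihood 1/T ...
posterior /= posterior.sum()  # ... normalised over t <= T <= T_max

# Equal logarithmic ranges carry almost equal posterior probability:
p_low = posterior[(T >= 1_000) & (T <= 10_000)].sum()
p_high = posterior[(T >= 1_000_000) & (T <= 10_000_000)].sum()
print(p_low, p_high)  # both close to ln(10) / ln(T_max / t)
```
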

What does all this have to do with Doomsday? Instead of traffic wardens, we want to estimate N, the number of humans that will ever be born. Following the same logic as in the example above, I assume that I am a “randomly” chosen individual drawn from the sequence of all humans to be born, in past, present and future. For the sake of argument, assume I am number n in this sequence. The logic I explained above should lead me to conclude that the total number N is not much larger than my number, n. For the sake of argument, assume that I am the one-billionth human to be born, i.e. n=1,000,000,000. There should not be many more than a few billion humans ever to be born. At the current rate of population growth, this means that not many more generations of humans remain to be born. Doomsday is nigh.

Richard Gott’s version of this argument is logically similar, but is based on timescales rather than numbers. If whatever thing we are considering begins at some time t_{begin} and ends at a time t_{end}, and if we observe it at a “random” time between these two limits, then our best estimate for its future duration is of order how long it has lasted up until now. Gott gives the example of Stonehenge[1], which was built about 4,000 years ago: we should expect it to last a few thousand years into the future. Actually, Stonehenge is a highly dubious example. It hasn’t really survived 4,000 years. It is a ruin, and nobody knows its original form or function. However, the argument goes that if we come across a building put up about twenty years ago, presumably we should think it will come down again (whether by accident or design) in about twenty years’ time. If I happen to walk past a building just as it is being finished, presumably I should hang around and watch its imminent collapse….

But I’m being facetious.

Following this chain of thought, we would argue that, since humanity has been around a few hundred thousand years, it is expected to last a few hundred thousand years more. Doomsday is not quite as imminent as previously, but in any case humankind is not expected to survive sufficiently long to, say, colonize the Galaxy.
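For what it’s worth, Gott’s “delta t” rule is simple enough to write down in a few lines. This is a sketch under his assumption that we observe the phenomenon at a uniformly random moment of its total lifetime; the function name is mine:

```python
def gott_interval(age, confidence=0.95):
    """Gott's bounds on future duration, given the age observed so far,
    assuming we observe the phenomenon at a uniformly random moment of
    its total (past plus future) lifetime."""
    f = (1 - confidence) / 2  # chance of being in the first (or last) tail
    return age * f / (1 - f), age * (1 - f) / f

# Stonehenge, built about 4,000 years ago: at 95% confidence the rule
# gives a future lifetime between roughly 100 and 156,000 years.
lo, hi = gott_interval(4_000)
print(lo, hi)
```
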

You may reject this type of argument on the grounds that you do not accept my logic in the case of the traffic wardens. If so, I think you are wrong. I would say that if you accept all the assumptions entering into the Doomsday argument then it is an equally valid example of inductive inference. The real issue is whether it is reasonable to apply this argument at all in this particular case. There are a number of related examples that should lead one to suspect that something fishy is going on. Usually the problem can be traced back to the glib assumption that something is “random” when it is not clearly stated what that is supposed to mean.

There are around sixty million British people on this planet, of whom I am one. In contrast there are well over a billion Chinese. If I follow the same kind of logic as in the examples I gave above, I should be very perplexed by the fact that I am not Chinese. After all, the odds are more than 20:1 against me being British, aren’t they?

Of course, I am not at all surprised by the observation of my non-Chineseness. My upbringing gives me access to a great deal of information about my own ancestry, as well as the geographical and political structure of the planet. This data convinces me that I am not a “random” member of the human race. My self-knowledge is conditioning information and it leads to such a strong prior knowledge about my status that the weak inference I described above is irrelevant. Even if there were a million million Chinese and only a hundred British, I have no grounds to be surprised at my own nationality given what else I know about how I got to be here.

This kind of conditioning information can be applied to history, as well as geography. Each individual is generated by its parents. Its parents were generated by their parents, and so on. The genetic trail of these reproductive events connects us to our primitive ancestors in a continuous chain. A well-informed alien geneticist could look at my DNA and categorize me as an “early human”. I simply could not be born later in the story of humankind, even if it does turn out to continue for millennia. Everything about me – my genes, my physiognomy, my outlook, and even the fact that I am bothering to spend time discussing this so-called paradox – is contingent on my specific place in human history. Future generations will know so much more about the universe and the risks to their survival that they won’t even discuss this simple argument. Perhaps we just happen to be living at the only epoch in human history in which we know enough about the Universe for the Doomsday argument to make some kind of sense, but too little to resolve it.

To see this in a slightly different light, think again about Gott’s timescale argument. The other day I met an old friend from school days. It was a chance encounter, and I hadn’t seen the person for over 25 years. In that time he had married, and when I met him he was accompanied by a baby daughter called Mary. If we were to take Gott’s argument seriously, this was a random encounter with an entity (Mary) that had existed for less than a year. Should I infer that this entity should probably only endure another year or so? I think not. Again, bare numerological inference is rendered completely irrelevant by the conditioning information I have. I know something about babies. When I see one I realise that it is an individual at the start of its life, and I assume that it has a good chance of surviving into adulthood. Human civilization is a baby civilization. Like any youngster, it has dangers facing it. But it is not doomed by the mere fact that it is young.

John Leslie has developed many different variants of the basic Doomsday argument, and I don’t have the time to discuss them all here. There is one particularly bizarre version, however, that I think merits a final word or two because it raises an interesting red herring. It’s called the “Shooting Room”.

Consider the following model for human existence. Souls are called into existence in groups representing each generation. The first generation has ten souls. The next has a hundred, the next after that a thousand, and so on. Each generation is led into a room, at the front of which is a pair of dice. The dice are rolled. If the score is double-six then everyone in the room is shot and it’s the end of humanity. If any other score is shown, everyone survives and is led out of the Shooting Room to be replaced by the next generation, which is ten times larger. The dice are rolled again, with the same rules. You find yourself called into existence and are led into the room along with the rest of your generation. What should you think is going to happen?

Leslie’s argument is the following. Each generation not only has more members than the previous one, but also contains more souls than have ever existed to that point. For example, the third generation has 1000 souls; the previous two had 10 and 100 respectively, i.e. 110 altogether. Roughly 90% of all humanity lives in the last generation. Whenever the last generation happens, there are bound to be more people in that generation than in all generations up to that point. When you are called into existence you should therefore *expect* to be in the last generation. You should consequently expect that the dice will show double six and the celestial firing squad will take aim. On the other hand, if you think the dice are fair then each throw is independent of the previous one and a throw of double-six should have a probability of just one in thirty-six. On this basis, you should expect to survive. The odds are against the fatal score.
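Both halves of Leslie’s observation can be seen in a simple simulation. This is only a sketch: the run is capped at 49 generations to keep the numbers finite, which is exactly the kind of assumption the paradox turns on:

```python
import random

rng = random.Random(42)
trials = 10_000
rolls = ends = 0
doomed_share = []  # fraction of all souls so far who are in the final room

for _ in range(trials):
    total = 0
    for k in range(1, 50):  # generation k contains 10**k souls
        size = 10 ** k
        total += size
        rolls += 1
        if rng.randint(1, 6) == 6 and rng.randint(1, 6) == 6:  # double six
            ends += 1
            doomed_share.append(size / total)
            break  # humanity ends; start the next trial

print(ends / rolls)                           # each roll is fatal about 1/36 of the time
print(sum(doomed_share) / len(doomed_share))  # yet ~90% of souls are in the last room
```

Both statements are true at once: every individual throw is unlikely to be fatal, but conditional on the game having ended, about nine-tenths of everyone ever created was in the room when it did.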

This apparent paradox seems to suggest that it matters a great deal whether the future is predetermined (your presence in the last generation requires the double-six to fall) or “random” (in which case there is the usual probability of a double-six). Leslie argues that if everything is pre-determined then we’re doomed. If there’s some indeterminism then we might survive. This isn’t really a paradox at all, simply an illustration of the fact that assuming different models gives rise to different probability assignments.

While I am on the subject of the Shooting Room, it is worth drawing a parallel with another classic puzzle of probability theory, the St Petersburg Paradox. This is an old chestnut to do with a purported winning strategy for roulette. It was first proposed by Nicolas Bernoulli but famously discussed at greatest length by Daniel Bernoulli in the pages of *Transactions of the St Petersburg Academy*, hence the name. It works just as well for a simple toss of a coin as for roulette, since in the latter game the strategy involves betting only on red or black rather than on individual numbers.

Imagine you decide to bet such that you win by throwing heads. Your original stake is £1. If you win, the bank pays you at even money (i.e. you get your stake back plus another £1). If you lose, i.e. get tails, your strategy is to play again but bet double. If you win this time you get £4 back but have bet £2+£1=£3 up to that point. If you lose again you bet £8. If you win this time, you get £16 back but have paid in £8+£4+£2+£1=£15 to that point. Clearly, if you carry on the strategy of doubling your previous stake each time you lose, when you do eventually win you will be ahead by £1. It’s a guaranteed winner. Isn’t it?

The answer is yes, as long as you can guarantee that the number of losses you will suffer is finite. But in tosses of a fair coin there is no limit to the number of tails you can throw before getting a head. To get the correct probability of winning you have to allow for *all* possibilities. So what is your expected stake to win this £1? The answer is the root of the paradox. The probability that you win straight off is ½ (you need to throw a head), and your stake is £1 in this case so the contribution to the expectation is £0.50. The probability that you win on the second go is ¼ (you must lose the first time and win the second so it is ½ times ½) and your stake this time is £2 so this contributes the same £0.50 to the expectation. A moment’s thought tells you that each throw contributes the same amount, £0.50, to the expected stake. We have to add this up over all possibilities, and there are an infinite number of them. The result of summing them all up is therefore infinite. If you don’t believe this just think about how quickly your stake grows after only a few losses: £1, £2, £4, £8, £16, £32, £64, £128, £256, £512, £1024, etc. After only ten losses you are staking over a thousand pounds just to get your pound back. Sure, you can win £1 this way, but you need to expect to stake an infinite amount to guarantee doing so. It is not a very good way to get rich.
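The divergence is easy to check directly. A sketch of the bookkeeping using exact fractions, following the accounting in the text (each term counts the bet placed on the winning throw):

```python
from fractions import Fraction

def expected_stake(n_rounds):
    """Expected bet on the winning throw, summed over the first n_rounds
    of the doubling strategy."""
    total = Fraction(0)
    for k in range(n_rounds):
        p = Fraction(1, 2) ** (k + 1)  # lose k times, then win
        bet = 2 ** k                   # the stake doubles after every loss
        total += p * bet               # each round contributes exactly 1/2
    return total

print(expected_stake(10))  # 5: ten throws at £0.50 each, growing without limit
```

The sum adds £0.50 per round forever, so the expectation diverges as the number of rounds allowed goes to infinity.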

The relationship of all this to the Shooting Room is that it shows it is dangerous to pre-suppose a finite value for a number which could in principle be infinite. If the number of souls that could be called into existence is allowed to be infinite, then any individual has no chance at all of being called into existence in any particular generation!

Amusing as they are, the thing that makes me most uncomfortable about these Doomsday arguments is that they attempt to determine the probability of an event without any reference to an underlying mechanism. For me, a valid argument about Doomsday would have to involve a particular physical cause for the extinction of humanity (e.g. asteroid impact, climate change, nuclear war, etc). Given this physical mechanism one should construct a model within which one can estimate probabilities for the model parameters (such as the rate of occurrence of catastrophic asteroid impacts). Only then can one make a valid inference based on relevant observations and their associated likelihoods. Such calculations may indeed lead to alarming or depressing results. I fear that the greatest risk to our future survival is not from asteroid impact or global warming, where the chances can be estimated with reasonable precision, but from self-destructive violence carried out by humans themselves. Science has no way of predicting what atrocities people are capable of, so we can’t make any reliable estimate of the probability that we will self-destruct. But the absence of any specific mechanism in the versions of the Doomsday argument I have discussed robs them of any scientific credibility at all.

There are better grounds for worrying about the future than mere numerology.

April 29, 2009 at 9:11 pm

I remember reading Gott’s paper when it first came out. It generated much discussion then, and continues to do so. A few years later, when I was working at Jodrell Bank, while driving back home one night to where I was then living (on the grounds at Jodrell Bank), I scanned through the radio stations in my car. I heard someone with a southern U.S. accent mention “95% confidence level” and sure enough it was Gott himself talking about this paper, using examples such as his visit to the Berlin wall, to Broadway plays etc.

Gott’s paper has generated several other papers, most or all of which claim to refute it.

His argument reminds me of the Anthropic Principle. Most invocations of it are either trivial or complete rubbish. There might be a few cases where it actually delivers interesting information.

As you note, there are numerous examples of wrongly applied arguments from probability. Did you hear about the airplane passenger who tried to smuggle a bomb aboard his flight? He wasn’t a terrorist, but rather claimed to be preventing a terrorist attack: the probability that there are TWO bombs on the SAME plane must be quite small. :-)

Most people get this joke since it is rather obvious where the error in the passenger’s thinking lies.

I recently read a book summarising the history of philosophy via jokes. Perhaps something similar is needed to point out fallacies in probability arguments.

In a comment to some blog I read recently (which I’ll quote again since it is worth quoting again), someone gave an example of the difference between the probability of the data given the model and vice versa:

I meet someone and the model is that the person is female and the data are that the person is pregnant. The probability of the data given the model is about 3%; the probability of the model given the data is 100%.

April 30, 2009 at 4:24 pm

Philip,

A propos your comment… you might like to look at my contributions to the bad statistics thread, especially this item.

April 29, 2009 at 9:18 pm

“Perhaps we just happen to be living at the only epoch in human history in which we know enough about the Universe for the Doomsday argument to make some kind of sense, but too little to resolve it.”

Reminds me of:

He’s old enough to know what’s right

But young enough not to choose it

extra points if you recognise the source without using a search engine.

The book referred to in my previous comment is:

*Plato and a Platypus Walk into a Bar…: Understanding Philosophy Through Jokes*, by Thomas Cathcart and Daniel Klein (ISBN 9780143113874).

April 29, 2009 at 10:32 pm

Problem with the SNAP feature (I guess a WordPress problem): I was wondering why the SNAP preview of my web page doesn’t work, even though the URL is correct and the link to it works. It seems that the SNAP stuff neglects the fact that it is running on port 8000 (which is of course contained in the URL) and assumes that it is running on port 80 (the standard port). (I do have a placeholder running on port 80, so to me it is obvious from the error message in the preview window what the problem is.)

April 30, 2009 at 10:30 am

Absolutely right Peter, the prior info is different in the traffic warden problem (invented by Steve Gull, I believe) and the Doomsday argument, and consideration of the mechanism of doom is vital. Leslie’s reasoning would lead one to suppose that, since the earth has not been hit by a giant doomsday meteorite for a very long time, it is more likely to be hit by one soon. In fact the long absence of a big hit is evidence that there aren’t many big meteorites in earth-intersecting orbits, so that the longer we survive the more we should *reduce* the probability we assign to a doomsday scenario of this sort. It’s like saying that a coin we are confident is fair has just come up heads ten times, so surely it must come up tails next time we toss it “in compensation”. Nonsense!

Much has been written about the Anthropic Principle. Some of it is sensible and a lot of it is rubbish. Bayesian reasoning is the razor needed to cut the good stuff from the nonsense. It also needs to be said that different answers can be given to the same question in different categories of thought, in this case the question of why the “fundamental constants” take the values they do. (From a more basic theory, or so that human life could emerge?)

You wrote: “Unless there are major changes in the way this planet is governed, our planet may become barren and uninhabitable through war or environmental catastrophe. But I do think the future is in our hands…” Agreed Peter, although I think environmental catastrophe is overplayed and war is the real danger. When WMDs get into the hands of the suicide bomber mentality then the notion of Mutually Assured Destruction ceases to deter and the balloon goes up. What I don’t want as a purported solution is a world government, which may be implicit in your words (forgive me if I am wrong). That would be far too much power concentrated in one place. Not many governments have done good for very long and why should a world government be different? It’s still made of people and it is people who cause the problems you raise. Nor will it do to explain some events in history by saying “the people there were OK, it was just their dictator who led them to commit evil”. If an evil dictator were imposed overnight on a sufficiently moral populace then enough brave men and women would refuse his orders regardless of the personal consequences and he could get nowhere. Evil dictators cannot arise in a sufficiently moral population. In a myriad ways, governments reflect their people. We are all guilty.

Anton

April 30, 2009 at 12:59 pm

On a related note Peter, I have been thinking for a while now that the perfect solution to the ‘coincidence problem’ (why do we live at just the right time to see the universe just beginning to accelerate?) is that the universe will very soon become a place unsuitable for observers to exist in.

April 30, 2009 at 5:18 pm

I think the biggest problem with doomsday-like arguments is that they ask us to ignore all relevant information, apart from the fact that something has existed for a given time. This is a very counterintuitive thing to do – not surprising that it gives counterintuitive results.

Even if you manage to do that, the argument only works if you know that you are not dividing by infinity, as you correctly pointed out. But how can you possibly know that – except by considering some of that relevant information which you were supposed to ignore?

If you know enough to put a credible upper bound on the total number or time, you also know enough to make a much better estimate than the doomsday one.

June 16, 2009 at 4:26 pm

Peter, thank you for your thoughtful post. I would like to call your attention to my paper, “Past Longevity as Evidence for the Future”, in the latest issue of *Philosophy of Science*: http://www.journals.uchicago.edu/doi/abs/10.1086/599273

The paper contains, in my judgment, a new and definitive refutation of Leslie’s Doomsday Argument: the Doomsday Argument’s key error is to conflate future longevity and total longevity in the application of Bayes’s theorem.

A brief summary of my refutation has recently been added to the Wikipedia entry for ‘Doomsday argument’, under the heading “Conflation of future duration with total duration.”

http://en.wikipedia.org/wiki/Doomsday_argument

My paper also has much new to say about Gott’s arguments.

My paper is, on the whole, more positive than negative; it argues for an objective means for using knowledge of the past as evidence for the future.

June 28, 2009 at 5:36 pm

Nick Bostrom writes, “From seemingly trivial premises it [the Doomsday Argument] seeks to show that the risk that humankind will go extinct soon has been systematically underestimated. Nearly everybody’s first reaction is that there must be something wrong with such an argument. Yet despite being subjected to intense scrutiny by a growing number of philosophers, no simple flaw in the argument has been identified.”

Until now.

In my judgment, Ronald Pisaturo argues convincingly that there is, indeed, a simple flaw in the Doomsday Argument. (See http://www.journals.uchicago.edu/doi/abs/10.1086/599273) He identifies that the proponents of the Doomsday argument conflate future duration and total duration and, as a result, misuse Bayes’ Theorem to arrive mistakenly at a “Bayesian shift” toward an early doom for the human race. More than that, Pisaturo argues convincingly for an “alternative argument for quantifying how past longevity of a phenomenon does provide evidence for future longevity.”

Glenn Marcus, former professor of mathematics or electrical engineering at Manhattan College, Fordham University, and LaGuardia Community College

July 13, 2009 at 11:44 pm

Re: “If I happen to walk past a building just as it is being finished, presumably I should hang around and watch its imminent collapse….

But I’m being facetious.”

Doesn’t this example show that Gott’s method, and the probability distribution (for different remaining lives) is nonsense? Put differently, if you happened to find yourself standing at the base of a building that has just been finished one second ago, and had the alternative of moving to a building that was constructed 10 years ago, would you promptly run from the new building to the old? You surely would if you believed that there was a 95% chance that the new building was going to collapse within 19 seconds, while the probability of that occurring to the old building is extremely small. If you would not run from the new building to the old, doesn’t that mean that you think the assigned probabilities are completely baseless and bogus?

Thanks in advance for your reply.

July 14, 2009 at 12:32 am

I’m staying well clear of the word “bogus”.

July 14, 2009 at 1:24 am

In case the word “bogus” gets in the way of addressing my point/question, as perhaps the response by telescoper may indicate, forget the word “bogus” and just consider the word “baseless”, as in not based on anything that has any actual relevance to the probability of one building or another building surviving for a given period of time into the future.

December 8, 2011 at 11:41 am

Check out Cusp’s new thoughts: http://cosmic-horizons.blogspot.com/2011/12/how-many-tanks.html

Extra points for guessing how I found this old thread here In The Dark which is relevant to Cusp’s latest post.

April 2, 2013 at 8:54 pm

Have you ever wondered about space? We humans are advancing very quickly, and by the time we reach maximum population we will have colonized space. So, unless you come up with a better theory, the doomsday argument makes no sense.

April 2, 2013 at 10:07 pm

[...] see how the argument works, let’s start with really nice analogy developed over at the blog In the Dark (I modify it slightly here). Imagine you arrive in a large city and are immediately approached by a [...]


April 3, 2013 at 9:28 am

The most irritatingly overlooked weakness of the doomsday argument is that it has indeed been wrong for most of human history. Imagine the 500,000th human contemplating the argument and determining that it would be curtains by the time there had been one or two billion people born. Obviously, that person had no way of knowing where they were in the distribution of humans over time. Neither do we.

For example, consider the fact that we may be a true spacefaring race at some point in the future. Were we to colonize even a tiny fraction of the available worlds in our galaxy — even assuming it took generations to do so — those of us alive in the early 21st century might well represent the first 1% of all the potential humans to come. Assuming such a future, it would be madness to place ourselves in the middle of the human continuum.

Also, for the record, the analogy with the traffic wardens in a Chinese city is a false one. For the warden numbers, you are relying on the fact that picking one of the total number of wardens currently in existence gives you some reasonable odds of guessing at the total number. The same logic does not hold for the total number of persons yet to be born. In the Chinese case you are picking one of a currently finite and countable pool. For the doomsday scenario, currently living humans represent a temporal cross-section of a population that stretches in a finite and countable way into the past, but an uncountable way into the future.

April 3, 2013 at 12:49 pm

“The most irritatingly overlooked weakness of the doomsday argument is that it has indeed been wrong for most of human history.” Not really. In the case of an exponential population increase, it will be wrong for a very long period of time, but not for a large fraction of people. It is the exponential increase which makes things interesting. If the population is stable, then this reduces to Gott’s temporal Copernican principle. In the case of a rising population, the emotional impact is different. If exponential population increase continues, then colonizing space will hardly make a dent in the problem.

Of course, if the statement is true at 95% confidence, you can always argue that you are among the first 5% of humans. Yes, that caveman would have got it wrong. We expect that a certain fraction will get it wrong. What confuses some people is that this small fraction of people corresponds to most of the time that humans exist.

April 3, 2013 at 10:41 am

The logic used in the Doomsday Argument makes sense if T equals a finite number. In this case, whether we are discussing Chinese wardens and/or time scales, the logic holds perfectly well. But if T equals infinity, then the Bayesian (or epistemological) interpretation would not fit the parameters as well.

The problem with the Doomsday argument is that it assumes that there is a Doomsday, and from there on everything logically follows. If Doomsday is considered a fact, then yes, probability calculations make sense. But as long as Doomsday is not a certainty (numerically), then giving it a value becomes problematic. Epistemologically speaking, the argument works. Not ontologically, though. Or perhaps, then, a different explanation is required.

April 3, 2013 at 12:50 pm

Yes, that T is finite is an assumption. But this is certainly the case, though perhaps on a very long timescale. Even if the universe continues to expand forever, there cannot be an infinite number of humans as we now understand the term.

