Poker math showdown!

In comments, Rick Schoenberg wrote:

One thing I tried to say as politely as I could in [the book, “Probability with Texas Holdem Applications”] on p146 is that there’s a huge error in Chen and Ankenman’s “The Mathematics of Poker” which renders all the calculations and formulas in the whole last chapter wrong or meaningless or both. I’ve never received a single ounce of feedback about this though, probably because only like 2 people have ever read my whole book.

Jerrod Ankenman replied:

I haven’t read your book, but I’d be happy to know what you think is a “huge” error that invalidates “the whole last chapter” that no one has uncovered so far. (Also, the last chapter of our book contains no calculations—perhaps you meant the chapter preceding the error?). If you contacted one of us about it in the past, it’s possible that we overlooked your communication, although I do try to respond to criticism or possible errors when I can. I’m easy to reach; [email protected] will work for a couple more months.

Hmmm, what’s on page 146 of Rick’s book? It comes up if you search inside the book on Amazon:

[Screenshot: the relevant excerpt from p. 146 of Rick’s book]

So that’s the disputed point right there. Just go to the example on page 290 where the results are normally distributed with mean and variance 1, check that R(1)=-14%, then run the simulation and check that the probability of the bankroll starting at 1 and reaching 0 or less is approximately 4%.

I went on to Amazon but couldn’t access page 290 of Chen and Ankenman’s book to check this. I did, however, program the simulation in R as I thought Rick was suggesting:

waiting <- function(mu, sigma, nsims, T){
  time_to_ruin <- rep(NA, nsims)   # NA means never ruined within T steps
  for (i in 1:nsims){
    # bankroll starts at 1 and takes T normal(mu, sigma) steps
    virtual_bankroll <- 1 + cumsum(rnorm(T, mu, sigma))
    if (any(virtual_bankroll < 0)) {
      # record the first time the bankroll dips below zero
      time_to_ruin[i] <- min((1:T)[virtual_bankroll < 0])
    }
  }
  return(time_to_ruin)
}

a <- waiting(mu=1,sigma=1,nsims=10000,T=100)
print(mean(!is.na(a)))
print(table(a))

Which gave the following result:

> print(mean(!is.na(a)))
[1] 0.0409
> print(table(a))
a
  1   2   3   4   5   6   8   9 
218 107  53  13   9   7   1   1 

These results indicate that (i) the probability is indeed about 4%, and (ii) T=100 is easily enough to get the asymptotic value here.

Actually, the first time I did this I kept getting a probability of ruin of 2% which didn't seem right--I couldn't believe Rick would've got this simple simulation wrong--but then I found the bug in my code: I'd written "cumsum(1+rnorm(T,mu,sigma))" instead of "1+cumsum(rnorm(T,mu,sigma))".
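To see concretely what that bug does: putting the 1 inside cumsum() adds the starting bankroll at every step, which inflates the per-step drift from mu to mu + 1 (and so, plausibly, is what produced the too-low 2% ruin figure). A minimal illustration with made-up step values:

```r
# Three hypothetical per-step results
steps <- c(0.5, -0.3, 0.1)

# Buggy: the 1 goes through cumsum, so it is added at every step,
# turning a drift of mu into a drift of mu + 1.
buggy <- cumsum(1 + steps)     # 1.5, 2.2, 3.3

# Correct: the starting bankroll of 1 is added once.
correct <- 1 + cumsum(steps)   # 1.5, 1.2, 1.3

print(buggy)
print(correct)
```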

So maybe Chen and Ankenman really did make a mistake. Or maybe Rick is misinterpreting what they wrote. There's also the question of whether Chen and Ankenman's mathematical error (assuming they did make the mistake identified by Rick) actually renders all the calculations and formulas in their whole last chapter, or their second-to-last chapter, wrong or meaningless or both.

P.S. According to the caption at the Youtube site, they're playing rummy, not poker, in the above clip. But you get the idea.

P.P.S. I fixed a typo pointed out by Juho Kokkala in an earlier version of my code.

28 thoughts on “Poker math showdown!”

  1. There seems to be a slight bug in the code. time_to_ruin should be set to the minimum of the times virtual_bankroll is below zero, not the maximum. The current code computes the last time your bankroll is below 0 (if you are allowed to continue playing after ruin), rather than the time you drop below 0. This does not change the conclusion, but the table printed underestimates how often the ruin occurs in the first step.

  2. Without access to the Chen and Ankenman book, I conjecture that their solution was derived for a continuous version of this model: Brownian motion with drift 1 and scale 1. At least R(1)=exp(-2) is the correct solution for the continuous case: plugging t=infinity in slide 12 of http://www.mysmu.edu/faculty/christophert/QF204/C03_Brownian.pdf we obtain the probability of ever dropping y units below starting point to be exp(-2y), so R(1) = exp(-2).

    The discrete version simulated here has the same distribution as the continuous model at integer-valued times, but naturally the discrete version has a lower probability-of-ruin: the discrete player survives in cases where the continuous player drops below 0 but recovers before the next integer step.

      • Agreed. However, the excerpt does say “… normally distributed, and for other distributions…” What other distributions are used? Hypergeometric distribution? Seems like that would fit in this application.

  3. Thanks for commenting on this Andrew. First of all let me just say that I fully realize what a hypocrite I am for pointing out someone else’s errors when my own book is riddled with mistakes (see http://www.stat.ucla.edu/~frederic/errors.html ). But I do think this is a really big one. I moved last year and can’t seem to track down anything anymore, including the Chen and Ankenman book, so excuse further mistakes in what I say now, and the comment I wrote last week claiming it was the last chapter. I guess it was Chapter 22. Chen and Ankenman define R(a) as the risk of ruin if you start with a chips. They then assume that
    (*) R(a+b) = R(a)R(b)
    for any positive numbers a and b. Now, such a thing might hold true for some functions R, but the name “risk of ruin” suggests that R(a) is the probability of hitting zero when one starts with a chips, and (*) doesn’t generally hold for that function. In fact, I think (*) only holds for the exponential distribution, but again I could be mistaken here. Anyway, there are then 2 problems with their subsequent results. One is that the name “risk of ruin” is completely inappropriate, as their results may hold for some function R but not the function that we would ordinarily think of as risk of ruin. Second, they give some examples and (*) does not hold in their examples. Often it is not even close. I provide an example on p146 of my book which Andrew correctly posted an excerpt of here.

    I would be very interested in responses from anyone and will email Ankenman now in case he prefers to communicate that way, but this blog, since it is so widely read, seems to me to be an ideal place for the discussion.
    Yours,
    Rick

    • Rick:

      Just because you have mistakes of your own, that doesn’t make you a hypocrite for pointing out others’ mistakes! We all make mistakes, we should all feel free to point out the mistakes of others, and we should all admit the mistakes we make.

      The only thing that annoys me is when someone claims there’s a mistake but then refuses to point it out. This happened to me once. Someone wrote a letter saying that Bayesian Data Analysis was full of errors, but then when called on it, he refused to say what the errors were! That was just weird.

    • I don’t have time to check it rigorously, but the argument is roughly this. Let P(t,a) be the density of an “a”-point drop for the first time at time t (we are doing it continuously, right?), that is, the probability that at time t you are at -a and you never went this low before. Then, to suffer an a+b loss you have to first lose a points at some time t1 for the first time, and then go an extra b below in time t-t1 for the first time since t1. Thus P(t,a+b) = integral P(t1,a)P(t-t1,b) dt1. The probability of going under at any time is R(a) = integral P(t,a)dt. Hence, R(a+b) = integral dt integral dt1 P(t1,a)P(t-t1,b) = (integral P(t1,a) dt1)(integral P(t2,b) dt2) = R(a)R(b), where t2 = t-t1 and the integration limits are changed appropriately.

      This should work in discrete time as well if you discretize the steps. Just change integrals for sums.

      BTW, if Prof. Gelman is serious about discrete steps in time, why not discrete steps in money?

  4. OK. I think I’ve figured it out. Continuous limit should be approximated with mu=1/L, sigma=1/sqrt(L) were L is some large number, meaning that at each step there is a small (relative to the initial amount of money) drift forward and proportionally small random component. Taking L=10 I have probability of ruin 9%. With L=100 it’s 12.3%.

  5. Interesting about sigma = 1/sqrt(L), D.O. My feeling was this. If your bankroll is large and your SD is small, then the results are kind of meaningless since the risk of ruin will be nearly zero because of your positive drift. If your bankroll is small, then the results based on R(a+b)=R(a)R(b) don’t yield good approximations to a discrete process. So I didn’t see any value in their results. However, what I didn’t realize til now was that there may be a sweet spot. I guess if your bankroll is large, your drift is small, and your SD is moderate, then maybe the Chen and Ankenman results are good. So I guess I was too harsh in calling this a “huge mistake”. My apologies.
    Rick

  6. Well, I’m not sure I meant for this to turn into a “showdown.” But okay, draw!

    Let me recap what we did in the book. First we treated the limit of the standard gambler’s ruin game, where there is a biased coin, and we move +1/-1 depending on whether we won the flip. You get ruined if you ever reach zero. So that’s all exact.

    Then we generalized to a problem where there are sort of arbitrary payouts, but they are all integer-valued, or there is some way of handling combining bankrolls that contain fractional parts and some reason that you can’t lose a lot all at one time (think video poker).

    Next we generalized to this normal distribution game, where at each play, your bankroll gets a Gaussian rv with mean/stdev known, which is the topic of discussion here. Immediately there are definitional issues. A general stochastic process as described above does in fact have a more complex distribution, since from any bankroll, you can immediately jump (with some small probability) to a negative bankroll due to the Gaussian being supported everywhere. Additionally, if we treat the trials as IID like this, then when you have a very small bankroll, you get a huge freeroll, where you get all the positive upside and none of the downside.

    Before I go on, let me answer Andrew’s question:

    “Given that poker is a discrete game and has no normal distributions, now I’m wondering about the relevance of these calculations to poker at all!”

    The answer to this is that the drift in poker is very small compared to the variance; for example, in headsup limit holdem, a decent player’s win rate (mu) might be 0.01-0.02 bets per hand, while the standard deviation (sigma) per hand might be 2.3-2.5 bets per hand. Now the results of individual hands aren’t normal or IID, but because the drift is so small relative to the variance, a lot of individual hands are played. So using the central limit theorem to approximate blocks of hands as Gaussian rvs is a reasonable approximation.

    So in light of this, we chose to ignore the approximation for small bankrolls, which are not of practical concern in poker, and solve for the more general case of combining bankrolls. Notice that in the coin-flipping game, we did in fact calculate that exp(-alpha) was R(1), and it could be exponentiated n times to calculate the risk of n. However, in the normal approximation case, we treated the risk function as a martingale, thus ridding us of problems surrounding the strange behavior of R(1)–where you would very often end up substantially negative, and so on.

    The solution we obtain from that: exp(-2*b*mu/(sigma^2)) is the exact risk of ruin function if we treat the game as a Wiener process, with drift mu and variance sigma^2, where the game stops immediately on reaching zero. It is a very close approximation of the answer for bankrolls of the size we would typically be interested in understanding the risk parameters of (for example, 50-500 bets with win rates and standard deviations as above). This is because for bankrolls of interest, the whole thing is much closer to the continuous process than to a discrete process that jumps all over the place. It is true that R(1) and values close to that are basically just wrong; this is because, as Rick says, the R(a+b)=R(a)R(b) breaks down badly when the variance of the step is on par with the size of the bankroll. But those values are of little interest in our field, and it wasn’t a book on stochastic processes.

    I hope that clears some things up. I’m happy to have the opportunity to discuss this on Andrew’s widely read blog. :)

  7. I’m still a bit suspicious of the R(a+b) = R(a)R(b) assumption for the case that I had in mind and thought Chen and Ankenman were suggesting in their text and examples in chapter 22, namely where ups and downs happen in discrete time and the wins and losses happen independently according to some continuous random variable with infinite support, like the normal distribution. Does the argument by D.O. make sense there? D.O. claims you can just replace the integrals with sums. I’m not so sure.
    Does R(10) = R(7)R(3)? This would mean the risk of going from 10 chips to 0, eventually, is equal to the risk of losing 7 chips eventually times the risk of losing 3 chips eventually. D.O.’s argument is saying that, to go from 10 to 0, you have to go from 10 to 3, and then from 3 to 0. I have two problems with this. First, if jumps are happening according to the normal distribution, then there is some chance of going straight from 10 to below zero, without first hitting 3. It may be small, but there is some positive probability of a huge negative jump like this. Second, and more importantly, there’s no chance of going from 10 to 3. If jumps are happening according to a continuous distribution like the normal, then there’s zero probability of ever hitting exactly 3. If you replace “going from 10 to 3” with “going from 10 to below 3”, then the other part isn’t right, since the probability of going from something BELOW 3 to 0 is different from R(3). I guess you could discretize the jump space and replace “going from 10 to 3” with something like “going from 10 to 3 +/- delta”, for small delta, but again the logic seems wrong since if delta is small, and if you go from 10 to below 0, you can (and typically will) do this without hitting 3 +/- delta.
    So, it still seems to me like there may be a real problem here, and it’s not all so easy to explain away a trivial substitution of integrals for sums.
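    The overshoot Rick describes is easy to exhibit in the running normal(1,1) example: when a bankroll starting at 1 is ruined, it never lands on 0 exactly; it jumps strictly below it, so “ruin” leaves you at a random negative value rather than at the boundary. A sketch:

```r
# Record the bankroll's value at the moment of ruin (first time below 0)
# for normal(1,1) steps starting from 1. Ruin, when it happens, happens
# within the first few steps, so 20 steps per path is plenty.
set.seed(1)
ruin_value <- replicate(200000, {
  path <- 1 + cumsum(rnorm(20, 1, 1))
  idx <- which(path < 0)
  if (length(idx) > 0) path[min(idx)] else NA
})
ruined <- ruin_value[!is.na(ruin_value)]
cat("ruin probability:", length(ruined) / 200000, "\n")
cat("mean bankroll at the moment of ruin:", mean(ruined), "\n")  # strictly negative
```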

    • I think when Ankenman says “for bankrolls of interest” he means hundreds or thousands of chips (above)

      I think there’s a point to be made about how the unit size of a chip scales with bankroll size. I have to assume that poker players playing with $50k bankrolls don’t have 50k $1 chips, they probably have 100 x $500 chips or 1000 $50 chips etc.

      If the size of the discrete jumps (the denomination of the chips) changes with the absolute size of the bankroll, then the normal / continuous type approximation will break down.

        • To take this comment to the next level of pushing my own hobby horse (dimensional analysis). The relevant dimensionless ratio seems to be b/B_0 (no pun intended), where b is a measure of the typical bet size, and B_0 is the initial bankroll. Assuming that people with large bankrolls also tend to make larger bets, and also that people find it inconvenient to work with enormous numbers of chips, b/B_0 is probably greater than 0.001 in essentially all games. One has to assume that if the typical bet size is on the order of 0.1 of an initial bankroll, then because of the variance in outcomes, the games would tend to be short-ish. So my guess is that the ratio is usually between about 0.01 and 0.001.

        These may be small enough that normal approximations don’t get you into too much trouble for many things, but there are certainly conditions in which the normal approximation will get you in trouble even here. The big issue is whether bet size changes under various circumstances. For example, if a pot “heats up” and people respond by dropping out, and the remaining players stay in by upping the size of their bets or things like that. It seems to me that the normal approximation gives you some kind of guidelines under certain regimes of play, but in other regimes the discrete nature will dominate.

        I think the fancy stochastic process stuff tends to be given more weight than is really wise. The fact that the math is fancy makes people think that it’s therefore “more true” whereas in fact it’s only valid as a model in certain circumstances.

        • A few things to keep in mind:

          1. Most poker games these days are ‘table stakes’ which means that you can only risk the chips that are on the table at the start of the hand. During a hand, you cannot go to your pocket for more chips. (The table stakes rule limits how much you can lose on the hand, and also how much you can win.)

          2. Typically, a poker pro does not ‘buy in’ at a tournament or cash game for his entire bankroll. His buyin will be a prudent fraction of his entire bankroll. It is this fraction that should typically be between 0.01 and 0.001.

          3. The typical buyin for a game will be large compared to the smallest permitted bet size. For example, at the World Series of Poker Main Event, you receive 30,000 in tournament chips for your $10,000 buyin. The minimum bet size during the first level of play is 100, so players are buying in for 300 (minimum) bets.

          As another example, in a $1-$2 blind no limit hold’em game, players often buyin for $500 (250 bets).

          4. Combining 2 & 3, we see that a prudent bankroll for a no limit hold’em pro should be tens of thousands of (minimum) bets or more.

          5. It is not uncommon for a player’s table stakes to comprise an enormous number of bets. It would indeed be inconvenient to deal with so many chips. This is why higher denomination chips were invented!

        • Paul: your points are very similar to my own but with actual numbers to support them. You are saying that people “buy in” for between maybe 200 and 500 minimum bets, which is the number (B_0/b) provided you make B_0 mean the “table buy in” rather than “total bankroll”.

          Total bankroll is something closer to “riskable net worth” which I assume a real player would rarely if ever risk entirely in one game.

          Higher denomination chips are useful of course, but they may limit the effective granularity of betting in some contexts. If a person just raised you $100 will you see them and raise an additional $37? probably not. You’re probably going to re-raise $25 or $50 or $100 because that’s how human minds tend to work. Whereas if a person just raised $5 you are probably going to re-raise 1,2,5,10 etc. The point I was trying to make is that it’s possible that granularity isn’t a context independent constant. Certain phases of play may involve larger increments than others.

        • I assure you that I could care less about the fanciness of the process. It is a real, practical problem to assess the chance of losing a fixed amount of money before running off to infinity, given a distribution of hand outcomes that is hard to characterize and doesn’t behave like a nice distribution that one made up in the lab. Nevertheless, the formula we derived for a normal approximation is quite close in reality (which we have verified by simulation).

          Also from your second paragraph, you may be confused about the topic here; this is not any kind of guide to playing inside the poker game, but is about meta-considerations of longer-term risk as a poker professional who wants to weather inevitable variance, or similar. Paul’s comments are on target here.

        • Jerrod: yes I see your point. and I wasn’t referring to you when I made that “fancy math” comment, more to the tendency in general to over reify stochastic calculus. In finance for example there are all kinds of fancy theorems about martingales and stochastic calculus, but although they are taken as kind of gospel in some circles they are not always the best way to model financial decision making. Stochastic calculus might be mainly of interest to say professional market makers, whereas discrete event models are more relevant to say retired people looking to take fixed payments each month and do a small number of rebalancing portfolio changes each year.

          If as you say we’re really talking about the long term “professional” poker risk issues, then yes, the incremental bet is going to be enormous compared to the full bankroll, and the continuous approximation is going to be a lot better guide as you have verified.

          The big point I was trying to make is that the usefulness of the model depends in large part on how much any one game matters to you. As long as any one game makes no major difference at all, then you can treat the whole thing as a continuous model and you’re golden. As soon as one bet is a nontrivial fraction of your bankroll… the discrete nature will become clearly important.

    • Yes, you are right, for my simple argument to work you have to make sure that each step in money is taken separately. Otherwise, say on each step you can go -1 or +2, then starting at 1 you can lose in 1 step, but not in 2.

  8. I was asked to write a second edition of my probability textbook that just uses poker examples, and in the first edition I was a little critical of part of Chen and Ankenman’s book “The Mathematics of Poker”. There was some discussion here about it. So, I decided to track down the book and revisit their approximations. In case you’re interested, here is what I found.

    First of all, here is what I said in my first edition, on page 146. “Note, however, that calculating probabilities involving complex random walks can be tricky. In Chapter 22 of Chen and Ankenman (2006), the authors define the risk of ruin function R(b) as the probability of losing one’s entire bankroll b. They calculate R(b) for several cases such as where results of each step are normally distributed and for other distributions, but their derivations rely heavily on their assumption that R(a+b) = R(a)R(b) and this result does not hold for their examples, so their resulting formulas are incorrect. If, for instance, the results at each step are normally distributed with mean and variance 1, then using their formula on p.290, Chen and Ankenman (2006) obtain R(1) = exp(-2) ~ 13.53%, but simulations indicate the probability of the bankroll starting at 1 and reaching 0 or less is approximately 4.15%.”

    I am currently thinking of not changing a word in the 2nd edition. I would be curious to see if people feel that is unfair or inappropriate. Here is a synopsis of the results in Chapter 22 of Chen and Ankenman (2006).

    Chen and Ankenman (2006) first give an example where you lose $100 if a die is 1 or 2, and you gain $100 otherwise. You start with a bankroll of $100. They show P(ruin) = 1/2. This is true. In example 22.2 they then derive the risk of ruin to be exponential for this same game where you start with $100*b. Fine.

    They then go to other examples and things start to break down as far as I can see.

    In Example 22.3, at each step you have a 12% probability of gaining $785, 13% chance of gaining $385, 13% of gaining $185, 62% chance of losing $215, and you start with $500. They claim R(b) = .729.
    Actually, no, it’s about 68.9%, based on 200,000 simulations.
    They provide the formula R(b) = exp(-ab), with a ~ .000632, and they list R(b) for 8 different values of b ranging from 500 to 10000. I tried investigating a few of them by simulation. They’re all off by about 5 or 6%. Note that this expression R(b) = exp(-ab), the way they present it, is not an approximation but an exact expression, assuming R(a+b) = R(a)R(b).

    Here is my code to simulate this.
    n = 10000    # number of simulated bankroll histories
    m = 10000    # hands per history
    y = rep(0, n)
    for (i in 1:n) {
      x = min(500 + cumsum(sample(c(785, 385, 185, -215), m, rep = TRUE,
                                  prob = c(.12, .13, .13, .62))))
      if (x > .5) y[i] = 1                       # bankroll never reached 0: survived
      if (i/1000 == floor(i/1000)) cat(i/1000)   # progress indicator
    }
    1 - mean(y)  # estimated probability of ruin

    In the same table, they say if you start with b = $1000, then R(b) = 53.2%.
    In simulations I get more like 51.4%. I get results of (0.51574, 0.5126, 0.51445, .51443, .51394), each result from 100,000 simulations, and in each simulation I run m=10,000 hands, and if you’re still alive after 10,000 hands I consider you not having been ruined. I guess it is possible that if I increased m, then the probability of ruin would increase slightly, but I doubt this accounts for the disparity. When I ran the same code with m = 1000 I got virtually identical results, so it seemed to have converged already.

    For b=$5000 they get R(b) = 4.3%.
    It’s actually about 4.5% in my simulations.

    They then go to the case where the jumps are normal, and find R(b) = exp(-2*mu*b/sigma^2), which immediately following its derivation they describe as an important result, though they note that in reality poker results are not normal. I’m not concerned about approximating poker results with the normal distribution. If these are useful approximations, that’s fine with me. The very first illustration is to tournament results, and they show the normal is a poor approximation for the case where you win 99 buyins 2% of the time and lose 1 buyin the other 98% of the time, starting with 100 buyins in your bankroll. They claim the true risk of ruin is 18.6%, and show their normal approximation yields 36.2%. But the true percentage is actually about 19.9% from simulations.

    They then consider the case where a player wins a certain number of small bets (SB) per hand: 0 with probability 70%, -1 with probability 4.07%, +2 with probability 2.81%, -2 with probability 8.14%, …, and a bunch of other possibilities, with a mean of .02SB/hand and a variance of 10.8. Their normal approximation yields a 32.89% risk of ruin for a bankroll of 300SB. They compare this with 32.17% which is what they get using R(a+b) = R(a)R(b).
    Actually the probability is about 31.6% from simulations.

    Their next example, which is their last example in Chapter 22, is very similar. Here you win 0, 1, -1, 50, or -50, each with probability 70%, 15%, 13%, 1.1%, and 0.9%, respectively, and you start with 500 units. Their normal approximation yields a risk of ruin of 9.19% and their equation 22.1 yields 9.12%. Actually it is about 8.2% based on simulations.

    That concludes Chapter 22. I am not cherrypicking examples. These are all the ones they give. I see no reason to alter what I wrote, but if anyone cares to chime in I would certainly be interested. Basically, I feel like their formulas yield very poor approximations when your bankroll is small, and when your bankroll is large your risk of ruin is very nearly 0 anyway, and when your bankroll is medium sized, their formulas are still not great as they are typically off by 5% or so even in their examples. More generally, I think there is a tendency among some mathematicians to be a bit overexuberant in applying simple mathematical formulas to real world problems without necessarily noting all of their limitations.

  9. I found this post because I was searching for a recent post on poker books. I just finished Pocket Kings and thought it was terrible, I thought it got good reviews here and wanted to check if I got it mixed up with another book.

    The initial problem is just the standard textbook gambler’s ruin. Against an opponent with limited resources you let the total winnings available go to infinity and the probability of ruin is (q/p)^b, when you start at b and p>q.

    It’s a simple recurrence relation that leads to a second order linear difference equation. That’s why R(a+b) = R(a)R(b), there’s no assumptions except independence and unit bets. But the authors don’t seem to understand this and introduce an arbitrary number of states, but then their original auxiliary equation no longer holds. They need a higher order equation and more boundary conditions.

    You can see the simple random walk case is as predicted by (q/p)^b

    ruinProb <- function(b, nSims, nHands, p, q) {
      ruin = rep(0, nSims)
      for (i in 1:nSims) {
        bankroll = b + cumsum(sample(c(1, -1), nHands, rep = TRUE, prob = c(p, q)))
        if (min(bankroll) <= 0.0) ruin[i] = 1
      }
      return(mean(ruin))
    }

    p = 0.55
    q = 1-p

    nSims = 10000
    nHands = 10000

    set.seed(101)
    b = 4

    print(ruinProb(b, nSims, nHands, p, q))
    print((q/p)^b)

    # b = 10
    [1] 0.133
    [1] 0.1344306

    # b = 6
    [1] 0.295
    [1] 0.2999846

    # b = 4
    [1] 0.4465
    [1] 0.4481251

    # simulated
    print(0.295 * 0.4465)

    # exact
    print(0.2999846 * 0.4481251)

    Using the normal approximation is fraught with problems, because the distribution of the random walk is non-stationary. Strictly speaking, if you want to use the approximation, you need to specify the number of hands or steps in advance. This means their integration on pages 289-290 is not correct. There's probably a small subset of problems, with a large enough starting bankroll and the right relative mu and sigma, where it is relatively correct. But I'd use it with caution. I think generally it's used to roughly size your bankroll to cover say 3 sigmas, rather than for precise estimates of ruin probability.

    # normal approximation
    ruinProbNorm <- function(b, mu, sigma, nSims, nHands) {
      ruins <- 0
      for (i in 1:nSims) {
        bankroll <- b + cumsum(rnorm(nHands, mu, sigma))
        if (min(bankroll) <= 0) ruins <- ruins + 1
      }
      return(ruins / nSims)
    }

    b = 50
    mu = 0.1
    sigma = 5

    nSims = 1000
    nHands = 10000

    set.seed(101)
    print(ruinProbNorm(b, mu, sigma, nSims, nHands))
    print(exp(-2 * mu * b/(sigma^2)))

    [1] 0.624
    [1] 0.67032

    The simulation of tournament results on page 287 is just plain wrong, the formula makes no sense. If R(b) = e^(alpha * b), leaving alpha as negative, then 0.62 * e^(alpha * (-215)) means the probability of ruin starting with bankroll b = -215, in which case you are already ruined. Plugging negative numbers into the exponent is not meaningful if the boundary condition is at zero.

    A simulation gives 69%, but you should estimate the standard errors as well.

    # tournaments simulation
    b1 = 785; b2 = 385; b3 = 185; b4 = -215
    p1 = 0.12; p2 = 0.13; p3 = 0.13; p4 = 0.62

    b = 500
    nSims = 10000
    nTournaments = 100000
    ruin = rep(0, nSims)
    set.seed(101)

    for (i in 1:nSims) {
      path = b + cumsum(sample(c(b1, b2, b3, b4), nTournaments, rep = TRUE,
                               prob = c(p1, p2, p3, p4)))
      if (min(path) <= 0.0) ruin[i] = 1
    }

    print(mean(ruin))

    [1] 0.6911

    But even with that, their estimate of the root is very inaccurate. It leads to a 1% error in probability, even in their own incorrect calculation.

    f <- function(alpha) {
      return(p1 * exp(alpha * b1) + p2 * exp(alpha * b2) +
             p3 * exp(alpha * b3) + p4 * exp(alpha * b4) - 1)
    }

    x = seq(-0.00000632, -0.000632, -0.000001)
    y = f(x)

    options(repr.plot.width=10, repr.plot.height=5)
    plot(x, y, type = 'l')

    root = uniroot(f, upper = -0.00000632, lower = -0.000632, tol = .Machine$double.eps, maxiter = 1000)
    alpha = root$root
    print(alpha)

    [1] -0.0006056068

    print(exp(alpha * b))
    print(exp(-0.000632 * b))

    [1] 0.7387443
    [1] 0.7290595

    I didn't bother with the other examples, I think the authors don't understand linear difference equations or simulations.
