Mistakes in David McDowell's Blackjack Ace Prediction


FROM ET FAN: 
Convexing Calculations for McDowell's Blackjack Ace Prediction, Or -- I'm Sorry!

By ETFan (From Blackjack Forum XXIV #2, Spring 2005) © ETFan 3/9/05

In the Spring issue of Blackjack Forum Online, Arnold Snyder published his critique of David McDowell's book, Blackjack Ace Prediction, and put forth a correction of the now infamous EV calculation in Chapter 7. Since I wasted much of my youth pondering ace sequencing, I excitedly posted another approach to solving this EV, which I thought might be of interest to the few people involved in such an arcane endeavor.

I had no idea my little post would cause such a furor. It didn't occur to me there would be any controversy regarding Arnold's review of the book. The book came out. Some people endorsed it. Then Arnold took the time to study it and found some problems -- something which has happened over and over through the years. Generally, the reaction of the blackjack community has been to look more closely at the product, do some calculations, and close ranks behind the Bish, since he is widely acknowledged as a blackjack expert and all-around sage, soothsayer, and holy man.

But this time was different. Since I posted my little calculation, and a half dozen or so follow-ups, I've been informed that I couldn't possibly know anything about the arguments posed in the book, since I hadn't read it. That I needed to look at Table 3-2. That the errata explained everything. That I was grievously ill-informed to believe Snyder, since he panned the book purely to embarrass another authority. That any word of defense for Arnold is like spitting on that authority, since Arnold has friends who say mean things about that person. That Snyder is [words too severe for your ear, gentle reader]. That I am dishonest. That all books have errors anyway. That I have lost all credibility. And that I have angered forces so powerful that my personal, private communications are no longer secure.

I'm sorry.
I'd like to retract everything I said about the book. I'd like to, but I can't. Once I had the book in my hot little hands, three things were quickly evident:

1) Everything Snyder said was accurate.
2) The reference to Table 3-2 was a sidestep; pure flimflam.
3) The errata explains nothing. It actually provides more nails for McDowell's coffin.

Those who are deeply concerned about any of these three points, or my credibility, have permission to skip to the Infamous EV section. Those who passed Probability 101 and would like to see some quick samples of elementary errors in the book may want to skip to The First Calculation and A Broken Calculation.

I wish Blackjack Ace Prediction were the greatest book ever written on any exotic blackjack technique. Then dealers would have to expend tons of energy learning new shuffle routines, and that would divert attention from my own advantage play, which has nothing to do with ace sequencing. I wish ace sequencing were all it's cracked up to be in the book and more. Then I could just post that Arnold had lost it, and my calculation had no relation to anything in the book. I would have loved writing that post. But I've owned the book for three weeks now and -- sorry -- no can do.

The remainder of this article will attempt to explain why I don't believe the pie-in-the-sky promises outlined in the first few pages of Chapter One. Why I turned my attention away from ace sequencing some 15 years ago, and toward a simpler, more rewarding ploy. And -- sorry again -- why you must take everything McDowell says about ace sequencing with two or three truckloads of salt.

The Promise

Chapter 1, pg 18: "If you get it right, Ace Prediction in modern games will get you (conservatively) a 3-4% edge."

It's clear, from the context, that McDowell is talking about overall edge -- not just the edge on an occasional hand, or a half dozen times an hour. He compares it with card counting, but clearly, card counting provides you with an occasional 3-4% edge.
The reason we are to take up sequencing is the far greater net edge -- 3-4% compared with 1-1.5%. This 3-4% figure is not supported anywhere in the book. Sorry -- I will show you it is insupportable.

Filler and More Promises

Pages 1 through 16 simply contain cover pages, the Table of Contents, a list of "Tables and Figures," Acknowledgments, and the Foreword by Michael Dalton. Then on pg 17 we read: "huge advantages over the casino," and on page 18: "Predicting Aces is a very simple idea and it's easy to do at the table."

The First Calculation

Chapter 1, pg 19: "To pull this off requires nerve, but getting big money on to the table when you have the advantage is the real key -- not the size of your edge. Four bets at 2% is just as good as two bets at 4%, and it looks less suspicious."

Sorry, no. It isn't. When you have a 4% edge, you can bet twice as much and keep the same risk of ruin. Four bets of one unit at 2% are only half as good as two bets of two units at 4%. The first has an EV of 4 x 0.02 x 1u = 0.08 units; the second, an EV of 2 x 0.04 x 2u = 0.16 units. You can't rehabilitate the argument by saying "we were going to bet table max in either case." If you bet the same, with a higher edge, you're rewarded by a smoother uphill climb in bankroll. Your situation improves by every risk-adjusted measure, such as Kelly's G, N0, or SCORE. This should be reflexive knowledge for all advantage players. Note that the formula for SCORE invokes advantage squared in the numerator, not just the advantage.

This has little to do with the rest of the book, but it's a note of caution that we should carefully chew any further math in the book before swallowing. This is no typo. He's got the suspicion factor turned round 180 degrees. The doubling of EV allows extra room for all manner of cover play. The very first calculation in the book betrays a fundamental misunderstanding of advantage play math.
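To make the arithmetic concrete, here is a minimal sketch (my own numbers and function names, not the book's) comparing the two betting plans:

```python
def ev(num_bets, units_per_bet, edge):
    """Expected win, in units, for num_bets wagers of units_per_bet units each."""
    return num_bets * units_per_bet * edge

low_edge = ev(4, 1, 0.02)    # four 1-unit bets at a 2% edge
high_edge = ev(2, 2, 0.04)   # two 2-unit bets at a 4% edge

print(low_edge)    # 0.08 units
print(high_edge)   # 0.16 units -- exactly twice as good, not "just as good"
```

And with total action held equal (4 units in both plans), the 4% plan still wins on every risk-adjusted measure, since those effectively square the edge.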
More Acknowledgments and History

The rest of pg 19, and pgs 20 through 37, contain brief synopses of prior works, each generally getting one paragraph; a bit longer for Golomb and Thorp. Some have said this is the best feature of the book -- its thorough list of references. However, there is no original math in this section, and nothing of earth-shaking importance for sequencers in the real world.

The Second Calculation

On pg 38, McDowell correctly interprets what Epstein wrote about "highly expert dealers" -- up to a point. He correctly plugs the numbers into Epstein's formula and comes up with 8/9, 8/81, and 1/81 as the probabilities for 1-, 2- and 3-card interlacings in a single riffle. But before we even have a chance to say "well done, McD," he tries to get creative. Here is the quote from the bottom of the same page:

"When calculating n above, zero-card interlacings were grouped with one-card interlacings." [No, they weren't.] "Since, in practice, they are identical to two-card packets, we deduct 8/81 from r1 and add it to r2. The probability of one-card packets becomes 8/9 - 8/81 = 64/81 = 0.78; two-card packets = 8/81 + 8/81 = 0.20 ..."

Sorry, incorrect. Nowhere in Epstein is permission given for such an adjustment. In fact, part of the rationale McDowell gives -- the centrality of "transition probabilities" in Epstein's analysis -- provides proof this was no part of Epstein's premise. The very first transition probability Epstein gives is Pi(AAA) = 1, which means that a transition to the alternate packet (i.e. the dealer's other thumb during a riffle) is certain when three cards in a row fall from a given packet. This is clearly untrue when zero-card interlaces are permitted. All the other transition probabilities are also untenable with zero-card interlaces. Epstein was simply saying one, two or three cards could come down from a given packet with specified probabilities. Perhaps McDowell was confusing interlaces with what he calls "gaps," later on.
(A two-card interlace results in one zero-card gap, a three-card interlace results in two zero-card gaps, etc.) Or maybe he just wanted Epstein's results to fall in line with Hannum's and Curtis' results for Table 2-5. Sorry -- they don't. Of course, making similar adjustments in Hannum's or Curtis' results wouldn't have made such a nice, neat chart.

Moving On

Pg 40 presents some more of Hannum's results, without comment, and gives an encouraging quote from a 1988 paper entitled Non-Random Shuffling Strategies in Blackjack. No practical data. Pg 41 contains an explanation of Shannon's formula for information and entropy. I am not sure why base 2 was chosen, but using another base (e.g. natural logs) would simply change the unit of information measure, so I have no comment. I'm not an expert on information theory.

But pg 42 -- here we go again. McDowell presents -- with an air of authority -- Table 2-7, "Information Loss in Card Shuffling," which he attributes to Trefethen, Lloyd N., and Lloyd M. Trefethen. He concludes: "After the first riffle ... 52 bits of information about deck order are destroyed (23.05%) and 173.58 bits remain (76.95%) ... After the fourth riffle, only 12% of the original information remains ... After ten riffles, I = 0 bits and U ... = 225.58 bits."

Sorry. You don't need to be an expert to see this for the nonsense that it is. On pg 167 of The Theory of Gambling and Statistical Logic, Epstein presents similar formulae and immediately concludes: "The transition probabilities for a 52-card deck ... are far beyond the reach of any practical high-speed digital computer." And no, computers haven't become fast enough since Epstein wrote that to change his conclusion. McDowell makes it seem these information figures apply to any dealer. To any riffle! There's no report on the experimental procedure used, no transition matrices, no specification of how the various Pi values were obtained.
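To be fair, the headline numbers themselves are just bookkeeping on log2(52!) -- the dispute is over what they mean for a real dealer, not over this arithmetic. A quick sketch reproducing them:

```python
import math

# Maximum entropy of a 52-card deck: log2(52!) bits.
total_bits = sum(math.log2(k) for k in range(1, 53))
print(round(total_bits, 2))                      # 225.58

# The table's model: the first riffle destroys 52 bits.
destroyed = 52.0
print(round(100 * destroyed / total_bits, 2))    # 23.05 (percent destroyed)
print(round(total_bits - destroyed, 2))          # 173.58 (bits remaining)
```

None of this says anything about a particular dealer's riffle; it is pure accounting over the 52! deck orderings.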
The Pi values (the various probabilities ascribed to every possible deck state) depend critically on the dealer's individual riffle. I have a copy of the paper by the Trefethens. A computer shuffle was used with the property that "one shuffle is equivalent to separating the n cards into two subsequences at random, then concatenating the subsequences, a process that comes very close to moving the deck to one of 2^n possible configurations with equal probability." Thus, the "riffle" used moved the deck into any one of 2^52 = 4.5 quadrillion possible states with "close to" equal probability. Sorry -- such a riffle, while interesting from an information-theoretic point of view, has virtually nothing to do with a riffle we're likely to encounter in a casino.

Chapter 3

This is where he lays out his methodology (which he credits to Thorp) for counting "gaps" between initially adjacent cards. Before presenting the tables, he writes: "1530 observations of Di (n = 1530) were recorded for a one- and two-riffle shuffle, and ... 5100 observations (n = 5100) for a three-riffle shuffle. Therefore, the statistics presented can be looked upon as good estimates of the parameters of the parent group for all card shuffles ..."

Sorry -- no way. First of all, he's assuming the gaps are distributed identically through all parts of each riffle -- start, middle, and finish. You can't just do that. Even if you feel you have a very smooth, even shuffle, you need evidence for that in the form of statistics. You can't just magically transform a sample of 30 into a sample of 1530 based on what you think should happen. Consider what happens when you riffle a deck of cards. Don't 3-5 cards sometimes come out together at the beginning of the riffle? Isn't there usually a packet of 2-6 "leftovers" at the end? McDowell may, indeed, be one of the smoothest rifflers in Kingdom Come, but identical distribution, from bottom to top?? Moreover, McDowell assumes his shuffle is a fine sample for all shuffles.
Ultimately, his sample size is one (1). And I honestly believe it's an inappropriate sample, because his shuffle is apparently quite unusual. More on this later. From pg 47: "The exact number of single-card separations may vary from person to person but, over a large number of trials, the most frequently occurring distance between cards after one riffle always will be one card." [Emphasis in the original.]

Sorry -- wrong. The most common "distance" (gap) for me, by far, is zero cards. Remember, a zero-card gap, under his definition, occurs when two initially adjacent cards stay adjacent through the shuffle. I would challenge you to pull out a deck of cards, put it in any well-defined order, riffle once, and check to see how often precisely one card is interleaved between initially adjacent cards. I just did it now, and I got 9 one-card gaps and 22 zero-card gaps. And the number of separations may vary from person to person? May vary?? Think about skin tone and the thickness of a single card. Think about what's required to make precisely one and only one card drop before action switches to the other thumb. Think about dealers with long fingernails!

Well, on the same page he presents his table of gaps, based purely on his personal shuffle. Then he does a little massage on the numbers: "The percentage of sequences broken was ... 1.96%. Before dividing the total for single-card separations, which includes broken sequences, the raw value was reduced by the number of broken sequences."

I was a little perplexed when I read this. Then I looked at his table of adjusted percentages vs. raw percentages. In the Adj. column, the percentage for a 1-card gap (his mean) goes down and all the other percentages go up! A little study led me to discover he had subtracted all 30 of the broken sequences (one for each riffle) from the mean as well as the total sample, but none of the broken sequences from any of the other card gaps.
Is there some reason to believe 0-gap sequences can never be broken? Sorry. None that he explains, and none that I can imagine. 2-gap sequences -- unbreakable? Not. He might claim he's trying to be conservative, since bets will be placed based on the mean gap (which is also the mode -- the most common -- in his charts). But he bases his mean and standard deviation statistics on this adjusted sample! It would have been easy for McDowell to subtract out broken sequences from all gap sizes on a prorated basis, but, for reasons known only to him, he chose to distort the percentages.

But wait -- I skipped a page. From pg 46: "This distribution has an almost symmetrical "bell shaped" normal curve. This makes the arithmetic mean and standard deviation appropriate measures of location and dispersion ..."

Well, the mean and standard deviation are always good measures of, well, the mean and dispersion about the mean. But he's basing his normal distribution theory on four data points for the one-riffle shuffle. The only reason it looks normal is because he's connected the dots with a nice, bell-shaped curve. And the curves for two and three riffles, with a few more data points, look less and less normal. This is a common mistake, even among experienced statisticians. People tend to assume raw data is normally distributed, when in fact, it's quite an unusual distribution in nature. But -- sorry -- four data points?!?

It was very important to McDowell that his one-riffle gaps conformed to a normal distribution. You see, the central limit theorem tells us that the sum of a large set of independent random variables (typically 20 or more, but sometimes as few as 5 or 6) is approximately normally distributed. But McDowell has page after page of calculations with the normal distribution on two and three riffles. Normal distribution calculations are very simple. All you need are two parameters -- mean and variance -- and you know everything about that distribution.
But if one-riffle gaps aren't very near normal, there's no reason to assume two- or three-riffle gaps are normal either. Let's see what results from this "normal" distribution. On pg 48, McDowell gives the mean = a gap of 1, with standard deviation = 0.61 of a card. Using the usual formula, the mean should be represented at 0.5 to 1.5, or 1.22 standard deviations on either side of the arithmetic mean. From my trusty table of areas under the normal curve, I get 78% compared to McDowell's recorded 69%. That leaves 11% for everything to the left of the mean and 11% for everything to the right of the mean. McDowell has 18% of his data to the left of mean, and 13% to the right -- a lopsided curve.

I'm sorry. I don't believe McDowell has enough evidence to declare that a "typical" dealer's riffle has gaps which are normally distributed. Let's look at some facts.

Item #1: Here are my results for 30 rif-rif shuffles, corresponding to McDowell's Table 3-2.

Gap: Percentage
0: 16.40523%
1: 17.71242%
2: 16.53595%
3: 15.42484%
4: 9.542483%
5: 7.45098%
6: 4.575163%
7: 1.960784%
8: 1.764706%
9: 0.9803922%
10: 0.7843137%
11: 0.3267974%
12: 0.3267974%
13: 0.130719%
14: 0.130719%
15: 0.06535948%
16-19: 0%
Broken: 5.882353%

I defy anyone to fit this to a normal curve. Now people will point out that my riffles can't compare to the riffles of someone who deals for a living. However, for several years I handled and shuffled cards nearly every day, playing gin rummy with friends and relatives. I was considered overly precise -- overly fussy. Every time I walk into a casino I see dealers with riffles that make me look like Steve Forte. But if nothing else, my results show that different dealers have very different signatures. McDowell's mean for two rifs was gap = 3 at 36% compared to my 15%, and at gap = 0 he had 5.1% compared to my 16.4%.
Item #2: On pgs 90 and 91, McDowell discusses the Bayer-Diaconis formula, A/(A+B), for determining the probability the next card will come from a given thumb in a riffle. If a riffle follows this distribution, the gap probabilities after one riffle will be approximately 0.5, 0.25, 0.125, 0.0625 ... not even vaguely approaching normal. After two rifs, it's easy to show zero-sized gaps will come in at approximately 25% (compared to McDowell's 5.1%). Again, not a normal distribution.

Item #3: McDowell also mentions Curtis' interleave results, where interleave = 1 occurs 66% of the time. This means gap = 0 well over 30% of the time. Not normal. After two rifs, gap = 0 approximately 10% of the time.

Item #4: Finally, let's look at riffle results for someone who was a professional. For those who have Wong's Professional Blackjack, take a look at Table 89, "Post-Shuffle Gaps Between Initially Adjacent Cards." This is Wong's record of ten rif-rif-strip-rif shuffles on a single deck by a professional dealer who worked in Las Vegas for five years.

Point #1: The most common gap for this dealer was one (1) card, compared to three (3) cards for McDowell's rif-rif-rif. The extra strip made initially adjacent cards come closer together?? Sorry -- afraid not.

Point #2: There are 23 data points to the left of the mode in Wong's data, and 430 points to the right of the mode. Not even remotely a normal distribution. Even if we try to shoehorn Wong's data into McDowell's model, there are 120 points to the left of 3-card gaps, and 347 to the right.

I'm not even sure Wong's chart represents what we'll see on an average day in your corner casino. I think dealers who know they are being tested are likely to be more fastidious than dealers in a casino, even if they've been instructed to behave "normally."
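Item #2's zero-gap figures are easy to check by simulation. Here is a rough sketch (my own code, not from any of the sources) of a riffle under the Gilbert-Shannon-Reeds model, where each card drops from a packet with probability A/(A+B):

```python
import random

def gsr_riffle(deck, rng):
    """One riffle: binomial cut, then drop cards with probability A/(A+B)."""
    cut = sum(rng.random() < 0.5 for _ in deck)
    left, right = list(deck[:cut]), list(deck[cut:])
    shuffled = []
    while left or right:
        if rng.random() < len(left) / (len(left) + len(right)):
            shuffled.append(left.pop(0))
        else:
            shuffled.append(right.pop(0))
    return shuffled

def zero_gap_fraction(riffles, trials=2000, seed=1):
    """Fraction of initially adjacent pairs still adjacent, in order, afterward."""
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(trials):
        deck = list(range(52))
        for _ in range(riffles):
            deck = gsr_riffle(deck, rng)
        position = {card: i for i, card in enumerate(deck)}
        for card in range(51):
            total += 1
            hits += position[card + 1] - position[card] == 1
    return hits / total

print(zero_gap_fraction(1))   # roughly 0.5: zero-card gaps dominate one riffle
print(zero_gap_fraction(2))   # roughly 0.25 after two riffles
```

Under this model, zero-card gaps are by far the most common outcome -- nothing like a bell curve centered on a one-card gap.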
But if a dealer riffles with anything like the distributions in Items 1 through 4, it destroys McDowell's frequency calculations in Chapter 8, because if the mode clusters toward small gaps, such as zero- or one-card gaps, instead of three, the frequency of betting opportunities is cut 50% or more.

Maybe the normal curve only applies (very loosely) to "highly skilled" dealers, as Epstein described them. Maybe. But how many times have you seen experienced casino dealers riffle 30 cards in one hand and 40 cards in the other? You can't have that and get results like McDowell's. How many times have you seen a group of 4 or more cards slap down together at the end of a riffle? If the piles were even to start, that means there are 4 cards somewhere else that didn't fit neatly into the ABAB ... pattern, and I promise you, you're going to come in below his 69% per-riffle mean. McDowell must have practiced riffling long and hard. It takes tremendous effort to interleave cards one to one over twice as often as any other combination, as shown in his one-riffle statistics. Sorry -- I don't believe very many dealers train hard to make their riffles predictable for all the would-be sequencers in the world.

Do you begin to get the picture? On nearly every page where McDowell isn't giving a history lesson, or quoting an authority, or telling a cute story, I find deep and troubling evidence that he's in far over his head. Fundamental errors in math and methodology. Buy it for the history, or the list of authorities, or the beautiful cover, or for the cute stories. But don't buy it if you need someone to hold your hand through the math, or because you believe sequencing is an easy road to quick riches. I won't go through every page. Let's get some background, then skip to the good part.

A Very Exclusive Club

On pg 14 of Epstein's Theory of Gambling and Statistical Logic we find something learned by every schoolboy who's made it through week one of Probability 101.
Not to get too technical, I'll put Epstein's Axiom III into my own words: If you have a group of events which are mutually exclusive, which means that no two of them can occur together, the probability that any of these events will occur is the sum of the probabilities that each one of them will occur individually. People who understand this rule, along with the rule for multiplying independent events, are in a very exclusive club, since they are able to solve a whole class of interesting problems in probability.

Epstein goes on, with equation 2-2, to extend the rule for events which are not mutually exclusive. Again, in my own words: If A and B are any two events, the probability A and/or B will occur is P(A+B) = P(A) + P(B) - P(AB), where P(AB) is the probability that both A and B occur.

What we can learn from equation 2-2 is that people who ignore the mutually exclusive rule are doomed to fail. If events are not mutually exclusive, there is, by definition, a nonzero probability they can occur together: P(AB) <> 0. Therefore, probabilities obtained by simply adding or subtracting non-exclusive events are always wrong. You need extra terms to subtract out all the ways the non-exclusive events can happen simultaneously. In addition to the words "independent" and "mutually exclusive," first-year probability students usually hear the word "exhaustive," which means, simply, that you have to make sure you enumerate all the mutually exclusive events you are interested in before adding.

A Broken Calculation

On pg 59 of BJAP, McDowell presents Table 3-4, of four different two-card sequences, together with their probabilities of being broken (4/51 in each case). He also presents the total -- 16/51 -- for some reason. Note that if there had been 13 or more sequences (certainly conceivable) the total would be > 1. You can't have a probability greater than one. Of course, these are not mutually exclusive events, so we shouldn't be adding them in the first place.
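A toy check of Epstein's addition rule, using exact fractions and two independent events each with probability 4/51 (the numbers are mine, chosen to echo the broken-sequence figures):

```python
from fractions import Fraction

p_a = p_b = Fraction(4, 51)
p_both = p_a * p_b                 # independence: P(AB) = P(A) * P(B)
p_either = p_a + p_b - p_both      # Epstein's rule for non-exclusive events
naive = p_a + p_b                  # wrong unless A and B are mutually exclusive

print(float(naive - p_either))     # the naive sum overstates by exactly P(AB)

# And naive addition of 13 such events exceeds 1 -- not a probability at all:
print(13 * p_a)                    # 52/51 > 1
```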
McDowell repeats this mistake in Tables 3-7 and 3-8, and in the errata we learn he believes these totals are probabilities: "The total probability for "Stripping" (12/31) should be replaced with 12/51." Sorry -- McDowell is apparently not a member of the club.

On to pg 60. Quoth McDowell: "The average probability p1 of one sequence being broken is 4/50 = 0.08. The probability p2 of two sequences being broken is 0.08 X 0.08 = 0.01, (p3 and p4 ~= 0)." Well, this assumes the four sequences are independent. I won't object. Close enough. But then we have: "Finally, 1 - (p1 + p2 + p3 + p4) gives the probability p0 for zero sequences."

Sorry -- no. For this to be true, it would follow that p0 + p1 + p2 + p3 + p4 = 1. In other words, he's saying it's certain that either no sequences are broken, or one specific sequence is broken, or two specific sequences are broken, or three or four. Do you see the problem? He has a list of things that can happen, but it's not exclusive, and it's not exhaustive. There are many other terms needed in this calculation. When McDowell writes 4/51 = the probability of a specific sequence being broken, he does not exclude the possibility that another sequence may be broken at the same time. In fact, he has the nonzero probability that two sequences will break together right in the calculation! McDowell's is also not an exhaustive set of events. For example, if two sequences are broken, that doesn't prove that some other two sequences aren't also broken.
A mutually exclusive list looks something like this:

(The probability that no sequences are broken)
+ (The probability that any one of the four sequences is broken alone, with no others)
+ (The probability that any two of the four sequences are broken alone, with no others)
+ (The probability that any three of the four sequences are broken alone, with no other)
+ (The probability that all four of the sequences are broken)
= 1

Rather than write out all the possible combinations of four events to use the above formula, there is a simple way to compute this probability. Since the sequences are assumed to be independent, it follows that their negations are also independent. In other words, the probability of the first sequence not being broken is 1 - 4/51 = 47/51. Similarly, the probabilities of the second, third and fourth sequences not being broken are each 47/51. So the probability that no sequence is broken = p0 = (47/51) x (47/51) x (47/51) x (47/51) = 0.72129. Ambitious students are encouraged to work out the probability for the long list I provided above to see that the first term works out to this exact same result.

This is quite different from the 0.91 given in the book. It's such a common, elementary mistake, I believe I noticed it within ten seconds after I flipped to page 60. Within a minute or two I cranked out the correct answer on my trusty TI-89 calculator. And I'm no math professor. My point being, sorry, but to anyone who's taken elementary probability, this is not rocket science.

But I'm afraid it gets worse. On the next page, pg 70, McDowell does a little calculation using his formula 3-1, B = (ih - ic) + bh, for a more complex rif-rif shuffle he analyzed on computer. He arrives at 0.096 for the probability of a broken sequence, which, he says, "is in close agreement with the 0.9 estimate calculated using pencil and paper above."
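For anyone who would rather check this by machine than on a TI-89, here is a quick sketch that works out the "ambitious students" exercise: enumerate all 16 mutually exclusive outcomes for the four sequences, then compare with the complement shortcut.

```python
from fractions import Fraction
from itertools import product

p_break = Fraction(4, 51)                # chance a given sequence is broken

# Enumerate all 2^4 mutually exclusive outcomes (each sequence broken or not)
# and bucket them by how many sequences broke.
prob_exactly = {k: Fraction(0) for k in range(5)}
for outcome in product([True, False], repeat=4):
    p = Fraction(1)
    for broken in outcome:
        p *= p_break if broken else 1 - p_break
    prob_exactly[sum(outcome)] += p

assert sum(prob_exactly.values()) == 1          # exclusive AND exhaustive
assert prob_exactly[0] == (1 - p_break) ** 4    # matches the shortcut

print(float(prob_exactly[0]))      # ~0.72129, not the book's 0.91
print(float(1 - prob_exactly[0]))  # ~0.27871 chance at least one sequence breaks
```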
I believe he means the 0.09 calculated in Table 3-5 from 1 - 0.91 -- ostensibly the probability that one or more sequences will be broken in a rif-rif shuffle. This is the only interpretation that makes sense, since we know that 0.096 is not in close agreement with 0.9. (Where probabilities are concerned, these are about as disparate as it gets.) But we now know the 0.91 given in the book for p0 was incorrect, so we know the 0.09 for sequences > 0 was also incorrect. The correct probability for sequences > 0 is 1 - 0.72129 = 0.27871, which can hardly be taken as corroboration for the 0.096 produced by McDowell's formula!

How can the formula be so wrong? Guess what -- same problem. Once again, he's adding and subtracting probabilities without the critical assessment that they represent mutually exclusive events. You simply cannot take a bunch of probabilities and say "let's add the ones we like and subtract the ones we don't like," but it honestly seems this is how McDowell thinks.

Formula 3-1 also provides the answer to why McDowell did the little "massage" I mentioned on pg 47. He subtracts out the broken sequences earlier, so he can add them back in using his pet formula 3-1 later on. So (sorry!) he does an invalid transformation on his raw data, which messes up his standard deviation calculations on a normal curve which isn't normal, so that he can retransform later using his invalid formula. At a certain point you just have to laugh. It's really rather charming, this valiant attempt to slay the great sequencing Goliath. But at least that David had a slingshot. ;) ;)

Let's mush on to...

Page 82

I just wanted to pause here and congratulate David. This may be the only page in the book filled with calculations that are all correct. I don't know where he got the formula for the probability of two consecutive aces, but I think it's 100% accurate, as are all the numbers and the chart. Nice job.

The Infamous EV Calculation

This is where all the fun started.
But before we get to the EV calculation proper, let's look briefly at the preceding page, 113, where we find the author's formula 7-2 for P(h) -- the probability the ace will "hit the money." His formula, a - (b + f), again violates the rule about adding probabilities that aren't mutually exclusive. It seems reasonable to assume these probabilities are independent, so a better approach might be a(1-b)(1-f), giving 0.29 compared to McDowell's 0.13, for a = 0.38, b = 0.15 and f = 0.10. I haven't tested it; I just wanted to throw it out there. But before anyone decides to plug 0.29 into the equation, they might want to study Radar and Snyder's work on the proper estimation of false keys.

Now the formula:

E(X) = E1h + E2d + E3m

where:

E1 = player's expectation if the Ace hits the money
E2 = player's expectation if the dealer gets the Ace by accident
E3 = player's expectation if the Ace misses the money
h = probability that the Ace will hit the money
d = probability that the dealer will get the Ace by accident
m = probability that the Ace will miss the money

Note that he's dropped the P() from P(h) for this formula. Now, there is an interpretation under which this formula is correct! If "the Ace" in question refers to the one and only ace being sequenced, then I have no problem with the formula as written. Note that E3 doesn't say expectation off the top, precisely (though he uses -0.5% in the example, which is the prototypical off-the-top edge).

Unfortunately, in the preceding paragraph, McDowell writes: "In this case that means the dealer gets six additional Aces." Aces, plural. If there is more than one ace floating around, then there's a perfectly good chance one of them will hit the player, and another one will hit the dealer, and maybe a third will "miss the money" altogether. (Note: I am using McDowell's terminology, though I may not always like it.) Thus, McDowell may be adding non-exclusive events once again.
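For the record, here is how the two candidate formulas for P(h) compare on McDowell's own inputs (the independence assumption behind the alternative is mine, as noted above):

```python
a, b, f = 0.38, 0.15, 0.10   # McDowell's example inputs

mcdowell = a - (b + f)               # subtracts non-exclusive probabilities
alternative = a * (1 - b) * (1 - f)  # treats a, b, f as independent (my assumption)

print(round(mcdowell, 2))      # 0.13
print(round(alternative, 2))   # 0.29
```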
Now, although the formula is not adding probabilities per se, the prohibition against adding non-exclusive events applies to prorated EVs as well as probabilities. If you don't watch out for this, you are practically guaranteed to get the wrong answer. In addition to this ambiguity, McDowell clearly messed up in his invocation of "Snyder's rule of thumb," which, according to his own definition, splits aces equally between player and dealer. Yet, in his example, he sets h = 0.13 and d = 0.06. This is such an obvious goof he had to write up a correction in his errata. Also, McDowell gives us no hint on how to calculate E3, the expectation if the ace "misses the money," sharply curtailing the usefulness of the formula, save for the (not too useful) case of an infinite deck, where E3 would just be the edge off the top.

Desperately, we seek out the errata, in hopes it will clarify the ambiguities and save this centrally important formula, which, we were promised back in Chapter 1, is "as good as the accuracy of the figures plugged into it." Sadly, I quote the errata:

"Page 114, Line 15: At this point we invoke Snyder's rule of thumb -- the player and the dealer share the Aces 50/50. In this case that means the player and the dealer get three additional Aces each. The probability of the Ace "hitting the money" P(h) and the probability of the dealer getting the Ace by accident P(d) become 0.10 while P(m) is reduced from 0.87 to 0.80."

Thus, McDowell is using h and d in the formula as aggregate probabilities that any ace will hit one of these two spots. In confirmation of this interpretation, McDowell uses d = 0.07 in the errata for the case where "the dealer can be prevented from getting the ace": 1/13 ~= 0.077. If we were talking exclusively about the tracked ace, d would be zero for this case. But, sorry, since we are talking about more than one ace, we no longer have mutually exclusive events.
My friend Zenfighter repeats this same mistake in his "final take" calculation posted on www.bjrnet.com. He derives different values for E1, E2 and E3, but then he simply plugs those values into the formula as restated in McDowell's errata. To present the formula in the best possible light, I want to use the best possible version. Here is Zenfighter's improved version of the McDowell formula:

E(X) = 0.10 x 50.79% + 0.10 x (-34.17%) + (-1.5246%) = 0.1374%

where -1.5246% represents "Exact cost for the 80% of the hands where neither the player nor the dealer gets a first card ace." Let's rewrite:

E(X) = 0.10 x 50.79% + 0.10 x (-34.17%) + 0.80 x (-1.90575%) = 0.1374%

so we can clearly see: E1 = 50.79%, E2 = -34.17%, E3 = -1.90575%, h = 0.10, d = 0.10, and m = 0.80.

Here are some paradoxes which result if we accept this formula as gospel:

Paradox 1) Assume all the aces are distributed as in a regular shoe (i.e., no sequencing). Then we have h = 1/13, d = 1/13, and m = 11/13.

E(X) = (1/13) x 50.79% + (1/13) x (-34.17%) + (11/13) x (-1.90575%) = -0.3341%

-0.3341% is different from the -0.4069% computed by CA, and quoted by Zenfighter on 3 Feb 2005 at www.advantageplayer.com.

Paradox 2) 0.1374% is substantially different from the 0.03% I calculate below using nothing but elementary probability. If Zenfighter believes his number is correct, he needs to show the error in my calculation.

Paradox 3) Suppose we have the best of all possible sequencing opportunities. We're tracking a single deck where the dealer does only one riffle. With single deck we can forget about false keys. Let's further assume the dealer always breaks right at 26 cards, and we saw the key-ace combination go into the discards at 10 and 9. Now we can forget about a broken sequence. Finally, we have this dealer's signature down pat, and he never interlaces more than one card from his right thumb. The bottom of the pack, with our key-ace combo, was in the dealer's left hand.
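Paradox 1 takes only a few lines to verify. A sketch, plugging the ordinary off-the-top probabilities (h = d = 1/13) into the McDowell/Zenfighter formula with Zenfighter's own E1, E2, E3:

```python
# Zenfighter's inputs, in percent.
E1, E2, E3 = 50.79, -34.17, -1.90575
h, d = 1/13, 1/13
m = 1 - h - d              # 11/13: the ace lands anywhere else

ev = h*E1 + d*E2 + m*E3    # the formula with no-sequencing inputs
print(round(ev, 4))        # -0.3341, not the -0.4069 computed by CA
```

If the formula were sound, feeding it the unsequenced distribution would have to reproduce the off-the-top edge; it doesn't.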
Now, we know without doubt there will be exactly zero or one cards between the key and the ace. We're heads up, one hand, against the dealer. We get lucky. The last card of the first round is the key!

Under the McDowell/Snyder rule of thumb, the probability the tracked ace will be the player's first card is 0.50, and the probability it will be the upcard is also 0.50. There is also a positive 1.5/51 chance one of the other three aces will hit either spot. h = 0.50 + 1.5/51 = 0.5294, d = 0.50 + 1.5/51 = 0.5294. Calculating m with the McDowell/Zenfighter method, we have m = (1 - 0.5294 - 0.5294) = -0.0588. Ladies and gentlemen, we have a negative probability!

Should we throw out three and a half centuries of probability theory and accept a negative probability? Should we just plug it into the formula? Or should we begin to suspect there's something seriously wrong with this formula? At this point, I hope you can guess my vote.

A brief aside, here, to a brilliant programmer who shall remain nameless. The Correct Calculation below (which first appeared on advantageplayer.com) very definitely pertains to the book, as well as the errata. Table 32 doesn't enter into the discussion here, because we (Arnold and I) are accepting the numbers from Table 32 as inputs. The formula gives incorrect answers no matter what numbers are input, so Table 32 is irrelevant. And the errata doesn't change the formula, but instead further undermines its validity by pinning down the meaning of some of the inputs.

The Correct EV Calculation

Here is the right way to find this EV, given all the inputs required by McDowell's formula. We will follow the one tracked ace over the various positions, since one ace can't be in two places at the same time. Thus we can list EVs and probabilities for mutually exclusive events.
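The negative probability in Paradox 3 falls straight out of the arithmetic. A sketch with the single-deck thought-experiment numbers above:

```python
# Aggregate probabilities under the McDowell/Snyder rule of thumb:
# tracked ace to each spot with prob 0.50, plus a 1.5/51 chance
# one of the other three aces lands there instead.
h = 0.50 + 1.5/51   # player's first card
d = 0.50 + 1.5/51   # dealer's upcard
m = 1 - h - d       # residual "miss the money" term, McDowell/Zenfighter style

print(round(h, 4))  # 0.5294
print(round(m, 4))  # -0.0588 -- a negative probability
```

The aggregate h and d double-count overlapping events, so the residual m is forced below zero.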
Therefore we will redefine h, d, and m as the probabilities for the given ace to land as the player's first card, the dealer's upcard, or somewhere else in the shoe, respectively, and we'll redefine E3 as the expectation when the tracked ace goes somewhere other than the first two cards. In addition, I'll show a simple way to calculate E3 given E1, E2 and the off-the-top expectation. We'll assume 6dks, das, spl3, nrsa as Zenfighter proposed on advantageplayer and bjrnet, so we can borrow some of the numbers he has kindly provided.

The premise is: we're tracking an ace, and based on this information, we know there is a 0.10 probability any ace will be dealt to the player's first spot, and a 0.10 probability any ace will be dealt to the dealer's upcard. If the tracked card does not go to one of those two spots, it's assumed to be in some other completely random position. It may not be realistic, but that is the premise. All other cards are assumed to be randomly dispersed as well, in the remaining positions.

I. First off, let's look at the player's first card. We know there is a 0.10 probability this card is an ace. Therefore there is a 0.90 probability it is a nonace. There are 288 nonaces in the shoe. Therefore, the probability any one of those nonaces will hit the player's first spot is: 0.90/288 = 0.003125

II. Since all untracked cards are evenly distributed, it follows that all the aces but one also have a 0.003125 probability of hitting the player's first card. There are 23 such aces. We therefore know: 23 x 0.003125 + h = 0.10, where h = the probability that the tracked ace will hit the player's first card. Solving, we find: h = 0.028125

III. By premise, the distribution for the dealer's upcard is the same: d = 0.028125

IV. We now have the probability that the tracked ace will hit either the player's first card or the upcard, with associated EVs (E1 = 50.79% and E2 = -34.17%) provided by Zenfighter.
Since the tracked ace has to land somewhere in the shoe, we know the probability the tracked ace will hit any spot other than those two spots is: m = 1 - 2 x 0.028125 = 0.94375

V. If we now had the EV associated with a hand where all we know is that the tracked ace did not hit the first two spots, but could have hit anywhere else at random, then we'd have three mutually exclusive EVs covering all possibilities (the ace can only go to one spot at a time, but it has to go somewhere) which we could add to find our total EV for this problem. But we don't have that EV. Or do we ...

... Set up a hypo. Suppose we've tracked this ace, and come to the conclusion there is a 1/312 chance of it hitting the player's first card, a 1/312 chance of it hitting the upcard, and a 310/312 chance of it hitting anywhere else. All other cards are randomly dispersed just as we said earlier. Hey! This is normal, off-the-top distribution! Using the overall off-the-top EV (-0.406923%) provided by Zenfighter, we can write:

1/312 x 50.79% + 1/312 x (-34.17%) + 310/312 x E3 = -0.406923%

Or solving: E3 = -0.463161%

VI. Now we can substitute this EV into our original set of facts:

0.028125 x 50.79% + 0.028125 x (-34.17%) + 0.94375 x (-0.463161%) = +0.0303%

Total EV on the one tracked hand = +0.03%

It's a very straightforward problem in probability. But, just to be sure, I had a PhD who teaches a course in probability review it. This person occasionally plays blackjack, but has not been involved in any of the controversy surrounding this book. (S)he states that it is accurate, given the premises (the same premises Zenfighter used in his "final take" calculation). At every stage we're dealing with mutually exclusive events. The final result is completely reliable, assuming the premises and the EVs are accurate. (Actually, one more digit of accuracy in the EVs would be nice, to assure the .03% isn't actually .04% or .02% due to cumulative roundoff error.)
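Steps I through VI can be collapsed into a few lines of code. A sketch, using Zenfighter's inputs (E1, E2 and the off-the-top EV); everything else is derived exactly as in the text:

```python
# Zenfighter's inputs, in percent.
E1, E2 = 50.79, -34.17
off_top = -0.406923
cards = 312                   # 6-deck shoe

# Steps I-II: 23 untracked aces each hit the first spot with
# probability 0.90/288; the tracked ace supplies the rest of the
# aggregate 0.10. Step III: same for the upcard.
h = 0.10 - 23 * (0.90 / 288)  # 0.028125
d = h
m = 1 - h - d                 # 0.94375

# Step V: off the top the same ace hits each spot with prob 1/312,
# so solve (1/312)E1 + (1/312)E2 + (310/312)E3 = off_top for E3.
E3 = (off_top - (E1 + E2) / cards) * cards / (cards - 2)

# Step VI: three mutually exclusive cases sum to the total EV.
ev = h*E1 + d*E2 + m*E3
print(round(E3, 6))           # -0.463161
print(round(ev, 4))           # 0.0303
```

Because the three cases are mutually exclusive and exhaustive for the tracked ace, the weighted sum is a legitimate total expectation, unlike the aggregate-probability version.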
Note this is just your EV when you're "lucky" enough to track an ace with this (weak) degree of accuracy. It goes without saying that no matter the bet spread, your waiting bets are going to wipe out any potential profit.

If we start with the premises laid out in the errata: a 0.10 probability any ace will be dealt to the player's first spot, and a 0.10 probability any ace will be dealt to the dealer's upcard, with E1 = 51%, E2 = -34%, and assuming the ambiguous 0.5% refers to an expectation off the top of -0.5%, the calculation becomes:

1/312 x (51% - 34%) + 310/312 x E3 = -0.5%

E3 = -0.5580%

0.028125 x 51% + 0.028125 x (-34%) + 0.94375 x (-0.558%) = -0.0485%, compared to McDowell's +0.0130%

Simulation Verification

I ran four simulations by rewriting tracking simulation software written by me (remember my misspent youth?) and available only to vetted APs: 6D S17 DAS SP3 NRS NS, fixed number of 33 rounds per shoe:

Round 500000000 was completed at: 03-02-2005 11:38:38
Dealer garnered 4025786
Players accumulated:
Player 1 : -4025786 / 1000000000 = -.4025786 %
av. bet = 2
A maximum of 221 cards were dealt from the shoe.

Same as above, but with an ace removed from the shoe and dealt to the player's first card every time:

Round 500000000 was completed at: 03-04-2005 13:36:04
Dealer garnered -253927397
Players accumulated:
Player 1 : 253927397 / 500000000 = 50.7854794 %
av. bet = 1
A maximum of 189 cards were dealt from the shoe.

Simulated E1 = 50.7854794%

Same as above, but with an ace removed from the shoe and dealt to the dealer's upcard every time:

Round 500000000 was completed at: 03-05-2005 14:21:57
Dealer garnered 170791276.5
Players accumulated:
Player 1 : -170791276.5 / 500000000 = -34.1582553 %
av. bet = 1
A maximum of 190 cards were dealt from the shoe.
Simulated E2 = -34.1582553%

1/312 x 50.7854794% + 1/312 x (-34.1582553%) + 310/312 x E3 = -.4025786%

Or solving: E3 = -.458812%

EV on the tracked hand = 0.028125 x 50.7854794% + 0.028125 x (-34.1582553%) + 0.94375 x (-.458812%) = +0.03464%

If Zenfighter's equation is correct, the EV should either be the 0.1374% quoted above, or very slightly higher, since my simulated edge off the top was slightly higher (though well within one standard deviation) and E1 + E2, the positive contribution of the ace, is 0.007% higher. So we'll test it ...

Same as above, but with an ace removed from the shoe and dealt as the first card whenever (total aces to first card)/rounds < 0.1, or if that doesn't occur, to the second card whenever (total aces to upcard)/rounds < 0.1, or else to one of the other 310 positions in the shoe (dealt or undealt) chosen at random.

Round 500000000 was completed at: 03-08-2005 02:53:04
Dealer garnered -150428.5
Players accumulated:
Player 1 : 150428.5 / 500000000 = .0300857 %
av. bet = 1
A maximum of 227 cards were dealt from the shoe.
Aces to Player's first spot: 50000000
Aces to dealer's upcard: 50000004

Now, if we put an ace on the first spot 1/40 of the time, and deal from a full shoe the other 39/40 of the time, we'd get an ace on the first spot 10% of the time, as in the sim. This suggests adjusting the variance per hand to (1/40) x 1.495439 + (39/40) x 1.34 = 1.35. [The 1.495439 is from the Grosjean/Mankodi article.] But this doesn't take into account the effect of the extra aces to the upcard. This will have little effect, since splits and doubles are unusual against an ace, but there are also fewer pushes with an Ace up. But heck, since I'm in a generous mood, we'll nudge the variance all the way up to 1.4.

Standard deviation for the 500M hands is then SqRt(1.4/500,000,000) = 0.0053%. So my predicted 0.03464% is 0.86 standard deviations from the sim, while the best McDowell/Zen prediction to date (0.1374%) is over 20 standard deviations from the sim.
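The post-simulation prediction and the standard-deviation comparison can be cross-checked in a few lines. A sketch, taking E1, E2 and the off-the-top edge exactly as logged by the sims (E1 = 50.7854794%, per the logged division 253927397 / 500000000):

```python
import math

# Simulated inputs, in percent.
E1, E2 = 50.7854794, -34.1582553
off_top = -0.4025786
h = d = 0.028125
m = 1 - h - d

# Steps V-VI again, with simulated figures.
E3 = (off_top - (E1 + E2) / 312) * 312 / 310
ev = h*E1 + d*E2 + m*E3                    # predicted EV, percent

# SD of the 500M-hand mean, with per-hand variance nudged to 1.4.
sd = math.sqrt(1.4 / 500_000_000) * 100    # in percent
sim_ev = 0.0300857                         # simulated EV, percent

print(round(ev, 5))                        # ~0.03464
print(round((ev - sim_ev) / sd, 2))        # ~0.86 SDs from the sim
print((0.1374 - sim_ev) / sd > 20)         # Zen's 0.1374%: over 20 SDs out
```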
More Than One Ace

If you have reliable figures for EV when the dealer's holecard is an ace, the above method of calculation is easily extended to any distribution of one tracked ace into the first four cards in a heads-up game. Also, small variations in the distribution of the tracked ace to possible hit cards should have little effect on the EV. However, it must be noted that you normally need to track more than one sequence per shoe to have a viable advantage. Unfortunately, when you have more than one sequence, the EV on each key is reduced. Each time you see a given key, the other tracked aces, the ones not associated with that key, are basically "unsequenced." You may have noted my EVs all generally come in lower than McDowell's. Tracking multiple sequences brings the EV down lower still. It may be possible to work up a formula for the multi-sequence approach, but at a certain point writing a simulator begins to look easy in comparison. ;Q

A Word About Chapter 8

Chapter 8 of BJAP is devoted to determining how much ace sequencers should bet. The first section is entitled "Expected Return," wherein he calculates (somehow; he doesn't make all his variables explicit) that a sequencer can lay down 4 bets per hour with the positive EV from the previous chapter (obviated by the errata) of +4%. After much arithmetic sorcery (and a few sprinkles of magic dust) he winds up with the figure 2051 hands for a one-third Kelly bettor to double his bankroll.

Let's grant the four x 4% bets per hour. Although the rest of the book doesn't tell you how to get such an edge, or even how to calculate it if you've got it, four x 4% is not an unachievable goal. One small fact he neglects to mention (I'm so sorry): the whole scenario he lays out to get those four bets per hour requires controlling four spots at all times, at a table with 7 spots, and assumes 60 rounds per hour. Thus, the number of negative EV waiting bets you need to make per hour = 236.
None of the growth rate conjury takes this into account! And the 2051 hands he calculates actually represent 2051 positive EV hands plus 121,009 negative EV waiting bets (no, I'm not kidding), after which your bankroll will very definitely not be doubled, nor anywhere close to doubled, since McDowell's growth calculation doesn't subtract the drain from the negative EV waiting bets. Also remember, the negative EV waiting bets will be more negative than the regular off-the-top expectation. Since the waiting bets have no associated keys, they are, in effect, "unsequenced," and have a lowered probability of catching an ace on either the first or second card.

Now it has been pointed out that McDowell's techniques may work better in European casinos, where backbetting is common. This skirts the problem of waiting bet drain, but it leaves several other problems.

1) Even if you can get four +4% bets down per hour, your time to doubling will be approximately 2051/4 = 513 hours, or half a year if you play 20 hours per week. (Note this is a Kelly growth calculation, so it involves continuous bet resizing, which is something most APs don't like to do.)

2) If you have competition for backbet spots, this cuts your EV even more.

3) You are relying on the basic strategy of European strangers; cut the 4% down to about 3%.

4) All this assumes European dealers have riffles as neat and precise as McDowell's charts. Hey, maybe they do; I've never played in Europe.

Also, McDowell's risk calculations use the oft-quoted standard deviation of 1.1 units for a blackjack hand. Since substantial bets will be placed on the assumption that an ace will appear, this needs to be prorated and adjusted along the lines mentioned by Grosjean and Mankodi in their article "42.08%: More on the Ace in Hand," Blackjack Forum Winter 2003/04, Vol XXIII #4. There's no mention in BJAP of adjusting strategy in order to reduce variance on the hand.
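The waiting-bet arithmetic above is easy to reproduce. A sketch of where the 121,009 figure comes from, under the scenario granted in the text (4 tracked bets per hour out of 240 hands at 4 spots x 60 rounds):

```python
# 2051 positive-EV tracked hands at 4 per hour.
hours = 2051 / 4           # hours needed to log the 2051 hands
waiting_bets = hours * 236 # 236 negative-EV waiting bets every hour

print(hours)               # 512.75
print(int(waiting_bets))   # 121009
```

So the "2051 hands to double" figure silently carries more than 121,000 negative-EV bets along with it.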
In sum, I'm afraid I must counsel you that Chapter 8 offers very little clue on how much to bet, even if you are a math whiz with the ability to develop valid EV formulae to replace the invalid formula in Chapter 7. Also note that tracking four sequences at a time involves juggling up to 8 keys in your head at a time. You need to be learning new keys at the same time you're remembering, and using, keys from the previous shoe. You also need the ability to forget old keys very rapidly, to make room in your head for fresh ones.

Realistic Expectations

I mentioned an EV of 4 x 4% with 236 waiting bets was an achievable goal, so I owe you this calculation. Assuming an off-the-top edge of -0.5%, we can find the EV of waiting bets (call it E4) as follows:

4 x 4% + 236 x E4 = 240 x (-0.5%), so E4 = -0.57627%.

With a 20 to 1 bet spread, your expectation is: 20 x 4 x 4% - 1 x 236 x 0.57627% = 1.84 units per hour. Total action = 20 x 4 + 1 x 236 = 316 units per hour. EV = 1.84/316 = +0.58%, with the huge risk that always goes with a huge bet spread.

I'm not going to tell you what to look for (Arnold isn't paying me enough for that article), but let me say I think it's possible, with a lot of hunting and studying of dealer signatures, to do better than +0.58% with ace sequencing. In fact, I think +1% may be achievable in a few select games around the US. But this involves constantly juggling a dozen or more keys in your head at all times, much more difficult than counting, which only requires you to track one number. I can't categorize this as "huge advantages over the casino" nor "easy to do at the table," and obviously it isn't close to approaching the 3-4% (conservatively!) promised by McDowell on pg 18.

Why am I so sorry? I'm sorry that there is no free lunch. I'm sorry it's so difficult to carve out an edge with the voracious double-bust sinkhole sucking on us hand after hand. I'm sorry so many gamblers tell tall tales.
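The realistic-expectations numbers above follow from two lines of algebra. A sketch, assuming the -0.5% off-the-top edge and the 20-to-1 spread used in the text:

```python
# Solve 4 x 4% + 236 x E4 = 240 x (-0.5%) for the waiting-bet EV E4.
E4 = (240 * -0.5 - 4 * 4.0) / 236          # percent per waiting bet

# Hourly result with 20-unit bets on the 4 tracked hands,
# 1-unit bets on the 236 waiting hands.
hourly = 20 * 4 * 4.0/100 + 236 * E4/100   # units won per hour
action = 20 * 4 + 236                      # total units bet per hour

print(round(E4, 5))                  # -0.57627
print(round(hourly, 2))              # 1.84
print(round(hourly/action*100, 2))   # 0.58 (percent edge on total action)
```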
I'm sorry the numbers don't sit up, roll over, and bark at our command. I, personally, have no stake in the worth (or lack thereof) of this book. I have no enmity for any of the parties involved on either side of the issue. I wish David's work really was "the final chapter in advantageous blackjack play." I'm sorry it isn't. I'm sorry I think very few dealers have riffles as predictable as McDowell's. I'm sorry most dealers have shuffles much more like Wong's dealer, or like mine, or Curtis'. And I'm very sorry several august authorities continue to defend this dangerous, unsound work with vengeful attacks on people they once proclaimed "brilliant."

Summary

To the best of my knowledge, never before in the history of blackjack literature has such a thoroughly flawed work, flawed in both math and methodology, received so many accolades from highly respected authorities. It's not just that the book isn't perfect. It's that there's almost nothing of value (from the point of view of beating the casinos) in the book. An errata for all the errors in the book would be nearly as long as the book itself. ♠
© 2004-2005 Blackjack Forum Online, All Rights Reserved