Reverse Chip Value Theory vs. Chip Utility Value Theory for No-Limit Hold'em Poker Tournaments

 
 
This article deals with poker tournament chip value theory and its implications for poker tournament strategy.
 

 

Reverse Chip Value Theory for Poker Tournaments:
Good Math, Bad Logic
(A response to David Sklansky)
By Arnold Snyder

(From Blackjack Forum, Vol. XXV #4, Fall 2006)
© Arnold Snyder 2006

In this article, I’d like to address in an easily understandable and comprehensive way the widely accepted, but fallacious, “reverse chip value theory”—that is, the theory that asserts that, in poker tournaments, the bigger your chip stack the less each individual chip is worth, and its corollary theory, that a chip you win is worth less than a chip you lose. These concepts were first put forward in the works of David Sklansky and Mason Malmuth, though numerous other poker authors have since glommed onto these “truths,” as these ideas have gone unchallenged for decades.

These theories arise from attempts by competent mathematicians (who are terrible logicians) to analyze tournaments using bad models that ignore crucial factors in real-world tournaments. The bad models and false logic have led to the major problem we find in the recommended strategies of the authors who believe in these theories: overly tight and conservative play.

In order to cover all the problems with the “reverse chip value theory,” I’m going to deal with a wide range of poker tournament topics in this article, including early aggression versus “survival” strategies; aggression with small advantages, bluffing, and shot-taking versus waiting for premium hands; Sklansky’s “gap” concept; rebuys and add-ons; and the meaning of chip “utility,” with a discussion of how skill affects chip utility and value. All of these topics are related in that they all deal with the best way to use chips in a tournament, based on how players should value their chips.

Mathematicians want to put numbers on everything. As a professional gambler, I understand the desire to figure out things like odds, advantages, and probabilities. But just because we may want to come up with these numbers doesn’t mean we should find some way to force them when there is no practical and accurate way to generate them. And to devise irrelevant numbers, and then use them to develop gambling strategies, is not only an academic mistake for a game analyst, but a costly mistake for every player who follows that analyst’s advice.

The Two Conflicting Chip Value Theories Defined

To understand the arguments in this article, you must first understand the two basic theories of chip value that are in conflict. The first is the Sklansky/Malmuth theory that, again, states that the bigger your chip stack the less each individual chip is worth. Throughout this article, I refer to this theory as the “reverse chip value theory.”

The second is my own chip utility value theory, which is the basis for the tournament strategies provided in my book The Poker Tournament Formula II: Advanced Strategies. I first explained my chip utility value theory in my article “The Implied Discount”:

It is incorrect to convert chips to dollar values with no consideration for how individual players might use those chips…

When the tournament director says, “Shuffle up and deal!” a battle has begun, and chips are nothing more nor less than ammunition… If a chip is a bullet, and I have 500 bullets, and you have 4500 bullets, you can utilize your ammo in many ways that I cannot.

You can fire test shots to see if you can pick up a small pile of ammo that none of your enemies are all that interested in defending. You can engage in small speculative battles to try and pick up more ammo, and you can back out of these little skirmishes if necessary without much damage to your stockpile.

Most importantly, because all of your enemies can see your huge stockpile, you can get them to surrender ammo to you without fighting, even in battles they would have won, were it not for their fear of losing everything… So, intrinsically, each of your bullets has a greater value than each of mine purely as a function of its greater utility… The more chips you have, the more each chip is worth.

I expanded on this idea in my follow-up article, “Chip Value in Poker Tournaments,” and provided two formulas that should guide tournament players in their decisions:

  1. The more skill you have, the more your chips are worth.
  2. The more chips you have, the more your skill is worth.

Why is Chip Value Theory Important?

Chip value theory is important to players because every tournament strategy and decision is ultimately based on chip value theory.

As I stated in the articles mentioned above, belief in the concept of “reverse chip value” will inevitably lead to overly tight and conservative play. If the chips I am trying to win are worth less than the chips I must risk to win them, then I must wait for a bigger advantage before putting my chips at risk. For example, if I must bet $2 to win chips valued at $1, then I have to win the bet twice as often as I lose it just to break even, and I must wait for better hands to put my chips in action.

By contrast, if my “chip utility” theory is correct—that the bigger your chip stack the more each chip is worth—looser and more aggressive play will be correct. If I am risking chips that have less value than the chips I can win, I can risk my chips with a much smaller advantage over my opponent. For example, if I can win chips valued at $2 for a $1 bet, I would have to lose the bet twice as often as I won it before it stopped being a bet worth making. Getting my chips in action much more frequently, with smaller advantages, would be correct.
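
To make the arithmetic of the two theories concrete, here is a minimal sketch of the break-even win frequencies implied by the illustrative $1 and $2 valuations above (the dollar figures are only the ones used in this example):

```python
def breakeven_win_rate(risk, reward):
    """Win probability at which the expected value of the bet is exactly zero."""
    return risk / (risk + reward)

# Reverse chip value theory: risk chips "worth" $2 to win chips "worth" $1.
print(breakeven_win_rate(risk=2, reward=1))  # 0.667 -- must win two bets for every one lost

# Chip utility theory: risk chips "worth" $1 to win chips "worth" $2.
print(breakeven_win_rate(risk=1, reward=2))  # 0.333 -- can lose two bets for every one won
```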

Before we can evaluate the merit of these conflicting theories, we need to define their real-world implications. Let me start by defining exactly what I mean by “chip utility”.

What is “Chip Utility” and Why Is It So Valuable?

By chip utility, I am simply referring to the ways chips can be used in poker tournaments. Some common uses of chips:

1. Chips can be used to drive opponents out of pots. Say that I have an overpair to a board that shows potentially dangerous straight or flush draws. I can drive the drawing players out of the pot by making a bet that makes it too expensive for them to try to draw out on me.

This bet would also eliminate players who simply held an overcard to the board, say an ace, which would beat my hand if an ace appeared on the turn or river. The more chips I have compared to my opponents, the easier it is for me to make it too expensive for them to stay in a pot with me. With very few chips, I may not be able to make the odds wrong for opponents to call.

2. Chips can be used to steal blinds and antes. With a big chip stack, we can do this in any unraised pot from any late position seat. Likewise, we can steal pots from preflop limpers.

The bigger our chip stack, the more ruthless and aggressive we can be on these preflop steals. With a small stack, we may be able to go after the blinds from the button, but we must play more cautiously because any play back at us may cost us more chips than we can afford to lose.

3. Chips can be used to see a greater number of flops. The more chips we have, the greater the number of speculative hands—small pairs, medium connected cards, suited cards, etc.—we can play. These hands often must be thrown away after the flop, but if the flop hits a hand like this, the implied odds can be tremendous. With a big stack, we can even make raises or call raises with these speculative hands.

With a short stack, most of these types of hands must be abandoned. With a desperate stack, your only move with a speculative hand, other than folding, would be to push all-in pre-flop.

4. Chips can be used to bluff. It’s easy to steal pots from players who cannot afford to call without jeopardizing either the viability of their chip stack or their tournament survival. And really big chip stacks can be ruthlessly aggressive in stealing from opponents. Short stacks get very few legitimate bluffing opportunities, and all of them are dangerous.

5. Chips can be used to call down suspected bluffers. In fact, a big chip stack will often deter players from bluffing in the first place, because they can see that the big stack can well afford to call. Big chip stacks get much more of the true value of their cards than small stacks. Short stacks will often be putting their tournament life on the line when they call suspected bluffs.

6. Chips can be used to semi-bluff when a player has a strong draw. The semi-bluff not only builds the pot for when the player does make his hand, it also often gets him a free card on the river. With a big chip stack backing up a semi-bluff on the flop, the check by an opponent on the turn is almost guaranteed. In fact, a big stack can often steal the pot with a bet on the turn, even without making his hand. Semi-bluffing is very dangerous in a tournament if you cannot afford to lose the chips you bluff with.

7. Chips can be used to bet for value. When a player believes he has the best hand, a big stack permits him to bet an amount his opponent will call because of the size of the pot. Value bets add to a player’s chip stack with little danger, though they are much more dangerous for short stacks because of the possibility of being put to the test with a reraise or an all-in bet from the opponent.

8. Chips can be used to bet for information. If a preflop raiser bets after the flop, and you believe the bet may be merely a continuation bet, a big stack allows you to raise to see if your opponent really has the hand he’s representing. Say, for example, that I have a pocket pair, but an overcard comes down on the flop and my opponent bets. With a big chip stack, I can reraise my opponent to find out where I stand. Often, my reraise will fold a bettor who was just trying to keep the lead, winning me a nice pot.

With a short stack, however, that information bet may be too expensive. Without a strong read on my opponent, I may have to fold the best hand, because my opponent can make it even more expensive for me to stay in the pot on later streets.

9. Chips can be used just to call a player to see what he has. Many times I have called another player’s value bet on the river, knowing I was beat (as I did not hit my draw), but just to see what he was betting with if I couldn’t put him on a hand based on how the betting had gone. This is valuable information that I may be able to use later when involved in a pot with this player. With a short stack, I can’t afford to get this information.

10. Chips can be used just to sit and wait. A big stack gives a player the luxury of changing gears at will, and using patience during times when he is not getting decent cards and the action of the other players at his table is too dangerous to get involved with. In other words, chips allow a player to use patience at those times when patience is the best strategy. Short-stacked players cannot use patience strategically, as the blind and ante costs will be eating away too big a percentage of their precious few chips.

With a short chip stack, the opportunities to use chips as described above are seriously diminished and often impossible. And the above list is by no means comprehensive. The shorter your stack, the less useful your chips become to you, until you are reduced to nothing but a single move—all-in before the flop, with whatever cards you have.

To correctly estimate the value of tournament chips, we must also consider the skill of the player who has them. Even an incredibly skillful player who is short-stacked will find many of his skills crippled by his lack of chips. He cannot use all of the possible strategic plays in his repertoire; in fact, he is in very much the same position as the short-stacked player with few skills at this point—just looking for a hand to take an all-in stand with.

So, skill alone cannot give chips their value. Much of the value of skill comes from having a big enough stack to use a full range of skills freely. This is why my chip utility value theory proposes that the more chips you have, the more skills that can be put to use, and the more each individual chip in your stack is worth.

Definition of Tight and Conservative Strategy

When I say that Sklansky’s and Malmuth’s reverse chip value theory leads to overly tight, conservative play in tournaments, exactly what do I mean by tight and conservative?

Tight means entering very few pots. Tight players primarily make their pot entering decisions on the basis of their cards, and too often they require premium cards to enter at all. They rarely steal the blinds, or any other pots, as they stay out of pots or leave pots if they do not believe they have the best hand. They respect the bets of other players to the point of requiring even more premium cards to enter or remain in a pot if another player has raised in front of them—even though they have position on that player.

A conservative player is different from a tight player, although a tournament player using strategies based on the reverse chip value theory will usually be both tight and conservative. A conservative player is one who bets weakly, if at all. He does more checking and calling than he does betting or raising, unless his cards are very strong. He is overly concerned with survival. He holds his chips very precious and is hesitant to part with them. He tends to think of his chips as money, as opposed to ammunition.

The overly tight and conservative strategy that is an inevitable byproduct of the reverse chip value theory will result in a skillful player waiting too long for premium cards and situations, and failing to take advantage of many betting opportunities that are worth the risk. The penalty for this overly conservative play is a steadily dwindling chip stack, drastically reducing the player’s opportunities to employ his full range of skills. This cuts not only the value of his remaining chips, but his overall tournament edge.

Players who are tight or conservative, and especially players who are both tight and conservative, only make it into the money in tournaments if they are dealt an abnormally high percentage of strong cards. Their long run tournament expectation is sharply reduced, or even negative, whenever they must compete with players who know how to exploit chip utility.

To illustrate how bad models and faulty logic lead to bad tournament strategies, let’s look at some specific examples of overly tight and/or conservative play recommendations for tournaments from the writings of Sklansky and Malmuth.

A Bad Model for Analyzing the Value of Early Risk in Poker Tournaments

Here’s a real goof by Sklansky that leads to overly conservative play. On pages 19-24 of Tournament Poker for Advanced Players, in a chapter titled “You’re Broke—You’re Done,” David Sklansky provides his reasoning on why the best players should not take risks with all of their chips even when they know they have small advantages early in a tournament. He provides an example of a player who has $100 and who is offered a coin-flip bet in which he would either lose $100 or win $120.

It is obviously an advantageous bet, since the player stands to win $20 more than he would lose on a coin flip. Winning half the time and losing half the time, his overall dollar expectation is to come out ahead $20 for every two times he takes this bet, a $10 profit per $100 bet. This is a 10% advantage, and one that any pro gambler would jump at.

But, Sklansky makes the question of whether or not the player should take this bet more interesting by considering whether this would be a wise bet if the player knew that on the following day he would be offered a coin-flip where he would either lose $100 or win $200. If heads wins $200, while tails loses only $100, then you will come out ahead $100 every two times you take this bet, for a $50 profit on every $100 bet, or a 50% advantage. Sklansky stipulates that if the player loses the first bet, he will not have the funds to make the second, more advantageous, bet.

With a few lines of math notation, Sklansky demonstrates that if the player knew for a fact that the second (better) bet would be offered the following day, he would be making an error to make the first bet. His math shows that if the player declined the first bet that paid $120 for $100, and waited for the second bet that paid $200 for $100, the player would expect to be ahead by $50, on average, for the two-bet series. On the other hand, if the player took the first bet, which allowed him to make the second bet only half the time, his expectation on the two-bet series would drop to only $35.

Sklansky concludes from this example that, in tournaments, taking an early risk on a small advantage is not the most profitable way for a highly-skilled player to play. He stipulates that his advice assumes you are “one of the best players,” that this is a risk you are taking for all or a significant portion of your chips, and that you are calling a bet for all or most of your chips, not pushing in on another player.

But I have a major disagreement with Sklansky’s extension of his logic—not his basic math, but again, his logic—from the individual coin-flip bets in the example, to coin-flip bets within the structure of a tournament. Again, Sklansky is using this example to prescribe actual tournament strategy, without considering actual tournament structure.

Let’s Do the Math for This Example Within the Structure of a No-Limit Coin-Flip Tournament

The way this series of bets would work within an actual tournament would be that the player would still lose his first bet of $100 half the time. But the other half of the time, he would win $120, giving him a total of $220. Now, when the 50% advantage bet came along, he would be able to bet $220. Again, he will lose half the time and end up with nothing, but the other half of the time he would be paid $440, ending up with $660 for his two all-in wins. Since he will have this win/win result only one-fourth of the time, his average finishing bankroll will be $660 / 4 = $165, for an average net gain of $65 on the two-bet series.

An average gain of $65 is clearly better than the $50 average gain Sklansky shows from declining the first bet and taking only the second one with the bigger edge. And it’s way better than the average net gain of only $35 that Sklansky shows for taking both bets of the two-bet series. The mistake Sklansky made in coming up with this diminished $35 gain is his assumption that even on those occasions when the player wins his first bet and profits $120, he will still bet only $100 of his $220 bankroll when the second, more advantageous bet comes along.
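
The three expectations discussed above can be checked with a few lines of arithmetic. This is a minimal sketch using only the numbers from the example (a $100 starting bankroll, a first coin flip paying $120, and a second coin flip paying 2-to-1):

```python
# Strategy 1: decline the first bet, take only the second bet for $100.
ev_skip_first = 0.5 * 200 + 0.5 * (-100)          # net gain = $50

# Strategy 2: take both bets, but never bet more than $100 (Sklansky's model).
# The second bet is only available the half of the time the first bet is won.
ev_capped = (0.5 * 120 + 0.5 * (-100)) + 0.5 * (0.5 * 200 + 0.5 * (-100))   # net gain = $35

# Strategy 3: take both bets, reinvesting the whole $220 stack after a first-bet win,
# as a tournament player actually would. The win/win result (probability 1/4) leaves $660.
avg_finish = 0.25 * 660                           # $165 average finishing bankroll
ev_all_in = avg_finish - 100                      # net gain = $65

print(ev_skip_first, ev_capped, ev_all_in)        # 50.0 35.0 65.0
```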

Sklansky’s analysis would only have validity if we assumed that all bets were limited to $100, and that there would be no blind/ante costs eating away at the $100 while waiting for the second bet opportunity. His analysis also assumes that the blinds, and the cost of playing, were not rising from day one to day two.

But this is rarely the case in a no-limit tournament. Since the cost of playing keeps going up and the blinds keep rising, the player who declines that 10% shot to increase his chips from $100 to $220 will probably not even be around when that 50% betting opportunity appears tomorrow. In fact, in a tournament, how much of that $100 would still be left to bet with when that more advantageous bet comes along?

Sklansky’s assumption that you “know” that a bet with five times the advantage will appear later may be true in many tournaments that last a very long time, but it makes a big difference in a tournament how much later it appears. And it is incorrect for Sklansky to assume that by the time of the later-appearing bet—when the blinds will have gone up, and the field will have been diminished, giving all remaining players bigger chip stacks—that the player who won the first bet would not go all-in on the second bet, but limit his bet to $100.

Sklansky’s example is irrelevant for developing a tournament strategy because it fails to account for tournament structure—a critical factor he often ignores in his mathematical analyses of tournaments. He believes that his example demonstrates that there is a dollar cost to early risks with small advantages, when, in fact, he is demonstrating that this cost really exists only for players who fail to use the chips they earn.

If, when you earn chips, you “protect” them by playing more conservatively, you will not gain the advantage available from having those chips, and the risks you took to earn them will not be worth it. If, on the other hand, you intend to use the chips you earn to make even more chips, then early risks will pay dividends.

And, as I will show directly below in the section titled “More Good Math, More Bad Logic”, Sklansky’s stipulation that his advice to pass up the first bet with the smaller advantage is only meant if you are “one of the best players” is actually terrible advice to give a highly-skilled player. It is precisely the most skilled player who will be able to use those extra chips to earn even more chips.

In “The Implied Discount,” I wrote:

The only time chips do not have more value in a bigger stack is when the bigger stack is in the hands of a player who does not know how to use them. For example, any player who plays according to Harrington’s M strategy will not gain the full available advantage from having a bigger stack of chips.

When you are waiting for hands, primarily playing your cards, and taking so little advantage of the edges available from other types of poker moves, your bigger stack will not be in action enough to earn you this greater value. So, when I say that the more chips you have the more each chip is worth, that assumes that the player will be deploying his chips in such a way as to extract their full potential earning value.

More Good Math, More Bad Logic

Sklansky’s argument against taking early risks is similar to Malmuth’s argument in Gambling Theory and Other Topics (page 204) against taking early risks to obtain a chip lead. As I described in my BJF article, “The Implied Discount,” Malmuth provides an example of two players who enter a small buy-in tournament where each receives $100 in chips. Player A plays very conservatively, so that he always has exactly $100 in chips at the end of the first hour. Player B plays a very aggressive style such that he busts out in the first hour three out of four times, but one out of four times finishes the first hour with $400 in chips.

Malmuth concludes that: “…because of the mathematics that govern percentage-payback tournaments, we know that the less chips a player has, the more each individual chip is worth, and the more chips a player has, the less each individual chip is worth. This means that it is better to have $100 in chips all the time than to have $400 in chips one-fourth of the time and zero three-fourths of the time. Consequently, A’s approach of following survival tactics is clearly superior.”

Just like Sklansky, Malmuth totally ignores tournament structure. The first thing any tournament player must think about with any tournament question is the structure, as the structure has a huge effect on optimal strategy. The structure also determines the factors that must be included in any model used to analyze and develop strategy. Without valid logic, the math is in a vacuum.

In considering Malmuth’s example, try to compare it with a real tournament situation. I do not know of any real-world tournament in which players start with only $100 in chips, so for a real-world example, I’ll use the daily $60 tournament held at the Flamingo in Las Vegas, where players start with $1000 in chips. This is a fast tournament that I describe and analyze in The Poker Tournament Formula (pages 40-41). Tournaments with similar starting chips and structures are also available daily at other poker rooms in Las Vegas.

To keep as close to Malmuth’s example as possible, I’ll imagine two players who each start with $1000 in chips: Player A, who always has exactly $1000 at the end of the first hour, and Player B, who loses his $1000 three out of four times, but increases his chip stack to $4000 one out of four times. Within the structure of this real-world tournament, where will these players stand after that first hour?

In the Flamingo tournament, blinds start at $25/$50 and double every 20 minutes. So, at the end of the first hour, the blinds enter their fourth level, which is $200/$400. Now, what do I think of the chances of a player who always has $1000 at this blind level versus the chances of a player who one-fourth of the time has $4000 at this blind level?

The player who always has $1000 in chips when the blinds become $200/$400 will have close to a zero chance of surviving to make it into the money. At this point, he will lose 60% of his chip stack in going through the blinds just once.

In fact, he will pretty much be forced to play any two cards from the big blind because 40% of his chip stack will already be in jeopardy when he is in the big blind. His chips have absolutely no utility value. All he can do is pray for a hand that holds up.

And even doubling up at this point gets him nowhere but looking for another push-in-and-pray situation. The player with $4000 in chips, by contrast, is not nearly so desperate. He can still steal a pot with an all-in bet, his blind-off time will be quite a bit further down the road, and a double-up for him will make him a real contender.

For a slower format, let’s consider what happens to these players if the blind structure is the same, but the players start with $2000 in chips. This is similar to the daily $60 tournaments at Treasure Island and a few other Las Vegas properties. In this case, Player A will have $2000 when the blinds become $200/$400, and Player B will have $8000. Now how do we rate their chances?

Again, with Player A having a total chip stack of only five times the big blind, I’d still put his survival chances at close to zip. He still does not have enough chips to steal a pot, unless he is playing with total wimps, and he still cannot afford to go through the blinds even once without losing 30% of his chips. Player B, by contrast, with a chip stack totaling 20 big blinds, is a much more viable player in this tournament. Again, his chips have utility value far beyond Player A’s chips.
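
To make the comparison concrete, here is a minimal sketch of the arithmetic above for both structures. It assumes, as described, that the fourth blind level of $200/$400 has just begun at the one-hour mark; the labels are only illustrative:

```python
BIG_BLIND, SMALL_BLIND = 400, 200
ORBIT_COST = BIG_BLIND + SMALL_BLIND   # $600 to go through the blinds once

stacks = {
    "Flamingo-style, Player A": 1000,   # always holds the starting stack
    "Flamingo-style, Player B": 4000,   # quadrupled up
    "TI-style, Player A": 2000,
    "TI-style, Player B": 8000,
}

for label, stack in stacks.items():
    big_blinds = stack / BIG_BLIND
    orbit_pct = 100 * ORBIT_COST / stack
    print(f"{label}: {big_blinds:5.1f} big blinds; one orbit costs {orbit_pct:.0f}% of the stack")
```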

As for Malmuth’s assumption that each of Player A’s chips is worth more because he has a chip stack totaling only five big blinds, while each of Player B’s chips is worth less in a stack that totals 20 big blinds: No way.

Player B can use his chips and his skill in many ways that Player A cannot. These real-world examples illustrate the folly of the reverse-chip-value theory. Player A’s only move with a chip stack totaling five big blinds is to move all-in and pray. And he does not have the luxury of waiting for a premium hand. (These examples also show the folly of Malmuth’s contention that structure has no effect on strategy.)

But What About Quadrupling Up in a Long, Slow Tournament?

Let’s consider Malmuth’s quadruple-up model in the context of another tournament structure—a long, slow event where the blinds do not rise so rapidly. We’ll use the $1000 event from the 2006 WSOP as our model. Players started out with $1500 in chips; the starting blinds were $25-$25; blind levels lasted 60 minutes; and the second blind level was $25-$50.

In this case, at the end of the first hour, Player A, who simply maintains his $1500 in starting chips, will have a chip stack of 30 times the $50 big blind, so the structure does not leave him as desperate as it leaves the players in the faster events. Player B, who quadruples up one-quarter of the time, will find himself with a chip stack of $6000, or 120 big blinds, on those occasions when his quadruple-up succeeds.

Malmuth has stated repeatedly that he does not believe tournament structure affects optimal strategy, but I believe this example shows clearly the huge effect of tournament structure on strategy. Within this slow structure, Player A’s conservative strategy does, at least, leave him a viable player at the end of the first hour.

But Player B is still following a superior strategy.

Essentially, what Player B is doing with this strategy is paying $4000 to enter a tournament with $6000 in chips while all of his opponents are paying $1000 to enter with $1500 in chips. This is not an option offered in any tournament I’ve ever heard of—where only one player is offered the option of making a triple rebuy off the top. But the question is: is there value for a skillful player in simply buying four times as many chips as his opponents are starting with?

The fact is, this is a question I’ve already answered in The Poker Tournament Formula in my chapter on rebuys. In that chapter I showed that, for a player with a skill advantage on his opponents, the more rebuys the skilled player makes, even at full price, the more money he will make from his less-skilled opponents. The reason he will make more money is that the more chips he has, the longer he forces his opponents to play against him at a disadvantage, so that the dollar increase in the skilled player’s winnings exceeds the cost of the extra chips.

So, why don’t most tournament pros use a go-all-in-twice quadruple-up strategy? The reason they don’t is because, in real-world tournaments, it’s not usually possible to quadruple up this way.

How do you get into a number of legitimate coin-flip situations for all of your chips in the first hour? If you have AK, how do you know an opponent who pushes all-in isn’t pushing in on AA or KK? If you have QQ, how can you be certain your opponent has AK and not AA or KK? Many players, and especially loose aggressive players, might call an all-in with AK or QQ, even early in a tournament, taking the risk of finding out that an opponent has AA or KK, because they know what that double-up will mean to their overall chances in the tournament. But this is a big risk.

And even if you are very aggressive, and you manage to double up once, none of your opponents are likely to have the same number of chips as you, and even if one does, what are the chances that you will find an opportunity to get involved in an all-in pot with that player to try for that second double-up? To quadruple up in real life, against stacks that are smaller, you may have to take the chips of three all-in pots.

You would have to be at a very loose table to find three players willing to put their tournaments at risk in the first hour at times when you had hands strong enough to believe you had at least a coin-flip chance of doubling against them. If I could quadruple up once every four tournaments I entered, believe me, I’d do it in a heartbeat. The opportunities just don’t present themselves.

In a post on the poker message board at this Web site, GardenaMiracle raised a point about the possibility of a chip-dumping scam at the 2006 WSOP, based on the method that was used for bringing in alternates after the tournament had begun. In his post “How valuable is this WSOP strategy?”, he wrote:

My buddy and I drove to Vegas for the main event and got there in time to register as alternates. We signed up together and were expecting to be seated randomly. OMFG!! We were seated right f'ing next to each other. That's how they seated ALL the alternates. How valuable is a chip position on the first day where 8 players dump off to one guy? How could this happen at the friggin WSOP? Has this been written about?

A scenario of eight friends colluding to dump their chips to one member of the group, giving him an $80,000 starting chip stack, would absolutely provide this player (assuming he’s a highly-skilled tournament player) with a huge advantage over his opponents and an excellent return on the group’s money, despite the fact that, in essence, they paid $80,000 for this player’s chips. That’s because a massive chip lead in the hands of a skillful, aggressive player would provide more than just the value inherent in being able to get more action with an edge. The enormous chip utility value of this giant stack would provide even more of an advantage, far in excess of the chip costs.

In one of the $1000 events at the 2006 WSOP, I had the opportunity to play for an hour early in the tournament at a table with John Phan, a highly-successful young tournament pro known for his hyper-aggressive style. We were in the third blind level when I was moved to his table, and Phan already had a huge chip lead on the table. I’ll estimate the average chip stack at that time as around $2500, while Phan’s stack probably equaled over $15,000.

Phan’s chip stack, combined with his aggression, literally had the rest of the table frozen, even though most of the players had very viable chip stacks in relation to the blinds with that tournament’s slow structure. If Phan was involved in a pot, or was positioned behind you so that he could still become involved in the pot—and he involved himself in well over 50% of the pots—any player who bet had to be prepared to commit all of his chips.

Players tried skillful moves, but Phan had more than enough chips to call or reraise them. For the hour or so I was at that table, Phan took down close to 90% of the pots he played. There was no choice for the other players at the table but to sit there waiting for a big hand to try to trap him, while his chip lead, and ability to withstand risks, just kept increasing. Phan went on to take second place in that tournament.

The problem with Malmuth’s example and conclusion is the same as the problem with Sklansky’s argument for waiting for better hands. Their models completely ignore critical tournament factors. They create examples where the utility and intimidation value of chips is ignored, where blinds do not exist, where waiting for a premium hand has no cost, and where the cost of playing a hand stays the same because the average chip stacks of competitors are not continually rising.

In a post on the 2+2 message board, Mason Malmuth addressed the comments I made about his first-hour strategy model in my “Implied Discount” article. He said:

As usual, Snyder misses the point. If the more aggressive player busted out three of four tournaments but then had $500 instead of $400 in chips, I would agree, and I also agree (as we'll see below) that playing aggressively is usually the best approach. But I was illustrating a certain point about percentage payback math, nothing more.

But I didn’t miss his point at all. I disagree with his “percentage payback math” because it is math based on poor models and bad logic, and is the math behind Sklansky’s and Malmuth’s conclusion that the more chips you have the less each chip is worth. As for Malmuth’s comment that he agrees that “playing aggressively is usually the best approach,” I will be discussing his actual tournament strategy advice in more detail below. For now, I wish to respond to another point in Malmuth’s post, where he attempts to distance himself from the costs to players of his own chip value theory by claiming that his reverse chip value theory has always been meant to apply only in the later portions of a tournament. He provides a quote from his Gambling Theory:

“…It needs to be noted that this force [reverse chip value] becomes significant only late in a tournament. Early in a tournament, it is not that crucial.”

To refute his claim that he has applied his theory only in his strategy advice for late in a tournament, I wish to point out that in his first-hour strategy model, which is his most extensive writing on early tournament strategy, he is specifically providing advice on how to play in the first hour of a tournament. And, as I pointed out in my “Implied Discount” and chip value articles, both he and Sklansky also base their published rebuy/add-on advice on the reverse chip value theory, again strategy decisions that must always be made in the beginning stages of tournaments.

You can’t use reverse chip value theory as the basis of your advice on the most important strategic decisions a player has to make in the first hour of a tournament, and then try to squirm out of your mistake by claiming that your theory applies only to the end of a tournament.

Sklansky “Disproves” My Chip Utility Value Theory

In his latest 2+2 article, titled “Chips Changing Value in Tournaments,” Sklansky purports to prove that the value of individual chips in a big stack does not go up as more chips are added to the stack. Here is the “proof” he offers:

Suppose for instance that in a 32 player tournament a good player’s $100 buy-in was worth $150. Could her doubling to $200 in tournament chips be worth $301. And to $400 in tournament chips, $603, and the $800 in tournament chips, $1,207? And to $1,600 in tournament chips, $2,415? And to $3,200 in tournament chips, $4,831? Obviously not.

In other words, what Sklansky provides to disprove what I call chip utility value is the fact that if a skilled player wins all of the chips in a tournament, the chips obviously can’t have more value than the total prize pool.
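
A few lines of arithmetic make the cap explicit. This sketch assumes the full $100 buy-in goes to the prize pool (ignoring any house fee) and simply compares Sklansky’s compounding series from the quote above against that cap:

```python
players, buy_in, starting_chips = 32, 100, 100

prize_pool = players * buy_in              # $3,200 is the most any stack can ever be worth
total_chips = players * starting_chips     # 3,200 chips in play

claimed_values = [150, 301, 603, 1207, 2415, 4831]   # Sklansky's doubling series
for chips, value in zip([100, 200, 400, 800, 1600, 3200], claimed_values):
    note = "exceeds the entire prize pool" if value > prize_pool else "ok"
    print(f"{chips:>5} chips -> claimed ${value:>5}  ({note})")
```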

But there are two major errors in logic in Sklansky’s “proof.” First, when we say that a player has a $150 expected value (or EV) on his $100 buy-in, we are saying that this player, because of his skill and overall strategy, expects an average return of $150 per $100 buy-in for every tournament like this that he enters. You can’t assign that overall expectation on the tournament as the chip utility value of the starting stack, nor can you automatically multiply this overall EV as you multiply your stack.

The reason you can’t do this is because your overall $150 EV is derived from your aggressive stack-building strategy; it’s because the player is consistently building chip leads that he has that $150 EV. The $150 does not represent the chips’ value, it represents the strategy’s value, and the value of an overall strategy doesn’t double every time a player’s chip stack does.

In incorrectly assigning the player’s overall EV as the utility value of his starting chips, Sklansky errs in his tournament logic. Now onward to...

The Second Major Error in Sklansky’s “Proof”

The second major error in Sklansky’s logic in this “proof” is his reliance on a model that, again, ignores crucial tournament factors. His doubling-up player is playing in a tournament vacuum—with no opponents and no steadily-increasing blinds. To illustrate the importance of this mistake, let’s consider some situations where a chip increase does not increase utility.

Let’s say that a player, who has started with $100 in chips, has increased his chip stack to $200 a few hours into the tournament. Does this doubling of his stack to $200 mean that the chips in his stack are now worth more than the chips in the $100 stack he had when he started the tournament? To answer this question, we must consider whether or not this double up has increased the player’s chip utility. In order to answer this question, we have to consider how his chip stack at this point measures up against his opponents’ chip stacks, as well as the current blind and ante costs.

If a player doubles his initial chip stack, but the average chip stack at his table has tripled during this time period, then the utility value of his chips has decreased since the tournament started. For the chip utility value to increase, the player’s stack would have to be big enough to provide a bigger ratio between the size of the stack and the blind/ante costs, and/or a bigger chip lead over the chip stacks of the player’s opponents.

So, any model that automatically assigns twice the chip utility value every time a player doubles up is just a silly model. Real chip utility value depends on when that double-up occurs and how the size of that newly doubled stack relates to the size of competitors’ stacks.

If the average chip stack of a player’s opponents has also doubled by this point in the tournament, so that our player has merely kept up with the field, while the blinds have doubled or more than doubled during the same time period, the bigger cost of the blinds will in and of itself have diminished our player’s chip utility. It will have become more difficult and more expensive to steal the blinds or any pot, and more expensive and dangerous to use many of the chip utility functions described at the beginning of this article. If a player is losing ground to the blind costs, his chip utility goes down, even if he’s keeping up with his competitors, because his skill options are constricted and any move he makes can jeopardize his tournament survival.

But these tournament factors in no way invalidate the chip utility concept that the more chips you have, the more each chip is worth. While chip utility has the greatest value when a player is not only well stacked in relation to the blinds, but also has built a significant chip lead on his opponents, chip utility has some value even when a player is short-stacked.

For example, let’s say that at the point in a tournament when the average chip stack at a table has tripled to $300, our player’s chip stack is still just $100. Now, he’s in real trouble. No matter how much skill he may possess in comparison to his opponents, his chip utility is seriously diminished.

Suddenly, however, he gets that miracle hand that doubles him up to $200. Now, has this increase in chips increased the value of the chips in his stack? Absolutely it has, because he has increased his chip utility. The problem is that his opponents with $300 in chips have even greater chip utility. They have more ammunition to outplay him.

Still, even though the individual chips in this $200 stack now have less value than the individual chips in the $100 stack this player had at the start of the tournament, they have more value than the chips in the $100 stack this player had just before his double-up. So, in that sense—and this is the only sense that matters strategically when you are actually playing—at virtually any specific point in a tournament, the more chips you have, the more each chip is worth. (And I am assuming here that you are a skillful player who knows how to use your chips for their utility value.)
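
One crude way to see the relative-position point is to track the player’s stack against the table average at each moment in this example. The ratio below is only an illustrative proxy for utility, not a formula from the book:

```python
# Stack sizes taken from the example above.
snapshots = [
    ("start of the tournament", 100, 100),    # (label, hero stack, field average)
    ("just before the double-up", 100, 300),
    ("just after the double-up", 200, 300),
]

for label, hero, field_avg in snapshots:
    ratio = hero / field_avg
    print(f"{label}: hero/field ratio = {ratio:.2f}")

# The double-up lifts the ratio from 0.33 to 0.67 -- more utility than a moment
# earlier -- but still below the 1.00 the player had at the start, which is the
# sense in which these chips are worth less than the chips in his original stack.
```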

Again, chip utility value does not increase or decrease in lockstep with an increase or decrease in an individual player’s stack. If you wanted to try to assign a specific dollar value to chip utility at any moment in a tournament, the calculation would be complex, because there are so many tournament factors that affect a player’s chip utility value.

A poker tournament is a constantly shifting terrain of power relationships. For instance, what happens to the value of a significant chip lead if a new player with a much bigger chip stack is suddenly moved to the same table? If the bigger stack is in front of a loose, aggressive player, then his dominating chip stack can cause a decrease in the former chip leader’s chip utility, and this decrease can be further exacerbated by the giant stack’s seating position in relation to our player.

Likewise, a player at a table can suddenly double up or triple up and take a big chip lead on a former chip leader; or a chip leader can suddenly be moved to a new table where his chip position is less optimal. There are many factors within a tournament that can instantly affect a player’s chip utility even when the size of his chip stack has not changed at all.

In fact, a simple change in the blind level can have a big effect on any skillful player’s chip utility, not only because it affects his betting costs, but because of the way it can more seriously affect shorter stacked opponents. For example, a blind level change may increase the chip utility of big stacks by decreasing the chip utility of shorter stacks that are suddenly desperate stacks.

Sometimes, the opponents’ chip stacks get too short for the big-stacked player to use his full range of poker skills against them. Extremely short-stacked players can’t afford to play poker and are often limited to looking for all-in shots to take. But this does not cause a decline in chip utility value for the big stack, because the big stack can simply switch to the skill set optimal for the situation.

When your opponents are too short-stacked to afford the blind costs, and really can’t wait for a premium hand, then the value of patience and hand selection for the big stack skyrockets. Likewise, unless his opponents are so short-stacked that they must call any all-in bet with any two cards, the big stack can still use his chips to steal aggressively preflop.

Here’s an interesting question. As your chip lead gets greater and the stacks around you grow more desperate, should you slow down your aggressive stack-building play? On page 167 of The Poker Tournament Formula, in the section titled “You’re Long, They’re Short”, I advise players with significant chip leads to continue playing with aggression but to be more selective in picking the spots to make moves. And I advise against risking too many of your chips in very dangerous situations. You don’t want to double up an opponent who is on the brink of extinction. And you can’t bluff him out of a pot when he will likely call an all-in with any legitimate hand.

And on page 323 of The Poker Tournament Formula, in the section called “The Crunch Strategies,” I advise chip-rich players to avoid confrontations with other chip-rich players except for very aggressive moves with strong cards. I also advise players to avoid confrontations with desperate stacks without a relatively strong hand, unless they have an exceptionally big chip lead.

But I advise increased aggression against medium-stack players, with the exception that I advise avoiding multi-way raised pots out of position. (The specific hand recommendations in this section of my book are based on fast tournament structures. See both of these sections for more detailed playing advice.)

Pikachu’s “Time Value” Argument

I am not the only analyst who finds Sklansky’s arguments less than convincing. Another point Sklansky makes in his recent 2+2 article is in defense of early tight play, or avoiding early risk with a small edge. Sklansky says:

Now let’s get back to the question of whether extra chips are worth less and how the answer should affect your tournament strategy. Earlier I said that even in one winner tournaments the best players are usually in a situation where extra chips are worth less than owned chips (again though, still often worth more than face value). To show this one need merely ask the question “is such a player favored to double up before he goes broke?” If the answer is yes, as you would expect it to be for great players, chips must lose value.

Pikachu, who has posted the results of numerous poker tournament simulations on our poker message boards, emailed the following comments on Sklansky’s point:

Sklansky makes the claim that a player should avoid a coin flip if he has greater than a 50% chance of doubling up without making the coin flip.  I can see where he gets this from.  Consider a player with a 55% chance of doubling before busting.  That’s obviously better than taking a coin flip where he’s only got a 50% chance of doubling up before busting, right?  Wrong. 

It takes time to accumulate those chips.  Bankers and economists and really most people are familiar with the time value of money.   Would you rather have a 55% chance of doubling your chips an hour from now, or a 50% chance of doubling them right now?  The answer all depends on how well we could use those extra chips in the next hour. 

Given that we could have a 55% chance of doubling our original stack in the next hour, we probably have a good shot at accumulating even more chips with a bigger stack.

Pikachu’s point is similar to the point I made earlier in this article regarding Sklansky’s flawed logic (in the coin-flip example in his tournament book) in advising that players should avoid early risks with small advantages when a bigger advantage may be coming later. There is a value to having more chips now if you can use them to generate even greater amounts of chips for later.

And if you are a skilled player, you should be increasingly able to earn chips as you add chips and your chip utility value increases. Sklansky’s math is correct, but again, he fails to account for critical tournament factors in his flawed models, and the tight playing style he advises—waiting for a very high advantage hand to double up—will most often leave a player who follows his advice short-stacked even after that high advantage double-up occurs (if it occurs).

Sklansky’s Opinion of “Chip Utility” in Poker Tournaments

In the same 2+2 article, Sklansky does at least acknowledge that there may on rare occasions be a utility value to chips for some “rare” players. He says:

But there are exceptions. One rare one is the player who plays a lot better with a big stack. In other words he is not a favorite to double up until he has gotten a lot of chips (almost inconceivable for limit tournaments). This is sometimes the case for psychological reasons, either in his mind or his opponents. Or it might simply be that he is weak playing shorter stacks. Such a player would be well advised to gamble early in a tournament including even calling all-in bets.

Truly skilled players may indeed be “rare,” but I find it amazing that Sklansky thinks that a player skilled enough to play “…a lot better with a big stack,” might be “weak playing shorter stacks.” Every truly skilled player is weaker with a short stack than with a big stack, because his skill options are so limited with fewer chips.

The fact is there is never much skill involved in playing a short stack. Any player’s skill options on a short stack are limited to hand selection and an occasional kamikaze shot at the pot, which is why short stack strategies have been ably reduced to simple formulas in a number of books, including The Poker Tournament Formula.

It’s optimal big stack play that takes real skill, because you have so many options.

Loose versus Tight on the Bubble

Let’s return to the issue of how the two conflicting chip value theories have influenced playing advice for real-world tournament situations. In Gambling Theory and Other Topics, page 210, Malmuth writes:

Also, the further along the tournament is, the more important it is to survive. For example, if the top eight players receive money, it is much more important to be in your survival mode when nine people are left than when fifty people are left.

Compare this to my bubble advice in The Poker Tournament Formula (page 320):

The players with medium-sized stacks (which, in almost all cases are really short stacks relative to the blinds, though these players don’t know it), slow down at this point because they are so close to finishing in the money they can almost taste it…These medium stacks are the players that you will feed on during crunch time. They are terrified of going out when they feel they’ve almost arrived.

In fact, most of these players are doomed. With fast play you will eat them alive. Their chips will become your chips. You will play a high-risk game, but I guarantee you it will pay off…

It is absolutely imperative that you do not think like one of these doomed players. You do not want to finish in a bottom-rung money position. Finishing in a bottom money position is not much better than busting out in the first hour. You can’t make money on these poor finishes, and if you’ve played this many hours you want more than just some meager “courtesy win” for making it this far.

You make your money by finishing at the top. That’s your whole reason for being here.

Going Out with a Bang vs. a Whimper: Poker Tournament Exit Moves

There has been some debate between 2+2 and this Web site over whether or not the tournament strategies advocated by Sklansky and Malmuth are really tight. Malmuth has insisted that their strategies are not tight. Although I have already addressed numerous examples of Malmuth’s and Sklansky’s tight strategy advice in this article, I am going to provide still more examples to make sure the point is understood by players once and for all. Here’s a quote from Sklansky’s Tournament Poker for Advanced Players (page 68):

… the advice you often hear, to start playing poor hands when your stack is short, since you no longer have time to wait for a good hand, is for the most part wrong… There is no reason to play much weaker hands to avoid going all-in soon thereafter. Even if you throw all your hands away until that one last hand, where you are forced all-in, it is not so terrible. To give up sooner than that, and play a very bad hand is a big mistake.

Here Sklansky is advising players “to wait for a good hand” even to the point of being slowly blinded off, which he calls “not so terrible.” What’s more, he states that to make a stand with “much weaker hands” is “to give up.” I say that the fundamental definition of tight is an overemphasis on cards, and on waiting for a “good hand”.

Malmuth advises the same strategy in Gambling Theory and Other Topics (page 210), where he asserts that it’s better for tournament players to “go out with a whimper,” (i.e., being slowly blinded off while waiting for a premium hand), than it is to “go out with a bang” (i.e., taking a shot with a more marginal hand).

Here’s my take on this exit strategy, from The Poker Tournament Formula (page 143):

A wimp generally exits a tournament by watching his chip stack get ground down by the blinds. He is the most pathetic player in any no-limit hold’em tournament, the player who gets blinded off while waiting for a hand. He epitomizes the term dead money. His chips have no life to them at all. They just sit there in front of him getting eaten away slowly. Wimps can be counted on to do this in almost every tournament they enter.

Both Malmuth and Sklansky do provide some tournament advice in their books where they address certain specific tournament situations in which they believe a player would be correct to play looser than he would in a ring game. Most of these situations, however, are restricted to play against extremely short-stacked players, or extremely tight players.

This does not mean their overall perspective is anything other than conservative and survival-oriented, or that their general approach is anything other than overly tight. By restricting looser play to so few circumstances, their advice remains too tight, and it is too tight because it will consistently fail to build the kind of stack a tournament player needs to win.

The Add-On Dispute

In Tournament Poker for Advanced Players, Sklansky advises players not to make an add-on unless they have less than the average number of chips of their competitors. In The Poker Tournament Formula, I advise that, in tournaments that allow add-ons, skilled players should go ahead and make the add-on even with a big stack. To refute that advice, Sklansky tells us in his 2+2 article that Aaron Brown “did a simulation that convincingly demonstrates that very big stacks should not purchase add-ons.”

On the Blackjack Forum Online / Poker Tournament Formula Web site, Pikachu did a simulation that showed the same thing as Aaron Brown’s simulation, assuming all players have equal skill. But Aaron Brown’s simulation, like Sklansky’s logic, is based on a flawed model.

Pikachu also ran other simulations showing that players who have a skill advantage over their opponents should make add-ons even when they have big stacks. For example, according to one of Pikachu’s simulations (a 100-player tournament with a $100 buy-in for $100 in chips, plus a $9 house fee), a $100 add-on for $100 more in chips will show a positive return to the player, even if the player has tripled up prior to the add-on, provided he has an advantage over the field of 20% or more.

And, the higher the player’s skill advantage over his competitors, the greater the value of making the add-on with a big stack, which is exactly what I advised in The Poker Tournament Formula. (And it should be noted that Pikachu’s simulation is based on an assumption that a player’s advantage is fixed. It does not include the additional chip utility of a big stack.)

Here’s a link to Pikachu’s simulation results: Rebuy Analysis for Skilled Players in Multi-Table Poker Tournaments
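
For readers who want to see why the skill assumption changes the answer, here is a deliberately oversimplified sketch. To be clear, this is not Pikachu’s simulation: it assumes a player’s expected prize share is proportional to a skill-weighted share of the chips in play and ignores the payout structure entirely, so it will not reproduce the 20% threshold above (at equal skill it comes out break-even rather than negative). It shows only the direction of the effect: the bigger the edge, the more the add-on is worth, even to a tripled-up stack.

    # Toy model of the add-on decision (not Pikachu's simulation).
    # Assumptions: every other player takes the add-on, each chip in play
    # corresponds to one prize-pool dollar (fees excluded), and the hero's
    # expected prize share is his skill-weighted share of the chips.

    def addon_gain(hero_chips, skill, n_players=100, start_chips=100,
                   addon_chips=100, addon_cost=100):
        """Net dollar value of buying the add-on under the toy model."""
        other_chips = (n_players * start_chips - hero_chips
                       + (n_players - 1) * addon_chips)

        def equity(hero):
            prize_pool = hero + other_chips        # $1 of prize money per chip
            weighted = skill * hero
            return prize_pool * weighted / (weighted + other_chips)

        return equity(hero_chips + addon_chips) - addon_cost - equity(hero_chips)

    for skill in (1.0, 1.1, 1.2, 1.5):
        # Hero has tripled his 100-chip starting stack before the add-on break.
        print(f"skill multiplier {skill:.1f}: add-on worth {addon_gain(300, skill):+.2f} dollars")

Under these assumptions the add-on is worth nothing at equal skill and becomes steadily more valuable as the edge grows; Pikachu’s simulations, which model the actual tournament rather than this toy equity formula, are what produce the 20% figure.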

Don’t Fall Into “The Gap”

Sklansky wraps up his 2+2 article by reasserting his “gap” concept (though failing to provide a proof or argument for his assertion):

Finally, always remember the giant differences between calling and raising, in other words my “gap” concept, especially if your opponents are not maniacs. Even if your won chips are worth less than your owned chips, a big bet that might win immediately can be well worth it with hands that are underdogs if called.

Since this is even truer in those situations where your short stack temporarily makes the second derivative of the value of your chips positive, you better be doing a lot of first to act, all-in moves, in this situation. Just like Dan Harrington and I have already told you.

Sklansky’s allegation that “your won chips are worth less than your owned chips” is essentially a rephrasing of the reverse chip value theory, which I have already shown is based on faulty logic that ignores the utility value of chips. But in addition to this error, I also have a serious problem with his “gap” concept in no-limit hold’em tournaments, a concept I dispute in The Poker Tournament Formula.

The gap concept essentially says that it takes a stronger hand to call a raise than it does to make a raise. The logic here is that the earlier your position, the more players there are to act after you, and the greater the likelihood of another player finding a hand that is stronger than yours.

Therefore, since you need a stronger hand to raise from an early position, you must assume that a raise from a player in an earlier position than you is indicative of a very strong hand. So, in order to call a raise from a player who enters the pot from an earlier position than you, you would need a stronger hand than you would need to enter the pot with a raise yourself if you were the first player to enter the pot.
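
The arithmetic behind that logic is easy to sketch. Treating each opponent’s hand as an independent draw and ignoring card removal (so the numbers are only approximate), the chance that at least one of the k players still to act holds a hand in the top fraction p of starting hands is 1 - (1 - p)^k:

    # Rough illustration of the logic behind the gap concept, not an exact
    # calculation: opponents' hands are treated as independent draws and
    # card removal is ignored.

    def chance_someone_behind_has_it(p, k):
        """Probability that at least one of k players holds a top-p hand."""
        return 1 - (1 - p) ** k

    for k in (1, 3, 8):
        print(f"{k} player(s) behind: {chance_someone_behind_has_it(0.10, k):.0%}")

With eight players still to act, someone behind you holds a top-ten-percent hand better than half the time (about 57%); with only one player behind, it is just 10%. That is the card-based logic the gap rests on.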

The gap concept makes sense in certain types of limit hold’em cash games. Assuming you are in a game where most of the players understand this logic, and play accordingly, it can make sense to fold some hands that you might normally raise with if an early raiser has entered the pot before you. In limit games, it is often difficult to get a player with a premium hand—say a high pocket pair—out of a pot with a bluff, as the limit structure restricts how much you can bet.

In no-limit cash games, however, or even limit cash games played short-handed, or limit games played at very high stakes where pros do not necessarily adhere to predictable betting patterns, the gap concept falls apart. And in tournaments—especially in no-limit tournaments—the concept is utterly useless. It is a concept that in and of itself will make any player too tight in a tournament.

If someone raises in front of you, and you are following the gap concept, you are, with very few exceptions, stuck looking for a monster hand, and if you don’t find one you’re out of the pot. And, in his tournament book, Sklansky alleges that in tournaments the gap actually widens from what it is in cash games!

As I mentioned at the start of this section, I first challenged the validity of the gap concept in The Poker Tournament Formula. If you look at my card strategies in Chapter Nine, you will see that I recommend calling preflop raises (and in some cases reraising) with pretty much the same hands that you would make preflop raises with if first into the pot.

This is because in tournaments, there is much more value to position than cards. (If you’ve read the book, you may recall the rock-paper-scissors analogy: Position beats cards. This is because, unlike in cash games, it is comparatively easy to scare players out of pots postflop in tournaments, even players with premium preflop hands. Tournament players are also much less likely to keep the lead in betting if the flop does not hit them, giving a player with position a chance to bet and steal many pots uncontested.)

In fact, not only do I advise calling or reraising preflop raises with the same cards with which you would make preflop raises if first in, I take this idea further. When on the button, I recommend calling standard raises with any two cards. In Chapter 24 of The Poker Tournament Formula, titled “Break Out of the Mold,” I also suggest that tournament players should take advantage of players who believe that the gap concept applies in tournaments:

One “truth”… which is spouted in book after book on the game, is that it takes a stronger hand to call a raise than it takes to make a raise. In limit hold’em games, this is generally true. Raisers, and especially raisers in early position, do tend to have strong hands, and you’re unlikely to get rid of them with fancy play…

In the fast tournaments, the actual rule to remember if you want to get into the money is: “It takes a stronger position to call a raise….” If you are up against a solid poker player, one who has read all the books and generally plays legitimate hands accordingly, he will be very concerned if you call his pre-flop raise. He will automatically assume you must have a medium to high pair, or else AK, or maybe AQ suited…

He does not understand that all you really need to call his normal raise is a seat to his left. Your seat is stronger than his hand.

These players evaluate every move you make based on what they think your cards are. What you’re really doing… is finding reasons to bet that have nothing to do with your cards, reasons beyond the mental world of many players who have read all the books, or who are coming from a limit hold’em, non-tournament background…

These post-flop steals are where a sizable amount of your final table chips will come from. You don’t bet on your own luck, you bet against the other guy’s luck… nothing is less lucky than to be dealt good cards in a bad position.

Because Sklansky fails to recognize the advantages available from aspects of poker that have little to do with cards, such as position plays and chip shots, he cannot give up the logic that makes cards the supreme decider of all strategies, even in situations, like no-limit tournaments, where showdowns are much less likely. (Showdowns are rarer in tournaments because chips are so precious and, except during rebuy periods, cannot be replenished once lost.)

Again, the gap concept is a card-based concept that inevitably leads to tighter and more conservative play. As Sklansky puts it in his “Tournament Theory Afterthought,” on page 100 of his book:

Tournament differences impact how you should play your hands, they increase the “Gap,” they force you to give up on many small edges, and frequently make overall play tighter.

In Conclusion: A Golden Opportunity for Smart Players

Most of what I have written in this article is implicit in the strategies advised in The Poker Tournament Formula, although I avoid complex discussion of much of the underlying theory and math in the book. If a looser, more aggressive style of play does not appeal to you, then I would advise you to stick to the cash games. Tournaments are won by risk-takers who have the guts to steal their way to the top. Your cards will never get you there in today’s aggressive tournament environment.

The upside of Sklansky’s and Malmuth’s mistaken chip value theories for smart players is that most of the poker tournament books published in the past twenty years base their strategies, directly or indirectly, on this reverse chip value assumption, which is why their recommended strategies are so tight and conservative. So, many of the players who are entering tournaments today and studying the major authors for advice are going in like lambs to the slaughter.

This is good for you if you play tournaments, as these books will continue to sell and it will probably be a number of years before the masses catch up. (Already, however, I see that Erick Lindgren’s new book, Making the Final Table, challenges many of the old, tight, survival theories.)

I cannot stress enough the importance of abandoning the theory of reverse chip value, and the conservative strategies to which it leads, if you are trying to make money in poker tournaments. This widely held belief is the reason why 80+% of the players who enter no-limit hold’em tournaments today don’t have a prayer of making a final table. They are so tight and so easy to read that the smart, aggressive kids just keep stealing the prize pools from them. Despite what most of the popular books tell you about tournament “survival” strategies, I’m telling you to stop sitting there like a loser.

In closing: Look, gang, the reverse chip value theory was devised two decades ago by way of faulty analyses of bad tournament models, and just because no one in all that time ever specifically challenged it in print doesn’t mean it wasn’t always wrong.  ♠

Acknowledgements

Special thanks to Pikachu for his simulations that address the faulty models on which overly tight rebuy strategies are based, and for pointing out the time factor in chip value.

Special thanks also to Radar O’Reilly for pointing out a number of mistakes in Sklansky’s double-up model for analyzing chip value.

Thanks also to BigAlK, GardenaMiracle, and WRX for their insightful contributions to the discussions on poker tournament chip value theory on the message boards at this Web site.  

For more information on poker tournament theory and strategy, see the Poker Tournament Formula Home.

© 2004-2006 Blackjack Forum Online/Poker Tournament Formula, All Rights Reserved