When the members of the 2016 November Nine sit down to play down to a champion on Oct. 30 inside The Penn & Teller Theater at the Rio All-Suite Hotel and Casino in Las Vegas, no one knows what will actually happen, but the folks here at PokerNews and Advanced Poker Training may have a little insight.
In advance of the 2016 World Series of Poker Main Event final table, we got together and ran 100 final table simulations, publishing the results, key hands and strategic analysis of how it all played out in a three-part series dubbed Simulating the November Nine.
Using the information we had gathered to put together bios of each of the November Nine, we devised a basic playing style for each of the players. Of course, we never imagined we could predict exactly who would win the Main Event this year with any real accuracy or even devise playing styles that fit the November Niners perfectly. There are simply too many variables at play.
However, we went ahead, thinking we could build playing styles that were reasonably close to reality, that this would be a unique way to create some interesting Main-Event-focused content, and that it would give us a more scientific way to predict who might come out on top.
Advanced Poker Training was developed by a pair of Floridian academics, software developers and brothers named Steve Blay and Allen Blay. The organization offers poker training in a bit of a different way. Instead of the usual video tutorial or one-on-one in the lab formats, APT sends its players off to play in simulated games against artificial intelligence bots. In fact, they’ve designed thousands of different AI bots with different playing styles, weak and strong, for players to test themselves against, so it was easy to envision they could take the basic November Niner playing styles we’d come up with and design bots around them.
Steve Blay, the software developer behind the APT product, said its bots are designed based on a set of criteria that include tendencies, frequencies and adjustments real players make, and are designed to play like real humans with real flaws.
Blay mapped the information PokerNews provided about each player, including personality, playing style and experience, to the 42 configurable personality traits available for APT's AI bots, then adjusted the settings based on an assessment of how comfortable each player would be in the spotlight, creating bots that played similarly to each of the November Nine.
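APT has not published its actual trait list, so the names and values below are purely hypothetical, but a trait mapping like the one described might look something like this in code:

```python
# Hypothetical sketch only: APT's real 42 personality traits are not public.
# This illustrates how a player profile could map to numeric bot settings,
# each on a 0.0-1.0 scale.
qui_nguyen_bot = {
    "preflop_aggression": 0.9,   # raises wide, three-bets light
    "bluff_frequency": 0.7,      # fits the professional-gambler profile
    "slowplay_tendency": 0.4,    # will occasionally check a monster
    "spotlight_comfort": 0.8,    # adjustment for playing on the big stage
}

# A bot engine could then read these values to weight its decisions.
assert all(0.0 <= v <= 1.0 for v in qui_nguyen_bot.values())
```

Again, these particular trait names are invented for illustration; the point is simply that qualitative scouting notes get translated into tunable numbers.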
Then he made two predictions for each player: the number of wins expected per 100 simulations, based on stack size as a percentage of the total chips in play, and the average money won, calculated using the Independent Chip Model (ICM).
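The article doesn't show how APT computed these baselines, but the standard Malmuth-Harville ICM calculation, which assigns each player a first-place probability proportional to their stack and then recurses over the remaining places, can be sketched as:

```python
def icm_equity(stacks, payouts):
    """Malmuth-Harville ICM: expected prize money for each player.

    stacks:  list of chip counts, one per player.
    payouts: prize for 1st, 2nd, ... (len(payouts) <= len(stacks)).
    """
    n = len(stacks)
    equity = [0.0] * n

    def recurse(remaining, place, prob):
        # Stop once all paying places are assigned.
        if place >= len(payouts) or not remaining:
            return
        total = sum(stacks[i] for i in remaining)
        for i in remaining:
            # P(player i takes this place) is proportional to their stack
            # among the players still in contention.
            p = prob * stacks[i] / total
            equity[i] += p * payouts[place]
            recurse(remaining - {i}, place + 1, p)

    recurse(frozenset(range(n)), 0, 1.0)
    return equity


# Two equal stacks split the prize pool evenly under ICM.
print(icm_equity([100, 100], [10, 5]))  # → [7.5, 7.5]
```

Note how ICM gives the chip leader the biggest expected payout but far less than a chips-proportional share of first place, which is why the Josephy bot could win "only" 17 of 100 sims yet land close to its predicted average money won.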
The 100 sims provided the following results:
2016 November Nine Simulation Results (100 Sessions):
| Player | Actual Wins | Predicted Wins (Per 100) | Total $ Won | Actual Average $ Won | Predicted Average $ Won | Percent Difference |
| --- | --- | --- | --- | --- | --- | --- |
The numbers came relatively close to the predicted values, with the Qui Nguyen bot providing the most interesting outlier, winning 26 out of 100 times, the most of any player. Chip leader and clear favorite, the Cliff Josephy bot, underperformed, winning only 17 of the 100 simulations. However, its average money won was down only seven percent overall, which suggested it finished in the top half of the payouts in most of the sims.
The third statistical outlier was the fact that the bot designed to play like 888poker qualifier Fernando Pons managed to win five times, although it was predicted to win only twice. It also won 19 percent more prize money than expected, leading the group.
The performance of the bot designed to play like professional gambler Qui Nguyen was clearly the most surprising. It was designed to gamble and did so successfully. However, the bot still displayed some savvy. In one of the key hands (revealed in the second part of the series) that led the bot to victory, that savvy was clearly on display.
The hand in question began with the Jerry Wong bot picking up the kind of hand it couldn't really fold to a three-bet from Nguyen, the most aggressive player at the table at the time.
As you can see from the replay, however, the Nguyen bot got paid off this time because it slowed down a little and checked the flop with a monster.
Of course, the Nguyen bot got extremely lucky its opponent turned top pair, but that check on the flop may have helped get more money out of the Wong bot even with a blank on the turn, considering it was probably a good spot for the Wong bot to turn its hand into a bluff.
It clearly ran well in several of the sims, but the Nguyen bot obviously played well, too. Based on the strategy the bot employed with great success throughout the sims, APT offered Nguyen a sponsorship for the final table that comes with a patch and some advice drawn from what APT learned.
For the third part of the series, PokerNews asked APT's Director of Operations Steve Blay to give each of the players keys to winning and pitfalls to avoid based on what he saw in the simulations.
Most notably, Blay suggested Cliff Josephy should avoid big confrontations with players holding stacks big enough to hurt him and instead put pressure on the rest.
He also suggested Nguyen should take advantage of timid opponents and use his reputation as a gambler to make other players fear confrontations with him.
The full strategic analysis for all the players is available in Part 3 of the series. All of the key hand replays are available in Part 2 of the series and the full results and more on the entire process can be found in Part 1.