TD-Gammon is a computer backgammon program developed in the 1990s by Gerald Tesauro at IBM's Thomas J. Watson Research Center. Its name comes from the fact that it is an artificial neural net trained by a form of temporal-difference learning, specifically TD-Lambda. It explored strategies that humans had not pursued and led to advances in the theory of correct backgammon play.
In 1993, TD-Gammon (version 2.1) was trained with 1.5 million games of self-play, and achieved a level of play just slightly below that of the top human backgammon players of the time. In 1998, during a 100-game series, it was defeated by the world champion by a margin of only 8 points. Its unconventional assessment of some opening strategies has since been accepted and adopted by expert players.
TD-Gammon is commonly cited as an early success of reinforcement learning and neural networks, and was cited, for example, in the papers introducing deep Q-learning and AlphaGo.
TD-Gammon's learning algorithm consists of updating the weights in its neural net after each turn to reduce the difference between its evaluation of previous turns' board positions and its evaluation of the present turn's board position—hence "temporal-difference learning". The score of any board position is a set of four numbers reflecting the program's estimate of the likelihood of each possible game result: White wins normally, Black wins normally, White wins a gammon, Black wins a gammon. For the final board position of the game, the algorithm compares with the actual result of the game rather than its own evaluation of the board position.
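For illustration, the four-component evaluation and the terminal-position target described above might be represented as follows; the array layout and function name are assumptions for this sketch, not TD-Gammon's actual data structures.

```python
import numpy as np

# Network estimate for a non-terminal position:
# [P(White wins normally), P(Black wins normally),
#  P(White wins a gammon), P(Black wins a gammon)]
y_estimate = np.array([0.45, 0.35, 0.15, 0.05])

def terminal_target(white_won: bool, gammon: bool) -> np.ndarray:
    """Target for the final board position: the actual game result."""
    target = np.zeros(4)
    if white_won:
        target[2 if gammon else 0] = 1.0
    else:
        target[3 if gammon else 1] = 1.0
    return target

print(terminal_target(white_won=True, gammon=False))  # [1., 0., 0., 0.]
```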
The core of TD-Gammon is a neural network with three layers.
After each turn, the learning algorithm updates each weight in the neural net according to the following rule:

$$\Delta w_t = \alpha \left( Y_{t+1} - Y_t \right) \sum_{k=1}^{t} \lambda^{t-k} \nabla_w Y_k$$

where:

$\Delta w_t$ is the amount to change the weight from its value on the previous turn.
$Y_{t+1} - Y_t$ is the difference between the current and previous turn's board evaluations.
$\alpha$ is a "learning rate" parameter.
$\lambda$ is a parameter that affects how much the present difference in board evaluations should feed back to previous estimates. $\lambda = 0$ makes the program correct only the previous turn's estimate; $\lambda = 1$ makes the program attempt to correct the estimates on all previous turns; and values of $\lambda$ between 0 and 1 specify different rates at which the importance of older estimates should "decay" with time.
$\nabla_w Y_k$ is the gradient of neural-network output with respect to weights: that is, how much changing the weight affects the output.
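The $\lambda$-weighted sum of past gradients is usually maintained incrementally as an eligibility trace. The sketch below illustrates the update rule with a small single-output sigmoid network (the 198-80-1 shape mentioned later in this article); the layer sizes, hyperparameter values, and the backgammon helpers `encode_position`, `choose_move`, `game_over`, and `actual_result` are illustrative assumptions, not code from TD-Gammon.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID = 198, 80        # layer sizes as described later in the article
ALPHA, LAM = 0.1, 0.7        # learning rate and lambda (illustrative values)

W1 = rng.normal(0.0, 0.1, (N_HID, N_IN))   # input -> hidden weights
W2 = rng.normal(0.0, 0.1, (1, N_HID))      # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """Evaluate a 198-unit board encoding x; return hidden activations and Y."""
    h = sigmoid(W1 @ x)
    y = float(sigmoid(W2 @ h)[0])
    return h, y

def gradients(x, h, y):
    """Gradients of the scalar output Y with respect to W1 and W2."""
    dy = y * (1.0 - y)                         # sigmoid derivative at the output
    g2 = dy * h[np.newaxis, :]                 # shape (1, N_HID)
    dh = dy * W2[0] * h * (1.0 - h)            # backpropagate into hidden layer
    g1 = dh[:, np.newaxis] * x[np.newaxis, :]  # shape (N_HID, N_IN)
    return g1, g2

def train_one_game(encode_position, choose_move, game_over, actual_result):
    """One game of self-play training (hypothetical backgammon helpers)."""
    global W1, W2
    e1, e2 = np.zeros_like(W1), np.zeros_like(W2)   # eligibility traces
    x = encode_position()
    h, y_prev = forward(x)

    while True:
        g1, g2 = gradients(x, h, y_prev)
        e1 = LAM * e1 + g1                  # running sum: lambda^(t-k) grad Y_k
        e2 = LAM * e2 + g2

        choose_move()                       # advance the game by one turn
        x = encode_position()
        if game_over():
            y_next = actual_result()        # final target: the actual outcome
        else:
            h, y_next = forward(x)

        delta = y_next - y_prev             # Y_{t+1} - Y_t
        W1 += ALPHA * delta * e1            # Delta w_t = alpha * delta * trace
        W2 += ALPHA * delta * e2

        if game_over():
            return
        y_prev = y_next
```

With the four-output evaluation described above, each output unit would keep its own set of eligibility traces, but the update has the same form.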
Version 1.0 used a simple 1-ply search: every legal move is scored by the neural net, and the highest-scoring move is selected.
Versions 2.0 and 2.1 used 2-ply search: for each candidate move, every one of the 21 distinct dice rolls the opponent might throw is considered, the opponent's best reply to each roll is scored by the neural net, and the candidate is ranked by the probability-weighted average of those scores.
Versions 3.0 and 3.1 used 3-ply search, using a reduced number of possible dice rolls instead of all 21.
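The difference between these search depths can be sketched as a small expectimax routine. In the sketch below, `legal_moves(position, roll)` is assumed to return the positions reachable by the side to move and `evaluate(position)` the network's equity estimate from the program's point of view; both helpers, and the handling of forced passes, are assumptions for illustration.

```python
from itertools import combinations_with_replacement

# The 21 distinct rolls of two dice: doubles have probability 1/36,
# non-doubles 2/36 (either die can show either value).
ROLLS = [(d1, d2, (1 if d1 == d2 else 2) / 36.0)
         for d1, d2 in combinations_with_replacement(range(1, 7), 2)]

def best_move_1ply(position, roll, legal_moves, evaluate):
    """1-ply: score every candidate position with the net, keep the best."""
    return max(legal_moves(position, roll), key=evaluate)

def expectimax_value(position, legal_moves, evaluate):
    """Value of a position with the opponent on roll: average over all 21
    rolls of the opponent's best reply (best for them, i.e. worst for us)."""
    value = 0.0
    for d1, d2, prob in ROLLS:
        replies = legal_moves(position, (d1, d2))
        value += prob * min(evaluate(p) for p in replies)
    return value

def best_move_2ply(position, roll, legal_moves, evaluate):
    """2-ply: rank each candidate by its expectimax value one ply deeper."""
    candidates = legal_moves(position, roll)
    return max(candidates,
               key=lambda p: expectimax_value(p, legal_moves, evaluate))
```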
The last version, 3.1, was trained specifically for an exhibition match against Malcolm Davis at the 1998 AAAI Hall of Champions. It lost by 8 points, mainly due to one blunder where TD-Gammon opted to double and then got gammoned for a 32-point loss.
Even though TD-Gammon discovered insightful features on its own, Tesauro wondered whether its play could be improved by using hand-designed features like Neurogammon's. Indeed, the self-trained TD-Gammon with expert-designed features soon surpassed all previous computer backgammon programs. It stopped improving after about 1,500,000 games of self-play, using a three-layer neural network with 198 input units encoding expert-designed features, 80 hidden units, and one output unit representing the predicted probability of winning.
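The article does not spell out what the 198 input units contain; published descriptions of Tesauro's raw board encoding are commonly summarized as a truncated-unary ("thermometer") code plus a few global features. The sketch below follows that common summary and is an assumption, not the actual expert-designed feature set of the later versions.

```python
import numpy as np

def encode_board(points, bar, off, white_to_move):
    """Truncated-unary encoding of a backgammon position into 198 inputs.

    points: dict mapping player ('W'/'B') -> list of 24 checker counts.
    bar/off: dicts mapping player -> checkers on the bar / borne off.
    Layout and scaling follow a common summary of Tesauro's raw encoding
    and are assumptions, not code from TD-Gammon itself.
    """
    units = []
    for player in ('W', 'B'):
        for n in points[player]:                 # 24 points x 4 units = 96
            units += [float(n >= 1), float(n >= 2), float(n >= 3),
                      max(0.0, (n - 3) / 2.0)]   # extra checkers, scaled
        units.append(bar[player] / 2.0)          # checkers on the bar
        units.append(off[player] / 15.0)         # checkers borne off
    units += [float(white_to_move), float(not white_to_move)]  # side to move
    return np.array(units)                       # 2*(96 + 2) + 2 = 198 units
```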
In late 1991, Bill Robertie, Paul Magriel, and Malcolm Davis were invited to play against TD-Gammon (version 1.0). A total of 51 games were played, with TD-Gammon losing at a rate of 0.25 points per game. Robertie found TD-Gammon to be at the level of a competent advanced player, and better than any previous backgammon program. Robertie subsequently wrote about the use of TD-Gammon for backgammon study.
For example, on the opening play, the conventional wisdom was that given a roll of 2-1, 4-1, or 5-1, White should move a single checker from point 6 to point 5. Known as "slotting", this technique trades the risk of a hit for the opportunity to develop an aggressive position. TD-Gammon found that the more conservative play of splitting 24-23 was superior. Tournament players began experimenting with TD-Gammon's move, and found success. Within a few years, slotting had disappeared from tournament play, replaced by splitting, though in 2006 it made a reappearance for 2-1.
Backgammon expert Kit Woolsey found that TD-Gammon's positional judgement, especially its weighing of risk against safety, was superior to his own or any human's.
TD-Gammon's excellent positional play was undercut by occasional poor endgame play. The endgame requires a more analytical approach, sometimes with extensive lookahead. TD-Gammon's limitation to two-ply lookahead put a ceiling on what it could achieve in this part of the game. TD-Gammon's strengths and weaknesses were the opposite of symbolic artificial intelligence programs and most computer software in general: it was good at matters that require an intuitive "feel" but bad at systematic analysis.
It was also poor at doubling strategy. This is likely because the neural network was trained without the doubling cube; doubling was added by feeding the neural network's cubeless equity estimates into theoretically-based heuristic formulae. This was particularly apparent in the 1998 exhibition match, where it played 100 games against Malcolm Davis and a single doubling blunder lost the match.
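The heuristic formulae themselves are not described here. As a toy illustration only, a cube decision driven by a cubeless estimate might look like the following; the thresholds are textbook dead-cube rules of thumb, not TD-Gammon's actual formulae.

```python
def cube_decision(p_win):
    """Toy heuristic cube rule from a cubeless win-probability estimate.
    Thresholds are illustrative rules of thumb, not TD-Gammon's formulae."""
    should_double = p_win >= 0.70      # offer the cube when clearly ahead
    opponent_takes = p_win <= 0.75     # opponent takes with ~25%+ winning chances
    return should_double, opponent_takes
```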
TD-Gammon was never commercialized or released to the public in any other form, but it inspired commercial backgammon programs based on neural networks, such as JellyFish (1994) and Snowie (1998).