In information theory, perplexity is a measure of uncertainty in the value of a sample from a discrete probability distribution. The perplexity of a fair coin toss is 2, and that of a fair die roll is 6; and generally, for a probability distribution with exactly $k$ outcomes each having a probability of exactly $1/k$, the perplexity is simply $k$. But perplexity can also be applied to unfair dice, and to other non-uniform probability distributions. It can be defined as the exponentiation of the information entropy. The larger the perplexity, the less likely it is that an observer can guess the value which will be drawn from the distribution.
Perplexity was originally introduced in 1977 in the context of speech recognition by Frederick Jelinek, Robert Mercer, Lalit R. Bahl, and James K. Baker.
The perplexity $\mathit{PP}$ of a discrete probability distribution $p$ is defined as
$$\mathit{PP}(p) = b^{H(p)} = b^{-\sum_x p(x)\log_b p(x)} = \prod_x p(x)^{-p(x)},$$
where $x$ ranges over the events, where $0^0$ is defined to be $1$, and where the value of the base $b$ does not affect the result; $b$ can be chosen to be 2, 10, $e$, or any other positive value other than 1. In some contexts, this measure is also referred to as the diversity index.
The logarithm $\log_b \mathit{PP}(p) = -\sum_x p(x)\log_b p(x)$ is the entropy of the distribution; it is expressed in bits if the base of the logarithm is 2, and it is expressed in nats if the natural logarithm is used.
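As a concrete illustration, perplexity can be computed by exponentiating the entropy. The following is a minimal Python sketch (the function name `perplexity` and the example distributions are chosen here for illustration, not taken from a particular library):

```python
import math

def perplexity(probs, base=2):
    """Perplexity of a discrete distribution: base ** entropy(probs).

    Terms with p == 0 contribute nothing, consistent with 0**0 := 1.
    The choice of base does not affect the result.
    """
    entropy = -sum(p * math.log(p, base) for p in probs if p > 0)
    return base ** entropy

print(perplexity([0.5, 0.5]))               # fair coin -> 2.0
print(perplexity([1/6] * 6))                # fair die  -> ~6.0
print(perplexity([0.25] * 4 + [0.0, 0.0]))  # uniform over 4 of 6 outcomes -> 4.0
print(perplexity([0.7, 0.2, 0.1]))          # non-uniform -> between 1 and 3
```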
Perplexity of a random variable $X$ may be defined as the perplexity of the distribution over its possible values $x$. It can be thought of as a measure of uncertainty or "surprise" related to the outcomes.
For a probability distribution $p$ where exactly $k$ outcomes each have a probability of $1/k$ and all other outcomes have a probability of zero, the perplexity of this distribution is simply $k$. This is because the distribution models a fair $k$-sided die, with each of the $k$ outcomes being equally likely. In this context, the perplexity $k$ indicates that there is as much uncertainty as there would be when rolling a fair $k$-sided die. Even if a random variable has more than $k$ possible outcomes, the perplexity will still be $k$ if the distribution is uniform over $k$ outcomes and zero for the rest. Thus, a random variable with a perplexity of $k$ can be described as being "$k$-ways perplexed," meaning it has the same level of uncertainty as a fair $k$-sided die.
Perplexity is sometimes used as a measure of the difficulty of a prediction problem. It is, however, generally not a straightforward representation of the relevant probability. For example, if you have two choices, one with probability 0.9, your chances of a correct guess using the optimal strategy are 90 percent. Yet, the perplexity is
$$2^{-0.9\log_2 0.9 \,-\, 0.1\log_2 0.1} \approx 1.38.$$
The inverse of the perplexity, $1/1.38 \approx 0.72$, does not correspond to the 0.9 probability.
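A short sketch of this calculation, using the example values above:

```python
import math

# Two outcomes with probabilities 0.9 and 0.1 (the example values from the text).
p = [0.9, 0.1]
entropy = -sum(x * math.log2(x) for x in p)   # ~0.469 bits
pp = 2 ** entropy                             # ~1.38
print(pp, 1 / pp)                             # ~1.38, ~0.72 -- not the 0.9 accuracy
# The optimal strategy (always guess the 0.9 outcome) is correct 90% of the time,
# so 1/perplexity is not the probability of a correct guess.
```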
The perplexity is the exponentiation of the entropy, a more commonly encountered quantity. Entropy measures the expected or "average" number of bits required to encode the outcome of the random variable using an optimal variable-length code. It can also be regarded as the expected information gain from learning the outcome of the random variable, providing insight into the uncertainty and complexity of the underlying probability distribution.
A model $q$ of an unknown distribution $p$ may be evaluated by asking how well it predicts a separate test sample $x_1, x_2, \ldots, x_N$ drawn from $p$. The perplexity of the model $q$ is defined as
$$b^{-\frac{1}{N}\sum_{i=1}^{N}\log_b q(x_i)},$$
where $b$ is customarily 2. Better models of the unknown distribution will tend to assign higher probabilities $q(x_i)$ to the test events. Thus, they have lower perplexity because they are less surprised by the test sample. This is equivalent to saying that better models have higher likelihoods for the test data, which leads to a lower perplexity value.
The exponent above may be regarded as the average number of bits needed to represent a test event $x_i$ if one uses an optimal code based on $q$. Low-perplexity models do a better job of compressing the test sample, requiring few bits per test element on average because $q(x_i)$ tends to be high.
The exponent may also be interpreted as a cross-entropy:
$$H(\tilde{p}, q) = -\sum_x \tilde{p}(x)\log_b q(x),$$
where $\tilde{p}$ denotes the empirical distribution of the test sample (i.e., $\tilde{p}(x) = n/N$ if $x$ appeared $n$ times in the test sample of size $N$).
By the definition of KL divergence, it is also equal to $H(\tilde{p}) + D_{\mathrm{KL}}(\tilde{p} \parallel q)$, which is at least $H(\tilde{p})$. Consequently, the perplexity is minimized when $q = \tilde{p}$.
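The two views agree numerically, as the following sketch checks (the toy test sample and candidate models below are invented for illustration):

```python
import math
from collections import Counter

def model_perplexity(test_sample, q, base=2):
    """Perplexity of a model q (dict of event -> probability) on a test sample:
    base ** ( -1/N * sum_i log_base q(x_i) )."""
    n = len(test_sample)
    log_likelihood = sum(math.log(q[x], base) for x in test_sample)
    return base ** (-log_likelihood / n)

def cross_entropy(p_tilde, q, base=2):
    """Cross-entropy H(p~, q) = -sum_x p~(x) log_base q(x)."""
    return -sum(px * math.log(q[x], base) for x, px in p_tilde.items())

test_sample = ["a", "a", "a", "b"]
counts = Counter(test_sample)
p_tilde = {x: c / len(test_sample) for x, c in counts.items()}  # empirical distribution

q_good = {"a": 0.75, "b": 0.25}   # matches the empirical distribution
q_poor = {"a": 0.5, "b": 0.5}

for q in (q_good, q_poor):
    # perplexity = base ** cross_entropy(p~, q); it is minimized when q = p~.
    print(model_perplexity(test_sample, q), 2 ** cross_entropy(p_tilde, q))
```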
Suppose the average sentence in the test corpus has a probability of $2^{-190}$ according to the language model. This would give a model perplexity of $2^{190}$ per sentence. However, in NLP, it is more common to normalize by the length of a text. Thus, if the test sample has a length of 1,000 tokens, and could be coded using 7.95 bits per token, one could report a model perplexity of $2^{7.95} \approx 247$ per token. In other words, the model is as confused on test data as if it had to choose uniformly and independently among 247 possibilities for each token.
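The per-token normalization is just an averaging in the exponent, as this small sketch shows (numbers taken from the example above):

```python
# Normalizing by length: total code length spread over the tokens of the test sample.
num_tokens = 1000
total_bits = 7.95 * num_tokens          # total optimal code length for the test sample
bits_per_token = total_bits / num_tokens
perplexity_per_token = 2 ** bits_per_token
print(perplexity_per_token)             # ~247: like a uniform choice among ~247 tokens per step
```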
There are two standard evaluation metrics for language models: perplexity and word error rate (WER). The simpler of these measures, WER, is simply the percentage of erroneously recognized words $E$ (deletions, insertions, substitutions) relative to the total number of words $N$ in a speech recognition task, i.e.
$$\mathit{WER} = \frac{E}{N} \cdot 100\%.$$
The second metric, perplexity (per token), is an information-theoretic measure that evaluates the similarity of a proposed model $q$ to the original distribution $p$. It can be computed as the inverse of the (geometric) average probability of the test set $T$:
$$\mathit{PPL}(T) = \left(\prod_{i=1}^{N} q(x_i)\right)^{-1/N},$$
where $N$ is the number of tokens in the test set $T$. This equation can be seen as the exponentiated cross-entropy, where the cross-entropy $H(p, q)$ is approximated as
$$H(p, q) \approx -\frac{1}{N}\sum_{i=1}^{N}\log_2 q(x_i).$$
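A compact sketch of both metrics follows (the per-token probabilities and error counts below are hypothetical, chosen only to illustrate the formulas):

```python
import math

def word_error_rate(errors, total_words):
    """WER = E / N: erroneously recognized words (deletions, insertions,
    substitutions) divided by the total number of words."""
    return errors / total_words

def perplexity_per_token(token_probs):
    """PPL(T) = (prod_i q(x_i)) ** (-1/N), computed in log space for stability;
    equivalently 2 ** H(p, q) with H approximated as -1/N * sum_i log2 q(x_i)."""
    n = len(token_probs)
    cross_entropy = -sum(math.log2(q) for q in token_probs) / n
    return 2 ** cross_entropy

# Hypothetical per-token probabilities assigned by a model to a short test set.
probs = [0.2, 0.05, 0.1, 0.01, 0.3]
print(perplexity_per_token(probs))
print(word_error_rate(errors=12, total_words=100))  # 0.12, i.e. 12% WER
```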
Perplexity has been employed to compare different models on the same dataset and to guide the optimization of hyperparameters, although it has been found to be sensitive to factors such as linguistic features and sentence length.
Despite its pivotal role in language model development, perplexity has shown limitations, particularly as an inadequate predictor of speech recognition performance and of overfitting and generalization, raising questions about the benefits of blindly optimizing perplexity alone.
In the context of the Brown Corpus, simply guessing that the next word is "the" will achieve an accuracy of 7 percent, contrasting with the $1/247 \approx 0.4$ percent that might be expected from a naive use of perplexity. This difference underscores the importance of the statistical model used and the nuanced nature of perplexity as a measure of predictiveness (Wilcox, Ethan Gotlieb, et al., "On the predictive power of neural language models for human real-time comprehension behavior," arXiv preprint arXiv:2006.01912, 2020). The guess is based on unigram statistics, not on the trigram statistics that yielded the perplexity of 247, and utilizing trigram statistics would further refine the prediction.