In natural language processing, Latent Dirichlet allocation (LDA) is a generative statistical model that allows sets of observations to be explained by unobserved groups that explain why some parts of the data are similar. For example, if observations are words collected into documents, it posits that each document is a mixture of a small number of topics and that each word’s creation is attributable to one of the document’s topics. LDA is an example of a topic model and was first presented as a graphical model for topic discovery by David Blei, Andrew Ng, and Michael I. Jordan in 2003. Essentially the same model was also proposed independently by J. K. Pritchard, M. Stephens, and P. Donnelly in the study of population genetics. Both papers have been highly influential, with 13320 and 15857 citations respectively in January 2016.

In LDA, each document may be viewed as a mixture of various topics. This is similar to probabilistic latent semantic analysis (pLSA), except that in LDA the topic distribution is assumed to have a Dirichlet prior. In practice, this results in more reasonable mixtures of topics in a document. It has been noted, however, that the pLSA model is equivalent to the LDA model under a uniform Dirichlet prior distribution.

For example, an LDA model might have topics that can be classified as CAT_related and DOG_related. A topic has probabilities of generating various words, such as milk, meow, and kitten, which can be classified and interpreted by the viewer as “CAT_related”. Naturally, the word cat itself will have high probability given this topic. The DOG_related topic likewise has probabilities of generating each word: puppy, bark, and bone might have high probability. Words without special relevance, such as the (see function word), will have roughly even probability between classes (or can be placed into a separate category). A topic is neither semantically nor epistemologically strongly defined; it is identified on the basis of supervised labeling and (manual) pruning, guided by the likelihood of term co-occurrence. A lexical word may occur in several topics with different probabilities, but with a different typical set of neighboring words in each topic.
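Concretely, each topic is just a probability distribution over the vocabulary, and each document carries a mixture over topics. The sketch below illustrates this data structure with made-up numbers (the words and probabilities are purely illustrative, not taken from any fitted model):

```python
# Hypothetical topic-word probabilities (illustrative values only;
# a real topic assigns some probability to every vocabulary word).
cat_related = {"cat": 0.20, "milk": 0.08, "meow": 0.07, "kitten": 0.06, "the": 0.03}
dog_related = {"dog": 0.21, "puppy": 0.09, "bark": 0.07, "bone": 0.05, "the": 0.03}

# A document's topic mixture, e.g. 70% CAT_related and 30% DOG_related.
doc_topics = {"CAT_related": 0.70, "DOG_related": 0.30}

# Probability that this document generates the word "milk":
# p(w | d) = sum_k p(topic k | d) * p(w | topic k)
p_milk = (doc_topics["CAT_related"] * cat_related.get("milk", 0.0)
          + doc_topics["DOG_related"] * dog_related.get("milk", 0.0))
```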

Each document is assumed to be characterized by a particular set of topics. This is akin to the standard bag of words model assumption, and makes the individual words exchangeable.

With plate notation, the dependencies among the many variables can be captured concisely. The boxes are “plates” representing replicates. The outer plate represents documents, while the inner plate represents the repeated choice of topics and words within a document. $M$ denotes the number of documents and $N$ the number of words in a document. Thus:

- $\alpha$ is the parameter of the Dirichlet prior on the per-document topic distributions,
- $\beta$ is the parameter of the Dirichlet prior on the per-topic word distribution,
- $\theta_i$ is the topic distribution for document $i$,
- $z_{ij}$ is the topic for the $j$-th word in document $i$, and
- $w_{ij}$ is the specific word.

The $w_{ij}$ are the only observable variables; all the other variables are latent. In most applications, the basic LDA model is extended to a smoothed version, which places a Dirichlet prior on the per-topic word distributions as well, to obtain better results. In the plate notation for the smoothed model, $K$ denotes the number of topics considered in the model and:

- $\varphi_k$ is the word distribution for topic $k$; collectively, $\varphi$ is a $K \times V$ matrix (with $V$ the vocabulary size), each row of which is the word distribution of a topic.

The generative process is as follows. Documents are represented as random mixtures over latent topics, where each topic is characterized by a distribution over words. LDA assumes the following generative process for a corpus $D$ consisting of $M$ documents each of length $N_i$:

1. Choose $\theta_i \sim \operatorname{Dir}(\alpha)$, where $i \in \{1, \dots, M\}$ and $\operatorname{Dir}(\alpha)$ is the Dirichlet distribution for parameter $\alpha$

2. Choose $\varphi_k \sim \operatorname{Dir}(\beta)$, where $k \in \{1, \dots, K\}$

3. For each of the word positions $(i, j)$, where $i \in \{1, \dots, M\}$ and $j \in \{1, \dots, N_i\}$:
   (a) Choose a topic $z_{i,j} \sim \operatorname{Multinomial}(\theta_i)$.
   (b) Choose a word $w_{i,j} \sim \operatorname{Multinomial}(\varphi_{z_{i,j}})$.

(Note that the Multinomial distribution here refers to the Multinomial with only one trial. It is formally equivalent to the categorical distribution.)

The lengths $N_i$ are treated as independent of all the other data generating variables ($w$ and $z$). The subscript is often dropped, as in the plate diagrams for the model.
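As a concrete illustration, here is a minimal sketch of this generative process in Python with NumPy. The corpus dimensions and hyperparameter values are hypothetical, and for simplicity every document has the same fixed length $N$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: M documents, K topics, V vocabulary words,
# N words per document (real corpora have varying lengths N_i).
M, K, V, N = 100, 5, 1000, 50
alpha, beta = 0.1, 0.01  # sparse symmetric Dirichlet hyperparameters

# Step 2: phi_k ~ Dir(beta), one word distribution per topic (K x V).
phi = rng.dirichlet(np.full(V, beta), size=K)

docs = []
for i in range(M):
    # Step 1: theta_i ~ Dir(alpha), the document's topic mixture.
    theta = rng.dirichlet(np.full(K, alpha))
    # Step 3: for each position, draw a topic z, then a word w ~ phi_z.
    z = rng.choice(K, size=N, p=theta)
    w = np.array([rng.choice(V, p=phi[k]) for k in z])
    docs.append(w)
```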

A formal description of smoothed LDA is as follows:

- $K$ is the number of topics;
- $V$ is the number of words in the vocabulary;
- $M$ is the number of documents;
- $N_d$ is the number of words in document $d$;
- $\alpha$ is the $K$-dimensional parameter of the Dirichlet prior on the per-document topic distributions;
- $\beta$ is the $V$-dimensional parameter of the Dirichlet prior on the per-topic word distributions;
- $\theta_d$ is the topic distribution for document $d$;
- $\varphi_k$ is the word distribution for topic $k$;
- $z_{d,w}$ is the topic of the $w$-th word in document $d$;
- $w_{d,w}$ is the identity of the $w$-th word in document $d$.

We can then mathematically describe the random variables as follows:

$$\varphi_k \sim \operatorname{Dirichlet}_V(\beta), \quad k = 1, \dots, K$$
$$\theta_d \sim \operatorname{Dirichlet}_K(\alpha), \quad d = 1, \dots, M$$
$$z_{d,w} \sim \operatorname{Categorical}_K(\theta_d), \quad d = 1, \dots, M, \; w = 1, \dots, N_d$$
$$w_{d,w} \sim \operatorname{Categorical}_V(\varphi_{z_{d,w}}), \quad d = 1, \dots, M, \; w = 1, \dots, N_d$$

Learning the various distributions (the set of topics, their associated word probabilities, the topic of each word, and the particular topic mixture of each document) is a problem of Bayesian inference. The original paper used a variational Bayes approximation of the posterior distribution; alternative inference techniques use Gibbs sampling and expectation propagation.

Following is the derivation of the equations for collapsed Gibbs sampling, which means the $\theta$s and $\varphi$s will be integrated out. For simplicity, in this derivation the documents are all assumed to have the same length $N$. The derivation is equally valid if the document lengths vary.

According to the model, the total probability of the model is:

$$P(\boldsymbol{W}, \boldsymbol{Z}, \boldsymbol{\theta}, \boldsymbol{\varphi}; \alpha, \beta) = \prod_{i=1}^{K} P(\varphi_i; \beta) \prod_{j=1}^{M} P(\theta_j; \alpha) \prod_{t=1}^{N} P(Z_{j,t} \mid \theta_j) \, P(W_{j,t} \mid \varphi_{Z_{j,t}}),$$

where the bold-font variables denote the vector versions of the variables. First of all, $\boldsymbol{\theta}$ and $\boldsymbol{\varphi}$ need to be integrated out:

$$P(\boldsymbol{Z}, \boldsymbol{W}; \alpha, \beta) = \int_{\boldsymbol{\varphi}} \prod_{i=1}^{K} P(\varphi_i; \beta) \prod_{j=1}^{M} \prod_{t=1}^{N} P(W_{j,t} \mid \varphi_{Z_{j,t}}) \, d\boldsymbol{\varphi} \int_{\boldsymbol{\theta}} \prod_{j=1}^{M} P(\theta_j; \alpha) \prod_{t=1}^{N} P(Z_{j,t} \mid \theta_j) \, d\boldsymbol{\theta}.$$

All the $\theta$s are independent of each other, and the same holds for all the $\varphi$s. So we can treat each $\theta$ and each $\varphi$ separately. We now focus only on the $\theta$ part:

$$\int_{\boldsymbol{\theta}} \prod_{j=1}^{M} P(\theta_j; \alpha) \prod_{t=1}^{N} P(Z_{j,t} \mid \theta_j) \, d\boldsymbol{\theta} = \prod_{j=1}^{M} \int_{\theta_j} P(\theta_j; \alpha) \prod_{t=1}^{N} P(Z_{j,t} \mid \theta_j) \, d\theta_j.$$

We can further focus on only one $\theta_j$, as follows:

$$\int_{\theta_j} P(\theta_j; \alpha) \prod_{t=1}^{N} P(Z_{j,t} \mid \theta_j) \, d\theta_j.$$

Actually, it is the hidden part of the model for the $j$-th document. Now we replace the probabilities in the above equation with the true distribution expressions to write out the explicit equation:

$$\int_{\theta_j} \frac{\Gamma\left(\sum_{i=1}^{K} \alpha_i\right)}{\prod_{i=1}^{K} \Gamma(\alpha_i)} \prod_{i=1}^{K} \theta_{j,i}^{\alpha_i - 1} \prod_{t=1}^{N} P(Z_{j,t} \mid \theta_j) \, d\theta_j.$$

Let $n_{j,r}^{i}$ be the number of word tokens in the $j$-th document with the same word symbol (the $r$-th word in the vocabulary) assigned to the $i$-th topic. So, $n_{j,r}^{i}$ is three-dimensional. If any of the three dimensions is not limited to a specific value, we use a parenthesized point $(\cdot)$ to denote it. For example, $n_{j,(\cdot)}^{i}$ denotes the number of word tokens in the $j$-th document assigned to the $i$-th topic. Thus, the right-most part of the above equation can be rewritten as:

$$\prod_{t=1}^{N} P(Z_{j,t} \mid \theta_j) = \prod_{i=1}^{K} \theta_{j,i}^{n_{j,(\cdot)}^{i}}.$$

So the $\theta_j$ integration formula can be changed to:

$$\int_{\theta_j} \frac{\Gamma\left(\sum_{i=1}^{K} \alpha_i\right)}{\prod_{i=1}^{K} \Gamma(\alpha_i)} \prod_{i=1}^{K} \theta_{j,i}^{n_{j,(\cdot)}^{i} + \alpha_i - 1} \, d\theta_j.$$

Clearly, the equation inside the integration has the same form as the Dirichlet distribution. According to the Dirichlet distribution,

$$\int_{\theta_j} \frac{\Gamma\left(\sum_{i=1}^{K} \left(n_{j,(\cdot)}^{i} + \alpha_i\right)\right)}{\prod_{i=1}^{K} \Gamma\left(n_{j,(\cdot)}^{i} + \alpha_i\right)} \prod_{i=1}^{K} \theta_{j,i}^{n_{j,(\cdot)}^{i} + \alpha_i - 1} \, d\theta_j = 1.$$

Thus,

$$\int_{\theta_j} P(\theta_j; \alpha) \prod_{t=1}^{N} P(Z_{j,t} \mid \theta_j) \, d\theta_j = \frac{\Gamma\left(\sum_{i=1}^{K} \alpha_i\right)}{\prod_{i=1}^{K} \Gamma(\alpha_i)} \frac{\prod_{i=1}^{K} \Gamma\left(n_{j,(\cdot)}^{i} + \alpha_i\right)}{\Gamma\left(\sum_{i=1}^{K} \left(n_{j,(\cdot)}^{i} + \alpha_i\right)\right)}.$$

Now we turn our attention to the $\varphi$ part. Actually, the derivation of the $\varphi$ part is very similar to the $\theta$ part. Here we only list the steps of the derivation:

$$\int_{\boldsymbol{\varphi}} \prod_{i=1}^{K} P(\varphi_i; \beta) \prod_{j=1}^{M} \prod_{t=1}^{N} P(W_{j,t} \mid \varphi_{Z_{j,t}}) \, d\boldsymbol{\varphi} = \prod_{i=1}^{K} \int_{\varphi_i} \frac{\Gamma\left(\sum_{r=1}^{V} \beta_r\right)}{\prod_{r=1}^{V} \Gamma(\beta_r)} \prod_{r=1}^{V} \varphi_{i,r}^{n_{(\cdot),r}^{i} + \beta_r - 1} \, d\varphi_i = \prod_{i=1}^{K} \frac{\Gamma\left(\sum_{r=1}^{V} \beta_r\right)}{\prod_{r=1}^{V} \Gamma(\beta_r)} \frac{\prod_{r=1}^{V} \Gamma\left(n_{(\cdot),r}^{i} + \beta_r\right)}{\Gamma\left(\sum_{r=1}^{V} \left(n_{(\cdot),r}^{i} + \beta_r\right)\right)}.$$

For clarity, here we write down the final equation with both $\boldsymbol{\theta}$ and $\boldsymbol{\varphi}$ integrated out:

$$P(\boldsymbol{Z}, \boldsymbol{W}; \alpha, \beta) = \prod_{j=1}^{M} \frac{\Gamma\left(\sum_{i=1}^{K} \alpha_i\right)}{\prod_{i=1}^{K} \Gamma(\alpha_i)} \frac{\prod_{i=1}^{K} \Gamma\left(n_{j,(\cdot)}^{i} + \alpha_i\right)}{\Gamma\left(\sum_{i=1}^{K} \left(n_{j,(\cdot)}^{i} + \alpha_i\right)\right)} \times \prod_{i=1}^{K} \frac{\Gamma\left(\sum_{r=1}^{V} \beta_r\right)}{\prod_{r=1}^{V} \Gamma(\beta_r)} \frac{\prod_{r=1}^{V} \Gamma\left(n_{(\cdot),r}^{i} + \beta_r\right)}{\Gamma\left(\sum_{r=1}^{V} \left(n_{(\cdot),r}^{i} + \beta_r\right)\right)}.$$

The goal of Gibbs sampling here is to approximate the distribution of $P(\boldsymbol{Z} \mid \boldsymbol{W}; \alpha, \beta)$. Since $P(\boldsymbol{W}; \alpha, \beta)$ is invariant for any choice of $\boldsymbol{Z}$, the Gibbs sampling equations can be derived from $P(\boldsymbol{Z}, \boldsymbol{W}; \alpha, \beta)$ directly. The key point is to derive the following conditional probability:

$$P\left(Z_{(m,n)} \mid \boldsymbol{Z}_{-(m,n)}, \boldsymbol{W}; \alpha, \beta\right) = \frac{P\left(Z_{(m,n)}, \boldsymbol{Z}_{-(m,n)}, \boldsymbol{W}; \alpha, \beta\right)}{P\left(\boldsymbol{Z}_{-(m,n)}, \boldsymbol{W}; \alpha, \beta\right)},$$

where $Z_{(m,n)}$ denotes the hidden variable of the $n$-th word token in the $m$-th document. Further, we assume that its word symbol is the $v$-th word in the vocabulary. $\boldsymbol{Z}_{-(m,n)}$ denotes all the $Z$s but $Z_{(m,n)}$. Note that Gibbs sampling only needs to sample a value for $Z_{(m,n)}$; according to the above probability, we do not need the exact value of $P\left(Z_{(m,n)} \mid \boldsymbol{Z}_{-(m,n)}, \boldsymbol{W}; \alpha, \beta\right)$, but only the ratios among the probabilities of the values that $Z_{(m,n)}$ can take. So, the above equation can be simplified as:

$$P\left(Z_{(m,n)} = k \mid \boldsymbol{Z}_{-(m,n)}, \boldsymbol{W}; \alpha, \beta\right) \propto \frac{\prod_{i=1}^{K} \Gamma\left(n_{m,(\cdot)}^{i} + \alpha_i\right)}{\Gamma\left(\sum_{i=1}^{K} \left(n_{m,(\cdot)}^{i} + \alpha_i\right)\right)} \prod_{i=1}^{K} \frac{\Gamma\left(n_{(\cdot),v}^{i} + \beta_v\right)}{\Gamma\left(\sum_{r=1}^{V} \left(n_{(\cdot),r}^{i} + \beta_r\right)\right)}.$$

Finally, let $n_{j,r}^{i,-(m,n)}$ denote the same count as $n_{j,r}^{i}$ but with the token $Z_{(m,n)}$ excluded. The above equation can be further simplified by leveraging the property of the gamma function, $\Gamma(x+1) = x\,\Gamma(x)$. We first split the summation and then merge it back to obtain a $k$-independent summation, which can be dropped:

$$P\left(Z_{(m,n)} = k \mid \boldsymbol{Z}_{-(m,n)}, \boldsymbol{W}; \alpha, \beta\right) \propto \left(n_{m,(\cdot)}^{k,-(m,n)} + \alpha_k\right) \frac{n_{(\cdot),v}^{k,-(m,n)} + \beta_v}{\sum_{r=1}^{V} \left(n_{(\cdot),r}^{k,-(m,n)} + \beta_r\right)}.$$

Note that the same formula is derived in the article on the Dirichlet-multinomial distribution, as part of a more general discussion of integrating Dirichlet distribution priors out of a Bayesian network.
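The final sampling formula maps directly onto count arrays in code. Below is a minimal collapsed Gibbs sampler sketch in Python, assuming symmetric scalar hyperparameters $\alpha$ and $\beta$; the function name and default values are illustrative, not from any particular library:

```python
import numpy as np

def collapsed_gibbs(docs, K, V, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Sketch of a collapsed Gibbs sampler for LDA.

    docs: list of lists of word ids in [0, V). Returns the topic
    assignments and the count matrices used by the update equation.
    """
    rng = np.random.default_rng(seed)
    M = len(docs)
    n_dk = np.zeros((M, K))   # n^k_{d,(.)}: topic counts per document
    n_kv = np.zeros((K, V))   # n^k_{(.),v}: word counts per topic
    n_k = np.zeros(K)         # total tokens assigned to each topic
    z = [rng.integers(K, size=len(doc)) for doc in docs]

    # Tally the random initial assignments.
    for d, doc in enumerate(docs):
        for v, k in zip(doc, z[d]):
            n_dk[d, k] += 1; n_kv[k, v] += 1; n_k[k] += 1

    for _ in range(iters):
        for d, doc in enumerate(docs):
            for t, v in enumerate(doc):
                k = z[d][t]
                # Remove the current token: these are the "-(m,n)" counts.
                n_dk[d, k] -= 1; n_kv[k, v] -= 1; n_k[k] -= 1
                # p(k) ∝ (n_dk + alpha) * (n_kv + beta) / (n_k + V*beta)
                p = (n_dk[d] + alpha) * (n_kv[:, v] + beta) / (n_k + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][t] = k
                n_dk[d, k] += 1; n_kv[k, v] += 1; n_k[k] += 1
    return z, n_dk, n_kv
```

Note that the denominator of the document factor, $\sum_i n_{m,(\cdot)}^{i} + \alpha_i$, is independent of $k$ and is therefore omitted, exactly as in the derivation above.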

Recent research has focused on speeding up the inference of latent Dirichlet allocation to support the capture of a massive number of topics in a large number of documents. The update equation of the collapsed Gibbs sampler mentioned in the earlier section has a natural sparsity within it that can be taken advantage of. Intuitively, since each document only contains a subset of topics $K_d$, and a word also only appears in a subset of topics $K_w$, the above update equation could be rewritten to take advantage of this sparsity:

$$P(Z_{d,n} = k \mid \cdot) \propto \frac{\alpha_k \beta}{C_k + V\beta} + \frac{C_{kd}\,\beta}{C_k + V\beta} + \frac{C_{kw}\left(\alpha_k + C_{kd}\right)}{C_k + V\beta},$$

where $C_{kd}$ is the number of tokens in document $d$ assigned to topic $k$, $C_{kw}$ is the number of times word $w$ is assigned to topic $k$ across the corpus, $C_k$ is the total number of tokens assigned to topic $k$, and all counts exclude the current token.

In this equation, we have three terms, out of which two are sparse and the other is small. We call these terms $a$, $b$, and $c$ respectively:

$$a = \frac{\alpha_k \beta}{C_k + V\beta}, \qquad b = \frac{C_{kd}\,\beta}{C_k + V\beta}, \qquad c = \frac{C_{kw}\left(\alpha_k + C_{kd}\right)}{C_k + V\beta}.$$

Now, if we normalize each term by summing over all the topics, we get:

$$A = \sum_{k=1}^{K} \frac{\alpha_k \beta}{C_k + V\beta}, \qquad B = \sum_{k=1}^{K} \frac{C_{kd}\,\beta}{C_k + V\beta}, \qquad C = \sum_{k=1}^{K} \frac{C_{kw}\left(\alpha_k + C_{kd}\right)}{C_k + V\beta}.$$

Here, we can see that $B$ is a summation over only the topics that appear in document $d$, and $C$ is likewise a sparse summation over only the topics to which the word $w$ has been assigned anywhere in the corpus. $A$, on the other hand, is dense, but because of the small values of $\alpha$ and $\beta$, its value is very small compared to the other two terms.

Now, while sampling a topic, if we sample a random variable uniformly from $s \sim U(0, A + B + C)$, we can check which bucket our sample lands in. Since $A$ is small, we are very unlikely to fall into this bucket; however, if we do fall into it, sampling a topic takes $O(K)$ time (the same as the original collapsed Gibbs sampler). However, if we fall into the other two buckets, we only need to check a subset of topics, provided we keep a record of the sparse topics. A topic can be sampled from the $B$ bucket in $O(K_d)$ time, and from the $C$ bucket in $O(K_w)$ time.

Notice that after sampling each topic, updating these buckets requires only basic arithmetic operations.
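A sketch of this bucketed sampling step follows, in Python. Here $C_{kd}$ and $C_{kw}$ are kept as sparse dictionaries holding only their nonzero entries, and a symmetric scalar $\alpha$ is assumed; all names are illustrative:

```python
import numpy as np

def sample_topic_sparse(alpha, beta, V, C_k, C_kd, C_kw, rng):
    """Draw one topic using the A/B/C bucket decomposition (a sketch).

    C_k: dense length-K array of total topic counts.
    C_kd, C_kw: dicts {topic: count} holding only nonzero entries,
    i.e. the topics present in the document / containing the word.
    """
    denom = C_k + V * beta
    A = np.sum(alpha * beta / denom)                        # dense but tiny
    B = sum(c * beta / denom[k] for k, c in C_kd.items())   # O(K_d) terms
    C = sum(c * (alpha + C_kd.get(k, 0)) / denom[k]         # O(K_w) terms
            for k, c in C_kw.items())

    u = rng.uniform(0, A + B + C)
    if u < A:
        # Rare case: fall back to an O(K) scan over all topics.
        p = alpha * beta / denom
        return rng.choice(len(denom), p=p / p.sum())
    elif u < A + B:
        u -= A
        for k, c in C_kd.items():                           # O(K_d) walk
            u -= c * beta / denom[k]
            if u <= 0:
                return k
        return k  # guard against floating-point round-off
    else:
        u -= A + B
        for k, c in C_kw.items():                           # O(K_w) walk
            u -= c * (alpha + C_kd.get(k, 0)) / denom[k]
            if u <= 0:
                return k
        return k  # guard against floating-point round-off
```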

Topic modeling is a classic problem in information retrieval. Related models and techniques are, among others, latent semantic indexing, independent component analysis, probabilistic latent semantic indexing, non-negative matrix factorization, and the Gamma-Poisson distribution.

The LDA model is highly modular and can therefore be easily extended. The main field of interest is modeling relations between topics. This is achieved by using another distribution on the simplex instead of the Dirichlet. The Correlated Topic Model follows this approach, inducing a correlation structure between topics by using the logistic normal distribution instead of the Dirichlet. Another extension is the hierarchical LDA (hLDA), where topics are joined together in a hierarchy by using the nested Chinese restaurant process. LDA can also be extended to a corpus in which a document includes two types of information (e.g., words and names), as in the LDA-dual model. Nonparametric extensions of LDA include the hierarchical Dirichlet process mixture model, which allows the number of topics to be unbounded and learnt from data, and the nested Chinese restaurant process, which allows topics to be arranged in a hierarchy whose structure is learnt from data.

As noted earlier, pLSA is similar to LDA. The LDA model is essentially the Bayesian version of the pLSA model. The Bayesian formulation tends to perform better on small datasets because Bayesian methods can avoid overfitting the data. For very large datasets, the results of the two models tend to converge. One difference is that pLSA uses a variable $d$ to represent a document in the training set. So in pLSA, when presented with a document the model has not seen before, we fix $\Pr(w \mid z)$, the probability of words under topics, to be that learned from the training set, and use the same EM algorithm to infer $\Pr(z \mid d)$, the topic distribution under $d$. Blei argues that this step is cheating because you are essentially refitting the model to the new data.
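As a sketch of what this fold-in step looks like in code, the hypothetical function below holds the learned topic-word probabilities fixed and runs EM only over the new document's topic mixture (names and defaults are illustrative; it assumes every word has nonzero probability under some topic):

```python
import numpy as np

def fold_in(doc_counts, phi, iters=50):
    """Infer a topic mixture for an unseen document with phi held fixed.

    doc_counts: length-V array of word counts for the new document.
    phi: K x V matrix of word probabilities under each topic.
    """
    K = phi.shape[0]
    theta = np.full(K, 1.0 / K)            # uniform initialization
    for _ in range(iters):
        # E-step: responsibilities p(z = k | w) ∝ theta_k * phi_kw.
        r = theta[:, None] * phi           # K x V
        r /= r.sum(axis=0, keepdims=True)
        # M-step: re-estimate the mixture from expected topic counts.
        theta = (r * doc_counts).sum(axis=1)
        theta /= theta.sum()
    return theta
```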

Variations on LDA have been used to automatically put natural images into categories, such as “bedroom” or “forest”, by treating an image as a document, and small patches of the image as words; one of the variations is called Spatial Latent Dirichlet Allocation.

Recently, LDA has also been applied in bioinformatics contexts.